Google released Gemma 4, a family of open-source models (2B to 31B parameters) built on Gemini 3 technology, ranked #3 and #6 in their size classes on the Arena AI leaderboard. The models are optimized for on-device deployment, agentic workflows, and fine-tuning across hardware from mobile to datacenter, and Apache 2.0 licensing enables direct integration into engineering workflows.
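Because the weights ship under Apache 2.0 on standard model hubs, running one locally follows the usual transformers pattern. A minimal sketch, assuming a Hugging Face checkpoint ID of `google/gemma-4-2b` (hypothetical; check the real model card for the actual ID):

```python
# Minimal local inference sketch with Hugging Face transformers.
# The checkpoint ID "google/gemma-4-2b" is an assumption for illustration;
# substitute the real ID from the Gemma 4 model card.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "google/gemma-4-2b"  # hypothetical checkpoint name
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # a 2B model in bf16 fits a single consumer GPU
    device_map="auto",
)

prompt = "Explain codon optimization in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```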
IBM released Granite 4.0 3B Vision, a modular vision-language model optimized for chart and document understanding, delivered as a LoRA adapter on Granite 4.0 Micro with a novel DeepStack architecture for multi-layer visual feature injection. The release includes ChartNet, a 1.7M-sample synthetic dataset for chart interpretation with code-guided augmentation, addressing a key VLM weakness in structured-data reasoning.
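The DeepStack idea, feeding visual features into several early transformer layers rather than only at the input embedding, can be sketched in a few lines of PyTorch. This is a simplified illustration, not Granite's actual architecture: the layer counts, dimensions, and residual-add injection below are assumptions.

```python
# Illustrative sketch of DeepStack-style multi-layer visual feature injection:
# instead of prepending visual tokens once at layer 0, slices of a stacked
# visual feature tensor are added into the hidden states of the first few
# layers. All shapes and layer counts are assumptions for illustration.
import torch
import torch.nn as nn

class DeepStackInjector(nn.Module):
    def __init__(self, d_model=512, n_layers=8, n_inject=4):
        super().__init__()
        self.layers = nn.ModuleList(
            nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True)
            for _ in range(n_layers)
        )
        self.n_inject = n_inject  # visual features feed the first n_inject layers

    def forward(self, text_hidden, visual_stack):
        # text_hidden: (batch, seq, d_model)
        # visual_stack: (batch, n_inject, seq, d_model), one slice per injected layer
        h = text_hidden
        for i, layer in enumerate(self.layers):
            if i < self.n_inject:
                h = h + visual_stack[:, i]  # residual injection at this depth
            h = layer(h)
        return h

model = DeepStackInjector()
text = torch.randn(2, 16, 512)
vis = torch.randn(2, 4, 16, 512)
print(model(text, vis).shape)  # torch.Size([2, 16, 512])
```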
OpenMed built an end-to-end open-source protein engineering pipeline combining structure prediction, sequence design, and codon optimization, with novel contributions in codon-level language modeling. They benchmarked transformer architectures (CodonRoBERTa-large-v2 vs. ModernBERT) for codon optimization, scaled to 25 species in 55 GPU-hours, and released runnable code with full experimental transparency, making the work directly applicable for engineers building biological AI systems.
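Codon-level language modeling treats each 3-nucleotide codon as one token rather than splitting DNA into single bases. A minimal sketch of that tokenization, paired with the classic frequency-table baseline that learned models like CodonRoBERTa aim to beat; the genetic-code fragment is standard, but the usage frequencies are placeholder numbers, not measured data for any species:

```python
# Sketch of codon-level tokenization plus a frequency-table baseline for
# codon optimization: each codon is replaced by the most frequent synonymous
# codon for the target species. The USAGE table holds placeholder numbers
# for illustration; the pipeline above replaces this with a learned model.

# Standard genetic code fragment (codon -> amino acid), enough for the demo.
GENETIC_CODE = {
    "ATG": "M", "GCT": "A", "GCC": "A", "GCA": "A", "GCG": "A",
    "AAA": "K", "AAG": "K", "TAA": "*",
}

# Placeholder per-species codon preferences (hypothetical values).
USAGE = {"GCT": 0.18, "GCC": 0.26, "GCA": 0.21, "GCG": 0.35,
         "AAA": 0.74, "AAG": 0.26, "ATG": 1.0, "TAA": 1.0}

def tokenize_codons(cds: str) -> list[str]:
    """Split a coding sequence into 3-nt codon tokens."""
    assert len(cds) % 3 == 0, "CDS length must be a multiple of 3"
    return [cds[i:i + 3] for i in range(0, len(cds), 3)]

def optimize(cds: str) -> str:
    """Replace each codon with its most frequent synonymous codon."""
    out = []
    for codon in tokenize_codons(cds):
        aa = GENETIC_CODE[codon]
        synonyms = [c for c, a in GENETIC_CODE.items() if a == aa]
        out.append(max(synonyms, key=lambda c: USAGE[c]))
    return "".join(out)

print(optimize("ATGGCTAAATAA"))  # -> ATGGCGAAATAA with the placeholder table
```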
A comprehensive retrospective on 2025's major LLM developments, starting with DeepSeek R1's January release, which showed that reinforcement learning (specifically RLVR with GRPO) can elicit reasoning-like behavior in LLMs and revealed that state-of-the-art model training may cost an order of magnitude less than previously estimated. The article examines how post-training scaling through verifiable rewards represents a significant algorithmic shift from SFT/RLHF approaches, opening new routes to unlocking model capabilities.
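GRPO's core move is group-relative advantage estimation: sample several completions per prompt, score each with a verifiable reward (e.g. "did the answer check out"), and normalize rewards within the group, which removes the learned value function that PPO-style RLHF needs. A minimal sketch of that computation, with illustrative assumptions: it omits the KL penalty term and uses per-sequence rather than per-token importance ratios.

```python
# Sketch of GRPO's group-relative advantage and clipped surrogate loss.
# Group sizes, rewards, and the clipping epsilon are illustrative assumptions;
# real GRPO also adds a KL penalty and works with per-token ratios.
import torch

def grpo_loss(logp_new, logp_old, rewards, eps=0.2):
    """
    logp_new, logp_old: (groups, group_size) summed log-probs per completion
    rewards: (groups, group_size) verifiable rewards (e.g. 1.0 if correct)
    """
    # Group-relative advantage: normalize each reward against its own group,
    # replacing the learned critic used by PPO-style RLHF.
    adv = (rewards - rewards.mean(dim=1, keepdim=True)) / (
        rewards.std(dim=1, keepdim=True) + 1e-6
    )
    ratio = torch.exp(logp_new - logp_old)          # importance ratio vs. sampling policy
    clipped = torch.clamp(ratio, 1 - eps, 1 + eps)  # PPO-style clipping
    return -torch.min(ratio * adv, clipped * adv).mean()

# Toy example: 2 prompts, 4 sampled completions each, one correct per group.
rewards = torch.tensor([[1.0, 0.0, 0.0, 0.0],
                        [0.0, 0.0, 1.0, 0.0]])
logp_old = torch.randn(2, 4)
logp_new = logp_old + 0.05 * torch.randn(2, 4)
print(grpo_loss(logp_new, logp_old, rewards))
```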