GLM-5.1 reaches top-tier coding performance (#3 on Code Arena), while the 'cheap executor + expensive advisor' pattern is emerging as a standard orchestration approach for reducing inference costs. Notable implementations include Anthropic's API-level advisor tools, research out of Berkeley, and new features in Qwen Code (v0.14.x), which adds agent-engineering primitives such as model routing and sub-agent selection.
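The executor/advisor pattern can be sketched roughly as follows. Both model calls are stubs, and the confidence threshold and function names are illustrative assumptions, not any vendor's actual API; a real system would route between a small, cheap model and a large, expensive one.

```python
# Sketch of the "cheap executor + expensive advisor" pattern.
# Both model calls are stubs standing in for real LLM API calls.

def cheap_executor(task: str) -> tuple[str, float]:
    """Draft an answer and return a (stubbed) self-reported confidence."""
    confidence = 0.4 if "hard" in task else 0.9
    return f"draft answer for: {task}", confidence

def expensive_advisor(task: str, draft: str) -> str:
    """Review/redo the draft; only invoked when the executor is unsure."""
    return f"revised answer for: {task}"

def solve(task: str, threshold: float = 0.7) -> str:
    draft, confidence = cheap_executor(task)
    if confidence >= threshold:
        return draft                        # cheap path: most requests end here
    return expensive_advisor(task, draft)   # escalate only the hard cases
```

The cost savings come from the escalation rate: if most requests stay on the cheap path, the expensive model's price is amortized over only the hard tail.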
Waypoint-1.5 is Overworld's improved real-time video world model, now optimized for consumer hardware: it runs at up to 720p/60fps on RTX 3090-class GPUs and above, and at 360p on mainstream gaming laptops and Apple Silicon. The model was trained on 100x more data than v1 with more efficient video modeling techniques, prioritizing interactive responsiveness and local deployment over pure visual fidelity.
Practical guide to multimodal embedding and reranker models that extend traditional RAG pipelines to handle text, images, and other modalities in a shared embedding space. Covers model loading, encoding mixed-modality inputs, and computing cross-modal similarities with concrete code examples and performance considerations.
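The similarity step reduces to cosine similarity between vectors in the shared space. Below is a toy illustration: the embeddings are mocked stand-ins for what a CLIP-style multimodal encoder would actually produce.

```python
import numpy as np

# Toy sketch of cross-modal retrieval in a shared embedding space.
# Real pipelines would obtain these vectors from a multimodal encoder
# (e.g. a CLIP-style model); here they are hard-coded mock embeddings.

def cosine_sim(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Pairwise cosine similarity between rows of a and rows of b."""
    a = a / np.linalg.norm(a, axis=-1, keepdims=True)
    b = b / np.linalg.norm(b, axis=-1, keepdims=True)
    return a @ b.T

# Pretend these came from encoding two text queries and three images.
text_emb = np.array([[1.0, 0.0, 0.0],
                     [0.0, 1.0, 0.0]])
image_emb = np.array([[0.9, 0.1, 0.0],
                      [0.0, 0.8, 0.2],
                      [0.1, 0.1, 0.9]])

sims = cosine_sim(text_emb, image_emb)   # shape (2, 3)
best = sims.argmax(axis=1)               # best-matching image per query
```

In a full RAG pipeline, the top-k hits from this matrix would then be passed to a reranker model for a more expensive second-stage scoring.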
Gemma 4 is gaining traction as a practical edge-inference model with strong on-device performance (40 tok/s on iPhone 17 Pro via MLX), achieving 2M downloads in its first week and becoming the top trending model on Hugging Face. The release demonstrates mature ecosystem support across llama.cpp, Ollama, vLLM, and other deployment tools, positioning it as a reference point for local-first development and reducing reliance on paid cloud APIs.
Gemma 4 launched under Apache 2.0 with strong day-0 ecosystem support across vLLM, llama.cpp, Ollama, and major inference platforms. Key technical highlights include MoE architecture, multimodal capabilities, impressive local inference benchmarks (162 tok/s on RTX 4090, runs on M4 MacBooks and iPhones), and ecosystem-wide quantization/optimization support within hours of release.
Marc Andreessen discusses AI's 80-year technical trajectory, scaling laws, reasoning models, agents, and edge inference in a long-form conversation. Key technical insights include his perspectives on agents as a Unix-like architecture, edge AI economics, open-source models, and why software bottlenecks may matter more than model improvements going forward.
Google DeepMind released Gemma 4, a family of open-weight models (31B dense, 26B MoE, edge variants) under Apache 2.0 license with native multimodal support (text/image/video/audio), 256K context, and function calling—positioning it as a top-tier open model for reasoning, agents, and edge deployment. The 31B variant achieves competitive performance with significantly fewer parameters than rivals, with strong benchmarks on GPQA and AIME, and rapid ecosystem adoption already underway.
Google released Gemma 4, a family of open-source models (2B to 31B parameters) built on Gemini 3 technology, ranked #3 and #6 on Arena AI leaderboard for their sizes. The models are optimized for on-device deployment, agentic workflows, and fine-tuning across hardware from mobile to datacenter, with Apache 2.0 licensing enabling direct integration into engineering workflows.
Holo3 is a new 10B-parameter agent model achieving 78.85% on the OSWorld benchmark for autonomous desktop task execution, with weights openly available on Hugging Face under an Apache 2.0 license. The model is production-ready and trained via a specialized flywheel combining synthetic navigation data, out-of-domain augmentation, and curated reinforcement learning for computer-use tasks across enterprise applications.
apfel is an open-source tool that exposes Apple's on-device foundation model through a CLI, OpenAI-compatible API server, and shell integration—enabling local LLM inference on Apple Silicon Macs with no cloud dependency, API keys, or per-token billing. It supports tool calling via Model Context Protocol (MCP), includes demo shell scripts for practical workflows, and manages a 4096-token context window automatically.
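Because the server speaks the OpenAI chat-completions dialect, clients only need to build a standard payload. The sketch below shows the request shape; the port and model id are assumptions, so check the project's README for the real values.

```python
import json

# Hypothetical client-side sketch for an OpenAI-compatible local server
# such as the one apfel exposes. Endpoint URL and model id are assumed.

BASE_URL = "http://localhost:8080/v1/chat/completions"  # assumed port

def build_request(prompt: str, max_tokens: int = 256) -> dict:
    """Build a standard OpenAI-style chat-completion payload."""
    return {
        "model": "apple-on-device",   # placeholder model id
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }

payload = build_request("Summarize today's AI news in one sentence.")
body = json.dumps(payload)
# In practice, POST `body` to BASE_URL with a Content-Type: application/json header.
```

Any existing OpenAI SDK client should also work by pointing its base URL at the local server, which is the main appeal of exposing a compatible API.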
A curated resource listing LLM APIs with permanent free tiers for text inference, including first-party APIs from model trainers and third-party platforms hosting open-weight models. Covers rate limits, available regions, and notable models—useful reference for engineers exploring cost-free inference options during development and experimentation.
Google released Gemini 3.1 Flash-Lite, a new lightweight model optimized for high-volume production workloads at $0.25/1M input tokens and $1.50/1M output tokens. It delivers 2.5X faster time-to-first-token and 45% faster output speeds than 2.5 Flash while maintaining quality, making it ideal for real-time applications like translation, content moderation, UI generation, and agentic workflows at scale.
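At the quoted prices, per-request cost is simple arithmetic; a back-of-envelope estimate (the request sizes below are illustrative assumptions):

```python
# Cost estimate at the quoted Gemini 3.1 Flash-Lite pricing:
# $0.25 per 1M input tokens, $1.50 per 1M output tokens.

INPUT_PER_M = 0.25
OUTPUT_PER_M = 1.50

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Dollar cost of a single request."""
    return (input_tokens / 1e6) * INPUT_PER_M + (output_tokens / 1e6) * OUTPUT_PER_M

# Hypothetical workload: 1M requests/day at 2K input / 500 output tokens each.
daily_cost = 1_000_000 * request_cost(2_000, 500)   # $1,250/day
```

This is the kind of workload (high volume, short outputs) where the input-heavy pricing of a lightweight model matters most.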
Google DeepMind released Nano Banana 2 (Gemini 3.1 Flash Image), a new image generation model combining advanced reasoning and world knowledge with Flash-speed inference. The model is now available across Google products (Gemini app, Search) and offers improved subject consistency, photorealism, and instruction-following capabilities with reduced latency compared to the Pro version.
Comprehensive overview of inference-time scaling techniques for LLMs, covering methods like chain-of-thought prompting, self-consistency, best-of-N ranking, and rejection sampling with verifiers. The author shares practical experimentation results (achieving 15% to 52% accuracy improvement) and categorizes approaches from both academic literature and proprietary LLM implementations, making it directly applicable to deployed systems.
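Self-consistency, one of the techniques listed, can be sketched as sampling N candidate answers at nonzero temperature and majority-voting. The sampler below is a stub standing in for a real model call; the weights and answer strings are illustrative only.

```python
import random
from collections import Counter

# Toy self-consistency sketch: draw N samples, keep the majority vote.
# sample_answer is a stub for a temperature > 0 LLM call.

def sample_answer(question: str, rng: random.Random) -> str:
    """Stub sampler: mostly right, occasionally wrong."""
    return rng.choices(["42", "41"], weights=[0.8, 0.2])[0]

def self_consistency(question: str, n: int = 15, seed: int = 0) -> str:
    rng = random.Random(seed)
    votes = Counter(sample_answer(question, rng) for _ in range(n))
    return votes.most_common(1)[0][0]   # majority answer wins
```

Best-of-N with a verifier has the same skeleton, but replaces the majority vote with an external scoring function that picks the highest-rated candidate.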
DeepSeek V3.2 is a new open-weight flagship model achieving GPT-5/Gemini 3.0 Pro-level performance with a custom sparse attention mechanism requiring specialized inference infrastructure. The article provides technical deep-dive into the model's architecture, training pipeline, and what's changed since V3/R1, making it essential for engineers working with state-of-the-art open-source models.