GitHub Trending AI · 21d ago · 8 · tool open source api update inference deployment

apfel is an open-source tool that exposes Apple's on-device foundation model through a CLI, OpenAI-compatible API server, and shell integration—enabling local LLM inference on Apple Silicon Macs with no cloud dependency, API keys, or per-token billing. It supports tool calling via Model Context Protocol (MCP), includes demo shell scripts for practical workflows, and manages a 4096-token context window automatically.

GitHub Trending AI · 21d ago · 7 · tool open source library agent rag deployment

A curated directory of production-ready open-source AI tools and libraries organized by category (core frameworks, models, inference, agents, RAG, training, deployment, benchmarks, safety). Highlights practical CLI tools like PR-Agent, Gemini CLI, LLM, and Repomix that directly integrate AI into developer workflows.

Ahead of AI · 24d ago · 8 · research tutorial open source

Comprehensive reference guide organizing 45+ LLM architectures with visual model cards and detailed explanations of attention variants (MHA, GQA, sliding window, etc.) used in modern models. Includes both a web gallery and printable poster, serving as a practical learning resource for understanding contemporary transformer architectures.

GitHub Trending AI · 24d ago · 7 · tool open source agent deployment

holaOS is an agent operating system framework that provides infrastructure for long-running AI agents with persistent memory, durable state, and continuity across executions rather than one-off tasks. The project includes a local desktop environment (Holaboss) with quick-start installation and integration points for coding agents like Claude, Cursor, and Windsurf.

GitHub Trending AI · 24d ago · 7 · api update tool inference

A curated resource listing LLM APIs with permanent free tiers for text inference, including first-party APIs from model trainers and third-party platforms hosting open-weight models. Covers rate limits, available regions, and notable models—useful reference for engineers exploring cost-free inference options during development and experimentation.

GitHub Trending AI · 27d ago · 7 · tutorial workflow agent open source

A comprehensive AI engineering curriculum spanning 260+ lessons across 20 phases (~290 hours) covering fundamentals from linear algebra to autonomous agent swarms in Python, TypeScript, Rust, and Julia. Each lesson produces reusable artifacts (prompts, skills, agents, MCP servers) that can be immediately integrated into AI coding workflows, with personalized learning paths based on existing ML/DL knowledge.

DeepMind Blog · 28d ago · 7 · benchmark research tool

Google DeepMind released a cognitive taxonomy framework for measuring AGI progress, grounded in psychology and neuroscience, identifying 10 key cognitive abilities. They're launching a $200K Kaggle hackathon where engineers can design evaluations for five priority abilities (learning, metacognition, attention, executive functions, social cognition) using their new Community Benchmarks platform to test against frontier models.

OpenAI Research · 36d ago · 7 · research fine tuning safety prompt engineering

IH-Challenge is a training framework that teaches models to respect instruction hierarchy and distinguish between trusted vs. untrusted inputs, improving robustness against prompt injection attacks and enhancing safety steerability. This is practically useful for engineers building production AI systems that need stronger defenses against adversarial inputs.

OpenAI Research · 41d ago · 7 · research prompt engineering agent

OpenAI presents CoT-Control, a technique for steering chain-of-thought reasoning in language models, revealing that current reasoning models have difficulty maintaining controlled thought processes. This research addresses interpretability and monitorability concerns, providing practical insights for building more controllable AI systems in production.

DeepMind Blog · 42d ago · 9 · new model api update inference

Google released Gemini 3.1 Flash-Lite, a new lightweight model optimized for high-volume production workloads at $0.25/1M input tokens and $1.50/1M output tokens. It delivers 2.5X faster time-to-first-token and 45% faster output speeds than 2.5 Flash while maintaining quality, making it ideal for real-time applications like translation, content moderation, UI generation, and agentic workflows at scale.

DeepMind Blog · 47d ago · 7 · new model api update inference

Google DeepMind released Nano Banana 2 (Gemini 3.1 Flash Image), a new image generation model combining advanced reasoning and world knowledge with Flash-speed inference. The model is now available across Google products (Gemini app, Search) and offers improved subject consistency, photorealism, and instruction-following capabilities with reduced latency compared to the Pro version.

Ahead of AI · 49d ago · 8 · new model research benchmark

Comprehensive technical comparison of 10+ major open-weight LLM releases from January-March 2026, analyzing architectural innovations like mixture-of-experts, sliding window attention, QK-norm, and gating mechanisms across models from Arcee, Moonshot, Qwen, and others. Serves as a practical reference for understanding current design patterns and trade-offs in large model architecture.

OpenAI Research · 51d ago · 7 · benchmark research

Analysis reveals significant data contamination and training leakage issues in SWE-bench Verified, a widely-used benchmark for evaluating AI coding models, with recommendations to use SWE-bench Pro instead. This is technically important for engineers evaluating code generation models and understanding the reliability of current benchmarking standards.

OpenAI Research · 54d ago · 7 · benchmark research agent

Research team demonstrates AI model performance on expert-level mathematical proof problems from the First Proof challenge, providing insights into current capabilities and limitations of AI reasoning on formal mathematics. This benchmarking work is relevant for engineers building AI systems that require complex reasoning and problem-solving.

DeepMind Blog · 54d ago · 9 · new model api update benchmark

Google released Gemini 3.1 Pro, an upgraded core model with significantly improved reasoning capabilities (77.1% on ARC-AGI-2, more than 2x better than 3 Pro). Available through Gemini API, Vertex AI, and consumer products, it excels at complex problem-solving tasks including code generation, system synthesis, and advanced reasoning workflows that engineers building with AI will find immediately applicable.

DeepMind Blog · 55d ago · 6 · new model api update deployment

Google DeepMind released Lyria 3, an advanced music generation model integrated into the Gemini app, allowing users to create 30-second tracks from text descriptions or images with SynthID watermarking for AI-generated content detection. The model improves on previous versions with better audio quality and customization, and is also rolling out to YouTube creators for Dream Track.

OpenAI Research · 56d ago · 6 · benchmark agent research

EVMbench is a new benchmark for evaluating AI agents on smart contract security tasks like vulnerability detection and patching. While technically interesting for agent evaluation, it's specialized to blockchain/security domains rather than general AI engineering workflows.

OpenAI Research · 61d ago · 6 · research benchmark

GPT-5.2 generated a novel theoretical physics formula for gluon amplitudes that was subsequently validated by formal proof and peer verification. While intellectually interesting, this represents a scientific application outcome rather than actionable technical guidance for AI builders developing with current models.

OpenAI Research · 69d ago · 6 · agent workflow api update

An autonomous lab system integrates GPT-5 with cloud automation for closed-loop experimentation in synthetic biology, demonstrating a 40% cost reduction in protein synthesis. While showcasing practical AI agent application in scientific workflows, the focus is primarily on biotech outcomes rather than AI engineering techniques or tools.

Ahead of AI · 81d ago · 8 · inference prompt engineering tutorial research

Comprehensive overview of inference-time scaling techniques for LLMs, covering methods like chain-of-thought prompting, self-consistency, best-of-N ranking, and rejection sampling with verifiers. The author shares practical experimentation results (achieving 15% to 52% accuracy improvement) and categorizes approaches from both academic literature and proprietary LLM implementations, making it directly applicable to deployed systems.