Google’s Leaner AI & GPT-5.2’s 6.6-Hour Benchmark Redefine Efficiency & Endurance
AI Research & Model Advancements
Google Research Introduces "Sequential Attention" for Leaner, Faster AI Models
Google Research announced a new technique called Sequential Attention, designed to improve AI model efficiency by refining the attention mechanism. The method aims to cut computational overhead without sacrificing accuracy, enabling deployment in resource-constrained settings such as on-device inference or cost-sensitive cloud services.
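The announcement does not spell out the mechanism's internals, but the cost it targets is well understood: standard scaled dot-product attention builds an n-by-n score matrix, so compute grows quadratically with sequence length. A minimal NumPy sketch of that baseline, the term efficiency techniques like this aim to trim, looks like:

```python
import numpy as np

def attention(q, k, v):
    """Standard scaled dot-product attention; O(n^2) in sequence length n."""
    d = q.shape[-1]
    scores = q @ k.T / np.sqrt(d)  # (n, n) score matrix -- the quadratic bottleneck
    # Numerically stable softmax over each row of scores.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v  # weighted mix of value vectors
```

Every entry of `scores` must be computed and stored, which is exactly what leaner attention variants avoid, typically by restricting or approximating this matrix.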
OpenAI’s GPT-5.2 High Achieves Record 6.6-Hour Time Horizon on METR Benchmark
GPT-5.2 High set a new record on METR's 50%-time-horizon evaluation, which estimates the length of tasks, measured in skilled-human working time, that a model can complete with 50% reliability: roughly 6.6 hours. The milestone highlights progress in long-horizon performance, critical for applications requiring extended reasoning or multi-step operations.
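METR's published methodology fits a logistic curve of success probability against the logarithm of a task's human completion time; the 50% horizon is the duration at which that curve crosses one half. A small illustrative sketch (the parameter values below are hypothetical, not METR's fits):

```python
import math

def fifty_percent_horizon(intercept: float, slope: float) -> float:
    """Given a fitted model p(success) = sigmoid(intercept + slope * log2(hours)),
    return the task length in hours at which predicted success is exactly 50%.

    sigmoid(x) = 0.5 precisely when x = 0, so we solve
    intercept + slope * log2(t) = 0 for t.
    """
    return 2.0 ** (-intercept / slope)
```

For example, a fit with `slope = -1.0` and `intercept = log2(6.6)` yields a 6.6-hour horizon; doubling a model's horizon corresponds to shifting the intercept by one unit under this parameterization.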
AI Tools & Developer Products
Perplexity Develops "Model Council" for Consensus-Based AI Research
Perplexity is building a Model Council feature that aggregates outputs from three frontier AI models to generate verified, consensus-driven answers. The tool targets research applications, aiming to improve response reliability by cross-referencing multiple model perspectives.
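Perplexity has not detailed how the council reconciles disagreement; one simple consensus scheme consistent with the description is a majority vote over normalized answers. The sketch below is illustrative, not Perplexity's implementation:

```python
from collections import Counter

def council_answer(answers: list[str]) -> tuple[str, bool]:
    """Pick the most common answer across models and report whether a
    strict majority agreed (i.e., the answer is consensus-verified)."""
    normalized = [a.strip().lower() for a in answers]
    best, count = Counter(normalized).most_common(1)[0]
    return best, count > len(normalized) // 2

# Hypothetical outputs from three frontier models for the same query.
answer, verified = council_answer(["Paris", "paris", "Lyon"])
```

In practice a research tool would compare semantic equivalence rather than exact strings, but the majority-agreement gate is the core idea of cross-referencing multiple model perspectives.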
Open-Source VSCode Extension "Codag" Visualizes LLM Workflows as Interactive Graphs
A developer released Codag, a VSCode extension that transforms LLM workflows into interactive, shareable graphs. The tool supports multiple AI models and programming languages, helping developers debug and collaborate on complex AI-driven projects more efficiently.
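The natural data structure behind such a visualization is a directed acyclic graph of LLM calls, where edges record which step feeds which. A minimal sketch of that representation (the node fields and workflow below are hypothetical, not Codag's actual schema):

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    name: str                                        # step identifier
    model: str                                       # which LLM handles this step
    inputs: list[str] = field(default_factory=list)  # names of upstream steps

def topological_order(nodes: list[Node]) -> list[str]:
    """Order workflow steps so each node appears after all of its inputs
    (Kahn's algorithm) -- the order a visualizer would lay steps out in."""
    indegree = {n.name: len(n.inputs) for n in nodes}
    ready = [name for name, deg in indegree.items() if deg == 0]
    order = []
    while ready:
        name = ready.pop()
        order.append(name)
        for n in nodes:
            if name in n.inputs:
                indegree[n.name] -= 1
                if indegree[n.name] == 0:
                    ready.append(n.name)
    return order

# Hypothetical draft -> critique -> final workflow spanning two models.
workflow = [
    Node("draft", "model-a"),
    Node("critique", "model-b", inputs=["draft"]),
    Node("final", "model-a", inputs=["draft", "critique"]),
]
```

Rendering the edges of such a graph is what turns an opaque chain of prompts into something a team can inspect and debug step by step.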