
Anthropic’s $350B Valuation & GPT-5.2 Cracks Unsolved Math Problem in AI Milestone Week

Funding and Valuation

Anthropic reportedly raising $10B at a $350B valuation as AI funding accelerates
Anthropic is in talks to raise $10 billion at a $350 billion valuation, one of the largest private AI fundraises to date. The move reflects surging investor confidence in frontier AI models and hints at a potential wave of AI IPOs in 2026, driven by demand for compute and infrastructure.


New Models and Research

DeepSeek-R1 paper expanded with detailed updates
The DeepSeek-R1 research paper was updated from 22 to 86 pages, adding comprehensive details on architecture, training methods, and performance benchmarks, offering deeper insights into the model’s advancements.


GPT-5.2 solves an unsolved Erdős mathematical problem
A team leveraged GPT-5.2 to solve a previously unsolved Erdős problem, marking the first time an LLM autonomously resolved such a challenge. The process involved iterative research, proof refinement, and peer review, showcasing LLMs' potential in advanced mathematical research.


Sopro: A lightweight real-time TTS model with zero-shot voice cloning
A developer trained Sopro, a 169M-parameter TTS model that supports streaming and zero-shot voice cloning. Trained on a single L40S GPU for roughly $250, the model generates 30 seconds of audio in 7.5 seconds on a CPU and needs only 3–12 seconds of reference audio to clone a voice.
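The reported CPU speed is easier to interpret as a real-time factor (RTF), the standard TTS speed metric. A quick back-of-the-envelope check using only the numbers quoted above:

```python
# Real-time factor for the reported CPU benchmark:
# 30 seconds of audio generated in 7.5 seconds of wall-clock time.
audio_seconds = 30.0
wall_seconds = 7.5

rtf = wall_seconds / audio_seconds       # lower is better; < 1.0 means faster than real time
speedup = audio_seconds / wall_seconds   # how many times faster than playback

print(f"RTF = {rtf:.2f} ({speedup:.0f}x faster than real time)")
# → RTF = 0.25 (4x faster than real time)
```

An RTF of 0.25 means the model produces audio four times faster than it plays back, which is what makes streaming output on CPU practical.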


Products and Services

OpenAI launches ChatGPT Health with medical record integration
OpenAI introduced ChatGPT Health, a private space for health-related conversations that allows users to connect medical records and wellness apps (e.g., Apple Health, Peloton). The feature aims to provide personalized, integrated health management within ChatGPT.


OpenAI prepares to test ads in ChatGPT
OpenAI is reportedly planning to introduce advertisements in ChatGPT, a move that could impact both free and paid users. The strategy aims to further monetize the platform but may affect user experience and retention.


Infrastructure and Performance

DeepSeek V3.2 achieves high throughput on AMD MI50 GPUs
A developer ran DeepSeek V3.2 AWQ 4-bit on 16 AMD MI50 32GB GPUs, achieving 10 tokens/sec (output) and 2,000 tokens/sec (input). The setup demonstrates a cost-effective approach to local AI inference, with plans to scale to 32 GPUs for future models like Kimi K2 Thinking.
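A rough capacity check shows why 16 of these GPUs are enough. This is a sketch, not a reproduction of the actual deployment: it assumes the model is at the ~671B-parameter scale of DeepSeek-V3 and ignores activation and framework overhead.

```python
# Aggregate VRAM of the reported rig vs. an approximate 4-bit weight footprint.
gpus = 16
vram_per_gpu_gb = 32
total_vram_gb = gpus * vram_per_gpu_gb        # 512 GB across the rig

params_billions = 671        # assumed parameter count (DeepSeek-V3 scale)
bytes_per_param = 0.5        # 4-bit AWQ quantization = half a byte per weight
weights_gb = params_billions * bytes_per_param  # ~335 GB of quantized weights

headroom_gb = total_vram_gb - weights_gb      # left over for KV cache, etc.
print(f"{total_vram_gb} GB total, ~{weights_gb:.0f} GB weights, ~{headroom_gb:.0f} GB headroom")
```

Under these assumptions the quantized weights fit with well over 100 GB to spare for KV cache, which is consistent with the setup handling long input prompts at 2,000 tokens/sec.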