OpenAI Unveils GPT-5.5 as DeepSeek V4 and Qwen Set New Performance Records
Model Releases
DeepSeek Launches V4 Model Suite with Pro and Flash Versions: DeepSeek has released DeepSeek V4, featuring a 1.6T parameter Pro model and a 284B parameter Flash model, both supporting a one-million-token context length. The Pro version is significantly more efficient than its predecessor, requiring only 27% of the inference FLOPs, and pricing is expected to drop further following a large-scale deployment of Huawei-based supernodes.
- DeepSeek V4 has been released
- DeepSeek V4 Benchmarks!
- DeepSeek V4 Pro is out
- DeepSeek confirms Huawei-based V4 inference: "After the 950 supernodes are launched at scale in the second half of this year, the price of Pro is expected to be reduced significantly."
- DeepSeek V4 Flash and Non-Flash out on Hugging Face
- Buried lede: DeepSeek V4 Flash is incredibly inexpensive from the official API for its weight category
OpenAI Introduces GPT-5.5 with Enhanced Intelligence and Efficiency: OpenAI has launched GPT-5.5, a more token-efficient and intelligent successor to GPT-5.4, priced at $5 per 1M input tokens and $30 per 1M output tokens. Early benchmarks show the model excelling in software engineering, terminal capabilities, and web browsing compared to competitors like Claude Opus 4.7 and Gemini 3.1 Pro.
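The per-token pricing above reduces to simple arithmetic. A minimal sketch of a cost estimator, using the rates quoted in the summary ($5/1M input, $30/1M output); the function name and example token counts are hypothetical:

```python
# Hypothetical cost estimator for per-token API pricing.
# Rates are the GPT-5.5 prices quoted above; not an official SDK.

INPUT_RATE_PER_M = 5.00    # USD per 1M input tokens
OUTPUT_RATE_PER_M = 30.00  # USD per 1M output tokens

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the USD cost of a single request at the stated rates."""
    return (input_tokens / 1_000_000) * INPUT_RATE_PER_M \
         + (output_tokens / 1_000_000) * OUTPUT_RATE_PER_M

# Example: a 20k-token prompt producing a 2k-token reply
print(round(request_cost(20_000, 2_000), 4))  # → 0.16
```

At these rates, output tokens cost 6x as much as input tokens, so long completions dominate the bill for most workloads.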
Model Benchmarks & Performance
Qwen 3.6 27B Achieves Major Gains in Agentic Capabilities: The new Qwen 3.6 27B model has reached parity with Claude Sonnet 4.6 on Artificial Analysis agentic benchmarks. The model shows significant improvements in reasoning and tool use, signaling strong performance on agent-based AI tasks.