DeepSeek’s Latest Update, R1-0528, Matches OpenAI’s o3 and Gemini 2.5 Pro

In late May 2025, Chinese startup DeepSeek quietly rolled out R1-0528, a beefed-up version of its open-source R1 reasoning model. The upgrade raises the parameter count from 671 billion to 685 billion, adds a lightweight distilled variant, and publishes all weights, training recipes, and documentation under an MIT license on GitHub and Hugging Face. Under the hood, R1-0528 deepens chain-of-thought layers to improve multi-step logic, applies post-training tweaks to curb hallucinations, and reduces latency. On the integration side it introduces JSON outputs, native function calling, and simpler system prompts—bringing its performance on math, coding, and logic benchmarks within striking distance of OpenAI’s o3 and Google’s Gemini 2.5 Pro, all without per-token […]
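The JSON-output and function-calling features mentioned above follow the OpenAI-style chat-completions schema that DeepSeek’s API exposes. A minimal sketch of what such a request payload might look like — the model identifier, endpoint, and `get_weather` tool here are illustrative assumptions, not confirmed details from DeepSeek’s docs:

```python
import json

# Sketch of an OpenAI-style chat-completions request for R1-0528.
# Model name, endpoint, and the get_weather tool are assumptions for illustration.
payload = {
    "model": "deepseek-reasoner",  # assumed model identifier
    "messages": [
        {"role": "system", "content": "Answer in JSON."},
        {"role": "user", "content": "What is the weather in Paris?"},
    ],
    # Structured JSON output mode (one of the new integration features)
    "response_format": {"type": "json_object"},
    # Native function calling via OpenAI-style tool specifications
    "tools": [
        {
            "type": "function",
            "function": {
                "name": "get_weather",  # hypothetical example tool
                "description": "Look up the current weather for a city.",
                "parameters": {
                    "type": "object",
                    "properties": {"city": {"type": "string"}},
                    "required": ["city"],
                },
            },
        }
    ],
}

# In practice this payload would be POSTed to a chat-completions endpoint, e.g.:
#   requests.post("https://api.deepseek.com/chat/completions", json=payload,
#                 headers={"Authorization": "Bearer <API_KEY>"})
print(json.dumps(payload, indent=2))
```

The same request shape works against any OpenAI-compatible client library, which is what makes drop-in migration from o3-class endpoints plausible.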