DeepSeek Rises to New Heights, Offering a Strong Open Source Challenge to Giants Like OpenAI and Google
This year, a newcomer is stirring up the global AI market. Chinese startup DeepSeek, a spinoff of Hong Kong-based High-Flyer Capital Management, has released DeepSeek R1-0528, a major upgrade to its well-known open-source AI model R1. The new release delivers substantial advances that bring DeepSeek near parity with leading proprietary models such as OpenAI’s o3 and Google’s Gemini 2.5 Pro.
But the appeal of DeepSeek’s newest offering doesn’t end with raw capability. The company has put notable effort into the user experience: local deployment is documented with detailed guidelines, support is available by email, and anyone who wants a hands-on trial can use the model through DeepSeek’s website after signing in with a Google account or a phone number.
At the heart of the update is a marked improvement in the model’s reasoning. By applying greater computational resources and algorithmic fine-tuning during post-training, DeepSeek has achieved notable gains across benchmarks. On AIME 2025, DeepSeek-R1-0528’s accuracy jumped from 70% to 87.5%. Coding performance, measured on the LiveCodeBench dataset, showed similar leaps, and performance on ‘Humanity’s Last Exam’ nearly doubled, bringing DeepSeek-R1-0528 closer to the consistency seen in stalwarts like OpenAI’s o3 and Google’s Gemini 2.5 Pro.
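To put the headline AIME 2025 numbers in perspective, the reported move from 70% to 87.5% can be expressed as both an absolute and a relative gain; the small check below is purely illustrative arithmetic on the figures quoted above.

```python
# Quantify the reported AIME 2025 accuracy jump for DeepSeek-R1-0528.
before, after = 70.0, 87.5  # accuracy in percent, as reported

absolute_gain = after - before           # gain in percentage points
relative_gain = (after - before) / before  # gain relative to the old score

print(absolute_gain)             # 17.5
print(round(relative_gain, 2))   # 0.25, i.e. a 25% relative improvement
```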
The release also introduces several features aimed at smoothing what is often a convoluted AI-model workflow. The model now supports JSON output and function calling, alongside an improved front end, giving developers a more interactive, efficient experience. A key change is native support for system prompts: enabling the model’s ‘thinking’ mode no longer requires a special token, which simplifies deployment.
Source: venturebeat.com, 30-05-2025, “DeepSeek R1-0528 arrives in powerful open source challenge to OpenAI o3 and Google Gemini 2.5 Pro”