China's DeepSeek Coder Becomes First Open-Source Coding Model To Beat GPT-4 Turbo
Shubham Sharma reports via VentureBeat: Chinese AI startup DeepSeek, which previously made headlines with a ChatGPT competitor trained on 2 trillion English and Chinese tokens, has announced the release of DeepSeek Coder V2, an open-source mixture-of-experts (MoE) code language model. Built upon DeepSeek-V2, an MoE model that debuted last month, DeepSeek Coder V2 excels at both coding and math tasks. It supports more than 300 programming languages and outperforms state-of-the-art closed-source models, including GPT-4 Turbo, Claude 3 Opus and Gemini 1.5 Pro. The company claims this is the first time an open model has achieved this feat, sitting well ahead of Llama 3 70B and other models in the category. It also notes that DeepSeek Coder V2 maintains comparable performance in terms of general reasoning and language capabilities.

Founded last year with a mission to "unravel the mystery of AGI with curiosity," DeepSeek has been a notable Chinese player in the AI race, joining the likes of Qwen, 01.AI and Baidu. In fact, within a year of its launch, the company has already open-sourced a number of models, including the DeepSeek Coder family. The original DeepSeek Coder, with up to 33 billion parameters, did decently on benchmarks, with capabilities like project-level code completion and infilling, but it only supported 86 programming languages and a 16K context window. The new V2 offering builds on that work, expanding language support to 338 languages and the context window to 128K -- enabling it to handle more complex and extensive coding tasks.

When tested on the MBPP+, HumanEval, and Aider benchmarks, designed to evaluate the code generation, editing and problem-solving capabilities of LLMs, DeepSeek Coder V2 scored 76.2, 90.2, and 73.7, respectively -- sitting ahead of most closed and open-source models, including GPT-4 Turbo, Claude 3 Opus, Gemini 1.5 Pro, Codestral and Llama 3 70B. Similar performance was seen across benchmarks designed to assess the model's mathematical capabilities (MATH and GSM8K). The only model that managed to outperform DeepSeek's offering across multiple benchmarks was GPT-4o, which obtained marginally higher scores on HumanEval, LiveCodeBench, MATH and GSM8K. [...]

As of now, DeepSeek Coder V2 is being offered under an MIT license, which allows for both research and unrestricted commercial use. Users can download both the 16B and 236B sizes, in instruct and base variants, via Hugging Face. Alternatively, the company is providing access to the models via API through its platform under a pay-as-you-go model. For those who want to test out the capabilities of the models first, the company is offering the option to interact with DeepSeek Coder V2 via a chatbot.
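For readers who want to try the open weights locally, here is a minimal sketch of loading an instruct variant with the Hugging Face transformers library. The repository name, precision settings and prompt below are illustrative assumptions rather than official instructions; check the actual model cards on Hugging Face for exact identifiers and hardware requirements.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

# Assumed Hugging Face repo id for the smaller (16B) instruct variant;
# the 236B variant would use a different repo name -- check the model cards.
model_id = "deepseek-ai/DeepSeek-Coder-V2-Lite-Instruct"

tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # half precision; MoE checkpoints are large
    device_map="auto",           # spread layers across available GPUs
    trust_remote_code=True,
)

# Simple coding prompt formatted with the model's chat template.
messages = [
    {"role": "user",
     "content": "Write a Python function that checks whether a string is a palindrome."}
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=256, do_sample=False)
# Decode only the newly generated tokens, not the prompt.
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```

The same prompt could instead be sent to DeepSeek's hosted pay-as-you-go API, which avoids downloading the weights entirely; the trade-off is local control and license flexibility versus per-token cost and hardware requirements.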
Read more of this story at Slashdot.