Moonshot AI Launches New Model With Improved Coding and Agent Capabilities

Chinese artificial intelligence startup Moonshot AI has launched an open-source large model that it claims rivals top foreign competitors like OpenAI’s GPT-5.4 in coding and agent-based tasks.
Released on Monday, the Kimi K2.6 model is designed to execute extended programming assignments and coordinate large clusters of AI agents to produce complex outputs. The new model is available across the company’s platforms, including its website, app and the Kimi Code assistant.
- DIGEST HUB
- Moonshot AI launched the open-source Kimi K2.6 model, which it says rivals OpenAI’s GPT-5.4 in coding and agent-based tasks.
- The model matched or outperformed GPT-5.4, Anthropic’s Claude Opus 4.6 and Google’s Gemini 3.1 Pro on benchmarks including SWE-Bench Pro (software engineering) and DeepSearchQA (AI agents’ deep web research).
- Kimi K2.6 can write more than 4,000 lines of code over 13-hour sessions and coordinate 300 sub-agents for complex workflows such as building websites and presentations; Reddit users have praised its front-end development.