Kimi K2.5

Kimi K2.5 is a native multimodal model developed by Moonshot AI, featuring coding with vision capabilities and agent swarm execution.


Community Voices

What People Are Saying About Kimi K2.5

Watch these in-depth reviews and tutorials from the AI community showcasing Kimi K2.5's capabilities.

  • "Why is Everyone OBSESSED With The New Kimi K2.5 AI Model" (BetterStack): explores the model's capabilities, including Agent Swarm features, video-to-code functionality, and creative stress tests.
  • "Kimi K2.5 might be my new favorite model..." (T3): an in-depth review of coding, vision understanding, and agent swarm execution.
  • "Kimi K2.5 and Agent Swarm Explained" (AI Explained): a deep dive into Agent Swarm technology, explaining how it coordinates multiple sub-agents for parallel task execution.
  • "Kimi K2.5 - The Agent Swarm in Action" (AI News): shows the Agent Swarm feature orchestrating 100 sub-agents to complete complex multi-step tasks in parallel.
  • "Kimi K2.5 just dropped... (WOAH)" (Matthew Berman): covers the release, highlighting native multimodal capabilities, Agent Swarm, and why open-source AI is back.
  • "New #1 open-source AI model is WILD" (AI Search): hands-on testing with real-world demos, including a Figma clone, video-to-website, maze solving, hand tracking, and deep research tasks.

Key Capabilities

Explore what Kimi K2.5 can do across three areas: Coding with Vision, Agent Swarm, and Office Productivity.

Coding with Vision

Visual Understanding

K2.5 works directly from visual and conversational inputs, turning them into code and tool workflows:

  • Turn conversations into complete front-end interfaces
  • Implement interactive layouts and scroll-triggered effects
  • Generate code from UI designs and video workflows
  • Autonomously orchestrate tools for visual data processing
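To make the UI-to-code workflow above concrete, here is a minimal sketch of how such a request might be assembled for an OpenAI-style multimodal chat endpoint. The `kimi-k2.5` model identifier and the exact message schema are assumptions for illustration, not taken from official API documentation:

```python
import base64


def build_vision_coding_request(image_bytes: bytes, instruction: str) -> dict:
    """Pair an image (e.g. a UI screenshot) with a coding instruction
    in an OpenAI-style chat payload. Model name is hypothetical."""
    b64 = base64.b64encode(image_bytes).decode("ascii")
    return {
        "model": "kimi-k2.5",  # assumed identifier, check provider docs
        "messages": [{
            "role": "user",
            "content": [
                {"type": "text", "text": instruction},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/png;base64,{b64}"}},
            ],
        }],
    }


# In practice image_bytes would come from a screenshot or design export.
payload = build_vision_coding_request(
    b"<png bytes>", "Recreate this login screen as responsive HTML/CSS")
```

The image travels inline as a base64 data URL alongside the text instruction, so a single turn can carry both the design and the coding request.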
Technical Architecture

Mixture of Experts

Based on publicly available information, Kimi K2.5 employs an MoE architecture with 384 experts, dynamically routing each token to its 8 most relevant experts for efficient processing.

[Capability diagram: K2.5 spanning language, code, vision, math, reasoning, and tools]

  • 1T total parameters
  • 32B activated per token
  • 256K context length
  • 384 experts

According to the technical documentation, Kimi K2.5 uses a sparse Mixture-of-Experts architecture with 384 experts, of which 8 are activated per token. The model also features a 400M-parameter MoonViT vision encoder for multimodal capabilities.
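The top-8-of-384 routing described above can be sketched as a toy gating function. The expert scores here are random stand-ins, purely illustrative of the top-k mechanism, not of K2.5's actual router:

```python
import math
import random


def route_token(logits, k=8):
    """Toy top-k MoE router: pick the k highest-scoring experts for one
    token and weight them with a softmax renormalized over that top k."""
    top = sorted(range(len(logits)), key=lambda i: logits[i])[-k:]
    m = max(logits[i] for i in top)              # for numerical stability
    exp = [math.exp(logits[i] - m) for i in top]
    z = sum(exp)
    return top, [e / z for e in exp]


# One token scored against 384 experts, top-8 chosen (sizes from the text).
random.seed(0)
logits = [random.gauss(0, 1) for _ in range(384)]
experts, weights = route_token(logits)  # 8 expert indices, weights sum to 1
```

Because only 8 of 384 expert sub-networks run per token, the active compute per token (32B parameters) stays a small fraction of the 1T total.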

Benchmark Results

Performance Highlights

Based on publicly available benchmark data from kimi.com and official sources.

Benchmark           | Category      | Kimi K2.5 | GPT-5.2 | Claude 4.5 Opus
--------------------|---------------|-----------|---------|----------------
HLE-Full w/ tools   | Agentic       | 50.2%     | 45.5%   | 43.2%
AIME 2025           | Mathematics   | 96.1%     | 100%    | 92.8%
GPQA-Diamond        | Reasoning     | 87.6%     | 92.4%   | 87%
MMLU-Pro            | Knowledge     | n/a       | 86.7%   | 89.3%
MMMU-Pro            | Multimodal    | n/a       | 79.5%   | 74%
MathVision          | Vision + Math | n/a       | 83%     | 77.1%

HLE-Full w/ tools is Humanity's Last Exam with tool use; AIME 2025 is competition-level mathematics; GPQA-Diamond is graduate-level reasoning; MMLU-Pro is professional knowledge evaluation; MMMU-Pro is multimodal understanding; MathVision is mathematical visual reasoning. (n/a: score not listed.)

Notable Results

  • 50.2% on HLE-Full w/ tools (agentic reasoning)
  • 96.1% on AIME 2025 (competition math)
  • 87.6% on GPQA-Diamond (graduate reasoning)
  • Open source, available on HuggingFace
FAQ

Frequently Asked Questions

Everything you need to know about Kimi K2.5.