xAI launches agentic model
Briefly

"Its technical performance is particularly notable for its speed. Thanks to prompt caching, grok-code-fast-1 achieves cache hit rates above 90 percent and can handle multiple tool calls before the first output lines are visible. The model supports a wide range of programming languages, including TypeScript, Python, Java, Rust, C++, and Go. It can perform a variety of tasks, from setting up new projects to answering programming questions and targeted bug fixing."
"The new model, grok-code-fast-1, is being offered free of charge for a limited time and is available through several partners, including GitHub Copilot and Windsurf. According to xAI, the model was developed entirely from scratch with its own architecture and a training set that relies heavily on realistic pull requests and development tasks. During development, xAI worked closely with partners such as GitHub Copilot, Cursor, Cline, Roo Code, Kilo Code, opencode, and Windsurf. This allowed the model to be tested directly in agentic workflows."
"After the free introductory period, xAI has opted for competitive pricing: 20 cents per million input tokens, $1.50 per million output tokens, and 2 cents per million cached tokens. This positions the model as a relatively accessible option for everyday use. The launch is part of a broader trend in which major players are expanding their AI tools for software development."
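The quoted prices and the cited 90-percent cache hit rate can be combined into a rough per-request cost estimate. The sketch below is illustrative only: the request sizes and the assumption that cached input tokens are billed at the cached rate while the remainder is billed at the full input rate are assumptions, not details confirmed by the article.

```python
# Illustrative cost estimate using the post-introductory prices quoted above.
# The token counts and cache hit rate below are hypothetical assumptions.

INPUT_PER_M = 0.20    # USD per million (uncached) input tokens
OUTPUT_PER_M = 1.50   # USD per million output tokens
CACHED_PER_M = 0.02   # USD per million cached input tokens

def request_cost(input_tokens: int, output_tokens: int,
                 cache_hit_rate: float = 0.9) -> float:
    """Estimate the USD cost of one request, splitting input tokens into
    cached and uncached portions by an assumed cache hit rate."""
    cached = input_tokens * cache_hit_rate
    uncached = input_tokens - cached
    return (uncached * INPUT_PER_M
            + cached * CACHED_PER_M
            + output_tokens * OUTPUT_PER_M) / 1_000_000

# Example: 50k input tokens (90% served from cache) and 2k output tokens
print(f"${request_cost(50_000, 2_000):.4f}")  # prints "$0.0049"
```

At these rates, the caching discount dominates for large, repetitive prompts: in the example above, billing all 50k input tokens at the full rate would cost $0.0100 for input alone, versus $0.0019 with a 90-percent hit rate.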
xAI launched grok-code-fast-1, an agentic programming model offered free for a limited time through partners including GitHub Copilot and Windsurf. The model was developed from scratch with a proprietary architecture and a training set focused on realistic pull requests and development tasks. Partners such as GitHub Copilot, Cursor, Cline, Roo Code, Kilo Code, opencode, and Windsurf participated in testing within agentic workflows. grok-code-fast-1 uses prompt caching to achieve cache hit rates above 90 percent and can execute multiple tool calls before producing initial output. The model supports TypeScript, Python, Java, Rust, C++, and Go and performs project setup, question answering, and targeted bug fixes. Post-introductory pricing is 20 cents per million input tokens, $1.50 per million output tokens, and 2 cents per million cached tokens.
Read at Techzine Global