
JetBrains now also supports Claude, OpenAI o1 and local AI

Claude, a favorite AI model among programmers, is now supported by the JetBrains AI Assistant, and OpenAI’s most powerful LLMs have been rolled out alongside it. Those seeking more privacy or freedom of choice can now also run the AI assistant with a local model.

Integration of the local AI functionality is done through LM Studio, which developers can activate under “Third-party AI providers” in the AI Assistant settings. This platform provides an interface for managing and running AI models on local machines, much like llama.cpp, Ollama or Jan Desktop.
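The privacy argument for local models is that prompts never leave the machine: tools like LM Studio and Ollama expose an OpenAI-compatible HTTP server on localhost, which clients can call instead of a cloud API. A minimal sketch of talking to such a server follows; the endpoint URL, port and model name are assumptions for illustration, not JetBrains’ actual integration code.

```python
import json
import urllib.request

# Assumed local endpoint: LM Studio's server defaults to port 1234 with an
# OpenAI-compatible API; adjust to match your own setup.
LOCAL_ENDPOINT = "http://localhost:1234/v1/chat/completions"

def build_chat_request(model: str, prompt: str) -> dict:
    """Build an OpenAI-style chat-completions payload for a local model."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.2,
    }

def ask_local_model(model: str, prompt: str) -> str:
    """POST the request to the local server; no data leaves the machine."""
    payload = json.dumps(build_chat_request(model, prompt)).encode("utf-8")
    req = urllib.request.Request(
        LOCAL_ENDPOINT,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]
```

Because the wire format mirrors OpenAI’s, any client built against that API can usually be pointed at a local server just by swapping the base URL.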

New AI models

The more conventional way to run AI is via APIs. JetBrains now supports Anthropic’s models, namely Claude 3.5 Sonnet and the cheaper Claude 3.5 Haiku. From OpenAI’s LLM offerings, the more powerful (and pricey) LLMs o1, o1-mini and o3-mini have been added. Since o1-mini has simply been replaced by o3-mini within ChatGPT, we can consider that option somewhat redundant.

What is clear is that Claude is already wildly popular among programmers via its web interface. That was evident from the first Anthropic Economic Index, which mapped which professions are using the AI tool in particular.

Read also: How Anthropic’s Claude is actually being used

IDE with options

The JetBrains AI Assistant, introduced in 2023, is fully integrated into the company’s IDEs. The tool can automate or facilitate various tasks, including code completion, documentation generation and code refactoring. In many ways, it is similar to GitHub Copilot, its popular counterpart.

Options for local AI will expand significantly over time, especially since the recent introduction of the open-source DeepSeek-R1 and V3 showed that impressive performance is possible via locally run LLMs. The full DeepSeek-R1, with its sizeable 671 billion parameters, does require several powerful GPUs to run at an acceptable speed, but the distilled DeepSeek variants of Llama and Qwen, with tens of billions of parameters, are far more feasible. For development teams with strict compliance requirements, this may be a starting point for domain-specific, small AI models that no longer require API contact.

Also read: JetBrains users struggle with transition to new UI