AI chat
AI chat is the main entry point for interacting with the LLMs supported by AI Assistant. Here, you can have a conversation with the language model, ask questions about your project, or iterate on a task.
In general, the interaction comes down to the following steps:
Select a chat mode
AI chat can operate in two modes: Chat for everyday questions, and Agent for advanced development tasks.
Select an AI model
Choose a cloud-based model from the predefined list, or select a local model if one is set up. The chosen model will be used to process your requests.
Add context to your request
Provide information relevant to your request. Add files, folders, images, symbols, or other elements that can serve as context for your query.
Process the response
AI Assistant can answer questions, generate code snippets and terminal commands, and edit files. Each suggestion can be processed individually, as you see fit.
Start a new chat
To start a chat with AI Assistant, click AI Chat on the right toolbar (in DataGrip, click More tool windows in the header and select AI Assistant).

This opens the AI Chat tool window that consists of the following elements:

Chat mode selector – switch between Chat for quick conversations and Agent for complex tasks such as multi-file edits.
Model picker – choose from the list of supported models, including local ones, if they are set up.
Attachments – add files, folders, images, symbols, or other elements that can serve as context for your request.
Chat history and view settings – review, rename, or delete previous conversations, and configure the chat window layout.
Use AI Chat to ask questions about your codebase, request suggestions, generate code snippets, edit files, and more.
Select the chat mode
AI Chat can operate in two modes: a general-purpose chat for answering questions, or a specialized agent that helps with more complex development tasks.
To select the chat mode:
In the chat, click the chat mode selector and select the mode from the list.

Chat mode
Chat mode is used to ask general or project-related questions. In this mode, no changes are made to files – you can only request information, clarifications, or suggestions. Any generated code snippets must be reviewed and applied manually.
By default, AI Assistant automatically gathers the context it needs to provide an answer. If you prefer to add the context manually, you can disable this behavior by turning off the Enable Codebase Mode setting.

After that, you can add the relevant information manually via the Add attachment button or using @ references.
Agent mode
Agent mode is intended for more complex development tasks. Agents can help implement fixes, refactor code, generate tests, and more. Suggested changes can be introduced to multiple files, with the ability to review them before applying.
Currently, AI Assistant supports the following agents:
Junie by JetBrains
Junie is an AI coding agent whose primary task is to autonomously plan and execute complex, multistep actions based on your prompt. It can introduce large-scale edits to your project and run tests or terminal commands, reporting its progress as it completes the task.
Install Junie
To install and use Junie:
In the chat, click the chat mode selector and select Junie by JetBrains from the list.

Do either of the following to install the plugin:
Click Install and open in the notification.

Type your prompt, press Enter, and then click Accept.

Once installed, you will be redirected to the Junie tool window, where you can continue your work.
Junie's features
For a detailed description of the Junie feature set and its usage, refer to the Junie documentation.
Claude Agent
Claude Agent is a third-party coding agent by Anthropic integrated with AI Assistant. It understands your codebase and can plan and execute development tasks, interacting with your environment by using tools, running commands, and analyzing their results to complete complex programming workflows.
The main benefit of this integration is that Claude Agent, besides its own tools, can also use tools provided by JetBrains MCP Server.
Download Claude Agent
Initially, Claude Agent is not installed in AI Assistant. To install it:
In the chat, click the chat mode selector and select Claude Agent from the list.

Type your question in the chat input field and submit it. This will trigger the installation.
Wait until the installation is complete.
Approve operations
By default, Claude Agent requests your permission to run suggested bash commands, make file edits, or use external tools. In this case, you can either approve or skip the operation.

Brave mode
You can allow Claude Agent to execute commands or modify files without asking for your confirmation. To enable this mode, click Brave in the chat input field.

Plan mode
Claude Agent can analyze your codebase and generate multistep implementation plans before making any changes to your files. The agent operates in read-only mode, collecting the required context and proposing structured code modifications. To enable this mode, click Plan in the chat input field.

After processing your request, Claude Agent prepares a detailed plan for implementing the changes. You must review and approve this plan to verify that it meets your requirements before any modifications are applied to the codebase.
Quick Edit
Quick Edit is a lightweight AI Assistant agent designed to make small, focused adjustments to one or more files.
This agent can break a task into steps and execute them according to the plan, gather context automatically, and use internal MCP tools. You can also select which model will handle requests for this agent.

Select a model
Different models have different capabilities, so you may want to switch models depending on your task. AI Assistant allows you to choose from a variety of cloud-based LLMs or connect to a local LLM.
Select a cloud-based LLM
To select a cloud-based model:
In the chat, click the button next to the model's name. Select the desired model from the list.

If you are unsure which model to choose, you can set it to Auto. AI Assistant will automatically select the model that offers the best balance between performance and cost.
Connect AI Chat to your local LLM
Before selecting a local model, you must connect to the third-party provider installed on your machine:
Go to .
In the Third-party AI providers section, select your LLM provider, specify the URL where the provider is accessible, and apply changes.

Once you have configured a third-party provider, the installed local models become available for use in the chat. They are listed under the Local Models section.

Select the desired model from the list.
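Before pointing AI Assistant at a local provider, it can help to confirm that the provider is actually running at the URL you plan to specify. As an illustrative sketch only, assuming the provider is Ollama listening at its default URL http://localhost:11434, you can query its /api/tags endpoint to list the installed models:

```python
import json
import urllib.error
import urllib.request

def list_local_models(base_url="http://localhost:11434"):
    """Return the names of models served by a local Ollama instance.

    Queries Ollama's /api/tags endpoint; returns an empty list if the
    server is not running, unreachable, or returns unexpected data.
    """
    try:
        with urllib.request.urlopen(f"{base_url}/api/tags", timeout=5) as resp:
            data = json.load(resp)
        return [model["name"] for model in data.get("models", [])]
    except (OSError, ValueError):
        # OSError covers connection failures (urllib.error.URLError is a
        # subclass); ValueError covers malformed JSON in the response.
        return []

if __name__ == "__main__":
    models = list_local_models()
    if models:
        print("Local models available:", ", ".join(models))
    else:
        print("No local models found - is the provider running?")
```

If the list is empty, start the provider (or check the URL) before configuring it in the IDE settings; other providers expose different endpoints, so adjust accordingly.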
Add and manage context
Attaching the right context to your request helps AI Assistant provide more accurate and relevant responses. You can add files, folders, images, symbols, commits, or other items to give AI Assistant additional information related to your question.
Add context
Context can be added in a couple of ways: either using @ references or via the Add attachment button.
Type @ in the chat input field, choose the relevant category, and select the item you want to add.
Available categories:
@thisFile – refers to the currently open file.
@selection – refers to a piece of code that is currently selected in the editor.
@projectStructure – refers to the structure of the project displayed in the Project tool window.
@localChanges – refers to the uncommitted changes.
@file: – invokes a popup with a selection of files from the current project. You can select the necessary file or image from the popup or type the name of the file (for example, @file:Foo.md or @file:img.png).
@folder: – refers to a folder in the current project. The selected folder, along with all its contents, is added as context to the prompt.
@rule: – adds a project rule to the prompt. You can either select a rule from the invoked popup or type the rule name manually.
@dbObject: – refers to a database object such as a schema or table. For example, you can attach a database schema to your request to improve the quality of generated SQL queries.
@commit: – adds a commit reference to the prompt. You can either select a commit from the invoked popup or type the commit hash manually.
@symbol: – adds a symbol to the prompt (for example, @symbol:FieldName).
@jupyter: – for PyCharm and DataGrip, adds a Jupyter variable to the prompt (for example, @jupyter:df).
As an alternative, you can select the relevant context by clicking the Add attachment button and selecting an item from the list. This method also lets you add context from the UI, which cannot be done with an @ reference.
Either way, the selected item will be attached to your request as context. Below you can find detailed instructions on adding specific types of context to your query.
Add files or folders to context
Adding files and folders to the context gives AI Assistant access to relevant code and project structure, helping it understand dependencies and provide more accurate, context-aware answers.
To add a file or folder to the context:
In the chat, click Add attachment.
Select the Files and Folders option from the menu and specify the file or folder you want to add.

Type your question in the chat and submit the query.
AI Assistant will use the attached file or folder to collect additional context when providing an answer.
Add images to context
AI Assistant can extract relevant information from images and use it as context when processing your requests. It can read code snippets from screenshots, analyze error messages, or interpret other visual context.
To add an image to your request:
In the chat, select a model that supports image processing. Such models are marked with the corresponding icon.

Click Add attachment.
Select the Add Image option from the menu and specify the image you want to add. If needed, you can attach multiple images.

Type your question in the chat and submit the query.
AI Assistant will process the image and extract relevant information needed to generate a reply.
The extracted code snippets can then be further processed as needed.
Add context from UI
When asking questions in the chat, you can add context to your query directly from a UI element. It can be a terminal, tool window, console, etc. For example, you can attach a build log from the console to ask why your build failed.
In the chat, click Add attachment.
Select the Add context from UI option from the menu.

Select the UI element that contains data that you want to add to the context.
Type your question in the chat and submit the query.
AI Assistant will consider the added context when generating the response.
Attach database object
Available in: DataGrip and IDEs with Database Tools and SQL plugin starting from IDE version 2025.2
You can attach a specific database object to your request in AI chat to provide the LLM with additional context. To do this:
In the chat, type @, then start typing or select dbObject:.
From the list of database objects that appears, select the one you want to attach.

You can see which object was attached to your message and navigate to it by clicking the corresponding attachment in the chat.
Type your question in the chat and submit the query.
Attach selection as context
Sometimes, it is necessary to explain a specific part of the code, a runtime warning, terminal output, or other results shown in various tool windows while working with your code. AI Assistant allows you to select this content and add it to the chat as context for your request.
To get an explanation:
Select the content you want explained. This can be a code snippet from the editor, a runtime error, terminal output, or other console messages shown in a corresponding tool window.

The selection is automatically added to the chat as context.
In the chat, ask AI Assistant to explain the selection.
Review attachments
You can review any attachment by clicking it. The item will be opened in a separate window.
If the request was already sent, you can find the attachments that were added to it by clicking the corresponding button.

The attachments provided by AI Assistant in the answer are always shown, but can be hidden if needed.
Set a message trimming threshold
Each language model has a context window – the maximum amount of context it can process at once. If this limit is exceeded, the model may produce errors or incomplete responses, and earlier parts of the conversation may be discarded.

To ensure your requests stay within the model's capacity, you can configure a message trimming threshold. If this threshold is exceeded, AI Assistant starts prioritizing smaller files and extracting key content from larger ones to optimize the amount of context sent to the model.
To set a message trimming threshold:
Go to .
Alternatively, hover over the trimmed attachment, marked with the corresponding icon, and click Adjust threshold.

In the Message Trimming Threshold section, select a value for the Trim message if it exceeds % of a model context window setting.

Click OK to save changes.
As a result, when your message exceeds the specified threshold, AI Assistant trims the attachments to ensure the model can process the request. The trimmed content is marked with the corresponding icon.
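As a rough illustration only (AI Assistant's actual trimming heuristics are internal and not documented here), the threshold check can be thought of along these lines, using a crude characters-per-token estimate:

```python
def exceeds_trimming_threshold(message_chars: int,
                               context_window_tokens: int,
                               threshold_percent: int = 90) -> bool:
    """Illustrative sketch: estimate the message size in tokens
    (~4 characters per token is a common rough heuristic) and compare
    it against a percentage of the model's context window."""
    estimated_tokens = message_chars / 4
    return estimated_tokens > context_window_tokens * threshold_percent / 100

# A ~100k-character message against an 8k-token context window
# exceeds a 90% threshold, so attachments would be trimmed.
print(exceeds_trimming_threshold(100_000, 8_000))   # True
print(exceeds_trimming_threshold(1_000, 8_000))     # False
```

Lowering the threshold makes trimming kick in earlier, which keeps requests safely within smaller context windows at the cost of sending less of each attachment.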

Use commands
Commands work as shortcuts for specific actions, allowing you to save time when typing your query. You can use them in combination with @ references.
AI Assistant supports the following / commands:
/explain – explains a mentioned entity.
/refactor – suggests refactoring for the code selected in the editor.
/docs – searches the IDE documentation for information on the specified topic. If applicable, AI Assistant will provide a link to the corresponding setting or documentation page.
/web – searches for information on the internet. AI Assistant will provide an answer and attach a set of relevant links that were used to retrieve the information.
Process the response
AI Assistant's responses can contain code snippets, terminal commands, edit suggestions, or changes to single or multiple files. Depending on the selected chat mode, the available processing options may differ.
For example, in Chat mode, you can process the suggestions using the tools in the top-right corner of the code snippet:

Apply – applies the suggestion to the currently open file. This action updates the entire file, adjusting relevant code to integrate the updates.
Copy – copies the code snippet. You can then paste it manually where required.
Insert at caret – inserts the generated code snippet, or a selected fragment of it, into the file open in the editor at the caret position.
Create file – creates a separate file with the AI-generated code.
Run – runs the generated terminal command. In PyCharm, this button is also used to run the generated code snippet in the Python console, separately from the rest of the project.
In Agent mode, suggested changes are often more complex and may affect multiple files. To help you review them, AI Assistant provides a diff view where you can examine each change before accepting it.

For instructions on how to process changes to multiple files, refer to Apply changes to multiple files.
Apply a suggestion to the current file
Code snippets generated by AI Assistant in the Chat mode can be applied to the currently open file. The changes are made across the entire file, with relevant code adjusted to integrate the updates.
To apply the suggestion:
Locate the code snippet that you want to apply.
Click the Apply button.

In the editor, review the changes using the Next Change and Previous Change buttons.

When you are ready to apply the changes, click Accept All. Otherwise, click Discard All to reject the changes.
Apply changes to multiple files
When working in Agent mode, AI Assistant can suggest changes to multiple files across the project. You can review the affected files and apply the changes directly in the chat.
To apply changes to multiple files:
Review the suggested changes. Along with an explanation of the issue, AI Assistant displays a list of affected files and lets you choose how to proceed:

Discard All – click to discard the proposed changes.
Accept All – click to apply changes to all affected files.
Show Changes in Toolbar – click to display the list of affected files in a separate toolbar.

Show Diff – click to open the diff viewer for the selected file. The diff viewer visually compares file versions, helping you understand what has changed.

Create Patch – click to create a .patch file containing the changes. You can apply this file to your sources later.
Accept – click to apply changes to the selected file.
Discard – click to discard changes proposed for the selected file.
Group By – click to select how you want to group the modified files – by directory or module.
Expand All – click to expand all nodes in the file tree.
Collapse All – click to collapse all nodes in the file tree.
Process the changes as needed.
Regenerate the response
If you do not like the answer provided by AI Assistant, click Regenerate this response at the end of the response to generate a new one.

Review chat history
AI Assistant keeps chat history separately for each project across IDE sessions. You can find the saved chats in the Chat History list.

Chat names are generated automatically and contain a summary of the initial query. Right-click a chat's name to rename it or delete it from the list. To search for a particular chat name, press Ctrl+F.
Besides searching for a specific chat, you can also search within a chat instance. To revisit a specific part of the conversation:
In the chat instance, press Ctrl+F. Alternatively, click the options menu and select Find in Chat.
In the search field, type your query. AI Assistant will highlight all occurrences of the specified text in the chat.
Use the navigation buttons to go to the next or previous occurrence.

Customize chat
You can adjust how AI Chat behaves to fit your preferences.
Change the chat response language
You can instruct AI Assistant to provide responses in a specific language.
Press Ctrl+Alt+S to open settings and then select .
In the Natural Language section, enable the Receive AI Assistant chat responses in a custom language setting.
In the text field, specify the language in which you want to receive chat responses.

Click Apply.
After that, AI Assistant will use the specified language in its responses.