I am not sure if it's possible, but I don't want the LLM to read the file content. If it does read the file, the content is sometimes truncated, and if the full content is sent, it's going to be costly. Is there a way to not send the output of a tool (MCP) to the LLM?
Replies: 1 comment
Response from ADK Answering Agent (experimental, answer may be inaccurate)

TLDR: You can use an `after_tool_callback` to intercept the result of the `read_file` tool and prevent its content from being sent to the LLM. This allows the tool to run without incurring the cost or truncation issues of passing large amounts of data to the model.

Hello! That's a great question. You can achieve this by using callbacks, which allow you to hook into the execution of tools and modify their behavior. For your specific use case, the `after_tool_callback` is the most suitable solution. It runs after a tool has executed but before its output is sent to the LLM. You can inspect the result and decide what to send on to the model instead.

Here is an example of how you can implement this:

```python
from adk.core import Message
from adk.llms import LLM
from adk.mcp import MultiConversationalProcess
from adk.tools import Tool

def after_tool_callback(
    tool: Tool, args: dict, result: dict | str
) -> dict | str:
    """
    This callback is executed after a tool runs.
    If the tool is 'read_file', it replaces the file content with a summary message.
    """
    if tool.name == "read_file":
        # The tool has run, but we intercept the result.
        # Instead of sending the file content to the LLM,
        # we send this dictionary.
        return {
            "status": "success",
            "message": "File has been read, but content is not displayed.",
            "filename": args.get("path"),  # Assuming 'path' is an arg to read_file
        }
    # For any other tool, return the result as is.
    return result


# When creating your MCP, add the callback:
my_mcp = MultiConversationalProcess(
llm=my_llm,
tools=[read_file_tool, write_file_tool],
after_tool_callback=after_tool_callback,
)
```

In this example, `after_tool_callback` checks which tool has just run. When it is `read_file`, the callback discards the raw file content and returns a small status dictionary in its place; for every other tool, the original result is returned unchanged.
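A quick way to see what the model would actually receive is to call the callback directly. The sketch below reuses `after_tool_callback` from the example above; `FakeTool`, the `path` value, and the sample results are hypothetical stand-ins, purely for illustration.

```python
from dataclasses import dataclass


# Hypothetical stand-in for an ADK tool object; the callback only reads `.name`.
@dataclass
class FakeTool:
    name: str


raw_result = "... thousands of lines of file content ..."

# The read_file result is intercepted and replaced with a small status dict.
print(after_tool_callback(FakeTool(name="read_file"), {"path": "data/report.txt"}, raw_result))
# {'status': 'success',
#  'message': 'File has been read, but content is not displayed.',
#  'filename': 'data/report.txt'}

# Any other tool's result passes through untouched.
print(after_tool_callback(FakeTool(name="write_file"), {"path": "out.txt"}, {"written": True}))
# {'written': True}
```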
This approach ensures that the file content itself is never sent to the LLM, so you avoid both the cost and the truncation issues you mentioned. You can find more detailed information about callbacks and their capabilities in the official documentation. [1]

[1] ADK Callbacks Documentation