ContextVar Token Error with exit_loop Tool #949

@YanekHay

Description

Bug Report: ContextVar ValueError When Using exit_loop Tool Inside LoopAgent


Bug Description

Using the exit_loop tool inside an agent running in a LoopAgent (e.g., a critic in a critic–reviser pattern) results in the following runtime error:

ValueError: <Token var=<ContextVar name='current_context' default={} at 0x...> at 0x...> was created in a different Context

Context

This occurs when the agent calls exit_loop() to update the session state and break the loop. The bug appears to be related to how ContextVar tokens are managed across async boundaries in LoopAgent.

The error persists even when bypassing the tool entirely: manually updating state via ctx.session.state, or returning {"exit_loop": true} from the agent and intercepting it in the outer loop, both fail the same way. This suggests that the context boundary violation lies deeper in the loop infrastructure, and no workaround currently works.
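The underlying CPython rule can be reproduced without ADK at all: a ContextVar token must be reset in the same Context that created it, and every asyncio task runs in its own copy of the context. A minimal sketch (variable names are illustrative, chosen to mirror the error message above):

```python
import asyncio
import contextvars

current_context = contextvars.ContextVar("current_context", default={})

async def set_in_task():
    # The token is created inside this task's private copy of the context
    return current_context.set({"loop": 1})

async def main():
    token = await asyncio.create_task(set_in_task())
    try:
        # Resetting it here, in the parent task's Context, raises ValueError
        current_context.reset(token)
    except ValueError as e:
        return str(e)
    return None

error = asyncio.run(main())
print(error)  # "<Token ...> was created in a different Context"
```

If LoopAgent sets the token in one task (or context copy) and resets it in another, this is exactly the failure mode observed.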


To Reproduce

Steps to reproduce the behavior:

  1. Install ADK version 1.0.0:

    pip install google-adk==1.0.0
  2. Run the example via ADK Web UI:

    adk web
  3. Use the full runnable example code below inside your agent.

  4. Observe that when the Critic agent calls exit_loop(), the program crashes with a ValueError.


Expected Behavior

The tool exit_loop should safely mutate session state and terminate the loop when called inside a LoopAgent context. The tool should not cause any context-related errors and should allow smooth termination of iterative flows.
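For reference, the documented contract is that exit_loop signals the enclosing LoopAgent to stop iterating by setting an escalate flag on the tool context, rather than raising or touching context-local state directly. A minimal pure-Python sketch of that contract (the class and function names here are illustrative stand-ins, not the ADK API):

```python
from dataclasses import dataclass, field

@dataclass
class ToolActions:
    escalate: bool = False  # set by exit_loop-style tools

@dataclass
class ToolContext:
    state: dict = field(default_factory=dict)
    actions: ToolActions = field(default_factory=ToolActions)

def exit_loop(tool_context: ToolContext) -> dict:
    # Signal the enclosing loop to stop; no exceptions, no global mutation
    tool_context.actions.escalate = True
    return {}

def run_loop(max_iterations: int, step) -> int:
    ctx = ToolContext()
    for i in range(max_iterations):
        step(ctx)
        if ctx.actions.escalate:  # honor the exit signal after each step
            return i + 1
    return max_iterations

# A step that calls exit_loop on the third iteration
counter = {"n": 0}
def step(ctx):
    counter["n"] += 1
    if counter["n"] == 3:
        exit_loop(ctx)

iterations = run_loop(10, step)
print(iterations)  # 3
```

Under this contract the loop exits cleanly after the iteration in which the tool fires, which is the behavior expected from the Critic → Reviser pattern above.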


Desktop

  • OS: Windows
  • Python version: 3.11.11
  • ADK version: 1.0.0

Additional Context

This issue makes LoopAgent workflows unstable whenever they rely on tool-based exit conditions, especially for common iterative patterns such as Critic → Reviser. The underlying cause appears to be ContextVar token management across async contexts within LoopAgent. Workarounds such as manually updating the session state or returning exit flags in the output do not resolve the problem.
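A hypothetical sketch of the fix direction: set/reset pairs for a ContextVar must stay inside the same coroutine (and therefore the same Context copy), rather than spanning a task boundary. Shown here with plain contextvars, not the ADK internals:

```python
import asyncio
import contextvars

current_context = contextvars.ContextVar("current_context", default={})

async def scoped_update(new_value: dict) -> dict:
    # set and reset happen in the same task, so the token's Context matches
    token = current_context.set(new_value)
    try:
        return dict(current_context.get())
    finally:
        current_context.reset(token)  # safe: same Context that created it

async def main():
    # The child task sees its own update; the parent's context is untouched
    inside = await asyncio.create_task(scoped_update({"exit_loop": True}))
    after = current_context.get()
    return inside, after

inside, after = asyncio.run(main())
print(inside, after)  # {'exit_loop': True} {}
```

If the loop infrastructure kept its token lifecycle scoped like this, calling exit_loop from a sub-agent could not trip the "created in a different Context" check.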


Full Runnable Code Example

import logging
from typing import AsyncGenerator
from typing_extensions import override
from google.adk.tools import exit_loop

from google.adk.agents import LlmAgent, BaseAgent, LoopAgent, SequentialAgent
from google.adk.agents.invocation_context import InvocationContext
from google.adk.events import Event, EventActions

# --- Constants ---
APP_NAME = "story_app"
USER_ID = "12345"
SESSION_ID = "123344"
GEMINI_2_FLASH = "gemini-2.0-flash"

# --- Configure Logging ---
logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)

class StoryFlowAgent(BaseAgent):
    """
    Custom agent for a story generation and refinement workflow.

    This agent orchestrates a sequence of LLM agents to generate a story,
    critique it, revise it, check grammar and tone, and potentially
    regenerate the story if the tone is negative.
    """

    story_generator: LlmAgent
    critic: LlmAgent
    reviser: LlmAgent
    grammar_check: LlmAgent
    tone_check: LlmAgent

    loop_agent: LoopAgent
    sequential_agent: SequentialAgent

    model_config = {"arbitrary_types_allowed": True}

    def __init__(
        self,
        name: str,
        story_generator: LlmAgent,
        critic: LlmAgent,
        reviser: LlmAgent,
        grammar_check: LlmAgent,
        tone_check: LlmAgent,
    ):
        loop_agent = LoopAgent(
            name="CriticReviserLoop", sub_agents=[critic, reviser], max_iterations=10
        )
        sequential_agent = SequentialAgent(
            name="PostProcessing", sub_agents=[grammar_check, tone_check]
        )

        sub_agents_list = [
            story_generator,
            loop_agent,
            sequential_agent,
        ]

        super().__init__(
            name=name,
            story_generator=story_generator,
            critic=critic,
            reviser=reviser,
            grammar_check=grammar_check,
            tone_check=tone_check,
            loop_agent=loop_agent,
            sequential_agent=sequential_agent,
            sub_agents=sub_agents_list,
        )

    @override
    async def _run_async_impl(
        self, ctx: InvocationContext
    ) -> AsyncGenerator[Event, None]:
        import time
        state_change = {"topic": ctx.user_content}
        action_with_state_update = EventActions(state_delta=state_change)
        system_event = Event(
            invocation_id="section_classification_update",
            author="system",
            actions=action_with_state_update,
            timestamp=time.time(),
        )
        await ctx.session_service.append_event(ctx.session, system_event)

        logger.info(f"[{self.name}] Starting story generation workflow.")

        # 1. Initial Story Generation
        logger.info(f"[{self.name}] Running StoryGenerator...")
        async for event in self.story_generator.run_async(ctx):
            logger.info(f"[{self.name}] Event from StoryGenerator: {event.model_dump_json(indent=2, exclude_none=True)}")
            yield event

        if "current_story" not in ctx.session.state or not ctx.session.state["current_story"]:
            logger.error(f"[{self.name}] Failed to generate initial story. Aborting workflow.")
            return

        logger.info(f"[{self.name}] Story state after generator: {ctx.session.state.get('current_story')}")

        # 2. Critic-Reviser Loop
        logger.info(f"[{self.name}] Running CriticReviserLoop...")
        async for event in self.loop_agent.run_async(ctx):
            logger.info(f"[{self.name}] Event from CriticReviserLoop: {event.model_dump_json(indent=2, exclude_none=True)}")
            yield event

        logger.info(f"[{self.name}] Story state after loop: {ctx.session.state.get('current_story')}")

        # 3. Sequential Post-Processing (Grammar and Tone Check)
        logger.info(f"[{self.name}] Running PostProcessing...")
        async for event in self.sequential_agent.run_async(ctx):
            logger.info(f"[{self.name}] Event from PostProcessing: {event.model_dump_json(indent=2, exclude_none=True)}")
            yield event

        # 4. Tone-Based Conditional Logic
        tone_check_result = ctx.session.state.get("tone_check_result")
        logger.info(f"[{self.name}] Tone check result: {tone_check_result}")

        if tone_check_result == "negative":
            logger.info(f"[{self.name}] Tone is negative. Regenerating story...")
            async for event in self.story_generator.run_async(ctx):
                logger.info(f"[{self.name}] Event from StoryGenerator (Regen): {event.model_dump_json(indent=2, exclude_none=True)}")
                yield event
        else:
            logger.info(f"[{self.name}] Tone is not negative. Keeping current story.")

        logger.info(f"[{self.name}] Workflow finished.")


# --- Define the individual LLM agents ---

story_generator = LlmAgent(
    name="StoryGenerator",
    model=GEMINI_2_FLASH,
    instruction="""You are a story writer. Write a short story (around 100 words) about a cat,
based on the topic provided in session state with key 'topic'""",
    input_schema=None,
    output_key="current_story",
)

critic = LlmAgent(
    name="Critic",
    model=GEMINI_2_FLASH,
    instruction="""You are a story critic. Review the story provided in
session state with key 'current_story'. Provide 1-2 sentences of constructive criticism
on how to improve it. Focus on plot or character. Use the 'exit_loop' tool to stop the loop if the story is good enough.""",
    input_schema=None,
    output_key="criticism",
    tools=[exit_loop],
)

reviser = LlmAgent(
    name="Reviser",
    model=GEMINI_2_FLASH,
    instruction="""You are a story reviser. Revise the story provided in
session state with key 'current_story', based on the criticism in
session state with key 'criticism'. Output only the revised story.""",
    input_schema=None,
    output_key="current_story",
)

grammar_check = LlmAgent(
    name="GrammarCheck",
    model=GEMINI_2_FLASH,
    instruction="""You are a grammar checker. Check the grammar of the story
provided in session state with key 'current_story'. Output only the suggested
corrections as a list, or output 'Grammar is good!' if there are no errors.""",
    input_schema=None,
    output_key="grammar_suggestions",
)

tone_check = LlmAgent(
    name="ToneCheck",
    model=GEMINI_2_FLASH,
    instruction="""You are a tone analyzer. Analyze the tone of the story
provided in session state with key 'current_story'. Output only one word: 'positive' if
the tone is generally positive, 'negative' if the tone is generally negative, or 'neutral'
otherwise.""",
    input_schema=None,
    output_key="tone_check_result",
)

# --- Create the custom agent instance ---

story_flow_agent = StoryFlowAgent(
    name="StoryFlowAgent",
    story_generator=story_generator,
    critic=critic,
    reviser=reviser,
    grammar_check=grammar_check,
    tone_check=tone_check,
)

root_agent = story_flow_agent

Labels

core — [Component] This issue is related to the core interface and implementation