Streaming Conversation History Fragmentation Issue #2273

@gianluca-henze-parloa

Description

Problem Description

When using the Gemini Live API with ADK for streaming conversations, the conversation history becomes severely fragmented due to how streaming chunks are stored. Each individual streaming token/chunk is saved as a separate Content entry in the conversation history instead of being consolidated into meaningful conversation turns.

Example of Fragmented History

Content(parts=[Part(text=' Hey')], role='user'),
Content(parts=[Part(text=',')], role='user'), 
Content(parts=[Part(text=' how')], role='user'),
Content(parts=[Part(text=' you')], role='user'),
Content(parts=[Part(text=' do')], role='user'),
Content(parts=[Part(text='ing')], role='user'),
Content(parts=[Part(text='?')], role='user'),
Content(parts=[Part(text="I'm doing well, thank you for asking! How can I help you today?")], role='model'),
Content(parts=[Part(text=' I')], role='user'),
Content(parts=[Part(text=' would')], role='user'),
Content(parts=[Part(text=' like')], role='user'),
# ... and so on
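
For comparison, the expected behavior is one Content entry per conversation turn. Below is a minimal post-processing sketch of that consolidation; it is not part of ADK, and consolidate_history is a hypothetical helper name used only for illustration. It merges runs of consecutive same-role text chunks into a single Content entry.

# A minimal sketch, not ADK internals: collapse runs of same-role streaming
# chunks into one Content per conversation turn. consolidate_history is a
# hypothetical helper used only for illustration.
from google.genai.types import Content, Part


def consolidate_history(contents: list[Content]) -> list[Content]:
    """Merge consecutive text-only Content chunks that share a role."""
    merged: list[Content] = []
    for content in contents:
        # Concatenate the text of all parts in this chunk.
        text = "".join(part.text or "" for part in (content.parts or []))
        if merged and merged[-1].role == content.role:
            # Same speaker as the previous entry: extend that turn's text.
            previous = merged[-1].parts[0].text or ""
            merged[-1] = Content(role=content.role, parts=[Part(text=previous + text)])
        else:
            # New speaker: start a new consolidated turn.
            merged.append(Content(role=content.role, parts=[Part(text=text)]))
    return merged

Applied to the fragment above, the first seven user chunks would collapse into a single Content(parts=[Part(text=' Hey, how you doing?')], role='user'), followed by the already-consolidated model turn.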

Impact

  1. Token Waste: Massive token consumption due to fragmented history
  2. Context Degradation: Important conversation context gets lost as history grows
  3. Performance Issues: Slower response times due to excessive tokens
  4. Cost Escalation: Higher API costs due to unnecessary token usage
  5. Model Confusion: Fragmented context may impair the model's understanding of the conversation

Labels

bot triaged: [Bot] This issue is triaged by ADK bot
live: [Component] This issue is related to live, voice and video chat
