Description
When using the Python ADK with Gemini models, the `Gemini` class internally creates its own `google.genai.Client` based on environment variables or ADC. There is no way to inject a preconfigured `genai.Client` instance directly. This makes it impossible to control the underlying `genai.Client` entirely in code (e.g., to avoid reliance on environment variables, or to set `vertexai=True`, `project`, and `location` programmatically).
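For illustration, this is roughly the entire configuration surface available today (a minimal sketch; per the above, the constructor takes a model name but no client):

```python
from google.adk.models import Gemini

# All client settings are implicit: they come from environment variables
# (GOOGLE_GENAI_USE_VERTEXAI, GOOGLE_CLOUD_PROJECT, GOOGLE_CLOUD_LOCATION)
# or ADC when the model lazily constructs its own genai.Client.
model = Gemini(model="gemini-2.5-flash")  # no way to pass a genai.Client
```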
In contrast, the Java ADK’s `Gemini` wrapper accepts a `Client` object, which provides this flexibility and makes the model configuration explicit at the call site, as can be seen from the docs:
```java
// --- Example #2: using a powerful Gemini Pro model with API Key in model ---
LlmAgent agentGeminiPro =
    LlmAgent.builder()
        // Use the latest generally available Pro model identifier
        .model(new Gemini("gemini-2.5-pro-preview-03-25",
            Client.builder()
                .vertexAI(false)
                .apiKey("API_KEY") // Set the API Key (or) project/location
                .build()))
        // Or, you can also directly pass the API_KEY
        // .model(new Gemini("gemini-2.5-pro-preview-03-25", "API_KEY"))
        .name("gemini_pro_agent")
        .instruction("You are a powerful and knowledgeable Gemini assistant.")
        // ... other agent parameters
        .build();
```
Is your feature request related to a problem? Please describe.
I need more fine-grained control over the configuration of Gemini models - specifically the ability to set different locations (and other client settings) for different agents running within the same Python ADK application. Currently, model configuration is determined globally via environment variables or ADC, which makes it difficult to run agents in the same process that target different Vertex AI regions.
Describe the solution you'd like
Add support for passing an existing `google.genai.Client` instance into the `Gemini` model constructor (similar to the Java ADK API).
Example desired usage:
```python
from google import genai
from google.adk.models import Gemini
from google.adk.agents import LlmAgent

client = genai.Client(vertexai=True, project="my-project", location="us-central1")
gemini_model = Gemini(model="gemini-2.5-flash", client=client)  # or api_client=client
agent = LlmAgent(name="my_agent", model=gemini_model, instruction="Be concise.")
```
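Building on this, the same hypothetical `client=` parameter (the proposed addition, not an existing ADK API) would cover the multi-region scenario described above, with two agents in one process targeting different Vertex AI regions:

```python
from google import genai
from google.adk.models import Gemini
from google.adk.agents import LlmAgent

# Two preconfigured clients, each pinned to a different region.
us_client = genai.Client(vertexai=True, project="my-project", location="us-central1")
eu_client = genai.Client(vertexai=True, project="my-project", location="europe-west4")

# Each agent gets its own explicitly configured model; no process-global state.
us_agent = LlmAgent(
    name="us_agent",
    model=Gemini(model="gemini-2.5-flash", client=us_client),
    instruction="Be concise.",
)
eu_agent = LlmAgent(
    name="eu_agent",
    model=Gemini(model="gemini-2.5-flash", client=eu_client),
    instruction="Be concise.",
)
```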
Describe alternatives you've considered
- Environment variables: Setting `GOOGLE_GENAI_USE_VERTEXAI`, `GOOGLE_CLOUD_PROJECT`, and `GOOGLE_CLOUD_LOCATION` programmatically before importing ADK. This works, but is indirect, less explicit, and affects the entire process environment - e.g., it prevents using different locations for different models (see the sketch after this list).
- Custom `BaseLlm` wrapper: Creating a `BaseLlm` subclass that uses `google.genai.Client` directly. This is more verbose, requires reimplementing request/response mapping logic that already exists in ADK, and adds maintenance overhead.
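For reference, a minimal sketch of the environment-variable workaround and why it is process-global (the variable names are the documented ones; the rest is illustrative):

```python
import os

# Must be set before the ADK lazily builds its own genai.Client.
os.environ["GOOGLE_GENAI_USE_VERTEXAI"] = "true"
os.environ["GOOGLE_CLOUD_PROJECT"] = "my-project"
os.environ["GOOGLE_CLOUD_LOCATION"] = "us-central1"

from google.adk.models import Gemini

# Every Gemini instance in this process now resolves to us-central1;
# a second agent cannot target europe-west4 at the same time.
model = Gemini(model="gemini-2.5-flash")
```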
Additional context
Since this is already possible in the Java ADK, adding the same capability to the Python ADK would:
- Improve cross-language parity.
- Make the model configuration explicit and local to the code using it.
- Reduce reliance on environment variable management in containerized or sandboxed environments.
- Enable scenarios where multiple agents in the same application require different Vertex AI regions or other client-level settings.