Check for existing issues
What happened?
LiteLLM's packaged model metadata for the Vertex Qwen model
vertex_ai/qwen/qwen3-235b-a22b-instruct-2507-maas
does not include us-south1 in supported_regions.
My LiteLLM proxy config uses this model in us-south1:
- model_name: qwen3-paygo
  litellm_params:
    model: vertex_ai/qwen/qwen3-235b-a22b-instruct-2507-maas
    vertex_project: os.environ/VERTEXAI_PROJECT
    vertex_location: us-south1
    vertex_credentials: os.environ/GOOGLE_APPLICATION_CREDENTIALS
    extra_headers:
      X-Vertex-AI-LLM-Request-Type: shared
- model_name: qwen3-pt
  litellm_params:
    model: vertex_ai/qwen/qwen3-235b-a22b-instruct-2507-maas
    vertex_project: os.environ/VERTEXAI_PROJECT
    vertex_location: us-south1
    vertex_credentials: os.environ/GOOGLE_APPLICATION_CREDENTIALS
    extra_headers:
      X-Vertex-AI-LLM-Request-Type: dedicated
To work around this, I had to patch LiteLLM's local model cost map by adding us-south1 to:
vertex_ai/qwen/qwen3-235b-a22b-instruct-2507-maas.supported_regions
and then mount that patched JSON over LiteLLM's packaged backup cost map.
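For reference, the same patch can be applied without jq; this is a minimal Python sketch of what the override does (the function name and file paths are illustrative, not part of LiteLLM's API):

```python
import json

MODEL_KEY = "vertex_ai/qwen/qwen3-235b-a22b-instruct-2507-maas"

def add_region(cost_map: dict, region: str = "us-south1") -> dict:
    """Append `region` to the model's supported_regions, keeping entries unique."""
    entry = cost_map.setdefault(MODEL_KEY, {})
    regions = entry.get("supported_regions") or []
    if region not in regions:
        regions = regions + [region]
    entry["supported_regions"] = regions
    return cost_map

# Usage (paths are assumptions; point them at a copy of the packaged cost map):
# with open("model_prices_and_context_window_backup.json") as f:
#     data = json.load(f)
# with open("patched.json", "w") as f:
#     json.dump(add_region(data), f, indent=2)
```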
Expected behavior:
LiteLLM should ship the correct supported_regions metadata for this Vertex Qwen model so us-south1 works without a local cost-map override.
Steps to Reproduce
- Start LiteLLM Proxy without any local cost-map override, with a config that routes this model to us-south1:
- model_name: qwen3-paygo
  litellm_params:
    model: vertex_ai/qwen/qwen3-235b-a22b-instruct-2507-maas
    vertex_project: os.environ/VERTEXAI_PROJECT
    vertex_location: us-south1
    vertex_credentials: os.environ/GOOGLE_APPLICATION_CREDENTIALS
    extra_headers:
      X-Vertex-AI-LLM-Request-Type: shared
- model_name: qwen3-pt
  litellm_params:
    model: vertex_ai/qwen/qwen3-235b-a22b-instruct-2507-maas
    vertex_project: os.environ/VERTEXAI_PROJECT
    vertex_location: us-south1
    vertex_credentials: os.environ/GOOGLE_APPLICATION_CREDENTIALS
    extra_headers:
      X-Vertex-AI-LLM-Request-Type: dedicated
- Send a request through the proxy to one of those aliases.
- Observe that LiteLLM warns that the model does not support us-south1 and routes to global instead.
- Inspect the packaged LiteLLM cost map inside the image:
docker run --rm --entrypoint python ghcr.io/berriai/litellm:v1.81.9-stable -c 'import json; p="/usr/lib/python3.13/site-packages/litellm/model_prices_and_context_window_backup.json"; d=json.load(open(p)); print(d["vertex_ai/qwen/qwen3-235b-a22b-instruct-2507-maas"].get("supported_regions"))'
This prints:
['global']
- Apply a workaround by patching the cost map with this jq filter:
."vertex_ai/qwen/qwen3-235b-a22b-instruct-2507-maas".supported_regions |= ((. // []) + ["us-south1"] | unique)
- Mount the patched JSON over LiteLLM's packaged backup cost map and enable LITELLM_LOCAL_MODEL_COST_MAP=True.
After that, the warning goes away and the model metadata shows ["global", "us-south1"].
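To confirm the mounted file is the one actually being read, a small sanity check can be run inside the container (an illustrative helper, not a LiteLLM API; the path matches the one inspected above):

```python
import json

def check_region(path: str, model: str, region: str) -> bool:
    """Return True if `region` is listed in the model's supported_regions."""
    with open(path) as f:
        data = json.load(f)
    return region in (data.get(model, {}).get("supported_regions") or [])

# e.g.:
# check_region(
#     "/usr/lib/python3.13/site-packages/litellm/model_prices_and_context_window_backup.json",
#     "vertex_ai/qwen/qwen3-235b-a22b-instruct-2507-maas",
#     "us-south1",
# )
```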
Relevant log output
{"message": "Vertex AI model 'qwen/qwen3-235b-a22b-instruct-2507-maas' does not support region 'us-south1' (supported: ['global']). Routing to 'global'.", "level": "WARNING", "timestamp": "<redacted-timestamp>"}
Packaged LiteLLM metadata inside v1.81.9-stable:
['global']
Packaged LiteLLM metadata inside main-stable:
['global']
With the local override applied, the same key shows:
{
"vertex_ai/qwen/qwen3-235b-a22b-instruct-2507-maas": {
"supported_regions": [
"global",
"us-south1"
]
}
}
That makes the root cause clear:
- LiteLLM's built-in metadata lists only global for this model
- LiteLLM therefore logs a warning and reroutes us-south1 requests to global
- adding us-south1 to the cost map fixes it
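The observed rerouting is consistent with a check along these lines (an illustrative sketch of the behavior, not LiteLLM's actual implementation; the function name is made up):

```python
def resolve_vertex_region(requested: str, supported: list) -> str:
    """Fall back to 'global' when the requested region isn't in the metadata."""
    if requested in supported:
        return requested
    print(
        f"Vertex AI model does not support region '{requested}' "
        f"(supported: {supported}). Routing to 'global'."
    )
    return "global"

# With the packaged metadata: resolve_vertex_region("us-south1", ["global"]) -> "global"
# With the patched metadata:  resolve_vertex_region("us-south1", ["global", "us-south1"]) -> "us-south1"
```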
What part of LiteLLM is this about?
Proxy
What LiteLLM version are you on ?
v1.82.3
Twitter / LinkedIn details
No response