# Available Models

There are several ways to check which models are available in your current session:

- `px.models.list_models()`
- `px.models.list_providers()`
- `px.models.list_provider_models()`
- `px.models.get_model()`
## px.models.list_models()
You can list all available models as follows:

```python
provider_models = px.models.list_models()
for provider_model in provider_models:
    print(provider_model)
```

```
(claude, haiku)
(claude, opus)
(claude, sonnet)
(gemini, gemini-2.0-flash)
(gemini, gemini-2.0-flash-lite)
(gemini, gemini-2.5-pro-preview-03-25)
(openai, chatgpt-4o-latest)
(openai, gpt-4.1)
...
```
- Returns a list of `px.types.ProviderModelType`.
**Parameters**

| Option | Type | Default Value | Description |
|---|---|---|---|
| `model_size` | `str` | `None` | One of `'small'`, `'medium'`, `'large'`, or `'largest'`, as a string or a `px.types.ModelSizeType` enum. If provided, only models of this size are returned. `'largest'` selects the largest model of each provider. |
| `return_all` | `bool` | `False` | Previously failed models are cached by default. If set to `True`, all models are returned, including both successfully available and previously failed ones. |
| `verbose` | `bool` | `False` | If `True`, the function prints the details of how the models are filtered. |
| `clear_model_cache` | `bool` | `False` | Model results (both successful and failed) are cached by default, even without a specified cache path. Setting this option to `True` clears the cache and forces a fresh check of all models. This is handy when API keys change, provider quotas change, etc. |
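As a hedged sketch of what the `model_size` filter means, the snippet below applies size labels locally. The size assignments are hypothetical and for illustration only; in ProxAI itself you would simply call `px.models.list_models(model_size='large')`.

```python
# Hypothetical size labels for illustration only; real size assignments
# come from ProxAI, not from this table.
MODEL_SIZES = {
    ("claude", "haiku"): "small",
    ("claude", "sonnet"): "medium",
    ("claude", "opus"): "large",
    ("openai", "gpt-4.1"): "large",
}

def filter_by_size(models, size):
    """Keep only (provider, model) pairs whose labeled size matches."""
    return [m for m in models if MODEL_SIZES.get(m) == size]

print(filter_by_size(list(MODEL_SIZES), "large"))
# [('claude', 'opus'), ('openai', 'gpt-4.1')]
```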
## px.models.list_providers()
You can list all available providers as follows:

```python
providers = px.models.list_providers()
for provider in providers:
    print(provider)
```

```
claude
gemini
openai
...
```
- Returns `List[str]`.
**Parameters**

| Option | Type | Default Value | Description |
|---|---|---|---|
| `verbose` | `bool` | `False` | If `True`, the function prints the details of how the providers are generated. |
| `clear_model_cache` | `bool` | `False` | Model results (both successful and failed) are cached by default, even without a specified cache path. Setting this option to `True` clears the cache and forces a fresh check of all providers. This is handy when API keys change, provider quotas change, etc. |
## px.models.list_provider_models()
This function lists all available models for a given provider.
```python
provider_models = px.models.list_provider_models(provider='claude')
for provider_model in provider_models:
    print(provider_model)
```

```
(claude, 3-haiku)
(claude, 3-sonnet)
(claude, 3.5-sonnet)
(claude, 3.5-sonnet-v2)
(claude, haiku)
(claude, opus)
(claude, sonnet)
```
- Returns a list of `px.types.ProviderModelType`, as in the example above.
**Parameters**

| Option | Type | Default Value | Description |
|---|---|---|---|
| `model_size` | `str` | `None` | One of `'small'`, `'medium'`, `'large'`, or `'largest'`, as a string or a `px.types.ModelSizeType` enum. If provided, only models of this size are returned. `'largest'` selects the largest model of the provider. |
| `provider` | `str` | `None` | The provider to list models for. Check available providers on the ProxAI GitHub repo. |
| `verbose` | `bool` | `False` | If `True`, the function prints the details of how the models are generated. |
| `clear_model_cache` | `bool` | `False` | Model results (both successful and failed) are cached by default, even without a specified cache path. Setting this option to `True` clears the cache and forces a fresh check of all models. This is handy when API keys change, provider quotas change, etc. |
## px.models.get_model()
This function returns the model details for a given provider and model name.
```python
provider_model = px.models.get_model(provider='claude', model='3-haiku')
print(repr(provider_model))
```

```
ProviderModelType(provider=claude, model=3-haiku, provider_model_identifier=claude-3-haiku-20240307)
```
- Returns `px.types.ProviderModelType`.
**Parameters**

| Option | Type | Default Value | Description |
|---|---|---|---|
| `provider` | `str` | `None` | String value of the provider. Check available providers on the ProxAI GitHub repo. |
| `model` | `str` | `None` | String value of the model. Check available models for a given provider on the ProxAI GitHub repo. |
| `allow_non_working_model` | `bool` | `False` | If `True`, the function returns the model even if it is not working. If `False`, the function raises an error if the model is not working. |
| `verbose` | `bool` | `False` | If `True`, the function prints the details of how the models are generated. |
| `clear_model_cache` | `bool` | `False` | Model results (both successful and failed) are cached by default, even without a specified cache path. Setting this option to `True` clears the cache and forces a fresh check of all models. This is handy when API keys change, provider quotas change, etc. |
## Details of how models are checked
The available models are determined as follows:
1. Environment variables are checked for provider API keys (see Provider Integrations).
2. Cached results from previous availability checks are reused.
3. For each un-cached model, a simple request is made to check whether the model is available.
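The three steps above can be sketched as follows. This is a hypothetical illustration, not ProxAI's actual implementation; in particular, the `PROVIDER_API_KEY` environment-variable naming pattern and the `probe` callback are assumptions made for the example.

```python
import os

def check_available_models(candidates, cache, probe):
    """Hypothetical sketch of the availability check described above.

    candidates: list of (provider, model) pairs to check.
    cache: dict mapping (provider, model) -> bool from previous checks.
    probe: callable (provider, model) -> bool that makes a simple test request.
    """
    available = []
    for provider, model in candidates:
        # Step 1: skip providers whose API key is not set in the environment
        # (assumed naming pattern, e.g. CLAUDE_API_KEY).
        if not os.environ.get(f"{provider.upper()}_API_KEY"):
            continue
        key = (provider, model)
        # Step 2: reuse a previously cached result if one exists.
        if key not in cache:
            # Step 3: probe un-cached models with a simple request.
            cache[key] = probe(provider, model)
        if cache[key]:
            available.append(key)
    return available
```

This also shows why `clear_model_cache=True` matters: once a result is cached, the probe is never repeated, so a model that failed under an old API key stays marked as failed until the cache is cleared.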