ProxAI v1.0.0 - First Stable Release
We’re excited to announce the first stable release of ProxAI!
Version: 1.0.0
Release Date: May 7, 2025 (tentative)
Overview
ProxAI v1.0.0 marks our first stable release. It provides a unified interface for interacting with various AI providers through a clean, consistent Python API, allowing developers to switch between providers such as OpenAI, Anthropic Claude, and Google Gemini.
Key Features
Unified API
- Cross-Provider Consistency: Interact with any supported AI model using the same API
- Cross-Model Features: Best-effort support for common text generation features across all models (message history, system prompt, temperature, max tokens, stop sequences)
- Strict Feature Control: Optionally fail fast when a model does not support a requested feature (see the sketch after this list)
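As a rough illustration of the unified call, the sketch below passes a few of these options to px.generate_text. Only prompt and provider_model are taken from the Usage example further down; the other keyword names (system, temperature, max_tokens, stop) and the provider_model value are assumptions for illustration, not the authoritative signature:
import proxai as px

# Minimal sketch; keyword names other than `prompt` and `provider_model`
# are assumed for illustration and may differ in the actual API.
answer = px.generate_text(
    prompt='Summarize this release in one sentence.',
    system='You are a concise technical writer.',  # assumed keyword
    temperature=0.2,                               # assumed keyword
    max_tokens=128,                                # assumed keyword
    stop=['\n\n'],                                 # assumed keyword
    provider_model=('openai', 'gpt-4o'))           # assumed identifier format
print(answer)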
Provider Integrations
- Major Providers: Connectors for Gemini, OpenAI, Claude, Grok, DeepSeek, Mistral, Cohere, Databricks, and HuggingFace
- List Providers: Automatically detect the providers available in your session
- List Models: Automatically detect the models available for each provider
- Model Size Control: Filter models by size ("small", "medium", "large", "largest") within each provider (see the sketch after this list)
- Add More Providers: Add or update providers without code changes, simply by setting API keys
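The listing call below also appears in the Usage example further down; the model_size argument, however, is an assumption used only to illustrate size filtering and is not a confirmed parameter name:
import proxai as px

# List every model reachable with the API keys in your environment.
provider_models = px.models.list_models()

# Hypothetical size filter; the `model_size` keyword is assumed for illustration.
large_models = px.models.list_models(model_size='large')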
Check Health
- Check Health: Get a health summary of all providers and models (see the sketch after this list)
- ProxDash: View health reports on the ProxDash website to debug issues
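A health check could look roughly like the sketch below; the px.check_health() name and its return value are assumptions, not a confirmed API:
import proxai as px

# Hypothetical sketch: the function name and return shape are assumed.
health_summary = px.check_health()  # assumed to summarize provider/model availability
print(health_summary)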
Error Handling
- Error Tracking: Robust error tracking with details recorded on each query
- Error Control: Choose whether to skip failing queries or stop on the first error (see the sketch after this list)
- ProxDash: Track errors on the ProxDash website
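The "skip versus stop" choice can be sketched with plain Python exception handling around px.generate_text; any ProxAI-specific option for this behavior is not shown here because its exact name is not confirmed:
import proxai as px

provider_models = px.models.list_models()
for provider_model in provider_models:
    try:
        answer = px.generate_text(
            prompt='Hello model! Which company built you?',
            provider_model=provider_model)
        print(f'{provider_model}: {answer}')
    except Exception as error:
        # Skip the failing model instead of stopping the whole run.
        print(f'{provider_model}: failed with {error!r}')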
Caching System
- Query Caching: Cache query results to reduce API calls and improve response times
- Model Caching: Cache the list of available models to speed up experiments
- Extensive Configuration: Configure caching behavior with options such as unique response limit, cache size, and cache expiration time (see the sketch after this list)
- ProxDash: Keep track of cache hits/misses, time saved, money saved, and more
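The entry point and option names below are assumptions chosen to illustrate the kind of knobs the caching system exposes (cache location, unique response limit, expiration); they are not the confirmed configuration API:
import proxai as px

# Hypothetical sketch: the `px.connect` entry point and every option name
# here are assumed for illustration only.
px.connect(
    cache_path='/tmp/proxai_cache',     # assumed: where cached queries are stored
    cache_options={
        'unique_response_limit': 1,     # assumed: distinct responses kept per query
        'duration': 60 * 60 * 24,       # assumed: cache expiration in seconds
    })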
Cost Estimation
- Cost Estimation: Track estimated costs across different providers and models
- ProxDash: Keep track of cost estimates: daily, weekly, monthly, per provider, per model, etc.
Logging System
- Logging System: Detailed logging for debugging and monitoring
- Hide Sensitive Information: Mask sensitive information such as prompts and responses (see the sketch after this list)
- ProxDash: Keep track of logging information with different privacy settings
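As with caching, the names below are assumptions meant to show what a privacy-aware logging setup might look like; they are not the confirmed API:
import proxai as px

# Hypothetical sketch: the entry point and option names are assumed for illustration.
px.connect(
    logging_path='/tmp/proxai_logs',       # assumed: where logs are written
    logging_options={
        'hide_sensitive_content': True,    # assumed: mask prompts and responses in logs
    })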
Usage
Install ProxAI: Install the stable version from PyPI:
pip install proxai
Add Provider API Keys: Export your API keys as environment variables:
$ export OPENAI_API_KEY="sk-..."
$ export ANTHROPIC_API_KEY="sk-..."
...
Example Usage:
import proxai as px

provider_models = px.models.list_models()
for provider_model in provider_models:
    answer = px.generate_text(
        prompt='Hello model! Which company built you?',
        provider_model=provider_model)
    print(f'{provider_model}: {answer}')
Check the Documentation: Refer to the ProxAI Documentation for more details.
Breaking Changes
As this is the first stable release, there are no breaking changes from previous stable versions. However, if you were using the pre-release versions, note the following changes:
- The API structure has been stabilized and may differ from pre-release versions
- Configuration format has been standardized
- Some parameter names and return types have changed
Bug Fixes
- Fixed multiprocessing issues for different platforms
- Improved model connection stability
- Improved model size detection
- More robust error handling
- Expanded test coverage and documentation
Known Limitations
- Thinking models are not supported yet
- Function calling and tool use features are coming in future releases
- Multi-modal support (images, audio) is planned for future versions
The ProxAI Development Team