One Interface for Every AI Model

  • Intuitive, simple API
  • 100% private and open source
  • No third-party routers
[Code example: using ProxAI with different models]

LLM Connection Layer for Production, Research, MVP, Side Project, and Enterprise

[Diagram: codebases (production servers, Google Colab, different Python environments, quick servers, automated parsing) connect through ProxAI to AI providers such as DeepSeek]

A Python library that provides a unified API for multiple AI providers across production, MVP, enterprise, and research projects

How It Works

From setup to production in 5 steps

1. Install: Add ProxAI to your Python project
pip install proxai
2. Configure: Set up your AI provider API keys
export OPENAI_API_KEY="..."
export ANTHROPIC_API_KEY="..."
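Optional sanity check (plain Python, not part of the ProxAI API): confirm the keys are actually visible to your process before moving on.
import os

# Plain-Python check that the provider keys are exported in this environment
for key in ("OPENAI_API_KEY", "ANTHROPIC_API_KEY"):
    print(key, "is set" if os.environ.get(key) else "is MISSING")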
3. Code: Use any AI model with a unified API
import proxai as px
 
px.set_model(provider="openai", model="gpt-4o")
px.generate_text(prompt="What is your name?")
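The same two calls work for any configured provider; only the arguments to set_model change. A minimal sketch of switching providers, assuming an Anthropic key is configured (the model name below is illustrative):
import proxai as px

# Same unified API, different provider; the model name here is illustrative
px.set_model(provider="anthropic", model="claude-sonnet")
px.generate_text(prompt="What is your name?")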
4. Scale: ProxAI handles production concerns (see the sketch after this list)
Automatic rate limiting
Provider failover handling
Request retry logic
Response caching
Error handling & logging
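In practice your application code stays as simple as the snippet in step 3. A minimal sketch, assuming generate_text returns the response text; the try/except is only an application-level fallback, since retries, failover, caching, and logging happen inside ProxAI per the list above:
import proxai as px

px.set_model(provider="openai", model="gpt-4o")

# Rate limiting, retries, failover, and caching are handled by ProxAI;
# this try/except is only a last-resort fallback at the application level.
try:
    answer = px.generate_text(prompt="What is your name?")
except Exception as error:
    answer = f"Generation failed: {error}"

print(answer)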
5. Analyze (optional): Use ProxDash for advanced analytics

Real-time system health and uptime monitoring:

  • System Uptime: 99.98% (last 30 days)
  • Active Services: 15/15 (all operational)
  • Avg Load Time: 287ms (-12% from last week)

Why ProxAI?

The AI landscape is full of abstraction-layer products. Here's what makes ProxAI different.

|                   | ProxAI | OpenRouter         | LangChain | LiteLLM       | HuggingFace        | Pydantic AI |
|-------------------|--------|--------------------|-----------|---------------|--------------------|-------------|
| Speed             | Direct | Third Party Router | Direct    | Direct Option | Third Party Router | Direct      |
| Privacy           | Local  | Third Party Router | Local     | Local Option  | Third Party Router | Local       |
| Open Source       | Yes    | No                 | Yes       | Yes           | No                 | Yes         |
| Complexity to Use | Easy   | Easy               | Hard      | Medium        | Easy               | Easy        |
| Credit Markup Fee | No     | Yes (5%)           | No        | No            | No                 | No          |
| Local Cache       | Yes    | No                 | Yes       | Yes           | No                 | No          |
| CLI               | No†    | No                 | Yes       | Yes           | Yes                | Yes         |
| Dashboard         | Yes    | Yes                | Yes       | Yes           | No                 | Yes         |

* This table is for quick reference. Please reach out to admin@proxai.co if there are mistakes in this table.

† CLI is under development; check out our roadmap page.

No Third-Party Routers

ProxAI is a local Python library and handles all operations on your local machine. There is no need to send your queries to third-party vendors.

100% Open Source

Being fully open source ensures your data stays on your machine and is not sent to other platforms, and it lets the community inspect, support, and improve the code.

No Markup Fee

Some routers charge markup fees of up to 5.5% per query. ProxAI's business model is a free-to-use open source core with a paid dashboard for enterprises.

No Frameworks

Our core philosophy is to build the best API currently available. The ProxAI team delivers modular libraries, not frameworks.

Easy to Use

Engineering experience is important to us. We cut out the hassle and keep things simple for everyone from solo builders to structured companies.

Battle Tested

Against "Fake it until you make it"! We are delivering features after they are well tested on different use cases. No half-baked products.

Continuously Shipping

Our team is agile and continuously shipping must-have features. Check out our roadmap page.

Dashboard (Premium)

ProxAI comes with built-in experiment tracking features. Just add your ProxDash API token to your environment variables and you can track the following (a minimal setup sketch follows this list):

Pricing breakdowns - Detailed cost analysis across different AI providers
Performances and server up-times - Monitor reliability and response times
Logging, error tracking, and tracebacks - Debug issues quickly
Health monitoring - Real-time status of your AI integrations
Organizing your experiments and products - Structured project management
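A minimal setup sketch, assuming the token is read from an environment variable; the variable name below is illustrative, so use the exact name shown in your ProxDash account:
# Illustrative variable name; check your ProxDash account settings for the exact one
export PROXDASH_API_KEY="..."

With the token in place, your ProxAI calls stay exactly as in step 3, and the runs additionally show up in the ProxDash dashboard.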

Open Source

Fully transparent, MIT licensed, and built with community input. Join developers creating better AI tooling.

Star & Contribute

Fork, improve, share

GitHub →

Join Community

Get help, share ideas

Discord →
MIT Licensed
Community Driven

Ready to Start?

ProxAI is simple to use. Get your first AI response in 2 minutes.