Generate Text
To interact with LLMs, you can use the `generate_text` function.

```python
text = px.generate_text(prompt="Hello AI world!")
```

This function makes a best effort to unify the API calls to different providers.
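To illustrate what "unifying" means here, below is a toy sketch, not ProxAI's actual implementation: the backend names and replies are made up to show the idea of one call shape dispatching to provider-specific backends.

```python
# Toy sketch only: NOT ProxAI's implementation. The backends and replies
# below are invented to show one call shape for many providers.
def generate_text(prompt, provider_model=('openai', 'gpt-4')):
    provider, model = provider_model
    backends = {
        'openai': lambda p: f"[openai/{model}] reply to: {p}",
        'claude': lambda p: f"[claude/{model}] reply to: {p}",
    }
    return backends[provider](prompt)

print(generate_text("Hello AI world!"))
# [openai/gpt-4] reply to: Hello AI world!
print(generate_text("Hello AI world!", provider_model=('claude', 'opus')))
# [claude/opus] reply to: Hello AI world!
```

In ProxAI itself the dispatch is handled for you; you only change the `provider_model` argument.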
Features
- Basic Usage
- System Prompt
- Messages (Multi-turn)
- Structured Output (Pydantic)
- Web Search
- Custom Provider Model
- Extensive Return
Check the Parameters section for all features.
Basic Usage
```python
text = px.generate_text(prompt="Hello, world!")
```

```
Hello! How can I assist you today?
```

System Prompt
Use the system parameter to set the AI’s behavior and provide instructions.
```python
text = px.generate_text(
    system="Only answer with the single digit number.",
    prompt="What is 2 + 2?",
)
```

```
4
```

Messages
Use messages for multi-turn conversations. Each message has a role (user or assistant) and content.
```python
text = px.generate_text(
    system="No matter what, you must answer with 7.",
    messages=[
        {"role": "user", "content": "Hello AI Model!"},
        {"role": "assistant", "content": "7"},
        {"role": "user", "content": "What is 2 + 2?"},
    ],
)
```

```
7
```

Structured Output (Pydantic)
Use response_format to get structured responses. You can pass a Pydantic model
class and the response will be automatically parsed and validated.
```python
from pydantic import BaseModel

class City(BaseModel):
    name: str
    country: str
    population: int

result = px.generate_text(
    prompt="What is the capital of France? Include population.",
    response_format=City,
)
print(result.name)
print(result.country)
print(result.population)
```

```
Paris
France
2161000
```

You can also use more complex nested structures:
```python
from pydantic import BaseModel
from typing import List

class Step(BaseModel):
    explanation: str
    output: str

class MathSolution(BaseModel):
    steps: List[Step]
    final_answer: str

result = px.generate_text(
    prompt="What is 25 * 4 + 10? Solve it step by step.",
    response_format=MathSolution,
)
for step in result.steps:
    print(step.explanation)
print(f"Answer: {result.final_answer}")
```

```
Step 1: Multiply 25 by 4 to get 100
Step 2: Add 10 to 100 to get 110
Answer: 110
```

Web Search
Enable web_search to allow the model to search the web for up-to-date information.
This is useful for questions about recent events or real-time data.
```python
text = px.generate_text(
    prompt="What are the latest news about AI today?",
    web_search=True,
)
```

Web search works best with models that support this feature natively (like some OpenAI and Anthropic models). ProxAI will automatically handle the capability based on the selected model.
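If you only want to enable web search for prompts that actually need fresh information, a simple keyword heuristic can decide the flag. This helper is hypothetical and not part of ProxAI:

```python
# Hypothetical heuristic (not part of ProxAI): enable web_search only when
# the prompt seems to ask about current information.
RECENCY_HINTS = ("latest", "today", "current", "recent", "news")

def wants_web_search(prompt: str) -> bool:
    lowered = prompt.lower()
    return any(hint in lowered for hint in RECENCY_HINTS)

print(wants_web_search("What are the latest news about AI today?"))  # True
print(wants_web_search("What is 2 + 2?"))  # False
```

The result can then be passed as `web_search=wants_web_search(prompt)`.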
Custom Provider Model
Set provider_model to a tuple of (provider, model) string values.
```python
text = px.generate_text(
    prompt="Hello model! What is your name and which company do you belong to?",
    provider_model=('claude', 'opus'),
)
```

```
My name is Claude and I was created by Anthropic. It's nice to meet you!
```

You can also use `px.types.ProviderModel` to specify the provider and model.
```python
provider_model = px.models.get_model(
    provider='openai',
    model='o4-mini',
)
print(provider_model)

text = px.generate_text(
    prompt="Hello model! What is your name and which company do you belong to?",
    provider_model=provider_model,
)
print(text)
```

```
('openai', 'o4-mini')
I’m ChatGPT, a language model developed and maintained by OpenAI.
```

- Check all available models on the ProxAI GitHub Repo.
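Since `provider_model` is just a `(provider, model)` pair, comparing several models is a matter of looping over tuples. Below is a sketch with the API call stubbed out; `compare_models` and the stub replies are hypothetical, and in real use `call` would wrap `px.generate_text(prompt=prompt, provider_model=pm)`:

```python
# Hypothetical helper with the API call stubbed out; in real use, `call`
# would wrap px.generate_text(prompt=prompt, provider_model=pm).
def compare_models(prompt, provider_models, call):
    return {pm: call(prompt, pm) for pm in provider_models}

results = compare_models(
    "What is your name?",
    [('openai', 'o4-mini'), ('claude', 'opus')],
    call=lambda prompt, pm: f"stub reply from {pm[0]}",
)
print(results[('claude', 'opus')])  # stub reply from claude
```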
Extensive Return
Set extensive_return to True to get more information about the API call.
This returns a px.types.LoggingRecord object.
```python
from pprint import pprint

response = px.generate_text(
    prompt="Hello model!",
    extensive_return=True,
)
pprint(response)
```

```
LoggingRecord(query_record=QueryRecord(call_type=<CallType.GENERATE_TEXT: 'GENERATE_TEXT'>,
                                       model=(<Provider.OPENAI: 'openai'>,
                                              <OpenAIModel.GPT_3_5_TURBO: 'gpt-3.5-turbo'>),
                                       prompt='Hello model!',
                                       system=None,
                                       messages=None,
                                       max_tokens=100,
                                       temperature=None,
                                       stop=None,
                                       hash_value=None),
              response_record=QueryResponseRecord(response='Hello! How can I help you today?',
                                                  error=None,
                                                  error_traceback=None,
                                                  start_utc_date=datetime.datetime(2025, 2, 13, 0, 38, 17, 573491, tzinfo=datetime.timezone.utc),
                                                  end_utc_date=datetime.datetime(2025, 2, 13, 0, 38, 17, 930739, tzinfo=datetime.timezone.utc),
                                                  local_time_offset_minute=-0.0,
                                                  response_time=datetime.timedelta(microseconds=357285),
                                                  estimated_cost=150),
              response_source=<ResponseSource.PROVIDER: 'PROVIDER'>,
              look_fail_reason=None)
```

You can also convert the response to a dictionary.
```python
from dataclasses import asdict

pprint(asdict(response))
```

```
{'look_fail_reason': None,
 'query_record': {'call_type': <CallType.GENERATE_TEXT: 'GENERATE_TEXT'>,
                  'hash_value': None,
                  'max_tokens': 1000,
                  'messages': None,
                  'prompt': 'Hello model!',
                  'provider_model': {'model': 'gpt-4',
                                     'provider': 'openai',
                                     'provider_model_identifier': 'gpt-4-0613'},
                  'stop': None,
                  'system': None,
                  'temperature': None},
 'response_record': {'end_utc_date': datetime.datetime(2025, 5, 3, 17, 16, 58, 827118, tzinfo=datetime.timezone.utc),
                     'error': None,
                     'error_traceback': None,
                     'estimated_cost': 30000,
                     'local_time_offset_minute': -0.0,
                     'response': 'Hello! How can I assist you today?',
                     'response_time': datetime.timedelta(microseconds=790699),
                     'start_utc_date': datetime.datetime(2025, 5, 3, 17, 16, 58, 36446, tzinfo=datetime.timezone.utc)},
 'response_source': <ResponseSource.PROVIDER: 'PROVIDER'>}
```

Parameters
px.generate_text() parameters:
| Option | Type | Default Value | Description |
|---|---|---|---|
| prompt | str | None | The prompt to generate text from. |
| system | str | None | System prompt to the model that will be prioritized over the user prompt. |
| messages | List[Dict[str, str]] | None | List of messages representing the history of the conversation. |
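Because `messages` is plain Python data, it is easy to sanity-check before sending. Below is a hypothetical helper, not part of ProxAI, that enforces the shape described above:

```python
# Hypothetical helper (not part of ProxAI) enforcing the messages shape
# described above: dicts with exactly 'role' and 'content' keys, where role
# is 'user' or 'assistant' and content is a string.
VALID_ROLES = {"user", "assistant"}

def validate_messages(messages):
    for i, message in enumerate(messages):
        if set(message) != {"role", "content"}:
            raise ValueError(f"message {i} must have exactly 'role' and 'content' keys")
        if message["role"] not in VALID_ROLES:
            raise ValueError(f"message {i} has unknown role {message['role']!r}")
        if not isinstance(message["content"], str):
            raise ValueError(f"message {i} content must be a string")
    return True

print(validate_messages([
    {"role": "user", "content": "Hello AI Model!"},
    {"role": "assistant", "content": "7"},
]))  # True
```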