
v1.56.3

Krrish Dholakia
CEO, LiteLLM
Ishaan Jaffer
CTO, LiteLLM

guardrails, logging, virtual key management, new models

info

Get a 7-day free trial for LiteLLM Enterprise here (no call needed).

New Features

✨ Log Guardrail Traces

Track guardrail failure rates and spot when a guardrail is going rogue and failing requests. Start here

Traced Guardrail Success

Traced Guardrail Failure

/guardrails/list

/guardrails/list lets clients view the available guardrails and the parameters each one supports.

curl -X GET 'http://0.0.0.0:4000/guardrails/list'

Expected response

{
  "guardrails": [
    {
      "guardrail_name": "aporia-post-guard",
      "guardrail_info": {
        "params": [
          {
            "name": "toxicity_score",
            "type": "float",
            "description": "Score between 0-1 indicating content toxicity level"
          },
          {
            "name": "pii_detection",
            "type": "boolean"
          }
        ]
      }
    }
  ]
}

✨ Guardrails with Mock LLM

Send mock_response to test guardrails without making a real LLM call. More info on mock_response here

curl -i http://localhost:4000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer sk-npnwjPQciVRok5yNZgKmFQ" \
  -d '{
    "model": "gpt-3.5-turbo",
    "messages": [
      {"role": "user", "content": "hi my email is ishaan@berri.ai"}
    ],
    "mock_response": "This is a mock response",
    "guardrails": ["aporia-pre-guard", "aporia-post-guard"]
  }'

Assign Keys to Users

You can now assign keys to users via the Proxy UI.
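For automation, the same assignment can be done against the proxy API. A minimal sketch, assuming the `/key/generate` endpoint accepts a `user_id` field (the admin key, user id, and alias below are hypothetical placeholders):

```shell
# Hypothetical values: sk-1234 (admin key), my-user-id, my-users-key.
# Verify the /key/generate parameters against the LiteLLM key management docs.
curl -X POST 'http://0.0.0.0:4000/key/generate' \
  -H 'Authorization: Bearer sk-1234' \
  -H 'Content-Type: application/json' \
  -d '{
    "user_id": "my-user-id",
    "key_alias": "my-users-key"
  }'
```

The generated key is then attributed to that user, so its spend and usage show up under them.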

New Models

  • openrouter/openai/o1
  • vertex_ai/mistral-large@2411

Fixes

v1.55.8-stable

Krrish Dholakia
CEO, LiteLLM
Ishaan Jaffer
CTO, LiteLLM

A new LiteLLM Stable release just went out. Here are 5 updates since v1.52.2-stable.

langfuse, fallbacks, new models, azure_storage

Langfuse Prompt Management

This makes it easy to run experiments, or swap a specific model (e.g. gpt-4o for gpt-4o-mini) on Langfuse, instead of changing code in your application. Start here

Control fallback prompts client-side

Claude prompts are different from OpenAI prompts.

Pass in model-specific prompts when doing fallbacks. Start here
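A sketch of what this can look like against the proxy, assuming each `fallbacks` entry can carry its own `messages` (the key and model names below are hypothetical):

```shell
# Hypothetical key and model names. The "fallbacks" entry carries its own
# "messages", so the Claude-specific prompt is only used if gpt-4o fails.
curl http://0.0.0.0:4000/v1/chat/completions \
  -H 'Content-Type: application/json' \
  -H 'Authorization: Bearer sk-1234' \
  -d '{
    "model": "gpt-4o",
    "messages": [{"role": "user", "content": "OpenAI-style prompt"}],
    "fallbacks": [{
      "model": "claude-3-5-sonnet",
      "messages": [{"role": "user", "content": "Claude-style prompt"}]
    }]
  }'
```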

New Providers / Models

✨ Azure Data Lake Storage Support

Send LLM usage data (spend, tokens) to Azure Data Lake. This makes it easy to consume usage data in other services (e.g. Databricks). Start here
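A minimal setup sketch for the proxy, assuming the callback is named `azure_storage` and the env var names below (verify both against the LiteLLM docs; the values are hypothetical):

```shell
# Assumed env var names for Azure Data Lake logging -- verify against the
# LiteLLM docs. Values here are hypothetical placeholders.
export AZURE_STORAGE_ACCOUNT_NAME="my-storage-account"
export AZURE_STORAGE_FILE_SYSTEM="litellm-logs"   # ADLS Gen2 filesystem (container)

# Enable the logger in the proxy config.
cat > config.yaml <<'EOF'
litellm_settings:
  callbacks: ["azure_storage"]
EOF
```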

Docker Run LiteLLM

docker run \
  -e STORE_MODEL_IN_DB=True \
  -p 4000:4000 \
  docker.litellm.ai/berriai/litellm:litellm_stable_release_branch-v1.55.8-stable

Get Daily Updates

LiteLLM ships new releases every day. Follow us on LinkedIn to get daily updates.