/audio/transcriptions

Overview

Feature | Supported | Notes
Cost Tracking | ✅ | Works with all supported models
Logging | ✅ | Works across all integrations
Fallbacks | ✅ | Works between supported models
Load Balancing | ✅ | Works between supported models
Supported Providers | | openai, azure, vertex_ai, gemini, deepgram, groq, fireworks_ai

Quick Start

Python

from openai import OpenAI

client = OpenAI(
    api_key="YOUR_API_KEY",
    base_url="https://api.haimaker.ai/v1"
)

audio_file = open("speech.mp3", "rb")
transcript = client.audio.transcriptions.create(
    model="openai/whisper-1",
    file=audio_file
)

print(transcript.text)

cURL

curl https://api.haimaker.ai/v1/audio/transcriptions \
-H "Authorization: Bearer YOUR_API_KEY" \
-F file=@"speech.mp3" \
-F model="openai/whisper-1"
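
Optional Parameters

The endpoint accepts the optional fields defined by the OpenAI transcription API (for example language, prompt, response_format, and temperature). The sketch below assumes these are passed through to the underlying provider unchanged; support for individual parameters can vary by model.

from openai import OpenAI

client = OpenAI(
    api_key="YOUR_API_KEY",
    base_url="https://api.haimaker.ai/v1"
)

audio_file = open("speech.mp3", "rb")
transcript = client.audio.transcriptions.create(
    model="openai/whisper-1",
    file=audio_file,
    language="en",                    # ISO-639-1 hint for the spoken language
    response_format="verbose_json",   # include segment-level detail and timestamps
    temperature=0.0                   # lower values give more deterministic output
)

print(transcript.text)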

Supported Providers

Transcription requests can be routed to any of the supported providers: openai, azure, vertex_ai, gemini, deepgram, groq, and fireworks_ai.

Using Different Models

OpenAI Whisper

from openai import OpenAI

client = OpenAI(
    api_key="YOUR_API_KEY",
    base_url="https://api.haimaker.ai/v1"
)

audio_file = open("speech.mp3", "rb")
transcript = client.audio.transcriptions.create(
    model="openai/whisper-1",
    file=audio_file
)

Groq Whisper

from openai import OpenAI

client = OpenAI(
    api_key="YOUR_API_KEY",
    base_url="https://api.haimaker.ai/v1"
)

audio_file = open("speech.mp3", "rb")
transcript = client.audio.transcriptions.create(
    model="groq/whisper-large-v3",
    file=audio_file
)
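
Other Providers

The remaining providers follow the same pattern: prefix the provider name to the model identifier. The example below uses a Deepgram model as an illustration; the exact model name ("deepgram/nova-2") is an assumption and should be checked against the provider's available models.

from openai import OpenAI

client = OpenAI(
    api_key="YOUR_API_KEY",
    base_url="https://api.haimaker.ai/v1"
)

audio_file = open("speech.mp3", "rb")
# "deepgram/nova-2" is illustrative; substitute a model the provider actually exposes.
transcript = client.audio.transcriptions.create(
    model="deepgram/nova-2",
    file=audio_file
)

print(transcript.text)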

Fallbacks

You can configure fallbacks for audio transcription to automatically retry with different models if the primary model fails.

from openai import OpenAI

client = OpenAI(
    api_key="YOUR_API_KEY",
    base_url="https://api.haimaker.ai/v1"
)

audio_file = open("speech.mp3", "rb")
transcript = client.audio.transcriptions.create(
    model="groq/whisper-large-v3",
    file=audio_file,
    extra_body={
        "fallbacks": ["openai/whisper-1"]
    }
)

cURL with Fallbacks

curl https://api.haimaker.ai/v1/audio/transcriptions \
-H "Authorization: Bearer YOUR_API_KEY" \
-F file=@"speech.mp3" \
-F model="groq/whisper-large-v3" \
-F 'fallbacks[]=openai/whisper-1'
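
Handling Failures

Fallbacks are applied server side, but a request can still fail if every configured model errors or the API is unreachable, so client code may still want basic error handling. The sketch below uses the OpenAI SDK's standard exception types and makes no assumptions beyond the fallback parameter shown above.

import openai
from openai import OpenAI

client = OpenAI(
    api_key="YOUR_API_KEY",
    base_url="https://api.haimaker.ai/v1"
)

try:
    with open("speech.mp3", "rb") as audio_file:
        transcript = client.audio.transcriptions.create(
            model="groq/whisper-large-v3",
            file=audio_file,
            extra_body={"fallbacks": ["openai/whisper-1"]}
        )
    print(transcript.text)
except openai.APIConnectionError as exc:
    # The request never reached the server (network issue, bad base_url, etc.).
    print(f"Could not reach the API: {exc}")
except openai.APIStatusError as exc:
    # The API returned a non-2xx response even after fallbacks were attempted.
    print(f"Transcription failed ({exc.status_code}): {exc.message}")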