
The Pipecat client emits events throughout the session lifecycle — when the bot connects, when the user speaks, when a transcript arrives, and more.

Subscribing to events

There are two ways to receive events: pass callbacks in the client's constructor options, or register event listeners on the client after it is created. Both styles deliver the same events with the same data, and they can be mixed freely.

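The two subscription styles can be sketched with a minimal stand-in for the client's event surface (the `TinyClient` class below is illustrative only; the real client exposes a `callbacks` constructor option and `on()` listeners, but the names here are not the library's):

```typescript
type Callback = () => void;

// Tiny stand-in for the client's event surface, to make the pattern concrete.
// Style 1: callbacks passed at construction. Style 2: .on() listeners.
class TinyClient {
  private listeners = new Map<string, Callback[]>();
  constructor(private callbacks: Record<string, Callback> = {}) {}

  on(event: string, cb: Callback): void {
    const arr = this.listeners.get(event) ?? [];
    arr.push(cb);
    this.listeners.set(event, arr);
  }

  // Fires the constructor callback (if any) first, then all listeners.
  emit(event: string): void {
    this.callbacks["on" + event]?.();
    for (const cb of this.listeners.get(event) ?? []) cb();
  }
}

const log: string[] = [];
const client = new TinyClient({ onBotReady: () => log.push("callback: bot ready") });
client.on("BotReady", () => log.push("listener: bot ready"));
client.emit("BotReady"); // both the callback and the listener fire
```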
Event reference

Session and connectivity

These events track the connection state of the client and bot. See Session Lifecycle for the full state progression.
| Event | Callback | When it fires |
| --- | --- | --- |
| `Connected` | `onConnected` | Client transport connection established |
| `Disconnected` | `onDisconnected` | Client disconnected (intentional or error) |
| `TransportStateChanged` | `onTransportStateChanged` | Any transport state change; receives the new `TransportState` string |
| `BotConnected` | `onBotConnected` | Bot joined the transport; pipeline may still be initializing |
| `BotReady` | `onBotReady` | Bot pipeline is ready; safe to send messages and expect audio |
| `BotDisconnected` | `onBotDisconnected` | Bot left the session; client will also disconnect unless `disconnectOnBotDisconnect: false` |
| `ParticipantConnected` | `onParticipantJoined` | Any participant joined (bot, local, or other) |
| `ParticipantLeft` | `onParticipantLeft` | Any participant left (bot, local, or other) |
BotReady receives a BotReadyData object with a version field — the RTVI version the bot is running. You can use this to check compatibility if your client and server may be on different versions.
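A compatibility check against the `version` field might look like the sketch below. The client-side version constant and the "same major version" rule are assumptions for illustration; consult the RTVI protocol's actual compatibility policy:

```typescript
// Assumed: the RTVI version your client build targets.
const CLIENT_RTVI_VERSION = "1.0.0";

// Simple compatibility heuristic: treat matching major versions as compatible.
function sameMajorVersion(a: string, b: string): boolean {
  return a.split(".")[0] === b.split(".")[0];
}

// Handler shape based on BotReadyData's documented `version` field.
function handleBotReady(data: { version: string }): void {
  if (!sameMajorVersion(data.version, CLIENT_RTVI_VERSION)) {
    console.warn(
      `RTVI version mismatch: client ${CLIENT_RTVI_VERSION}, bot ${data.version}`
    );
  }
}
```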

Voice activity

These events are driven by the bot’s VAD (voice activity detection) model. VAD is smarter than tracking raw audio levels — it understands turn-taking, so it can distinguish between a user who has finished speaking and one who has simply paused or is speaking slowly.
| Event | Callback | When it fires |
| --- | --- | --- |
| `UserStartedSpeaking` | `onUserStartedSpeaking` | VAD detected the user started speaking |
| `UserStoppedSpeaking` | `onUserStoppedSpeaking` | VAD detected the user stopped speaking |
| `BotStartedSpeaking` | `onBotStartedSpeaking` | Bot started sending audio |
| `BotStoppedSpeaking` | `onBotStoppedSpeaking` | Bot stopped sending audio |
| `LocalAudioLevel` | `onLocalAudioLevel` | Local audio gain level (0–1); fires continuously |
| `RemoteAudioLevel` | `onRemoteAudioLevel` | Remote audio gain level (0–1); fires continuously |
| `UserMuteStarted` | `onUserMuteStarted` | Server started ignoring client audio (server-side mute) |
| `UserMuteStopped` | `onUserMuteStopped` | Server resumed processing client audio |
UserMuteStarted/UserMuteStopped reflect server-side muting — the client continues sending audio, but the bot is ignoring it. Use these to update your UI (e.g., show a muted indicator) without actually stopping the local mic.
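Because the audio-level events fire continuously, feeding the raw 0–1 values straight into a level meter produces a jittery display. A common fix is to smooth them; the exponential moving average and the `alpha` value below are our choice, not part of the API:

```typescript
// Returns a function that smooths successive raw levels (0–1) with an
// exponential moving average. Higher alpha = more responsive, more jitter.
function makeLevelSmoother(alpha = 0.3): (raw: number) => number {
  let smoothed = 0;
  return (raw: number): number => {
    smoothed = alpha * raw + (1 - alpha) * smoothed;
    return smoothed;
  };
}

// Feed each onLocalAudioLevel / onRemoteAudioLevel value through the smoother,
// then render the result (e.g. as a meter width).
const smooth = makeLevelSmoother(0.5);
```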

Transcription and bot output

| Event | Callback | Data | When it fires |
| --- | --- | --- | --- |
| `UserTranscript` | `onUserTranscript` | `TranscriptData` | User speech transcribed; fires for both partial (`final: false`) and final results |
| `BotOutput` | `onBotOutput` | `BotOutputData` | Bot text output, typically aggregated by sentence or word during TTS synthesis |
| `BotLlmText` | `onBotLlmText` | `BotLLMTextData` | Raw LLM token stream |
| `BotLlmStarted` | `onBotLlmStarted` | | LLM inference started |
| `BotLlmStopped` | `onBotLlmStopped` | | LLM inference finished |
| `BotTtsText` | `onBotTtsText` | `BotTTSTextData` | Words from TTS as they are synthesized (streaming TTS only) |
| `BotTtsStarted` | `onBotTtsStarted` | | TTS synthesis started |
| `BotTtsStopped` | `onBotTtsStopped` | | TTS synthesis finished |
UserTranscript fires continuously as speech is recognized. Check data.final to distinguish committed transcripts from in-progress partials.

BotOutput is the recommended way to display the bot's response text: it provides the best available representation of what the bot is actually saying, accounting for interruptions and unspoken responses. By default, Pipecat aggregates output by sentence and by word (assuming your TTS supports streaming), but custom aggregation strategies are also supported, such as breaking out code snippets or other structured content.
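A typical pattern for rendering a live transcript is to keep committed lines separately from the current partial, replacing the partial on every non-final event. A sketch, assuming `TranscriptData` carries `text` and `final` fields as described above (the `TranscriptView` shape is ours):

```typescript
// Assumed payload shape, matching the documented TranscriptData fields.
interface TranscriptData {
  text: string;
  final: boolean;
}

// Committed lines plus the current in-progress partial, for live rendering.
interface TranscriptView {
  lines: string[];
  partial: string;
}

// Pure reducer: finals are appended and clear the partial; partials replace it.
function applyTranscript(view: TranscriptView, data: TranscriptData): TranscriptView {
  return data.final
    ? { lines: [...view.lines, data.text], partial: "" }
    : { ...view, partial: data.text };
}
```

Wire this into your `UserTranscript` handler and re-render whenever the view changes.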

Errors

| Event | Callback | When it fires |
| --- | --- | --- |
| `Error` | `onError` | Bot signalled an error; `data.fatal` is true if the session is unrecoverable |
| `MessageError` | `onMessageError` | A client message failed or got an error response |
Always handle Error. If data.fatal is true, the bot has already disconnected, so update your UI accordingly.
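One way to structure the handler is to map the error payload to a UI state. The payload shape below follows the documented `data.fatal` flag; the `message` field and the state names are our assumptions:

```typescript
// Assumed error payload: `fatal` is documented; `message` is assumed optional.
interface ErrorData {
  message?: string;
  fatal?: boolean;
}

// On a fatal error the bot has already disconnected; non-fatal errors leave
// the session running but possibly degraded.
function uiStateForError(data: ErrorData): "disconnected" | "degraded" {
  return data.fatal ? "disconnected" : "degraded";
}
```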

Devices and tracks

| Event | Callback | When it fires |
| --- | --- | --- |
| `AvailableMicsUpdated` | `onAvailableMicsUpdated` | Mic list changed or `initDevices()` called |
| `AvailableCamsUpdated` | `onAvailableCamsUpdated` | Camera list changed or `initDevices()` called |
| `AvailableSpeakersUpdated` | `onAvailableSpeakersUpdated` | Speaker list changed or `initDevices()` called |
| `MicUpdated` | `onMicUpdated` | Active microphone changed |
| `CamUpdated` | `onCamUpdated` | Active camera changed |
| `SpeakerUpdated` | `onSpeakerUpdated` | Active speaker changed |
| `DeviceError` | `onDeviceError` | Mic, camera, or permission error |
| `TrackStarted` | `onTrackStarted` | A media track (audio or video) became playable |
| `TrackStopped` | `onTrackStopped` | A media track stopped |
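When AvailableMicsUpdated fires, the previously selected device may have disappeared (for example, a headset was unplugged), so it is worth checking whether the current selection is still valid and falling back if not. A sketch using a `MediaDeviceInfo`-like shape (only `deviceId` matters here; the fallback-to-first rule is our choice):

```typescript
// Minimal device shape, mirroring the fields we actually use.
interface DeviceInfo {
  deviceId: string;
  label: string;
}

// Keep the current device if it is still available; otherwise fall back to
// the first remaining device, or null when none are left.
function pickMic(available: DeviceInfo[], currentId: string | null): string | null {
  if (currentId && available.some((d) => d.deviceId === currentId)) return currentId;
  return available[0]?.deviceId ?? null;
}
```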

Function calling

Other

| Event | Callback | When it fires |
| --- | --- | --- |
| `ServerMessage` | `onServerMessage` | Custom message sent from the bot to the client |
| `Metrics` | `onMetrics` | Pipeline performance metrics from Pipecat |
For custom server<->client messaging, see Custom Messaging.
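Since ServerMessage payloads are entirely app-defined, a common convention (ours, not the library's) is to include a `type` field and dispatch to handlers by it:

```typescript
type Handler = (payload: unknown) => void;

// Build a dispatcher for custom server messages. Messages carry an assumed
// app-defined `type` field; unknown types are logged rather than dropped silently.
function makeServerMessageRouter(handlers: Record<string, Handler>) {
  return (msg: { type?: string; [k: string]: unknown }): void => {
    const handler = msg.type ? handlers[msg.type] : undefined;
    if (handler) handler(msg);
    else console.warn("unhandled server message", msg);
  };
}

// Usage: pass the returned function as your ServerMessage handler.
const route = makeServerMessageRouter({
  score: (m) => console.log("score update", m),
});
```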
