

The Pipecat client handles media at two levels: local devices (the user’s mic, camera, and speakers) and media tracks (the live audio/video streams flowing between client and bot). This page covers how to work with both.

Bot audio output
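If the transport doesn't play bot audio for you, a common pattern is to attach the bot's audio track to an audio element when it starts. A minimal sketch, assuming the Pipecat JS client's TrackStarted event delivers a MediaStreamTrack plus a participant object with a local flag (verify the event payload against your SDK version):

```typescript
import { PipecatClient, RTVIEvent } from "@pipecat-ai/client-js";

// Sketch: play the bot's audio through an <audio> element.
// The event and participant shapes are assumptions; check your SDK version.
function attachBotAudio(client: PipecatClient, audioEl: HTMLAudioElement) {
  client.on(RTVIEvent.TrackStarted, (track, participant) => {
    // Skip local tracks (the user's own mic) and video tracks.
    if (track.kind !== "audio" || participant?.local) return;
    audioEl.srcObject = new MediaStream([track]);
    audioEl.play().catch(() => {
      // Autoplay may be blocked until the user interacts with the page.
    });
  });
}
```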


Microphone

Enabling and muting
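A mute toggle is typically a one-liner against the client. The sketch below assumes the client exposes enableMic(boolean) and an isMicEnabled getter; a minimal structural type stands in for the real client so the snippet is self-contained:

```typescript
// Minimal structural type for the client methods used below; the real
// client comes from @pipecat-ai/client-js (method names assumed).
type MicControls = {
  isMicEnabled: boolean;
  enableMic(enabled: boolean): void;
};

// Flip the mic state, e.g. from a mute button's click handler.
function toggleMic(client: MicControls): boolean {
  const next = !client.isMicEnabled;
  client.enableMic(next);
  return next; // new state, handy for updating the button's UI
}
```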

Switching microphones
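To switch inputs mid-session, enumerate devices and hand the chosen deviceId back to the client. getAllMics() is referenced on this page; updateMic() is an assumed name — confirm it against the API reference:

```typescript
import { PipecatClient } from "@pipecat-ai/client-js";

// Switch to a specific microphone by deviceId.
// getAllMics() returns MediaDeviceInfo[]; updateMic() is an assumed name.
async function switchMic(client: PipecatClient, deviceId: string) {
  const mics = await client.getAllMics();
  const target = mics.find((m) => m.deviceId === deviceId);
  if (!target) throw new Error(`No microphone with deviceId ${deviceId}`);
  client.updateMic(target.deviceId);
}
```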


Camera

Not all transports support video. enableCam has no effect on transports that don’t support it (e.g. WebSocket).
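On transports that do support video, camera control mirrors the microphone flow. A sketch assuming enableCam(boolean), isCamEnabled, getAllCams(), and updateCam() on the client (confirm the names in the API reference):

```typescript
import { PipecatClient } from "@pipecat-ai/client-js";

// Toggle the camera; on transports without video support (e.g. WebSocket)
// enableCam is a no-op, so this is safe to call unconditionally.
function toggleCam(client: PipecatClient) {
  client.enableCam(!client.isCamEnabled);
}

// Switch to a different camera, e.g. front/back on mobile.
// getAllCams()/updateCam() are assumed, mirroring the mic API.
async function switchCam(client: PipecatClient, deviceId: string) {
  const cams = await client.getAllCams();
  if (cams.some((c) => c.deviceId === deviceId)) {
    client.updateCam(deviceId);
  }
}
```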

Speakers
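Output-device selection follows the same shape. getAllSpeakers() and updateSpeaker() are assumed names mirroring the mic API; note that routing audio to a specific speaker also depends on the browser's support for output-device selection:

```typescript
import { PipecatClient } from "@pipecat-ai/client-js";

// Route bot audio to a chosen output device.
// getAllSpeakers()/updateSpeaker() are assumed names; browser support for
// choosing audio output devices varies.
async function switchSpeaker(client: PipecatClient, deviceId: string) {
  const speakers = await client.getAllSpeakers();
  if (speakers.some((s) => s.deviceId === deviceId)) {
    client.updateSpeaker(deviceId);
  }
}
```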


Device initialization before connecting

By default, device access is requested when connect() is called. If you want to enumerate or test devices before the session starts — for example, to show a device picker pre-call — call initDevices() first:
```typescript
await client.initDevices();
// Devices are now available; the user hasn't connected yet.
const mics = await client.getAllMics();
```
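The enumerated devices can then feed a picker. A small hypothetical helper that turns MediaDeviceInfo-like entries into option data, with a fallback label since browsers may return empty device names before permission is granted:

```typescript
// Hypothetical helper: build <select> option data from enumerated devices.
// Browsers may return empty labels before mic permission is granted.
function toPickerOptions(
  devices: { deviceId: string; label: string }[],
): { value: string; label: string }[] {
  return devices.map((d, i) => ({
    value: d.deviceId,
    label: d.label || `Microphone ${i + 1}`,
  }));
}
```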

Media tracks

For advanced use cases — custom rendering, audio processing, recording — you can access the raw MediaStreamTrack objects directly.
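For example, you could feed the bot's audio track into a MediaRecorder. This sketch assumes the client exposes a tracks() method returning local and bot track sets; verify the shape against the API reference:

```typescript
import { PipecatClient } from "@pipecat-ai/client-js";

// Record the bot's audio with MediaRecorder.
// The { local, bot } shape returned by tracks() is an assumption.
function recordBotAudio(client: PipecatClient): MediaRecorder | null {
  const tracks = client.tracks();
  const botAudio = tracks.bot?.audio;
  if (!botAudio) return null;
  const recorder = new MediaRecorder(new MediaStream([botAudio]));
  recorder.start();
  return recorder;
}
```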

Audio visualization
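One way to drive a volume meter is to pipe the bot's audio track through a Web Audio AnalyserNode and sample it each animation frame. A sketch under that assumption; only the track itself comes from the client:

```typescript
// Pure helper: mean of byte samples normalized to 0..1.
function averageLevel(data: Uint8Array): number {
  if (data.length === 0) return 0;
  let sum = 0;
  for (let i = 0; i < data.length; i++) sum += data[i];
  return sum / (data.length * 255);
}

// Sketch: feed an audio track into an AnalyserNode and report a volume
// level on every animation frame (Web Audio API; runs in the browser).
function startMeter(track: MediaStreamTrack, onLevel: (level: number) => void) {
  const ctx = new AudioContext();
  const analyser = ctx.createAnalyser();
  analyser.fftSize = 256;
  ctx.createMediaStreamSource(new MediaStream([track])).connect(analyser);
  const data = new Uint8Array(analyser.frequencyBinCount);
  const tick = () => {
    analyser.getByteFrequencyData(data);
    onLevel(averageLevel(data));
    requestAnimationFrame(tick);
  };
  tick();
}
```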


API reference