FAQs

Frequently asked questions about the Companion API

Getting started

How do I get started?

The Companion API is available through the Azure Marketplace. Purchase a subscription there, then sign in to the dashboard to create your organization and first project. From there, create a Companion, set up your knowledge and tools, and generate an API key. Use the REST API to manage your agents, knowledge, and configuration programmatically, and connect to your agent using the Web SDK or WebSocket. You can test everything in the Playground before writing any integration code.

Do I need my own LLM deployment?

Not necessarily. The platform supports both customer-hosted and Napster-hosted model configurations. With customer-hosted (Azure OpenAI), you provide your own deployment and credentials. With Napster-hosted, Napster manages the model infrastructure and you don't need your own cloud account or deployment. You choose the option when creating an API key.
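The two configurations imply different information at key-creation time. As a rough sketch — the type and field names below are illustrative assumptions, not the actual Companion API schema:

```typescript
// Hypothetical shapes for the two model configurations chosen when
// creating an API key. Field names are illustrative only.
type ModelConfig =
  | {
      hosting: "customer-hosted"; // you run the model in your own Azure account
      azureOpenAi: {
        endpoint: string; // your Azure OpenAI resource endpoint
        deployment: string; // your model deployment name
        apiKey: string; // credential for your deployment
      };
    }
  | {
      hosting: "napster-hosted"; // Napster manages the model infrastructure
    };

// Customer-hosted: you supply deployment details and credentials.
const customerHosted: ModelConfig = {
  hosting: "customer-hosted",
  azureOpenAi: {
    endpoint: "https://example.openai.azure.com",
    deployment: "gpt-4o",
    apiKey: "<your-azure-openai-key>",
  },
};

// Napster-hosted: no cloud account or deployment of your own needed.
const napsterHosted: ModelConfig = { hosting: "napster-hosted" };
```

The point of the contrast: only the customer-hosted option requires you to bring an endpoint, a deployment, and a credential.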

Can I use the Companion API without the dashboard?

Yes. Once your API keys, companions, knowledge, tools, and FAQs are set up, developers interact with the product through the REST API and the connectivity layer (Web SDK, WebSocket, VoIP, or SIP). The dashboard is primarily a management and testing tool.

Can I test my agent without writing code?

Yes. The Playground in the dashboard lets you have a live video and voice conversation with any agent you assemble. No integration required. It's designed for rapid testing and iteration.

What's the difference between Admin and Member roles?

Admins have full access to all dashboard features including API Keys, Settings, billing, member management, and project management (create, edit, delete). Members can browse companions, use the Playground, and view existing projects, but do not have access to API Keys, Settings, or the ability to create, edit, or delete projects.


Companions and agents

What's the difference between a Companion and an agent?

A Companion is the identity layer: the avatar, name, personality, and voice. On its own, it doesn't do much. A full agent is what you get when you assemble a Companion with knowledge, tools, FAQs, and memory at session time. Think of the Companion as the face, and the agent as the complete package.

Can I edit a companion after creating it?

You can modify the companion's behavior by updating the system instructions in the Playground. The avatar and name are fixed at creation time. Full management of the companion's appearance, properties, and configuration, through both the dashboard and the API, is planned for an upcoming release.

What image should I upload for the companion avatar?

A 16:9 photo showing a person from the waist up, facing the camera with relaxed arms and a gentle smile. The image is used to generate the companion's video avatar. You can use a real person's photo (with their consent), a stock photo, or any suitable portrait. No public figures are allowed.

What avatar views are available?

Three options, rendered through the Web SDK: Round (circle, focused on face), Rectangle (full body with background), and Silhouette (full body, background removed, floating on page).

What voices are available?

Standard voices are available by default, sourced from your LLM provider (Azure OpenAI). Whatever voices the provider supports, the Companion API supports. You can select a voice when configuring a session in the Playground or programmatically when creating a connection. Custom cloned voices are planned for a future release.


Knowledge and tools

How do I give my agent knowledge?

Create a Knowledge Collection and upload your documents, either through the dashboard or through the API. Then attach the collection to your agent when assembling a session in the Playground or programmatically through the API. The agent uses the uploaded content to ground its answers.
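As a sketch of the programmatic path, assuming a hypothetical payload shape (the real Companion API schema and endpoints may differ):

```typescript
// Illustrative sketch: assembling a session payload that attaches
// knowledge collections to an agent. Field names are assumptions.
interface SessionRequest {
  companionId: string;
  knowledgeCollectionIds: string[]; // collections the agent grounds answers on
}

function buildSessionRequest(
  companionId: string,
  knowledgeCollectionIds: string[]
): SessionRequest {
  return { companionId, knowledgeCollectionIds };
}

const sessionRequest = buildSessionRequest("comp_123", [
  "kc_products",
  "kc_policies",
]);

// The payload would then be sent to the REST API, e.g. (URL is a placeholder):
// fetch("https://api.example.com/v1/sessions", {
//   method: "POST",
//   headers: { Authorization: "Bearer <api-key>" },
//   body: JSON.stringify(sessionRequest),
// });
```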

What's the difference between implicit and explicit tools?

Implicit tools are handled by your calling application: the platform delivers the tool name and arguments to your application, your application executes the logic and sends the result back. Explicit tools are forwarded to a server-side endpoint (HTTP or WebSocket URL) that you configure; your server executes the logic and returns the result. In both cases, the agent uses the returned result to continue the conversation. You choose the execution flow when you create the tool.
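The implicit flow can be sketched as a small dispatch table in the calling application. The ToolCall shape and handler registry below are assumptions; only the flow (receive the tool name and arguments, run your own logic, return the result so the agent can continue) comes from the description above.

```typescript
// Minimal sketch of handling an *implicit* tool call client-side.
interface ToolCall {
  name: string;
  arguments: Record<string, unknown>;
}

type ToolHandler = (args: Record<string, unknown>) => unknown;

// Hypothetical tool: look up an order's status in your own system.
const handlers: Record<string, ToolHandler> = {
  get_order_status: (args) => ({ orderId: args.orderId, status: "shipped" }),
};

function handleToolCall(call: ToolCall): unknown {
  const handler = handlers[call.name];
  if (!handler) throw new Error(`No handler registered for tool: ${call.name}`);
  return handler(call.arguments); // this result is sent back to the platform
}

const toolResult = handleToolCall({
  name: "get_order_status",
  arguments: { orderId: "ord_42" },
});
```

For explicit tools, the same logic would instead live behind the HTTP or WebSocket endpoint you configure, and the platform would call it directly.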


Connectivity

What are the different ways to connect to the Companion API?

Three are available today. The Web SDK is a client library for browser-based integrations that handles video, audio, and session lifecycle. WebSocket provides a persistent bidirectional connection for real-time audio streaming from any environment. The REST API handles session creation, configuration, and management. Additional connectivity options (VoIP, SIP) are in progress.

Is there an inactive session timeout?

Yes. The inactivity timeout automatically disconnects the avatar after a period of user inactivity, and users receive a countdown notification before disconnection. You can try it in the Playground's Advanced Settings, and configure it programmatically through the Web SDK, with control over the timeout duration and countdown length.
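A configuration sketch, assuming hypothetical option names (the Web SDK exposes control over the timeout duration and the countdown length, but these exact fields are illustrative):

```typescript
// Hypothetical Web SDK inactivity-timeout options.
interface InactivityOptions {
  enabled: boolean;
  timeoutSeconds: number; // user inactivity allowed before disconnect
  countdownSeconds: number; // warning countdown shown before disconnection
}

const inactivity: InactivityOptions = {
  enabled: true,
  timeoutSeconds: 120,
  countdownSeconds: 15,
};
```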

What does the Disclaimer setting do?

When enabled through the Web SDK, it displays a customizable message to the user (default: "This is an AI-generated avatar"). You can edit the text to suit your use case or compliance requirements.
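A minimal sketch, again with illustrative option names (only the default text comes from the FAQ above):

```typescript
// Hypothetical Web SDK disclaimer options.
interface DisclaimerOptions {
  enabled: boolean;
  text: string;
}

const disclaimer: DisclaimerOptions = {
  enabled: true,
  text: "This is an AI-generated avatar", // default, editable for compliance needs
};
```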


Billing and data

How are billing and pricing handled?

Billing is managed through the Azure Marketplace. No setup fees, no commitments. You pay only for the minutes your agents are active. Pricing depends on your setup: if you bring your own LLM, you pay $0.01/min ($0.60/hour). If you use a Napster-hosted model, you pay approximately $0.058/min ($3.50/hour). You can view usage in the dashboard and manage your subscription from the Azure Portal.
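To make the two rates concrete, a quick arithmetic check using the per-minute prices quoted above:

```typescript
// Simple per-minute cost calculation from the quoted rates.
function sessionCostUsd(minutes: number, ratePerMinute: number): number {
  return minutes * ratePerMinute;
}

const BYO_LLM_RATE = 0.01; // bring-your-own LLM: $0.60/hour
const HOSTED_RATE = 0.058; // Napster-hosted model: ~$3.50/hour

const byoHour = sessionCostUsd(60, BYO_LLM_RATE); // ≈ $0.60 for one hour
const hostedHour = sessionCostUsd(60, HOSTED_RATE); // ≈ $3.48, i.e. ~$3.50/hour
```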

Does Napster have access to my data or models?

With a customer-hosted deployment, Napster does not access your LLM or its inference data. AI inference runs entirely in your cloud environment. However, Napster stores the agent configuration (system prompts, personality, instructions) on its platform and handles video rendering and agent orchestration. With a Napster-hosted deployment, Napster manages the model infrastructure on your behalf.

Can I have multiple API keys?

Yes. You can create as many API keys as you need, each linked to a different LLM provider or deployment. API keys are scoped to a project, so you can use projects to separate environments (Production, Staging, Development) or applications.
