The chat module ships with first-class AI integration. The same socket that delivers customer messages can deliver streamed AI tokens — so to the widget, "talking to the AI" and "talking to a person" are indistinguishable.
How AI gets attached to a chat
A chat can have an AI assistant if all three are true:
- The customer connects with a configId in the handshake context
- The chat config record has data.ai = '<assistantId>'
- The config's status is active
When this is set, the gateway calls AIAssistantService.executeStream() for every customer message that doesn't have a human assigned.
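The routing decision reduces to a small predicate. A minimal sketch, mirroring only the fields this doc names (data.ai, status, currentAgent) — the record shapes beyond that are assumptions:

```javascript
// Sketch of the per-message routing check. Only data.ai, status and
// currentAgent come from the doc; the rest of the shape is illustrative.
function shouldStreamToAI(session, config) {
  if (session.currentAgent) return false;          // a human has taken over
  if (!config || config.status !== 'active') return false;
  return Boolean(config.data && config.data.ai);   // assistant pinned on the config
}
```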
Authoring the assistant
Assistants are records in the ai_assistant collection. Configure them through the CRM AI Assistant module — see AI assistant configuration.
Each assistant carries:
- title, description, personality — composed into the system prompt
- capabilities[] — playbooks injected into the prompt by ID (lookup table in ai-assistant.config.ts)
- tools[] — function names from the CRM tool registry the assistant can call
- voice, model — used by both text and voice gateways
The chat gateway reads the assistant on each message and rebuilds the system prompt from these fields, so changes take effect on the next turn without restart.
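One way to picture that rebuild, as a sketch — the capability lookup table and playbook text here are invented (the real table lives in ai-assistant.config.ts):

```javascript
// Illustrative only: capability IDs and playbook text are made up;
// the real lookup table lives in ai-assistant.config.ts.
const CAPABILITY_PLAYBOOKS = {
  refunds: 'When asked about refunds, check the order status first.',
  kb: 'Prefer knowledge-base answers over improvisation.',
};

function buildSystemPrompt(assistant) {
  const parts = [
    `You are ${assistant.title}. ${assistant.description}`,
    `Personality: ${assistant.personality}`,
    // Unknown capability IDs are simply skipped.
    ...(assistant.capabilities || []).map((id) => CAPABILITY_PLAYBOOKS[id]).filter(Boolean),
  ];
  return parts.join('\n');
}
```

Because the prompt is derived on every turn, editing the assistant record is enough — no process restart.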
Streamed responses
The customer sends chat-message. The server emits a sequence of chat-stream events into the chat room (so observing agents see the same stream live):
socket.on('chat-stream', (evt) => {
switch (evt.event) {
case 'chunk': // evt.data — partial text
case 'tool-use': // evt.tool, evt.input — function call
case 'tool-result': // evt.tool, evt.result — function returned
case 'end': // evt.sk — final stored message id
case 'error': // evt.error
}
});
The complete text is also persisted as a chat_message record with type: 'ai-assistant' once the stream ends.
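Client-side, those events fold naturally into a single message state. A sketch of one possible reducer — the event names match the list above, the state shape is an assumption:

```javascript
// Fold chat-stream events into widget state; the state shape is illustrative.
const initialStream = { text: '', tools: [], done: false };

function reduceStream(state, evt) {
  switch (evt.event) {
    case 'chunk':
      return { ...state, text: state.text + evt.data };
    case 'tool-use':
      return { ...state, tools: [...state.tools, { tool: evt.tool, input: evt.input }] };
    case 'tool-result':
      // Attach the result to the first pending call for that tool.
      return { ...state, tools: state.tools.map((t) =>
        t.tool === evt.tool && !('result' in t) ? { ...t, result: evt.result } : t) };
    case 'end':
      return { ...state, done: true, messageId: evt.sk };
    case 'error':
      return { ...state, done: true, error: evt.error };
    default:
      return state;
  }
}
```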
Selecting an assistant at runtime
If the config doesn't pin a single assistant, customers can choose:
socket.emit('list-assistants', {}, (res) => {
// res.assistants: [{ id, name, title, description }, ...]
});
socket.emit('select-assistant', { assistantId: '<id>' });
The choice is stored on the session and used until the customer picks a different one or transfers to a human.
Hand-off to a human
Agents have three primitives for taking over an AI conversation.
takeover-chat
socket.emit('takeover-chat', { chatId: '...' });
Aborts the active AI stream, sets currentAgent on the customer's session so future messages route to the agent, and posts an "agent has joined" system message. The AI stays silent until explicitly resumed.
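The sequence above can be sketched as a server-side function; streams, sessions and postSystemMessage are stand-ins for the gateway's real dependencies, not its actual API:

```javascript
// Hypothetical server-side sketch of the takeover sequence; the injected
// services are stand-ins for whatever the gateway really uses.
function takeoverChat({ chatId, agentId }, { streams, sessions, postSystemMessage }) {
  const controller = streams.get(chatId);
  if (controller) controller.abort();          // kill the in-flight AI stream
  sessions.setCurrentAgent(chatId, agentId);   // future messages route to this agent
  postSystemMessage(chatId, 'An agent has joined the conversation.');
}
```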
assist-ai
socket.emit('assist-ai', {
chatId: '...',
message: 'Suggested reply: "...". Send?',
  mode: 'suggest', // or 'inject'
});
The agent watches the AI work and either suggests a reply (presented to the agent only) or injects a final message in the AI's voice.
resume-ai
socket.emit('resume-ai', { chatId: '...' });
Hands control back to the assistant. The next customer message is processed by the AI again.
AI-initiated transfer
The assistant has a transfer_chat_to_agent tool. When the AI decides a human is needed, it calls the tool with a summary; the gateway converts that into a chat-queue-notification event:
{
orgId, chatId, customerEmail,
reason: 'wants to talk to a human',
summary: 'Refund $240 for order ABC-12, customer frustrated',
position, aiHandoff: true,
}
Every agent — both those configured on the chat config and any currently online — receives a queue-notification and a queue-updated event. The customer sees the standard "you're in queue" experience while the AI explains the wait.
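On the agent side, the payload above lends itself to a one-line alert. formatQueueAlert is an invented helper; only the field names come from the payload shown above:

```javascript
// Invented helper: turns the chat-queue-notification payload into a
// one-line alert string for the agent UI.
function formatQueueAlert(n) {
  const tag = n.aiHandoff ? '[AI hand-off]' : '[queue]';
  return `${tag} ${n.customerEmail} (#${n.position}): ${n.reason}. ${n.summary}`;
}
```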
Observing AI conversations
/chat/ai-active (JWT) — returns every chat currently being handled by an AI. The admin "supervise AI" panel is built on this — agents can join the chat room (join-chat), watch the stream live, and step in via takeover-chat.
Aborting a stream
Each AI stream is wired to an AbortController keyed by chatId. takeover-chat calls controller.abort() so partial output is suppressed and the saved message reflects only what the customer actually saw.
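A minimal sketch of such a registry, assuming at most one live stream per chat (function names here are illustrative, not the gateway's actual API):

```javascript
// One AbortController per chatId; abort-and-replace on each new stream.
const streamControllers = new Map();

function startStream(chatId) {
  abortStream(chatId);                        // enforce one live stream per chat
  const controller = new AbortController();
  streamControllers.set(chatId, controller);
  return controller.signal;                   // hand this to the streaming call
}

function abortStream(chatId) {
  const controller = streamControllers.get(chatId);
  if (controller) {
    controller.abort();
    streamControllers.delete(chatId);
  }
}
```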
Tools the AI calls (CRM lookups, knowledge base search, transfer to agent) emit tool-use and tool-result events. Surface them in the agent UI so supervisors know exactly what the AI did during the conversation — not just what it said.