Chapter 26: Intelligent Customer Service System
From "press 0 for an agent" to "solving problems intelligently" — building an enterprise-grade multi-turn conversational customer service system
26.1 Requirements Analysis and Feature Planning
26.1.1 Business Background
Traditional customer service suffers from three major pain points:
- Slow response: human agents take 3-5 minutes on average to reply, and peak-hour queues can reach 30 minutes
- High cost: a mature customer service team costs over 2 million RMB per year
- Knowledge loss: agent turnover means product knowledge is never retained
To address these pain points, we will build an intelligent customer service system that delivers:
- Automatic answers to 70% of common questions, reducing human involvement
- 24/7 availability, removing time constraints
- Precise, knowledge-base-driven answers rather than canned template responses
- Smooth human-machine collaboration, with seamless handoff of complex issues to human agents
26.1.2 Feature List
┌────────────────────────────────────────────────────────┐
│   Intelligent Customer Service: Functional Overview    │
├────────────────────────────────────────────────────────┤
│ ┌──────────────┐ ┌───────────────┐ ┌────────────────┐ │
│ │ Access layer │ │ Dialog engine │ │ Business layer │ │
│ │ • Web chat   │ │ • Intent      │ │ • Knowledge    │ │
│ │ • WeChat     │ │ • Multi-turn  │ │ • Tickets      │ │
│ │ • REST API   │ │ • Emotion     │ │ • User profile │ │
│ │ • Voice/IVR  │ │ • Generation  │ │ • Analytics    │ │
│ └──────────────┘ └───────────────┘ └────────────────┘ │
│ ┌────────────────────────────────────────────────────┐ │
│ │             Human collaboration layer              │ │
│ │ • Handoff decision • Ticket creation • Routing     │ │
│ └────────────────────────────────────────────────────┘ │
└────────────────────────────────────────────────────────┘
26.1.3 Non-Functional Requirements
| Dimension | Target |
|---|---|
| Response time | P95 < 2 s |
| Concurrency | 500 QPS |
| Availability | 99.9% |
| Knowledge base size | 100,000+ FAQ entries |
| Multi-turn dialog depth | up to 20 turns |
26.2 Architecture Design
26.2.1 Project Layout
smart-customer-service/
├── app/
│   ├── main.py                  # FastAPI entry point
│   ├── config.py                # Configuration management
│   ├── models/                  # Data models
│   │   ├── message.py
│   │   ├── session.py
│   │   └── ticket.py            # Ticket model
│   ├── agents/                  # Agent core
│   │   ├── intent_agent.py      # Intent recognition
│   │   ├── dialog_agent.py      # Dialog management
│   │   ├── emotion_agent.py     # Emotion analysis
│   │   └── knowledge_agent.py   # Knowledge retrieval
│   ├── services/                # Business services
│   │   ├── ticket_service.py
│   │   └── human_handoff.py     # Human handoff
│   └── utils/
│       └── llm_client.py        # LLM client
├── tests/
├── docker-compose.yml
└── requirements.txt
26.2.2 Core Class Design
The system is built from four agents, with DialogAgent as the central coordinator:
- IntentAgent: works out what the user wants (order lookup, FAQ, complaint, ...)
- EmotionAgent: detects the user's emotional state (angry, anxious, satisfied, ...)
- KnowledgeAgent: retrieves the most relevant answers from the knowledge base
- DialogAgent: manages the multi-turn dialog flow and coordinates the other agents
Design decision: each agent has a single responsibility and can be tested and upgraded independently. DialogAgent handles scheduling and aggregation, so the agents never couple to each other directly.
26.3 Core Implementation
26.3.1 Configuration
# app/config.py
"""Configuration management for the customer service system."""
from pydantic_settings import BaseSettings
from enum import Enum

class LLMProvider(str, Enum):
    OPENAI = "openai"
    CLAUDE = "claude"
    GLM = "glm"

class Settings(BaseSettings):
    APP_NAME: str = "智能客服系统"
    APP_VERSION: str = "1.0.0"
    DEBUG: bool = False
    # LLM settings
    LLM_PROVIDER: LLMProvider = LLMProvider.OPENAI
    LLM_API_KEY: str = ""
    LLM_BASE_URL: str = "https://api.openai.com/v1"
    LLM_MODEL: str = "gpt-4o"
    LLM_TEMPERATURE: float = 0.3
    LLM_MAX_TOKENS: int = 2048
    # Vector database
    CHROMA_PERSIST_DIR: str = "./chroma_data"
    EMBEDDING_MODEL: str = "text-embedding-3-small"
    # Business parameters
    MAX_DIALOG_TURNS: int = 20
    KNOWLEDGE_SIMILARITY_THRESHOLD: float = 0.75
    MAX_KNOWLEDGE_RESULTS: int = 5
    HUMAN_HANDOFF_TIMEOUT: int = 60

    class Config:
        env_file = ".env"
        env_prefix = "CS_"

settings = Settings()
# app/utils/llm_client.py
"""LLM client wrapper."""
import json
from typing import Optional, List, Dict
from openai import AsyncOpenAI
from app.config import settings

class LLMClient:
    _instance: Optional['LLMClient'] = None

    def __new__(cls) -> 'LLMClient':
        # Singleton: one shared client for the process. AsyncOpenAI is
        # used so the async methods below do not block the event loop.
        if cls._instance is None:
            cls._instance = super().__new__(cls)
            cls._instance._client = AsyncOpenAI(
                api_key=settings.LLM_API_KEY,
                base_url=settings.LLM_BASE_URL,
            )
        return cls._instance

    async def chat(
        self,
        messages: List[Dict[str, str]],
        system_prompt: Optional[str] = None,
        temperature: Optional[float] = None,
        max_tokens: Optional[int] = None,
        response_format: Optional[dict] = None,
    ) -> str:
        full_messages = []
        if system_prompt:
            full_messages.append({"role": "system", "content": system_prompt})
        full_messages.extend(messages)
        kwargs = {
            "model": settings.LLM_MODEL,
            # `is None` check so an explicit temperature of 0.0 is honored
            "temperature": settings.LLM_TEMPERATURE
                           if temperature is None else temperature,
            "messages": full_messages,
            "max_tokens": max_tokens or settings.LLM_MAX_TOKENS,
        }
        if response_format:
            kwargs["response_format"] = response_format
        response = await self._client.chat.completions.create(**kwargs)
        return response.choices[0].message.content

    async def chat_json(
        self, messages: List[Dict[str, str]],
        system_prompt: Optional[str] = None,
    ) -> dict:
        content = await self.chat(
            messages=messages, system_prompt=system_prompt,
            temperature=0.1,
            response_format={"type": "json_object"},
        )
        return json.loads(content)

    async def embed(self, texts: List[str]) -> List[List[float]]:
        response = await self._client.embeddings.create(
            model=settings.EMBEDDING_MODEL, input=texts,
        )
        return [item.embedding for item in response.data]

llm_client = LLMClient()
26.3.2 Intent Recognition Agent
# app/agents/intent_agent.py
"""Intent recognition agent."""
import json
from dataclasses import dataclass, field
from enum import Enum
from typing import Optional, List
from app.utils.llm_client import llm_client

class IntentType(str, Enum):
    FAQ = "faq"
    PRODUCT_INQUIRY = "product"
    ORDER_QUERY = "order"
    COMPLAINT = "complaint"
    REFUND = "refund"
    TECHNICAL = "technical"
    ACCOUNT = "account"
    TRANSFER_HUMAN = "transfer"
    GREETING = "greeting"
    UNKNOWN = "unknown"

@dataclass
class IntentResult:
    intent: IntentType
    confidence: float
    entities: dict = field(default_factory=dict)
    sub_intent: Optional[str] = None
    raw_text: str = ""

class IntentAgent:
    SYSTEM_PROMPT = """你是一个智能客服意图识别模块。
分析用户输入,识别意图并提取关键实体。
意图类型:
- faq: 常见问题 | product: 产品查询 | order: 订单查询
- complaint: 投诉 | refund: 退款退货 | technical: 技术支持
- account: 账户问题 | transfer: 转人工 | greeting: 问候
- unknown: 无法识别
返回 JSON:{"intent":"...","confidence":0.0-1.0,"entities":{}}"""

    def __init__(self):
        # Few-shot examples rendered into every classification request
        self._examples = [
            {"text": "我的订单 ORD-2024-1234 什么时候发货?",
             "intent": "order", "confidence": 0.95,
             "entities": {"order_id": "ORD-2024-1234"}},
            {"text": "你们支持花呗付款吗?",
             "intent": "faq", "confidence": 0.90, "entities": {}},
            {"text": "我要投诉,你们的快递太慢了!",
             "intent": "complaint", "confidence": 0.95,
             "entities": {"reason": "快递太慢"}},
            {"text": "这个耳机买了三天就坏了,我要退货退款",
             "intent": "refund", "confidence": 0.92,
             "entities": {"product_name": "耳机", "reason": "质量问题"}},
            {"text": "App一打开就闪退,怎么办?",
             "intent": "technical", "confidence": 0.90, "entities": {}},
            {"text": "我忘了密码,手机号也换了",
             "intent": "account", "confidence": 0.93,
             "entities": {"reason": "忘记密码,手机号更换"}},
            {"text": "转人工!",
             "intent": "transfer", "confidence": 0.98, "entities": {}},
        ]

    async def classify(
        self, text: str, context: Optional[dict] = None
    ) -> IntentResult:
        few_shot = []
        for ex in self._examples:
            few_shot.append({"role": "user", "content": ex["text"]})
            few_shot.append({"role": "assistant", "content": json.dumps(
                {"intent": ex["intent"], "confidence": ex["confidence"],
                 "entities": ex["entities"]}, ensure_ascii=False)})
        ctx = ""
        if context and context.get("last_intent"):
            ctx = f"上一轮意图: {context['last_intent']}\n"
        try:
            result = await llm_client.chat_json(
                messages=few_shot + [{"role": "user",
                                      "content": f"{ctx}用户输入: {text}"}],
                system_prompt=self.SYSTEM_PROMPT,
            )
            return IntentResult(
                intent=IntentType(result.get("intent", "unknown")),
                confidence=float(result.get("confidence", 0.5)),
                entities=result.get("entities", {}),
                raw_text=text,
            )
        except Exception:
            return self._fallback(text)

    def _fallback(self, text: str) -> IntentResult:
        """Fallback: keyword-based rule matching."""
        t = text.lower()
        rules = {
            IntentType.TRANSFER_HUMAN: ["转人工", "人工客服"],
            IntentType.ORDER_QUERY: ["订单", "发货", "物流", "快递"],
            IntentType.COMPLAINT: ["投诉", "太差", "垃圾"],
            IntentType.REFUND: ["退货", "退款", "换货"],
            IntentType.ACCOUNT: ["密码", "账号", "登录"],
            IntentType.TECHNICAL: ["闪退", "bug", "报错", "故障"],
            IntentType.GREETING: ["你好", "在吗", "hello"],
        }
        for intent, kws in rules.items():
            if any(k in t for k in kws):
                return IntentResult(intent=intent, confidence=0.6,
                                    entities={}, raw_text=text)
        return IntentResult(intent=IntentType.UNKNOWN, confidence=0.3,
                            entities={}, raw_text=text)
Design notes: few-shot prompting improves classification accuracy; the rule-based fallback keeps the agent working when the LLM is unavailable; and context awareness uses the previous turn's intent to help disambiguate the current one.
26.3.3 Emotion Analysis Agent
# app/agents/emotion_agent.py
"""Emotion analysis agent."""
from dataclasses import dataclass
from enum import Enum
from app.utils.llm_client import llm_client

class EmotionType(str, Enum):
    POSITIVE = "positive"
    NEUTRAL = "neutral"
    SLIGHT_NEGATIVE = "slightly_negative"
    NEGATIVE = "negative"
    ANGRY = "angry"
    ANXIOUS = "anxious"
    CONFUSED = "confused"

@dataclass
class EmotionResult:
    emotion: EmotionType
    intensity: float
    confidence: float
    urgency: bool

class EmotionAgent:
    SYSTEM_PROMPT = """你是客户情感分析专家。
情绪类型:positive|neutral|slightly_negative|negative|angry|anxious|confused
urgency:大额退款/投诉/曝光/法律手段/辱骂 → true
返回 JSON:{"emotion":"...","intensity":0.0-1.0,
"confidence":0.0-1.0,"urgency":bool}"""

    async def analyze(self, text: str) -> EmotionResult:
        try:
            result = await llm_client.chat_json(
                messages=[{"role": "user",
                           "content": f"分析情绪:\n{text}"}],
                system_prompt=self.SYSTEM_PROMPT,
            )
            return EmotionResult(
                emotion=EmotionType(result.get("emotion", "neutral")),
                intensity=float(result.get("intensity", 0.5)),
                confidence=float(result.get("confidence", 0.7)),
                urgency=result.get("urgency", False),
            )
        except Exception:
            return self._fallback(text)

    def _fallback(self, text: str) -> EmotionResult:
        """Fallback: keyword-based heuristics."""
        t = text.lower()
        if any(w in t for w in ["垃圾", "骗子", "投诉", "曝光", "律师"]):
            return EmotionResult(EmotionType.ANGRY, 0.8, 0.7, True)
        if any(w in t for w in ["不满", "差", "慢", "失望"]):
            return EmotionResult(EmotionType.NEGATIVE, 0.6, 0.6, False)
        if any(w in t for w in ["急", "什么时候", "赶紧"]):
            return EmotionResult(EmotionType.ANXIOUS, 0.5, 0.6, False)
        if any(w in t for w in ["谢谢", "满意", "棒"]):
            return EmotionResult(EmotionType.POSITIVE, 0.5, 0.6, False)
        return EmotionResult(EmotionType.NEUTRAL, 0.3, 0.8, False)
The core value of emotion analysis is not just understanding the user: it drives the handoff decision. An angry user is routed preferentially to an experienced human agent, preventing the conflict from escalating.
26.3.4 Knowledge Retrieval Agent
# app/agents/knowledge_agent.py
"""Knowledge retrieval agent."""
import uuid
from dataclasses import dataclass, field
from typing import List, Optional
import chromadb
from app.config import settings
from app.utils.llm_client import llm_client

@dataclass
class KnowledgeItem:
    id: str
    question: str
    answer: str
    category: str
    tags: List[str] = field(default_factory=list)
    score: float = 0.0
    source: str = ""

@dataclass
class SearchResult:
    items: List[KnowledgeItem]
    query: str
    has_answer: bool = False

class KnowledgeAgent:
    def __init__(self):
        self._client = chromadb.PersistentClient(
            path=settings.CHROMA_PERSIST_DIR)
        try:
            self._collection = self._client.get_collection("cs_kb")
        except Exception:
            self._collection = self._client.create_collection("cs_kb")

    async def search(
        self, query: str,
        category: Optional[str] = None, top_k: int = 5,
    ) -> SearchResult:
        where = {"category": category} if category else None
        results = self._collection.query(
            query_texts=[query], n_results=top_k, where=where)
        items = []
        if results and results["ids"] and results["ids"][0]:
            for i, doc_id in enumerate(results["ids"][0]):
                meta = results["metadatas"][0][i]
                sim = 1 - results["distances"][0][i]
                if sim >= settings.KNOWLEDGE_SIMILARITY_THRESHOLD:
                    items.append(KnowledgeItem(
                        id=doc_id,
                        question=meta.get("question", ""),
                        answer=results["documents"][0][i],
                        category=meta.get("category", ""),
                        score=sim,
                    ))
        return SearchResult(
            items=items, query=query,
            has_answer=any(it.score >= 0.85 for it in items))

    async def generate_answer(self, query: str, sr: SearchResult) -> str:
        if not sr.items:
            return "抱歉,我暂时无法回答这个问题,建议转接人工客服。"
        ctx = "\n---\n".join(
            f"Q: {it.question}\nA: {it.answer}" for it in sr.items)
        answer = await llm_client.chat(
            messages=[{"role": "user", "content":
                       f"知识库:\n{ctx}\n\n用户问题:{query}"}],
            system_prompt="你是专业客服。只基于知识库回答,简洁专业。",
            temperature=0.3,
        )
        return answer.strip()  # strip after awaiting, not on the coroutine

    def add_knowledge(
        self, question: str, answer: str,
        category: str = "general",
        tags: Optional[List[str]] = None, source: str = "",
    ) -> str:
        doc_id = str(uuid.uuid4())
        self._collection.add(
            documents=[answer],
            # Chroma metadata values must be scalars, so join the tags
            metadatas=[{"question": question, "category": category,
                        "tags": ",".join(tags or []), "source": source}],
            ids=[doc_id])
        return doc_id
26.3.5 Dialog Management Agent (the Core)
DialogAgent is the "brain" of the system: it coordinates all the sub-agents and manages the flow of the conversation.
# app/agents/dialog_agent.py
"""Dialog management agent - the core of the system."""
import asyncio
from dataclasses import dataclass, field
from typing import Optional, List, Dict, Any
from datetime import datetime
from app.agents.intent_agent import IntentAgent, IntentResult
from app.agents.emotion_agent import EmotionAgent, EmotionResult
from app.agents.knowledge_agent import KnowledgeAgent
from app.config import settings

@dataclass
class DialogTurn:
    role: str
    content: str
    timestamp: datetime = field(default_factory=datetime.now)
    intent: Optional[str] = None
    emotion: Optional[str] = None

@dataclass
class DialogContext:
    session_id: str
    user_id: str
    turns: List[DialogTurn] = field(default_factory=list)
    current_intent: Optional[str] = None
    current_emotion: Optional[str] = None
    entities: Dict[str, Any] = field(default_factory=dict)
    metadata: Dict[str, Any] = field(default_factory=dict)

    def add_turn(self, role: str, content: str, **kwargs):
        self.turns.append(DialogTurn(role=role, content=content, **kwargs))

    def get_history_text(self, max_turns: int = 10) -> str:
        recent = self.turns[-max_turns:]
        return "\n".join(
            f"{'用户' if t.role == 'user' else '客服'}: {t.content}"
            for t in recent)

    @property
    def turn_count(self) -> int:
        return sum(1 for t in self.turns if t.role == "user")

@dataclass
class AgentResponse:
    content: str
    intent: str
    emotion: str
    confidence: float
    should_escalate: bool = False
    escalate_reason: str = ""
    suggested_actions: List[str] = field(default_factory=list)

class DialogAgent:
    def __init__(self):
        self._intent = IntentAgent()
        self._emotion = EmotionAgent()
        self._knowledge = KnowledgeAgent()

    async def process(
        self, user_msg: str, ctx: DialogContext
    ) -> AgentResponse:
        """Handle one user message."""
        # Run intent recognition and emotion analysis in parallel
        intent_r, emotion_r = await asyncio.gather(
            self._intent.classify(user_msg,
                                  {"last_intent": ctx.current_intent}),
            self._emotion.analyze(user_msg),
        )
        ctx.add_turn("user", user_msg,
                     intent=intent_r.intent.value,
                     emotion=emotion_r.emotion.value)
        ctx.current_intent = intent_r.intent.value
        ctx.current_emotion = emotion_r.emotion.value
        ctx.entities.update(intent_r.entities)
        # Check whether to escalate to a human agent
        esc = self._check_escalation(ctx, intent_r, emotion_r)
        if esc:
            return AgentResponse(
                content=self._esc_msg(emotion_r),
                intent=intent_r.intent.value,
                emotion=emotion_r.emotion.value,
                confidence=1.0,
                should_escalate=True,
                escalate_reason=esc,
            )
        # Route to an intent-specific handler
        reply = await self._route(intent_r, emotion_r, ctx)
        # Suggest a handoff once the dialog runs too long;
        # do this before recording the turn so history stays accurate
        if ctx.turn_count >= settings.MAX_DIALOG_TURNS:
            reply += "\n\n您的问题较复杂,建议转接人工客服。"
        ctx.add_turn("assistant", reply)
        return AgentResponse(
            content=reply,
            intent=intent_r.intent.value,
            emotion=emotion_r.emotion.value,
            confidence=intent_r.confidence,
        )
    def _check_escalation(self, ctx, intent, emotion) -> str | None:
        if intent.intent.value == "transfer":
            return "用户要求转人工"
        if emotion.emotion.value == "angry" and emotion.intensity > 0.7:
            return "用户情绪愤怒"
        if emotion.urgency:
            return "检测到紧急情况"
        neg = [t for t in ctx.turns[-4:]
               if t.emotion in ("negative", "angry")]
        if len(neg) >= 3:
            return "连续多轮用户不满意"
        return None

    def _esc_msg(self, emotion) -> str:
        if emotion.emotion.value == "angry":
            return "非常抱歉!立即为您转接高级客服代表,请稍候。"
        if emotion.urgency:
            return "理解您的急切,正在优先转接人工客服。"
        return "好的,正在为您转接人工客服,请稍候。"

    async def _route(self, intent, emotion, ctx) -> str:
        handlers = {
            "greeting": self._greet,
            "faq": self._faq,
            "product": self._product,
            "order": self._order,
            "complaint": self._complaint,
            "refund": self._refund,
            "technical": self._technical,
            "account": self._account,
        }
        h = handlers.get(intent.intent.value, self._unknown)
        return await h(intent, emotion, ctx)

    async def _greet(self, intent, emotion, ctx) -> str:
        import random
        return random.choice([
            "您好!很高兴为您服务,请问有什么可以帮到您的?",
            "您好!我是智能客服小助手,有什么问题都可以问我~",
        ])

    async def _faq(self, intent, emotion, ctx) -> str:
        sr = await self._knowledge.search(intent.raw_text)
        return await self._knowledge.generate_answer(intent.raw_text, sr)

    async def _product(self, intent, emotion, ctx) -> str:
        q = f"{ctx.entities.get('product_name', '')} {intent.raw_text}"
        sr = await self._knowledge.search(q, category="product")
        return await self._knowledge.generate_answer(q, sr)

    async def _order(self, intent, emotion, ctx) -> str:
        oid = ctx.entities.get("order_id")
        if not oid:
            return "好的,我来帮您查询。请问您的订单号是多少?"
        # Mock lookup; a real system would call the order service here
        return (f"已找到订单 **{oid}**:\n"
                f"📦 商品:无线蓝牙耳机 Pro\n"
                f"💰 金额:¥299.00\n"
                f"🚚 状态:已发货,预计12-20送达\n"
                f"📍 当前:北京分拣中心")

    async def _complaint(self, intent, emotion, ctx) -> str:
        reason = ctx.entities.get("reason", "未明确")
        r = (f"非常抱歉因「{reason}」给您带来不好的体验。\n\n"
             f"为了更好地帮助您:\n"
             f"1. 请提供订单号\n"
             f"2. 您希望如何处理?(退款/换货/补偿)\n"
             f"我可以为您创建投诉工单,24小时内跟进。")
        if emotion.emotion.value in ("angry", "negative"):
            r += "\n\n也可以立即转接人工客服。"
        return r

    async def _refund(self, intent, emotion, ctx) -> str:
        name = ctx.entities.get("product_name", "该商品")
        return (f"好的,帮您处理{name}的退款申请。\n\n"
                "请确认:\n"
                "1. 是否在7天退换期内?\n"
                "2. 商品是否完好?\n"
                "3. 是否有购买凭证?\n\n"
                "满足以上条件请提供订单号,立即为您创建退款工单。\n"
                "💡 退款一般3-5个工作日原路返回。")

    async def _technical(self, intent, emotion, ctx) -> str:
        sr = await self._knowledge.search(intent.raw_text, category="tech")
        if sr.has_answer:
            ans = await self._knowledge.generate_answer(intent.raw_text, sr)
            return ans + "\n\n如无法解决,请提供设备型号和系统版本。"
        return ("了解您遇到了技术问题,请提供:\n"
                "1. 设备型号和系统版本\n"
                "2. App版本号\n"
                "3. 问题出现的时间和频率\n"
                "4. 错误提示截图或文字")

    async def _account(self, intent, emotion, ctx) -> str:
        return ("好的,常见账户问题处理:\n\n"
                "**忘记密码**:登录页→忘记密码→手机验证码重置\n"
                "**手机号更换**:请提供新旧手机号,创建工单处理\n"
                "**账号被盗**:请立即联系人工客服冻结账号\n\n"
                "请告诉我您具体遇到了什么问题?")

    async def _unknown(self, intent, emotion, ctx) -> str:
        if intent.confidence < 0.5:
            return ("抱歉,我可能没有完全理解。能否换个方式描述?\n"
                    "比如:\n- 咨询的产品或服务\n"
                    "- 遇到的具体问题\n- 订单号(如适用)")
        sr = await self._knowledge.search(intent.raw_text)
        if sr.has_answer:
            return await self._knowledge.generate_answer(intent.raw_text, sr)
        return "感谢咨询。建议转接人工客服以获得更精准的帮助。"
26.3.6 Ticket System
# app/models/ticket.py
"""Ticket data model."""
from dataclasses import dataclass, field
from datetime import datetime
from enum import Enum
import uuid

class TicketStatus(str, Enum):
    OPEN = "open"
    IN_PROGRESS = "in_progress"
    RESOLVED = "resolved"
    CLOSED = "closed"

class TicketPriority(str, Enum):
    LOW = "low"
    MEDIUM = "medium"
    HIGH = "high"
    URGENT = "urgent"

@dataclass
class Ticket:
    id: str = field(default_factory=lambda: str(uuid.uuid4())[:8])
    user_id: str = ""
    session_id: str = ""
    title: str = ""
    description: str = ""
    category: str = "other"
    priority: TicketPriority = TicketPriority.MEDIUM
    status: TicketStatus = TicketStatus.OPEN
    assigned_agent: str | None = None
    created_at: datetime = field(default_factory=datetime.now)
    dialog_summary: str = ""
    tags: list = field(default_factory=list)
# app/services/ticket_service.py
"""Ticket service."""
from typing import List, Optional
from app.models.ticket import Ticket, TicketStatus, TicketPriority

class TicketService:
    def __init__(self):
        self._tickets: dict[str, Ticket] = {}

    async def create_ticket(
        self, user_id: str, session_id: str, title: str,
        description: str, category: str = "other",
        priority: str = "medium", dialog_summary: str = "",
    ) -> Ticket:
        ticket = Ticket(
            user_id=user_id, session_id=session_id,
            title=title, description=description,
            category=category,
            priority=TicketPriority(priority),
            dialog_summary=dialog_summary,
        )
        self._tickets[ticket.id] = ticket
        return ticket

    async def auto_create(self, context, reason: str = "") -> Ticket:
        """Create a ticket automatically from the dialog context."""
        priority = "high" if context.current_emotion == "angry" else "medium"
        cat_map = {
            "refund": "refund", "complaint": "complaint",
            "technical": "technical", "account": "account",
        }
        category = cat_map.get(context.current_intent, "other")
        return await self.create_ticket(
            user_id=context.user_id,
            session_id=context.session_id,
            title=f"[{category}] {context.current_intent} - {reason}",
            description=context.get_history_text(10),
            category=category,
            priority=priority,
        )

    async def get_ticket(self, tid: str) -> Optional[Ticket]:
        return self._tickets.get(tid)

    async def update_status(self, tid: str, status: TicketStatus):
        t = self._tickets.get(tid)
        if t:
            t.status = status

    async def list_tickets(
        self, user_id: Optional[str] = None,
        status: Optional[TicketStatus] = None,
    ) -> List[Ticket]:
        tickets = list(self._tickets.values())
        if user_id:
            tickets = [t for t in tickets if t.user_id == user_id]
        if status:
            tickets = [t for t in tickets if t.status == status]
        return sorted(tickets, key=lambda t: t.created_at, reverse=True)
26.3.7 Human Handoff Management
# app/services/human_handoff.py
"""Human handoff management."""
from dataclasses import dataclass, field
from datetime import datetime
from typing import Optional, Dict, List
import uuid

@dataclass
class HumanAgent:
    agent_id: str
    name: str
    skills: List[str] = field(default_factory=list)
    is_online: bool = True
    max_sessions: int = 5
    current_sessions: int = 0

    @property
    def available(self) -> bool:
        return self.is_online and self.current_sessions < self.max_sessions

@dataclass
class HandoffRequest:
    request_id: str
    session_id: str
    user_id: str
    reason: str
    priority: int = 0
    status: str = "queuing"  # queuing|connected|completed
    assigned_agent: Optional[str] = None
    created_at: datetime = field(default_factory=datetime.now)

class HumanHandoffManager:
    def __init__(self):
        self._agents: Dict[str, HumanAgent] = {}
        self._queue: List[HandoffRequest] = []
        self._active: Dict[str, HandoffRequest] = {}

    def register_agent(self, agent: HumanAgent):
        self._agents[agent.agent_id] = agent

    async def request_handoff(
        self, session_id: str, user_id: str, reason: str,
        priority: int = 0, dialog_context: Optional[dict] = None,
    ) -> HandoffRequest:
        req = HandoffRequest(
            request_id=str(uuid.uuid4())[:8],
            session_id=session_id, user_id=user_id,
            reason=reason, priority=priority,
        )
        agent = self._find_agent(dialog_context or {})
        if agent:
            return self._assign(req, agent)
        self._queue.append(req)
        self._queue.sort(key=lambda r: r.priority, reverse=True)
        return req

    def _find_agent(self, ctx: dict) -> Optional[HumanAgent]:
        """Pick a free agent: prefer a skill match, then the least busy."""
        available = [a for a in self._agents.values() if a.available]
        if not available:
            return None
        intent = ctx.get("current_intent", "")
        for a in available:
            if intent in a.skills:
                return a
        return min(available, key=lambda a: a.current_sessions)

    def _assign(self, req: HandoffRequest, agent: HumanAgent):
        req.status = "connected"
        req.assigned_agent = agent.agent_id
        agent.current_sessions += 1
        self._active[req.request_id] = req
        return req

    async def complete_handoff(self, req_id: str):
        req = self._active.pop(req_id, None)
        if req and req.assigned_agent:
            agent = self._agents.get(req.assigned_agent)
            if agent:
                agent.current_sessions -= 1
        # Assign the next queued request, but only pop it once an
        # agent is actually free, so no request is silently dropped
        if self._queue:
            agent = self._find_agent({})
            if agent:
                next_req = self._queue.pop(0)
                self._assign(next_req, agent)

    def get_queue_status(self) -> dict:
        return {
            "queue_length": len(self._queue),
            "active_handoffs": len(self._active),
            "available_agents": sum(
                1 for a in self._agents.values() if a.available),
        }
26.3.8 FastAPI Application Entry Point
# app/main.py
"""Intelligent customer service system - FastAPI entry point."""
from contextlib import asynccontextmanager
from fastapi import FastAPI, WebSocket, WebSocketDisconnect
from fastapi.middleware.cors import CORSMiddleware
from pydantic import BaseModel
from typing import Optional, List
import json
import uuid
from app.config import settings
from app.agents.dialog_agent import DialogAgent, DialogContext
from app.agents.knowledge_agent import KnowledgeAgent
from app.services.ticket_service import TicketService
from app.services.human_handoff import HumanHandoffManager, HumanAgent

dialog_agent = DialogAgent()
knowledge_agent = KnowledgeAgent()
ticket_service = TicketService()
handoff_manager = HumanHandoffManager()
sessions: dict[str, DialogContext] = {}

@asynccontextmanager
async def lifespan(app: FastAPI):
    # Register human agents
    handoff_manager.register_agent(HumanAgent(
        "agent_001", "张主管", ["complaint", "refund"]))
    handoff_manager.register_agent(HumanAgent(
        "agent_002", "李工程师", ["technical", "product"]))
    handoff_manager.register_agent(HumanAgent(
        "agent_003", "王客服", ["faq", "order", "account"],
        max_sessions=8))
    _init_kb()
    print(f"🚀 {settings.APP_NAME} v{settings.APP_VERSION} 启动完成")
    yield

app = FastAPI(title=settings.APP_NAME,
              version=settings.APP_VERSION, lifespan=lifespan)
app.add_middleware(CORSMiddleware, allow_origins=["*"],
                   allow_credentials=True,
                   allow_methods=["*"], allow_headers=["*"])

class ChatRequest(BaseModel):
    user_id: str
    message: str
    session_id: Optional[str] = None

@app.get("/health")
async def health():
    return {"status": "ok", "version": settings.APP_VERSION}

@app.post("/api/v1/chat")
async def chat(req: ChatRequest):
    """Send a message and get a reply."""
    sid = req.session_id or str(uuid.uuid4())[:8]
    if sid not in sessions:
        sessions[sid] = DialogContext(session_id=sid, user_id=req.user_id)
    ctx = sessions[sid]
    resp = await dialog_agent.process(req.message, ctx)
    ticket_id = None
    if resp.should_escalate:
        ho = await handoff_manager.request_handoff(
            session_id=sid, user_id=req.user_id,
            reason=resp.escalate_reason,
            priority=10 if resp.emotion == "angry" else 5,
            dialog_context={
                "current_intent": ctx.current_intent,
                "current_emotion": ctx.current_emotion,
            },
        )
        ticket = await ticket_service.auto_create(
            ctx, reason=resp.escalate_reason)
        ticket_id = ticket.id
        if ho.status == "connected":
            resp.content += "\n\n✅ 已转接人工客服,请稍候。"
        else:
            resp.content += "\n\n📋 正在排队等待,请耐心等待。"
    return {
        "session_id": sid, "reply": resp.content,
        "intent": resp.intent, "emotion": resp.emotion,
        "confidence": resp.confidence,
        "should_escalate": resp.should_escalate,
        "ticket_id": ticket_id,
    }
@app.post("/api/v1/knowledge")
async def add_knowledge(
    question: str, answer: str,
    category: str = "general",
    tags: Optional[List[str]] = None,
):
    doc_id = knowledge_agent.add_knowledge(
        question=question, answer=answer,
        category=category, tags=tags)
    return {"id": doc_id, "status": "created"}

@app.websocket("/ws/chat/{sid}")
async def ws_chat(ws: WebSocket, sid: str):
    """WebSocket real-time chat."""
    await ws.accept()
    try:
        uid = None
        while True:
            data = json.loads(await ws.receive_text())
            if not uid:
                uid = data.get("user_id", "anon")
            if sid not in sessions:
                sessions[sid] = DialogContext(session_id=sid, user_id=uid)
            resp = await dialog_agent.process(data["message"], sessions[sid])
            await ws.send_json({
                "reply": resp.content, "intent": resp.intent,
                "emotion": resp.emotion,
                "should_escalate": resp.should_escalate,
            })
    except WebSocketDisconnect:
        pass

def _init_kb():
    """Seed the knowledge base with starter entries."""
    items = [
        ("你们支持哪些支付方式?",
         "支持:微信支付、支付宝、银联卡、花呗分期、京东白条。即时到账。",
         "faq", ["支付", "付款"]),
        ("退货政策是什么?",
         "7天无理由退换。15天质量问题包换。30天包修。流程:我的订单→申请退货→审核→寄回→3-5天退款。",
         "faq", ["退货", "退款"]),
        ("App闪退怎么解决?",
         "1. 更新App 2. 清除缓存 3. 重启手机 4. 重装App。如仍无法解决,提供设备型号。",
         "technical", ["闪退", "崩溃"]),
        ("无线蓝牙耳机Pro有什么特点?",
         "40mm驱动单元,主动降噪35dB,蓝牙5.3,续航36小时,快充10分=2小时,IPX5防水。",
         "product", ["耳机"]),
    ]
    for q, a, cat, tags in items:
        knowledge_agent.add_knowledge(q, a, cat, tags)

if __name__ == "__main__":
    import uvicorn
    uvicorn.run("app.main:app", host="0.0.0.0", port=8000,
                reload=settings.DEBUG)
26.4 Testing
26.4.1 Intent Recognition Tests
# tests/test_intent.py
"""Intent recognition tests."""
import pytest
from app.agents.intent_agent import IntentAgent, IntentType

@pytest.fixture
def intent_agent():
    return IntentAgent()

class TestIntentAgent:
    @pytest.mark.asyncio
    async def test_faq_intent(self, intent_agent):
        result = await intent_agent.classify("你们支持什么支付方式?")
        assert result.intent == IntentType.FAQ
        assert result.confidence > 0.5

    @pytest.mark.asyncio
    async def test_order_intent(self, intent_agent):
        result = await intent_agent.classify(
            "我的订单 ORD-2024-1234 什么时候发货?")
        assert result.intent == IntentType.ORDER_QUERY
        assert "order_id" in result.entities

    @pytest.mark.asyncio
    async def test_complaint_intent(self, intent_agent):
        result = await intent_agent.classify(
            "我要投诉,你们的快递太慢了!")
        assert result.intent == IntentType.COMPLAINT
        assert result.confidence > 0.7

    @pytest.mark.asyncio
    async def test_transfer_intent(self, intent_agent):
        result = await intent_agent.classify("转人工客服!")
        assert result.intent == IntentType.TRANSFER_HUMAN

    def test_fallback(self, intent_agent):
        # _fallback is synchronous, so no event loop is needed
        result = intent_agent._fallback("我要退货退款")
        assert result.intent == IntentType.REFUND
26.4.2 Dialog Management Tests
# tests/test_dialog.py
"""Dialog management tests."""
import pytest
from app.agents.dialog_agent import DialogAgent, DialogContext

@pytest.fixture
def dialog_agent():
    return DialogAgent()

@pytest.fixture
def context():
    return DialogContext(session_id="test", user_id="test_user")

class TestDialogAgent:
    @pytest.mark.asyncio
    async def test_greeting(self, dialog_agent, context):
        resp = await dialog_agent.process("你好", context)
        assert resp.intent == "greeting"
        assert not resp.should_escalate

    @pytest.mark.asyncio
    async def test_multi_turn(self, dialog_agent, context):
        await dialog_agent.process("你好", context)
        assert context.turn_count == 1
        r2 = await dialog_agent.process(
            "你们支持什么支付方式?", context)
        assert context.turn_count == 2
        assert r2.intent == "faq"

    @pytest.mark.asyncio
    async def test_escalation(self, dialog_agent, context):
        resp = await dialog_agent.process("转人工!", context)
        assert resp.should_escalate is True
26.4.3 API Integration Tests
# tests/test_api.py
"""API integration tests."""
import pytest
from httpx import AsyncClient, ASGITransport
from app.main import app

@pytest.mark.anyio
async def test_health():
    async with AsyncClient(
        transport=ASGITransport(app=app),
        base_url="http://test"
    ) as client:
        r = await client.get("/health")
        assert r.status_code == 200

@pytest.mark.anyio
async def test_chat():
    async with AsyncClient(
        transport=ASGITransport(app=app),
        base_url="http://test"
    ) as client:
        r = await client.post("/api/v1/chat", json={
            "user_id": "test", "message": "你好"})
        assert r.status_code == 200
        data = r.json()
        assert data["intent"] == "greeting"
        assert "session_id" in data
26.5 Deployment
26.5.1 Docker Deployment
FROM python:3.11-slim
WORKDIR /app
RUN apt-get update && apt-get install -y --no-install-recommends \
build-essential && rm -rf /var/lib/apt/lists/*
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY app/ ./app/
EXPOSE 8000
CMD ["uvicorn", "app.main:app", "--host", "0.0.0.0", "--port", "8000"]
# docker-compose.yml
version: '3.8'
services:
  customer-service:
    build: .
    ports:
      - "8000:8000"
    environment:
      - CS_LLM_API_KEY=${LLM_API_KEY}
      - CS_REDIS_URL=redis://redis:6379/0
      - CS_DATABASE_URL=postgresql://cs:cs123@postgres:5432/cs
    depends_on:
      - redis
      - postgres
  redis:
    image: redis:7-alpine
    ports:
      - "6379:6379"
  postgres:
    image: postgres:15-alpine
    environment:
      POSTGRES_DB: cs
      POSTGRES_USER: cs
      POSTGRES_PASSWORD: cs123
    ports:
      - "5432:5432"
    volumes:
      - pgdata:/var/lib/postgresql/data
volumes:
  pgdata:
26.6 Lessons Learned
26.6.1 Pitfalls
Pitfall 1: fuzzy intent boundaries
Is "your product has a quality problem" an FAQ or a complaint? Our solution is an intent priority order: complaint > refund > faq. When several intents plausibly match, prefer the one with higher stakes for the user. Multi-turn context also helps disambiguate: if the user has just been discussing product quality, lean toward complaint.
Pitfall 2: knowledge-base "hallucinations"
The LLM occasionally fabricates information that is not in the knowledge base. Our fix combines a strict prompt constraint ("answer only from the provided knowledge base content; do not invent information") with a source annotation appended to the end of each answer so users can verify it.
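A minimal sketch of that grounding pattern, using hypothetical helpers `build_grounded_prompt` and `format_answer` (the chapter's KnowledgeAgent inlines the same idea without separate helpers):

```python
# Sketch of "answer only from the KB + cite sources".
# The KB item shape {"id", "question", "answer"} is assumed here.

def build_grounded_prompt(query: str, kb_items: list[dict]) -> str:
    """Embed retrieved KB entries, tagged with their ids, into the prompt."""
    ctx = "\n---\n".join(
        f"[{it['id']}] Q: {it['question']}\nA: {it['answer']}"
        for it in kb_items)
    return ("只基于下面知识库内容回答,不编造信息;无法回答时明确说明。\n"
            f"知识库:\n{ctx}\n\n用户问题:{query}")

def format_answer(answer: str, kb_items: list[dict]) -> str:
    """Append a source annotation so users can verify the answer."""
    sources = ", ".join(it["id"] for it in kb_items)
    return f"{answer}\n\n(信息来源: {sources})"
```

The prompt builder and the annotation are deliberately separate: the first constrains generation, the second makes the grounding visible to the user.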
Pitfall 3: timing the handoff to a human
Hand off too early and you waste human agents; too late, and user frustration boils over. We fuse multiple signals into the decision:
- Intent signal: the user explicitly asks for a human
- Emotion signal: three consecutive turns of negative emotion
- Efficiency signal: the dialog exceeds 20 turns without resolution
- Urgency signal: large sums of money or legal threats are involved
Any one of the four signals is enough to trigger the handoff.
Pitfall 4: context isolation across concurrent sessions
With many users chatting at once, cross-talk between contexts is a serious bug. We namespace everything strictly by session_id and keep each session's state in its own in-memory dictionary entry.
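One way to sketch that isolation, with a per-session `asyncio.Lock` added on top of the chapter's plain dictionary; this is an assumption for a single-process deployment (a multi-process setup would need Redis or similar shared storage):

```python
import asyncio

class SessionStore:
    """Per-session state plus a lock so concurrent messages in the
    same session are serialized while different sessions run freely."""

    def __init__(self):
        self._contexts: dict[str, dict] = {}
        self._locks: dict[str, asyncio.Lock] = {}

    def get_lock(self, session_id: str) -> asyncio.Lock:
        return self._locks.setdefault(session_id, asyncio.Lock())

    def get_context(self, session_id: str, user_id: str) -> dict:
        return self._contexts.setdefault(
            session_id, {"user_id": user_id, "turns": []})

async def handle(store: SessionStore, session_id: str,
                 user_id: str, message: str) -> dict:
    async with store.get_lock(session_id):  # serialize per session
        ctx = store.get_context(session_id, user_id)
        ctx["turns"].append({"role": "user", "content": message})
        return ctx
```

Each session's turns accumulate only in its own entry, so two sessions interleaving messages can never see each other's history.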
26.6.2 Performance Optimization Notes
- Run intent recognition and emotion analysis in parallel: asyncio.gather makes the two LLM calls concurrent, cutting response time by roughly 40%
- Persist knowledge retrieval with ChromaDB: avoids rebuilding the vector index on every startup
- Precompile few-shot examples: build the prompt template when the agent is initialized rather than reformatting it on every request
- Long-lived WebSocket connections: cut HTTP handshake overhead and enable streamed output
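The first optimization can be demonstrated in isolation: two simulated LLM calls (plain `asyncio.sleep` stand-ins, an assumption of this sketch) run under `asyncio.gather`, so total latency approaches the slower call rather than the sum of both:

```python
import asyncio
import time

async def fake_intent_call() -> str:
    await asyncio.sleep(0.2)   # stand-in for an LLM round trip
    return "faq"

async def fake_emotion_call() -> str:
    await asyncio.sleep(0.2)   # second, independent round trip
    return "neutral"

async def main() -> float:
    start = time.perf_counter()
    # Both coroutines are awaited concurrently, not one after the other
    intent, emotion = await asyncio.gather(
        fake_intent_call(), fake_emotion_call())
    return time.perf_counter() - start

elapsed = asyncio.run(main())
# roughly 0.2s here, versus roughly 0.4s for sequential awaits
```

The same pattern appears in `DialogAgent.process`, where classification and emotion analysis are independent and therefore safe to parallelize.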
26.6.3 Key Design Patterns
| Pattern | Where applied | Effect |
|---|---|---|
| Graceful degradation | Fall back to rule matching when the LLM is unavailable | Availability up from 99% to 99.9% |
| Multi-signal fusion | Handoff decision | 60% fewer misjudged handoffs |
| Skill-based routing | Human agent assignment | Customer satisfaction up 25% |
| Retrieval-augmented generation | FAQ answers | Answer accuracy 92% → 96% |
26.6.4 Future Directions
- Multimodal support: image recognition (users uploading screenshots of faults) and voice input
- Proactive service: predict problems from user behavior and push solutions ahead of time
- Self-learning knowledge base: automatically distill new FAQs from human agents' resolution records
- Personalization: adapt answer style and technical depth to the user's profile
Chapter summary: intelligent customer service is one of the most mature applications of agent technology. The keys are multi-agent coordination (intent, emotion, and knowledge each with a single job) and smooth human-machine switching (knowing when to step in and when to step aside). With a sound architecture and a degradation strategy, you can build a customer service system that is both intelligent and reliable.