Conversation Analysis
WorkWave
Turning thousands of customer conversations into structured insight, generated automatically, without slowing agents down.
Overview
Inside WorkWave's Communication Center, teams handle thousands of customer conversations containing signals about sentiment, recurring issues, and service history. That insight was usually lost unless agents manually summarized interactions and copied notes into the host CRM, PestPac. In practice, documentation was inconsistent, records fell out of sync, and managers lacked visibility into trends.
This project explored how automatic summaries, topics, and sentiment could preserve customer knowledge and keep systems aligned without slowing agents down. The challenge was introducing AI in a way that protected speed, minimized friction, and respected how work already gets done.
Role & design principles
I led end-to-end design for Conversation Analysis, partnering with product and engineering to translate an AI initiative into a workflow grounded in real agent behavior. My focus was defining interaction patterns, system states, and permissions while ensuring the feature fit naturally inside existing inbox and archive flows.
We aligned around a small set of principles:
- Save time, don't add work: automation removes steps.
- Stay in the workflow: no new tools to manage.
- Human control first: outputs are editable.
- Build trust: AI assists, it doesn't override.
- Auto-sync knowledge: records stay aligned across systems.
Discovery & research
I mapped how conversations were actually closed in production. Interviews and workflow reviews showed manual summaries were often skipped because agents prioritize speed. Managers confirmed they had no scalable way to understand patterns across interactions. Insight lived in raw message history, not structured data.
The key insight: AI would only be adopted if it attached itself to behavior agents already perform. Any extra dashboard or review queue would fail. That finding anchored the direction: analysis had to run in the background, appear at the right moment, and stay editable so humans retained final control.
Early concepts
Early exploration focused on flows and permissions. Design, product, and engineering collaborated on a consent model that rolled Communication Center settings into CRM controls, ensuring company and customer opt-in were explicit. Because the feature touched AI and customer data, transparency and permissions were treated as core UX problems.
I explored multiple entry points for automated summaries, including an always-available trigger. We rejected that direction due to system cost and behavioral risk: summaries needed to feel purposeful, not like a novelty action. Anchoring analysis to the wrap-up phase after an interaction balanced intent, relevance, and performance. Omnichannel complexity, especially phone transcription, reinforced the need for editable outputs and clear system states across variations in data quality.
Iterations & decisions
With a working prototype, we launched a closed beta with enterprise customers to validate trust and usefulness. I partnered with product and support to review ratings across summaries, topics, and sentiment, targeting ~80% accuracy as an early viability benchmark. Separate rating controls helped isolate weak areas and guide iteration.
As confidence grew, decisions shifted from whether the AI worked to how it should behave in production. I worked with engineering to define editing rules, limiting topic and sentiment changes to admins to protect reporting integrity. Close collaboration kept the experience lightweight while syncing results end-to-end with the CRM: generated summaries automatically update records and remain pinned to conversation history, making insight visible without adding new steps.
Outcomes
Conversation Analysis launched in early 2025 and gained traction immediately. Accuracy benchmarks were exceeded, and customer feedback focused on expansion rather than reliability, a strong signal of trust.
Adoption scaled alongside platform growth: 3M+ messages processed (+94% YoY) and 1.5M archived conversations (+120% YoY). Each archived conversation previously required ~90 seconds of documentation. Automation translated to an estimated 37,500 hours saved in one year.
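The savings estimate follows directly from the volume and per-conversation figures above. A minimal sketch of the arithmetic (figures from this case study; variable names are illustrative):

```python
# Estimated documentation time saved in one year by automatic summaries.
archived_conversations = 1_500_000  # archived conversations in the year
seconds_per_summary = 90            # manual documentation time per conversation

hours_saved = archived_conversations * seconds_per_summary / 3600
print(f"{hours_saved:,.0f} hours saved")  # 37,500 hours
```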
Agents spent less time copying notes across systems, while managers gained more consistent records without enforcement overhead. The feature connected daily operations to long-term reporting without changing user behavior, establishing a scalable model for AI workflows that improve data quality while respecting frontline speed.
Takeaways
- As WorkWave's first AI-assisted feature, this project set a standard for assistive, workflow-grounded AI.
- Trust came from control and predictability, not novelty.
- Adoption followed existing habits, not new training.
- Tight cross-functional collaboration was essential to translating AI capability into usable UX.
- The best automation disappears. The product simply feels smarter.