Client snapshot
The challenge
Knowledge scattered across three tools, support capacity capped.
The CEO wanted to optimize support capacity without sacrificing the quality and tone the team had built their reputation on. Three problems sat behind that goal:
- Useful answers lived in three different places — the Helpwise help center and history, Notion product / support pages, and a handful of Slack channels (#support, #cxteams, #faqs, #product-talk, #releases, #c-unit)
- Every reply needed citations and on-brand tone — not a generic AI answer
- Releases ship constantly, so any knowledge layer that lagged the product would do more harm than good
Off-the-shelf chat widgets weren't going to work. The team needed an in-Helpwise workflow that drafted a real reply, cited where it came from, and let an agent ship it on brand.
The goal
An n8n workflow that drafts, cites, and waits for a human.
- Build a vector database from every relevant knowledge source
- Generate a draft reply with source citations and confidence metadata in under 30 seconds
- Surface the draft inside Helpwise so an agent can approve, edit, reject, or escalate
- Log every interaction to Google Sheets for auditability and tuning
- Refresh the knowledge base daily, with a priority lane for releases
- Handle at least five concurrent tickets without slowing down
Solution at a glance
One workflow, one vector DB, three sources, a human in the loop.
An n8n workflow listens for new Helpwise email and chat tickets, queries a unified vector database built from Helpwise history, Notion docs, and Slack channels, drafts a response with source citations, and posts it inside Helpwise for an agent to approve before sending.
- Trigger: Helpwise webhook (email + chat)
- Retrieval: embeddings query across Helpwise + Notion + Slack indexes
- Draft: top-tier LLM with retrieved context, citations, and a confidence score
- Review: draft surfaced inside Helpwise; agent approves, edits, rejects, or escalates
- Delivery: sent through the original channel (email reply or chat beacon)
- Logging: full interaction recorded to Google Sheets
- Refresh: nightly indexing job, plus a fast-lane for new releases
How we did it
Indexing first, drafts second, review last.
- Source scoping and access. Walked the team through every place an answer might live, mapped which Notion sub-pages and Slack channels actually carried support-grade signal, and locked down API scopes for Helpwise, Notion, Slack, Google Sheets, and the embeddings provider.
- Vector database. Built a single index covering all three sources with normalized metadata (source, URL, channel, last-updated, owner). Chunking and embedding tuned per source so Slack messages don't drown out Notion long-form pages.
- Draft pipeline. n8n receives a ticket, runs the retrieval step, hands top-K chunks to a top-tier LLM, and asks for a draft response, source citations, a confidence score, and the snippets it actually used.
- Human review. The draft is posted inside Helpwise next to the original ticket so the agent can approve, edit, reject, or escalate in one click. No customer ever sees a raw AI reply.
- Delivery and logging. The final response goes out through the original channel (email reply or chat beacon) and a complete record of the interaction lands in Google Sheets.
- Daily refresh. A scheduled n8n job pulls new and updated content, regenerates embeddings, and updates the index. A separate fast-lane re-indexes release notes within minutes of publication.
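The retrieval-and-draft step above can be sketched in Python. This is an illustrative outline, not the production workflow: the keyword-overlap scorer stands in for the real embeddings query, and the names (`Chunk`, `build_prompt`) are assumptions for the sketch.

```python
from dataclasses import dataclass

@dataclass
class Chunk:
    text: str
    source: str  # "helpwise" | "notion" | "slack"
    url: str

def score(query: str, chunk: Chunk) -> int:
    # Stand-in for embeddings similarity: count shared words.
    return len(set(query.lower().split()) & set(chunk.text.lower().split()))

def build_prompt(ticket: str, chunks: list[Chunk], k: int = 3) -> str:
    """Rank chunks, keep top-K, and assemble an LLM prompt with citations."""
    top = sorted(chunks, key=lambda c: score(ticket, c), reverse=True)[:k]
    context = "\n".join(
        f"[{i + 1}] ({c.source}) {c.text} — {c.url}" for i, c in enumerate(top)
    )
    return (
        "Draft a reply in the team's voice. Cite sources as [n].\n"
        f"Context:\n{context}\n\nTicket:\n{ticket}"
    )
```

In production, `score` is replaced by a vector-store query, and the `source`/`url` metadata on each chunk is what makes the citations in the draft possible.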
What it does
Behavior, end to end.
Knowledge sources indexed
- Helpwise: public help center, past customer emails, past chat conversations
- Notion: product workspace — Customer Support and Product & Dev sections, plus any sub-pages flagged by the team
- Slack:
#support, #cxteams, #faqs, #product-talk, #releases, #c-unit
What every draft includes
- A complete proposed reply, written in the team's voice
- Source citations — which articles, Notion pages, or Slack threads were used
- A confidence score and the retrieved snippets that fed the draft
- A category tag the agent can override before sending
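Because the LLM is asked for structured output (reply, citations, confidence, snippets, category), the workflow can validate the draft before showing it to an agent. A minimal sketch of that check, assuming a JSON response shape — the field names here are illustrative, not the production contract:

```python
import json

REQUIRED = {"reply", "citations", "confidence", "snippets", "category"}

def parse_draft(raw: str) -> dict:
    """Parse and sanity-check the structured draft returned by the LLM."""
    draft = json.loads(raw)
    missing = REQUIRED - draft.keys()
    if missing:
        raise ValueError(f"draft missing fields: {sorted(missing)}")
    if not 0.0 <= draft["confidence"] <= 1.0:
        raise ValueError("confidence must be in [0, 1]")
    return draft
```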
Review options the agent has in Helpwise
- Approve and send as-is
- Edit, then send
- Reject and write a custom reply
- Escalate to a higher tier
Delivery
Email tickets are answered through the standard Helpwise email reply. Chat tickets are answered through the Helpwise chat beacon — the customer keeps a single, continuous conversation thread.
Escalation rules
Some tickets never get an AI draft.
By default every reply is human-reviewed, so the workflow keeps escalation lightweight: AI drafts are suppressed entirely for the categories where a human writes from scratch every time.
- Refund requests and billing disputes
- Critical bugs or system outages
- Account access and authentication issues
- Contract and legal questions
- Frustrated tone, detected by sentiment analysis on the inbound message
- Questions about features that haven't been released yet
Those tickets are flagged in Helpwise, routed to the right tier, and logged in Sheets with the escalation reason.
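The suppression logic reduces to a single gate before drafting. A sketch under assumptions — the category labels and the sentiment threshold below are illustrative, not the deployed values:

```python
# Categories where a human always writes from scratch (AI draft suppressed).
ESCALATE_CATEGORIES = {
    "refund", "billing_dispute", "critical_bug", "outage",
    "account_access", "legal", "unreleased_feature",
}

def should_escalate(category: str, sentiment: float) -> bool:
    """Return True when the ticket skips AI drafting entirely.

    sentiment: -1.0 (frustrated) .. 1.0 (happy); the -0.5 cutoff is
    an assumed threshold, tuned in practice against logged tickets.
    """
    return category in ESCALATE_CATEGORIES or sentiment < -0.5
```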
Knowledge base maintenance
Daily refresh, with a fast lane for releases.
- Scheduled n8n job runs nightly — pulls new and updated content from each source, regenerates embeddings, removes deprecated entries
- Release fast lane — new product releases trigger an immediate re-index of the relevant Notion pages and #releases threads
- Source-level provenance preserved on every chunk, so deletions cleanly remove their references from the vector store
- Errors and rate-limit events are logged and trigger email alerts
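The provenance metadata is what makes deletions clean: every chunk ID encodes its source and parent document, so a refresh can drop stale chunks before upserting current ones. A simplified in-memory sketch (the real store is a vector DB, and re-embedding would happen at the upsert step):

```python
def refresh(index: dict, fetched: dict, source: str) -> dict:
    """Sync one source into the index.

    index:   chunk_id -> {"source": ..., "doc": ..., "text": ...}
    fetched: doc_id -> list of chunk texts (current state of the source)
    """
    # Drop chunks whose parent doc no longer exists in this source.
    stale = [cid for cid, c in index.items()
             if c["source"] == source and c["doc"] not in fetched]
    for cid in stale:
        del index[cid]
    # Upsert the current chunks (embedding regeneration happens here).
    for doc_id, texts in fetched.items():
        for i, text in enumerate(texts):
            index[f"{source}:{doc_id}:{i}"] = {
                "source": source, "doc": doc_id, "text": text}
    return index
```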
Performance & reliability
Built for live support, not a demo.
- Drafts generated in under 30 seconds end to end
- Handles at least 5 concurrent tickets without queueing
- Refresh jobs run during off-hours and never disrupt the live workflow
- Idempotent processing keyed on Helpwise message IDs — retries never produce duplicates
- Retries with exponential backoff on every external call (Helpwise, Notion, Slack, embeddings, LLM)
- Email alerts on failures, plus a daily run summary in Sheets
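The last three bullets can be sketched as two small wrappers — exponential backoff around every external call, and an idempotency gate keyed on the Helpwise message ID. The function names are illustrative, and the in-memory `set` stands in for a durable store of processed IDs:

```python
import time

def with_retries(call, attempts: int = 4, base: float = 0.5, sleep=time.sleep):
    """Retry an external call with exponential backoff (0.5s, 1s, 2s, ...)."""
    for n in range(attempts):
        try:
            return call()
        except Exception:
            if n == attempts - 1:
                raise  # exhausted: surface the error to the alerting path
            sleep(base * 2 ** n)

_processed: set[str] = set()  # stand-in for persistent dedup storage

def handle_once(message_id: str, handler) -> bool:
    """Idempotent processing: a retried webhook never drafts twice."""
    if message_id in _processed:
        return False
    handler()
    _processed.add(message_id)
    return True
```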
Data logging
Every interaction recorded for review and tuning.
Each ticket lands in Google Sheets with the fields the support and product teams need to spot patterns and tune the system over time.
| Essential fields | Tuning fields |
|---|---|
| Timestamp | Confidence score from the LLM |
| Customer ID and email | Inferred customer sentiment |
| Question text and channel | Question category / topic |
| AI-generated draft response | Retrieved context snippets |
| Source citations used | Number of sources searched |
| Approved / edited / rejected / escalated | Model used and token usage |
| Final response sent and time to resolution | Escalation reason, where applicable |
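The logging step is a flattening: one interaction's ticket, draft, and review data merge into a single fixed-order row before being appended to the sheet. A sketch, assuming illustrative column names (the production schema in the table above is richer):

```python
from datetime import datetime, timezone

COLUMNS = [
    "timestamp", "customer_id", "channel", "question", "draft",
    "citations", "review_action", "final_response", "confidence",
    "sentiment", "category", "model", "tokens", "escalation_reason",
]

def log_row(ticket: dict, draft: dict, review: dict) -> list:
    """Flatten one interaction into the column order above; blanks for gaps."""
    merged = {**ticket, **draft, **review,
              "timestamp": datetime.now(timezone.utc).isoformat()}
    return [merged.get(col, "") for col in COLUMNS]
```

Keeping the column order in one place means the sheet, the daily run summary, and any later dashboard all read from the same schema.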
Customer-facing
Behind the scenes by default.
The team chose a behind-the-scenes approach — replies go out under the agent's name, in the team's voice. The AI shows up as a productivity tool for the support team, not a label on the customer-facing reply. That choice keeps trust high and avoids friction for customers who simply want their question answered.
Tech stack
What it's built on
- Orchestration: n8n
- Embeddings: Google Gemini Embeddings (with one shared index across all three sources)
- Vector DB: chosen for incremental updates, source-level provenance, and clean deletions
- LLM: a top-tier model selected for citation quality, JSON tool use, and tone control
- Inputs: Helpwise (email + chat), Notion, Slack — all through their official APIs
- Logging: Google Sheets via the Sheets API
- Ops: retries, idempotency keys, run logs, email alerts, scheduled jobs
What was handed over
- Production n8n workflow (JSON) plus the daily refresh job
- Vector DB with all three sources indexed and a documented chunking / metadata schema
- Helpwise integration for surfacing drafts to agents and capturing review actions
- Google Sheets logging schema and live dashboard
- Architecture diagram, n8n workflow documentation, and a vector DB setup / maintenance guide
- Runbook for adding or removing data sources, updating embeddings, and handling API failures
- Internal testing pack — 20+ representative Q&As validated by the support team
- Knowledge transfer session and Loom walkthroughs