Building Team Memory for Product and Support Workflows

A team accumulates context constantly: decisions made in Slack, bugs filed in Linear, docs written in Notion, post-mortems filed after incidents. That knowledge is scattered across tools and largely unretrievable — until someone needs it.

This tutorial shows how to build a connected team memory graph in RushDB where tickets, decisions, incidents, docs, and feature requests are first-class nodes, linked by causal and referential relationships. Once the graph exists, you can retrieve connected context instead of isolated documents.


Graph shape

The key labels are:

Label              What it represents
TICKET             A bug report, support ticket, or task
DECISION           An architectural or product decision, ADR-style
INCIDENT           A production incident or outage
DOC                A piece of documentation, runbook, post-mortem, or RFC
FEATURE_REQUEST    A request from customers, research, or internal feedback
ALERT              A monitoring alert that triggered during an incident

Step 1: Ingest existing tickets and docs

Start with a bulk import. Shape each entry before writing — assign status, category, and createdAt while the structure is fresh.

from rushdb import RushDB
import os

db = RushDB(os.environ["RUSHDB_API_KEY"], base_url="https://api.rushdb.com/api/v1")

db.records.import_json({
    "label": "TICKET",
    "data": [
        {
            "externalId": "TICKET-1001",
            "title": "Login fails when SSO is enabled",
            "status": "open",
            "category": "auth",
            "severity": "high",
            "source": "linear",
            "createdAt": "2025-03-01"
        },
        {
            "externalId": "TICKET-1002",
            "title": "Dashboard crashes on date range filter",
            "status": "resolved",
            "category": "ui",
            "severity": "medium",
            "source": "linear",
            "createdAt": "2025-03-05"
        }
    ]
})

db.records.import_json({
    "label": "DOC",
    "data": [
        {
            "externalId": "DOC-007",
            "title": "SSO Integration Architecture",
            "docType": "adr",
            "status": "accepted",
            "createdAt": "2025-01-10"
        },
        {
            "externalId": "DOC-201",
            "title": "Dashboard Date Filter Postmortem",
            "docType": "postmortem",
            "status": "published",
            "createdAt": "2025-03-07"
        }
    ]
})
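In practice the import payload comes from an export or API dump that needs reshaping first. Here is a minimal sketch of that transformation, assuming a hypothetical Linear-style issue shape — the raw field names (`identifier`, `completedAt`, `priority`) are illustrative, not Linear's actual schema:

```python
# Hypothetical raw export; these field names are illustrative assumptions.
raw_issues = [
    {
        "identifier": "TICKET-1003",
        "title": "Export to CSV times out",
        "completedAt": None,
        "labels": ["export"],
        "priority": 2,
        "createdAt": "2025-03-09T14:02:00Z",
    },
]

def linear_issue_to_ticket(issue: dict) -> dict:
    """Map a raw issue dict onto the TICKET record shape used above."""
    severity_by_priority = {1: "high", 2: "medium", 3: "low"}
    return {
        "externalId": issue["identifier"],
        "title": issue["title"],
        "status": "resolved" if issue.get("completedAt") else "open",
        "category": (issue.get("labels") or ["uncategorized"])[0],
        "severity": severity_by_priority.get(issue.get("priority"), "medium"),
        "source": "linear",
        "createdAt": issue["createdAt"][:10],  # keep the date part only
    }

tickets = [linear_issue_to_ticket(i) for i in raw_issues]
# db.records.import_json({"label": "TICKET", "data": tickets})
```

Shaping at the boundary like this keeps the graph's property names consistent no matter which tool the records came from.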

Step 2: Link records with relationships

After ingestion, fetch the records by their external IDs and attach causal and reference relationships.

ticket_result = db.records.find({"labels": ["TICKET"], "where": {"externalId": "TICKET-1002"}})
doc_result = db.records.find({"labels": ["DOC"], "where": {"externalId": "DOC-201"}})

ticket = ticket_result.data[0]
doc = doc_result.data[0]

db.records.attach(ticket.id, doc.id, {"type": "RESOLVED_BY", "direction": "out"})

# SSO ticket → SSO ADR
sso_ticket = db.records.find({"labels": ["TICKET"], "where": {"externalId": "TICKET-1001"}}).data[0]
sso_doc = db.records.find({"labels": ["DOC"], "where": {"externalId": "DOC-007"}}).data[0]

db.records.attach(sso_ticket.id, sso_doc.id, {"type": "REFERENCES", "direction": "out"})
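When many links need to be created, the find-then-attach pattern above can be wrapped in a small helper keyed on external IDs. A sketch that reuses only the `find` and `attach` calls shown in this tutorial — the live usage against `db` is left commented out:

```python
def attach_by_external_id(db, src_label, src_ext_id, dst_label, dst_ext_id, rel_type):
    """Look up two records by externalId and link them with the given
    relationship type, reusing the find/attach pattern shown above.
    Returns True only when both records exist."""
    src = db.records.find({"labels": [src_label], "where": {"externalId": src_ext_id}}).data
    dst = db.records.find({"labels": [dst_label], "where": {"externalId": dst_ext_id}}).data
    if not src or not dst:
        return False  # skip quietly during a bulk linking pass
    db.records.attach(src[0].id, dst[0].id, {"type": rel_type, "direction": "out"})
    return True

links = [
    ("TICKET-1001", "DOC-007", "REFERENCES"),
    ("TICKET-1002", "DOC-201", "RESOLVED_BY"),
]
# for ticket_ext, doc_ext, rel in links:
#     attach_by_external_id(db, "TICKET", ticket_ext, "DOC", doc_ext, rel)
```

Returning False instead of raising lets a bulk pass keep going when a referenced record has not been imported yet.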

Step 3: Query connected context around a ticket

Now retrieve all context relevant to an open ticket in one query.

auth_tickets_with_docs = db.records.find({
    "labels": ["TICKET"],
    "where": {
        "status": "open",
        "category": "auth",
        "DOC": {
            "$relation": {"type": "REFERENCES", "direction": "out"}
        }
    }
})

for ticket in auth_tickets_with_docs.data:
    print(f"{ticket.data.get('externalId')}: {ticket.data.get('title')}")
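The nested relation filter can also be assembled programmatically, which keeps query construction consistent once several related labels are involved. A small helper that builds the same `where` dict used above:

```python
def related_where(base_where: dict, related_label: str, rel_type: str,
                  direction: str = "out") -> dict:
    """Return a copy of base_where extended with a nested relation filter,
    matching the query shape used in this tutorial."""
    where = dict(base_where)  # copy so the caller's dict is untouched
    where[related_label] = {"$relation": {"type": rel_type, "direction": direction}}
    return where

query = {
    "labels": ["TICKET"],
    "where": related_where({"status": "open", "category": "auth"}, "DOC", "REFERENCES"),
}
# auth_tickets_with_docs = db.records.find(query)
```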

Step 4: Semantic search over team knowledge

Enable semantic search on the title property of TICKET and DOC records to find related context when the exact terms are unknown.

# Create indexes once
db.ai.indexes.create({"label": "TICKET", "propertyName": "title"})
db.ai.indexes.create({"label": "DOC", "propertyName": "title"})

# Search
related = db.ai.search({
    "query": "authentication failure after config change",
    "propertyName": "title",
    "labels": ["TICKET", "DOC"]
})

for item in related.data:
    print(f"[{item.get('__labels')}] {item.get('title')} — score: {item.score:.3f}")
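Raw semantic hits often include near-duplicates and weak matches. A plain-Python post-processing sketch, assuming each hit has been flattened to a dict with `externalId`, `title`, and `score` keys (an assumption about how you materialize the results, not the SDK's return type):

```python
def top_unique(hits: list[dict], min_score: float = 0.5, limit: int = 5) -> list[dict]:
    """Drop weak matches and keep only the best-scoring hit per externalId."""
    best = {}
    for hit in hits:
        if hit["score"] < min_score:
            continue
        key = hit["externalId"]
        if key not in best or hit["score"] > best[key]["score"]:
            best[key] = hit
    return sorted(best.values(), key=lambda h: h["score"], reverse=True)[:limit]

hits = [
    {"externalId": "TICKET-1001", "title": "Login fails when SSO is enabled", "score": 0.91},
    {"externalId": "TICKET-1001", "title": "Login fails when SSO is enabled", "score": 0.74},
    {"externalId": "DOC-007", "title": "SSO Integration Architecture", "score": 0.31},
]
shortlist = top_unique(hits)
```

The 0.5 cutoff is a starting point; tune it against your own data before relying on it.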

Step 5: Retrieve connected context for an agent prompt

When an agent needs to answer "what do we know about the SSO bug?", retrieve connected nodes and assemble a compact context block.

def get_ticket_context(external_id: str) -> str:
    ticket_result = db.records.find({
        "labels": ["TICKET"],
        "where": {"externalId": external_id}
    })
    if not ticket_result.data:
        return "Ticket not found."
    ticket = ticket_result.data[0]

    docs = db.records.find({
        "labels": ["DOC"],
        "where": {
            "TICKET": {
                "$relation": {"type": "REFERENCES", "direction": "in"},
                "__id": ticket.id
            }
        }
    })

    similar = db.ai.search({
        "query": ticket.data.get("title", ""),
        "propertyName": "title",
        "labels": ["TICKET"],
        "where": {"status": "resolved"},
        "limit": 3
    })

    lines = [
        f"## Ticket: {ticket.data['externalId']} - {ticket.data['title']}",
        f"Status: {ticket.data.get('status')} | Category: {ticket.data.get('category')}",
        "",
        "### Referenced Documents"
    ]
    for doc in docs.data:
        lines.append(f"- [{doc.data.get('docType')}] {doc.data.get('title')}")
    lines.append("")
    lines.append("### Similar Resolved Tickets")
    for t in similar.data:
        lines.append(f"- {t.get('externalId')}: {t.get('title')} (score: {t.score:.2f})")

    return "\n".join(lines)


print(get_ticket_context("TICKET-1001"))
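Context blocks assembled this way can exceed a prompt budget. A simple trimmer that cuts whole lines from the end so section headers stay intact — character counts here are a rough proxy for tokens:

```python
def fit_budget(context: str, max_chars: int = 2000) -> str:
    """Trim a context block to a rough character budget, dropping whole
    lines from the end rather than cutting mid-line."""
    if len(context) <= max_chars:
        return context
    kept, used = [], 0
    for line in context.splitlines():
        if used + len(line) + 1 > max_chars:  # +1 for the newline
            break
        kept.append(line)
        used += len(line) + 1
    return "\n".join(kept + ["(context truncated)"])

# prompt_context = fit_budget(get_ticket_context("TICKET-1001"), max_chars=4000)
```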

What to add next

Once the base graph is established, enrich it incrementally:

  • Webhooks from Linear/GitHub/PagerDuty — auto-create TICKET and INCIDENT records on event (see Event-Driven Ingestion)
  • Decisions as first-class nodes — when a team makes a decision, create a DECISION record and link it to the TICKET or INCIDENT it resolved
  • FEATURE_REQUEST linked to TICKET — when a customer request drives a bug fix or feature, link the FEATURE_REQUEST to the TICKET so the team can see customer impact upstream of every change

Production caveat

Team memory graphs grow without bound unless pruned. Define a retention policy for closed tickets and resolved incidents older than a threshold (for example, 90 days or one year), and archive them with a status update rather than deleting them — relationships to other nodes remain valid for historical queries even when the original record is archived.


Next steps