How Insight Promotion Works
Surchin uses session diversity — not team size — to decide when an insight is ready to promote. Here's the mechanism and why it works for solo devs and teams alike.
Every insight in Surchin starts as a draft. Drafts are visible and returned in queries, but they carry less weight in ranking. An insight earns promoted status when it meets two conditions:
- 3 or more positive signals — upvotes from agents (POSITIVE), dashboard votes (HUMAN_UPVOTE), or automatic sense-return signals fired when the insight is fetched and used (SENSE_RETURN)
- From at least 2 distinct sessions — the signals must come from different coding sessions, not the same one
That's it. No tier gating, no team-size thresholds, no manual approval queues.
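The two conditions can be sketched in a few lines. This is a minimal illustration of the rule as described, not Surchin's actual implementation — the function and variable names are hypothetical:

```python
# Sketch of the promotion rule: >= 3 positive signals from >= 2 distinct
# sessions. Names are illustrative, not Surchin's real code.
POSITIVE_SIGNALS = {"SENSE_RETURN", "POSITIVE", "HUMAN_UPVOTE"}

def is_promoted(signals: list[tuple[str, str]]) -> bool:
    """signals: (signal_type, session_id) pairs recorded for one insight."""
    positive_sessions = [sid for kind, sid in signals if kind in POSITIVE_SIGNALS]
    # Condition 1: three or more positive signals in total.
    # Condition 2: those signals span at least two distinct sessions.
    return len(positive_sessions) >= 3 and len(set(positive_sessions)) >= 2
```

Note that negative signals are simply ignored here — as described below, they affect strength scoring, not the positive count.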
Why sessions, not users?
The question promotion answers is: "Has this insight proven useful more than once?" The best proxy for that is independent encounters — different coding sessions where the insight surfaced and was either explicitly upvoted or implicitly validated by being fetched.
Using session count rather than user count means the system works identically whether you're a solo developer or a 50-person team:
- Solo dev: You fix a tricky RLS policy bug on Monday and deposit an insight. On Wednesday, a new Claude session hits a similar issue, fetches the insight, and the agent upvotes it. One more session, one more fetch — promoted.
- Team: Developer A deposits an insight about a race condition in the webhook handler. Developer B's agent fetches it the next day while debugging a related timeout. Developer C's agent fetches it a week later. Each developer's session is distinct — the insight promotes naturally.
Both paths require the same thing: the insight must survive contact with a new context and still be relevant.
What counts as a positive signal?
Three signal types contribute to the positive count:
| Signal | Source | When it fires |
|---|---|---|
| SENSE_RETURN | Automatic | An agent queries insights and this one is returned in the results |
| POSITIVE | Agent | The agent explicitly rates the insight as helpful via rate_insight |
| HUMAN_UPVOTE | Dashboard | A developer clicks the upvote button on the insights page |
Negative signals (NEGATIVE, HUMAN_DOWNVOTE) don't subtract from the positive count, but they do reduce the insight's strength score over time, which affects ranking and can eventually trigger deprecation.
What counts as a distinct session?
Every signal carries a session identifier:
- MCP/API calls pass a session_id from the agent's environment — typically unique per Claude Code invocation
- Dashboard votes use dashboard-{userId} — so your browser votes count as a separate session from your terminal sessions
This means a solo developer who upvotes an insight from the dashboard and then has an agent fetch it in a new terminal session has already hit 2 distinct sessions.
The one thing that won't work: a single Claude session upvoting the same insight multiple times. All of those signals share the same session ID, so they count as 1 distinct session no matter how many times the agent votes.
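To make the session accounting concrete, here is a hedged sketch of how the two ID styles interact. The helper name and the example IDs are mine, not Surchin's:

```python
# Illustrative only: counting distinct sessions across surfaces.
def dashboard_session_id(user_id: str) -> str:
    # Dashboard votes share one ID per user, per the scheme above.
    return f"dashboard-{user_id}"

# Solo dev: one dashboard upvote plus one fetch from a fresh terminal session.
signals = [
    ("HUMAN_UPVOTE", dashboard_session_id("dev-1")),  # browser vote
    ("SENSE_RETURN", "sess-term-42"),                 # new terminal session
]
distinct = {sid for _, sid in signals}
# Already 2 distinct sessions -- one more positive signal would promote.

# A single agent session voting repeatedly still counts as one session.
spam = [("POSITIVE", "sess-A")] * 3
spam_sessions = {sid for _, sid in spam}  # only {"sess-A"}
```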
A concrete walkthrough
Here's the lifecycle of an insight for a solo free-tier developer:
| Step | Session | Signal | Positives | Sessions | Status |
|---|---|---|---|---|---|
| Dev fixes a bug, agent deposits insight | sess-A | (created) | 0 | 0 | draft |
| Same session, agent upvotes it | sess-A | POSITIVE | 1 | 1 | draft |
| Next day, new session fetches it | sess-B | SENSE_RETURN | 2 | 2 | draft |
| Agent in session B upvotes it | sess-B | POSITIVE | 3 | 2 | promoted |
Three positives, two sessions. The insight proved useful across two independent encounters, so it graduates from draft.
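The same lifecycle can be replayed in code. This is a sketch using hypothetical names; only the thresholds come from the rules documented above:

```python
POSITIVE_SIGNALS = {"SENSE_RETURN", "POSITIVE", "HUMAN_UPVOTE"}

def status(history: list[tuple[str, str]]) -> str:
    """history: (signal_type, session_id) pairs; returns the insight status."""
    positives = [sid for kind, sid in history if kind in POSITIVE_SIGNALS]
    promoted = len(positives) >= 3 and len(set(positives)) >= 2
    return "promoted" if promoted else "draft"

history: list[tuple[str, str]] = []
for session, signal in [
    ("sess-A", "POSITIVE"),      # same-session upvote after deposit
    ("sess-B", "SENSE_RETURN"),  # next day: a new session fetches it
    ("sess-B", "POSITIVE"),      # agent in session B upvotes
]:
    history.append((signal, session))

print(status(history))  # "promoted" -- the third signal crosses both thresholds
```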
And here's what happens if a single session tries to force promotion:
| Step | Session | Signal | Positives | Sessions | Status |
|---|---|---|---|---|---|
| Agent deposits and upvotes 3 times | sess-A | POSITIVE x3 | 3 | 1 | draft |
Three positives, but only one session. The insight stays in draft because it hasn't been validated outside the context that created it.
Why this matters
The session-diversity requirement is a quality gate that prevents self-reinforcing noise. An agent that deposits a mediocre insight and immediately upvotes it can't promote its own work. The insight has to survive until a future session — potentially days later, in a different part of the codebase — and still be relevant enough to surface and earn a signal.
For solo developers, this means your knowledge base builds up naturally as you work across sessions. You don't need teammates to validate insights — your own future sessions do that. For teams, the same mechanism means insights that help multiple developers rise to the top automatically.
No configuration needed. Just keep working, and the good stuff floats up.