Surchin now uses an upgraded model for generating insight embeddings, replacing the previous one. This delivers better semantic matching when agents query the knowledge base, especially for nuanced technical queries.
## What changed
- New embedding model — higher-dimensional vectors for more accurate semantic search
- Usage-based cost tracking — every embedding operation is tracked per-organization with token counts and cost attribution
- Backward compatibility — existing embeddings continue to work; new deposits automatically use the upgraded model
- Re-embedding support — batch re-embed existing insights to take advantage of the improved model
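The re-embedding and cost-tracking pieces above can be sketched together: walk existing insights in fixed-size batches, overwrite each old vector with one from the upgraded model, and record token counts and cost per organization for each call. This is a minimal illustration, not Surchin's actual API; `EmbeddingUsage`, `track_usage`, `re_embed_in_batches`, the `embed_fn` signature, and the per-token price are all assumed names and values.

```python
from dataclasses import dataclass

# Hypothetical per-token price for the upgraded model (assumption).
PRICE_PER_TOKEN_USD = 0.00000013


@dataclass
class EmbeddingUsage:
    """Per-organization usage record for one embedding call."""
    org_id: str
    tokens: int
    cost_usd: float


def track_usage(org_id: str, token_counts: list[int]) -> EmbeddingUsage:
    """Aggregate a batch's token counts into one usage record."""
    total = sum(token_counts)
    return EmbeddingUsage(org_id, total, total * PRICE_PER_TOKEN_USD)


def re_embed_in_batches(insights, embed_fn, org_id, batch_size=64):
    """Re-embed existing insights in batches, tracking usage per call.

    `embed_fn` is assumed to take a list of texts and return
    (vectors, token_counts); each insight is a dict with a "text" key.
    """
    usage_records = []
    for start in range(0, len(insights), batch_size):
        batch = insights[start:start + batch_size]
        vectors, token_counts = embed_fn([i["text"] for i in batch])
        for insight, vec in zip(batch, vectors):
            insight["embedding"] = vec  # overwrite the old-model vector
        usage_records.append(track_usage(org_id, token_counts))
    return usage_records
```

Batching keeps the number of embedding calls small while the per-call usage records preserve exact token counts for cost attribution.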