Embedding models teach a machine the relationships we deem important in data: similarity between images, the sentiment of a text, and more. But what happens when what it's been taught is wrong?
Your RAG system lies to a customer. Your model labels a toxic chemical compound as safe for consumption. Your system fails.
You know it's wrong, but you can't fix it. Unlike in most software systems, knowing that an issue exists in an embedding model does not mean you can resolve it: you either try to reteach the model, often with limited success, or you build engineering workarounds to mitigate the symptoms of the failure.
At Tessel, we want to fix the problem rather than patch the symptoms. We're building a toolbox that allows engineers and product managers to define the changes they want and apply them directly to the embedding space: specify priorities for different query types, define and optimize the metrics that matter to the customer, and adjust embeddings along human-interpretable concepts.
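To make the last of those concrete, here is a minimal sketch of one common way to adjust embeddings along a human-interpretable concept: derive a direction in embedding space from labeled examples, then nudge vectors along it. The function names, the numpy setup, and the toy data are illustrative assumptions, not Tessel's API.

```python
# Illustrative sketch (not Tessel's API): concept-direction editing with numpy.
import numpy as np

def concept_direction(pos_examples: np.ndarray, neg_examples: np.ndarray) -> np.ndarray:
    """Unit vector pointing from the negative-example cluster toward the positive one."""
    direction = pos_examples.mean(axis=0) - neg_examples.mean(axis=0)
    return direction / np.linalg.norm(direction)

def adjust_embeddings(embeddings: np.ndarray, direction: np.ndarray, strength: float) -> np.ndarray:
    """Shift every embedding along the concept direction; a negative strength pushes away from it."""
    return embeddings + strength * direction

# Toy usage: random vectors stand in for real model outputs.
rng = np.random.default_rng(0)
toxic = rng.normal(size=(32, 128))           # embeddings of known-toxic compounds
safe = rng.normal(loc=0.5, size=(32, 128))   # embeddings of known-safe compounds
toxicity_axis = concept_direction(toxic, safe)

corpus = rng.normal(size=(1000, 128))
# Emphasize the toxicity concept across the corpus by a small, tunable amount.
edited = adjust_embeddings(corpus, toxicity_axis, strength=0.2)
```

The sketch only shows the shape of the operation; deciding which directions matter, and how strongly to apply them for each query type or metric, is where the real product decisions live.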
We want you to understand what's actually happening in the embedding space, so that you're never blindsided by your tools again.