
How Orca Helps You Customize to Different Preferences

Written by Rob McKeon
Published on October 11, 2024

The Problem

AI/ML models' performance is typically assessed by how often they produce expected outcomes from a test data set. However, this approach often breaks down in the real world due to changing definitions of acceptable outcomes. This can look like:

  • Different sensitivities to errors or defects: User requirements vary widely. Stryker may have much narrower tolerances when detecting defects for medical device screws than Ikea has for furniture screws.
  • Competing definitions of a variable: Think about sentiment classification - what’s classified as a negative sentiment will vary significantly between a fast, casual restaurant chain and a car manufacturer. Both want to detect negative reviews, but they set very different criteria.
  • Varying costs of false positives and negatives: In fraud detection, companies weigh the cost of undetected fraud against the cost of intervening on legitimate transactions differently. This creates differences in preferences for when transactions get escalated (or don't).

These variations can cause previously high-performing models to become inaccurate or inappropriate for certain users, making it difficult to meet diverse or shifting requirements.
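The fraud example above comes down to cost-sensitive thresholding: the same model score can warrant different actions depending on what fraud and intervention cost each company. A minimal sketch, with entirely hypothetical cost figures:

```python
# Hypothetical sketch of cost-sensitive escalation for a fraud model.
# The costs below are made-up client preferences; changing them shifts
# the escalation threshold without touching the model itself.

def escalation_threshold(intervention_cost: float, fraud_cost: float) -> float:
    """Escalate when expected fraud loss exceeds the cost of intervening:
    p * fraud_cost > intervention_cost  =>  p > intervention_cost / fraud_cost."""
    return intervention_cost / fraud_cost

def should_escalate(p_fraud: float, intervention_cost: float, fraud_cost: float) -> bool:
    # p_fraud is the model's predicted probability that a transaction is fraudulent.
    return p_fraud > escalation_threshold(intervention_cost, fraud_cost)

# Same model score, different client cost structures, different decisions:
p = 0.08
print(should_escalate(p, intervention_cost=5.0, fraud_cost=200.0))   # True  (threshold 0.025)
print(should_escalate(p, intervention_cost=50.0, fraud_cost=200.0))  # False (threshold 0.25)
```

The point is that "acceptable outcomes" here are a property of the client's costs, not of the model's accuracy on a fixed test set.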

The Status Quo

A static model will inherently have a limited ability to respond in these changing scenarios, decreasing the effectiveness of the model when clients want outcomes weighted differently. To address these challenges, teams often rely on two common strategies:

  1. Training new models: This requires new data and significant time to build production-grade models that meet new criteria. It also leads to managing an expanding set of models, creating technical debt.
  2. Human-in-the-loop approaches: While this mitigates the need for new models, it introduces ongoing expenses, especially for highly skilled reviewers, and creates scaling challenges, since review capacity only grows by hiring more people.

Both approaches clean up inaccuracies but limit the ability to respond quickly and cost-effectively to new scenarios. This can delay revenue, prolong sales cycles, and lead to shelved internal projects.

How Orca Fixes This

Orca’s unique model architecture, where traditional deep-learning models learn to explicitly leverage external information during inference, solves this challenge of diverse (or shifting) opinions on what correct outputs actually are. Simply introduce a new set of external data into your models’ memory, and the model adjusts its outputs, increasing its accuracy against the new criteria. With this approach, Orca creates:

  • Real-time adaptation: By introducing new external data into the model's memory, outputs adjust to new criteria almost immediately.
  • Leveraging existing capabilities: The model benefits from its pre-existing reasoning abilities, reducing the amount of new data needed for adaptation.
  • Simplified model management: Maintaining one base model with independent data sets allows teams to focus on improvements rather than managing multiple models.
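To make the adaptation flow concrete, here is a minimal sketch in the spirit of what Orca describes: a frozen encoder produces embeddings, and predictions come from a lookup over an external, swappable memory of labeled examples. All names and data here are hypothetical, and a simple nearest-neighbor vote only approximates Orca's architecture, where the model learns to leverage memory during inference.

```python
# Hypothetical sketch: one frozen model, per-client memories.
# Swapping the memory changes the outputs without any retraining.
from math import dist
from collections import Counter

def predict(embedding, memory, k=3):
    """Label an input by majority vote over its k nearest memory entries.
    `memory` is a list of (embedding, label) pairs supplied per client."""
    nearest = sorted(memory, key=lambda m: dist(embedding, m[0]))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

# Two clients share the same model but attach different memories, so the
# same review embedding is judged "negative" by one and "neutral" by the other.
restaurant_memory = [((0.9, 0.1), "negative"), ((0.8, 0.2), "negative"), ((0.1, 0.9), "neutral")]
automaker_memory  = [((0.9, 0.1), "neutral"),  ((0.8, 0.2), "neutral"),  ((0.1, 0.9), "neutral")]

review = (0.85, 0.15)  # embedding of an incoming review (illustrative values)
print(predict(review, restaurant_memory))  # negative
print(predict(review, automaker_memory))   # neutral
```

Updating a client's criteria then means appending or replacing memory entries, which takes effect on the very next inference call.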

This solution enables businesses to quickly and efficiently respond to changing criteria, whether driven by diverse customer needs or shifting internal goals, without the drawbacks of traditional approaches.
