2 min read

How Orca Helps You Instantly Expand to New Use Cases

Written by
Rob McKeon
Published on
October 11, 2024

The Problem

Imagine this: Your team has spent months building a top-notch image classifier that detects spoiled fruit, helping food manufacturers cut down on waste and reduce costs. You’ve curated and labeled the ideal dataset, fine-tuned the model’s parameters, and are delivering impressive results.

But when it’s time to pitch the product, the prospect mentions a different challenge: “Our main issue is spoiled lettuce. Can your model also detect rotten vegetables?”

It’s a record-scratch moment.

Model performance can drop unexpectedly when applied to new, but similar, situations. Whether it’s a customer looking to use the model for a different application or your CEO pushing for expansion into a new market, these slight shifts can have significant implications for AI models.

For stakeholders, the change might seem minor (with a major revenue potential), but for your model, it can be a massive leap.

The Status Quo

Traditionally, expanding into new use cases or markets required retraining existing models or developing new ones from scratch. While effective, this approach has significant trade-offs in terms of performance, cost, and time-to-market:

  • Adding new data to expand use cases isn't simply additive; it can degrade your model’s performance on tasks it already excels at.
  • Creating custom models for every use case increases maintenance complexity and can be costly to host.
  • Training an unbiased, high-performing model requires vast amounts of data, limiting how quickly you can adapt to new challenges.
  • You might attempt to force a solution with RAG + an LLM. This works well for generative AI, but for non-generative tasks like classification or recommendation, LLMs often underperform compared to purpose-built models.

How Orca Helps You Expand to New Use Cases—Fast

Orca allows you to build on a base model’s core reasoning capabilities while introducing new definitions of correct outcomes through memory augmentation. Here’s how:

  • Once your deep-learning model (e.g., an image classifier) learns to incorporate retrieved context during inference, it can serve as a foundation. In the example above, the model trained to detect rotten fruit retains its general produce-classification reasoning, and you can supply new labeled examples as memories to quickly and accurately classify new classes of inputs.
  • You can develop specialized memory collections for each use case (e.g., separate collections for fruits and vegetables) and direct the model to the appropriate collection at inference time, as in the sketch below.
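
To make the idea concrete, here is a minimal sketch of retrieval-augmented classification with per-use-case memory collections. It assumes the base model exposes an embedding function and that classification falls back to a nearest-neighbor vote over retrieved memories; the names (`embed`, `MemorySet`, `memory_sets`) are hypothetical and not Orca's actual API.

```python
import numpy as np

# Hypothetical stand-in for the base model's encoder; in practice this would be
# the penultimate-layer features of your trained image classifier.
def embed(image: np.ndarray) -> np.ndarray:
    vec = image.reshape(-1).astype(np.float32)
    return vec / (np.linalg.norm(vec) + 1e-8)

class MemorySet:
    """A small collection of labeled example embeddings for one use case."""
    def __init__(self):
        self.vectors: list[np.ndarray] = []
        self.labels: list[str] = []

    def add(self, image: np.ndarray, label: str) -> None:
        self.vectors.append(embed(image))
        self.labels.append(label)

    def classify(self, image: np.ndarray, k: int = 5) -> str:
        query = embed(image)
        sims = np.array([query @ v for v in self.vectors])
        # Majority vote over the k most similar memories.
        votes: dict[str, int] = {}
        for i in sims.argsort()[-k:]:
            votes[self.labels[i]] = votes.get(self.labels[i], 0) + 1
        return max(votes, key=votes.get)

# One memory collection per use case; route requests to the right one at inference.
memory_sets = {"fruit": MemorySet(), "vegetables": MemorySet()}

rng = np.random.default_rng(0)
memory_sets["vegetables"].add(rng.random((8, 8, 3)), "rotten_lettuce")
memory_sets["vegetables"].add(rng.random((8, 8, 3)), "fresh_lettuce")

new_image = rng.random((8, 8, 3))
print(memory_sets["vegetables"].classify(new_image, k=1))
```

The point of the sketch is the design, not the specifics: supporting the lettuce use case only requires adding labeled memories to a new collection and routing to it, not retraining the underlying encoder.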

This approach enables you to adjust and refine the model’s outputs without retraining from scratch. The result?

  • Faster time-to-market with reduced training cycles, leading to quicker revenue generation and smoother adaptation to new use cases.
  • Less reliance on massive datasets — you can use existing or synthetic data to teach the model basic reasoning, then augment it with a smaller, targeted set of new memories to refine correct or incorrect responses over time.
  • Less manual oversight, since you rarely need to put humans back in the training loop to fix errors when applying the model to new scenarios.

That said, memory augmentation isn’t a cure-all. If you’re making significant changes to the use case, additional context alone won’t be enough; a different model may be required to achieve the desired performance and accuracy.
