Real-time adaptability for your models
Orca helps machine learning teams update models instantly as business needs and data change, ensuring peak performance.
Models using Orca handle data evolution more effectively
Build AI that's as dynamic as the real world
Instantly expand to new use cases or environments
Problem
Expanding models to new but similar use cases often results in decreased accuracy, requiring costly and time-consuming retraining or new model development.
Solution
Orca-enabled memory augmentation lets you reuse the reasoning of a "base model" and augment it with a different data set for each use case.
Keep up with rapidly-evolving data
Problem
Rapidly changing data causes data drift, which degrades AI/ML model performance, especially when new data arrives faster than models can be retrained.
Solution
Orca's Retrieval Augmented Classifier models allow you to dynamically add or modify the data (akin to "memories") that the model can access during inference.
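To make the idea concrete, here is a minimal, self-contained sketch of retrieval-augmented classification in plain Python. It is illustrative only, not Orca's implementation or API: the class name, the nearest-neighbor lookup, and the toy embeddings are all assumptions chosen to show how swapping a memory set changes predictions without retraining any weights.

```python
import numpy as np

class NearestMemoryClassifier:
    """Toy retrieval-augmented classifier: predictions come from labeled
    "memories" looked up at inference time, not from trained weights."""

    def __init__(self, k=3):
        self.k = k
        self.embeddings = None  # memory embeddings, shape (n, d)
        self.labels = None      # memory labels, shape (n,)

    def attach(self, embeddings, labels):
        """Swap in a new memory set -- no retraining required."""
        self.embeddings = np.asarray(embeddings, dtype=float)
        self.labels = np.asarray(labels)

    def predict(self, x):
        """Majority vote over the k nearest memories to x."""
        dists = np.linalg.norm(self.embeddings - np.asarray(x, dtype=float), axis=1)
        nearest = self.labels[np.argsort(dists)[: self.k]]
        values, counts = np.unique(nearest, return_counts=True)
        return values[np.argmax(counts)]

model = NearestMemoryClassifier(k=3)
model.attach([[0, 0], [0, 1], [5, 5], [5, 6]], ["cat", "cat", "dog", "dog"])
print(model.predict([0.2, 0.4]))  # -> cat

# Replacing the memories changes model behavior instantly:
model.attach([[0, 0], [0, 1], [5, 5], [5, 6]], ["spam", "spam", "ham", "ham"])
print(model.predict([0.2, 0.4]))  # -> spam
```

The key point is that the classifier's behavior lives in the attached data rather than in frozen parameters, so updating or swapping memories takes effect on the very next prediction.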
Customize to different preferences
Problem
AI/ML models often struggle with adapting to diverse user requirements and changing definitions of "correct" outcomes, leading to decreased effectiveness and poor customer satisfaction.
Solution
Orca's unique model architecture allows real-time adaptation to new criteria by swapping the external data stored in the model's memory, enabling businesses to quickly customize their AI/ML models.
Adapt to changing business objectives
Problem
Businesses often change their strategic goals and priorities, which leads to their AI/ML models becoming outdated. Retraining or transfer learning can be inefficient and time-consuming.
Solution
Orca solves this by using a memory-augmented architecture that allows models to update in real-time with external data, enabling instant adaptation to new priorities.
How does Orca's augmentation methodology work?
With Orca, models learn to leverage external data during initial training or fine-tuning. This gives any deep learning model the ability to be "retrieval augmented," akin to how large language models (LLMs) use retrieval-augmented generation (RAG) to produce updated outputs. By accessing this external data, your classifiers and recommendation models evolve and adapt almost instantaneously, unlocking greater resilience to change and enabling easy customization instead of continuous retraining.
from datasets import load_dataset
from orcalib.rac import LabeledMemoryset, RACModel

# Create a cloud-storage-backed dynamic model memory dataset
memoryset = LabeledMemoryset("datalicious")
source_dataset = load_dataset("datalicious/datalicious")
memoryset.insert(source_dataset)  # works with all standard dataset formats

my_model = RACModel(num_classes=10)
with my_model.use(memoryset):
    my_model.finetune(source_dataset)

my_model.attach(memoryset)
my_model.predict(new_input)
memoryset.analyze()  # tell me what could be better about my data

# Swap in a different memoryset to adapt the model instantly
with my_model.use(new_memoryset):
    my_model.predict(new_input)
Find out if Orca is right for you
Speak to our research scientists to see if we can help you build more adaptable models.