During the past six months, we have witnessed some incredible developments in AI. The release of Stable Diffusion forever changed the art world, and ChatGPT shook up the internet with its ability to write songs, mimic research papers, and provide thorough and seemingly intelligent answers to commonly Googled questions.
These advancements in generative AI offer further evidence that we’re on the precipice of an AI revolution.
However, most of these generative AI models are foundation models: high-capacity, self-supervised systems that train on vast amounts of data and require millions of dollars of compute to do it. Currently, only well-funded institutions with access to massive GPU resources are capable of building these models.
The majority of companies developing the application-layer AI that’s driving the widespread adoption of the technology still rely on supervised learning, using large swaths of labeled training data. Despite the impressive feats of foundation models, we’re still in the early days of the AI revolution and numerous bottlenecks are holding back the proliferation of application-layer AI.
Downstream of the well-known data labeling problem exist additional data bottlenecks that will hinder the development of later-stage AI and its deployment to production environments.
These problems are why, despite the early promise and floods of investment, technologies like self-driving cars have been just one year away since 2014.
These exciting proof-of-concept models perform well on benchmarked datasets in research environments, but they struggle to predict accurately when released in the real world. A major problem is that the technology struggles to meet the higher performance threshold required in high-stakes production environments, and fails to hit important benchmarks for robustness, reliability and maintainability.
For instance, these models often can’t handle outliers and edge cases, so self-driving cars mistake reflections of bicycles for bicycles themselves. They aren’t reliable or robust, so a robot barista makes a perfect cappuccino two out of every five times but spills the cup the other three.
As a result, the AI production gap, the gap between “that’s neat” and “that’s useful,” has been much larger and more formidable than ML engineers first anticipated.
Counterintuitively, the best systems also have the most human interaction.
Fortunately, as more and more ML engineers have embraced a data-centric approach to AI development, the implementation of active learning strategies has been on the rise. The most sophisticated companies will leverage this technology to leapfrog the AI production gap and build models capable of running in the wild more quickly.
What is active learning?
Active learning makes training a supervised model an iterative process. The model trains on an initial subset of labeled data from a large dataset. Then, it tries to make predictions on the rest of the unlabeled data based on what it has learned. ML engineers evaluate how certain the model is in its predictions and, using a variety of acquisition functions, can estimate the performance benefit of annotating each of the unlabeled samples.
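To make this concrete, here is a minimal sketch of two common acquisition functions, least-confidence and predictive entropy, applied to a model’s predicted class probabilities. The function names and the toy probability values are illustrative assumptions, not from any particular library:

```python
import numpy as np

def least_confidence(probs):
    """Uncertainty = 1 - the probability of the most likely class."""
    return 1.0 - probs.max(axis=1)

def predictive_entropy(probs):
    """Entropy of the predicted distribution; higher means more uncertain."""
    return -(probs * np.log(probs + 1e-12)).sum(axis=1)

# Predicted class probabilities for three unlabeled samples.
probs = np.array([
    [0.98, 0.01, 0.01],  # model is confident
    [0.40, 0.35, 0.25],  # model is very uncertain
    [0.70, 0.20, 0.10],  # somewhere in between
])

# Rank samples from most to least uncertain; the top of this
# ranking is what gets sent to human annotators first.
ranking = np.argsort(-least_confidence(probs))
```

Both functions agree on this toy example: the second sample, where probability mass is spread almost evenly across classes, is the one most worth labeling.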
By expressing uncertainty in its predictions, the model is deciding for itself what additional data will be most useful for its training. In doing so, it asks annotators to provide more examples of only that specific type of data so that it can train more intensively on that subset during its next round of training. Think of it like quizzing a student to figure out where their knowledge gap is. Once you know what problems they are missing, you can provide them with textbooks, presentations and other materials so that they can target their learning to better understand that particular aspect of the subject.
With active learning, training a model moves from being a linear process to a circular one with a strong feedback loop.
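The feedback loop described above can be sketched end to end with scikit-learn. This is an illustrative toy, not a production recipe: the dataset, the model, the seed-set size, and the query batch size are all assumptions, and the known labels stand in for the human annotator:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

# Start with a small randomly labeled seed set; the rest is "unlabeled".
labeled = list(rng.choice(len(X), size=20, replace=False))
unlabeled = [i for i in range(len(X)) if i not in labeled]

model = LogisticRegression(max_iter=1000)
for _ in range(5):
    # 1. Train on the currently labeled pool.
    model.fit(X[labeled], y[labeled])
    # 2. Score the unlabeled pool by least-confidence uncertainty.
    probs = model.predict_proba(X[unlabeled])
    uncertainty = 1.0 - probs.max(axis=1)
    # 3. Query the 10 most uncertain samples: "ask the annotator"
    #    (here we just reveal the known label) and move them into
    #    the labeled pool before the next round of training.
    query = np.argsort(-uncertainty)[:10]
    for idx in sorted(query, reverse=True):  # pop high indices first
        labeled.append(unlabeled.pop(idx))
```

Each pass through the loop is one turn of the circle: train, measure uncertainty, label the most informative samples, retrain.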
Why sophisticated companies should be ready to leverage active learning
Active learning is fundamental for closing the prototype-production gap and increasing model reliability.
It’s a common mistake to think of AI systems as static pieces of software; in reality, these systems must constantly learn and evolve. If not, they make the same mistakes repeatedly, or, when they’re released in the wild, they encounter new scenarios, make new mistakes and never get an opportunity to learn from them.