The rise of ChatGPT has fueled public awareness of AI, along with a growing discourse around AI ethics. How should AI be used? What are its implications for society, not just business?
The inherent bias within AI applications (remember, it's not just about how an algorithm is built and by whom, but also about how the model itself is trained) means we should be treading carefully in this brave new AI world.
After all, there have been very public examples of political and gender bias exhibited by AI platforms. OpenAI's CEO, Sam Altman, admitted only last month that ChatGPT has "shortcomings around bias." But those biases and faults can have far-reaching effects when AI is applied to areas like insurance platforms or drug discovery, where the cost of getting decisions wrong could be massive.
MLOps (a mashup of "machine learning" and DevOps) is a set of practices for deploying and maintaining machine learning models in production reliably and efficiently, and for monitoring those biases. Put simply, data scientists, DevOps engineers and machine learning engineers use MLOps to turn an AI algorithm into an everyday, working production model. The idea is to automate more of the model lifecycle while keeping an eye on business and regulatory requirements around bias, as well as other aspects of AI. Improving efficiency also has a positive environmental impact.
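As a rough idea of what "monitoring those biases" can look like in practice, here is a minimal, self-contained sketch of a fairness check that a production monitoring job might run against a model's recent predictions. The metric, data and alert threshold are all illustrative assumptions, not Seldon's implementation.

```python
# Illustrative sketch of the kind of bias check an MLOps pipeline might run
# against a deployed model's predictions. Names and thresholds are hypothetical.
import numpy as np

def demographic_parity_gap(predictions: np.ndarray, groups: np.ndarray) -> float:
    """Difference in positive-prediction rates between two groups."""
    rate_a = predictions[groups == 0].mean()
    rate_b = predictions[groups == 1].mean()
    return abs(rate_a - rate_b)

# Example: binary approve/deny decisions from a (hypothetical) insurance model,
# alongside a protected attribute that splits users into two groups.
preds = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])
group = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])

gap = demographic_parity_gap(preds, group)
if gap > 0.2:  # alert threshold chosen arbitrarily for illustration
    print(f"Bias alert: demographic parity gap of {gap:.2f} exceeds threshold")
else:
    print(f"Demographic parity gap within tolerance: {gap:.2f}")
```

In a real pipeline, a check like this would run continuously on live traffic and feed alerts back to the team responsible for the model, rather than being a one-off script.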
Seldon is a U.K. startup that specializes in this rarefied world of development tools for optimizing machine learning models. It has competitors in the shape of Arize, Fiddler ($45.2 million in funding), Dataiku ($846.8 million in funding) and DataRobot ($1 billion in funding).
Seldon’s cloud-agnostic machine learning deployment platform secured a £7.1 million Series A from AlbionVC and Cambridge Innovation Capital back in 2020.
It’s now raised a $20 million Series B funding round led by new investor Bright Pixel (formerly Sonae IM). Also participating were existing investors AlbionVC, Cambridge Innovation Capital and Amadeus Capital Partners.
Founders Alex Housley (CEO) and Clive Cox (CTO) claim to have achieved 400% year-over-year growth for Seldon's open source frameworks since its Series A in November 2020. That's important, because its open source network allows it to distribute its proprietary solutions far more efficiently and cost-effectively.
"Seldon has differentiated itself by presenting a unique solution that is able to reduce the friction for users deploying and explaining ML models across any industry. This means more productivity for its clients, faster time-to-value combined with governance, risk and compliance capabilities," said Pedro Carreira, director at Bright Pixel, in a statement.
Current Seldon customers include PayPal, Johnson & Johnson, Audi and Experian, among others.
In an interview, Housley told me: "AI is in everything, and Seldon is uniquely positioned. We already have a strong position in our open source distribution, and what we've just validated is a new concept in data-centric MLOps, with a tight integration around data streams and production. Put simply, you can improve an AI model via its algorithm, but that has small improvements. Alternatively — and this is our approach — you can squeeze out much more performance by improving the production of the data quality. That's what we've been working on with Cambridge University, with significant success."
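To illustrate the distinction Housley is drawing, here is a minimal, synthetic sketch of data-centric improvement: the model and its hyperparameters stay fixed, and the gains come from cleaning the training data. Everything here (the dataset, the noise rate, the confidence threshold) is a hypothetical assumption, not Seldon's method.

```python
# A rough illustration of the "data-centric" idea: keep the model fixed and
# improve the training data instead. Label noise is simulated, then filtered
# out using the model's own cross-validated confidence. Entirely synthetic.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict, train_test_split
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Simulate label noise in 15% of the training data.
rng = np.random.default_rng(0)
noisy = rng.random(len(y_train)) < 0.15
y_noisy = np.where(noisy, 1 - y_train, y_train)

# Model-centric baseline: train on the noisy data as-is.
baseline = LogisticRegression(max_iter=1000).fit(X_train, y_noisy)
print("trained on noisy labels:", accuracy_score(y_test, baseline.predict(X_test)))

# Data-centric pass: flag rows where cross-validated predictions strongly
# disagree with the given label, drop them, and retrain the same model.
probs = cross_val_predict(LogisticRegression(max_iter=1000), X_train, y_noisy,
                          cv=5, method="predict_proba")
suspect = probs[np.arange(len(y_noisy)), y_noisy] < 0.3
cleaned = LogisticRegression(max_iter=1000).fit(X_train[~suspect], y_noisy[~suspect])
print("trained on cleaned labels:", accuracy_score(y_test, cleaned.predict(X_test)))
```

The point of the sketch is that the second score typically improves without touching the algorithm at all, which is the kind of lever a data-centric MLOps workflow is built around.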
According to Run:ai's "State of AI Infrastructure Survey, 2023," at 88% of companies, more than half of machine learning models never make it to production. Why? Because projects stall, or because effort is duplicated across business silos.
Seldon claims it can help teams collaborate better and speed up deployment by an average of 84%. This could be important, given there is a lot more regulation coming to AI (such as via the EU AI Act and the U.S. EEOC). Seldon and its competitors are racing to help enterprises remain compliant with those regulations while also improving their AI models internally.
The company has collaborated closely with Neil Lawrence, the inaugural DeepMind Professor of Machine Learning at the University of Cambridge.