action or later. Please see Debugging in WordPress for more information. (This message was added in version 6.7.0.) in /home4/scienrds/scienceandnerds/wp-includes/functions.php on line 6114Source:https:\/\/techcrunch.com\/2023\/04\/05\/cranium-launches-out-of-kpmgs-venture-studio-to-tackle-ai-security\/<\/a><\/br> Several years ago, Jonathan Dambrot, a partner at KPMG, was helping customers deploy and develop AI systems when he started to notice certain gaps in compliance and security. According to him, no one could explain whether their AI was secure \u2014 or even who was responsible for ensuring that.<\/p>\n \u201cFundamentally, data scientists don\u2019t understand the cybersecurity risks of AI and cyber professionals don\u2019t understand data science the way they understand other topics in technology,\u201d Dambrot told TechCrunch in an email interview. \u201cMore awareness of these risks and legislation will be required to ensure these risks are addressed appropriately and that organizations are making decisions on safe and secure AI systems.\u201d<\/p>\n Dambrot\u2019s perception led him to pitch KPMG Studio, KPMG\u2019s internal accelerator, on funding and incubating a software startup to solve the challenges around AI security and compliance. Along with two other co-founders, Felix Knoll (a \u201cgrowth leader\u201d at KPMG Studio) and Paul Spicer (a \u201cproduct owner\u201d at KPMG), and a team of about 25 developers and data scientists, Dambrot spun out the business \u2014 Cranium<\/a>.<\/p>\n To date, Cranium, which launches out of stealth today, has raised $7 million in venture capital from KPMG and SYN Ventures.<\/p>\n \u201cCranium was built to discover and provide visibility to AI systems at the client level, provide security reporting and monitoring, and create compliance and supply chain visibility reporting,\u201d Dambrot continued. \u201cThe core product takes a more holistic view of AI security and supply chain risks. It looks to address gaps in other solutions by providing better visibility into AI systems, providing security into core adversarial risks and providing supply chain visibility.\u201d<\/p>\n To that end, Cranium attempts to map AI pipelines and validate their security, monitoring for outside threats. What threats, you ask? It varies, depending on the customer, Dambrot says. But some of the more common ones involve poisoning (contaminating the data that an AI\u2019s trained on) and text-based attacks<\/a> (tricking AI with malicious instructions).<\/p>\n Cranium makes the claim that, working within an existing machine learning model training and testing environment, it can address these threats head-on. Customers can capture both in-development and deployed AI pipelines, including associated assets involved throughout the AI life cycle. And they can establish an AI security framework, providing their security and data science teams with a foundation for building a security program.<\/p>\n \u201cOur intent is to start having a rich repository of telemetry and use our AI models to be able to identify risks proactively across our client base,\u201d Dambrot said. \u201cMany of our risks are identified in other frameworks. We want to be a source of this data as we start to see a larger embedded base.\u201d<\/p>\n That\u2019s promising a lot \u2014 particularly at a time when new AI threats are emerging every day. And it\u2019s not exactly a brand-new concept. 
At least one other startup, HiddenLayer, promises to do this, defending models from attacks ostensibly without the need to access any raw data or a vendor’s algorithm. Others, like Robust Intelligence, CalypsoAI and Troj.ai, offer a range of products designed to make AI systems more robust.

Cranium is starting from behind, without customers or revenue to speak of.

The elephant in the room is that it’s difficult to pin down real-world examples of attacks against AI systems. Research into the topic has exploded, with more than 1,500 papers on AI security published in 2019 on the scientific publishing site Arxiv.org, up from 56 in 2016, according to a study from Adversa. But there’s little public reporting on attempts by hackers to, for example, attack commercial facial recognition systems, assuming such attempts are happening in the first place.

For what it’s worth, SYN managing partner Jay Leek, an investor in Cranium, thinks there’s a future in AI robustness. Of course he would, given his stake in the venture. Still, in his own words:

“We’ve been tracking the AI security market for years and have never felt the timing was right,” he told TechCrunch via email. “However, with recent activity around how AI can change the world, Cranium is launching with ideal market conditions and timing. The need to ensure proper governance around AI for security, integrity, biases and misuse has never been more important across all industries. The Cranium platform instills security and trust across the entire AI lifecycle, ensuring enterprises achieve the benefits they hope to get from AI while also managing against unforeseen risks.”

Cranium currently has around 30 full-time employees. Assuming business picks up, it expects to end the year with around 40 to 50.
Cranium launches out of KPMG’s venture studio to tackle AI security
2023-04-05 22:12:37