Source: https://techcrunch.com/2023/05/18/meta-bets-big-on-ai-with-custom-chips-and-a-supercomputer/

At a virtual event this morning, Meta lifted the curtain on its efforts to develop in-house infrastructure for AI workloads, including generative AI like the type that underpins its recently launched ad design and creation tools.

It was an attempt at a projection of strength from Meta, which historically has been slow to adopt AI-friendly hardware systems, hobbling its ability to keep pace with rivals such as Google and Microsoft.

"Building our own [hardware] capabilities gives us control at every layer of the stack, from datacenter design to training frameworks," Alexis Bjorlin, VP of Infrastructure at Meta, told TechCrunch. "This level of vertical integration is needed to push the boundaries of AI research at scale."

Over the past decade or so, Meta has spent billions of dollars recruiting top data scientists and building new kinds of AI, including AI that now powers the discovery engines, moderation filters and ad recommenders found throughout its apps and services. But the company has struggled to turn many of its more ambitious AI research innovations into products, particularly on the generative AI front.

Until 2022, Meta largely ran its AI workloads using a combination of CPUs, which tend to be less efficient for those sorts of tasks than GPUs, and a custom chip designed for accelerating AI algorithms.
Meta pulled the plug on a large-scale rollout of the custom chip, which was planned for 2022, and instead placed orders for billions of dollars' worth of Nvidia GPUs that required major redesigns of several of its datacenters.

In an effort to turn things around, Meta made plans to start developing a more ambitious in-house chip, due out in 2025, capable of both training AI models and running them. And that was the main topic of today's presentation.

Meta calls the new chip the Meta Training and Inference Accelerator, or MTIA for short, and describes it as part of a "family" of chips for accelerating AI training and inferencing workloads. ("Inferencing" refers to running a trained model.) The MTIA is an ASIC, a kind of chip that combines different circuits on one board, allowing it to be programmed to carry out one or many tasks in parallel.
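The training-versus-inference split that MTIA is built around can be sketched with a toy model. This is purely illustrative and has nothing to do with Meta's hardware or software stack; it just shows why the two workloads differ: training loops over data and updates weights, while inference is a single forward pass with fixed weights.

```python
def predict(w, b, x):
    """Forward pass of a one-parameter linear model.
    Inference is just this: fixed weights, one pass over the input."""
    return w * x + b

def train(data, lr=0.01, steps=1000):
    """Training: repeatedly run the forward pass, measure the error,
    and nudge the weights in the direction that reduces it."""
    w, b = 0.0, 0.0
    for _ in range(steps):
        for x, y in data:
            err = predict(w, b, x) - y  # forward pass + error
            w -= lr * err * x           # gradient step on w
            b -= lr * err               # gradient step on b
    return w, b

# Fit y = 2x + 1 from a few samples (training), then run the trained
# model on an unseen input (inference).
data = [(0, 1), (1, 3), (2, 5), (3, 7)]
w, b = train(data)
print(round(predict(w, b, 10), 1))  # close to 21.0
```

A chip family that accelerates both, as Meta describes MTIA, has to serve the heavy, throughput-bound update loop and the latency-sensitive forward pass alike.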
Meta bets big on AI with custom chips — and a supercomputer
2023-05-18 21:57:16