Source: https://techcrunch.com/2023/05/10/google-cloud-announces-new-a3-supercomputer-vms-built-to-power-llms/

As we've seen LLMs and generative AI come screaming into our consciousness in recent months, it's clear that these models take enormous amounts of compute power to train and run. Recognizing this, Google Cloud announced a new A3 supercomputer virtual machine today at Google I/O.

The A3 has been purpose-built to handle the considerable demands of these resource-hungry use cases.

"A3 GPU VMs were purpose-built to deliver the highest-performance training for today's ML workloads, complete with modern CPU, improved host memory, next-generation NVIDIA GPUs and major network upgrades," the company wrote in an announcement.

Specifically, the company is arming these machines with NVIDIA's H100 GPUs and combining that with a specialized data center to derive immense computational power with high throughput and low latency, all at what it suggests is a more reasonable price point than you would typically pay for such a package.

If you're looking for specs, consider that it's powered by 8 NVIDIA H100 GPUs, 4th Gen Intel Xeon Scalable processors, 2 TB of host memory and 3.6 TB/s bisectional bandwidth between the 8 GPUs via NVSwitch and NVLink 4.0, two NVIDIA technologies designed to help maximize throughput between multiple GPUs like the ones in this product.

These machines can provide up to 26 exaFlops of power, which should help improve the time and cost of training larger machine learning models.
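The quoted 3.6 TB/s bisection figure lines up with NVLink 4.0's per-GPU bandwidth. A quick sanity check, assuming the commonly cited 900 GB/s aggregate NVLink 4.0 bandwidth per H100 (a figure from NVIDIA's public specs, not stated in this article):

```python
# Sanity-check the quoted 3.6 TB/s bisection bandwidth for an 8-GPU A3 VM.
# Assumes 900 GB/s aggregate NVLink 4.0 bandwidth per H100 GPU (an
# assumption based on NVIDIA's published specs, not on this article).

NUM_GPUS = 8
NVLINK_BW_GB_S = 900  # per-GPU aggregate NVLink 4.0 bandwidth

# Bisection bandwidth: split the 8 GPUs into two halves of 4; traffic
# crossing the cut is bounded by one half's aggregate NVLink bandwidth.
bisection_tb_s = (NUM_GPUS // 2) * NVLINK_BW_GB_S / 1000  # convert GB/s to TB/s

print(f"Estimated bisection bandwidth: {bisection_tb_s} TB/s")  # 3.6 TB/s
```

Four GPUs on either side of the cut, each contributing 900 GB/s, gives exactly the 3.6 TB/s Google quotes.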
What's more, the workloads on these VMs run in Google's specialized Jupiter data center networking fabric, which the company describes as "26,000 highly interconnected GPUs." This enables "full-bandwidth reconfigurable optical links that can adjust the topology on demand." The company says this approach should also contribute to bringing down the cost of running these workloads.

The idea is to give customers an enormous amount of power designed to train more demanding workloads, whether that involves complex machine learning models or LLMs running generative AI applications, and to do it in a more cost-effective way.

Google will be offering the A3 in a couple of ways: customers can run it themselves, or, if they would prefer, as a managed service where Google handles most of the heavy lifting for them. The do-it-yourself approach involves running the A3 VMs on Google Kubernetes Engine (GKE) and Google Compute Engine (GCE), while the managed service runs the A3 VMs on Vertex AI, the company's managed machine learning platform.

While the new A3 VMs are being announced today at Google I/O, they are only available for now by signing up for a preview waitlist.
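For the do-it-yourself route on Compute Engine, provisioning would look like an ordinary instance create. A hedged sketch only: the `a3-highgpu-8g` machine-type name and the zone below are assumptions based on Google Cloud's public machine-type naming, not details from this announcement, and at announcement time access was gated behind the preview waitlist:

```shell
# Sketch: creating an A3 VM on Google Compute Engine (GCE).
# The machine-type name and zone are assumptions; run
# `gcloud compute machine-types list` to see what your project can use.
gcloud compute instances create my-a3-vm \
  --machine-type=a3-highgpu-8g \
  --zone=us-central1-a
```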
Google Cloud announces new A3 supercomputer VMs built to power LLMs
2023-05-10 22:14:14