Source: https://techcrunch.com/2023/06/22/inflection-debuts-its-own-foundation-ai-model-to-rival-google-and-openai-llms/

Inflection, a well-funded AI startup aiming to create "personal AI for everyone," has taken the wraps off the large language model powering its Pi conversational agent. It's hard to evaluate the quality of these things in any way, let alone objectively and systematically, but a little competition is a good thing.

Inflection-1, as the model is called, is roughly GPT-3.5 (a.k.a. ChatGPT) in size and capability, as measured by the computing power used to train them. The company claims it is competitive with or superior to other models in this tier, backing that up with a "technical memo" describing benchmarks it ran against its own model, GPT-3.5, LLaMA, Chinchilla and PaLM-540B.

According to the results it published, Inflection-1 indeed performs well on various measures, such as middle- and high-school-level exam tasks (think biology 101) and "common sense" benchmarks (things like "if Jack throws the ball on the roof, and Jill throws it back down, where is the ball?"). It mainly falls behind on coding, where GPT-3.5 beats it handily and, for comparison, GPT-4 smokes the competition; OpenAI's biggest model is well known to have been a huge leap in quality there, so that's no surprise.

Inflection notes that it expects to publish results for a larger model comparable to GPT-4 and PaLM-2(L), but no doubt it is waiting until those results are worth publishing. At any rate, Inflection-2 or Inflection-1-XL or whatever is in the oven but not quite baked.

So far the community hasn't formally divided AI models into the machine learning equivalent of boxing weight classes, but the concepts map to one another quite well. You don't expect a flyweight to go up against a heavyweight; they're practically different sports. Same with AI models: a small one isn't as capable as a large one, but the small one runs efficiently on a phone while the large one requires a data center. It's an apples-to-oranges comparison.

It's still too early to attempt such a division, since the field is comparatively young and there's no real consensus on which sizes and shapes of AI model should be considered birds of a feather.

Ultimately, for most of these models the proof of the pudding is in the tasting, and until Inflection opens its model up to widespread use and independent evaluation, all its vaunted benchmarks must be taken with a grain of salt. If you want to give Pi a shot, you can add it on one of your messaging apps, or chat with it online.
Inflection debuts its own foundation AI model to rival Google and OpenAI LLMs
2023-06-22 21:48:29