Source: https://www.quantamagazine.org/how-quickly-do-large-language-models-learn-unexpected-skills-20240213/#comments

Two years ago, in a project called the Beyond the Imitation Game benchmark, or BIG-bench, 450 researchers compiled a list of 204 tasks designed to test the capabilities of large language models, which power chatbots like ChatGPT. On most tasks, performance improved predictably and smoothly as the models scaled up: the larger the model, the better it got. But on other tasks, the improvement wasn't smooth. Performance remained near zero for a while, then jumped. Other studies found similar leaps in ability.

The authors described this as "breakthrough" behavior; other researchers have likened it to a phase transition in physics, like liquid water freezing into ice. In a paper published in August 2022, researchers noted that these behaviors are not only surprising but unpredictable, and that they should inform the evolving conversations around AI safety, potential and risk. They called the abilities "emergent," a word that describes collective behaviors that appear only once a system reaches a high level of complexity.

But things may not be so simple. A new paper by a trio of researchers at Stanford University posits that the sudden appearance of these abilities is just a consequence of the way researchers measure the LLM's performance. The abilities, they argue, are neither unpredictable nor sudden. "The transition is much more predictable than people give it credit for," said Sanmi Koyejo, a computer scientist at Stanford and the paper's senior author. "Strong claims of emergence have as much to do with the way we choose to measure as they do with what the models are doing."

We're only now seeing and studying this behavior because of how large these models have become. Large language models train by analyzing enormous datasets of text, words drawn from online sources including books, web searches and Wikipedia, and finding links between words that often appear together. The size is measured in terms of parameters, roughly analogous to all the ways that words can be connected. The more parameters, the more connections an LLM can find. GPT-2 had 1.5 billion parameters, while GPT-3.5, the LLM that powers ChatGPT, uses 350 billion. GPT-4, which debuted in March 2023 and now underlies Microsoft Copilot, reportedly uses 1.75 trillion.

That rapid growth has brought an astonishing surge in performance and efficacy, and no one disputes that large enough LLMs can complete tasks that smaller models can't, including ones for which they weren't trained. The trio at Stanford who cast emergence as a "mirage" recognize that LLMs become more effective as they scale up; in fact, the added complexity of larger models should make it possible to get better at more difficult and diverse problems. But they argue that whether this improvement looks smooth and predictable or jagged and sharp results from the choice of metric, or even from a paucity of test examples, rather than from the model's inner workings.

Three-digit addition offers an example.
In the 2022 BIG-bench study, researchers reported that with fewer parameters, both GPT-3 and another LLM named LaMDA failed to accurately complete addition problems. However, once GPT-3 was trained with 13 billion parameters, its ability changed as if with the flip of a switch. Suddenly, it could add, and LaMDA could too, at 68 billion parameters. This suggests that the ability to add emerges at a certain threshold.

But the Stanford researchers point out that the LLMs were judged only on accuracy: either they could do it perfectly, or they couldn't. So even if an LLM predicted most of the digits correctly, it failed. That didn't seem right. If you're calculating 100 plus 278, then 376 seems like a much more accurate answer than, say, −9.34.

So instead, Koyejo and his collaborators tested the same task using a metric that awards partial credit. "We can ask: How well does it predict the first digit? Then the second? Then the third?" he said.

Koyejo credits the idea for the new work to his graduate student Rylan Schaeffer, who he said noticed that an LLM's performance seems to change with how its ability is measured. Together with Brando Miranda, another Stanford graduate student, they chose new metrics showing that as parameters increased, the LLMs predicted an increasingly correct sequence of digits in addition problems. This suggests that the ability to add isn't emergent, meaning that it undergoes a sudden, unpredictable jump, but gradual and predictable. They find that with a different measuring stick, emergence vanishes.
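To see why the choice of metric matters so much, here is a minimal toy sketch in Python. It is not the Stanford team's code, data, or models; the `simulate` function and the assumption that per-digit accuracy improves smoothly with scale are hypothetical. It scores the same simulated addition answers two ways: all-or-nothing exact match and per-digit partial credit.

```python
# Toy illustration (hypothetical, not the Stanford study's code): the same
# simulated model, scored two ways. We assume the chance of getting each
# digit of an addition answer right grows smoothly as models scale up.
import random

random.seed(0)

def simulate(per_digit_skill, n_problems=5000, n_digits=3):
    """Score a toy model that gets each digit right independently with
    probability per_digit_skill. Returns (exact-match accuracy,
    mean per-digit partial credit) over n_problems addition answers."""
    exact_hits = 0
    partial_credit = 0.0
    for _ in range(n_problems):
        correct = sum(random.random() < per_digit_skill for _ in range(n_digits))
        partial_credit += correct / n_digits   # linear, partial-credit metric
        exact_hits += (correct == n_digits)    # all-or-nothing metric
    return exact_hits / n_problems, partial_credit / n_problems

# Stand-in for model scale: per-digit skill improving smoothly.
for p in (0.2, 0.4, 0.6, 0.8, 0.9, 0.99):
    exact_acc, digit_acc = simulate(p)
    print(f"per-digit skill {p:.2f}  partial credit {digit_acc:.2f}  exact match {exact_acc:.2f}")
```

Because exact match counts an answer only when every digit is right, its expected value is roughly the per-digit skill raised to the power of the answer length, so a smooth underlying gain registers as a long stretch near zero followed by an apparent leap, while the partial-credit column rises steadily the whole time.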
How Quickly Do Large Language Models Learn Unexpected Skills?
2024-02-14 21:58:35