How Selective Forgetting Can Help AI Learn Better
2024-02-29 21:58:51
Source: https://www.quantamagazine.org/how-selective-forgetting-can-help-ai-learn-better-20240228/#comments

A team of computer scientists has created a nimbler, more flexible type of machine learning model. The trick: It must periodically forget what it knows. And while this new approach won’t displace the huge models that undergird the biggest apps, it could reveal more about how these programs understand language.

The new research marks “a significant advance in the field,” said Jea Kwon, an AI engineer at the Institute for Basic Science in South Korea.

The AI language engines in use today are mostly powered by artificial neural networks. Each “neuron” in the network is a mathematical function that receives signals from other such neurons, runs some calculations and sends signals on through multiple layers of neurons. Initially the flow of information is more or less random, but through training, the information flow between neurons improves as the network adapts to the training data. If an AI researcher wants to create a bilingual model, for example, she would train the model with a big pile of text from both languages, which would adjust the connections between neurons in such a way as to relate the text in one language with equivalent words in the other.

But this training process takes a lot of computing power. If the model doesn’t work very well, or if the user’s needs change later on, it’s hard to adapt it. “Say you have a model that has 100 languages, but imagine that one language you want is not covered,” said Mikel Artetxe, a co-author of the new research and founder of the AI startup Reka. “You could start over from scratch, but it’s not ideal.”

Artetxe and his colleagues have tried to circumvent these limitations. A few years ago, Artetxe and others trained a neural network in one language, then erased what it knew about the building blocks of words, called tokens. These are stored in the first layer of the neural network, called the embedding layer. They left all the other layers of the model alone. After erasing the tokens of the first language, they retrained the model on the second language, which filled the embedding layer with new tokens from that language.

Even though the model contained mismatched information, the retraining worked: The model could learn and process the new language. The researchers surmised that while the embedding layer stored information specific to the words used in the language, the deeper levels of the network stored more abstract information about the concepts behind human languages, which then helped the model learn the second language.

“We live in the same world. We conceptualize the same things with different words” in different languages, said Yihong Chen, the lead author of the recent paper. “That’s why you have this same high-level reasoning in the model. An apple is something sweet and juicy, instead of just a word.”
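
The erase-and-retrain procedure described above can be pictured concretely. Below is a minimal sketch in PyTorch, not the authors' code: the architecture, its sizes, the names TinyLM and forget_and_adapt, and the choice to freeze the non-embedding layers during retraining are all assumptions made for illustration. The sketch reinitializes the token embedding layer (and output head) for the new language's vocabulary while leaving the deeper layers, which the article suggests hold more abstract, language-independent information, untouched.

```python
# Minimal sketch (illustrative assumptions, not the paper's code) of erasing a
# model's token embeddings and retraining it on a new language.
import torch
import torch.nn as nn

class TinyLM(nn.Module):
    """Toy transformer language model with a separable embedding layer."""
    def __init__(self, vocab_size, d_model=128, nhead=4, num_layers=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)        # the "embedding layer"
        layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.body = nn.TransformerEncoder(layer, num_layers)  # deeper, more abstract layers
        self.head = nn.Linear(d_model, vocab_size)

    def forward(self, token_ids):
        return self.head(self.body(self.embed(token_ids)))

def forget_and_adapt(model, new_vocab_size, freeze_body=True):
    """Erase language-specific token knowledge, keep the deeper layers.

    Reinitializes the embedding layer and output head for a new vocabulary;
    optionally freezes everything else so only the new embeddings are learned.
    """
    d_model = model.embed.embedding_dim
    model.embed = nn.Embedding(new_vocab_size, d_model)  # fresh tokens for the new language
    model.head = nn.Linear(d_model, new_vocab_size)
    if freeze_body:
        for p in model.body.parameters():
            p.requires_grad = False
    return model

# Pretrain on language A (omitted here), then adapt to language B.
model = TinyLM(vocab_size=8000)                      # imagine this was trained on language A
model = forget_and_adapt(model, new_vocab_size=12000)
optimizer = torch.optim.Adam(
    [p for p in model.parameters() if p.requires_grad], lr=1e-3)

# One illustrative retraining step on placeholder language-B token ids
# (causal masking and a real tokenizer are omitted for brevity).
batch = torch.randint(0, 12000, (4, 16))             # [batch, seq_len]
logits = model(batch[:, :-1])
loss = nn.functional.cross_entropy(
    logits.reshape(-1, logits.size(-1)), batch[:, 1:].reshape(-1))
loss.backward()
optimizer.step()
```

Whether the deeper layers stay frozen or keep training on the second language is a design choice; the core of the idea is only that the embedding layer's language-specific tokens are wiped and refilled while the rest of the network is reused.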