{"id":32160,"date":"2023-05-15T21:58:07","date_gmt":"2023-05-15T21:58:07","guid":{"rendered":"https:\/\/scienceandnerds.com\/2023\/05\/15\/chatbots-dont-know-what-stuff-isnt\/"},"modified":"2023-05-15T21:58:08","modified_gmt":"2023-05-15T21:58:08","slug":"chatbots-dont-know-what-stuff-isnt","status":"publish","type":"post","link":"https:\/\/scienceandnerds.com\/2023\/05\/15\/chatbots-dont-know-what-stuff-isnt\/","title":{"rendered":"Chatbots Don\u2019t Know What Stuff Isn\u2019t"},"content":{"rendered":"

Source:https:\/\/www.quantamagazine.org\/ai-like-chatgpt-are-no-good-at-not-20230512\/#comments<\/a><\/br>
\nChatbots Don\u2019t Know What Stuff Isn\u2019t<\/br>
\n2023-05-15 21:58:07<\/br><\/p>\n


Nora Kassner suspected her computer wasn’t as smart as people thought. In October 2018, Google released a language model algorithm called BERT, which Kassner, a researcher in the same field, quickly loaded on her laptop. It was Google’s first language model that was self-taught on a massive volume of online data. Like her peers, Kassner was impressed that BERT could complete users’ sentences and answer simple questions. It seemed as if the large language model (LLM) could read text like a human (or better).

But Kassner, at the time a graduate student at Ludwig Maximilian University of Munich, remained skeptical. She felt LLMs should understand what their answers mean — and what they don’t mean. It’s one thing to know that a bird can fly. “A model should automatically also know that the negated statement — ‘a bird cannot fly’ — is false,” she said. But when she and her adviser, Hinrich Schütze, tested BERT and two other LLMs in 2019, they found that the models behaved as if words like “not” were invisible.

Since then, LLMs have skyrocketed in size and ability. “The algorithm itself is still similar to what we had before. But the scale and the performance is really astonishing,” said Ding Zhao, who leads the Safe Artificial Intelligence Lab at Carnegie Mellon University.

But while chatbots have improved their humanlike performances, they still have trouble with negation. They know what it means if a bird can’t fly, but they collapse when confronted with more complicated logic involving words like “not,” which is trivial to a human.

“Large language models work better than any system we have ever had before,” said Pascale Fung, an AI researcher at the Hong Kong University of Science and Technology. “Why do they struggle with something that’s seemingly simple while it’s demonstrating amazing power in other things that we don’t expect it to?” Recent studies have finally started to explain the difficulties, and what programmers can do to get around them. But researchers still don’t understand whether machines will ever truly know the word “no.”

Making Connections

It’s hard to coax a computer into reading and writing like a human. Machines excel at storing lots of data and blasting through complex calculations, so developers build LLMs as neural networks: statistical models that assess how objects (words, in this case) relate to one another. Each linguistic relationship carries some weight, and that weight — fine-tuned during training — codifies the relationship’s strength. For example, “rat” relates more to “rodent” than “pizza,” even if some rats have been known to enjoy a good slice.
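To make that idea concrete, here is a minimal sketch of how a relatedness score can be read off word representations. The three-dimensional vectors are made up for illustration; real models learn embeddings with hundreds or thousands of dimensions during training.

```python
# Toy word vectors (hand-picked numbers, not from any real model).
import numpy as np

vectors = {
    "rat":    np.array([0.9, 0.8, 0.1]),
    "rodent": np.array([0.8, 0.9, 0.2]),
    "pizza":  np.array([0.1, 0.2, 0.9]),
}

def similarity(a, b):
    """Cosine similarity: higher means the model treats the words as more related."""
    va, vb = vectors[a], vectors[b]
    return float(va @ vb / (np.linalg.norm(va) * np.linalg.norm(vb)))

print(similarity("rat", "rodent"))  # close to 1.0
print(similarity("rat", "pizza"))   # much lower
```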

In the same way that your smartphone’s keyboard learns that you follow “good” with “morning,” LLMs sequentially predict the next word in a block of text. The bigger the data set used to train them, the better the predictions, and as the amount of data used to train the models has increased enormously, dozens of emergent behaviors have bubbled up. Chatbots have learned style, syntax and tone, for example, all on their own. “An early problem was that they completely could not detect emotional language at all. And now they can,” said Kathleen Carley, a computer scientist at Carnegie Mellon. Carley uses LLMs for “sentiment analysis,” which is all about extracting emotional language from large data sets — an approach used for things like mining social media for opinions.
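As a toy illustration of that prediction objective (nothing like a real LLM’s neural network, just raw counts over a tiny corpus), a sketch:

```python
# Count which word follows which in a toy corpus, then predict the most
# frequent follower. LLMs do this with learned weights over huge corpora
# rather than raw counts, but the objective is the same.
from collections import Counter, defaultdict

corpus = "good morning . good morning . good night . good morning".split()

follower_counts = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follower_counts[current][nxt] += 1

def predict_next(word):
    """Return the most frequently observed next word."""
    return follower_counts[word].most_common(1)[0][0]

print(predict_next("good"))  # "morning", seen more often than "night"
```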

So new models should get the right answers more reliably. “But we’re not applying reasoning,” Carley said. “We’re just applying a kind of mathematical change.” And, unsurprisingly, experts are finding gaps where these models diverge from how humans read.

No Negatives

Unlike humans, LLMs process language by turning it into math. This helps them excel at generating text — by predicting likely combinations of text — but it comes at a cost.

“The problem is that the task of prediction is not equivalent to the task of understanding,” said Allyson Ettinger, a computational linguist at the University of Chicago. Like Kassner, Ettinger tests how language models fare on tasks that seem easy to humans. In 2019, for example, Ettinger tested BERT with diagnostics pulled from experiments designed to test human language ability. The model’s abilities weren’t consistent. For example:

He caught the pass and scored another touchdown. There was nothing he enjoyed more than a good game of ____. (BERT correctly predicted “football.”)

The snow had piled up on the drive so high that they couldn’t get the car out. When Albert woke up, his father handed him a ____. (BERT incorrectly guessed “note,” “letter,” “gun.”)

And when it came to negation, BERT consistently struggled.

A robin is not a ____. (BERT predicted “robin” and “bird.”)

On the one hand, it’s a reasonable mistake. “In very many contexts, ‘robin’ and ‘bird’ are going to be predictive of one another because they’re probably going to co-occur very frequently,” Ettinger said. On the other hand, any human can see it’s wrong.
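This style of probe is easy to reproduce, at least roughly, with an off-the-shelf masked language model. The sketch below uses the Hugging Face transformers fill-mask pipeline with the bert-base-uncased checkpoint, which is an assumption for illustration rather than the exact setup of the studies above; the top predictions may differ by model and version.

```python
# Probe a masked language model with cloze-style prompts.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-uncased")

prompts = [
    "He caught the pass and scored another touchdown. "
    "There was nothing he enjoyed more than a good game of [MASK].",
    "A robin is not a [MASK].",
]

for prompt in prompts:
    print(prompt)
    for prediction in fill_mask(prompt, top_k=3):
        print(f"  {prediction['token_str']}  (score {prediction['score']:.3f})")
```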

By 2023, OpenAI’s ChatGPT and Google’s bot, Bard, had improved enough to predict that Albert’s father had handed him a shovel instead of a gun. Again, this was likely the result of increased and improved data, which allowed for better mathematical predictions.

But the concept of negation still tripped up the chatbots. Consider the prompt, “What animals don’t have paws or lay eggs, but have wings?” Bard replied, “No animals.” ChatGPT correctly replied bats, but also included flying squirrels and flying lemurs, which do not have wings. In general, “negation [failures] tended to be fairly consistent as models got larger,” Ettinger said. “General world knowledge doesn’t help.”

Invisible Words

The obvious question becomes: Why don’t the phrases “do not” or “is not” simply prompt the machine to ignore the best predictions from “do” and “is”?

That failure is not an accident. Negations like “not,” “never” and “none” are known as stop words, which are functional rather than descriptive. Compare them to words like “bird” and “rat” that have clear meanings. Stop words, in contrast, don’t add content on their own. Other examples include “a,” “the” and “with.”

“Some models filter out stop words to increase the efficiency,” said Izunna Okpala, a doctoral candidate at the University of Cincinnati who works on perception analysis. Nixing every “a” and so on makes it easier to analyze a text’s descriptive content. You don’t lose meaning by dropping every “the.” But the process sweeps out negations as well, meaning most LLMs just ignore them.
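A small example makes the side effect visible. The sketch below uses scikit-learn’s built-in English stop-word list, which, like many such lists, includes “not”:

```python
# Standard stop-word filtering discards negations along with filler words.
from sklearn.feature_extraction.text import ENGLISH_STOP_WORDS

sentence = "the bird is not able to fly"
filtered = [word for word in sentence.split() if word not in ENGLISH_STOP_WORDS]

print(filtered)  # ['bird', 'able', 'fly']; the "not" is gone, flipping the meaning
```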

So why can’t LLMs just learn what stop words mean? Ultimately, because “meaning” is something orthogonal to how these models work. Negations matter to us because we’re equipped to grasp what those words do. But models learn “meaning” from mathematical weights: “Rose” appears often with “flower,” “red” with “smell.” And it’s impossible to learn what “not” is this way.

Kassner says the training data is also to blame, and more of it won’t necessarily solve the problem. Models mainly train on affirmative sentences because that’s how people communicate most effectively. “If I say I’m born on a certain date, that automatically excludes all the other dates,” Kassner said. “I wouldn’t say ‘I’m not born on that date.’”

This dearth of negative statements undermines a model’s training. “It’s harder for models to generate factually correct negated sentences, because the models haven’t seen that many,” Kassner said.

Untangling the Not

If more training data isn’t the solution, what might work? Clues come from an analysis posted to arxiv.org in March, where Myeongjun Jang and Thomas Lukasiewicz, computer scientists at the University of Oxford (Lukasiewicz is also at the Vienna University of Technology), tested ChatGPT’s negation skills. They found that ChatGPT was a little better at negation than earlier LLMs, even though the way LLMs learned remained unchanged. “It is quite a surprising result,” Jang said. He believes the secret weapon was human feedback.

The ChatGPT algorithm had been fine-tuned with “human-in-the-loop” learning, where people validate responses and suggest improvements. So when users noticed ChatGPT floundering with simple negation, they reported that poor performance, allowing the algorithm to eventually get it right.

John Schulman, a developer of ChatGPT, described in a recent lecture how human feedback was also key to another improvement: getting ChatGPT to respond “I don’t know” when confused by a prompt, such as one involving negation. “Being able to abstain from answering is very important,” Kassner said. Sometimes “I don’t know” is the answer.

Yet even this approach leaves gaps. When Kassner prompted ChatGPT with “Alice is not born in Germany. Is Alice born in Hamburg?” the bot still replied that it didn’t know. She also noticed it fumbling with double negatives like “Alice does not know that she does not know the painter of the Mona Lisa.”

“It’s not a problem that is naturally solved by the way that learning works in language models,” Lukasiewicz said. “So the important thing is to find ways to solve that.”

One option is to add an extra layer of language processing to negation. Okpala developed one such algorithm for sentiment analysis. His team’s paper, posted on arxiv.org in February, describes applying a library called WordHoard to catch and capture negation words like “not” and antonyms in general. It’s a simple algorithm that researchers can plug into their own tools and language models. “It proves to have higher accuracy compared to just doing sentiment analysis alone,” Okpala said. When he combined his code and WordHoard with three common sentiment analyzers, they all improved in accuracy in extracting opinions — the best one by 35%.
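The published method relies on WordHoard for antonym lookups; the sketch below only gestures at the general idea, not the authors’ actual code, using a tiny hypothetical antonym table to rewrite “not X” into an antonym before a standard sentiment analyzer sees the text.

```python
# Hypothetical antonym table for illustration; the paper uses the WordHoard
# library to look up antonyms instead.
ANTONYMS = {"good": "bad", "happy": "sad", "useful": "useless"}

def resolve_negations(text: str) -> str:
    """Rewrite 'not X' as the antonym of X when one is known."""
    words = text.lower().split()
    rewritten = []
    skip_next = False
    for i, word in enumerate(words):
        if skip_next:
            skip_next = False
            continue
        if word == "not" and i + 1 < len(words) and words[i + 1] in ANTONYMS:
            rewritten.append(ANTONYMS[words[i + 1]])
            skip_next = True
        else:
            rewritten.append(word)
    return " ".join(rewritten)

print(resolve_negations("the update is not good"))
# -> "the update is bad", which a plain bag-of-words sentiment model scores correctly
```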

Another option is to modify the training data. When working with BERT, Kassner used texts with an equal number of affirmative and negated statements. The approach helped boost performance in simple cases where antonyms (“bad”) could replace negations (“not good”). But this is not a perfect fix, since “not good” doesn’t always mean “bad.” The space of “what’s not” is simply too big for machines to sift through. “It’s not interpretable,” Fung said. “You’re not me. You’re not shoes. You’re not an infinite amount of things.”
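As a toy illustration of what such balancing might look like (not Kassner’s actual setup), one can pair each affirmative statement with a negated counterpart and flip its truth label:

```python
# Toy data-balancing sketch: each sentence carries a truth label, since
# naively inserting "not" turns a true statement into a false one.
AFFIRMATIVE = [("a robin is a bird", True), ("a rat is a rodent", True)]

def negate(sentence: str) -> str:
    """Insert 'not' after the first copula; real augmentation needs far more care."""
    return sentence.replace(" is ", " is not ", 1)

balanced = []
for sentence, label in AFFIRMATIVE:
    balanced.append((sentence, label))              # original statement
    balanced.append((negate(sentence), not label))  # negated counterpart, label flipped

for sentence, label in balanced:
    print(f"{label}\t{sentence}")
```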

Finally, since LLMs have surprised us with their abilities before, it’s possible even larger models with even more training will eventually learn to handle negation on their own. Jang and Lukasiewicz are hopeful that diverse training data, beyond just words, will help. “Language is not only described by text alone,” Lukasiewicz said. “Language describes anything. Vision, audio.” OpenAI’s new GPT-4 integrates text, audio and visuals, making it reportedly the largest “multimodal” LLM to date.

Future Not Clear

But while these techniques, together with greater processing and data, might lead to chatbots that can master negation, most researchers remain skeptical. “We can’t actually guarantee that that will happen,” Ettinger said. She suspects it’ll require a fundamental shift, moving language models away from their current objective of predicting words.

After all, when children learn language, they’re not attempting to predict words, they’re just mapping words to concepts. They’re “making judgments like ‘is this true’ or ‘is this not true’ about the world,” Ettinger said.

If an LLM could separate true from false this way, it would open the possibilities dramatically. “The negation problem might go away when the LLM models have a closer resemblance to humans,” Okpala said.

Of course, this might just be switching one problem for another. “We need better theories of how humans recognize meaning and how people interpret texts,” Carley said. “There’s just a lot less money put into understanding how people think than there is to making better algorithms.”

And dissecting how LLMs fail is getting harder, too. State-of-the-art models aren’t as transparent as they used to be, so researchers evaluate them based on inputs and outputs, rather than on what happens in the middle. “It’s just proxy,” Fung said. “It’s not a theoretical proof.” So what progress we have seen isn’t even well understood.

And Kassner suspects that the rate of improvement will slow in the future. “I would have never imagined the breakthroughs and the gains we’ve seen in such a short amount of time,” she said. “I was always quite skeptical whether just scaling models and putting more and more data in it is enough. And I would still argue it’s not.”
