The Great Pretender
2023-04-03 21:52:31
Source: https://techcrunch.com/2023/04/03/the-great-pretender/
There is a good reason not to trust what today's AI constructs tell you, and it has nothing to do with the fundamental nature of intelligence or humanity, with Wittgensteinian concepts of language representation, or even disinfo in the dataset. All that matters is that these systems do not distinguish between something that *is* correct and something that *looks* correct. Once you understand that the AI considers these things more or less interchangeable, everything makes a lot more sense.

Now, I don't mean to short-circuit any of the fascinating and wide-ranging discussions about this happening continually across every form of media and conversation. We have everyone from philosophers and linguists to engineers and hackers to bartenders and firefighters questioning and debating what "intelligence" and "language" truly are, and whether something like ChatGPT possesses them.

This is amazing! And I've learned a lot already as some of the smartest people in this space enjoy their moment in the sun, while from the mouths of comparative babes come fresh new perspectives.

But at the same time, it's a lot to sort through over a beer or coffee when someone asks, "What about all this GPT stuff? Kind of scary how smart AI is getting, right?" Where do you start: with Aristotle, the Mechanical Turk, the perceptron, or "Attention Is All You Need"?

During one of these chats I hit on a simple approach that I've found helps people understand why these systems can be both really cool and also totally untrustworthy, without subtracting at all from their usefulness in some domains or from the amazing conversations being had around them. I thought I'd share it in case you find the perspective useful when talking about this with other curious, skeptical people who nevertheless don't want to hear about vectors or matrices.

There are only three things to understand, which lead to a natural conclusion:

1. These models are created by having them observe the relationships between words, sentences, and so on in an enormous dataset of text, then build their own internal statistical map of how all these millions and millions of words and concepts are associated and correlated. No one has said "this is a noun, this is a verb, this is a recipe, this is a rhetorical device"; these are things that show up naturally in patterns of usage.

2. These models are not specifically taught how to answer questions, in contrast to the familiar software that companies like Google and Apple have been calling AI for the last decade. *Those* are basically Mad Libs with the blanks leading to APIs: every question is either accounted for or produces a generic response. With large language models, the question is just a series of words like any other.

3. These models have a fundamental expressive quality of "confidence" in their responses. In a simple example, a cat-recognition AI would go from 0, meaning completely sure that's not a cat, to 100, meaning absolutely sure that's a cat. You can tell it to say "yes, it's a cat" if it's at a confidence of 85, or 90, or whatever produces your preferred response metric.
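The thresholding described in point 3 can be sketched in a few lines. The score scale and the 85-point cutoff come from the example above; the function name is invented for illustration, not taken from any real system:

```python
# Minimal sketch of confidence thresholding: a classifier emits a raw
# confidence score (0-100 here, per the example above), and a cutoff
# turns that score into a yes/no verdict. Names and values are
# illustrative only.

def label_cat(confidence: float, threshold: float = 85.0) -> str:
    """Map a raw confidence score to a human-readable verdict."""
    return "yes, it's a cat" if confidence >= threshold else "not a cat"

print(label_cat(92.0))  # above the cutoff: "yes, it's a cat"
print(label_cat(40.0))  # below the cutoff: "not a cat"
```

Note that the threshold only tunes how often the system says yes; it says nothing about whether the underlying score tracks reality, which is exactly the point the essay goes on to make.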

So given what we know about how the model works, here's the crucial question: what is it confident *about*? It doesn't know what a cat or a question is, only statistical relationships found between data nodes in a training set. A minor tweak would have the cat detector equally confident the picture showed a cow, or the sky, or a still-life painting. The model can't be confident in its own "knowledge" because it has no way of actually evaluating the content of the data it has been trained on.

The AI is expressing how sure it is that its answer *appears correct to the user*.

This is true of the cat detector, and it is true of GPT-4; the difference is a matter of the length and complexity of the output. The AI cannot distinguish between a right and a wrong answer; it can only make a prediction of how *likely* a series of words is to be accepted as correct. That is why it must be considered the world's most comprehensively informed bullshitter rather than an authority on any subject. It doesn't even know it's bullshitting you: it has been trained to produce a response that *statistically resembles* a correct answer, and it will say *anything* to improve that resemblance.

The AI doesn't know the answer to any question, because it doesn't understand the question. It doesn't know what questions are. It doesn't "know" anything! The answer follows the question because, extrapolating from its statistical analysis, that series of words is the most likely to follow the previous series of words. Whether those words refer to real people, places, events, etc. is not material; all that matters is that they are *like* real ones.
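The "most likely series of words" idea can be made concrete with a toy bigram model. The corpus below is invented, and a real language model learns vastly richer statistics over far more context, but the principle is the same: emit whatever followed most often in training, with no notion of truth anywhere.

```python
from collections import Counter, defaultdict

# Toy next-word predictor: count which word follows which in a tiny
# invented corpus, then always emit the statistically most common
# successor. Nothing here represents what any word *means*.

corpus = "the cat sat on the mat and the cat slept".split()

successors = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    successors[prev][nxt] += 1

def most_likely_next(word: str) -> str:
    # Pick whichever word followed `word` most often in training.
    return successors[word].most_common(1)[0][0]

print(most_likely_next("the"))  # "cat": it follows "the" twice, "mat" once
```

Swap in a corpus of falsehoods and the predictor will reproduce them with identical confidence; frequency, not accuracy, is the only signal it has.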

It's the same reason AI can produce a Monet-like painting that isn't a Monet: all that matters is that it has all the characteristics that cause people to identify a piece of artwork as his. Today's AI approximates factual responses the way it would approximate "Water Lilies."

Now, I hasten to add that this isn't an original or groundbreaking concept; it's basically another way to explain the stochastic parrot, or the undersea octopus. Those problems were identified very early by very smart people, and they represent a great reason to read commentary on tech matters widely.

But in the context of today's chatbot systems, I've just found that people intuitively get this approach: the models don't understand facts or concepts, only relationships between words, and their responses are an "artist's impression" of an answer. Their goal, when you get down to it, is to fill in the blank *convincingly*, not *correctly*. This is why their responses fundamentally cannot be trusted.

Of course, sometimes, even a lot of the time, the answer *is* correct! And that isn't an accident: for many questions, the answer that looks the most correct is the correct answer. That is what makes these models so powerful, and so dangerous. There is so, so much you can extract from a systematic study of millions of words and documents. And unlike recreating "Water Lilies" exactly, there's a flexibility to language that lets an approximation of a factual response also be factual, but that also lets a totally or partially invented response appear equally or more so. The only thing the AI cares about is that the answer scans right.

This leaves the door open to discussions about whether this is truly knowledge, what if anything the models "understand," whether they have achieved some form of intelligence, what intelligence even is, and so on. Bring on the Wittgenstein!

Furthermore, it also leaves open the possibility of using these tools in situations where truth isn't really a concern. If you want to generate five variants of an opening paragraph to get around writer's block, an AI might be indispensable. If you want to make up a story about two endangered animals, or write a sonnet about Pokémon, go for it. As long as it is not crucial that the response reflect reality, a large language model is a willing and able partner, and not coincidentally, that's where people seem to be having the most fun with it.

Where and when AI gets it wrong is very, very difficult to predict, because the models are too large and opaque. Imagine a card catalog the size of a continent, organized and updated over a period of a hundred years by robots, from first principles that they came up with on the fly. You think you can just walk in and understand the system? The AI gives a right answer to a difficult question and a wrong answer to an easy one. Why? Right now, that is one question that neither AI nor its creators can answer.

This may well change in the future, perhaps even the near future. Everything is moving so quickly and unpredictably that nothing is certain. But for the present, this is a useful mental model to keep in mind: the AI wants you to believe it and will say anything to improve its chances.


Categories: Science, Tech, Technology