Risky Giant Steps Can Solve Optimization Problems Faster

Source: https://www.quantamagazine.org/risky-giant-steps-can-solve-optimization-problems-faster-20230811/
2023-08-14 21:58:41
“It turns out that we did not have full understanding” of the theory behind gradient descent, said Shuvomoy Das Gupta, an optimization researcher at the Massachusetts Institute of Technology. Now, he said, we’re “closer to understanding what gradient descent is doing.”

The technique itself is deceptively simple. It uses something called a cost function, which looks like a smooth, curved line meandering up and down across a graph. For any point on that line, the height represents cost in some way — how much time, energy or error the operation will incur when tuned to a specific setting. The higher the point, the farther from ideal the system is. Naturally, you want to find the lowest point on this line, where the cost is smallest.

Gradient descent algorithms feel their way to the bottom by picking a point and calculating the slope (or gradient) of the curve around it, then moving in the direction where the slope is steepest. Imagine this as feeling your way down a mountain in the dark. You may not know exactly where to move, how long you’ll have to hike or how close to sea level you will ultimately get, but if you head down the sharpest descent, you should eventually arrive at the lowest point in the area.
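As a minimal sketch of the basic idea (not the algorithms from the studies discussed here), the update rule fits in a few lines of Python; the example cost function and step size below are arbitrary illustrative choices:

```python
def gradient_descent(grad, x0, step=0.1, n_steps=100):
    """Repeatedly step against the gradient (the steepest downhill direction)."""
    x = x0
    for _ in range(n_steps):
        x = x - step * grad(x)
    return x

# Example: minimize f(x) = (x - 3)**2, whose gradient is 2 * (x - 3).
# The minimum sits at x = 3, and the iterates home in on it.
x_min = gradient_descent(lambda x: 2 * (x - 3), x0=0.0)
```

Each iteration moves a fraction (the step size) of the local slope downhill, which is exactly the feeling-your-way-in-the-dark picture above.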

Unlike the metaphorical mountaineer, optimization researchers can program their gradient descent algorithms to take steps of any size. Giant leaps are tempting but also risky, as they could overshoot the answer. Instead, the field’s conventional wisdom for decades has been to take baby steps. In gradient descent equations, this means a step size no bigger than 2, though no one could prove that smaller step sizes were always better.
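The cap of 2 can be seen directly on the simplest smooth cost function. This sketch illustrates the classical stability bound, with the function’s curvature normalized to 1 (an assumption made for the example):

```python
def iterate(step, x0=1.0, n=50):
    """Gradient descent on f(x) = x**2 / 2, whose gradient is f'(x) = x.
    Each update is x -> (1 - step) * x, so the iterates shrink exactly
    when |1 - step| < 1, i.e. when 0 < step < 2."""
    x = x0
    for _ in range(n):
        x = x - step * x
    return x

below_cap = iterate(1.9)  # shrinks by a factor 0.9 each step: converges to 0
above_cap = iterate(2.1)  # grows by a factor 1.1 in magnitude: diverges
```

Just under the cap, 50 steps shrink the error a hundredfold; just over it, the same 50 steps blow it up past 100.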

With advances in computer-aided proof techniques, optimization theorists have begun testing more extreme strategies. In one study, first posted in 2022 and recently published in Mathematical Programming, Das Gupta and others tasked a computer with finding the best step lengths for an algorithm restricted to running only 50 steps — a sort of meta-optimization problem, since it was trying to optimize optimization. They found that the optimal 50 steps varied significantly in length, with one step in the middle of the sequence reaching a length of nearly 37, far above the typical cap of 2.

The findings suggested that optimization researchers had missed something. Intrigued, Grimmer sought to turn Das Gupta’s numerical results into a more general theorem. To get past the arbitrary cap of 50 steps, Grimmer explored the optimal step lengths for a sequence that could repeat, getting closer to the optimal answer with each repetition. He ran millions of permutations of step-length sequences through the computer, finding those that converged on the answer the fastest.

Grimmer found that the fastest sequences always had one thing in common: The middle step was always a big one. Its size depended on the number of steps in the repeating sequence. For a three-step sequence, the big step had length 4.9. For a 15-step sequence, the algorithm recommended one step of length 29.7. And for a 127-step sequence, the longest one tested, the big central leap was a whopping 370. At first that sounds like an absurdly large number, Grimmer said, but there were enough total steps to make up for that giant leap, so even if you blew past the bottom, you could still make it back quickly. His paper showed that this sequence can arrive at the optimal point nearly three times faster than it would by taking constant baby steps. “Sometimes, you should really overcommit,” he said.
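A classical analogue of this effect exists for the special case of quadratic cost functions, where the best repeating schedule (so-called Chebyshev steps) likewise contains one step far above the cap of 2. The sketch below illustrates that older idea rather than Grimmer’s construction, and the curvature range [0.01, 1] is an arbitrary choice:

```python
import math

def worst_contraction(steps, lams):
    """Worst-case shrink factor of one pass of cyclic gradient descent
    over the family of quadratics f(x) = lam * x**2 / 2, lam in lams."""
    return max(abs(math.prod(1 - h * lam for h in steps)) for lam in lams)

m, L = 0.01, 1.0  # range of curvatures the schedule must handle
lams = [m + (L - m) * k / 10000 for k in range(10001)]

# Best constant step for this family is 2 / (L + m), repeated three times.
const = [2 / (L + m)] * 3

# Chebyshev schedule: reciprocal step sizes sit at Chebyshev nodes of
# [m, L]. Note the one very long step (about 13, far above 2); for
# quadratics the order of the steps does not change the product.
nodes = [(L + m) / 2 + (L - m) / 2 * math.cos((2 * i - 1) * math.pi / 6)
         for i in (1, 2, 3)]
cheb = [1 / lam for lam in nodes]  # roughly [1.07, 1.98, 13.1]

c_const = worst_contraction(const, lams)  # ~0.94 per three-step pass
c_cheb = worst_contraction(cheb, lams)    # ~0.84 per three-step pass
```

Even though the long step wildly overshoots on high-curvature functions, the small steps on either side more than repair the damage, so the cycle as a whole shrinks the worst-case error faster than the best constant step can.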

This cyclical approach represents a different way of thinking about gradient descent, said Aymeric Dieuleveut, an optimization researcher at the École Polytechnique in Palaiseau, France. “This intuition, that I should think not step by step, but as a number of steps consecutively — I think this is something that many people ignore,” he said. “It’s not the way it’s taught.” (Grimmer notes that this reframing was also proposed for a similar class of problems in a 2018 master’s thesis by Jason Altschuler, an optimization researcher now at the University of Pennsylvania.)

However, while these insights may change how researchers think about gradient descent, they likely won’t change how the technique is currently used. Grimmer’s paper focused only on smooth functions, which have no sharp kinks, and convex functions, which are shaped like a bowl and have only one optimal value at the bottom. These kinds of functions are fundamental to theory but less relevant in practice; the optimization programs machine learning researchers use are usually much more complicated. These require versions of gradient descent that have “so many bells and whistles, and so many nuances,” Grimmer said.

Some of these souped-up techniques can go faster than Grimmer’s big-step approach, said Gauthier Gidel, an optimization and machine learning researcher at the University of Montreal. But these techniques come at an additional operational cost, so the hope has been that regular gradient descent could win out with the right combination of step sizes. Unfortunately, the new study’s threefold speedup isn’t enough.

“It shows a marginal improvement,” Gidel said. “But I guess the real question is: Can we really close this gap?”

The results also raise an additional theoretical mystery that has kept Grimmer up at night. Why did the ideal patterns of step sizes all have such a symmetric shape? Not only is the biggest step always smack in the center, but the same pattern appears on either side of it: Keep zooming in and subdividing the sequence, he said, and you get an “almost fractal pattern” of bigger steps surrounded by smaller steps. The repetition suggests an underlying structure governing the best solutions that no one has yet managed to explain. But Grimmer, at least, is hopeful.

“If I can’t crack it, someone else will,” he said.
