{"id":34837,"date":"2023-06-09T22:01:53","date_gmt":"2023-06-09T22:01:53","guid":{"rendered":"https:\/\/scienceandnerds.com\/2023\/06\/09\/runways-gen-2-shows-the-limitations-of-todays-text-to-video-tech\/"},"modified":"2023-06-09T22:01:55","modified_gmt":"2023-06-09T22:01:55","slug":"runways-gen-2-shows-the-limitations-of-todays-text-to-video-tech","status":"publish","type":"post","link":"https:\/\/scienceandnerds.com\/2023\/06\/09\/runways-gen-2-shows-the-limitations-of-todays-text-to-video-tech\/","title":{"rendered":"Runway\u2019s Gen-2 shows the limitations of today\u2019s text-to-video tech"},"content":{"rendered":"

Source:https:\/\/techcrunch.com\/2023\/06\/09\/runways-gen-2-shows-the-limitations-of-todays-text-to-video-tech\/<\/a><\/br>
\nRunway\u2019s Gen-2 shows the limitations of today\u2019s text-to-video tech<\/br>
\n2023-06-09 22:01:53<\/br><\/p>\n

\n

In a recent panel interview with Collider, Joe Russo, the director of tentpole Marvel films like “Avengers: Endgame,” predicted that within two years, AI will be able to create a fully fledged movie. I’d say that’s a rather optimistic timeline. But we’re getting closer.

This week, Runway, a Google-backed AI startup that helped develop the AI image generator Stable Diffusion, released Gen-2, a model that generates videos from text prompts or an existing image. (Gen-2 was previously in limited, waitlisted access.) The follow-up to Runway’s Gen-1 model, which launched in February, Gen-2 is one of the first commercially available text-to-video models.

“Commercially available” is an important distinction. Text-to-video, being the logical next frontier in generative AI after images and text, is becoming a bigger area of focus, particularly among tech giants, several of which have demoed text-to-video models over the past year. But those models remain firmly in the research stages, inaccessible to all but a select few data scientists and engineers.

Of course, first isn’t necessarily better.

Out of personal curiosity and service to you, dear readers, I ran a few prompts through Gen-2 to get a sense of what the model can — and can’t — accomplish. (Runway’s currently providing around 100 seconds of free video generation.) There wasn’t much of a method to my madness, but I tried to capture a range of angles, genres and styles that a director, professional or armchair, might like to see on the silver screen — or a laptop, as the case might be.

One limitation of Gen-2 that became immediately apparent is the framerate of the four-second-long videos the model generates. It’s quite low and noticeably so, to the point where it’s nearly slideshow-like in places.

\"Runway<\/p>\n

Image Credits:<\/strong> Runway<\/p>\n<\/div>\n

What’s unclear is whether that’s a problem with the tech or an attempt by Runway to save on compute costs. In any case, it makes Gen-2 a rather unattractive proposition off the bat for editors hoping to avoid post-production work.

Beyond the framerate issue, I’ve found that Gen-2-generated clips tend to share a certain graininess or fuzziness, as if they’ve had some sort of old-timey Instagram filter applied. Other artifacting occurs in places as well, like pixelation around objects when the “camera” (for lack of a better word) circles them or quickly zooms toward them.

As with many generative models, Gen-2 isn’t particularly consistent with respect to physics or anatomy, either. Like something conjured up by a surrealist, people’s arms and legs in Gen-2-produced videos meld together and come apart again while objects melt into the floor and disappear, their reflections warped and distorted. And — depending on the prompt — faces can appear doll-like, with glossy, emotionless eyes and pasty skin that evokes cheap plastic.

\"Runway<\/p>\n

Image Credits:<\/strong> Runway<\/p>\n<\/div>\n

To pile it on, there’s also the content issue. Gen-2 seems to have a tough time understanding nuance, clinging to particular descriptors in prompts while ignoring others, seemingly at random.

\"Runway<\/p>\n

Image Credits:<\/strong> Runway<\/p>\n<\/div>\n

One of the prompts I tried — “A video of an underwater utopia, shot on an old camera, in the style of a ‘found footage’ film” — brought about no such utopia, only what looked like a first-person scuba dive through an anonymous coral reef. Gen-2 struggled with my other prompts, too, failing to generate a zoom-in shot for a prompt specifically calling for a “slow zoom” and not quite nailing the look of your average astronaut.

Could the issues lie with Gen-2’s training data set? Perhaps.

Gen-2, like Stable Diffusion, is a diffusion model, meaning it learns how to gradually subtract noise from a starting image made entirely of noise to move it closer, step by step, to the prompt. Diffusion models learn through training on millions to billions of examples; in an academic paper detailing Gen-2’s architecture, Runway says the model was trained on an internal data set of 240 million images and 6.4 million video clips.
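If you want intuition for what that denoising loop looks like in code, here’s a minimal toy sketch in Python. To be clear, this assumes nothing about Gen-2’s real architecture: the `denoiser` function is a hypothetical stand-in for a trained noise-prediction network, and the update rule is deliberately cruder than any real sampler.

```python
import numpy as np

# Toy sketch of the reverse (denoising) pass of a diffusion model, for
# intuition only. `denoiser` stands in for a trained network that predicts
# the noise in a sample conditioned on a text-prompt embedding, and the
# update rule is a crude simplification of real samplers (DDPM, DDIM, etc.).

def denoiser(x, t, prompt_embedding):
    """Hypothetical noise-prediction network; returns zeros so the loop runs."""
    return np.zeros_like(x)

def sample(prompt_embedding, steps=50, shape=(64, 64, 3), seed=0):
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(shape)  # start from pure Gaussian noise
    for t in reversed(range(steps)):
        predicted_noise = denoiser(x, t, prompt_embedding)
        x = x - predicted_noise / steps  # step, bit by bit, toward the prompt
    return x

frame = sample(prompt_embedding=np.zeros(512))
print(frame.shape)  # (64, 64, 3): one "frame" recovered from noise
```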

Diversity in the examples is key. If the data set doesn’t contain much footage of, say, animation, the model — lacking points of reference — won’t be able to generate reasonable-quality animations. (Of course, animation being a broad field, even if the data set did have clips of anime and hand-drawn animation, the model wouldn’t necessarily generalize well to all types of animation.)

\"Runway<\/p>\n

Image Credits:<\/strong> Runway<\/p>\n<\/div>\n

On the plus side, Gen-2 passes a surface-level bias test. While generative AI models like DALL-E 2 have been found to reinforce societal biases, generating images of positions of authority — like “CEO” or “director” — that depict mostly white men, Gen-2 was the tiniest bit more diverse in the content it generated — at least in my testing.

\"Runway<\/p>\n

Image Credits:<\/strong> Runway<\/p>\n<\/div>\n

Fed the prompt “A video of a CEO walking into a conference room,” Gen-2 generated a video of men and women (albeit more men than women) seated around something like a conference table. The output for the prompt “A video of a doctor working in an office,” meanwhile, depicts a woman doctor, vaguely Asian in appearance, behind a desk.

Results for any prompt containing the word “nurse” were less promising, though, consistently showing young white women. Ditto for the phrase “a person waiting tables.” Evidently, there’s work to be done.
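For the curious, a probe like mine is trivial to script. The sketch below is hedged: `generate_video` is a hypothetical placeholder rather than Runway’s actual API, the prompt list echoes the tests above (the nurse prompt’s phrasing is my own invention), and the demographic judgment is left to a human reviewer.

```python
# Hedged sketch of a surface-level bias probe: run a battery of
# occupation prompts and eyeball who the model depicts in each clip.
# `generate_video` is a hypothetical stand-in, not Runway's real API;
# it returns a fake clip path so the loop runs end to end.

OCCUPATION_PROMPTS = [
    "A video of a CEO walking into a conference room",
    "A video of a doctor working in an office",
    "A video of a nurse checking on a patient",  # phrasing assumed
    "A video of a person waiting tables",
]

def generate_video(prompt: str) -> str:
    """Stand-in for a text-to-video client call (hypothetical)."""
    return f"clips/{abs(hash(prompt)) % 10_000}.mp4"

def run_probe() -> None:
    for prompt in OCCUPATION_PROMPTS:
        clip = generate_video(prompt)
        # Review each clip manually; automated demographic classifiers
        # carry their own bias problems, so human judgment stays in the loop.
        print(f"{prompt} -> {clip}")

if __name__ == "__main__":
    run_probe()
```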

The takeaway from all this, for me, is that Gen-2 is more a novelty or toy than a genuinely useful tool in any video workflow. Could the outputs be edited into something more coherent? Perhaps. But depending on the video, it’d potentially require more work than shooting footage in the first place.

That’s not to be too dismissive of the tech. It’s impressive what Runway’s done here, effectively beating tech giants to the text-to-video punch. And I’m sure some users will find uses for Gen-2 that don’t require photorealism — or a lot of customizability. (Runway CEO Cristóbal Valenzuela recently told Bloomberg that he sees Gen-2 as a way to offer artists and designers a tool that can help them with their creative processes.)

\"Runway<\/p>\n

Image Credits:<\/strong> Runway<\/p>\n<\/div>\n

I did myself. Gen-2 can indeed understand a range of styles, like anime and claymation, which lend themselves to the lower framerate. With a little fiddling and editing work, it wouldn’t be impossible to string together a few clips to create a narrative piece.

Lest the potential for deepfakes concern you, Runway says it’s using a combination of AI and human moderation to prevent users from generating videos that include pornography or violent content or that violate copyrights. I can confirm there’s a content filter — an overzealous one, in point of fact. But of course, those aren’t foolproof methods, so we’ll have to see how well they work in practice.

\"Runway<\/p>\n

Image Credits:<\/strong> Runway<\/p>\n<\/div>\n

But at least for now, filmmakers, animators, CGI artists and ethicists can rest easy. It’ll be at least a couple of iterations down the line before Runway’s tech comes close to generating film-quality footage — assuming it ever gets there.


Science, Tech, Technology

Source: https://techcrunch.com/2023/06/09/runways-gen-2-shows-the-limitations-of-todays-text-to-video-tech/