Source: https://techcrunch.com/2023/06/09/runways-gen-2-shows-the-limitations-of-todays-text-to-video-tech/

In a recent panel interview with Collider, Joe Russo, the director of tentpole Marvel films like “Avengers: Endgame,” predicted that within two years, AI will be able to create a fully fledged movie. I’d say that’s a rather optimistic timeline. But we’re getting closer.

This week, Runway, a Google-backed AI startup that helped develop the AI image generator Stable Diffusion, released Gen-2, a model that generates videos from text prompts or an existing image. (Gen-2 was previously in limited, waitlisted access.) The follow-up to Runway’s Gen-1 model launched in February, Gen-2 is one of the first commercially available text-to-video models.

“Commercially available” is an important distinction. Text-to-video, the logical next frontier in generative AI after images and text, is becoming a bigger area of focus, particularly among tech giants, several of which have demoed text-to-video models over the past year. But those models remain firmly in the research stage, inaccessible to all but a select few data scientists and engineers.

Of course, first isn’t necessarily better.

Out of personal curiosity and service to you, dear readers, I ran a few prompts through Gen-2 to get a sense of what the model can (and can’t) accomplish. (Runway’s currently providing around 100 seconds of free video generation.)
There wasn’t much of a method to my madness, but I tried to capture a range of angles, genres and styles that a director, professional or armchair, might like to see on the silver screen (or a laptop, as the case may be).

One limitation of Gen-2 became immediately apparent: the framerate of the four-second videos the model generates. It’s noticeably low, to the point where the output is nearly slideshow-like in places.
Runway’s Gen-2 shows the limitations of today’s text-to-video tech
2023-06-09 22:01:53