Source: https://www.theverge.com/23162454/openai-dall-e-image-generation-tool-creative-revolution

Disclaimer: All images in this story were generated using artificial intelligence.

Every few years, a technology comes along that splits the world neatly into before and after. I remember the first time I saw a YouTube video embedded on a web page; the first time I synced Evernote files between devices; the first time I scanned tweets from people nearby to see what they were saying about a concert I was attending.

I remember the first time I Shazam'd a song, summoned an Uber, and streamed myself live using Meerkat. What makes these moments stand out, I think, is the sense that some unpredictable set of new possibilities had been unlocked. What would the web become when you could easily add video clips to it? When you could summon any file to your phone from the cloud? When you could broadcast yourself to the world?

It's been a few years since I saw the sort of nascent technology that made me call my friends and say: you've got to see this. But this week I did, because I have a new one to add to the list. It's an image generation tool called DALL-E, and while I have very little idea of how it will eventually be used, it's one of the most compelling new products I've seen since I started writing this newsletter.

Technically, the technology in question is DALL-E 2. It was created by OpenAI, a seven-year-old San Francisco company whose mission is to create a safe and useful artificial general intelligence. OpenAI is already well known in its field for creating GPT-3, a powerful tool for generating sophisticated text passages from simple prompts, and Copilot, a tool that helps automate writing code for software engineers.

DALL-E, a portmanteau of the surrealist Salvador Dalí and Pixar's WALL-E, takes text prompts and generates images from them. In January 2021, the company introduced the first version of the tool, which was limited to 256-by-256 pixel squares.

But the second version, which entered a private research beta in April, feels like a radical leap forward. The images are now 1,024 by 1,024 pixels and can incorporate new techniques such as "inpainting," which replaces one or more elements of an image with another. (Imagine taking a photo of an orange in a bowl and replacing it with an apple.) DALL-E has also improved at understanding the relationship between objects, which helps it depict increasingly fantastic scenes: a koala dunking a basketball, an astronaut riding a horse.

For weeks now, threads of DALL-E-generated images have been taking over my Twitter timeline. And after I mused about what I might do with the technology (namely, waste countless hours on it), a very nice person at OpenAI took pity on me and invited me into the private research beta. The number of people who have access is now in the low thousands, a spokeswoman told me today; the company is hoping to add 1,000 people a week.