{"id":31525,"date":"2023-05-10T21:36:50","date_gmt":"2023-05-10T21:36:50","guid":{"rendered":"https:\/\/scienceandnerds.com\/2023\/05\/10\/google-i-o-2023-is-a-wrap-heres-a-list-of-everything-announced\/"},"modified":"2023-05-10T21:36:51","modified_gmt":"2023-05-10T21:36:51","slug":"google-i-o-2023-is-a-wrap-heres-a-list-of-everything-announced","status":"publish","type":"post","link":"https:\/\/scienceandnerds.com\/2023\/05\/10\/google-i-o-2023-is-a-wrap-heres-a-list-of-everything-announced\/","title":{"rendered":"Google I\/O 2023 is a wrap \u2014 here\u2019s a list of everything announced"},"content":{"rendered":"

Source:https:\/\/techcrunch.com\/2023\/05\/10\/heres-everything-google-has-announced-at-i-o-so-far\/<\/a><\/br>
\nGoogle I\/O 2023 is a wrap \u2014 here\u2019s a list of everything announced<\/br>
\n2023-05-10 21:36:50<\/br><\/p>\n


On Google I/O keynote day, the search and internet advertising giant puts forth a rapid-fire stream of announcements during its developer conference, unveiling much of what it has been working on recently.

Since we know you don't always have time to watch a two-hour presentation, the TechCrunch team took that on and delivered story after story on new products and features. Here, we give you quick hits of the biggest news from the keynote as they were announced, all in an easy-to-digest, easy-to-skim list. Here we go:

Google Maps
\"Google's<\/p>\n

Image Credits:<\/strong> Google<\/p>\n<\/div>\n

Google Maps unveiled a new "Immersive View for Routes" feature in select cities. The new feature brings all of the information a user may need into one place, including details about traffic simulations, bike lanes, complex intersections, parking and more. Read more.

Magic Editor and Magic Compose

\"\"<\/p>\n

We always want to change something about the photo we just took, and Google's Magic Editor feature now uses AI for more complex edits to specific parts of a photo, such as the foreground or background; it can also fill in gaps in the photo or even reposition the subject for a better-framed shot. Check it out.

There is also a new feature called Magic Compose, demoed today, which rewrites messages and conversation texts in different styles. "For example, the feature could make the message sound more positive or more professional, or you could just have fun with it and make the message sound like it was written by your favorite playwright, aka Shakespeare," Sarah writes. Read more.

PaLM 2
\"\"<\/p>\n

Image Credits:<\/strong> Google<\/p>\n<\/div>\n

Frederic has your look at PaLM 2, Google's newest large language model (LLM). He writes, "PaLM 2 will power Google's updated Bard chat tool, the company's competitor to OpenAI's ChatGPT, and function as the foundation model for most of the new AI features the company is announcing today." PaLM 2 also features improved support for writing and debugging code. More here. Also, Kyle takes a deeper dive into PaLM 2 with a more critical look at the model through the lens of a Google-authored research paper.
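
To make the code-assist claim concrete, here is a minimal sketch of prompting a PaLM 2 model for debugging help. It assumes the google-generativeai Python SDK and the "models/text-bison-001" text model that the PaLM API exposed around launch; the API key and model availability are placeholders.

```python
# A minimal sketch, assuming the google-generativeai SDK and the
# "models/text-bison-001" PaLM 2 text model; key and model are placeholders.
import google.generativeai as palm

palm.configure(api_key="YOUR_API_KEY")  # hypothetical key

response = palm.generate_text(
    model="models/text-bison-001",
    prompt=(
        "Find the bug in this Python function:\n\n"
        "def mean(xs):\n"
        "    return sum(xs) / len(xs) - 1\n"
    ),
    temperature=0.2,        # low temperature keeps the answer focused
    max_output_tokens=256,
)
print(response.result)  # top candidate text from the model
```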

Bard gets smarter

\"\"<\/p>\n

Good news: Google is not only removing its waitlist for Bard and making it available, in English, in over 180 countries and territories, but it's also launching support for Japanese and Korean with a goal of supporting 40 languages in the near future. Also new is Bard's ability to surface images in its responses. Find out more. In addition, Google is partnering with Adobe for some art generation capabilities via Bard. Kyle writes that "Bard users will be able to generate images via Firefly and then modify them using Express. Within Bard, users will be able to choose from templates, fonts and stock images as well as other assets from the Express library."

Workspace
\"Google<\/p>\n

Image Credits:<\/strong> TechCrunch<\/p>\n<\/div>\n

Google's Workspace suite is also getting the AI touch, with the addition of automatic table (but not formula) generation in Sheets and image creation in Slides and Meet. Initially, the automatic tables are fairly simple, though Frederic notes there is more to come with regard to using AI to create formulas. The new features for Slides and Meet let you type in what kind of visualization you are looking for, and the AI will create that image. For Google Meet specifically, that means custom backgrounds. Check out more.

MusicLM
\"Google<\/p>\n

Image Credits:<\/strong> Google<\/p>\n<\/div>\n

MusicLM is Google's new experimental AI tool that turns text into music. Kyle writes that, for example, if you are hosting a dinner party, you can simply type "soulful jazz for a dinner party" and have the tool create several versions of the song. Read more.

Search

\"\"<\/p>\n

Google Search has two new features aimed at better understanding content and the context of an image a user is viewing in the search results. Sarah reports that this includes an "About this image" feature and new markup in the file itself that will allow images to be labeled as "AI-generated." Both are extensions of work already under way, meant to provide more transparency on whether an "image is credible or AI-generated," albeit not an end-all-be-all answer to the larger problem of AI image misinformation.

Aisha has more on Search, including that Google is experimenting with an AI-powered conversational mode. She describes the experience: "users will see suggested next steps when conducting a search and display an AI-powered snapshot of key information to consider, with links to dig deeper. When you tap on a suggested next step, Search takes you to a new conversational mode, where you can ask Google more about the topic you're exploring. Context will be carried over from question to question."

There was also the introduction of a new "Perspectives" filter that we will soon see at the top of some Search results when the results "would benefit from others' experiences," according to Google. For example, posts on discussion boards, Q&A sites and social media platforms, including those with video. Think having an easier time finding Reddit links or YouTube videos, Sarah writes.

Sidekick
\"\"<\/p>\n

Image Credits:<\/strong> Google<\/p>\n<\/div>\n

Darrell has your look at a new tool unveiled today called Sidekick, writing that it is designed "to help provide better prompts, potentially usurping the one thing people are supposed to be able to do best in the whole generative AI loop." Sidekick will live in a side panel in Google Docs and is "constantly engaged in reading and processing your entire document as you write, providing contextual suggestions that refer specifically to what you've written."

Codey

We like the name of Google's new code completion and code generation tool, Codey. It's part of a number of AI-centric coding tools launching today and is Google's answer to GitHub's Copilot, including a chat tool for asking questions about coding. Codey is specifically trained to handle coding-related prompts as well as queries about Google Cloud in general. Read more.
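
As a rough illustration of what prompting a Codey-style model could look like, here is a sketch using the Vertex AI Python SDK. The CodeGenerationModel class and the "code-bison@001" model name come from the later public SDK, so treat the exact names, and their availability at announcement time, as assumptions.

```python
# A hedged sketch of code generation with a Codey-style model on Vertex AI.
# "code-bison@001" and CodeGenerationModel are from the later public
# google-cloud-aiplatform SDK; both are assumptions here.
import vertexai
from vertexai.language_models import CodeGenerationModel

vertexai.init(project="your-gcp-project", location="us-central1")  # hypothetical project

model = CodeGenerationModel.from_pretrained("code-bison@001")
response = model.predict(
    prefix="Write a Python function that checks whether a string is a palindrome.",
    max_output_tokens=256,
    temperature=0.2,  # keep the generated code focused
)
print(response.text)
```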

Google Cloud

There's a new A3 supercomputer virtual machine in town. Ron writes that "this A3 has been purpose-built to handle the considerable demands of these resource-hungry use cases," noting that A3 is "armed with NVIDIA's H100 GPUs and combining that with a specialized data center to derive immense computational power with high throughput and low latency, all at what they suggest is a more reasonable price point than you would typically pay for such a package."

Imagen in Vertex

Google also announced new AI models heading to Vertex AI, its fully managed AI service, including a text-to-image model called Imagen. Kyle writes that Imagen was previewed via Google's AI Test Kitchen app last November. It can generate and edit images as well as write captions for existing images.
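
For a sense of what calling Imagen through Vertex AI might look like, here is a minimal sketch. The ImageGenerationModel class and the "imagegeneration@002" model name are from the later public preview of the SDK; access at I/O time was still limited, so the specifics are assumptions.

```python
# A minimal, hedged sketch of text-to-image generation with Imagen on
# Vertex AI. Class and model names are from the later preview SDK and
# are assumptions relative to what shipped at I/O.
import vertexai
from vertexai.preview.vision_models import ImageGenerationModel

vertexai.init(project="your-gcp-project", location="us-central1")  # hypothetical project

model = ImageGenerationModel.from_pretrained("imagegeneration@002")
images = model.generate_images(
    prompt="A watercolor painting of a robot reading a newspaper",
    number_of_images=1,
)
images[0].save("robot.png")  # write the first generated image to disk
```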

Find My Device
\"\"<\/p>\n

Image Credits:<\/strong> TechCrunch<\/p>\n<\/div>\n

Piggybacking on Apple and Google teaming up on Bluetooth tracker safety measures and a new specification, Google introduced a series of improvements to its own Find My Device network, including proactive alerts about unknown trackers traveling with you, with support for Apple's AirTag and others. Some of the new features will notify users if their phone detects an unknown tracker moving with them, and add connectivity with other Bluetooth trackers. Google's goal with the upgrades is to "offer increased safety and security for their own respective user bases by making these alerts work across platforms in the same way — meaning, for example, the work Apple did to make AirTags safer following reports they were being used for stalking would also make its way to Android devices," Sarah writes.

Pixel 7a
\"\"<\/p>\n

Image Credits:<\/strong> Google<\/p>\n<\/div>\n

Google's Pixel 7a goes on sale May 11 for $499, $100 less than the Pixel 7. Like the Pixel 6a, it has a 6.1-inch screen versus the Pixel 7's 6.4 inches. It also launched in India. When it comes to the camera, it has a slightly higher pixel density, and Brian said, "I really miss the flexibility and zoom of the 7 Pro, but I was able to grab some nice shots around my neighborhood with the 7a's cameras." Its new chip enables features like Face Unblur and Super Res Zoom. Find the full breakdown here.

Project Tailwind

The name sounds more like an undercover government assignment, but to Google, Project Tailwind is an AI-powered notebook tool it is building with the aim of taking a user's freeform notes and automatically organizing and summarizing them. The tool is available through Labs, Google's refreshed hub for experimental products. Here's how it works: users pick files from Google Drive, then Project Tailwind creates a private AI model with expertise in that information, along with a personalized interface designed to help sift through the notes and docs. Check it out.
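
Project Tailwind itself isn't public, so the following is only an illustrative sketch of the underlying idea: ground an assistant in a user's own notes by pulling the most relevant passages into the prompt before answering. The naive keyword-overlap scoring stands in for a real embedding model, and every name below is hypothetical.

```python
# Illustrative only: retrieve the notes most relevant to a question, the
# kind of grounding a Tailwind-like notebook tool would need before
# answering. Keyword overlap stands in for real embedding similarity.
def score(query: str, passage: str) -> int:
    """Count how many query words appear in the passage."""
    return len(set(query.lower().split()) & set(passage.lower().split()))

def most_relevant(query: str, notes: list[str], k: int = 2) -> list[str]:
    """Return the k notes that best match the query."""
    return sorted(notes, key=lambda n: score(query, n), reverse=True)[:k]

notes = [
    "PaLM 2 powers Bard and most of the new AI features announced at I/O.",
    "The Pixel 7a goes on sale May 11 for $499.",
    "Wear OS 4 adds better battery life and text-to-speech accessibility.",
]

# The retrieved passages would be prepended to the user's question as context.
print(most_relevant("What powers Bard", notes, k=1))
```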

Generative AI wallpapers

Now that you've got that new Pixel 7a in your hand, you have to make it pretty! Google will roll out generative AI wallpapers this fall that will let Android users answer suggested prompts to describe their vision. The feature will use Google's text-to-image diffusion models to generate new and original wallpapers, and the color palette of your Android system will automatically match the wallpaper you've selected. More here.

Wear OS 4

\"Google<\/p>\n

Google debuted the next version of its smartwatch operating system, Wear OS 4. Here's what you'll notice: improved battery life and functionality and new accessibility features, like text-to-speech. Developers also have some new tools to build new Wear OS watch faces and publish them to Google Play. Watch for Wear OS 4 to launch later this year. Read more. Also, there are other fun new apps and things coming for smartwatches, including improvements to Google's own suite of offerings, like Gmail and Calendar, but also updates from WhatsApp, Peloton and Spotify.

Universal Translator
\"\"<\/p>\n

Image Credits:<\/strong> Google<\/p>\n<\/div>\n

Google also unveiled that it is testing a powerful new translation service that re-renders video in a new language while synchronizing the speaker's lips with words they never spoke. Called "Universal Translator," it was shown as "an example of something only recently made possible by advances in AI, but simultaneously presenting serious risks that have to be reckoned with from the start," Devin writes. Here's how it works: the "experimental" service takes an input video, in this case a lecture from an online course originally recorded in English, transcribes the speech, translates it, regenerates the speech (matching style and tone) in that language, and then edits the video so that the speaker's lips more closely match the new audio. More on this.
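
No public API for Universal Translator exists, so here is a purely hypothetical sketch of the four-stage pipeline the demo described; every function below is a labeled stand-in, not a real Google interface.

```python
# Hypothetical stand-ins for each stage of the pipeline described above.
# None of these correspond to a real Google API; they only make the
# transcribe -> translate -> re-voice -> lip-sync flow concrete.
from dataclasses import dataclass

@dataclass
class Video:
    audio: str   # placeholder for the audio track
    frames: list # placeholder for the video frames

def transcribe(audio: str) -> str:
    return "Welcome to the course."           # stage 1: speech-to-text (stub)

def translate(text: str, lang: str) -> str:
    return f"[{lang}] {text}"                 # stage 2: text translation (stub)

def synthesize(text: str) -> str:
    return f"speech({text})"                  # stage 3: regenerate speech, matching style and tone (stub)

def resync_lips(video: Video, new_audio: str) -> Video:
    return Video(new_audio, video.frames)     # stage 4: edit frames so lips match the new audio (stub)

def universal_translate(video: Video, target_lang: str) -> Video:
    text = transcribe(video.audio)
    translated = translate(text, target_lang)
    return resync_lips(video, synthesize(translated))

dubbed = universal_translate(Video("en-lecture.wav", []), "es")
print(dubbed.audio)
```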

Pixel Tablet
\"\"<\/p>\n

Image Credits:<\/strong> Google<\/p>\n<\/div>\n

You knew it was coming, and we can confirm that the Pixel Tablet is finally here. While Brian thought the interface looked like a "giant Nest Home Hub," he did like the dock and the design.

And since tablets are used primarily at home, Brian notes that the Pixel Tablet is "not just a tablet — it's a smart home controller/hub, a teleconferencing device and a video streaming machine. It's not going to replace your television, but it's certainly a solid choice to watch some YouTube." Check out more here.

Pixel Fold
\"\"<\/p>\n

Image Credits:<\/strong> Google<\/p>\n<\/div>\n

One of the big announcements that already dropped, covered by Brian, is that Google used May 4 (aka "May the Fourth Be With You" Day) to unveil its foldable Pixel phone. In a new story, Brian does a deep dive into the phone, which he writes Google has been working on for five years.

He also notes that "the real secret sauce in the Pixel Fold experience is, unsurprisingly, the software… The app continuity when switching between the external and internal screens is quite seamless, allowing you to pick up where you left off as you change screen sizes. Naturally, Google has optimized its most popular third-party apps for the big screen experience, including Gmail and YouTube." Read more here.

Firebase

Firebase, Google's backend-as-a-service platform for application developers, has some new features, including the addition of AI extensions powered by Google's PaLM API and the opening up of the Firebase extension marketplace to more developers.

Google's Play Store gets some AI love

Sarah and Frederic teamed up to report on new ways developers can use Google's AI to build and optimize their Android apps for the Play Store, alongside a host of other tools to grow their app's audience through things like automated translations and other promotional efforts.

New features and updates include: