
Source: https://techcrunch.com/2023/06/15/pieter-abbeel-and-ken-goldberg-on-generative-ai-applications/

Pieter Abbeel and Ken Goldberg on generative AI applications

2023-06-15 21:41:30


I'll admit that I've tiptoed around the topic a bit in Actuator due to its sudden popularity (SEO gods be damned). I've been in this business long enough to be instantly suspicious of hype cycles. That said, I totally get it this time around.

While it's true that various forms of machine learning and AI touch our lives every day, the emergence of ChatGPT and its ilk presents something far more immediately obvious to the average person. Typing a few commands into a dialog box and getting an essay, a painting or a song is a magical experience, particularly for those who haven't followed the minutiae of this stuff for years or decades.

If you're reading this, you were probably aware of these notions prior to the last 12 months, but try to put yourself in the shoes of someone who sees a news story, visits a site and then, seemingly out of nowhere, a machine is creating art. Your mind would be, in a word, blown. And rightfully so.

Over the past few months, we've covered a smattering of generative AI-related stories in Actuator. Take last week's video of Agility using generative AI to tell Digit what to do with a simple verbal cue. I've also begun speaking to founders and researchers about potential applications in the space. The sentiment shifted quite quickly from "this is neat" to "this could be genuinely useful."

Learning has, of course, been a giant topic in robotics for decades now. It also, fittingly, seems to be where much of the potential generative AI research is heading. What if, say, a robot could predict all potential outcomes based on learning? Or how about eliminating a lot of excess coding by simply telling a robot what you want it to do? Exciting, right?

When I'm fascinated by a topic in the space, I always do the same thing: find much smarter people to berate with questions. It's gotten me far in life. This time out, our lucky guests are a pair of UC Berkeley professors who have taught me a ton about the space over the years (I recommend getting a few for your Rolodex).

Pieter Abbeel is the Director of the Berkeley Robot Learning Lab and co-founder/President/Chief Scientist of Covariant, which uses AI and rich data sets to train picking robots. Along with helping Abbeel run BAIR (the Berkeley AI Research Lab), Ken Goldberg is the school's William S. Floyd Jr. Distinguished Chair in Engineering and the Chief Scientist and co-founder of Ambi Robotics, which uses AI and robotics to tackle package sorting.

## Pieter Abbeel

Image Credits: TechCrunch

**Let's start with the broad question of how you see generative AI fitting into the broader world of robotics.**

There are two big trends happening at the same time: a trend of foundation models and a trend of generative AI. They are very intertwined, but they're distinct. A foundation model is a model that is trained on a lot of data, including data that is only tangentially related to what you care about. But it's still related, and by training on it, the model gets better at the things you care about. Essentially, all generative models are foundation models, but there are foundation models that are not generative AI models, because they do something else. The Covariant Brain is a foundation model, the way it's set up right now. Since day one, back in 2017, we've been training on all the items we could possibly run across. But in any given deployment, we only care about, for example, electrical supplies, or apparel, or groceries.

It's a paradigm shift. Traditionally, people would have said, "Oh, if you're going to do groceries, then groceries are all you train on; you train a neural network on groceries." That's not what we've been doing. It's all about chasing the long tail of edge cases. The more things you've seen, the better you can make sense of an edge case. The reason it works is that the neural networks have become so big. If your networks are small, all this tangentially related stuff will perturb your knowledge of the most important stuff. But there's so much they can keep absorbing. It's like a massive sponge that keeps absorbing things. You're not hurting anything by putting in this extra stuff. You're actually helping a little bit more by doing that.

**It's all about learning, right? It's a big thing everyone is trying to crack right now. The broader foundation-model idea is to train on as large a dataset as possible.**

Yes, but the key is not just large. It's very diverse. I'm not doing only groceries. I'm gonna pick groceries, but I'm also training on all the other stuff I might pick at another warehouse, in the same foundation model, to have a general understanding of all objects, which is a better way to learn, not only about groceries. You never know what will pop up in the mix of those groceries. There's always going to be a new item; you never have everything covered. So you need to generalize to new items, and your probability of generalizing well to new items is higher if you've covered a very wide spectrum of other things.

**The larger the neural network, the more it understands the world, broadly speaking.**

Yeah. That really is the key. That is what's going to unlock AI-powered robotics applications, whether it's picking or self-driving and so forth: it's the ability to absorb so much. But if we switch gears and think about generative AI specifically, there are things you can imagine it playing a role in. If you think of generative, what does it mean compared to previous generations of AI? At its core, it means it's generating data. But how is that different from generating labels? If I give it an image and it says "cat," that's also generating data. It's just that it's able to generate more data. Again, that relates to the neural network. The neural networks are larger, which allows them not only to analyze larger things, but to generate larger things in a consistent way.

In robotics, there are a few angles. One is building a deeper understanding of the world. Instead of saying, "I'm going to label data to teach the neural network," I can say, "I'm going to record a video of what happens," and my generative model needs to predict the next frame, then the next, then the next. By forcing it to predict the future, I'm forcing it to understand how the world works.

**Oftentimes when I talk to people about the different forms of learning, it's almost discussed as though they're in conflict with each other, but in this case it's two different kinds of learning effectively working in tandem.**

Yes. And again, because the networks are so large, we train them to predict future frames in addition to training them to output the optimal actions for a certain task. By doing that, they actually learn to output the actions much quicker, from far less data. You're giving it two tasks, and it learns the one task better because the two tasks are related. Predicting the next frame is such a difficult thinking exercise that you force the network to think through so much more, and it ends up predicting actions much, much faster.
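The training recipe Abbeel describes, a policy head and a next-frame prediction head sharing one representation, can be sketched in a few lines. This is a toy illustration, not Covariant's system; the array sizes, the tanh encoder and the 0.5 auxiliary weight are all illustrative assumptions.

```python
# Toy sketch of a policy trained with an auxiliary next-frame prediction
# loss. The shared encoder is shaped by BOTH objectives, which is the
# mechanism described above. All shapes and weights are illustrative.
import numpy as np

rng = np.random.default_rng(0)
W_shared = rng.normal(size=(8, 8)) * 0.1   # shared encoder weights
W_action = rng.normal(size=(8, 4)) * 0.1   # action (policy) head
W_frame = rng.normal(size=(8, 8)) * 0.1    # next-frame prediction head

def forward(frame):
    h = np.tanh(frame @ W_shared)          # shared representation
    return h @ W_action, h @ W_frame       # predicted action, predicted next frame

def joint_loss(frame, action_target, next_frame_target, aux_weight=0.5):
    pred_action, pred_frame = forward(frame)
    action_loss = np.mean((pred_action - action_target) ** 2)
    frame_loss = np.mean((pred_frame - next_frame_target) ** 2)
    # Gradients from the frame loss flow into W_shared too, so the
    # auxiliary task improves the representation the policy uses.
    return action_loss + aux_weight * frame_loss

frame = rng.normal(size=8)
loss = joint_loss(frame, rng.normal(size=4), rng.normal(size=8))
```

In a real system both heads would be large networks and the encoder would consume video; the point here is only the structure of the combined objective.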

**In terms of a practical real-world application, say, in an industrial setting, learning to screw something in, how does learning to predict the next thing inform its action?**

This is a work in progress. But the idea is that there are different ways of teaching a robot. You can program it. You can give it demonstrations. It can learn from reinforcement, where it learns from its own trial and error. Programming has seen its limitations; it's not really going beyond what we've seen for a long time in car factories.

Let's say I'm a self-driving car. If my robot can predict the future at all times, it can do two things. The first is having a deep understanding of the world and, with a little extra learning, picking the right action. In addition, it has another option: if it wants to do a lot of work in the moment, it can simulate scenarios. It can also simulate the traffic around it. That's where this is headed.

**These are all of the possible outcomes I can see; this is the best outcome, so I'm going to do that.**

Correct. There are other things we can do in generative AI with robotics. Google has had some results where, essentially, they said: what if we bolt some things together? One of the big challenges with robotics has been high-level reasoning. There are two challenges: 1. how do you do the actual motor skill, and 2. what should you actually do? If someone asked you, "Make me scrambled eggs," what does that even mean? And that's where generative AI models come in handy in a different way. They're pre-trained. The simplest version just uses language. Making scrambled eggs, you can break that down into:

1. Go get the eggs from the fridge
2. Get the frying pan
3. Get the butter

The robot can go to the fridge. It might ask what to do with the fridge, and then the model says:

1. Go to the fridge
2. Take the thing out of the fridge

The whole thing in robotics has traditionally been logic or task planning, where whoever programs the robot has to describe the world in terms of logical statements that somehow follow one another, and so forth. The language models seem to take care of that in a beautiful way, which is unexpected to many people.
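The pattern Abbeel describes, where a language model expands a high-level request into a list of steps the robot can dispatch, can be sketched like this. `fake_llm` is a hypothetical stand-in for a real language-model call; the canned reply and the numbered-step format are assumptions for illustration.

```python
# Sketch of LLM-based task decomposition for a robot. `fake_llm` stands in
# for a real language model; a production system would call a model API and
# map each returned step onto a robot skill the hardware can execute.
def fake_llm(prompt: str) -> str:
    canned = {
        "make me scrambled eggs": (
            "1. Go get the eggs from the fridge\n"
            "2. Get the frying pan\n"
            "3. Get the butter"
        ),
    }
    return canned.get(prompt.lower(), "1. Ask for clarification")

def plan_steps(request: str) -> list[str]:
    reply = fake_llm(request)
    # Strip the "N. " prefixes so each step is a bare skill description.
    return [line.split(". ", 1)[1] for line in reply.splitlines()]

steps = plan_steps("Make me scrambled eggs")
# → ['Go get the eggs from the fridge', 'Get the frying pan', 'Get the butter']
```

The same decomposition can recurse: each step ("go to the fridge") can itself be expanded into sub-steps, which is what the fridge example above shows.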

## Ken Goldberg
Image Credits: Kimberly White / Getty Images

**How do you see generative AI's potential in robotics?**

The core concept here is the transformer. The transformer network is very interesting because it looks at sequences. It's able to get very good at predicting the next item, astoundingly good. It works for words because we have a relatively small number of words in the English language; the Oxford English Dictionary has, I think, about half a million at best, and you can get by with far fewer than that. And you have plenty of examples, because every string of text gives you an example of words and how they're put together. It's a beautiful sweet spot: you can show it a lot of examples, and you have relatively few choices to make at every step. It turns out it can predict that extremely well.

The same is true of sequences of sounds, so this can also be used for audio processing and prediction. You can train it very similarly. Instead of words, you have sequences of phonemes coming in. You give it a lot of strings of music or voice, and then it will be able to predict the next sound signal or phoneme. It can also be used for images: you have a string of images and it can be used to predict the next image.
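Goldberg's point, that the model's job is to output a distribution over the next item in a sequence, holds for any sequence model. A bigram count model is about the most minimal illustration of that idea; a transformer does the same job with a learned network and far longer context, but the output is the same in kind: a probability for each candidate next token.

```python
# Minimal illustration of "predict the next item in a sequence": a bigram
# count model over a tiny vocabulary. A transformer learns this mapping with
# a neural network and long context, but its output has the same shape:
# a distribution over possible next tokens.
from collections import Counter, defaultdict

def train_bigrams(sequences):
    counts = defaultdict(Counter)
    for seq in sequences:
        for prev, nxt in zip(seq, seq[1:]):
            counts[prev][nxt] += 1
    return counts

def predict_next(counts, token):
    total = sum(counts[token].values())
    return {t: c / total for t, c in counts[token].items()}

counts = train_bigrams([
    ["the", "cat", "sat"],
    ["the", "cat", "ran"],
    ["the", "dog", "sat"],
])
dist = predict_next(counts, "the")  # → cat: 2/3, dog: 1/3
```

The "sweet spot" Goldberg describes is visible even here: a finite vocabulary to choose from at each step, and every string of training text supplying examples for free.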

**Pieter was talking about using video to predict what's going to happen in the next frame. It was effectively like having the robot think in video.**

Kind of, yeah. If you can predict the next video frame, the next thing you can add in there is the control. If I add my control in, I can predict what comes out if I do action A or B. I can look at all of my actions and pick the action that gets me closer to what I want to see. Now I want to get it to the next level, where I look at the next scene and have voxels, these three-dimensional volumes. I want to train it on those and say, "Here's my current volume, and here's the volume I want to have. What actions do I need to perform to get me there?"
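The control loop Goldberg sketches, add the action as an input, predict the resulting state, pick the action whose predicted state is closest to the goal, can be written down directly. Here `forward_model` is a hypothetical stand-in for the learned video/voxel predictor, and the states are flat arrays rather than 3D volumes.

```python
# One-step action selection with a learned forward model, as described
# above: score each action by how close its predicted outcome is to the
# goal state. `forward_model` is a stand-in with hand-coded dynamics.
import numpy as np

def forward_model(state, action):
    # Stand-in dynamics: each action nudges the state by a fixed delta.
    # A real system would run a learned predictor here.
    deltas = {
        "left": np.array([-1.0, 0.0]),
        "right": np.array([1.0, 0.0]),
        "up": np.array([0.0, 1.0]),
        "down": np.array([0.0, -1.0]),
    }
    return state + deltas[action]

def best_action(state, goal, actions=("left", "right", "up", "down")):
    # Pick the action whose predicted next state is nearest the goal.
    return min(actions, key=lambda a: np.linalg.norm(forward_model(state, a) - goal))

state, goal = np.array([0.0, 0.0]), np.array([3.0, 0.0])
chosen = best_action(state, goal)  # → 'right'
```

Planning over longer horizons repeats this: roll the model forward several steps per candidate action sequence and pick the sequence that ends nearest the goal volume.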

**When you're talking about volumes, you mean where the robot exists in space?**

Yeah, or even what's happening in front of you. If you want to clean up the dishes in front of you, the volume is where all those dishes are. Then you say, "What I want is a clear table with none of those dishes on it." That's the volume I want to get to, so now I have to find the sequence of actions that will get me from the initial state, which is what I'm looking at now, to the final state, where I have no dishes anymore.

**Based on videos the robot has been trained on, it can extrapolate what to do.**

In principle, but not from videos of people. That's problematic. Those videos are shot from an odd angle, and you don't know what motions the people are doing; it's hard for the robot to know that. What you do instead is have the robot self-learn with its camera: the robot tries things out and learns over time.

**A lot of the applications I'm hearing about are based around language commands. You say something, and the robot is able to determine what you mean and execute it in real time.**

That's a different thing. We now have a tool that can handle language very well, and what's cool about it is that it gives you access to the semantics of a scene. A very well-known paper from Google did the following: you have a robot and you say, "I just spilled something, and I need help to clean it up." Typically the robot wouldn't know what to do with that, but now you have language. You run that into ChatGPT and it generates: "Get a sponge. Get a napkin. Get a cloth. Look for the spilled can, make sure it can pick that up." All of that stuff can come out. What they do is exactly that: they take all of that output and they say, "Is there a sponge around? Let me look for a sponge."

The connection between the semantics of the world, a spill and a sponge, is something ChatGPT is very good at. That fills a gap we've always had; it's called the Open World Problem. Before, we had to program in every single thing the robot was going to encounter. Now we have another source that can make connections we couldn't make before. That's very cool. We have a brand-new project called Language Embedded Radiance Field. It's about how to use language to figure out where to pick things up. We say, "Here's a cup. Pick it up by the handle," and it seems to be able to identify where the handle is. It's really interesting.

**You're obviously very smart people and you know a lot about generative AI, so I'm curious where the surprise comes in.**

We always get surprised when these systems do things that we didn't anticipate. That's when robotics is at its best: when you give it a setup and it suddenly does something.

**It does the right thing for once.**

Exactly! That's always a surprise in robotics!

## Postscript

One more bit of generative AI before we move on for the week. Researchers at Switzerland's EPFL are highlighting robots making robots. I'm immediately reminded of RepRap, which gave rise to the desktop 3D printing space. Launched in 2005, the project started with the mission of creating "humanity's first general-purpose, self-replicating manufacturing machine." Effectively, the goal was a 3D printer that could 3D-print itself.

For this project, the researchers used ChatGPT to generate the design for a product-picking robot. The team suggests language models "could change the way we design robots, while enriching and simplifying the process."

Computational Robot Design & Fabrication Lab head Josie Hughes adds, "Even though Chat-GPT is a language model and its code generation is text-based, it provided significant insights and intuition for physical design, and showed great potential as a sounding board to stimulate human creativity."

Image Credits: EPFL

## Some light lanternfly extermination

An interesting pair of research projects with some common DNA also crossed my desk this week. Anyone who's seen a spotted lanternfly in person knows how beautiful they can be. The China-native insect flutters around on wings that flash sharp swaths of red and blue. Anyone who's seen a spotted lanternfly on the Eastern U.S. seaboard, however, knows that it's an invasive species. Here in New York, there's a statewide imperative to destroy the buggers on sight.

The CMU Robotics Institute designed TartanPest as part of Farm-ng's Farm Robotics Challenge. The system features a robotic arm mounted atop a Farm-ng tractor, designed to spot and spray masses of lanternfly eggs, destroying the bugs before they hatch. The robot "uses a deep learning model refined on an augmented image data set created from 700 images of spotted lanternfly egg masses from iNaturalist to identify them and scrape them off surfaces."

For the record, nowhere in Asimov's robotics laws are lanternflies mentioned.

Image Credits: CMU

## Reforestation

Meanwhile, ABB this week showcased what it calls "the world's most remote robot." A product of a collaboration with the nonprofit JungleKeepers, the system uses an ABB arm to automate seed collection, planting and watering in a bid to promote reforestation.

There's a big open question around efficacy and scalability, and this is certainly a nice PR play from the automation giant, but if this thing can make even small progress amid rapid deforestation, I'm all for it.

Image Credits: ABB

## Sweaters for robots

One more CMU project that I missed a couple of weeks back. RobotSweater is not a robotic sweater, but rather a robot *in* a sweater (SweaterRobot might have been more apt). Regardless, the system uses knitted textiles as touch-sensitive skin. Per the school:

> Once knitted, the fabric can be used to help the robot "feel" when a human touches it, particularly in an industrial setting where safety is paramount. Current solutions for detecting human-robot interaction in industry look like shields and use very rigid materials that Liu notes can't cover the robot's entire body because some parts need to deform.

Once attached to the robot (in this case, an industrial arm), the e-textile can sense the force, direction and distribution of touch, sensitivities that could help these systems work more safely alongside people.

"In their research, the team demonstrated that pushing on a companion robot outfitted in RobotSweater told it which way to move or what direction to turn its head," CMU says. "When used on a robot arm, RobotSweater allowed a push from a person's hand to guide the arm's movement, while grabbing the arm told it to open or close its gripper."

Image Credits: CMU

## Robotic origami

Capping off an extremely research-heavy edition of Actuator, and returning once again to Switzerland's EPFL, is Mori3. The little robot is made up of a pair of triangles that can form into different shapes.

"Our aim with Mori3 is to create a modular, origami-like robot that can be assembled and disassembled at will depending on the environment and task at hand," says Reconfigurable Robotics Lab director Jamie Paik. "Mori3 can change its size, shape and function."

The system recalls a lot of fascinating work concurrently happening in the oft-intersecting fields of modular and origami robotics. The systems communicate with one another and attach to form complex shapes. The team is targeting space travel as a primary application for this emerging technology: the robots' small, flat design makes them much easier to pack on a shuttle than a preassembled bot. And let's be honest, no one wants to spend a ton of time piecing together robots like Ikea furniture after blasting off.

"Polygonal and polymorphic robots that connect to one another to create articulated structures can be used effectively for a variety of applications," says Paik. "Of course, a general-purpose robot like Mori3 will be less effective than specialized robots in certain areas. That said, Mori3's biggest selling point is its versatility."

Image Credits: EPFL

*3…2…1…we have Actuator.*

eg?fit=711%2C399&ssl=1","width":711,"height":399},{"@type":"BreadcrumbList","@id":"https:\/\/scienceandnerds.com\/2023\/06\/15\/pieter-abbeel-and-ken-goldberg-on-generative-ai-applications\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/scienceandnerds.com\/"},{"@type":"ListItem","position":2,"name":"Pieter Abbeel and Ken Goldberg on generative AI applications"}]},{"@type":"WebSite","@id":"https:\/\/scienceandnerds.com\/#website","url":"https:\/\/scienceandnerds.com\/","name":"Science and Nerds","description":"My WordPress Blog","potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/scienceandnerds.com\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"en-US"},{"@type":"Person","@id":"https:\/\/scienceandnerds.com\/#\/schema\/person\/ea2991abeb2b9ab04b32790dff28360e","name":"admin","image":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/scienceandnerds.com\/#\/schema\/person\/image\/","url":"https:\/\/secure.gravatar.com\/avatar\/7e6e14fc6691445ef2b2c0a3a6c43882?s=96&d=mm&r=g","contentUrl":"https:\/\/secure.gravatar.com\/avatar\/7e6e14fc6691445ef2b2c0a3a6c43882?s=96&d=mm&r=g","caption":"admin"},"sameAs":["https:\/\/scienceandnerds.com"],"url":"https:\/\/scienceandnerds.com\/author\/admin\/"}]}},"jetpack_sharing_enabled":true,"jetpack_featured_media_url":"https:\/\/i0.wp.com\/scienceandnerds.com\/wp-content\/uploads\/2023\/06\/pieter-abbeel-and-ken-goldberg-on-generative-ai-applications_648b858b72214.jpeg?fit=711%2C399&ssl=1","_links":{"self":[{"href":"https:\/\/scienceandnerds.com\/wp-json\/wp\/v2\/posts\/35118","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/scienceandnerds.com\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/scienceandnerds.com\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/scienceandnerd
s.com\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/scienceandnerds.com\/wp-json\/wp\/v2\/comments?post=35118"}],"version-history":[{"count":1,"href":"https:\/\/scienceandnerds.com\/wp-json\/wp\/v2\/posts\/35118\/revisions"}],"predecessor-version":[{"id":35120,"href":"https:\/\/scienceandnerds.com\/wp-json\/wp\/v2\/posts\/35118\/revisions\/35120"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/scienceandnerds.com\/wp-json\/wp\/v2\/media\/35119"}],"wp:attachment":[{"href":"https:\/\/scienceandnerds.com\/wp-json\/wp\/v2\/media?parent=35118"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/scienceandnerds.com\/wp-json\/wp\/v2\/categories?post=35118"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/scienceandnerds.com\/wp-json\/wp\/v2\/tags?post=35118"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}