Source: https://www.quantamagazine.org/how-to-guarantee-the-safety-of-autonomous-vehicles-20240116/

Driverless cars and planes are no longer the stuff of the future. In the city of San Francisco alone, two taxi companies have collectively logged 8 million miles of autonomous driving through August 2023. And more than 850,000 autonomous aerial vehicles, or drones, are registered in the United States, not counting those owned by the military.

But there are legitimate concerns about safety. For example, in a 10-month period that ended in May 2022, the National Highway Traffic Safety Administration reported nearly 400 crashes involving automobiles using some form of autonomous control. Six people died as a result of these accidents, and five were seriously injured.

The usual way of addressing this issue, sometimes called “testing by exhaustion,” involves testing these systems until you’re satisfied they’re safe. But you can never be sure that this process will uncover all potential flaws. “People carry out tests until they’ve exhausted their resources and patience,” said Sayan Mitra, a computer scientist at the University of Illinois, Urbana-Champaign. Testing alone, however, cannot provide guarantees.

Mitra and his colleagues can. His team has managed to prove the safety of lane-tracking capabilities for cars and landing systems for autonomous aircraft. Their strategy is now being used to help land drones on aircraft carriers, and Boeing plans to test it on an experimental aircraft this year.
“Their method of providing end-to-end safety guarantees is very important,” said Corina Pasareanu, a research scientist at Carnegie Mellon University and NASA’s Ames Research Center.

Their work involves guaranteeing the results of the machine-learning algorithms that are used to inform autonomous vehicles. At a high level, many autonomous vehicles have two components: a perception system and a control system. The perception system tells you, for instance, how far your car is from the center of the lane, or what direction a plane is heading in and what its angle is with respect to the horizon. The system operates by feeding raw data from cameras and other sensors to machine-learning algorithms based on neural networks, which re-create the environment outside the vehicle.

These assessments are then sent to a separate system, the control module, which decides what to do. If there’s an upcoming obstacle, for instance, it decides whether to apply the brakes or steer around it. According to Luca Carlone, an associate professor at the Massachusetts Institute of Technology, while the control module relies on well-established technology, “it is making decisions based on the perception results, and there’s no guarantee that those results are correct.”

To provide a safety guarantee, Mitra’s team worked on ensuring the reliability of the vehicle’s perception system. They first assumed that it’s possible to guarantee safety when a perfect rendering of the outside world is available. They then determined how much error the perception system introduces into its re-creation of the vehicle’s surroundings.

The key to this strategy is to quantify the uncertainties involved, known as the error band, or the “known unknowns,” as Mitra put it. That calculation comes from what he and his team call a perception contract.
In software engineering, a contract is a commitment that, for a given input to a computer program, the output will fall within a specified range. Figuring out this range isn’t easy. How accurate are the car’s sensors? How much fog, rain or solar glare can a drone tolerate? But if you can keep the vehicle within a specified range of uncertainty, and if the determination of that range is sufficiently accurate, Mitra’s team proved that you can ensure its safety.
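The contract idea can be sketched in code. The following is a hypothetical illustration, not the researchers’ actual formulation: all names, numbers and the lane-keeping scenario are invented for the example. The contract promises that the perceived distance to the lane center stays within a fixed error band of the true distance; a controller that acts safely for every true value consistent with that band is then safe whenever the contract holds.

```python
# Hypothetical sketch of a "perception contract" for lane keeping.
# Names and values are illustrative, not from the published work.

from dataclasses import dataclass


@dataclass
class PerceptionContract:
    error_band: float  # maximum perception error, in meters

    def holds(self, true_dist: float, perceived_dist: float) -> bool:
        """The contract holds if the perception output is within
        error_band of the true distance to the lane center."""
        return abs(perceived_dist - true_dist) <= self.error_band


def control_is_safe(perceived_dist: float,
                    contract: PerceptionContract,
                    lane_half_width: float) -> bool:
    """Conservative safety check: the car is provably inside the lane
    only if every true offset consistent with the contract is inside
    it, i.e. the worst-case offset fits within the lane half-width."""
    worst_case = abs(perceived_dist) + contract.error_band
    return worst_case <= lane_half_width


contract = PerceptionContract(error_band=0.2)

# Perception reports a 1.0 m offset; the true offset may be up to
# 1.2 m, still inside a 1.75 m half-width lane, so the check passes.
print(control_is_safe(1.0, contract, lane_half_width=1.75))   # True

# A reported 1.7 m offset could really be 1.9 m, so the conservative
# check refuses to certify safety.
print(control_is_safe(1.7, contract, lane_half_width=1.75))   # False
```

The design point the example illustrates: as long as the error band is determined accurately, the controller never has to trust a single perception reading, only the interval of true states the contract allows, which is what makes an end-to-end guarantee possible.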
How to Guarantee the Safety of Autonomous Vehicles
2024-01-17 21:59:05