Before machines can be autonomous, humans must work to keep them safe



As a self-driving car travels down a dark rural road, a deer lingering among the trees up ahead appears poised to dart into the car’s path. Will the vehicle know exactly what to do to keep everyone safe?

Some computer scientists and engineers aren’t so sure. But researchers at the University of Virginia’s School of Engineering and Applied Science are working hard to develop methods they hope will bring greater confidence to the world of machine learning – not just for autonomous cars, but for planes that can land on their own and drones that can make deliveries.

The crux of the matter is that the key software functions guiding self-driving cars and other machines through their autonomous movements aren’t written by humans – those functions are the product of machine learning. Machine-learned functions are expressed in a form that makes it virtually impossible for humans to understand the rules and logic they encode, which makes it very difficult to assess whether the software is safe and acting in humanity’s best interests.
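To see why learned functions resist human reading, consider a minimal, hypothetical sketch in Python. The hand-written rule below states its logic in one legible line; the “learned” stand-in, a toy two-layer network with made-up random weights, encodes its behavior entirely in arrays of numbers:

```python
# A minimal sketch contrasting hand-written logic with a learned function.
# The network here is hypothetical: two layers of randomly initialized
# weights standing in for a real driving model.

import numpy as np

def swerve_rule(obstacle_distance_m: float) -> bool:
    """Hand-written rule: a human can read exactly when it fires."""
    return obstacle_distance_m < 15.0

# A "learned" function is just arrays of numbers. Even this toy version
# hides its behavior in hundreds of parameters with no legible rules;
# production driving models have millions.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(8, 64)), rng.normal(size=8)
W2, b2 = rng.normal(size=(1, 8)), rng.normal(size=1)

def swerve_learned(camera_features: np.ndarray) -> bool:
    hidden = np.maximum(0.0, W1 @ camera_features + b1)  # ReLU layer
    return (W2 @ hidden + b2).item() > 0.0

print(swerve_rule(10.0))                    # True, and you can see why
print(swerve_learned(rng.normal(size=64)))  # True or False -- but why?
```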

Researchers at UVA’s Leading Engineering for Safe Software Lab – the LESS Lab, as it is commonly known – are working to develop the methods necessary to give society the confidence to trust emerging autonomous systems.

The team comprises Matthew B. Dwyer, Robert Thomson Distinguished Professor; Sebastian Elbaum, Anita Jones Faculty Fellow and Professor; Lu Feng, assistant professor; Yonghwi Kwon, John Knight Career Enhancement Assistant Professor; Mary Lou Soffa, Owens R. Cheatham Professor of Sciences; and Kevin Sullivan, associate professor. All hold appointments in UVA’s computer science department; Feng holds a joint appointment in the Department of Engineering Systems and Environment.

Since its inception in 2018, the LESS Lab has grown rapidly, supporting more than 20 graduate students, producing more than 50 publications and securing competitive external awards totaling more than $10 million in research funding. The awards came from agencies such as the National Science Foundation, the Defense Advanced Research Projects Agency, the Air Force Office of Scientific Research and the Army Research Office.

The lab’s growth trajectory matches the scale and urgency of the problem these researchers are trying to solve.

The explosion of machine learning

An inflection point in machine learning’s rapid rise came barely a decade ago, when computer vision researchers won the ImageNet Large Scale Visual Recognition Challenge – a competition to identify objects in photos – using a machine learning solution. Google took notice and quickly moved to capitalize on data-driven algorithms.

Other tech companies followed suit, and public demand for machine learning applications snowballed. Last year, Forbes estimated that the global machine learning market was growing at a compound annual rate of 44% and was on track to become a $21 billion market by 2024.

But as the technology gained momentum, computer scientists began to sound the alarm that mathematical methods to validate and verify software were lagging behind.

Government agencies like the Defense Advanced Research Projects Agency have responded to the concerns. In 2017, DARPA launched the Assured Autonomy program to develop mathematically verifiable approaches to assuring an acceptable level of safety for data-driven, machine-learned algorithms.

UVA Engineering’s computer science department also took action, extending its expertise with strategic hires in software engineering, programming languages and cyber-physical systems. Those experts combined their efforts in cross-cutting research collaborations, including the LESS Lab, to focus on solving a global software problem in urgent need of a solution.

A canary in the coal mine

Meanwhile, the self-driving car industry was in the spotlight, and the issues with machine learning were becoming increasingly evident. A fatal Uber crash in 2018 was recorded as the first pedestrian death involving an autonomous vehicle.

“Progress toward practical autonomous driving has been rapid and dramatic, shaped like a hockey-stick curve, and much faster than the growth of the techniques that can keep these vehicles safe,” Elbaum said.

In August, the tech blog Engadget reported that autonomous car company Waymo had logged 20 million miles of road testing. Yet devastating failures continue to occur; as the software grows more complex, no realistic amount of road testing will be enough to find all the bugs.

And the failures that have occurred have made many people leery of road testing.

“Even if a simulation works 100 times, would you hop into a self-driving car and let it drive you? Probably not,” Dwyer said. “You probably want meaningful tests on the street to further build your confidence. And at the same time, you don’t want to be the one in the car while that testing is going on.”

Then there is the need to anticipate – and test for – every obstacle that could arise on the nation’s 4 million miles of public roads.

“Think about the complexity of a particular scenario, like driving on the freeway with trucks on either side, so an autonomous vehicle has to navigate at high speed in crosswinds and around curves,” Dwyer said. “Getting a physical setup that matches that difficult scenario would be hard to do in real-world testing. But in simulation, it gets much easier.”

This is an area where LESS Lab researchers are making real progress. They are developing sophisticated virtual-reality simulations that can accurately recreate challenging scenarios that might otherwise be impossible to test on the road.

“There is a huge gap between real-world testing and simulation testing,” Elbaum said. “We want to bridge that gap and create methods in which we can expose autonomous systems to the many complex scenarios they will face. That will greatly reduce the time and cost of testing, and it is a safer way to test.”
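To illustrate the idea, here is a minimal sketch of scenario-based simulation testing in Python. Everything in it is hypothetical: Scenario and simulate are toy stand-ins for a real driving simulator and controller, not LESS Lab software, and the pass/fail arithmetic is invented for the example:

```python
# A minimal sketch of scenario-based simulation testing. `Scenario` and
# `simulate` are hypothetical stand-ins for a real driving simulator.

import itertools
from dataclasses import dataclass

@dataclass
class Scenario:
    speed_kph: float       # ego-vehicle speed
    crosswind_mps: float   # lateral wind
    trucks_alongside: int  # trucks boxing the car in
    curve_radius_m: float  # tighter curve = harder

def simulate(scenario: Scenario) -> bool:
    """Stand-in for a physics-based simulator: returns True if the
    autonomous controller kept the car in its lane."""
    difficulty = (scenario.speed_kph / 100
                  + scenario.crosswind_mps / 10
                  + scenario.trucks_alongside
                  - scenario.curve_radius_m / 500)
    return difficulty < 2.5  # toy pass/fail threshold

# Sweep combinations that would be dangerous or impossible to stage on a road.
failures = [
    s for s in (
        Scenario(speed, wind, trucks, radius)
        for speed, wind, trucks, radius in itertools.product(
            [90, 110, 130], [0, 5, 10], [0, 1, 2], [250, 500, 1000])
    )
    if not simulate(s)
]
print(f"{len(failures)} failing scenarios out of 81")
```

The point of the pattern is the sweep: a simulator can grind through every combination of speed, wind, traffic and curvature overnight, where staging even one such setup physically would be slow and dangerous.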

Having simulations precise enough to stand in for real-world driving would be the equivalent of rocket fuel for the enormous amount of sophisticated testing the problem demands, slashing its time and human cost. But that’s only half the solution.

The other half is developing mathematical guarantees that can prove the software will do what it is supposed to do, every time. And that will require a whole new set of mathematical frameworks.
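One flavor of such a guarantee can be sketched in a few lines. The example below uses interval bound propagation, a standard idea from the neural network verification literature, on a toy one-layer model with made-up weights; it illustrates the style of reasoning, not any specific LESS Lab technique:

```python
# A minimal sketch of interval bound propagation: instead of testing single
# inputs, propagate a whole *range* of inputs through a toy, hypothetical
# one-layer network and prove the output stays below a threshold for every
# input in that range.

import numpy as np

W = np.array([[0.8, -0.5], [0.3, 0.9]])  # toy "learned" weights
b = np.array([0.1, -0.2])

def interval_affine(lo, hi, W, b):
    """Sound output bounds for W @ x + b over the box lo <= x <= hi."""
    center, radius = (lo + hi) / 2, (hi - lo) / 2
    mid = W @ center + b
    spread = np.abs(W) @ radius
    return mid - spread, mid + spread

lo, hi = np.array([0.0, 0.0]), np.array([0.1, 0.1])
out_lo, out_hi = interval_affine(lo, hi, W, b)

# If out_hi[0] < 1.0, then output 0 is below 1.0 for *all* inputs in the
# box -- a guarantee no finite number of test runs could provide.
print(out_lo, out_hi)
print("property holds for every input in range:", bool(out_hi[0] < 1.0))
```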

Out with the old, in with the new

Before machine learning, engineers wrote explicit, step-by-step instructions for the computer to follow. The logic was deterministic and absolute, so humans could use existing formal mathematical rules to test the code and ensure it worked.

“So if you really wanted a property like ‘the autonomous system only makes sharp turns when there is an obstacle in front of it,’ before machine learning, engineers would say, ‘I want that to be true,’ and software engineers would build that rule into the software,” Dwyer said.

Today, the computer continually improves the likelihood that an algorithm will produce an expected result by being fed example images to learn from. The process is no longer based on absolutes, or on rules a human can read and understand.

“With machine learning, you can give an autonomous system examples of sharp turns happening only when obstacles are in front of it,” Dwyer said. “But that doesn’t mean the computer will learn and write a program that guarantees it is always true. Previously, humans verified the rules they coded into systems. Now we need a new way to verify that a machine-written program is playing by the rules.”
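The gap Dwyer describes can be made concrete with a small Python sketch. Both functions below are hypothetical: a hand-written rule whose compliance with the property can be checked by reading it, and a stand-in for a trained model that is right most of the time, where sampling can expose violations but can never prove there are none:

```python
# The property: "sharp turn only when an obstacle is ahead." For the
# hand-written rule we can check it by reading the code; for the learned
# stand-in, all we can do without new verification methods is sample.

import random

def turn_rule(obstacle_ahead: bool) -> bool:
    return obstacle_ahead  # sharp turn if and only if obstacle ahead

def turn_learned(sensor_noise: float, obstacle_ahead: bool) -> bool:
    # Stand-in for a trained model: mostly right, occasionally not.
    return obstacle_ahead if abs(sensor_noise) < 2.9 else not obstacle_ahead

random.seed(0)
violations = 0
for _ in range(10_000):
    noise = random.gauss(0.0, 1.0)
    obstacle = random.random() < 0.5
    if turn_learned(noise, obstacle) and not obstacle:
        violations += 1

# Sampling can *reveal* violations but can never prove their absence.
print("violations found in 10,000 samples:", violations)
```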

Elbaum emphasizes that there are still many open questions that need to be answered in the quest for concrete and tangible methods. “We are catching up to make sure these systems are doing what they are supposed to do,” he said.

This is why the combined strength of the LESS Lab’s faculty and students is so important in accelerating discovery. Equally important is the lab’s commitment to working collectively, and in concert with other UVA experts on machine learning and cyber-physical systems, to enable a future where people can trust autonomous systems.

The lab’s mission couldn’t be more relevant if we are to one day fulfill the promise of self-driving cars, let alone eliminate the fears of a deeply skeptical public.

“If your self-driving car is driving on a sunny day on a road with no other cars, it should work,” Dwyer said. “If it’s raining and it’s night and the road is congested and there is a deer on the side of the road, it should work then too, and in all other scenarios.

“We want it to work all the time, for everyone and in all settings.”

