The future of self-driving cars continues to creep up on the American driving public. Ride-sharing company Uber recently began using autonomous vehicles to transport customers through downtown Pittsburgh. American automaker Ford Motor Company (NYSE:F) is working on developing autonomous vehicles that could be sold to consumers by 2025. In New Zealand, Ann Arbor, MI-based pizza company Domino’s Pizza, Inc. (NYSE:DPZ) is already using an autonomous delivery system to get its pizzas to customers.
The regulatory landscape surrounding autonomous vehicles is also in the midst of being developed, albeit mostly at the state and city level. California’s Department of Motor Vehicles recently granted Chinese web services company Baidu (NASDAQ:BIDU) a permit to test its self-driving car technologies within the state. The state senate of Michigan, home to Detroit and the epicenter of American automaking, passed a series of four bills in early September that would authorize self-driving car developers to test their systems within the state. Elsewhere, however, the reception to the self-driving car concept has been a little chillier. Recent reports indicate that an ordinance proposal submitted to the Chicago City Council by council members could ban autonomous vehicles from the roads of one of America’s largest cities.
The Obama administration is hoping to add some clarity to the regulatory picture by unveiling a new set of guidelines in a 100-page federal automated vehicles policy document. The guidance requires autonomous vehicle developers to provide a safety assessment to the National Highway Traffic Safety Administration (NHTSA) discussing 15 areas of safety evaluation.
The guidelines, which have been issued through the U.S. Department of Transportation (DOT), identify 11 areas of safety evaluation related to cross-cutting technologies that apply to all automation functions on a vehicle. On privacy, the DOT guidelines indicate that manufacturers’ privacy policies should ensure security measures against unauthorized uses of data, transparency in security notices and a respect for context in data usage, so that consumers know exactly how their data will be used. The human-machine interface is another area of focus: the DOT wants manufacturers to clearly inform drivers whether the automated vehicle system is functioning properly and whether the autonomous mode is currently engaged. Safety assessments will also have to demonstrate that the autonomous vehicle meets NHTSA crashworthiness standards and that the manufacturer has a documented process showing how an autonomous vehicle can be reinstated into service after a crash.
The guidelines also focus on four areas of specific automation for which manufacturers will have to provide documentation of autonomous vehicle performance. Documentation of the operational design domain should include the roadway types on which the self-driving car can operate safely, its geographic area, its speed range and the environmental conditions the car can handle, such as rain or icy roads. Assessment of a car’s object and event detection and response system must examine how the vehicle handles events during normal driving, like merging, as well as the car’s performance during pre-crash scenarios. Manufacturers will also have to document fallback processes for transitioning a self-driving car to a minimal risk condition if the autonomous system malfunctions, as well as any tests developed to validate a high level of safety.
The guidelines will apply differently to car manufacturers depending on the level of vehicle automation as defined in an international standard developed by SAE International. This standard identifies six levels of automation in vehicles, ranging from level 0, where there is no automation, up to level 5, where the vehicle is fully autonomous in all driving modes. The DOT’s recently released autonomous vehicle guidance will fully apply to manufacturers producing vehicles at level 3, conditional automation in which an automated driving mode exists but a human driver is required as a fallback, or above. Cars at level 2 automation, where the system controls both steering and acceleration but a human driver must remain engaged, must comply with the guidance on cross-cutting technologies, although their makers only have to clarify local and state traffic laws to drivers instead of producing detailed plans on how their vehicles will comply with such laws. Regarding the automation functions, carmakers producing level 2 vehicles only have to produce documentation of safety validation tests, whereas those producing level 3 vehicles must comply with the guidelines on all four functions.
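The tiered applicability described above can be sketched as a simple lookup. This is purely an illustrative sketch: the type and function names are hypothetical and are not defined by the DOT guidance or the SAE standard.

```python
# Illustrative sketch of which parts of the DOT guidance apply at each
# SAE automation level, per the tiers described above. Names are hypothetical.
from dataclasses import dataclass


@dataclass(frozen=True)
class GuidanceScope:
    cross_cutting: bool          # privacy, human-machine interface, crashworthiness, post-crash
    all_automation_functions: bool   # operational design domain, detection/response, fallback, validation
    validation_tests_only: bool      # level 2 carve-out: only safety-validation documentation


def guidance_for_sae_level(level: int) -> GuidanceScope:
    """Return which guidance areas apply at a given SAE automation level (0-5)."""
    if not 0 <= level <= 5:
        raise ValueError("The SAE standard defines levels 0 through 5")
    if level >= 3:
        # Conditional automation and above: the guidance fully applies.
        return GuidanceScope(True, True, False)
    if level == 2:
        # Partial automation: cross-cutting guidance applies, but among the
        # automation functions only safety-validation documentation is required.
        return GuidanceScope(True, False, True)
    # Levels 0-1 fall outside the automation-specific guidance sketched here.
    return GuidanceScope(False, False, False)
```

The mapping simply encodes the two thresholds the guidance draws: full applicability at level 3 and above, and a reduced documentation burden at level 2.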
The DOT’s recent guidance also spells out the responsibilities for regulatory actions at the federal and state levels. At the national level, the NHTSA will be responsible for setting federal motor vehicle safety standards for new autonomous vehicles, enforcing compliance with those standards, managing recalls of non-compliant vehicles and educating the public on safety issues. States, on the other hand, are responsible for licensing human drivers, registering vehicles, conducting safety inspections, enforcing traffic laws and regulating motor vehicle insurance. As the DOT guidance notes, this is how the roles are split between federal agencies and the states for vehicles today, and the development of autonomous vehicles shouldn’t change that division much.
A series of new regulatory tools is also considered in the DOT’s autonomous vehicle guidance. The use of variable tests for autonomous systems would help the NHTSA evaluate self-driving systems in urban or suburban driving environments without allowing a manufacturer to program the car specifically to pass a test where the obstacles are known. The guidance also considers iterative and forward-looking test protocols, so that testing can advance with the rapid pace of the technology and not hamper innovation.
The new guidance on autonomous vehicles from the DOT could do much to change how vehicles are constructed as self-driving technologies reshape the driving experience. Analysis of the policy guidelines from The Hill identified features that fully autonomous vehicles wouldn’t need in the driver’s cabin, such as a steering wheel or brake pedals.
Adequately responding to safety concerns is going to be an important part of the regulatory picture in light of a growing tide of stories involving autonomous vehicle crashes. Tesla Motors (NASDAQ:TSLA) has touted the safety of autonomous vehicles against the risks of human error, but its cars have been involved in crashes in cases where drivers said that the Autopilot mode was responsible. This June, a Tesla Model X driver in Irvine, CA, claimed that the car accelerated autonomously when it crashed into the side of a shopping mall. Although Tesla has pushed back on that instance, saying that the driver was at fault, Tesla’s Autopilot mode was engaged during a fatal crash in Florida earlier this year. Google, which has earned some notice for its own self-driving car developments, also had one of its autonomous vehicles involved in a serious crash in late September; human error on the part of the other driver was the likely culprit in that case.