Revolution in the Driver’s Seat: The Road to Autonomous Vehicles


    The Evolution of AV Technology

    AVs are enabled by multiple hardware and software components, in particular a variety of sensor technologies that continuously assess a vehicle's environment. AV functionality relies on processors to handle the inputs from these sensors and on software to interpret those inputs and translate them into action. Vehicle manufacturers and suppliers will therefore need to invest heavily in hardware, such as sensor technology and processors; software and IT; systems integration; and assembly to produce AVs on a commercial scale. (See Exhibit 5.)

    A Crucial Need: Sensor Technology

    Although some of these technologies are already commercially available, certain critical pieces of hardware—most notably, sensors—will need further development before they can be used commercially. Automotive suppliers and a handful of tech companies have already developed a mix of sensors that rely on radar, cameras, ultrasound, and light detection and ranging (lidar) technology, as well as other computing and positioning systems. But some of the most vital enabling components—specifically lidar sensors and GPS—must be further developed, and their costs scaled down, before OEMs will adopt them. (See Exhibit 6.)


    The unit costs of these and other components vary widely, from tens of dollars to several thousand, because of differences in technical specifications, scale of production, and maturity of each component. For instance, the cost of lidar technology ranges from $90 for a single-beam unit used today in ADAS applications to $8,000 for an eight-beam array better suited to AV applications. OEMs will no doubt use different combinations of sensor components to enable various autonomous features.
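    The cost spread above translates into simple per-vehicle arithmetic. A minimal sketch, using the unit prices cited above and assuming, purely for illustration, an AV suite built around three of the eight-beam arrays (the component counts are our assumptions, not OEM specifications):

```python
# Unit prices cited above; component counts are illustrative assumptions.
SINGLE_BEAM_LIDAR_COST = 90     # ADAS-grade unit
EIGHT_BEAM_LIDAR_COST = 8_000   # array better suited to AV applications

def lidar_bill(units: int, unit_cost: int) -> int:
    """Total lidar hardware cost for one vehicle."""
    return units * unit_cost

# A single ADAS-grade unit versus a hypothetical three-array AV suite.
print(lidar_bill(1, SINGLE_BEAM_LIDAR_COST))  # 90
print(lidar_bill(3, EIGHT_BEAM_LIDAR_COST))   # 24000
```

    Even before any volume discounts, the gap between ADAS-grade and AV-grade lidar spend per vehicle spans more than two orders of magnitude.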

    Wide Variations in Autonomous Architectures

    During the course of this study, we worked with a broad array of OEMs and technology suppliers to identify the various architectures that are currently in play. We found, for example, that different OEMs take different approaches to adaptive cruise control: some rely on a stereoscopic camera, while others use long-range radar in conjunction with a mono-vision camera. Different OEMs are likely to deploy varying configurations of other enabling technologies as well. Some fully autonomous vehicles, for example, may need to use three or more lidars in conjunction with additional sensors, safety redundancies, and GPS to give the vehicle a 360-degree view of its surroundings. Others might not need to use lidar at all, operating with a combination of radar and camera systems instead.

    To understand how widely the approaches of different OEMs may vary, consider just two of the many possible sensor-based solutions for achieving fully autonomous driving capability. OEMs that opt to use lidar-based technology to gain a 360-degree view of a car's surroundings, for example, would focus mainly on supporting the lidar with long-range data collection through long-range radars and mono-vision cameras while using a few radar- or vision-based systems to provide short-range redundancy. OEMs that opt not to use advanced lidar systems to generate a full view around the vehicle, however, would instead employ several short-range radars and stereo cameras. To be considered a viable alternative to short-range radars paired with near-range vision systems, then, lidar technologies will need to be competitive with those combinations in cost, accuracy, and failure rate.
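    The two approaches above can be summarized as rough bills of materials. The component counts below are illustrative assumptions for the sake of comparison, not actual OEM configurations:

```python
# Two illustrative sensor suites for full autonomy, per the two
# approaches described above. Component counts are assumptions.
lidar_based_suite = {
    "lidar": 3,              # 360-degree view of the surroundings
    "long_range_radar": 2,   # long-range data collection
    "mono_vision_camera": 2,
    "short_range_radar": 1,  # short-range redundancy
}
radar_camera_suite = {
    "short_range_radar": 6,  # several units stand in for the lidars
    "stereo_camera": 2,
}

def total_sensors(suite: dict[str, int]) -> int:
    """Total number of sensor units in a suite."""
    return sum(suite.values())

print(total_sensors(lidar_based_suite))   # 8
print(total_sensors(radar_camera_suite))  # 8
```

    Under these assumed counts the two suites carry similar numbers of sensors; the real differentiators would be unit cost, accuracy, and failure rate, as noted above.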

    Whatever combination they choose, OEMs will rely on improved processing speeds to handle the large amount of data from the sensors that enable the car to respond quickly to time-sensitive situations—for example, when road obstacles must be identified and avoided.

    The Challenge of Autonomous Software

    The other critical technology in need of further development is the software that will interpret sensor data and trigger the actuators that govern vehicle braking, acceleration, and steering. The software will need to be highly intricate to contend with the complexity of the driving environment. To put things in perspective, the software in the latest Mercedes S-class vehicle, which is loaded with several ADAS features, contains roughly 15 times more lines of code than the software in a Boeing 787. The quantity of code required will multiply as vehicle manufacturers move from ADAS to partial autonomy and then full autonomy.

    Short-range communications technology—such as vehicle-to-vehicle and vehicle-to-infrastructure communication, collectively referred to as V2X—can be effectively applied to complex driving environments to enhance the safety of AVs. V2X technology can supplement on-board sensors to gather and transmit environmental data, enabling the car to, for example, peer around corners and negotiate road intersections, just as—in fact, better than—a human driver would. Is V2X a prerequisite for AVs right off the bat? There is no consensus on the question among OEM engineers. But there is broad agreement that V2X technologies, which are today being developed in parallel with AV technologies, will enhance AV performance and overall safety.

    The complexity of the driving environment will likely govern the launch sequence of partially autonomous features as well. Highways, for instance, present a less complex driving environment than urban streets or parking lots, which are replete with nonstandard infrastructure and involve a high level of interaction with other vehicles, pedestrians, and objects. Similarly, low-speed environments, such as traffic jams, may present fewer risks than high-speed driving.

    The Price Tag

    We estimate that to bring the entire suite of AV features to market, OEMs and suppliers will have to make substantial R&D investments—upward of $1 billion per OEM over the next decade—to further develop sensors, processing technology, and integration software, and to perform testing, validation, prototype design, and pilots.

    These factors will influence the pace of adoption over the coming years. The technology will not gain commercial scale overnight—and, in fact, it may take several years before OEMs will be able to offer autonomous features at a price that is both acceptable to consumers and profitable for the manufacturer. We expect that after launch, the cost of individual autonomous features will decline at a compound annual rate of roughly 4 to 10 percent over ten years as component production scales, R&D investments are amortized, and assembly costs fall with rising volumes. (See Exhibit 7.) We expect that by about 2025, the cost of autonomous features initially developed for partially autonomous vehicles will have decreased to the point where adding the sensor and processing capabilities needed to enable fully autonomous vehicles will become economically feasible.
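    The projected decline compounds year over year. A minimal sketch of that trajectory, using the $8,000 eight-beam lidar array mentioned earlier as a purely illustrative starting point for the 4 to 10 percent range:

```python
# Compound annual cost decline, per the 4-10 percent range above.
def cost_after(initial_cost: float, annual_decline: float, years: int) -> float:
    """Component cost after `years` of compounding decline at `annual_decline`."""
    return initial_cost * (1 - annual_decline) ** years

# Ten-year trajectory for a hypothetical $8,000 component.
print(round(cost_after(8_000, 0.04, 10)))  # 5319 at a 4% annual decline
print(round(cost_after(8_000, 0.10, 10)))  # 2789 at a 10% annual decline
```

    Depending on where in the 4 to 10 percent range a given component falls, its cost after a decade lands somewhere between roughly one-third and two-thirds of the launch price.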