Traditionally, computation tasks in automobiles have been performed by microcontroller units (MCUs) and application processors (APs). A typical mid-range vehicle can contain 25 to 35 MCUs/APs, while luxury cars may employ 70 or more. Increasingly, automobiles require extremely sophisticated, computationally intensive capabilities for such tasks as advanced driver assistance systems (ADAS), infotainment, control, networking, and security. Many of these applications involve machine vision in the form of image and video processing coupled with artificial intelligence (AI).
On their own, these processors struggle to handle all of the electrical interfaces and protocols demanded by peripheral devices such as sensors, cameras, and displays. Moreover, in many cases these processors simply cannot satisfy the extreme computational demands of tasks like machine vision and AI.
To address this complexity, designers of automotive systems are turning to field-programmable gate arrays (FPGAs), not to replace the existing MCUs/APs, but rather to act as bridges between them and other devices, and to augment them by offloading communications and other computationally intensive tasks.
Since FPGAs can be programmed to support a wide variety of electrical interfaces and protocols, they can act as bridges between MCUs/APs and sensors, cameras, and displays. Also, because they can perform calculations and operations in a massively parallel fashion, FPGAs can be used to execute computationally intensive vision processing and AI tasks, thereby freeing up the host processors for other activities.
This article discusses the processing requirements of modern vehicles and describes some of the automotive applications that can be addressed by FPGAs. It then introduces some example FPGAs from Lattice Semiconductor and shows how they can be used to solve connectivity, processing, and security problems. Associated development boards are also presented to help designers get started.
To support their ADAS capabilities, today’s automobiles employ many sensors outside the vehicle, including cameras, radar, LiDAR, and ultrasonic detectors. In many cases, it is necessary to take data from disparate sensors, pre-process this data (removing noise and formatting it as required), and use sensor fusion to combine the data such that the resulting information has less uncertainty than would be possible if the data from the different sensors were to be used individually. In many cases, AI applications are employed to analyze the data, make decisions, and take appropriate actions.
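The claim that sensor fusion yields less uncertainty than any single sensor can be made concrete with a minimal sketch. The example below (an illustration only, not a production fusion stack) fuses two independent, noisy measurements of the same quantity using inverse-variance weighting; the fused variance is always smaller than either input variance. The sensor values are hypothetical.

```python
def fuse(m1: float, var1: float, m2: float, var2: float) -> tuple[float, float]:
    """Combine two independent measurements (mean, variance) of one quantity
    via inverse-variance weighting; returns the fused mean and variance."""
    w1 = 1.0 / var1
    w2 = 1.0 / var2
    fused_mean = (w1 * m1 + w2 * m2) / (w1 + w2)
    fused_var = 1.0 / (w1 + w2)  # always below min(var1, var2)
    return fused_mean, fused_var

# Hypothetical readings: radar ranges the obstacle at 10.2 m (variance 0.25 m^2);
# the camera's depth estimate says 9.8 m (variance 0.5 m^2).
mean, var = fuse(10.2, 0.25, 9.8, 0.5)
assert var < 0.25 and var < 0.5  # fused estimate is less uncertain than either sensor
```

The same weighting idea generalizes to the Kalman filters commonly used in ADAS pipelines, where each sensor's contribution is scaled by how much it is trusted.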
A relatively recent trend is the deployment of electronic (also known as “digital”) rear-view mirrors. In this case, a wide-angle, high-resolution camera is installed inside the rear window. The video stream from this camera is presented on a digital display that replaces the traditional mirror, resulting in a clear rearward view that is unobstructed by passengers in the rear seats. In some cases, video streams from cameras mounted on the side mirrors may be merged with the video stream from the rear window camera. These three feeds are “stitched together” to provide a single image that is presented on a super-wide electronic mirror, thereby providing the driver with a much higher degree of situational awareness as to what’s going on around the vehicle.
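The "stitching" step described above can be sketched in a few lines. This toy example (an assumption-laden illustration, not a real video pipeline) concatenates three same-height frames, left mirror, rear window, and right mirror, row by row into one super-wide frame; real systems additionally warp, blend, and color-match the feeds before display.

```python
def stitch(left, center, right):
    """Concatenate three frames side by side.
    Each frame is a list of rows; each row is a list of pixel values."""
    assert len(left) == len(center) == len(right), "frames must share a height"
    return [l + c + r for l, c, r in zip(left, center, right)]

# Two-row, two-pixel-wide toy frames standing in for the three camera feeds
left   = [[1, 1], [1, 1]]
center = [[2, 2], [2, 2]]
right  = [[3, 3], [3, 3]]
wide = stitch(left, center, right)
# each row of `wide` is now six pixels across: left + center + right
```

In an FPGA, the equivalent operation is performed on pixel streams in real time, with line buffers holding each feed so the merged scanline can be emitted at display rate.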
Another recent trend is to deploy in-cabin cameras mounted on the dashboard, on the steering column, or integrated into the rear-view mirror (regular or electronic). When coupled with AI, these in-cabin cameras can be employed for a wide variety of tasks, such as recognizing who is sitting in the driver’s seat and adjusting it and the mirrors accordingly. In addition to monitoring drivers to ensure they’re paying attention to the road and not dozing off, such a system can also look for signs of drowsiness, as well as medical problems or distress such as an epileptic seizure or heart attack, and take appropriate actions. These actions may include activating the hazard warning lights, applying the brakes, and guiding the vehicle to the side of the road. Further applications include ensuring that young children and pets are not mistakenly left unattended in the rear seats (by preventing the car from being locked and flashing the lights), and alerting the driver if a passenger leaves something like a phone, bag, or package on the back seat.
With regard to video-based applications, in some cases it is necessary to split a single video input into multiple streams; in others, the requirement may be to aggregate multiple video streams into one.
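The two patterns just mentioned can be illustrated with a toy sketch (an assumption, not how any particular FPGA pipeline is coded), treating each video feed as an iterable of frames: fan a single stream out to several independent consumers, and merge several streams into one by interleaving their frames round-robin.

```python
from itertools import chain, tee

def split_stream(frames, n_outputs):
    """Duplicate one frame stream into n independent streams (fan-out)."""
    return tee(frames, n_outputs)

def aggregate_streams(*streams):
    """Interleave frames from several equal-length streams into one stream."""
    return chain.from_iterable(zip(*streams))

# One input feed split to two consumers...
a, b = split_stream(iter([1, 2, 3]), 2)
# ...and two feeds merged round-robin into a single stream
merged = list(aggregate_streams(iter([10, 30]), iter([20, 40])))
# merged interleaves the two sources: [10, 20, 30, 40]
```

In hardware, the same roles are played by replicated output ports (splitting) and time-division multiplexing of pixel streams (aggregation).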
With the increasing deployment of electric vehicles (EVs) comes the need to monitor and control motors, and to monitor and manage the charging process to achieve maximum battery life.
On top of all of this, many of today’s automobiles are starting to be 5G or V2X enabled, where V2X (“vehicle to anything”) refers to communication between a vehicle and any other entity that may affect (or be affected by) the vehicle, from roadside infrastructure to other vehicles. Along with this connectivity comes the need for security to prevent the vehicle from being hacked.
It’s important to remember that not all FPGAs are suitable for automotive applications. The Automotive Electronics Council (AEC) is an organization originally established in the 1990s by Chrysler, Ford, and GM for the purpose of establishing common part qualification and quality system standards. One of the most commonly referenced AEC documents is AEC-Q100, "Failure Mechanism Based Stress Test Qualification for Integrated Circuits."
IATF 16949:2016 is a technical specification aimed at the development of a quality management system which provides for continual improvement, emphasizing defect prevention and the reduction of variation and waste in the automotive industry supply chain and assembly process. Based on the ISO 9001 standard, IATF 16949:2016 was created by the International Automotive Task Force (IATF) and the Technical Committee of ISO.
Electronic system suppliers to the automotive market increasingly require that semiconductor suppliers provide products compliant with the AEC-Q100 standard and demonstrate IATF 16949 certification of their quality systems.
FPGAs are extremely flexible, but different device families offer various combinations of capabilities and functions that make them better suited to specific tasks. In the case of embedded vision applications, for example, modern cameras and displays often employ MIPI interfaces. The MIPI CSI-2 (camera/sensor) and DSI (display) protocols both employ a communications physical layer (PHY) called the D-PHY. Legacy MCUs/APs may not support this interface, but some FPGAs do, such as CrossLink-NX embedded vision and processing FPGAs from Lattice Semiconductor.
In addition to two hardened four-lane MIPI D-PHY transceivers supporting 10 gigabits per second (Gbits/s) per PHY, CrossLink-NX devices also support 5 Gbits/s PCIe, 1.5 Gbits/s programmable inputs/outputs (I/O), and 1066 megabits per second (Mbits/s) DDR3. These devices also support traditional electrical interfaces and protocols like low-voltage differential signaling (LVDS), Sub-LVDS (a reduced-voltage version of LVDS), Open LVDS Display Interface (OLDI), and serial gigabit media-independent interface (SGMII). As a result, these devices can be used for aggregating video streams, splitting video streams, and running AI applications, all while acting as bridges between legacy MCUs/APs and modern sensors and displays.
Developers of automotive systems looking to evaluate CrossLink-NX FPGAs will find the combination of the LIFCL-VIP-SI-EVN CrossLink-NX VIP Sensor Input Board (Figure 1) and the LF-EVDK1-EVN Modular Embedded Vision Kit to be of interest (the former can act as an input board for the latter). In addition to a CrossLink-NX FPGA, the sensor input board also features four 13 megapixel Sony IMX258 CMOS MIPI image sensors, supporting 4K2K @ 30 frames per second (fps) or 1080p @ 60 fps. It also supports easy sensor connectivity via three independent PMOD interfaces.
Figure 1: The CrossLink-NX VIP Sensor Input Board, which can act as input to the Embedded Vision Development Kit, contains a CrossLink-NX FPGA and supports the aggregation of four MIPI Sony IMX258 image sensors. (Image source: Lattice Semiconductor)
For compute-intensive applications that also demand high I/O bandwidth—such as AI for tasks like gesture recognition and control, voice recognition and control, human presence detection, occupant identification, and driver monitoring—Lattice’s ECP5 FPGAs are a good fit. These devices feature serializer/deserializer (SERDES) channels running at up to 3.2 Gbits/s (up to four channels per device, organized in dual-channel blocks for higher granularity), up to 85K look-up tables (LUTs), enhanced digital signal processing (DSP) blocks that provide a 2x resource improvement for symmetrical filters, and single event upset (SEU) mitigation support. They also provide programmable I/O support for the LVCMOS 33/25/18/15/12, XGMII, LVTTL, LVDS, Bus-LVDS, 7:1 LVDS, LVPECL, and MIPI D-PHY interfaces.
An example ECP5 device is the LFE5U-85F-6BG554C with 84,000 logic elements, 3.75 megabits (Mbits) of RAM, and 259 I/Os. Also of interest is the LFE5UM-45F-VERSA-EVNG ECP5 Versa Development Kit (Figure 2). The board uses a half-length PCI Express (PCIe) form factor and allows designers to evaluate key connectivity features of the ECP5 FPGA, including PCIe, Gigabit Ethernet (GbE), DDR3, and generic SERDES performance.
Figure 2: Presented in a half-length PCI Express form-factor, the ECP5 Versa Development Kit lets designers evaluate key connectivity features of the ECP5 FPGA, including PCIe, GbE, DDR3, and generic SERDES performance. (Image source: Lattice Semiconductor)
Security threats from hacking are increasing, with new breaches constantly occurring. In the case of automobiles, a cyberattack could cause loss of control, resulting in injury or death to passengers and pedestrians, as well as damage to the car, other vehicles, and property.
A large part of an automobile’s security solution is to establish a root of trust (RoT); that is, a hardware resource within the system that can always be trusted. One solution is an FPGA-based hardware RoT (HRoT), such as that provided by Lattice’s MachXO3D family of devices. In addition to substantial LUT resources and large numbers of I/O, these flash-based devices offer instant-on and hot-socketing capabilities. General-purpose applications include glue logic, bus bridging, bus interfacing, motor control, power-up control, and other control logic applications.
Of particular interest is the fact that the MachXO3D is the only dual-boot FPGA with fewer than 10K LUTs that is equipped with a hard National Institute of Standards and Technology (NIST)-certified Immutable Security Engine. This allows the MachXO3D to act as the automobile’s HRoT in the form of the system’s first-on, last-off device. When the system is powered up, the MachXO3D first checks to make sure that it’s running authenticated firmware. It then checks the firmware of the other devices in the system. If any of the components in the system are attacked or compromised, including itself, the MachXO3D rejects the suspect firmware and reloads that component with a known-good, authenticated firmware image.
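The verify-then-recover flow described above can be sketched conceptually. The example below is an illustration of the idea only, not Lattice's actual security engine: a root-of-trust device checks each component's firmware digest against a known-good value and re-provisions any component that fails the check. The component names and firmware images are hypothetical, and a real implementation would use signed images rather than bare hashes.

```python
import hashlib

# Hypothetical known-good SHA-256 digests, provisioned at manufacture
GOLDEN = {
    "rot":  hashlib.sha256(b"rot-fw-v1").hexdigest(),
    "adas": hashlib.sha256(b"adas-fw-v3").hexdigest(),
}

def verify_and_recover(component: str, firmware: bytes, golden_image: bytes) -> bytes:
    """Return the firmware that should run: the candidate image if it matches
    the provisioned digest, otherwise the known-good image (simulating a
    reload from protected flash)."""
    if hashlib.sha256(firmware).hexdigest() == GOLDEN[component]:
        return firmware       # authenticated: boot as-is
    return golden_image       # compromised: reject and reload known-good image

# A tampered ADAS image is rejected and replaced with the golden copy
running = verify_and_recover("adas", b"adas-fw-v3-TAMPERED", b"adas-fw-v3")
assert running == b"adas-fw-v3"
```

The "first-on, last-off" property means this check runs before any other component is released from reset, so a compromised image never gets a chance to execute.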
For those developers interested in evaluating MachXO3D-based designs, the LCMXO3D-9400HC-D-EVN MachXO3D Development Board provides an extensible prototyping platform (Figure 3). The board features an L-ASC10 (analog sense and control) hardware management device, a general-purpose I/O interface for use with Arduino and Raspberry Pi boards, two Hirose FX12-40 header positions (DNI), an Aardvark header (DNI), and 128 Mbit serial peripheral interface (SPI) Flash with quad read feature.
Figure 3: The MachXO3D Development Board features a MachXO3D FPGA, an L-ASC10 (analog sense and control) hardware management device, support for Arduino and Raspberry Pi boards, two Hirose FX12-40 header positions (DNI), an Aardvark header, and a USB-B connection for device programming. (Image source: Lattice Semiconductor)
The board comes in a 4 x 6-inch form factor and features a USB mini-B connector for power and programming, and multiple header positions supporting Arduino, Aardvark, Hirose FX12, and Raspberry Pi. Both a USB cable and a quick start guide are included.
Modern automotive electronics require an ever-increasing number of sensors, electrical interfaces, and protocols, with corresponding demands on processing power and bandwidth. The addition of AI and machine vision processing, as well as security requirements, complicates the implementation of solutions using classic MCU or AP approaches.
As shown, by appropriate application of FPGAs, designers can add a degree of flexibility and processing power that can bridge disparate processing environments, perform sensor aggregation and fusion functions, address I/O bandwidth requirements, and perform calculations and operations in a massively parallel fashion, while freeing up the host processors for other activities.
For security, a flash-based FPGA with dual-boot capability and NIST-certified Immutable Security Engine can act as the automobile’s HRoT and ensure that it—and other devices—are running only authenticated firmware, thereby preventing hackers from cryptographically compromising the automobile’s systems.