Syncopated Systems
Seriously Sound Science

Common Threads: Media, Science, Technology, and Other Magic

May 2020: The COVID-19 Chronicles

Cars That (Nearly) Drive Themselves

Over about the last 10 years, one of the more interesting changes I’ve seen is the rising number of all-electric highway-capable automobiles on Silicon Valley roads, from just a few homemade oddities—including one built around 1980 by a childhood friend’s father—to more than 50,000 all-electric cars today.

Of the more than 3.5 million electric vehicles now in the world, about 20% were produced by Elon Musk’s Tesla in its nearby factory.

Perhaps even more interesting is that some vehicles—both gas and electric—now drive themselves, and many others are learning quickly. I frequently see vehicles on local roads outfitted with additional equipment for this purpose, including Google’s Waymo Chrysler minivans (which seem to have replaced its earlier custom bubble cars) and others based on Ford, Lexus, and Lincoln vehicles. In December 2019, it was reported that a self-driving tractor-trailer freight truck had completed the first cross-country commercial freight run, and that researchers at nearby Stanford University had made promising progress on the until-now overlooked use case of power drifting using—of all things—a 1981 DeLorean.

To give a vehicle the ability to drive itself is to turn it into a large autonomous robot.

Just as we must first know where we are going before we can drive somewhere, self-driving vehicles must be able to navigate before they can learn how to drive. In short, we’ve got to know where we are and where we want to go before we can figure out how to get there.

Nolan Bushnell (left), Al Alcorn (right), and Jerry Jessop with the first coin-operated video game, Bushnell’s Computer Space (serial #1), in Santa Clara, CA on September 8, 2016


Teaching Autonomous Robots to Navigate

Right after graduating from high school and during my first semester in college, I worked at IBM’s Almaden Research Center (one of IBM’s two main research facilities in the United States), as a technician in its Advanced Computing Environment. In many ways, it was a dream job. I was 18 years old, and one of relatively few people there without a PhD.

(During the previous year’s summer break, I had worked at Atari, which—at least for a teenage computer geek—was another dream job, though Atari was clearly a company in decline. I met many of the people who had started and run Commodore, and I managed to save enough money and use my employee discount to buy a new computer, an Atari 520ST, and a 3.5-inch “microfloppy” disk drive.)

One of my friends told me about a presentation that his brother had attended, at which Atari founder and serial entrepreneur Nolan Bushnell described his latest venture. It was a startup called Bots Inc., which was to make autonomous mobile robots. It sounded interesting, so I looked up Bushnell’s office and sent in my resume.

I started working there shortly before the end-of-year holidays.

In 1988, my earliest friend in Sunnyvale and next-door neighbor, Mike Sherwood (1972-2016), had gotten our high school electronics teacher to give him the coin-operated Pong game that had stood in the back of the room. Together we restored it in his parents’ living room, until they told him to get rid of it. At that point we trucked it into my father’s garage and continued our work, until my father told me to get rid of it. By that point, I had started working for Bushnell, and asked around the office if he had a Pong game; apparently he had given his last one to the Smithsonian Institution, so, for Christmas, I put a bow on it and gave him mine. I enjoyed sharing his walk down memory lane as he looked through the documentation that was in the bottom of the cabinet and recalled the people who were named in it.

We developed a system of customer-facing point-of-sale graphical computer terminals and animated autonomous robots that in 1990 delivered food and drinks to customer tables from the kitchen of a pizza restaurant.

The robots navigated using dead reckoning and stripe following, and communicated with a control system and each other using pulsed infrared light. They also played digital audio and animated their humanoid features using servomotors commonly used in toy radio-controlled cars.
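Dead reckoning with two independently measured drive wheels follows a standard differential-drive model. The sketch below in Python is an illustrative textbook version of that math—the constants, names, and update rule are assumptions for illustration, not the actual robot firmware:

```python
import math

# Illustrative robot geometry (assumed values, not the actual hardware's)
WHEEL_CIRCUMFERENCE = 0.5   # meters of travel per wheel revolution
TICKS_PER_REV = 100         # encoder ticks per wheel revolution
WHEEL_BASE = 0.4            # meters between the two drive wheels

def dead_reckon(x, y, heading, left_ticks, right_ticks):
    """Update an (x, y, heading) pose from incremental encoder ticks."""
    left = left_ticks / TICKS_PER_REV * WHEEL_CIRCUMFERENCE
    right = right_ticks / TICKS_PER_REV * WHEEL_CIRCUMFERENCE
    distance = (left + right) / 2           # distance the robot's center moved
    dtheta = (right - left) / WHEEL_BASE    # change in heading, in radians
    heading += dtheta
    x += distance * math.cos(heading)
    y += distance * math.sin(heading)
    return x, y, heading
```

Equal tick counts move the robot straight ahead; opposite counts spin it in place, which is how the robot can both travel and steer from the same two measurements.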

Though our team solved enough technological problems to make the system do what was requested, its commercial success may have been hampered by some unaddressed sociological issues. Our client was Little Caesars Pizza, which is based in Detroit, Michigan. Perhaps admirably, our client wanted its prototype system installed at a location near its corporate headquarters. So, the robots were sent into a bad part of town. (Coincidentally, the film RoboCop—arguably more violent than any film before it—was released only about three years earlier. It was set in a futuristic and sadly lawless and violent Detroit.)

Although I already had worked for Atari and IBM and had started my own business, I was clearly the youngest and the least experienced member of our small team making robots. I learned a lot from the boss and my colleagues, especially from Mike Ciholas, who had recently completed a graduate degree from MIT.

I recall at one point suggesting to Bushnell an idea I had had in high school (due in large part to a particularly good science teacher, Richard Parsons) for the display of three-dimensional images in free space using holographic wave interference. Though he wasn’t immediately dismissive of the idea, he did suggest that the amount of power required might be impractical. Some time later, I learned that while he ran Atari, Atari owned most of the patents in the field of display holography. (See interview with Roger Hector.)

Through working for Bushnell, I met other interesting people including the engineer behind Pong, Al Alcorn.

Around that time, Bushnell’s attempts to leverage his many earlier successes seemed to catch up with him, and this was apparently his last venture in Silicon Valley.

Early Automotive Navigation Systems

Among his other contributions to technology and society, Bushnell had been the seed investor behind the first practical commercial automotive navigation system, the 1985 Etak Navigator. (Through a local early online social network—a computer bulletin board system—one of my friends was the stepson of Etak’s founder.) Back then, I enjoyed watching one of these systems operate while riding in Bushnell’s car.

At the time, that was the largest Mercedes-Benz I had ever seen; a couple of years later, I happened to park next to a bigger one owned by Apple co-founder Steve Wozniak. Both were blue sedans, but the latter was a V12 with an old blue California license plate in orange letters that read “APPLE II” and a gold frame bearing the name Apple Computer and its early color logo.

The Etak Navigator used dead reckoning based mostly on odometry, with data collected from rotational encoders mounted on the inside rim of each rear wheel. This allowed the Etak Navigator to measure the distance each wheel traveled independently, so that it could also measure turns. One thing that made the Etak Navigator interesting is that it self-corrected accumulated measurement errors by comparing its presumed heading and position against a compass and map matching. Another is that the company sent cars equipped with its system out to map the streets, creating valuable electronic databases of the roads; these map databases were later licensed to create Yahoo! maps, long before Google started sending out its own cars for map-making and, later, camera-equipped cars for Google Street View.
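The self-correction idea can be sketched as a simple complementary filter: trust odometry for short-term heading changes, but pull the estimate slowly toward the compass to bleed off accumulated drift. The blend factor and approach below are illustrative assumptions, not Etak’s actual algorithm:

```python
import math

def correct_heading(odometry_heading, compass_heading, blend=0.05):
    """Nudge a dead-reckoned heading (radians) toward a compass reading.

    blend is the fraction of the disagreement corrected per update;
    small values keep odometry's smoothness while slowly canceling drift.
    """
    # Smallest signed angular difference, which handles wraparound at +/-pi.
    error = math.atan2(math.sin(compass_heading - odometry_heading),
                       math.cos(compass_heading - odometry_heading))
    return odometry_heading + blend * error
```

Run once per sensor sample, this leaves short-term turns to the (smooth but drifting) odometry while the (noisy but absolute) compass keeps the long-term heading honest.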

I also see some fun similarities between technologies used by the Etak Navigator and by Atari. For example, the Etak Navigator stored its data on tape cassettes, as could the 8-bit Atari home computers that were available 1979-1992, such as the Atari 800. (I have heard that a later Etak Navigator used a CD-ROM drive; after Sony purchased Etak in 1997, its SkyMap system used CD-ROMs.) The Etak Navigator displayed maps and the user’s position on a vector-based (non-rasterized) cathode ray tube (CRT) display; though small and green only, this was the same type of display used by some Atari coin-operated video games, including Asteroids (1979), Battlezone (1980), Lunar Lander (1979), Red Baron (1980), Space Duel (1982), Star Wars (1983), Star Wars: The Empire Strikes Back (1985), and Tempest (1981).

Global Positioning System

For one of my clients in 2003, I created hardware and software interfaces to receivers for the United States Global Positioning System (GPS) radionavigation-satellite service (RNSS), including consumer, industrial, and differential GPS units. This was only about 10 years after GPS became fully operational, and only about three years after the federal government turned off its Selective Availability accuracy restriction in 2000, making the system far more useful for civilian applications.
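Such interfaces typically read a serial stream of NMEA 0183 sentences from the receiver. The sketch below extracts a position fix from a GGA sentence; it is illustrative glue code under those assumptions, not my client’s actual implementation:

```python
def parse_gga(sentence):
    """Extract (latitude, longitude) in decimal degrees from a NMEA GGA sentence.

    A minimal sketch: real code should also verify the trailing checksum
    and handle the empty fields a receiver emits before it has a fix.
    """
    fields = sentence.split(',')
    if not fields[0].endswith('GGA'):
        raise ValueError('not a GGA sentence')

    # NMEA encodes angles as (d)ddmm.mmmm: degrees, then decimal minutes.
    def to_degrees(value, hemisphere):
        degree_digits = 2 if hemisphere in 'NS' else 3
        decimal = float(value[:degree_digits]) + float(value[degree_digits:]) / 60.0
        return -decimal if hemisphere in 'SW' else decimal

    lat = to_degrees(fields[2], fields[3])
    lon = to_degrees(fields[4], fields[5])
    return lat, lon
```

For example, the commonly cited sample sentence `$GPGGA,123519,4807.038,N,01131.000,E,1,08,0.9,545.4,M,46.9,M,,*47` decodes to roughly 48.1173° N, 11.5167° E.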

Automating Automobiles

Nowadays some of the vehicles I see in Silicon Valley have various non-factory equipment mounted externally, apparently used to test systems being developed to give them autonomy as fully self-driving (FSD) vehicles. Much of the equipment used to enable and evaluate autonomous driving tends to be mounted on these vehicles’ roofs. To aid with odometry, many cars add a rotational encoder to at least one of their rear wheels, often mounted to the exterior of the vehicle body about as inelegantly as training wheels are added to a child’s bicycle, and somewhat similar in function. (Even almost 35 years ago, Etak could install these invisibly when equipping cars with its Navigator systems.)

In contrast, local electric automobile maker Tesla has offered its Autopilot option on its vehicles manufactured since October 9, 2014, already more than five years ago. Tesla has also made Autopilot standard equipment as of April 2019, the same month it introduced a third major hardware version and support for full self-driving capability.

Of course, drivers don’t need their vehicles to be all-electric to enjoy the benefits of automation.

Levels of Automation in Driving

Although Tesla modestly describes its Autopilot as only an advanced driver-assistance system (ADAS), the standard used by the National Highway Traffic Safety Administration (NHTSA) and the National Transportation Safety Board (NTSB) to classify levels of automated driving identifies Tesla Autopilot as a Level 2 (“Partial Automation”) system.

This standard, SAE International (formerly the Society of Automotive Engineers) J3016_201609, is summarized by the following six levels. Note that as automation increases, the task of monitoring the environment shifts from the human driver to the machine.

SAE J3016 Levels of Automated Driving
Level 0: No Automation
Level 1: Driver Assistance
Level 2: Partial Automation
Level 3: Conditional Automation
Level 4: High Automation
Level 5: Full Automation
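The shift in monitoring responsibility noted above can be captured in a small lookup. In the J3016 summary, the human driver monitors the driving environment through Level 2, and the automated system monitors it from Level 3 up:

```python
SAE_LEVELS = {
    0: "No Automation",
    1: "Driver Assistance",
    2: "Partial Automation",
    3: "Conditional Automation",
    4: "High Automation",
    5: "Full Automation",
}

def environment_monitor(level):
    """Return who monitors the driving environment at a given SAE level."""
    if level not in SAE_LEVELS:
        raise ValueError("SAE levels run 0 through 5")
    return "human driver" if level <= 2 else "automated system"
```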

In contrast to Tesla Autopilot, a conventional cruise control system (which uses only a tone wheel or similar speed sensor) would be considered Level 1 vehicle automation, a “Driver Assistance” system. An adaptive cruise control (ACC) system would also be considered Level 1 automation, but is a more typical example of an advanced driver-assistance system (ADAS).

Benefits of Automated Driving

One benefit of vehicular automation is that it reduces the workload of the human driver, thus reducing fatigue and the hazards associated with operating a motor vehicle while fatigued. Other benefits include collision avoidance and mitigation. Since the early 2000s, I have experimented with low levels of automated driving, with integrating computers into automobiles, and with their effects on driver workload.

Reducing Driver Fatigue

Although I am originally from Silicon Valley, I had followed work elsewhere a few times, and a couple of those times were to central Texas, in and around Austin. With Texas being such a large state and so far away from my roots near the west coast, often I found myself driving long distances.

While returning from a road trip with a relatively short stay in Silicon Valley—a three-day drive each way—I finally succumbed to the soreness in my right foot from holding the accelerator pedal for so long, and called my dealer in Austin to order a cruise control system, which I installed shortly thereafter in my 1999 Subaru Forester.

(When I bought the car, I got a particularly good deal buying one that the dealer had put just over 1,000 miles on, ordering extra equipment packages that were normally installed by the dealer, and installing them myself. So, I was already very familiar with installing Subaru options like the cruise control. As a kid, I learned to work on cars from my father, who loved self-reliance and Volkswagens.)

During the long drives that followed, I quickly realized that using the cruise control left me significantly less fatigued. The way it regulates the throttle also boosted my car’s highway fuel economy from 28 MPG to 29 MPG, exactly as stated on the manufacturer’s sticker when the car was new.
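A conventional cruise control is, at heart, a feedback loop that trims the throttle in proportion to the speed error and its accumulated history (the integral term cancels steady-state error, such as on a grade). A minimal proportional-integral (PI) sketch follows; the gains and limits are illustrative assumptions, not those of any production system:

```python
class CruiseControl:
    """Minimal proportional-integral (PI) speed controller sketch."""

    def __init__(self, set_speed, kp=0.05, ki=0.01):
        self.set_speed = set_speed  # desired speed, e.g. in mph
        self.kp = kp                # proportional gain (illustrative)
        self.ki = ki                # integral gain (illustrative)
        self.integral = 0.0

    def update(self, measured_speed, dt):
        """Return a throttle command in [0, 1] for one speed sample."""
        error = self.set_speed - measured_speed
        self.integral += error * dt
        throttle = self.kp * error + self.ki * self.integral
        return max(0.0, min(1.0, throttle))  # clamp to the physical range
```

Smooth, small corrections like these are also why cruise control tends to beat a human foot on fuel economy: it never over-accelerates and then coasts.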

Collision Avoidance and Mitigation

Discussions of collision-avoidance systems often seem to focus on ethical dilemmas associated with seemingly infinite permutations of the trolley problem. In a nutshell, this thought experiment raises the question of whether it is better to intervene in a situation by killing one person to save the lives of five, or to do nothing and let five people die. (I contend that the action of pulling the lever while knowing that it would kill a person would legally be classified as murder, and that any utilitarian ethical arguments would not meet the definition of lawful necessity that would excuse such action.)

Rather than skipping down the road of technological evolution to questions of how machines of the future should address such problems, I prefer to focus on how we can use the technology of today to address the root causes of the problem: that you have a trolley that can’t stop and people on the tracks ahead of it. (Seriously, what responsible engineer would allow such a foreseeable situation, and what responsible engineering managers would allow such work to go unchecked?)

The first thought that comes to my mind is how to get the vehicle to a safer state. In the case of an automobile, its safest state is generally when it is stopped. (I qualify this statement because there are always edge cases, such as those caused by environmental hazards.) Similar to how human drivers are taught not to overdrive the distance they can see with their headlamps, automated vehicles should not drive faster than they can stop within the distance over which they can reliably detect hazards.
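That guideline has a simple kinematic form: total stopping distance is reaction distance plus braking distance, so the maximum safe speed is the one whose stopping distance just fits within the distance the driver (or the sensors) can see. A sketch, with illustrative values assumed for reaction time and deceleration:

```python
import math

def stopping_distance(speed, reaction_time=1.0, deceleration=7.0):
    """Total distance (m) to stop from speed (m/s).

    Illustrative defaults: ~1 s to react, ~7 m/s^2 braking on dry pavement.
    """
    return speed * reaction_time + speed ** 2 / (2 * deceleration)

def max_safe_speed(sight_distance, reaction_time=1.0, deceleration=7.0):
    """Largest speed (m/s) whose stopping distance fits within sight_distance.

    Solves v*t + v^2/(2a) = d for v via the quadratic formula (positive root).
    """
    a = 1 / (2 * deceleration)
    b = reaction_time
    c = -sight_distance
    return (-b + math.sqrt(b * b - 4 * a * c)) / (2 * a)
```

Because braking distance grows with the square of speed, a modest increase in speed demands a disproportionately longer clear view ahead—the crux of the “don’t overdrive your headlamps” rule.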

At the very least, I’m thrilled to see developments in the area of autonomous emergency braking systems (AEBS). Unfortunately, I think our society is long overdue in requiring new cars to be designed not to drive into things or people.

Risks of Automated Driving

To date, 104 deaths are associated with Tesla automobiles. Of these, at least four to eight deaths are associated with the use of Autopilot.

Although it may be too soon to glean statistical significance, data suggests that rates of motor vehicle collisions are about 80% lower when using Tesla Autopilot. (The same report indicates that, per mile driven, Tesla drivers are already about 3.5 times less likely to crash than all drivers in the United States.)

Unreasonable Over-Reliance on Technology

Descriptions of Autopilot incidents include, in my opinion, stories of unreasonable over-reliance on the technology and irresponsible ignorance of its limitations.

This is the same idea conveyed by old (and almost certainly false) stories of someone who bought a camper van (also known as a motor home, recreational vehicle, or RV), drove it down a highway, set the cruise control, walked to the back to make a sandwich, brew coffee, or some other such thing, and was surprised when the vehicle soon crashed.

Sadly, this seems to be the case with the second driver killed while using Autopilot. (The first was on January 20, 2016 in Handan, China.) On May 7, 2016 in Williston, Florida, a Tesla Model S using Autopilot underrode a turning semitrailer as its driver apparently focused his attention on the passenger seat, where he had mounted a laptop computer and a DVD player that was playing a movie.

The next fatality attributed to the use of Autopilot occurred on March 23, 2018 in Mountain View, California, about six miles from my home. As his Tesla Model X erroneously drifted toward a damaged impact attenuator where a lane diverged from the rest of the freeway, the driver had about five seconds to react, but may have had his hands off the steering wheel for about six seconds before the crash.

Who is Responsible?

Such vehicular automation systems also raise questions about general design philosophies, and specifically about who should be in control of, and who should be responsible for, the operation of the motor vehicle.

I contend that the driver should always have final authority and responsibility. So, if an autonomous driving system detects that a collision is imminent, the system should alert the driver and give the driver the option to override its automatic braking. (For example, a driver should be allowed to crash through a gate or guard arm when needed to avoid something worse, like being hit by an oncoming train.) Of course, this exchange might need to occur very quickly, so it may be necessary to give the driver a preemptive control, such as pushing or holding a button on the steering wheel, rather than an interactive control through which the driver must wait to respond to a prompt from the system. The system would also need to detect with reasonable certainty that the signal indicates the driver’s intent, and not input caused by a hardware failure such as a stuck switch.

The deadly failures of Tesla Autopilot raise the question of whether Tesla misrepresented the capability of its technology. At the time of this writing, the Tesla Autopilot web page appears to describe the system reasonably, stating “Autopilot enables your car to steer, accelerate and brake automatically within its lane. Current Autopilot features require active driver supervision and do not make the vehicle autonomous.”

With that description, Tesla Autopilot does not meet the definition of an “automated driving system” under Florida statute 316.003(3), so the driver in the May 7, 2016 incident appears to have violated Florida statute 316.303(1), which states “A motor vehicle may not be operated on the highways of this state if the vehicle is actively displaying moving television broadcast or pre-recorded video entertainment content that is visible from the driver’s seat while the vehicle is in motion, unless the vehicle is being operated with the automated driving system engaged.”
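One illustrative way to address the stuck-switch concern raised above is to require the override signal to change state within a plausible window: a press that never releases is treated as a possible hardware fault rather than driver intent. The sketch below is a thought experiment under those assumptions, not any production system’s logic:

```python
class OverrideButton:
    """Sketch of intent detection for a driver-override button.

    A press counts as driver intent only if it is released within
    max_hold seconds; a signal held longer is treated as a possible
    stuck switch and ignored until it clears.
    """

    def __init__(self, max_hold=5.0):
        self.max_hold = max_hold
        self.pressed_at = None
        self.faulted = False

    def sample(self, pressed, now):
        """Feed one (state, timestamp) sample; return True on valid intent."""
        if pressed:
            if self.pressed_at is None:
                self.pressed_at = now
            elif now - self.pressed_at > self.max_hold:
                self.faulted = True      # looks stuck, not intentional
            return False
        # Released: a short press that wasn't flagged counts as intent.
        intent = self.pressed_at is not None and not self.faulted
        self.pressed_at = None
        self.faulted = False
        return intent
```

The design choice here is that the system errs toward ignoring ambiguous input: a switch shorted closed at power-on can never trigger an override on its own.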

Previous article in this series: Champions of the Space Age

Back to series index: Common Threads index