Why Direct to Autonomy

Over the last 70 years, many have studied the impact of automation on human activity, from early automation pioneers like Mackworth, who in 1948 studied the phenomenon now known as vigilance decrement, to Endsley, who in 1996 found that over-reliance on automation may reduce situational awareness in human operators. At Vahana we’ve pored over these reports and others as we considered the self-piloted on-demand air mobility use case of our aircraft. We also looked for lessons we could learn from the automotive industry as it pushed ahead with self-driving cars.

Drawing a Distinction

Unlike aircraft, cars face a constant flow of hazards — other cars, pedestrians, debris — with human error as the main contributor to risk. Self-driving cars must navigate these hazards in addition to the infrastructure (from street signs to road markers to curbs) meant to guide their safe movement on roads. This extent of hazard management makes autonomy in cars challenging, especially in high-density urban areas. Yet they have a fallback mode that, if implemented correctly, can provide a convenient solution: brakes.

Now compare the perspective from a typical car trip to that of a typical flight over one of the densest airspaces in the world, San Francisco, as shown in the above photos. Even with more than 20 airports, including three international ones, hazards such as aircraft, birds, and drones are encountered significantly less often in the sky. When a hazard is encountered, the aircraft, unlike the car, can move in three dimensions to avoid it. Yet, unlike the car, the consequences of an accident are typically much worse. To be clear, we do not subscribe to the “big sky” theory. Rather, this large difference in hazard density, and in the available responses to hazards, illustrates that aircraft autonomy is different in nature from automotive autonomy and needs to be addressed as such.

By viewing the challenge of autonomy from both the risk (chance of encountering a hazard) and impact (chance of a serious incident) angles, we begin to get at the core of why automotive autonomy and aircraft autonomy are so different and why, in many ways, the challenge of autonomy in aviation is simpler. Whereas cars must navigate a highly complex, hazard-rich environment with limited avoidance options, aircraft must navigate a simple, sparser environment with a wide array of avoidance options. Yet the simpler the scenarios, the greater the vigilance decrement.

Automation vs Autonomy

It’s critical to remember that automation and autonomy are not synonymous.

  • Automation is the ability for a system to control a vehicle. For example: autopilot, cruise control, lane keeping, or trajectory following in cars and aircraft.
  • Autonomy is the ability for a system to respond to unexpected hazards or situations during its operation. For example: in the case of automation without autonomy, if a car (or aircraft) were set in cruise control (or autopilot) mode, the vehicle would keep moving forward with no regard for an object in its path. With an autonomy system, the vehicle can detect the hazard and automatically react: it can deviate from its original path and then return to its nominal path afterwards. The systems that enable autonomy may include a suite of perception sensors such as radars or cameras, vehicle-to-vehicle (V2V) communication systems such as voice or ADS-B, or a command and control (C2) link by which a remote operator may redirect the vehicle.
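
The distinction can be made concrete in code. Below is a minimal sketch (ours, not a Vahana system) contrasting a pure automation loop with one that adds a detect-and-avoid behavior; the one-dimensional model, the `clearance` range, and the fixed lateral offset are all illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class Vehicle:
    position: float = 0.0     # progress along the nominal path
    lane_offset: float = 0.0  # lateral deviation from the nominal path

def automation_step(v: Vehicle, speed: float) -> None:
    """Automation only: hold speed along the path, blind to hazards."""
    v.position += speed

def autonomy_step(v: Vehicle, speed: float, hazards: list[float],
                  clearance: float = 5.0) -> None:
    """Automation plus autonomy: detect hazards ahead, deviate, then rejoin.

    `hazards` are positions of obstacles on the nominal path; `clearance`
    is a hypothetical detection range.
    """
    ahead = [h for h in hazards if 0 <= h - v.position < clearance]
    v.lane_offset = 1.0 if ahead else 0.0  # deviate while a hazard is ahead
    v.position += speed
```

In the automation-only loop the vehicle drives straight through any obstacle; in the autonomy loop it deviates while a hazard is within range and returns to the nominal path once clear.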

Getting To Autonomy

Within the autonomy community, there are primarily two schools of thought. The first includes use of a fallback driver or pilot (sometimes called a safety pilot). In this case, the vehicle must be controllable by both the autonomy system and the person. When in autonomy mode, the vehicle must also be able to communicate its intentions, limitations, and warnings to the person, giving them the necessary information in a timely manner. Finally, as the fallback, the person must be able to take over control at any moment and for any reason. Over time, as the autonomy system becomes increasingly reliable, the fallback driver does less and less work until one day, their services are no longer required.

This scenario of partial automation is where some behavioral studies come into play. For example, in 1997 Parasuraman proposed that automation does not supplant human activity, but changes the nature of the human’s work. In the case of the recent pedestrian fatality (the result of a safety driver not overriding and stopping an Uber self-driving ground vehicle), the error stemmed from the difference between active and passive roles. The self-driving vehicle’s safety driver was operating in a passive role (as a backup), as opposed to a traditional driver who is in constant control of the vehicle. Yet the circumstances that led to the accident demanded that the safety driver move to an active role. In related findings, Parasuraman also challenged the assumption that automation reduces operator error. These types of real-world experiences from the automotive industry, coupled with decades of research, illustrate the cognitive challenges, such as vigilance decrement, posed to the human operator by a system that can pass back the controls at any time. And if vigilance decrement is a problem in cars, where hazards are plentiful, then in aircraft, where hazards are orders of magnitude rarer, it is an even greater challenge.

The alternative targets full autonomy from the beginning. A reasonable strategy for the first implementations of fully autonomous vehicles is to constrain the operational environment sufficiently so that the autonomy systems can be designed from the beginning to be safe and extensively tested. The advantages of implementing full autonomy over partial autonomy are that there is no need to develop human-machine interfaces, no complex human psychological issues arising from handover processes, and no concern about the driver becoming complacent. One disadvantage is that the imposed operational constraints also limit operational freedom.

Clearly, both paths propose a gradual approach to autonomy: one through responsibility growth and the other through operational growth. The former path may get self-driving cars on the road faster, but at the significant cost of developing systems that are ultimately not needed. Conversely, the latter may restrict early revenue-generation opportunities to just those routes that are well understood and predefined, before expanding into a more generalized operating environment.

Fortunately, air taxi missions that vehicles like Vahana are targeting are, by their nature, constrained for the following reasons:

  • Flights are point-to-point and have to navigate within known airspace constraints
  • Landing infrastructure is limited thereby constraining the number of possible routes
  • Flight times are short allowing precise management of weather and energy
  • The entire fleet may be coordinated and monitored as a whole
  • Air hazards are rarely encountered compared to ground hazards
  • Ground hazards are minimized by applying safety means at the landing and takeoff sites, such as access control
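
As a rough illustration, constraints like these lend themselves to a simple dispatch gate. The sketch below is our own, not a Vahana system; the site names, flight-time limit, and battery threshold are all hypothetical.

```python
# Illustrative dispatch gate for a constrained air taxi operation.
# Site names and thresholds are hypothetical, not Vahana parameters.
APPROVED_SITES = {"PAD_SF_DOWNTOWN", "PAD_OAKLAND", "PAD_SAN_JOSE"}
MAX_FLIGHT_MINUTES = 25      # short flights keep weather/energy predictable
MIN_BATTERY_FRACTION = 0.5   # includes an assumed reserve margin

def clear_for_dispatch(origin: str, destination: str,
                       est_minutes: float, battery_fraction: float) -> bool:
    """Return True only when a flight request fits the constrained operation."""
    if origin not in APPROVED_SITES or destination not in APPROVED_SITES:
        return False  # limited landing infrastructure constrains routes
    if est_minutes > MAX_FLIGHT_MINUTES:
        return False  # only short, point-to-point flights
    if battery_fraction < MIN_BATTERY_FRACTION:
        return False  # energy must be managed precisely
    return True
```

The point of such a gate is that every admitted flight falls within an envelope the autonomy system was designed and tested for, rather than the open-ended environment a car faces.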

It is this specific type of limited air taxi operation that we call “self-piloted operations”. In choosing to pursue the self-piloted on-demand air mobility use case, we assume that requiring a safety pilot (as is unavoidable in the case of a partially automated aircraft) increases the cost of operation, introduces vigilance decrement, and decreases useful load.

Sharing Responsibilities

When it comes to self-piloted operations, it’s important to consider the core shifts in roles and responsibilities outlined in the image below. Some of these responsibilities can be automated while others can’t. As an example, the responsibilities on the first row — namely, basic airmanship, sense and avoid, navigation, and systems management — require a presence on the vehicle. As such, any self-piloted aircraft requires these particular systems, at a minimum, to be fully automated. Fortunately, the technology to automate those responsibilities has largely been demonstrated and certified on commercially available autopilot systems. The remaining responsibilities on the second and third rows may be offloaded from vehicle systems and be either automated or performed manually. Some, such as path planning or weather monitoring, are easily automatable and require minimal operator oversight. Others, such as preflight checks and landing procedures, will likely require a combination of eyes-on and automated checks. Lastly, other responsibilities, such as communications with Air Traffic Control (ATC), will be performed manually as they are currently difficult to reliably automate without a significant overhaul of the existing infrastructure or improvement in technology.
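
To summarize the allocation described above, here is a small sketch encoding each responsibility and where it might live. This is our own illustrative shorthand, not a Vahana artifact, and the responsibility names are informal.

```python
from enum import Enum

class Allocation(Enum):
    ONBOARD_AUTOMATED = "must be fully automated on the vehicle"
    OFFBOARD_AUTOMATED = "offloaded and automated, minimal oversight"
    MIXED = "combination of eyes-on and automated checks"
    MANUAL = "performed manually for now"

# Mapping of the responsibilities discussed above (names are our shorthand).
RESPONSIBILITIES = {
    "basic airmanship":   Allocation.ONBOARD_AUTOMATED,
    "sense and avoid":    Allocation.ONBOARD_AUTOMATED,
    "navigation":         Allocation.ONBOARD_AUTOMATED,
    "systems management": Allocation.ONBOARD_AUTOMATED,
    "path planning":      Allocation.OFFBOARD_AUTOMATED,
    "weather monitoring": Allocation.OFFBOARD_AUTOMATED,
    "preflight checks":   Allocation.MIXED,
    "landing procedures": Allocation.MIXED,
    "ATC communications": Allocation.MANUAL,
}

def onboard_requirements() -> list[str]:
    """Responsibilities any self-piloted aircraft must automate onboard."""
    return sorted(r for r, a in RESPONSIBILITIES.items()
                  if a is Allocation.ONBOARD_AUTOMATED)
```

Querying `onboard_requirements()` returns exactly the first-row responsibilities that require a presence on the vehicle.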

In this context, we find that self-piloted operations involve the coordination of three main actors: vehicle, operator, and ATC. The operators oversee the operations and comply with commands from ATC as necessary. They will review flight plans, ensure vehicle airworthiness, clear the landing area, and provide passenger support. The vehicle, on the other hand, will receive flight path commands from the operators, automatically follow that path while in flight, and sense and avoid hazards along the way. While the scale of this system is significant, it efficiently leverages automation and autonomy to enable high-volume, low-cost operations. It also supports a broader perspective on the overall operational safety case through constant monitoring and fleetwide analytics.

Vahana’s Path

Our decision to tackle self-piloted operations from the beginning was driven in part by being a project within Acubed by Airbus. Acubed operates its projects, like Vahana, on time scales that are significantly more aggressive than traditional industry development projects. These constraints challenge us to think creatively to shorten development timelines. This kind of undertaking, within an enormously successful aerospace company like Airbus, had to be distinct from the other R&D efforts the company pursues.

Yet we’re not pushing for self-piloting just to push hard and fast towards innovation. Through flight testing, we are able to identify and mitigate the corner cases that a safety pilot, if present, would exist solely to handle: the truly obscure events. The rarity of these occurrences leads to complacency and control-handover challenges, which ultimately encourage human error. As Parasuraman found in 1997, in an environment of high complexity, no designer of automation can foresee all possibilities. The scenario this leads to, a reliance on human operators to manage those rare situations, is a recipe for setbacks and drives up the cost and complexity of pilot training.

As we continue to build the future of flight, we will always advocate for the safest and most efficient path to autonomy. We seek to impact the lives of future generations, ensuring both their wellbeing and their efficiency. These future generations will live in a world of autonomous systems (cars, elevators, trams, etc.), so in order to make the mark we intend, the aircraft and supporting systems must be capable of reaching the scale to truly impact a whole generation of people. We are building an aircraft which, when operated as a service, will see demand of millions of flight hours a year. It is the pace of our innovation, the business case for our approach, and the requirement to uphold the safety standards to which aerospace has long been held that drive our path to autonomy.

Stay tuned for more info on these systems as testing progresses.

- Zach Lovering