
Autonomous vehicles: The future of transportation?

Autonomous vehicles are by no means a new idea, yet the field remains exciting, with enormous potential to change the way society moves. It is also currently a hot topic among consumers, entrepreneurs, and researchers. Being considered a hot topic is generally advantageous, since significant investments and research grants are readily available to entrepreneurs and researchers to solve outstanding production and technical challenges, respectively. However, hype around new technology often lets the technology outpace regulation, because nobody addresses how the technology will impact society as a whole. When this is the case, the benefits are touted as a magical solution, while in reality genuine concerns surface as the technology matures.

The rest of this post will attempt to highlight some of these concerns by covering the following topics:

  • Background
  • Case study: s3 Project @ Chalmers University of Technology
  • Technical challenges & my thoughts

But first, let's establish some context and the terminology that experts in autonomous vehicles use when they speak or write on this topic.

Background

First, let's clarify what is meant by the six levels of automation. I believe the Society of Automotive Engineers (SAE) definitions of these six levels clearly and succinctly convey the basic operations which fully autonomous vehicles must be capable of. Therefore, borrowing from SAE, the six levels of automation are defined as follows (a short code sketch summarizing the taxonomy appears after the list):

  1. Level 0: No Automation
    • This is your normal vehicle, where you as the driver perform all of the operating tasks needed (e.g. steering, braking, accelerating).
  2. Level 1: Driver Assistance
    • At this level, the vehicle can generally assist you as the driver with some operating tasks. Think of functions which exist in vehicles today such as traction control and anti-lock braking (ABS) systems.
  3. Level 2: Partial Automation
    • With partial automation, the vehicle begins assisting the driver with steering and accelerating tasks, which allows the driver to disengage from performing them. Think of cruise control and similar systems in vehicles today, but also adaptive cruise control, which accelerates and decelerates based on the vehicle directly ahead. These are generally the features most automotive companies are currently implementing in their vehicles.
  4. Level 3: Conditional Automation
    • At this level of automation, a seismic shift in mindset is required: the vehicle itself takes control of all monitoring of its own environment. This is done with many sensors which employ a wide range of techniques, such as radar and LIDAR. The driver's attention is still critical at this level in order to respond to unexpected events and prevent collisions, but the driver can safely disengage from functions such as lane changes or obeying traffic signals. Here it may be useful to imagine a vehicle able to make a lane change autonomously. This would mark the next step from current vehicles on the market, which are able to warn you, as the driver, if you attempt a dangerous lane-change maneuver (i.e. there is a car next to you). Generally, this level requires a human driver to act as a fail-safe in case the sensing techniques fail to detect and classify a problematic situation.
  5. Level 4: High Automation
    • At this next level of autonomy, the vehicle is fully capable of steering, braking, accelerating, monitoring the vehicle and roadway, and responding to events as they happen in order to change lanes, turn, and signal. This level of autonomy would still require a steering wheel and brakes, since the autonomous driving system is only activated by the driver when conditions are safe. Some complex driving situations, such as highway merging and traffic jams, would still require driver intervention.
  6. Level 5: Complete Automation
    • This final level of autonomy is complete autonomy requiring zero human attention. As such, a Level 5 autonomous vehicle has no need for pedals or a steering wheel, since it controls all critical tasks in the driving paradigm.
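To make the taxonomy easier to reference, here is a minimal sketch in Python (my own illustration, not an SAE artifact) encoding each level along with who monitors the environment and who acts as the fail-safe, as described above:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AutomationLevel:
    """One SAE automation level (simplified for illustration)."""
    level: int
    name: str
    vehicle_monitors: bool   # does the vehicle monitor its own environment?
    driver_fallback: bool    # is a human still needed as a fail-safe?

SAE_LEVELS = [
    AutomationLevel(0, "No Automation",          False, True),
    AutomationLevel(1, "Driver Assistance",      False, True),
    AutomationLevel(2, "Partial Automation",     False, True),
    AutomationLevel(3, "Conditional Automation", True,  True),
    AutomationLevel(4, "High Automation",        True,  True),   # complex cases only
    AutomationLevel(5, "Complete Automation",    True,  False),
]

for lvl in SAE_LEVELS:
    print(f"Level {lvl.level}: {lvl.name:24s} "
          f"vehicle monitors={lvl.vehicle_monitors}, "
          f"driver fallback={lvl.driver_fallback}")
```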

Currently, many technical challenges remain on the path to high (Level 4) and complete (Level 5) automation, particularly in remote sensing techniques that must accurately capture the driving environment in real time; however, these are rapidly being solved.

Case study: s3 Project @ Chalmers University of Technology

The s3 Project is a pilot project testing a self-driving, electric shuttle bus on the campus of Chalmers University of Technology. The project is part of the Drive Sweden framework, which is itself part of the Swedish government's strategic cooperation programme for transportation. The self-driving shuttles in this project are examples of Level 4 (high automation) vehicles, with the exception that the shuttle drives on a pre-mapped route at low speeds. One reason this project is needed is to test and improve the technologies which autonomous vehicles must use in order to operate safely and efficiently.

[Figure: Pedestrians walking past the Navya Arma shuttle]

The shuttles used in this project utilize a variety of sensors to enable the vehicle to "see", or at least visualize, its surroundings:

  • GNSS (Global Navigation Satellite System) The GNSS sensor is used to determine the position of the shuttle. A GNSS receiver measures the transit times of signals from four or more satellites and solves for the shuttle's spatial coordinates (the fourth satellite is needed because the receiver's clock bias is an additional unknown); a minimal sketch of this computation follows the sensor list.
  • LIDAR (Light detection and ranging) LIDAR sensors are used to build an accurate 3D map representation of the shuttle's surrounding environment. LIDAR works by scanning the environment with a pulsed laser and measuring the reflected pulses: a return received \(\Delta t\) after emission corresponds to a distance of \(d = \frac{c \, \Delta t}{2}\). Differences in laser return times and wavelengths are used to generate the 3D map representation.

[Figure: Representative LIDAR visualization]

  • Cameras Cameras are used primarily to detect and identify road signs and traffic lights to improve the accuracy of the 3D map representation generated from the LIDAR sensors.
  • IMU (Inertial measurement unit) The IMU measures the shuttle's accelerations and rotation rates while in operation, from which changes in its position and speed are tracked (see the dead-reckoning sketch after this list).
  • Odometer The odometer is used to estimate and confirm the speed and position of the shuttle.
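To make the GNSS bullet above concrete, here is a minimal sketch of pseudorange positioning, assuming simplified Earth-centered coordinates and ignoring real-world effects such as atmospheric delay and satellite clock error; the receiver clock bias is the fourth unknown that makes a fourth satellite necessary:

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def solve_position(sats, pseudoranges, iterations=10):
    """Estimate receiver position (m) and clock bias (s) from >= 4
    satellites via Gauss-Newton least squares on the model
        rho_i = ||sat_i - x|| + C * b
    """
    x, b = np.zeros(3), 0.0  # crude initial guess: Earth's center
    for _ in range(iterations):
        diffs = sats - x
        ranges = np.linalg.norm(diffs, axis=1)
        residuals = pseudoranges - (ranges + C * b)
        # Jacobian: d(rho)/dx = -unit vector toward satellite, d(rho)/db = C
        J = np.hstack([-diffs / ranges[:, None],
                       np.full((len(sats), 1), C)])
        delta, *_ = np.linalg.lstsq(J, residuals, rcond=None)
        x, b = x + delta[:3], b + delta[3]
    return x, b

# Synthetic check: 5 satellites, a known truth position, and a 1 µs clock bias
sats = np.array([[2.0e7, 0.0, 0.0], [0.0, 2.0e7, 0.0], [0.0, 0.0, 2.0e7],
                 [1.2e7, 1.2e7, 1.2e7], [-1.0e7, 1.5e7, 0.8e7]])
truth = np.array([3.0e6, 4.0e6, 3.5e6])
rho = np.linalg.norm(sats - truth, axis=1) + C * 1e-6
est, bias = solve_position(sats, rho)  # est ~ truth, bias ~ 1e-6 s
```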
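The IMU and odometer bullets, in turn, describe the ingredients of dead reckoning: integrating measured motion to track the vehicle's pose between absolute position fixes. A minimal 2D sketch, again my own simplification (production systems fuse these sensors, typically with a Kalman filter), looks like this:

```python
import math

def dead_reckon(x, y, heading, speed, yaw_rate, dt):
    """Advance a 2D pose one time step from odometer speed (m/s) and
    IMU yaw rate (rad/s). Simple Euler integration: errors accumulate,
    which is why periodic GNSS corrections are needed."""
    heading += yaw_rate * dt
    x += speed * math.cos(heading) * dt
    y += speed * math.sin(heading) * dt
    return x, y, heading

# Example: shuttle moving at 3.5 m/s while turning gently for 10 s
x = y = heading = 0.0
for _ in range(100):
    x, y, heading = dead_reckon(x, y, heading,
                                speed=3.5, yaw_rate=0.05, dt=0.1)
```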

The self-driving shuttles used in this project are Navya Arma vehicles, developed by Navya. Key specifications include a maximum speed of \(45 \frac{km}{h}\), a weight of \(2400\,kg\), a capacity of \(11\) seats and \(4\) standing places, a range of approximately \(100\,km\) (about \(8\) hours of operation), and identical driving behaviour in the forward and reverse directions.

Technical challenges & my thoughts

The achievements of the s3 project are tremendous and go a long way towards creating the technology needed for full autonomy, but certain technical challenges remain. For one, this project uses a pre-mapped route from which the self-driving shuttle does not deviate, which lowers the amount of input data that must be processed in real time. By reducing the information the shuttle must process to detect and recognise potential obstacles during operation, processing time decreases, enabling a faster response to an unexpected object. However, a fully autonomous vehicle would be expected to efficiently process all data about its surroundings in near real time without incident. This is a major challenge for full autonomy: systems must exist which can robustly detect and recognise new environments as the vehicle travels through them in real time. It is not enough to only process the difference between a pre-mapped static environment and whatever moves within the vehicle's immediate surroundings, such as other vehicles or pedestrians.

Some technologies showing promise in this area are machine learning and artificial intelligence algorithms, which in this application compare newly detected scenarios against a history of known scenarios (a toy sketch of this idea follows below). To this end, companies like Waymo (formerly the Google self-driving car project) and Uber's Advanced Technologies Group are, or have been, testing their fully autonomous vehicles on public roads in order to build the training datasets from which artificial intelligence algorithms learn and improve their self-driving behaviour.
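As a toy illustration of the scenario-comparison idea (my own sketch, not how Waymo or Uber actually implement it), a newly observed scenario can be represented as a feature vector and matched against a library of known scenarios with a nearest-neighbour search; the feature names below are hypothetical:

```python
import numpy as np

# Hypothetical feature vector per scenario:
# [num_pedestrians, num_vehicles, ego_speed_kmh, nearest_obstacle_m]
known_scenarios = np.array([
    [0, 2, 50.0, 40.0],   # light highway traffic
    [3, 1, 15.0, 8.0],    # pedestrians near a crosswalk
    [0, 8, 10.0, 5.0],    # dense traffic jam
])
known_labels = ["cruise", "yield_to_pedestrians", "follow_slow_traffic"]

def classify_scenario(observation: np.ndarray) -> str:
    """Return the label of the closest known scenario (1-nearest-neighbour)."""
    # Normalize each feature so no single dimension dominates the distance.
    scale = known_scenarios.max(axis=0)
    dists = np.linalg.norm((known_scenarios - observation) / scale, axis=1)
    return known_labels[int(np.argmin(dists))]

print(classify_scenario(np.array([2, 1, 12.0, 6.0])))  # -> yield_to_pedestrians
```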

This brings up another challenge, stemming from the fact that these early autonomous vehicles will operate on public roads and will therefore have to respond to the actions of non-autonomous vehicles piloted by humans, i.e. us. It is therefore important that autonomous vehicles understand both rational and irrational behaviour of human drivers in order to avoid collisions. Uber's Advanced Technologies Group found this out when one of its test vehicles, operating in autonomous mode, fatally struck a woman crossing the street in Tempe, Arizona a year ago, on March 18th, 2018, as reported in this news article. These are, unfortunately, the risks these companies take to improve the sophistication and reliability of their algorithms, and in this case the worst, most tragic outcome occurred. In terms of traffic safety, autonomous vehicles are in theory dramatically safer than human-driven vehicles, but this assumes that all vehicles are fully autonomous (i.e. the rules by which every autonomous vehicle operates are known to every other vehicle). The most probable future, however, is the gradual introduction of a few autonomous vehicles into our current transportation network, operating side by side with human-driven vehicles.

Ultimately, there are many benefits to full vehicle autonomy, such as improved traffic safety and faster commute times, but many challenges also remain on the path towards a fully autonomous transportation network. Thank you for taking the time to read this article; if you have any comments, I always appreciate feedback, which can be left via the comments section of this article on LinkedIn.