Designing an automated lane keeping system (ALKS)

What is an automated lane keeping system (ALKS)?

An automated lane keeping system is a closed-loop control system employed in semi-autonomous and autonomous vehicles. It works to keep a car in its lane by steering and by applying throttle and brakes. Automated lane keeping systems are typically considered Level 3 in SAE's levels of driving automation.

What are the levels of driving automation? 

SAE's J3016 standard is the most widely adopted formal classification of autonomous driving systems and has been adopted by organizations such as the National Highway Traffic Safety Administration (NHTSA). The standard defines six levels of driving automation:

  • Level 0: The automated system issues warnings but has no sustained vehicle control
  • Level 1: The driver and the automated system share control of the vehicle
  • Level 2: The automated system takes full control of acceleration, steering, and braking, but the driver must monitor and be ready to intervene at any time
  • Level 3: The driver can safely turn their attention away from driving but must be prepared to intervene within a limited time when the system requests it
  • Level 4: Similar to Level 3, but driver attention is not required for safety
  • Level 5: No human intervention is required at all

Why is closed-loop system testing important for automated lane keeping systems?

Closed-loop control systems differ from open-loop control systems in that they feature a feedback loop between the plant (the system being controlled) and the controller. This allows them to self-correct in real time as conditions change, based on the measurements returned through the feedback path.
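
For intuition, here is a minimal closed-loop sketch in Python - a proportional controller nudging a car back toward the lane center. The plant model and gain are illustrative assumptions only, not part of the system we build below.

K_P = 0.5          # proportional gain (assumed for illustration)
DT = 0.1           # time step in seconds (assumed)

lane_center = 0.0  # target lateral position in meters
position = 1.2     # the car starts 1.2 m off-center

for step in range(50):
    error = lane_center - position  # feedback: measure the deviation
    steering = K_P * error          # controller: correct proportionally to the error
    position += steering * DT      # toy plant: steering shifts the lateral position
    print("t=%.1fs position=%+.3f m" % (step * DT, position))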

Before moving any system to production, vehicle manufacturers and Tier 1 suppliers must conduct end-to-end verification and validation. This allows them to examine the functionality of their systems and ensure the safety of passengers and other vehicles on the road.

Closed-loop system testing is part of the verification and validation process. It provides robust statistical evidence about how an autonomous driving system behaves. When done right, it shows engineers where their systems may be lacking and helps them prioritize future work.

AirSim, CarSim, and Applied Intuition are popular tools that vehicle manufacturers employ to test their vehicle models thoroughly in a 3D simulated environment. Collimator is designed to integrate seamlessly with these third-party platforms via API, allowing users to design and test their end-to-end system - perception, localization, and control - in one place.

Designing an automated lane keeping system in Collimator

In this tutorial, we will design an automated lane keeping system (ALKS) in Collimator. We will then simulate the car's performance in a 3D environment so we can see how the system reacts based on its position in a lane.

The goal is to see how the vehicle performs in the simulated environment to inform our controller design and test the real-time communication between Collimator and the AirSim environment. 

Below is the simple schematic of the interface and closed loop feedback between the two systems:

Seamless communication between Collimator and AirSim

Model overview

An overview of our model is pictured below:

ALKS model

In order to infer the position of the vehicle in its lane, we will use a pre-trained PyTorch neural network. The network is adapted from the paper PolyLaneNet: Lane Estimation via Deep Polynomial Regression by Lucas Tabelini Torres et al.

PolyLaneNet takes an image and draws over it the leftmost and rightmost boundaries of the lane the vehicle is in. The version of PolyLaneNet we are using has also been modified to return a recommended steering offset to keep the vehicle in the lane.

Note: for the purposes of this simulation, we assume a constant throttle and no brakes applied to the vehicle.

The simulation begins with a clock block that is responsible for timekeeping across both Collimator and AirSim. The clock is connected to a series of Python script blocks that operate our AirSim model:

  1. The initScript block initializes the neural network responsible for lane detection
  2. The AirSim_RX block connects to an already active AirSim environment
  3. The PolyLaneNet block runs our PolyLaneNet model on the live images generated by the AirSim environment, infers the lanes on the road, and provides a suggested steering angle
  4. Lastly, the AirSim_TX block applies steering and throttle to the vehicle in the AirSim environment based on the neural net's estimate of where the car sits relative to the left and right edges of the lane

The inputs and outputs of the three main blocks are listed below:

\begin{array} {|l|l|l|} \hline \text{Block} & \text{Input} & \text{Output} \\ \hline \text{AirSim\_RX} & \text{Clock} & \text{RGB image, e.g. (200, 200, 200)} \\ \hline \text{PolyLaneNet} & \text{RGB image} & \text{Offset (degrees)} \\ \hline \text{AirSim\_TX} & \text{Steering (offset) and throttle} & \text{Vehicle position} \\ \hline \end{array}
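
For intuition, the per-frame data flow through these blocks might be sketched as follows. The function names below are hypothetical stand-ins for the Collimator script blocks, not actual API calls:

import numpy as np

def airsim_rx(t):
    # Stand-in for the AirSim_RX block: would return an RGB frame from AirSim
    return np.zeros((360, 640, 3), dtype=np.uint8)

def polylanenet(frame):
    # Stand-in for the PolyLaneNet block: would return a steering offset in degrees
    return 0.0

def airsim_tx(steering, throttle):
    # Stand-in for the AirSim_TX block: would apply controls and return the car's position
    return (0.0, 0.0, 0.0)

for t in np.arange(0.0, 10.0, 0.1):  # the clock block ticks both simulators forward
    frame = airsim_rx(t)
    offset = polylanenet(frame)
    position = airsim_tx(offset, throttle=1.0)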

System initialization

The initScript block sets up the neural net we will use for inference. Our AirSim_RX script block assumes a running AirSim instance; we'll use AirSim's API to pull images from the simulator.

import airsim
import time
import numpy as np
import warnings

# REMOTE_IP is the address of the machine running AirSim, set elsewhere in the model
if 'client' not in locals():  # ensure we don't overwrite an existing client
    client = airsim.CarClient(ip=REMOTE_IP)
    client.confirmConnection()
    client.enableApiControl(True)  # allow the script to drive the car
    client.simPause(True)          # pause so Collimator can step the sim frame by frame
    car_controls = airsim.CarControls()

As shown in the AirSim documentation, we'll connect to the AirSim instance and read the car's current state, including its speed and gear:

car_state = client.getCarState()
speed = car_state.speed
gear = car_state.gear

Next we will grab the images from our running AirSim simulation and convert them to RGB NumPy arrays for neural net inference.

# Request a single uncompressed scene image from camera "0"
# (the AirSim video resolution should be set to 360 by 640)
responses = client.simGetImages([airsim.ImageRequest("0", airsim.ImageType.Scene, False, False)])
response = responses[0]
img1d = np.frombuffer(response.image_data_uint8, dtype=np.uint8)
image = img1d.reshape(response.height, response.width, 3)

# Split the image into its R, G and B channels for the downstream blocks
R = np.array(image[:, :, 0], dtype=float)
G = np.array(image[:, :, 1], dtype=float)
B = np.array(image[:, :, 2], dtype=float)

Vehicle perception and localization

We'll begin our PolyLaneNet script block by importing the libraries we will use and setting the width and height of the simulator:

from model_helpers.log import log_info, log_debug, log_warn
import numpy as np
import cv2

WIDTH = 640   # network input width, matching the AirSim camera
HEIGHT = 360  # network input height

We also want to update the frame counter with each frame that we pull from AirSim:

frame_id = locals().get('frame_id', 0) + 1

In order to run inference using our version of PolyLaneNet, we will need to load the model from the path we specified in our initialization script. This only needs to happen on the first frame of the simulation.

We'll ensure it happens only once by checking that our model variable doesn't already exist.

if 'model' not in locals():
    log_debug("Loading model for inference...")

    from single_test import load_model, infer, find_left_right_lanes, draw_overlays
    import torchvision.transforms as transforms
    import torch

    transform = transforms.ToTensor()
    model, device = load_model()
    log_info("Inference will run on", device)

Next, still inside the first-frame block above, we will assign the image buffers that feed the net. The image is a placeholder that is fed on the 0th frame.

    image = np.zeros((HEIGHT, WIDTH, 3), dtype=np.uint8)  # placeholder frame for the net
    unscaled = np.zeros((R.shape[0], R.shape[1], 3), dtype=np.uint8)  # buffer at AirSim's native resolution
    frame_id = 0

Now we can begin writing the code that will execute on every frame pulled from AirSim.

We want the RGB values from each frame to be written into our unscaled buffer every time a new frame is loaded. We will also resize the image to the network's input size using bilinear interpolation, which smooths it and makes it easier to infer. Finally, we'll convert the image to a tensor and add a batch dimension.

unscaled[..., 0] = R
unscaled[..., 1] = G
unscaled[..., 2] = B
image = cv2.resize(unscaled, (WIDTH, HEIGHT), interpolation=cv2.INTER_LINEAR)  # bilinear resize
tensor = transform(image)     # HWC uint8 image -> CHW float tensor
tensor = tensor.unsqueeze(0)  # add a batch dimension: (1, C, H, W)

Next we will run inference on the tensor, passing the model and device variables that were loaded earlier.

outputs = infer(model, tensor, device)
results = outputs[0][0]

Our updated version of PolyLaneNet returns an offset - the difference between the left and right lane distances divided by the area - and draws the lane guidelines over the image we feed it. We'll use cv2 to save each frame with the overlays drawn from PolyLaneNet's results.
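
As a rough illustration - the exact computation lives inside find_left_right_lanes, and the normalizing area term is taken from the description above - the offset has roughly this form:

def approximate_offset(left_distance, right_distance, area):
    # Illustrative only: approximate form of the offset described above
    return (left_distance - right_distance) / area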

offset, leftDistance, rightDistance, leftLane, rightLane, centerLane = find_left_right_lanes(results, WIDTH, HEIGHT)

draw_overlays(image, results)
cv2.imwrite("frame_%d.jpg" % frame_id, image)

We will then assign offset as the output variable of the PolyLaneNet script block.

PolyLaneNet inputs and outputs

Car controller system

The throttle we will apply to the model is generated by a step block. It provides a constant throttle of 1 unit to the model and steps that down to 0.04 units after the 5th second.
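
Expressed in code, the step block's output behaves like this sketch:

def step_throttle(t):
    # Throttle profile of the step block: 1 unit for the first 5 seconds, then 0.04 units
    return 1.0 if t < 5.0 else 0.04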

Next, in our final script block, we will apply the Collimator-suggested throttle, steering, and brake to the AirSim model:

# Set the controls for the car based on the block's inputs
car_controls.throttle = throttle
car_controls.steering = steering  # alternatively: np.sqrt(np.abs(steering)) * np.sign(steering)
car_controls.brake = brake
client.setCarControls(car_controls)

In order to double-check the simulation data between Collimator and AirSim, we will print the car's state on each frame and assign the x, y, and z positions of the car to the output variables of the block.

print("Speed %d, Gear %d, Throttle %.2f, Steering %.2f, Brake %.2f" % (car_state.speed, car_state.gear, throttle, steering, brake))

# let AirSim continue the simulation for the next frame
client.simContinueForFrames(1)

# Expose the car's position as the block's outputs
x = car_state.kinematics_estimated.position.x_val
y = car_state.kinematics_estimated.position.y_val
z = car_state.kinematics_estimated.position.z_val

Full Tutorial Video:

Try it in Collimator