Search Results

Technical Paper

Region Proposal Technique for Traffic Light Detection Supplemented by Deep Learning and Virtual Data

2017-03-28
2017-01-0104
In this work, we outline a process for traffic light detection in the context of autonomous vehicles and driver assistance technologies. Our approach leverages the automatic annotations available from virtually generated road-scene data. Using the automatically generated bounding boxes around the illuminated traffic lights, we trained an 8-layer deep neural network, without pre-training, to classify traffic light signals (green, amber, red). After training on virtual data, we tested the network on real-world data collected from a forward-facing camera on a vehicle. Our new region proposal technique uses color space conversion and contour extraction to identify candidate regions to feed to the deep neural network classifier. Depending on the time of day, we convert the RGB images so as to more accurately extract the appropriate regions of interest, and we filter the candidates by color, shape, and size before passing them to the classifier.
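
A minimal sketch of the kind of color-based region proposal the abstract describes, assuming OpenCV 4 and Python; the HSV thresholds, area bounds, and aspect-ratio filter are illustrative placeholders rather than the paper's actual values or its time-of-day dependent conversion:

```python
import cv2
import numpy as np

# Illustrative HSV ranges for lit traffic-light colors (not the paper's values).
HSV_RANGES = {
    "red":   [(np.array([0, 120, 120]),   np.array([10, 255, 255])),
              (np.array([170, 120, 120]), np.array([180, 255, 255]))],
    "amber": [(np.array([15, 120, 120]),  np.array([35, 255, 255]))],
    "green": [(np.array([45, 120, 120]),  np.array([90, 255, 255]))],
}

def propose_regions(bgr_image, min_area=20, max_area=2000):
    """Return candidate (x, y, w, h, color) boxes for lit traffic lights."""
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    candidates = []
    for color, ranges in HSV_RANGES.items():
        mask = np.zeros(hsv.shape[:2], dtype=np.uint8)
        for lo, hi in ranges:
            mask |= cv2.inRange(hsv, lo, hi)        # color filter
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        for contour in contours:
            area = cv2.contourArea(contour)
            if not (min_area <= area <= max_area):
                continue                            # size filter
            x, y, w, h = cv2.boundingRect(contour)
            if not (0.5 <= w / float(h) <= 2.0):
                continue                            # rough shape filter
            candidates.append((x, y, w, h, color))
    return candidates
```

The returned boxes would then be cropped from the frame and passed to the trained classifier for the final green/amber/red decision.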
Technical Paper

Creating 3D Virtual Driving Environments for Simulation-Aided Development of Autonomous Driving and Active Safety

2017-03-28
2017-01-0107
Recreating traffic scenarios for testing autonomous driving in the real world requires significant time, resources, and expense, and can present a safety risk if hazardous scenarios are tested. Using a 3D virtual environment to test many of these traffic scenarios on a desktop or cluster significantly reduces the number of required road tests. To facilitate the development of perception and control algorithms for Level 4 autonomy, a shared memory interface between MATLAB, Simulink, and Unreal Engine 4 can send information (such as vehicle control signals) back to the virtual environment. The interface conveys arbitrary numerical data, RGB image data, and point cloud data for the simulation of LiDAR sensors.
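
The paper's interface links Simulink and Unreal Engine 4; the Python sketch below only illustrates the general shared-memory pattern of exchanging an image buffer one way and control signals the other. The block names, frame resolution, and control layout are assumptions for the example:

```python
import numpy as np
from multiprocessing import shared_memory

FRAME_SHAPE = (720, 1280, 3)   # illustrative camera resolution
CTRL_SHAPE = (3,)              # steering, throttle, brake

def create_blocks():
    # Simulator side: allocate named blocks that both processes can open.
    frame_blk = shared_memory.SharedMemory(create=True, name="sim_frame",
                                           size=int(np.prod(FRAME_SHAPE)))
    ctrl_blk = shared_memory.SharedMemory(create=True, name="sim_ctrl",
                                          size=int(np.prod(CTRL_SHAPE)) * 8)
    return frame_blk, ctrl_blk

def write_frame(frame_blk, rgb_frame):
    # Simulator side: publish the latest rendered RGB frame.
    buf = np.ndarray(FRAME_SHAPE, dtype=np.uint8, buffer=frame_blk.buf)
    buf[:] = rgb_frame

def read_frame_and_send_control(frame_blk, ctrl_blk, steering, throttle, brake):
    # Algorithm side: grab the current frame and write control signals back.
    frame = np.ndarray(FRAME_SHAPE, dtype=np.uint8, buffer=frame_blk.buf).copy()
    ctrl = np.ndarray(CTRL_SHAPE, dtype=np.float64, buffer=ctrl_blk.buf)
    ctrl[:] = (steering, throttle, brake)
    return frame
```

In the same spirit as the paper's setup, the rendering engine and the perception/control algorithm run as separate processes and exchange data through named memory blocks rather than over a network socket, keeping per-frame latency low.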
Technical Paper

Generation and Usage of Virtual Data for the Development of Perception Algorithms Using Vision

2016-04-05
2016-01-0170
Camera data generated in a 3D virtual environment has been used to train object detection and identification algorithms. Forty common US road traffic signs were used as the objects of interest during the investigation of these methods. The camera was placed randomly along the road in the virtual driving environment, at a height appropriate for a camera mounted on a vehicle's rear-view mirror, and traffic signs were placed randomly alongside the road in front of it. To better represent the real world, effects such as shadows, occlusions, washout/fade, skew, rotations, reflections, fog, rain, snow, and varied illumination were randomly included in the generated data. Images were generated at a rate of approximately one thousand per minute, and the image data was automatically annotated with the true location of each sign within each image, to facilitate supervised learning as well as testing of the trained algorithms.
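
A hypothetical sketch of the kind of randomized scene specification and automatic annotation the abstract describes; the class names, parameter ranges, and the render_scene() stand-in are assumptions, since the actual rendering and ground-truth projection are done by the 3D engine:

```python
import json
import random

SIGN_CLASSES = ["stop", "yield", "speed_limit_35"]   # 3 of the 40 sign classes
EFFECTS = ["shadow", "occlusion", "washout", "skew", "rotation",
           "reflection", "fog", "rain", "snow"]

def random_scene_spec():
    """Randomize sign placement, camera pose, and rendering effects."""
    return {
        "sign_class": random.choice(SIGN_CLASSES),
        "sign_distance_m": random.uniform(5.0, 80.0),   # ahead of the camera
        "lateral_offset_m": random.uniform(2.0, 6.0),   # roadside placement
        "camera_height_m": 1.3,                         # rear-view-mirror height
        "illumination": random.uniform(0.2, 1.0),
        "effects": random.sample(EFFECTS, k=random.randint(0, 3)),
    }

def make_annotation(image_id, spec, bbox):
    # bbox (x, y, w, h) would come from the engine's ground-truth projection,
    # so no manual labeling is needed.
    return {"image": f"{image_id:06d}.png",
            "class": spec["sign_class"],
            "bbox": bbox,
            "effects": spec["effects"]}

if __name__ == "__main__":
    spec = random_scene_spec()
    # render_scene(spec) would produce the image and true bounding box here.
    print(json.dumps(make_annotation(0, spec, [412, 188, 36, 36]), indent=2))
```

Because the annotation is emitted alongside each rendered image, the dataset is labeled as fast as it is generated, which is what makes the reported rate of roughly one thousand annotated images per minute practical.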