Search Results

Viewing 1 to 4 of 4
Technical Paper

Understanding How Rain Affects Semantic Segmentation Algorithm Performance

2020-04-14
2020-01-0092
Research interest in autonomous driving has increased significantly in recent years, and several methods have been proposed to optimize the performance of autonomous vehicles. However, weather conditions such as rain, snow, and fog can hinder the performance of autonomy algorithms, so it is important to study how the performance and efficiency of the underlying scene-understanding algorithms vary under such adverse conditions. Semantic segmentation is one of the most widely used scene-understanding techniques in autonomous driving. In this work, we study the performance degradation that rain causes in several semantic segmentation algorithms for off-road driving scenes. Given the limited availability of real-world off-road driving datasets that include rain, we utilize two types of synthetic datasets.
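As a hedged illustration of the kind of comparison such a study involves (not the authors' code), the sketch below computes mean intersection over union (mIoU), the metric commonly used to quantify segmentation degradation, for a clear-weather and a rain-degraded prediction. The label maps, class count, and corruption rate are placeholders.

```python
import numpy as np

def mean_iou(pred, gt, num_classes):
    """Mean intersection-over-union between predicted and ground-truth label maps."""
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(pred == c, gt == c).sum()
        union = np.logical_or(pred == c, gt == c).sum()
        if union > 0:                      # skip classes absent from both maps
            ious.append(inter / union)
    return float(np.mean(ious))

# Placeholder label maps standing in for clear-weather and rain-degraded predictions.
num_classes = 5
gt = np.random.randint(0, num_classes, size=(256, 256))
pred_clear = gt.copy()                                   # idealized clear-weather prediction
noise = np.random.rand(256, 256) < 0.2                   # 20% of pixels corrupted by "rain"
pred_rain = np.where(noise, np.random.randint(0, num_classes, size=(256, 256)), gt)

print("mIoU (clear):", mean_iou(pred_clear, gt, num_classes))
print("mIoU (rain): ", mean_iou(pred_rain, gt, num_classes))
```
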
Journal Article

LiDAR Data Segmentation in Off-Road Environment Using Convolutional Neural Networks (CNN)

2020-04-14
2020-01-0696
Recent developments in autonomous vehicle navigation have emphasized algorithm development for the characterization of LiDAR 3D point-cloud data. LiDAR sensor data provides a detailed understanding of the environment surrounding the vehicle for safe navigation. However, LiDAR point-cloud datasets need point-level labels, which require a significant amount of annotation effort. We present a framework that generates simulated, labeled point-cloud data. The simulated LiDAR data were generated with a physics-based platform, the Mississippi State University Autonomous Vehicle Simulator (MAVS). In this work, we use the simulation framework and labeled LiDAR data to develop and test algorithms for off-road navigation of autonomous ground vehicles. The MAVS framework generates 3D point clouds for off-road environments that include trails and trees.
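To show how point-level labels are consumed during training, here is a minimal, hypothetical sketch of fitting a small per-point classifier on a simulated labeled cloud; it is not the MAVS pipeline or the paper's CNN, and the array shapes, feature layout, and class names are assumptions.

```python
import torch
import torch.nn as nn

# Hypothetical simulated data: N points with (x, y, z, intensity) features
# and a per-point class label (e.g., 0 = ground, 1 = vegetation, 2 = obstacle).
N, num_classes = 10_000, 3
points = torch.randn(N, 4)                      # stand-in for a simulated labeled cloud
labels = torch.randint(0, num_classes, (N,))

# A tiny per-point MLP classifier; real LiDAR segmentation CNNs operate on richer
# representations (voxels, range images), so this is only an illustration.
model = nn.Sequential(
    nn.Linear(4, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, num_classes),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(5):
    optimizer.zero_grad()
    loss = loss_fn(model(points), labels)       # point-level labels supervise every point
    loss.backward()
    optimizer.step()
    print(f"epoch {epoch}: loss {loss.item():.3f}")
```
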
Technical Paper

Training of Neural Networks with Automated Labeling of Simulated Sensor Data

2019-04-02
2019-01-0120
While convolutional neural networks (CNNs) have revolutionized ground-vehicle autonomy in the last decade, this class of algorithms requires large, truth-labeled datasets for training. The process of collecting and labeling training data is tedious, time-consuming, expensive, and error-prone. To automate this process, a method for training CNNs with simulated data was developed. This method uses physics-based simulation of sensors, along with automated truth labeling, to improve the speed and accuracy of training-data acquisition for both camera and LiDAR sensors. The framework is enabled by the MSU Autonomous Vehicle Simulator (MAVS), a physics-based sensor simulator for ground-vehicle robotics that includes high-fidelity simulations of LiDAR, cameras, and other sensors.
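The key idea is that a simulator already knows which object every pixel or LiDAR return belongs to, so ground-truth labels come essentially for free. The sketch below is a hedged, purely illustrative stand-in for that workflow; the function, shapes, and class ids are hypothetical and do not correspond to the MAVS API.

```python
import numpy as np

def render_frame(seed):
    """Hypothetical stand-in for one step of a physics-based sensor simulation.

    A real simulator renders the camera image and, because it already knows which
    object each ray hit, can emit the per-pixel semantic label at the same time --
    that is the automated truth labeling described above.
    """
    rng = np.random.default_rng(seed)
    image = rng.random((128, 128, 3), dtype=np.float32)    # simulated RGB frame
    labels = rng.integers(0, 4, size=(128, 128))            # per-pixel class ids
    return image, labels

# Build a small automatically labeled dataset with no manual annotation effort.
dataset = [render_frame(seed) for seed in range(100)]
print(len(dataset), "labeled frames generated")
```
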
Journal Article

Simulating the Mobility of Wheeled Ground Vehicles with Mercury

2017-03-28
2017-01-0273
Mercury is a high-fidelity, physics-based, object-oriented software package for simulating vehicle performance against requirements and engineering metrics. It integrates massively parallel modeling techniques for soft, cohesive, and dry granular soil with high-fidelity multi-body dynamics and powertrain modeling to provide a comprehensive mobility simulator for ground vehicles. Mercury uses the Chrono::Vehicle library, which provides multi-body dynamic simulation of wheeled and tracked vehicles. The powertrain is modeled with the Powertrain Analysis Computational Environment (PACE), a behavior-based powertrain analysis tool based on the U.S. Department of Energy's Autonomie software. Vehicle-terrain interaction (VTI) is simulated with the Ground Contact Element (GCE), which provides forces to the Chrono::Vehicle solver.
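To make the described architecture concrete, the following is a hedged, purely illustrative co-simulation loop showing how a powertrain model, a terrain-contact model, and a vehicle-dynamics solver might exchange torques and forces each timestep. The class and method names are hypothetical and do not correspond to the actual Mercury, Chrono::Vehicle, PACE, or GCE APIs.

```python
class Powertrain:            # stands in for PACE
    def drive_torque(self, throttle, wheel_speed):
        return 400.0 * throttle - 0.5 * wheel_speed        # placeholder torque model

class TerrainContact:        # stands in for the Ground Contact Element (GCE)
    def tire_forces(self, wheel_state):
        return {"fx": 1200.0, "fz": 4000.0}                 # placeholder soil reaction forces

class VehicleDynamics:       # stands in for the Chrono::Vehicle solver
    def __init__(self):
        self.speed = 0.0
    def step(self, torque, forces, dt):
        self.speed += dt * (torque / 300.0 + forces["fx"] / 1500.0)
        return {"wheel_speed": self.speed}

dt, state = 0.01, {"wheel_speed": 0.0}
powertrain, terrain, dynamics = Powertrain(), TerrainContact(), VehicleDynamics()
for _ in range(5):           # each step exchanges drive torque and terrain forces
    torque = powertrain.drive_torque(throttle=0.3, wheel_speed=state["wheel_speed"])
    forces = terrain.tire_forces(state)
    state = dynamics.step(torque, forces, dt)
print("final wheel speed:", round(state["wheel_speed"], 3))
```
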