Search Results

Technical Paper

Mobile Robot Localization Evaluations with Visual Odometry in Varying Environments Using Festo-Robotino

Published: 2020-04-14 · Paper number: 2020-01-1022
Autonomous ground vehicles can use a variety of techniques to navigate the environment and deduce their motion and location from sensory inputs. Visual odometry can provide a means for an autonomous vehicle to gain orientation and position information from camera images recorded as the vehicle moves. This is especially useful when global positioning system (GPS) information is unavailable or wheel encoder measurements are unreliable. Feature-based visual odometry algorithms extract corner points from image frames and detect patterns of feature-point movement over time. From this information, it is possible to estimate the motion of the camera and, hence, of the vehicle. Visual odometry has its own set of challenges, such as detecting an insufficient number of points, poor camera setup, and fast-passing objects interrupting the scene. This paper investigates the effects of various disturbances on visual odometry.
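The core step this abstract describes — estimating camera motion from the movement of tracked feature points — can be sketched as a least-squares rigid-body fit (the Kabsch algorithm). This is a hypothetical 2D NumPy illustration, not the paper's implementation; the function name and the synthetic point sets are assumptions for the sake of the example.

```python
import numpy as np

def estimate_rigid_motion(prev_pts, curr_pts):
    """Estimate the 2D rotation R and translation t that best map
    prev_pts onto curr_pts (least-squares / Kabsch fit over tracked
    feature points)."""
    prev_c = prev_pts.mean(axis=0)
    curr_c = curr_pts.mean(axis=0)
    # Cross-covariance of the centered point sets
    H = (prev_pts - prev_c).T @ (curr_pts - curr_c)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:      # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = curr_c - R @ prev_c
    return R, t

# Synthetic features tracked across two frames: the camera rotated
# 10 degrees and translated by (1.0, 0.5)
theta = np.deg2rad(10.0)
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
t_true = np.array([1.0, 0.5])
prev = np.array([[0.0, 0.0], [2.0, 0.0], [2.0, 1.0], [0.0, 1.0]])
curr = prev @ R_true.T + t_true

R, t = estimate_rigid_motion(prev, curr)
```

In a real pipeline the correspondences come from a feature tracker and contain outliers, so the fit is typically wrapped in a robust scheme such as RANSAC rather than applied to all points at once.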
Journal Article

Scene Structure Classification as Preprocessing for Feature-Based Visual Odometry

Published: 2018-04-03 · Paper number: 2018-01-0610
Cameras and image processing hardware are rapidly evolving technologies that enable real-time applications for passenger cars, ground robots, and aerial vehicles. Visual odometry (VO) algorithms estimate vehicle position and orientation changes from moving camera images. For ground vehicles such as cars, indoor robots, and planetary rovers, VO can augment movement estimation from rotary wheel encoders. Feature-based VO relies on detecting feature points, such as corners or edges, in image frames as the vehicle moves. These points are tracked across frames and used, as a group, to estimate motion. Not all detected points are tracked, since not all are found in the next frame. Even tracked features may be incorrect, since a feature point may map to a wrong nearby feature point. Tracking performance depends on the driving scenario, such as driving at high speed or in rain or snow.
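The failure mode this abstract mentions — a feature mapping to an incorrect nearby feature — is commonly mitigated with Lowe's ratio test, which keeps a match only when its nearest descriptor is clearly closer than the second-nearest. The sketch below is a hypothetical NumPy illustration of that test, not the paper's preprocessing method; the descriptor values and function name are assumptions.

```python
import numpy as np

def match_with_ratio_test(desc_a, desc_b, ratio=0.8):
    """Match each descriptor in desc_a to its nearest neighbour in
    desc_b, keeping only matches whose best distance is clearly smaller
    than the second-best (Lowe's ratio test). Ambiguous features that
    could map to an incorrect nearby point are discarded."""
    matches = []
    for i, d in enumerate(desc_a):
        dists = np.linalg.norm(desc_b - d, axis=1)
        order = np.argsort(dists)
        best, second = order[0], order[1]
        if dists[best] < ratio * dists[second]:
            matches.append((i, int(best)))
    return matches

# Two query descriptors; the second is ambiguous because desc_b holds
# two almost equally close candidates for it
desc_a = np.array([[0.0, 0.0], [5.0, 5.0]])
desc_b = np.array([[0.1, 0.0], [5.1, 5.0], [5.0, 5.1]])
matches = match_with_ratio_test(desc_a, desc_b)
```

Here only the unambiguous first feature survives; the second is dropped because its two candidate matches are nearly tied, which is exactly the situation that produces wrong correspondences.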
Technical Paper

Evaluation of a Stereo Visual Odometry Algorithm for Passenger Vehicle Navigation

Published: 2017-03-28 · Paper number: 2017-01-0046
To reliably implement driver-assist features and, ultimately, self-driving cars, autonomous driving systems will likely rely on a variety of sensor types, including GPS, radar, laser range finders, and cameras. Cameras are an essential sensory component because they lend themselves to identifying the object types a self-driving vehicle is likely to encounter, such as pedestrians, cyclists, animals, other cars, or objects on the road. In this paper, we present a feature-based visual odometry algorithm that uses a stereo camera to perform localization relative to the surrounding environment for navigation and hazard avoidance. A stereo camera improves accuracy compared with monocular visual odometry. The algorithm relies on tracking a local map consisting of sparse 3D map points. By tracking this map across frames, the algorithm makes use of the full history of detected features, which reduces drift in the estimated motion trajectory.
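The sparse 3D map points this abstract mentions are typically obtained by triangulating matched features between the left and right images of a rectified stereo pair, where depth follows from horizontal disparity as Z = f·B/d. A minimal sketch of that relation, with assumed, hypothetical calibration values (not from the paper):

```python
import numpy as np

def triangulate(xl, xr, y, f, baseline, cx, cy):
    """Recover a 3D point from a rectified stereo pair.

    xl, xr   -- horizontal pixel coordinate in the left/right image
    y        -- vertical pixel coordinate (same row in both images)
    f        -- focal length in pixels; baseline -- camera separation in m
    cx, cy   -- principal point in pixels
    """
    disparity = xl - xr            # pixels; larger disparity = closer point
    Z = f * baseline / disparity   # depth from the stereo geometry
    X = (xl - cx) * Z / f          # back-project through the pinhole model
    Y = (y - cy) * Z / f
    return np.array([X, Y, Z])

# Hypothetical calibration: 700 px focal length, 0.54 m baseline,
# principal point at (640, 360); a feature seen at the image-center row
p = triangulate(xl=700.0, xr=662.2, y=360.0,
                f=700.0, baseline=0.54, cx=640.0, cy=360.0)
```

With these numbers the disparity is 37.8 px, which places the point about 10 m ahead of the camera; as disparity shrinks toward zero, depth (and its uncertainty) grows, which is why stereo VO is most accurate for nearby structure.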