Search Results
Technical Paper

Video and Object Tracking for Speed Determination Using Aerial LiDAR

2024-04-09
2024-01-2483
Video of an event recorded from a moving camera contains information useful not only for reconstructing the locations and timing of the event, but also for determining the velocity of the vehicle or object to which the camera is attached. Determining the velocity of a video camera recording from a moving vehicle therefore yields the vehicle's velocity, which can be compared with speeds calculated through other reconstruction methods or with data from vehicle speed monitoring devices. After tracking the video, the positions and speeds of other objects within the video can also be determined. Video tracking analysis has traditionally required a site inspection to map the three-dimensional environment. In instances where there have been significant site changes, where site access is limited or unavailable, or where budget and timing constraints exist, a three-dimensional environment can instead be created using publicly available aerial imagery and aerial LiDAR.
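As a minimal sketch of the speed calculation this method enables, the snippet below assumes the camera's 3D position has already been solved for several consecutive frames against the LiDAR environment; the positions and frame rate are hypothetical placeholders, not values from the paper.

```python
import numpy as np

# Hypothetical camera positions (meters) solved for four consecutive
# video frames via camera-match photogrammetry in the LiDAR scene.
positions = np.array([
    [0.00, 0.00, 0.5],
    [0.83, 0.02, 0.5],
    [1.67, 0.05, 0.5],
    [2.50, 0.07, 0.5],
])
fps = 30.0  # frame rate of the source video

# Distance traveled between consecutive frames.
steps = np.linalg.norm(np.diff(positions, axis=0), axis=1)

# Each step spans 1/fps seconds, so speed is distance times fps.
speeds_ms = steps * fps
print(speeds_ms)            # roughly 25 m/s per interval
print(speeds_ms * 2.23694)  # same speeds in mph
```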
Technical Paper

Accuracy of Rectifying Oblique Images to Planar and Non-Planar Surfaces

2024-04-09
2024-01-2481
Emergency personnel and first responders have the opportunity to document crash scenes while evidence is still fresh. The growth of the drone market and the efficiency of documentation with drones have led to an increasing prevalence of aerial photography at incident sites. These photographs are generally of high resolution and contain valuable information, including roadway evidence such as tire marks, gouge marks, debris fields, and vehicle rest positions. Accurately mapping the evidence visible in the photographs is a key step in creating a scaled crash-scene diagram. Image rectification serves as a quick and straightforward method for producing a scaled diagram. This study evaluates the accuracy of the photo rectification process under diverse roadway geometry conditions and varying camera incidence angles.
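One way the rectification step could look in code, sketched with OpenCV under the assumption of a planar roadway; the pixel coordinates, ground coordinates, and file names are hypothetical.

```python
import cv2
import numpy as np

# Pixel locations of four known ground points in the oblique photo,
# and their real-world roadway coordinates in meters (hypothetical).
img_pts = np.array([[412, 880], [1630, 905], [1410, 440], [590, 430]], dtype=np.float32)
world_m = np.array([[0, 0], [12, 0], [12, 30], [0, 30]], dtype=np.float32)

scale = 20.0  # output pixels per meter for the scaled diagram
dst_pts = world_m * scale

# The homography maps the oblique view onto the road plane; this is
# only valid where the surface is actually planar.
H, _ = cv2.findHomography(img_pts, dst_pts)

photo = cv2.imread("oblique_scene.jpg")  # hypothetical input image
out_size = (int(12 * scale), int(30 * scale))  # (width, height)
rectified = cv2.warpPerspective(photo, H, out_size)
cv2.imwrite("rectified_plan_view.png", rectified)
```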
Journal Article

Aerial Photoscanning with Ground Control Points from USGS LiDAR

2022-03-29
2022-01-0833
Aerial photoscanning is a software-based photogrammetry method for obtaining three-dimensional site data. Ground Control Points (GCPs) are commonly used as part of this process. These control points are traditionally placed within the site and then captured in aerial photographs from a drone. They are used to establish scale and orientation throughout the resulting point cloud. There are different types of GCPs, and their positions are established or documented using different technologies. Some systems include satellite-based Global Positioning System (GPS) sensors that record the position of the control points at the scene. Other methods map in the control point locations using LiDAR-based technology such as a total station or a laser scanner.
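A sketch of how GCP coordinates could be used to scale and orient a point cloud: a least-squares similarity transform (Umeyama's method) is fit from the reconstruction frame to the surveyed frame. The coordinates below are hypothetical, and this illustrates the general idea rather than any specific software's implementation.

```python
import numpy as np

def similarity_transform(src, dst):
    """Least-squares scale, rotation, and translation mapping src
    points onto dst (both Nx3), following Umeyama's method."""
    mu_s, mu_d = src.mean(0), dst.mean(0)
    A, B = src - mu_s, dst - mu_d
    U, S, Vt = np.linalg.svd(B.T @ A / len(src))
    D = np.eye(3)
    if np.linalg.det(U @ Vt) < 0:
        D[2, 2] = -1.0  # guard against a reflection
    R = U @ D @ Vt
    s = np.trace(np.diag(S) @ D) / A.var(0).sum()
    t = mu_d - s * R @ mu_s
    return s, R, t

# Hypothetical GCPs: coordinates in the arbitrary photoscanning frame
# and the corresponding positions mapped from USGS LiDAR (meters).
gcp_model = np.array([[1.2, 0.4, 0.1], [5.1, 0.6, 0.2],
                      [5.0, 3.9, 0.1], [1.1, 4.2, 0.3]])
gcp_lidar = np.array([[100.0, 200.0, 50.0], [103.8, 200.3, 50.1],
                      [103.7, 203.5, 50.0], [99.9, 203.8, 50.2]])

s, R, t = similarity_transform(gcp_model, gcp_lidar)
# Any point p in the model frame then maps to s * R @ p + t.
```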
Journal Article

Accuracy of Aerial Photoscanning with Real-Time Kinematic Technology

2022-03-29
2022-01-0830
Photoscanning photogrammetry is a method for obtaining and preserving three-dimensional site data from photographs. This photogrammetric method is commonly associated with small Unmanned Aircraft Systems (sUAS) and is particularly beneficial for documenting large sites. The resulting data comprises millions of three-dimensional data points commonly referred to as a point cloud. The accuracy and reliability of these point clouds are dependent on hardware, hardware settings, field documentation methods, software, software settings, and processing methods. Ground control points (GCPs) are commonly used in aerial photoscanning to achieve reliable results. This research examines multiple GCP types, flight patterns, software packages, hardware setups, and a ground-based real-time kinematic (RTK) system. Multiple documentation and processing methods are examined, and the accuracy of each is compared to understand how capture methods can optimize site documentation.
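Accuracy comparisons of this kind typically reduce to residuals at check points. A minimal sketch, with hypothetical surveyed and measured coordinates standing in for real data:

```python
import numpy as np

# Hypothetical check points: coordinates surveyed with the RTK system
# versus the same points measured in the photoscanning point cloud (m).
surveyed = np.array([[10.00, 20.00, 1.50], [45.02, 19.98, 1.62], [44.95, 60.01, 1.41]])
measured = np.array([[10.02, 20.01, 1.47], [45.00, 20.03, 1.66], [44.99, 59.97, 1.44]])

err = measured - surveyed
horiz_rmse = np.sqrt((err[:, :2] ** 2).sum(axis=1).mean())  # XY error
vert_rmse = np.sqrt((err[:, 2] ** 2).mean())                # Z error
print(f"horizontal RMSE: {horiz_rmse:.3f} m, vertical RMSE: {vert_rmse:.3f} m")
```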
Technical Paper

Accuracies in Single Image Camera Matching Photogrammetry

2021-04-06
2021-01-0888
Forensic disciplines are often called upon to locate evidence visible in images from a single still camera or static video camera, and both the angle of incidence and the resolution can limit the accuracy of single-image photogrammetry. This research compares a baseline of known 3D data points representing evidence locations to evidence locations determined through single-image photogrammetry, and evaluates the effect that object resolution (measured in pixels) and angle of incidence have on accuracy. Solutions achieved using an automated process, in which a camera-match alignment is calculated from common points in the 2D imagery and the 3D environment, were compared to solutions achieved by a more manual method of iteratively adjusting the camera's position, orientation, and field of view until an alignment is achieved. This research independently utilizes both methods to achieve photogrammetry solutions and to locate objects within a 3D environment.
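The automated camera-match alignment described above is, in essence, a perspective-n-point solve. A minimal sketch using OpenCV's solvePnP; the 3D points, pixel coordinates, and intrinsics are hypothetical, and the paper's own workflow is not necessarily OpenCV-based.

```python
import cv2
import numpy as np

# Hypothetical correspondences: 3D scene points (meters, e.g. from a
# site model) and their observed pixel locations in the single image.
obj_pts = np.array([[0, 0, 0], [10, 0, 0], [10, 6, 0], [0, 6, 0],
                    [5, 3, 2], [2, 1, 1.5]], dtype=np.float64)
img_pts = np.array([[310, 720], [1520, 705], [1380, 380], [420, 390],
                    [905, 430], [510, 505]], dtype=np.float64)

# Assumed intrinsics: focal length and principal point in pixels.
K = np.array([[1400.0, 0, 960], [0, 1400.0, 540], [0, 0, 1]])

ok, rvec, tvec = cv2.solvePnP(obj_pts, img_pts, K, None)
R, _ = cv2.Rodrigues(rvec)
camera_position = -R.T @ tvec  # camera location in scene coordinates
print(camera_position.ravel())
```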
Technical Paper

Visualization of Driver and Pedestrian Visibility in Virtual Reality Environments

2021-04-06
2021-01-0856
In 2016, Virtual Reality (VR) equipment entered the mainstream scientific, medical, and entertainment industries. It became both affordable and available to the public market in the form of the technology's earliest successful headsets: the Oculus Rift™ and the HTC Vive™. While new equipment continues to emerge, at the time these headsets came equipped with a 100° field-of-view screen that gives the viewer a seamless 360° environment to experience, non-linear in the sense that the viewer can choose where to look and for how long. The fundamental differences between conventional visualizations, such as computer animations and graphics, and VR are subtle. A VR environment can be understood as a series of two-dimensional images stitched together into a seamless single 360° image. In this respect, it is only the number of images the viewer sees at one time that separates a conventional visualization from a VR experience.
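To make the "single seamless 360° image" idea concrete, the sketch below maps a 3D viewing direction to pixel coordinates in an equirectangular panorama, one common storage format for such imagery; the projection choice and image size are assumptions, not details from the paper.

```python
import numpy as np

def direction_to_equirect(d, width, height):
    """Map a viewing direction to pixel coordinates in a 360-degree
    equirectangular panorama (x encodes longitude, y latitude)."""
    x, y, z = d / np.linalg.norm(d)
    lon = np.arctan2(x, z)   # -pi..pi around the viewer
    lat = np.arcsin(y)       # -pi/2..pi/2 above/below the horizon
    u = (lon / (2 * np.pi) + 0.5) * width
    v = (0.5 - lat / np.pi) * height
    return u, v

# Looking slightly up and to the right in a hypothetical 8192x4096 panorama.
print(direction_to_equirect(np.array([0.3, 0.2, 1.0]), 8192, 4096))
```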
Technical Paper

Reconstruction of 3D Accident Sites Using USGS LiDAR, Aerial Images, and Photogrammetry

2019-04-02
2019-01-0423
The accident reconstruction community has previously relied upon photographs and site visits to recreate a scene. This method is difficult in instances where the site has changed or is not accessible. In 2017, the United States Geological Survey (USGS) released historical 3D point clouds (LiDAR), allowing access to digital 3D data without visiting the site. This offers many unique benefits to the reconstruction community, including safety, budget, time, and historical preservation. This paper presents a methodology for collecting this data and using it in conjunction with aerial imagery and camera-matching photogrammetry to create 3D computer models of a scene without a site visit.
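A sketch of the data-collection step, assuming the laspy library and a hypothetical USGS 3DEP tile; reading compressed .laz files additionally requires a LAZ backend such as lazrs.

```python
import laspy  # pip install "laspy[lazrs]" for .laz support
import numpy as np

# Hypothetical USGS LiDAR tile downloaded for the accident site.
las = laspy.read("USGS_LPC_site_tile.laz")

# Scaled real-world coordinates of every LiDAR return.
pts = np.column_stack([las.x, las.y, las.z])

# Keep only ground-classified returns (ASPRS class 2) for a
# bare-earth surface of the roadway.
ground = pts[las.classification == 2]
print(ground.shape)
```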
Technical Paper

The Application of Augmented Reality to Reverse Camera Projection

2019-04-02
2019-01-0424
In 1980, research by Thebert introduced the use of photography equipment and transparencies for onsite reverse camera projection photogrammetry [1]. This method involved taking a film photograph through the development process and creating a reduced-size transparency to insert into the camera's viewfinder. The photographer was then able to see both the image contained on the transparency and the actual scene directly through the camera's viewfinder. By properly matching the physical orientation and position of the camera, it was possible to visually align the image on the transparency to the physical world as viewed through the camera. The result was a solution for where the original camera would have been located when the photograph was taken. With the original camera reverse-located, any evidence in the transparency that is no longer present at the site could then be placed to match the evidence's location in the transparency.
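The digital analog of viewing the transparency and the scene together is a semi-transparent overlay of the historical photograph on the current camera view. A minimal sketch with OpenCV and hypothetical file names; both images are assumed to share the same resolution.

```python
import cv2

# Hypothetical inputs: the historical photograph and the current
# camera view, rendered at the same resolution.
historical = cv2.imread("original_photo.jpg")
live_view = cv2.imread("current_camera_frame.jpg")

# 50/50 blend: when the camera's position, orientation, and field of
# view are matched, features in the two images line up.
overlay = cv2.addWeighted(historical, 0.5, live_view, 0.5, 0)
cv2.imwrite("alignment_check.png", overlay)
```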
Journal Article

Using Multiple Photographs and USGS LiDAR to Improve Photogrammetric Accuracy

2018-04-03
2018-01-0516
The accident reconstruction community relies on photogrammetry for taking measurements from photographs. Camera matching, a close-range photogrammetry method, is a particularly useful tool for locating accident scene evidence after time has passed and the evidence is no longer physically visible. In this method, objects within the accident scene that have remained unchanged are used as a reference for locating evidence that is no longer physically available at the scene, such as tire marks, gouge marks, and vehicle points of rest. Roadway lines, edges of pavement, sidewalks, signs, posts, buildings, and other structures are recognizable scene features that, if unchanged between the time of the accident and the time of analysis, are beneficial to the photogrammetric process. In instances where these scene features are limited or do not exist, achieving accurate photogrammetric solutions can be challenging.
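Once a camera match has been solved, its quality can be checked by reprojecting surveyed reference features back into the photograph and measuring the pixel residuals. A minimal sketch with OpenCV; the solved pose, intrinsics, and coordinates are hypothetical placeholders.

```python
import cv2
import numpy as np

# Assumed camera-match solution recovered from unchanged scene
# features; all values here are placeholders.
rvec = np.array([0.10, -0.20, 0.05])   # rotation (Rodrigues vector)
tvec = np.array([-4.0, 1.5, 12.0])     # translation (meters)
K = np.array([[1400.0, 0, 960], [0, 1400.0, 540], [0, 0, 1]])

# Surveyed 3D reference features and their observed pixel locations.
ref_3d = np.array([[0.0, 0.0, 0.0], [10.0, 0.0, 0.0], [10.0, 6.0, 0.0]])
observed = np.array([[312.0, 718.0], [1518.0, 702.0], [1384.0, 384.0]])

projected, _ = cv2.projectPoints(ref_3d, rvec, tvec, K, None)
residuals = np.linalg.norm(projected.reshape(-1, 2) - observed, axis=1)
print(residuals)  # per-point reprojection error in pixels
```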
Technical Paper

An Evaluation of Two Methodologies for Lens Distortion Removal when EXIF Data is Unavailable

2017-03-28
2017-01-1422
Photogrammetry, and the accuracy of a photogrammetric solution, relies on the quality of photographs and the accuracy of pixel locations within the photographs. A photograph with lens distortion can introduce inaccuracies into a photogrammetric solution. Due to the curved nature of a camera's lens(es), the light passing through the lens and onto the image sensor can have varying degrees of distortion. There are commercially available software titles that rely on a library of known cameras, lenses, and configurations for removing lens distortion. However, to use these software titles, the camera manufacturer, model, lens, and focal length must be known. This paper presents two methodologies for removing lens distortion when camera- and lens-specific information is not available. The first methodology uses linear objects within the photograph to determine the amount of lens distortion present. This method will be referred to as the straight-line method.
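A sketch of the straight-line idea: choose the radial distortion coefficient that makes sampled points along a physically straight edge most collinear. The one-coefficient model, the normalization, and the sample points are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Hypothetical pixel samples along one physically straight edge,
# bowed by radial distortion; (cx, cy) is the assumed image center.
line_pts = np.array([[100, 540], [500, 520], [960, 512],
                     [1420, 520], [1820, 540]], dtype=float)
cx, cy = 960.0, 540.0

def undistort(pts, k1):
    """One-coefficient radial model: p_u = c + (p_d - c)(1 + k1 r^2),
    with r normalized by the image half-diagonal."""
    d = pts - (cx, cy)
    r2 = (d ** 2).sum(axis=1) / (cx ** 2 + cy ** 2)
    return (cx, cy) + d * (1 + k1 * r2)[:, None]

def straightness(k1):
    """Sum of squared perpendicular residuals to the best-fit line;
    the smallest singular value of the centered points measures it."""
    p = undistort(line_pts, k1)
    p = p - p.mean(axis=0)
    return np.linalg.svd(p, full_matrices=False)[1][-1] ** 2

k1 = minimize_scalar(straightness, bounds=(-0.5, 0.5), method="bounded").x
print(f"estimated k1: {k1:.4f}")
```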
Technical Paper

Comparing A Timed Exposure Methodology to the Nighttime Recognition Responses from SHRP-2 Naturalistic Drivers

2017-03-28
2017-01-1366
Collision statistics show that more than half of all pedestrian fatalities caused by vehicles occur at night. The recognition of objects at night is a crucial component of driver response and of preventing nighttime pedestrian accidents. Investigating this issue, Richard Blackwell conducted a series of experiments from the 1950s through the 1970s to evaluate whether restricted viewing time can be used as a surrogate for the imperfect information available to drivers at night. The authors build on these findings and incorporate the responses of drivers to objects in the road at night found in the SHRP-2 naturalistic database. A closed-road outdoor study and an indoor study were conducted using an automatic shutter system to limit observation time to approximately ¼ of a second. Results from these limited-exposure-time studies showed a positive correlation with the naturalistic responses, validating the time-limited exposure technique.
Technical Paper

A Survey of Multi-View Photogrammetry Software for Documenting Vehicle Crush

2016-04-05
2016-01-1475
Video- and photo-based photogrammetry software has many applications in the accident reconstruction community, including documentation of vehicles and scene evidence. Photogrammetry software has improved in ease of use, cost, and effectiveness in determining three-dimensional data points from two-dimensional photographs. Contemporary photogrammetry software packages offer an automated solution capable of generating dense point clouds with millions of 3D data points from multiple images. While alternative modern documentation methods exist, including LiDAR technologies such as 3D scanning, which can collect millions of highly accurate points in just a few minutes, the appeal of automated photogrammetry software as a tool for collecting dimensional data lies in its minimal equipment requirements, low cost, and ease of use.
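At the core of any multi-view package is triangulation of a feature seen in two or more photographs. A minimal two-view sketch with OpenCV; the projection matrices and pixel observations are hypothetical, and real packages solve for the cameras automatically.

```python
import cv2
import numpy as np

# Assumed intrinsics and poses for two calibrated views: P = K [R|t].
K = np.array([[1400.0, 0, 960], [0, 1400.0, 540], [0, 0, 1]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
R2 = cv2.Rodrigues(np.array([0.0, -0.15, 0.0]))[0]
t2 = np.array([[-1.0], [0.0], [0.0]])
P2 = K @ np.hstack([R2, t2])

# Matching pixel observations of the same crush points in both photos.
pts1 = np.array([[905.0, 430.0], [1010.0, 455.0]]).T  # 2xN
pts2 = np.array([[1050.0, 428.0], [1160.0, 452.0]]).T

X_h = cv2.triangulatePoints(P1, P2, pts1, pts2)  # 4xN homogeneous
X = (X_h[:3] / X_h[3]).T                         # Nx3 points in meters
print(X)
```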
Technical Paper

Photogrammetric Measurement Error Associated with Lens Distortion

2011-04-12
2011-01-0286
All camera lenses contain optical aberrations as a result of the design and manufacturing processes. Lens aberrations cause distortion of the resulting image captured on film or a sensor. This distortion is inherent in all lenses because of the shape required to project the image onto film or a sensor, the materials that make up the lens, and the configuration of lens elements used to achieve varying focal lengths and other photographic effects. The distortion associated with lenses can introduce errors when photogrammetric techniques are used to analyze photographs of accident scenes to determine the position, scale, length, and other characteristics of evidence in a photograph. This paper evaluates how lens distortion can affect images, and how photogrammetrically measuring a distorted image can result in measurement errors.
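To show how such errors arise, the sketch below applies the radial terms of a Brown-Conrady model to individual pixels; the coefficients are hypothetical, and the displacement it prints is exactly the measurement error a photogrammetric solve would inherit at that pixel.

```python
import numpy as np

def radial_displacement(pt, center, k1, k2):
    """Pixel shift introduced by the radial terms of a Brown-Conrady
    model: p_d = c + (p_u - c)(1 + k1 r^2 + k2 r^4)."""
    p = np.asarray(pt, float)
    d = p - center
    r2 = (d ** 2).sum() / (center ** 2).sum()  # normalized radius^2
    distorted = center + d * (1 + k1 * r2 + k2 * r2 ** 2)
    return np.linalg.norm(distorted - p)

center = np.array([960.0, 540.0])
# A point near the image corner shifts far more than one near the
# center, which is why evidence at the frame edges suffers most.
print(radial_displacement([1800, 1000], center, -0.08, 0.01))
print(radial_displacement([1000, 560], center, -0.08, 0.01))
```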