Search Results

Technical Paper

Video Based Simulation of Daytime and Nighttime Rain Affecting Driver Visibility

2021-04-06
2021-01-0854
This paper presents a methodology for generating video-realistic, computer-simulated rain and the effect rain has on driver visibility. Rain was considered at three rain rates (light, moderate, and heavy), in both nighttime and daytime conditions. The techniques and methodologies presented in this publication rely on previously published methods of video tracking and projection mapping. Neale et al. [2004, 2016] showed how video tracking can convert two-dimensional image data from video into three-dimensional, scaled, computer-generated environments. Further, Neale et al. [2013, 2016] demonstrated that video projection mapping, when combined with video tracking, enables the production of video-realistic simulated environments, in which videographic and photographic baseline footage is combined with three-dimensional computer geometry.
Technical Paper

Calibrating Digital Imagery in Limited Time Conditions of Dawn, Dusk and Twilight

2021-04-06
2021-01-0855
This paper presents a methodology for accurately representing dawn and dusk (twilight) lighting conditions through photographs and video recordings. Generating calibrated photographs and video during twilight is difficult because the available light changes rapidly. In contrast, during nighttime conditions, when the sun no longer contributes light directly or indirectly through the sky dome, matching a specific time of night is less critical, since man-made lights are the dominant source of illumination. Thus, the initial setup, calibration, and collection of calibrated video at night is not under a time constraint, but during twilight the usable time frame may be narrow. This paper applies existing methods for capturing calibrated footage at night and develops a method for adjusting the footage in the event that matching an exact time during twilight is necessary.
Technical Paper

Accuracies in Single Image Camera Matching Photogrammetry

2021-04-06
2021-01-0888
Forensic disciplines are called upon to locate evidence from a single camera or static video camera, and both the angle of incidence and the resolution can limit the accuracy of single-image photogrammetry. This research compares a baseline of known 3D data points representing evidence locations to evidence locations determined through single-image photogrammetry, and evaluates the effect that object resolution (measured in pixels) and angle of incidence have on accuracy. Solutions achieved using an automated process, in which a camera-match alignment is calculated from common points in the 2D imagery and the 3D environment, were compared to solutions achieved by a more manual method of iteratively adjusting the camera's position, orientation, and field of view until an alignment is achieved. This research uses both methods independently to achieve photogrammetry solutions and to locate objects within a 3D environment.
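The manual camera-matching loop described above, adjusting position, orientation, and field of view until the 2D imagery and the 3D environment align, is in effect a search that minimizes reprojection error. A minimal sketch of that error metric, assuming for illustration a pinhole camera with no rotation (the function names are hypothetical, not from the paper):

```python
import math

def project(point3d, cam_pos, focal_px, img_center):
    """Pinhole projection of a world point into pixel coordinates.

    Simplifying assumption: the camera looks straight down +Z with no
    rotation, so only position, focal length, and principal point matter.
    """
    x, y, z = (p - c for p, c in zip(point3d, cam_pos))
    return (img_center[0] + focal_px * x / z,
            img_center[1] + focal_px * y / z)

def rms_reprojection_error(points3d, pixels, cam_pos, focal_px, img_center):
    """RMS distance (pixels) between observed and projected point locations."""
    sq = 0.0
    for p3, (u, v) in zip(points3d, pixels):
        pu, pv = project(p3, cam_pos, focal_px, img_center)
        sq += (pu - u) ** 2 + (pv - v) ** 2
    return math.sqrt(sq / len(points3d))
```

A solver (manual or automated) would iterate candidate camera parameters and keep the set with the lowest error; at the true camera position the error drops to zero for noise-free points.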
Technical Paper

Visualization of Driver and Pedestrian Visibility in Virtual Reality Environments

2021-04-06
2021-01-0856
In 2016, Virtual Reality (VR) equipment entered the mainstream scientific, medical, and entertainment industries. It became both affordable and available to the public market in the form of two of the technology's earliest successful headsets: the Oculus Rift™ and the HTC Vive™. While new equipment continues to emerge, at the time these headsets came equipped with a 100° field-of-view screen that gives the viewer a seamless 360° environment to experience, non-linear in the sense that the viewer can choose where to look and for how long. The fundamental differences between conventional visualizations, such as computer animations and graphics, and VR are nonetheless subtle. A VR environment can be understood as a series of two-dimensional images stitched together into a single seamless 360° image. In this respect, it is only the number of images the viewer sees at one time that separates a conventional visualization from a VR experience.
Journal Article

Pedestrian Impact Analysis of Side-Swipe and Minor Overlap Conditions

2021-04-06
2021-01-0881
This paper presents analyses of 21 real-world pedestrian-versus-vehicle collisions that were video recorded by vehicle dash-mounted cameras or surveillance cameras. These pedestrian collisions have in common an impact configuration in which the pedestrian was at the side of the vehicle, or at the front corner of the vehicle with minimal overlap (less than one foot). These impacts would not be considered frontal impacts [1], and as a result, determining the speed of the vehicle by existing methods that incorporate the pedestrian's post-impact travel distance, or that assess vehicle damage, would not be applicable. This research examined the specific interaction of non-frontal, side-impact, and minimal-overlap pedestrian impact configurations to assess the relationship between the speed of the vehicle at impact, the motion of the pedestrian before and after impact, and the associated post-impact travel distances.
Technical Paper

The Application of Augmented Reality to Reverse Camera Projection

2019-04-02
2019-01-0424
In 1980, research by Thebert introduced the use of photography equipment and transparencies for onsite reverse camera projection photogrammetry [1]. This method involved taking a film photograph through the development process and creating a reduced-size transparency to insert into the camera's viewfinder. The photographer was then able to see both the image contained on the transparency and the actual scene directly through the camera's viewfinder. By properly matching the physical orientation and position of the camera, it was possible to visually align the image on the transparency with the physical world as viewed through the camera. The result was a solution for where the original camera would have been located when the photograph was taken. With the original camera reverse-located, any evidence in the transparency that was no longer present at the site could then be placed back to match the evidence's location in the transparency.
Technical Paper

Nighttime Visibility in Varying Moonlight Conditions

2019-04-02
2019-01-1005
When the visibility of an object or person in the roadway from a driver's perspective is at issue, the potential effect of moonlight is sometimes questioned. To assess this potential effect, methods typically used to quantify visibility were performed during conditions with no moon and with a full moon. In the full-moon condition, measurements were collected from initial moonrise until the moon reached its peak altitude. Baseline ambient illumination at the test surface was measured in both the no-moon and full-moon scenarios. Additionally, a vehicle with activated low-beam headlamps was positioned in the testing area, and the change in illumination at two locations forward of the vehicle was recorded at thirty-minute intervals as the moon rose to its highest position in the sky. Two separate luminance readings were also recorded during the test intervals: one 75 feet in front of and to the left of the vehicle, and another 150 feet forward of the vehicle.
Technical Paper

Low Speed Override of Passenger Vehicles with Heavy Trucks

2019-04-02
2019-01-0430
In low speed collisions (under 15 mph) that involve a heavy truck impacting the rear of a passenger vehicle, it is likely that the front bumper of the heavy truck will override the rear bumper beam of the passenger vehicle, creating an override/underride impact configuration. There is limited data available for study when attempting to quantify vehicle damage and crash dynamics in low-speed override/underride impacts. Low speed impact tests were conducted to provide new data for passenger vehicle dynamics and damage assessment for low speed override/underride rear impacts to passenger vehicles. Three tests were conducted, with a tractor-trailer impacting three different passenger vehicles at 5 mph and 10 mph. This paper presents data from these three tests in order to expand the available data set for low speed override/underride collisions.
Technical Paper

Braking and Swerving Capabilities of Three-Wheeled Motorcycles

2019-04-02
2019-01-0413
This paper reports testing and analysis of the braking and swerving capabilities of on-road, three-wheeled motorcycles. A three-wheeled vehicle has handling and stability characteristics that differ both from two-wheeled motorcycles and from four-wheeled vehicles. The data reported in this paper will enable accident reconstructionists to consider these different characteristics when analyzing a three-wheeled motorcycle operator’s ability to brake or swerve to avoid a crash. The testing in this study utilized two riders operating two Harley-Davidson Tri-Glide motorcycles with two wheels in the rear and one in the front. Testing was also conducted with ballast to explore the influence of passenger or cargo weight. Numerous studies have documented the braking capabilities of two-wheeled motorcycles with riders of varying skill levels and with a range of braking systems.
Technical Paper

Mid-Range Data Acquisition Units Using GPS and Accelerometers

2018-04-03
2018-01-0513
In the 2016 SAE publication “Data Acquisition using Smart Phone Applications,” Neale et al. evaluated the accuracy of basic fitness applications in tracking position and elevation using the GPS and accelerometer technology contained within the smart phone itself [1]. This paper extends that research by evaluating mid-level applications, defined here as applications that use a phone's internal accelerometer and record data at 1 Hz or greater. Such an application can also utilize add-on devices, such as a Bluetooth-enabled GPS antenna, that report at a higher sample rate (10 Hz) than the phone by itself. These mid-level applications are still relatively easy to use, lightweight, and affordable [2], [3], [4], but have the potential for higher data sample rates for the accelerometer (due to the software) and the GPS signal (due to the hardware). In this paper, Harry’s Lap Timer™ was evaluated as a mid-level smart phone application.
Technical Paper

Motorcycle Headlamp Distribution Comparison

2018-04-03
2018-01-1037
The forward lighting systems on a motorcycle differ from those on passenger cars, trucks, and tractor-trailers. Many motorcycles, for instance, have only a single headlamp. For motorcycles with more than one headlamp, the total width between the headlamps is still significantly less than the width of an automobile, an important factor in the detection of a vehicle at night, as well as in the efficacy of the beam pattern in helping the rider see ahead. Single-headlamp configurations are centered on the vehicle and provide little assistance in marking the outside boundaries the way passenger car or truck headlamps can. Further, because of the dynamics of a motorcycle, the performance of the headlamp differs around turns or corners, since the motorcycle must lean in order to negotiate a turn. As a result, the beam pattern, and hence the visibility, provided by the headlamps on a motorcycle is unique among motorized vehicles.
Technical Paper

Application of 3D Visualization in Modeling Wheel Stud Contact Patterns with Rotating and Stationary Surfaces

2017-03-28
2017-01-1414
When a vehicle with protruding wheel studs contacts another vehicle or object in a sideswipe configuration, its tire sidewall, rim, and wheel studs can deposit distinct geometric damage patterns onto the surfaces they contact. Prior research has demonstrated how the relative speed between the two vehicles or surfaces can be calculated through analysis of these distinct contact patterns. This paper presents a methodology for performing this analysis by visually modeling the interaction between wheel studs and various surfaces, and presents a method for automating the calculation of relative speed between vehicles. This methodology also extends prior research by demonstrating how the visual modeling and simulation of wheel stud contact can be applied to almost any surface interaction, including interactions with no previously published tests or with test methods that would be difficult to set up in real life.
Technical Paper

Nighttime Videographic Projection Mapping to Generate Photo-Realistic Simulation Environments

2016-04-05
2016-01-1415
This paper presents a methodology for generating photo-realistic computer simulation environments of nighttime driving scenarios by combining nighttime photography and videography with video tracking [1] and projection mapping [2] technologies. Nighttime driving environments contain complex lighting conditions, such as the forward and signal lighting systems of vehicles, street lighting, and retro-reflective markers and signage. The high dynamic range of nighttime lighting conditions makes these systems difficult to render realistically through computer-generated techniques alone. Photography and video, especially when using high-dynamic-range imaging, can produce realistic representations of the lighting environment. But because video is only two-dimensional and lacks the flexibility of a three-dimensional computer-generated environment, the scenarios that can be represented are limited to the specific scenario recorded on video.
Technical Paper

Data Acquisition using Smart Phone Applications

2016-04-05
2016-01-1461
There are numerous publicly available smart phone applications designed to track the speed and position of the user. By accessing the phone's built-in GPS receiver, these applications record the position of the phone over time and report the record on the phone itself, and typically on the application's website. These applications range in cost from free to a few dollars, with some that advertise greater functionality costing significantly more. This paper examines the reliability of the data reported through these applications, and their potential usefulness in conditions where monitoring and recording vehicle or pedestrian movement is needed. To analyze the reliability of the applications, three of the more popular and widely used tracking programs were downloaded to three different smart phones to represent a good spectrum of operating platforms.
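The core computation such a tracking application performs can be illustrated by estimating speed from consecutive GPS fixes using the haversine great-circle distance. This is a sketch with hypothetical function names, not code from any of the evaluated applications:

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two GPS fixes (degrees)."""
    R = 6371000.0  # mean Earth radius, meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * R * math.asin(math.sqrt(a))

def speeds_mps(fixes):
    """Speed (m/s) between consecutive (t_seconds, lat, lon) fixes."""
    out = []
    for (t0, la0, lo0), (t1, la1, lo1) in zip(fixes, fixes[1:]):
        out.append(haversine_m(la0, lo0, la1, lo1) / (t1 - t0))
    return out
```

Real applications additionally smooth these point-to-point estimates, since GPS position noise between closely spaced fixes translates directly into speed error, which is one reason sample rate and receiver quality matter to reliability.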
Technical Paper

Determining Position and Speed through Pixel Tracking and 2D Coordinate Transformation in a 3D Environment

2016-04-05
2016-01-1478
This paper presents a methodology for determining the position and speed of objects, such as vehicles, pedestrians, or cyclists, that are visible in video footage captured with only one camera. Objects are tracked in the video footage based on the change in the pixels that represent the moving object. Commercially available programs such as PFTrack™ and Adobe After Effects™ contain automated pixel-tracking features that record the position of a pixel over time, two-dimensionally, using the video's resolution as a Cartesian coordinate system. The pixel's coordinate data over time can then be transformed into three-dimensional data by ray tracing the pixel coordinates onto three-dimensional geometry of the same scene that is visible in the background of the video footage.
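The ray-tracing step can be illustrated with the simplest possible scene geometry, a flat ground plane. The sketch below assumes a level camera at a known height with a known focal length in pixels; this is a simplification of the full 3D-scene approach the paper describes, and the function names are hypothetical:

```python
import math

def pixel_to_ground(u, v, cam_h, focal_px, cx, cy):
    """Intersect the ray through pixel (u, v) with the ground plane z = 0.

    Assumes a level camera at height cam_h with optical axis along +Y and
    image v increasing downward; the pixel must lie below the horizon row cy.
    Returns (X, Y) ground coordinates in the same units as cam_h.
    """
    t = cam_h * focal_px / (v - cy)   # ray parameter at the ground hit
    return (t * (u - cx) / focal_px, t)

def speed_from_track(track, fps, cam_h, focal_px, cx, cy):
    """Average speed of a tracked pixel across consecutive video frames."""
    pts = [pixel_to_ground(u, v, cam_h, focal_px, cx, cy) for u, v in track]
    dist = sum(math.hypot(x1 - x0, y1 - y0)
               for (x0, y0), (x1, y1) in zip(pts, pts[1:]))
    return dist * fps / (len(pts) - 1)
```

With real scene geometry the same idea holds: each pixel defines a ray from the camera, and the 3D position is wherever that ray first strikes the scanned or modeled surface rather than an idealized plane.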
Technical Paper

Evaluation of the Accuracy of Image Based Scanning as a Basis for Photogrammetric Reconstruction of Physical Evidence

2016-04-05
2016-01-1467
Improvements in computer image processing and identification capability have led to programs that can rapidly perform calculations and model the three-dimensional spatial characteristics of objects simply from photographs or video frames. This process, known as structure-from-motion or image-based scanning, is a photogrammetric technique that analyzes features of photographs or video frames from multiple angles to create dense surface models or point clouds. Concurrently, unmanned aircraft systems have gained widespread popularity due to their reliability, low cost, and relative ease of use. These aircraft systems allow for the capture of video or still photographic footage of subjects from unique perspectives. This paper explores the efficacy of using a point cloud created from unmanned aerial vehicle video footage with traditional single-image photogrammetry methods to recreate physical evidence at a crash scene.
Technical Paper

Video Projection Mapping Photogrammetry through Video Tracking

2013-04-08
2013-01-0788
This paper examines a method for generating a scaled three-dimensional computer model of an accident scene from video footage. This method, which combines the previously published methods of video tracking and camera projection, includes automated mapping of physical evidence through rectification of each frame. Video tracking is a photogrammetric technique for obtaining three-dimensional data from a scene using video, described in the 2004 publication “A Video Tracking Photogrammetry Technique to Survey Roadways for Accident Reconstruction” (SAE 2004-01-1221).
Technical Paper

Photogrammetric Measurement Error Associated with Lens Distortion

2011-04-12
2011-01-0286
All camera lenses contain optical aberrations as a result of their design and manufacturing processes. Lens aberrations cause distortion of the resulting image captured on film or a sensor. This distortion is inherent in all lenses because of the shape required to project the image onto the film or sensor, the materials that make up the lens, and the configuration of lens elements used to achieve varying focal lengths and other photographic effects. Lens distortion can introduce errors when photogrammetric techniques are used to analyze photographs of accident scenes to determine the position, scale, length, and other characteristics of evidence in a photograph. This paper evaluates how lens distortion affects images, and how photogrammetrically measuring a distorted image can result in measurement errors.
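The dominant radial component of lens distortion is commonly modeled with a Brown-Conrady polynomial in normalized image coordinates. The sketch below, an illustration rather than the paper's method, shows how a distortion coefficient translates into a pixel-level measurement error:

```python
import math

def apply_radial_distortion(x, y, k1, k2=0.0):
    """Brown-Conrady radial model on normalized coordinates:
    x_d = x * (1 + k1*r^2 + k2*r^4). Negative k1 gives barrel distortion,
    positive k1 gives pincushion distortion; the image center is unaffected.
    """
    r2 = x * x + y * y
    s = 1.0 + k1 * r2 + k2 * r2 * r2
    return (x * s, y * s)

def distortion_error_px(x, y, k1, focal_px, k2=0.0):
    """Pixel displacement an uncorrected measurement at (x, y) would suffer."""
    xd, yd = apply_radial_distortion(x, y, k1, k2)
    return focal_px * math.hypot(xd - x, yd - y)
```

Because the error grows with distance from the image center, evidence measured near the edges of an uncorrected photograph is displaced more than evidence near the center, which is exactly the kind of position and scale error the paper evaluates.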
Technical Paper

Evaluation of Photometric Data Files for Use in Headlamp Light Distribution

2010-04-12
2010-01-0292
Computer simulation of nighttime lighting in urban environments can be complex due to the myriad of light sources present (e.g., street lamps, building lights, signage, and vehicle headlamps). In these areas, vehicle headlamps can make a significant contribution to the lighting environment [1, 2]. This contribution may need to be incorporated into a lighting simulation to accurately calculate overall light levels and to represent how the light affects the experience and quality of the environment. Within a lighting simulation, photometric files, such as the photometric standard light data file format, are often used to simulate light sources such as street lamps and exterior building lights in nighttime environments. This paper examines the validity of using these same photometric file types for the simulation of vehicle headlamps by comparing the light distribution from actual vehicle headlamps to photometric files of these same headlamps.
Technical Paper

Simulating Headlamp Illumination Using Photometric Light Clusters

2009-04-20
2009-01-0110
Assessing the ability of a driver to see objects, pedestrians, or other vehicles at night is a necessary precursor to determining if that driver could have avoided a nighttime crash. The visibility of an object at night is largely due to the luminance contrast between the object and its background. This difference depends on many factors, one of which is the amount of illumination produced by a vehicle’s headlamps. This paper focuses on a method for digitally modeling a vehicle headlamp, such that the illumination produced by the headlamps can be evaluated. The paper introduces the underlying concepts and a methodology for simulating, in a computer environment, a high-beam headlamp using a computer-generated light cluster. In addition, the results of using this methodology are evaluated by comparing light values measured for a real headlamp to those of the simulated headlamp.
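The physics behind such a light cluster follows the inverse-square and cosine laws for a point source, with a cluster of point sources approximating the extended headlamp. A minimal sketch under those assumptions (hypothetical names, not the paper's implementation):

```python
import math

def illuminance_lux(intensity_cd, src, target, normal):
    """Point-source illuminance E = I * cos(theta) / d^2.

    intensity_cd: luminous intensity toward the target, in candela.
    src, target: 3D points in meters; normal: unit surface normal at target.
    Returns illuminance in lux; light arriving from behind the surface is 0.
    """
    d = [t - s for s, t in zip(src, target)]          # source -> target
    dist = math.sqrt(sum(c * c for c in d))
    cos_t = max(0.0, -sum(dc * nc for dc, nc in zip(d, normal)) / dist)
    return intensity_cd * cos_t / dist ** 2

def cluster_illuminance(sources, target, normal):
    """Sum contributions from a cluster of (intensity_cd, position) sources."""
    return sum(illuminance_lux(i, p, target, normal) for i, p in sources)
```

Summing many such sources, each assigned an intensity from the lamp's measured distribution, is what lets a simulated cluster reproduce the uneven illumination pattern a real high-beam headlamp casts on the road.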