Results
2017-03-28
Technical Paper
2017-01-0205
Abstract: Reliability and resiliency (R&R) definitions differ depending on the system under consideration. Generally, each engineering sector defines relevant R&R metrics pertinent to its systems. While this can impede cross-disciplinary engineering projects as well as research, it is a necessary strategy to capture all the relevant system characteristics. This paper highlights the difficulties associated with defining the performance of such systems, using smart microgrids as an example. Further, it develops metrics and definitions, based on utility theory, that are useful in assessing their performance. A microgrid must not only anticipate load conditions but also tolerate partial failures and continue to operate optimally. Many of these failures happen infrequently but unexpectedly and are therefore hard to plan for. We discuss real-life failure scenarios and show how the proposed definitions and metrics are beneficial.
2017-03-28
Journal Article
2017-01-0209
Abstract: Warranty forecasting of repairable systems is very important for manufacturers of mass-produced systems. It is desirable to predict the Expected Number of Failures (ENF) after a censoring time using failure data collected before the censoring time. Moreover, systems may be produced with a defective component, resulting in extensive warranty costs even after the defective component is detected and replaced with a new design. In this paper, we present a forecasting method to predict the ENF of a repairable system using observed data to calibrate a Generalized Renewal Process (GRP) model. Manufactured products may exhibit different production patterns, with different failure statistics, through time. For example, vehicles produced in different months may have different failure intensities because of supply-chain differences or different skills of production workers.
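To illustrate ENF forecasting for a repairable system, the sketch below uses a power-law process (Crow-AMSAA), a simpler special case of the GRP family the abstract calibrates; the scale and shape parameters are purely illustrative assumptions, not values from the paper.

```python
# Expected Number of Failures (ENF) under a power-law process:
# ENF(t) = (t / eta) ** beta. This is a minimal stand-in for the
# calibrated GRP model; eta and beta below are assumed, not fitted.

def enf_power_law(t, eta, beta):
    """Expected cumulative number of failures by time t (units of eta)."""
    return (t / eta) ** beta

eta, beta = 10.0, 1.3              # illustrative scale (months) and shape
enf_12 = enf_power_law(12.0, eta, beta)   # failures expected by censoring time
enf_36 = enf_power_law(36.0, eta, beta)   # forecast out to 36 months
extra = enf_36 - enf_12            # warranty failures expected in months 12-36
```

With beta > 1 the failure intensity grows with time, so the forecast window contributes disproportionately many failures.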
2017-03-28
Journal Article
2017-01-0206
Abstract: Recent developments in time-dependent reliability have introduced the concept of a composite limit state. The composite limit state method can be used to calculate the time-dependent probability of failure for dynamic systems with limit-state functions of input random variables and input random processes that are also explicit in time. The probability of failure can be calculated exactly using the composite limit state if the instantaneous limit states are linear, form an open or closed polytope, and are functions of only two random variables. In this work, the restriction on the number of random variables is lifted. The proposed algorithm is accurate and efficient for linear instantaneous limit-state functions of any number of random variables. An example on the design of a hydrokinetic turbine blade under time-dependent river flow load demonstrates the accuracy of the proposed general composite limit state approach.
2017-03-28
Journal Article
2017-01-0194
Abstract: A methodology for time-dependent reliability-based design optimization of vibratory systems with random parameters under stationary excitation is presented. The time-dependent probability of failure is computed using an integral equation that involves up-crossing and joint up-crossing rates. The total probability theorem addresses the presence of the system random parameters, and a sparse-grid quadrature method calculates the integral of the total probability theorem efficiently. The sensitivity derivatives of the time-dependent probability of failure with respect to the design variables are computed using finite differences. The Modified Combined Approximations (MCA) reanalysis method is used to reduce the overall computational cost of repeated evaluations of the system frequency response or, equivalently, the impulse response function. The method is applied to the shape optimization of a vehicle frame under stochastic loading.
2017-03-28
Journal Article
2017-01-0197
Abstract: Fatigue life estimation, reliability, and durability are important in the acquisition, maintenance, and operation of vehicle systems. Fatigue life is random because of the stochastic load, the inherent variability of material properties, and the uncertainty in the definition of the S-N curve. The commonly used fatigue life estimation methods calculate the mean (not the distribution) of fatigue life under Gaussian loads using the potentially restrictive narrowband assumption. In this paper, a general methodology is presented to calculate the statistics of fatigue life for a linear vibratory system under stationary, non-Gaussian loads, considering the effects of skewness and kurtosis. The input loads are first characterized using their first four moments (mean, standard deviation, skewness, and kurtosis) and a correlation structure equivalent to a given Power Spectral Density (PSD).
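The first four moments mentioned in the abstract can be computed directly from a load sample; a minimal sketch (using the raw standardized fourth moment, which is 3.0 for a Gaussian, so excess kurtosis measures departure from Gaussianity):

```python
import math

def first_four_moments(xs):
    """Return (mean, std, skewness, kurtosis) of a sample.
    Kurtosis here is the raw fourth standardized moment (3.0 for Gaussian)."""
    n = len(xs)
    mean = sum(xs) / n
    var = sum((x - mean) ** 2 for x in xs) / n
    std = math.sqrt(var)
    skew = sum((x - mean) ** 3 for x in xs) / (n * std ** 3)
    kurt = sum((x - mean) ** 4 for x in xs) / (n * std ** 4)
    return mean, std, skew, kurt

# A symmetric sample has zero skewness.
m, s, g1, g2 = first_four_moments([1.0, 2.0, 3.0, 4.0, 5.0])
```

A non-Gaussian load would show nonzero skewness or a kurtosis away from 3.0, which is exactly what the narrowband Gaussian methods cannot capture.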
2017-03-28
Journal Article
2017-01-0207
Abstract: A new second-order Saddlepoint Approximation (SA) method for structural reliability analysis is introduced. The Mean-value Second-order Saddlepoint Approximation (MVSOSA) is presented as an extension of the Mean-value First-order Saddlepoint Approximation (MVFOSA). The proposed method is based on a second-order Taylor expansion of the limit state function around the mean value of the input random variables. It requires not only the first- but also the second-order sensitivity derivatives of the limit state function. If sensitivity analysis must be avoided because of computational cost, a quadrature integration approach based on sparse grids is also presented and linked to the saddlepoint approximation (SGSA: Sparse Grid Saddlepoint Approximation). The SGSA method is compared with the first- and second-order SA methods in terms of accuracy and efficiency. The proposed MVSOSA and SGSA methods are used in the reliability analysis of two examples.
2016-04-05
Journal Article
2016-01-0316
Abstract: We have recently obtained experimental data and used them to develop computational models to quantify occupant impact responses and injury risks for military vehicles during frontal crashes. The number of experimental tests and model runs is, however, relatively small due to their high cost. While this is true across the auto industry, it is particularly critical for the Army and other government agencies operating under tight budget constraints. In this study, we investigate through statistical simulations how the injury risk would vary if a large number of experimental tests were conducted. We show that the injury risk distribution is skewed to the right, implying that, although most physical tests result in a small injury risk, there are occasional physical tests for which the injury risk is extremely large. We compute the probabilities of such events and use them to identify optimum design conditions that minimize such probabilities.
2016-04-05
Journal Article
2016-01-1318
Abstract: Finite element analysis is a standard tool for deterministic or probabilistic design optimization of dynamic systems. The optimization process requires repeated eigenvalue analyses, which can be computationally expensive. Several reanalysis techniques have been proposed to reduce the computational cost, including Parametric Reduced Order Modeling (PROM), Combined Approximations (CA), and the Modified Combined Approximations (MCA) method. Although the cost of reanalysis is substantially reduced, it can still be high for models with a large number of degrees of freedom and a large number of design variables. Reanalysis methods use a basis composed of eigenvectors from both the baseline and the modified designs, which are in general linearly dependent. To eliminate the linear dependency and improve accuracy, Gram-Schmidt orthonormalization is employed, which is itself costly.
2016-04-05
Journal Article
2016-01-1338
Abstract: Weight reduction is very important in automotive design because of the stringent demand on fuel economy. Structural optimization of dynamic systems using finite element (FE) analysis plays an important role in reducing weight while simultaneously delivering a product that meets all functional requirements for durability, crash, and NVH. With advancing computer technology, the demand for solving large FE models has grown. Optimization is, however, costly due to repeated full-order analyses. Reanalysis methods can be used in structural vibrations to reduce the analysis cost of repeated eigenvalue analyses for both deterministic and probabilistic problems. Several reanalysis techniques have been introduced over the years, including Parametric Reduced Order Modeling (PROM), Combined Approximations (CA), and the Epsilon algorithm, among others.
2016-04-05
Journal Article
2016-01-1395
Abstract: To improve fuel economy, there is a trend in the automotive industry to use lightweight, high-strength materials. Automotive body structures are composed of several panels which must be downsized to reduce weight. Because this affects NVH (Noise, Vibration and Harshness) performance, engineers are challenged to recover the panel stiffness lost to down-gauging in order to reduce the structure-borne noise transmitted through the lightweight panels in the 100-300 Hz frequency range, where most of the booming and low-to-medium frequency noise occurs. The loss in performance can be recovered by optimizing panel geometry using beading or damping treatment. Topography optimization is a special class of shape optimization for changing sheet-metal shapes by introducing beads. A large number of design variables can be handled, and the process is easy to set up in commercial codes. However, optimization methods are computationally intensive because of repeated full-order analyses.
2015-04-14
Journal Article
2015-01-0425
Abstract: Using the total probability theorem, we propose a method to calculate the failure rate of a linear vibratory system with random parameters excited by stationary Gaussian processes. The response of such a system is non-stationary because of the randomness of the input parameters. A space-filling design, such as optimal symmetric Latin hypercube sampling or maximin, is first used to sample the input parameter space. For each design point, the output process is stationary and Gaussian. We present two approaches to calculate the corresponding conditional probability of failure. A Kriging metamodel is then created between the input parameters and the output conditional probabilities, allowing us to estimate the conditional probabilities for any set of input parameters. The total probability theorem is finally applied to calculate the time-dependent probability of failure and the failure rate of the dynamic system. The proposed method is demonstrated using a vibratory system.
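The core combination described above — sample the random parameters with a space-filling design, then average the conditional failure probabilities via the total probability theorem — can be sketched in one dimension. The conditional-failure function below is a hypothetical stand-in (the tail probability of a standard normal as a function of a stiffness-like parameter), not the paper's model:

```python
import math
import random

def latin_hypercube_1d(n, lo, hi, rng):
    """One-dimensional Latin hypercube: one uniform draw per equal stratum."""
    width = (hi - lo) / n
    pts = [lo + (i + rng.random()) * width for i in range(n)]
    rng.shuffle(pts)
    return pts

def pf_total_probability(cond_pf, samples):
    """Total probability theorem with equally weighted parameter samples."""
    return sum(cond_pf(p) for p in samples) / len(samples)

# Hypothetical conditional failure probability for parameter k:
# P(failure | k) = P(Z > k) for standard normal Z (illustrative only).
cond_pf = lambda k: 0.5 * math.erfc(k / math.sqrt(2.0))

rng = random.Random(0)
ks = latin_hypercube_1d(50, 1.0, 3.0, rng)   # space-filling parameter samples
pf = pf_total_probability(cond_pf, ks)        # unconditional failure probability
```

In the paper this averaging step would use conditional probabilities predicted by the Kriging metamodel rather than a closed-form function.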
2014-04-01
Journal Article
2014-01-0717
We propose a new metamodeling method to characterize the output (response) random process of a dynamic system with random parameters, excited by input random processes. The metamodel can then be used to efficiently estimate the time-dependent reliability of a dynamic system using analytical or simulation-based methods. The metamodel is constructed by decomposing the input random processes using principal components or wavelets and then using a few simulations to estimate the distributions of the decomposition coefficients. A similar decomposition is also performed on the output random process. A Kriging model is then established between the input and output decomposition coefficients and subsequently used to quantify the output random process corresponding to a realization of the input random parameters and random processes. What distinguishes our approach from others in metamodeling is that the system input is not deterministic but random.
2014-04-01
Journal Article
2014-01-0719
Implications of decision analysis (DA) on engineering design are important and well-documented. However, widespread adoption has not occurred. To that end, the authors recently proposed decision topologies (DT) as a visual method for representing decision situations and proved that they are entirely consistent with normative decision analysis. This paper addresses the practical issue of assessing the DTs of a designer using their responses. As in classical DA, this step is critical to encoding the decision maker's (DM's) preferences so that further analysis and mathematical optimization can be performed on the correct set of preferences. We show how multi-attribute DTs can be directly assessed from DM responses. Furthermore, we show that preferences under uncertainty can be trivially incorporated and that topologies can be constructed from single-attribute topologies, similarly to multilinear functions in utility analysis. This incremental construction simplifies the process of topology construction.
2014-04-01
Journal Article
2014-01-0716
The reliability theory of repairable systems is vastly different from that of non-repairable systems. The authors have recently proposed a ‘decision-based’ framework to design and maintain repairable systems for optimal performance and reliability using a set of metrics such as the minimum failure-free period, the number of failures in the planning horizon (lifecycle), and cost. The optimal solution includes the initial design, the system maintenance throughout the planning horizon, and the protocol to operate the system. In this work, we extend this idea by incorporating flexibility and demonstrate our approach using a smart-charging electric microgrid architecture. The flexibility is realized by allowing the architecture to change with time. Our approach “learns” the working characteristics of the microgrid. We use actual load and supply data over a short time to quantify the load and supply random processes and also establish the correlation between them.
2013-04-08
Journal Article
2013-01-0606
The classical definition of reliability may not be readily applicable to repairable systems. Commonly used concepts such as the Mean Time Between Failures (MTBF) and availability can be misleading because they report only limited information about the system functionality. In this paper, we discuss a set of metrics that can help with the design of repairable systems. Based on a set of desirable properties for these metrics, we select a minimal set of metrics (MSOM) which provides the most information about a system with the smallest number of metrics. The metric of Minimum Failure-Free Period (MFFP) with a given probability generalizes MTBF because the latter is simply the MFFP with a 0.5 probability. It also generalizes availability because, coupled with repair times, it provides a clearer picture of the length of the expected uninterrupted service. Two forms of MFFP are used: transient and steady-state.
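A minimal sketch of the MFFP idea: under an exponential model for times between failures (an assumption made here purely for illustration, not the paper's model), the failure-free period guaranteed with probability p follows from the survival function in closed form.

```python
import math

def mffp_exponential(mtbf, p):
    """Minimum Failure-Free Period with survival probability p, assuming
    exponentially distributed times between failures:
    P(no failure in [0, t]) = exp(-t / MTBF) >= p  =>  t = -MTBF * ln(p)."""
    return -mtbf * math.log(p)

mtbf = 1000.0                        # hours; illustrative value
t50 = mffp_exponential(mtbf, 0.5)    # failure-free period at 50% confidence
t90 = mffp_exponential(mtbf, 0.9)    # stricter 90% requirement is shorter
```

Raising the required probability shrinks the guaranteed failure-free window, which is the trade-off the MFFP metric makes explicit and a single MTBF number hides.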
2013-04-08
Journal Article
2013-01-0943
Importance Sampling is a popular method for reliability assessment. Although it is significantly more efficient than standard Monte Carlo simulation when a suitable sampling distribution is used, in many design problems it is still too expensive. The authors have previously proposed a method to manage the computational cost of standard Monte Carlo simulation that views design as a choice among alternatives with uncertain reliabilities. Information from simulation has value only if it helps the designer make a better choice among the alternatives. This paper extends that method to Importance Sampling. First, the designer estimates the prior probability density functions of the reliabilities of the alternative designs and calculates the expected utility of the choice of the best design. Subsequently, the designer estimates the likelihood function of the probability of failure by performing an initial simulation with Importance Sampling.
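The efficiency gain of Importance Sampling mentioned above can be shown on a toy reliability problem: estimating the small tail probability P(X > 3) for X ~ N(0,1) by sampling from a distribution shifted to the failure region and reweighting. The limit state and shift are illustrative assumptions, not from the paper.

```python
import math
import random

def importance_sampling_pf(limit, shift, n, seed=0):
    """Estimate P(X > limit) for X ~ N(0,1) by drawing from N(shift,1)
    and reweighting each failing sample by the likelihood ratio
    phi(x; 0, 1) / phi(x; shift, 1)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        x = rng.gauss(shift, 1.0)
        if x > limit:
            total += math.exp(-0.5 * x * x + 0.5 * (x - shift) ** 2)
    return total / n

exact = 0.5 * math.erfc(3.0 / math.sqrt(2.0))      # P(X > 3), about 1.35e-3
pf_is = importance_sampling_pf(limit=3.0, shift=3.0, n=20_000)
```

Centering the sampling density at the limit makes roughly half the samples "fail", whereas standard Monte Carlo would need on the order of a million replications for comparable accuracy on a probability this small.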
2013-04-08
Journal Article
2013-01-0947
Our recent work has shown that the representation of systems using a reliability block diagram can be used as a decision-making tool. In decision making, we called these block diagrams decision topologies. In this paper, we generalize the results and show that decision topologies can be used to make many engineering decisions and can in fact replace decision analysis for most decisions. We also provide a meta-proof that the proposed method using decision topologies is entirely consistent with decision analysis in the limit. The main advantages of the method are that (1) it provides a visual representation of a decision situation, (2) it can easily model trade-offs, (3) it can incorporate binary attributes, (4) it can model preferences with limited information, and (5) it can be used in a low-fidelity sense to quickly make a decision.
2013-04-08
Technical Paper
2013-01-1385
Design optimization often relies on computational models, which are subjected to a validation process to ensure their accuracy. Because validation of computer models over the entire design space can be costly, we have previously proposed an approach in which design optimization and model validation are performed concurrently using a sequential approach with variable-size local domains. We used test data and statistical bootstrap methods to size each local domain, where the prediction model is considered validated and where design optimization is performed. The method proceeds iteratively until the optimum design is obtained. This method, however, requires test data to be available in each local domain along the optimization path. In this paper, we refine our methodology by using polynomial regression to predict the size and shape of a local domain at some steps along the optimization process without using test data.
2012-04-16
Journal Article
2012-01-0070
Reliability is an important engineering requirement for consistently delivering acceptable product performance through time. It also affects the scheduling of preventive maintenance. Reliability usually degrades with time, thereby increasing the lifecycle cost because more frequent failures result in increased warranty costs, costly repairs, and loss of market share. In a lifecycle-cost-based design, we must account for product quality and preventive maintenance using time-dependent reliability. Quality is a measure of our confidence that the product conforms to specifications as it leaves the factory. For a repairable system, preventive maintenance is scheduled to avoid failures, unnecessary production loss, and safety violations. This article proposes a methodology to obtain the optimal scheduling for preventive maintenance using time-dependent reliability principles.
2012-04-16
Journal Article
2012-01-0064
In this article, we present an approach to identify the system topology, using simulation, for reliability calculations. The system topology describes how all components in a system are functionally connected. Most reliability engineering literature assumes that either the system topology is known, and therefore all failure modes can be deduced, or, if the system topology is not known, that we are only interested in identifying the dominant failure modes. The authors contend that we should try to extract as much information about the system topology from failure or success information of a system as possible. This will not only identify the dominant failure modes but will also provide an understanding of how the components are functionally connected, allowing for more complicated analyses, if needed. We use an evolutionary approach in which system topologies are generated at random and then tested against failure or success data. The topologies evolve based on how consistent they are with test data.
2012-04-16
Technical Paper
2012-01-0914
Reaching a system-level reliability target is an inverse problem: component-level reliabilities are determined for a required system-level reliability. Because this inverse problem does not have a unique solution, one approach is to trade off system reliability against cost and to allow the designer to select a design with a target system reliability using his/her preferences. In this case, the component reliabilities are readily available from the calculation of the reliability-cost trade-off. In arriving at the set of solutions to be traded off, one encounters two problems. First, the system reliability calculation is based on repeated system simulations, where each system state, indicating which components work and which have failed, is tested to determine whether it causes system failure. Second, the task of eliciting and encoding the decision maker's preferences is extremely difficult because of uncertainty in modeling those preferences.
2012-04-16
Journal Article
2012-01-0226
Design optimization often relies on computational models, which are subjected to a validation process to ensure their accuracy. Because validation of computer models over the entire design space can be costly, a recent approach was proposed in which design optimization and model validation are performed concurrently using a sequential approach with both fixed- and variable-size local domains. The variable-size approach used parametric distributions, such as the Gaussian, to quantify the variability in test data and model predictions, and a maximum likelihood estimation to calibrate the prediction model. Also, a parametric bootstrap method was used to size each local domain. In this article, we generalize the variable-size approach by not assuming any particular distribution, such as the Gaussian. A nonparametric bootstrap methodology is instead used to size the local domains. We expect its generality to be useful in applications where distributional assumptions are difficult to verify, or are not met at all.
2012-04-16
Technical Paper
2012-01-0915
Monte Carlo simulation is a popular tool for reliability assessment because of its robustness and ease of implementation. A major concern with this method is its computational cost; standard Monte Carlo simulation requires quadrupling the number of replications to halve the standard deviation of the estimated failure probability. Efforts to increase efficiency focus on intelligent sampling procedures and methods for efficient calculation of the performance function of a system. This paper proposes a new method to manage cost that views design as a decision among alternatives with uncertain reliabilities. Information from a simulation has value only if it enables the designer to make a better choice among the alternative options. Consequently, the value of information from the simulation is equal to the gain from using this information to improve the decision. Using the method, a designer can determine the number of replications that are worth performing.
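The quadrupling-to-halve relationship stated above follows directly from the binomial standard error of the Monte Carlo estimator; a one-line check (the failure probability and sample sizes are illustrative):

```python
import math

def mc_standard_error(p, n):
    """Standard deviation of the Monte Carlo estimator of a failure
    probability p from n independent replications: sqrt(p * (1 - p) / n)."""
    return math.sqrt(p * (1.0 - p) / n)

p = 1e-3                                  # illustrative failure probability
se_n = mc_standard_error(p, 10_000)
se_4n = mc_standard_error(p, 40_000)
ratio = se_n / se_4n                      # quadrupling n halves the error
```

Since the error shrinks only as 1/sqrt(n), each extra digit of accuracy costs a hundredfold more replications, which is what motivates the value-of-information stopping rule the paper proposes.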
2011-05-17
Technical Paper
2011-01-1628
Styrene-Butadiene Rubber (SBR), a copolymer of butadiene and styrene, is widely used in the automotive industry due to its high durability and resistance to abrasion, oils, and oxidation. Some of the common applications include tires, vibration isolators, and gaskets, among others. This paper characterizes the dynamic behavior of SBR and discusses the suitability of a viscoelastic model of elastomers, known as the Kelvin model, from a mathematical and physical point of view. An optimization algorithm is used to estimate the parameters of the Kelvin model. The resulting model was shown to produce reasonable approximations of measured dynamic stiffness. The model was also used to calculate the self-heating of the elastomer due to energy dissipation by the viscous damping components in the model. Developing such a predictive capability is essential in understanding the dynamic behavior of elastomers, considering that their dynamic stiffness can in general depend on temperature.
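The Kelvin (Kelvin-Voigt) element mentioned above — a spring and viscous damper in parallel — has a simple complex dynamic stiffness, K*(ω) = k + iωc. The sketch below evaluates its magnitude and loss angle; the stiffness and damping values are illustrative assumptions, not the paper's fitted parameters.

```python
import math

def kelvin_dynamic_stiffness(k, c, omega):
    """Complex dynamic stiffness of a Kelvin(-Voigt) element: spring k in
    parallel with damper c gives K*(omega) = k + i*omega*c. Returns the
    magnitude and the loss angle (phase between force and displacement)."""
    real, imag = k, omega * c
    magnitude = math.hypot(real, imag)
    loss_angle = math.atan2(imag, real)
    return magnitude, loss_angle

k, c = 2.0e5, 150.0                 # N/m and N*s/m; illustrative values
omega = 2.0 * math.pi * 100.0       # excitation at 100 Hz
mag, delta = kelvin_dynamic_stiffness(k, c, omega)
```

The loss angle grows with frequency, and the dissipated power per cycle (which drives the self-heating discussed in the paper) is proportional to the damping term ωc.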
2011-04-12
Journal Article
2011-01-0728
Reliability is an important engineering requirement for consistently delivering acceptable product performance through time. As time progresses, the product may fail due to time-dependent operating conditions and material properties, component degradation, etc. The reliability degradation with time may increase the lifecycle cost due to potential warranty costs, repairs, and loss of market share. Reliability is the probability that the system will perform its intended function successfully for a specified time interval. In this work, we consider the first-passage reliability, which accounts for the first-time failure of non-repairable systems. Methods are available in the literature which provide an upper bound to the true reliability, but they may overestimate the true value considerably. Monte Carlo simulations are accurate but computationally expensive.
2011-04-12
Journal Article
2011-01-0725
Understanding reliability is critical in the design, maintenance, and durability analysis of engineering systems. A reliability simulation methodology is presented in this paper for vehicle fleets using limited data. The method can be used to estimate the reliability of non-repairable as well as repairable systems. It can optimally allocate individual component reliabilities, based on a target system reliability, using a multi-objective optimization algorithm. The algorithm establishes a Pareto front that can be used for an optimal trade-off between reliability and the associated cost. The method uses Monte Carlo simulation to estimate the system failure rate and reliability as a function of time. The probability density functions (PDF) of the time between failures for all components of the system are estimated using either limited data or a user-supplied MTBF (mean time between failures) and its coefficient of variation.
2011-04-12
Journal Article
2011-01-1080
Multi-attribute decision making and multi-objective optimization complement each other. Often, while making design decisions involving multiple attributes, a Pareto front is generated using a multi-objective optimizer. The end user then chooses the optimal design from the Pareto front based on his/her preferences. This seemingly simple methodology requires sufficient modification if uncertainty is present. We explore two kinds of uncertainty in this paper: uncertainty in the decision variables, which we call inherent design problem (IDP) uncertainty, and uncertainty in the knowledge of the preferences of the decision maker, which we refer to as preference assessment (PA) uncertainty. From a purely utility-theoretic perspective, a rational decision maker maximizes his or her expected multi-attribute utility.
2011-04-12
Journal Article
2011-01-1081
This paper presents a methodology to evaluate and optimize discrete event systems, such as an assembly line or a call center. First, the methodology estimates the performance of a system for a single probability distribution of the inputs. Probabilistic Reanalysis (PRRA) then uses this information to evaluate the effect of changes in the system configuration on its performance. PRRA is integrated with a program to optimize the system. The proposed methodology is dramatically more efficient than one requiring a new Monte Carlo simulation each time we change the system. We demonstrate the approach on a drilling center and an electronic parts factory.
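The reuse idea behind probabilistic reanalysis can be sketched with likelihood-ratio reweighting: samples drawn under the original input distribution are reweighted to estimate performance under a modified one, with no new simulation. The normal distributions, the shift, and the performance function below are illustrative assumptions.

```python
import math
import random

def normal_pdf(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2.0 * math.pi))

def reanalysis_mean(samples, h, old, new):
    """Re-estimate E[h(X)] under a new input distribution by reweighting
    samples drawn under the old one (the reuse idea behind Probabilistic
    Reanalysis): each sample gets weight new_pdf(x) / old_pdf(x)."""
    w = [normal_pdf(x, *new) / normal_pdf(x, *old) for x in samples]
    return sum(wi * h(x) for wi, x in zip(w, samples)) / sum(w)

rng = random.Random(1)
xs = [rng.gauss(0.0, 1.0) for _ in range(50_000)]   # one Monte Carlo run

# Effect of shifting the input mean from 0.0 to 0.3 without re-simulating;
# for h(x) = x**2 the exact new value is 1 + 0.3**2 = 1.09.
m_new = reanalysis_mean(xs, h=lambda x: x * x, old=(0.0, 1.0), new=(0.3, 1.0))
```

The same sample set can thus be reused across many candidate configurations, which is why the approach is far cheaper than a fresh Monte Carlo run per change.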
2011-04-12
Journal Article
2011-01-0243
A common approach to the validation of simulation models focuses on validation throughout the entire design space. A more recent methodology validates designs as they are generated during a simulation-based optimization process. The latter method relies on validating the simulation model in a sequence of local domains. To improve its computational efficiency, this paper proposes an iterative process in which the size and shape of the local domain at the current step are determined from a parametric bootstrap methodology involving maximum likelihood estimators of unknown model parameters from the previous step. Validation is carried out in the local domain at each step. The iterative process continues until the local domain does not change from iteration to iteration during the optimization process, ensuring that a converged design optimum has been obtained.
2010-04-12
Journal Article
2010-01-0645
A simulation-based, system reliability-based design optimization (RBDO) method is presented that can handle problems with multiple failure regions and correlated random variables. Copulas are used to represent the correlation. The method uses a Probabilistic Re-Analysis (PRRA) approach in conjunction with a trust-region optimization approach and local metamodels covering each trust region. PRRA calculates very efficiently the system reliability of a design by performing a single Monte Carlo (MC) simulation per trust region. Although PRRA is based on MC simulation, it calculates “smooth” sensitivity derivatives, therefore allowing the use of a gradient-based optimizer. The PRRA method is based on importance sampling. It provides accurate results if the support of the sampling PDF contains the support of the joint PDF of the input random variables. The sequential trust-region optimization approach satisfies this requirement.
Filter

Design Engineering and Styling
67

Quality, Reliability, and Durability
26

Simulation and modeling
25

Analysis methodologies
25

Reliability
24

Design processes
23

Power and Propulsion
19

Maintenance and Aftermarket
18

Maintainability and supportability
18

Engines
17

Noise, Vibration, and Harshness
13

Noise
13

Materials
12

Engine components
11

Manufacturing
10

Engine mechanical components
10

Manufacturing processes
9

Vehicles and Performance
9

Mathematical models
7

Finite element analysis
7

Statistical analysis
7

Mathematical analysis
7

Management and Organization
7

Product development
6

Parts and Components
6

Parts
6

Pistons
6

Bodies and Structures
5

Body structures
5

Computer simulation
4

Systems engineering
4

Vibration
4

Bearings
4

Crankshafts
4

Doors
3

Failure modes and effects analysis
3

Failure analysis
3

Electrical, Electronics, and Avionics
3

Metals
3

Steel
3

Ferrous metals
3

Diesel / Compression Ignition engines
3

Tests and Testing
3

Vehicles
3

CAD, CAM, and CAE
2

Optimization
2

Architecture
2

Fuels and Energy Sources
2

Onboard energy sources
2

Fuel cells
2

Polymers
2

Powertrains
2

Engine cylinders
2

Spark ignition engines
2

Test procedures
2

Vehicle dynamics
2

Vehicle performance
2

Computational fluid dynamics
1

Computer software and hardware
1

Data acquisition and handling
1

Environment
1

Terrain
1

Fuel economy
1

Hybrid power
1

Human Factors and Ergonomics
1

Vehicle occupants
1

Maintenance, repair, and service operations
1

Elastomers
1

Axles
1

Transmissions
1

Engine cooling systems
1

Combustion chambers
1

Connecting rods
1

Flywheels
1

Auxiliary power units
1

Electric motors
1

Safety
1

Crashes
1

Injuries
1

Safety testing and procedures
1

Risk assessments
1

Vehicle handling
1

Military vehicles and equipment
1

Fleets
1

Fuel cell vehicles
1

Lightweighting
1

Oakland University
42

Oakland Univ.
19

US Army TARDEC
12

Univ. of Toledo
6

University of Michigan
6

Federal-Mogul Corporation
5

University of Toledo
5

Department of Mechanical Engineering, Oakland University
4

The University of Toledo
4

Mechanical Engineering Department, Oakland University
3

Vanderbilt University
3

Department of Mechanical & Industrial Engineering, University of Illinois at Chicago
2

Oakland Univ
2

US Army RDECOM
2

Univ. of Michigan-Ann Arbor
2

University of Alaska Fairbanks
2

Beijing Jiaotong University
1

Beta CAE Systems USA Inc
1

California Polytechnic State University, San Luis Obispo
1

Chrysler LLC
1

DaimlerChrysler Corporation
1

Department of Mechanical Engineering, The University of Michigan
1

Engineous Software Inc.
1

FCA US LLC
1

Federal Mogul Corp.
1

Fluid Mechanics Dept., General Motors Research Laboratories
1

General Motors Corporation
1

General Motors LLC
1

General Motors R&D and Planning
1

Honeywell Corporation
1

McGill University
1

Oakland University, Rochester
1

Oakland University
1

TARDEC
1

Transportation Research Institute
1

US Army
1

US Army, TARDEC
1

Univ. of Michigan - Ann Arbor
1

University of Cyprus, Nicosia, Cyprus
1

University of Michigan, Ann Arbor
1