Results

Viewing 1 to 30 of 78
2017-03-28
Journal Article
2017-01-0197
Vasiliki Tsianika, Vasileios Geroulas, Zissimos Mourelatos, Igor Baseski
Abstract Fatigue life estimation, reliability and durability are important in acquisition, maintenance and operation of vehicle systems. Fatigue life is random because of the stochastic load, the inherent variability of material properties, and the uncertainty in the definition of the S-N curve. The commonly used fatigue life estimation methods calculate the mean (not the distribution) of fatigue life under Gaussian loads using the potentially restrictive narrow-band assumption. In this paper, a general methodology is presented to calculate the statistics of fatigue life for a linear vibratory system under stationary, non-Gaussian loads considering the effects of skewness and kurtosis. The input loads are first characterized using their first four moments (mean, standard deviation, skewness and kurtosis) and a correlation structure equivalent to a given Power Spectral Density (PSD).
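
To make the load-characterization step concrete, here is a minimal sketch, assuming an invented one-sided PSD and frequency band (none of these values are from the paper): a stationary Gaussian record is synthesized by the random-phase spectral-representation method and its first four moments are estimated. The paper's non-Gaussian translation to target skewness and kurtosis is not reproduced.

```python
# Sketch only: synthesize a stationary Gaussian load from an assumed PSD and
# estimate its first four moments (mean, std, skewness, kurtosis).
import numpy as np
from scipy import stats

def gaussian_from_psd(psd_vals, freqs, t, rng):
    """Random-phase spectral representation of a stationary Gaussian process."""
    df = freqs[1] - freqs[0]
    amps = np.sqrt(2.0 * psd_vals * df)                  # component amplitudes
    phases = rng.uniform(0.0, 2.0 * np.pi, freqs.size)
    return np.sum(amps * np.cos(2.0 * np.pi * np.outer(t, freqs) + phases), axis=1)

rng = np.random.default_rng(0)
freqs = np.linspace(0.1, 20.0, 400)                      # Hz, assumed band
psd_vals = 1.0 / (1.0 + (freqs / 5.0) ** 4)              # assumed PSD shape
t = np.linspace(0.0, 120.0, 12_000)
x = gaussian_from_psd(psd_vals, freqs, t, rng)
print("mean", x.mean(), "std", x.std(),
      "skewness", stats.skew(x), "kurtosis", stats.kurtosis(x, fisher=False))
```
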
2017-03-28
Journal Article
2017-01-0206
Monica Majcher, Zissimos Mourelatos, Vasiliki Tsianika
Abstract Recent developments in time-dependent reliability have introduced the concept of a composite limit state. The composite limit state method can be used to calculate the time-dependent probability of failure for dynamic systems with limit-state functions that depend on input random variables and input random processes and are explicit in time. The probability of failure can be calculated exactly using the composite limit state if the instantaneous limit states are linear, forming an open or closed polytope, and are functions of only two random variables. In this work, the restriction on the number of random variables is lifted. The proposed algorithm is accurate and efficient for linear instantaneous limit-state functions of any number of random variables. An example on the design of a hydrokinetic turbine blade under time-dependent river flow load demonstrates the accuracy of the proposed general composite limit state approach.
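
A brute-force Monte Carlo cross-check of the composite limit state idea, using invented linear instantaneous limit states in three random variables (more than the two allowed by the earlier restriction); failure occurs when any instantaneous limit state is violated over the time grid:

```python
# Sketch: time-dependent probability of failure as the probability that the
# composite (union) of linear instantaneous limit states is violated.
import numpy as np

rng = np.random.default_rng(1)
times = np.linspace(0.0, 10.0, 60)
# Assumed linear instantaneous limit states g_t(x) = a(t).x - b(t):
a = np.stack([np.cos(0.2 * times), np.sin(0.2 * times),
              0.3 * np.ones_like(times)], axis=1)
b = -3.0 - 0.05 * times

X = rng.standard_normal((200_000, 3))        # standard normal random variables
g = X @ a.T - b                              # shape (n_samples, n_times)
pf = np.mean((g < 0.0).any(axis=1))          # composite limit state: union over time
print("time-dependent Pf over [0, 10]:", pf)
```
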
2017-03-28
Journal Article
2017-01-0207
Dimitrios Papadimitriou, Zissimos Mourelatos
Abstract A new second-order Saddlepoint Approximation (SA) method for structural reliability analysis is introduced. The Mean-value Second-order Saddlepoint Approximation (MVSOSA) is presented as an extension to the Mean-value First-order Saddlepoint Approximation (MVFOSA). The proposed method is based on a second-order Taylor expansion of the limit state function around the mean value of the input random variables. It requires not only the first but also the second-order sensitivity derivatives of the limit state function. If sensitivity analysis must be avoided because of computational cost, a quadrature integration approach, based on sparse grids, is also presented and linked to the saddlepoint approximation (SGSA - Sparse Grid Saddlepoint Approximation). The SGSA method is compared with the first and second-order SA methods in terms of accuracy and efficiency. The proposed MVSOSA and SGSA methods are used in the reliability analysis of two examples.
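
The building block of MVSOSA is the second-order Taylor expansion of the limit state about the input mean, assembled from first- and second-order sensitivity derivatives. The sketch below constructs that quadratic surrogate with finite differences for an invented limit state; the saddlepoint step applied to the quadratic is not reproduced, and a Monte Carlo run on the surrogate serves only as a sanity check.

```python
# Sketch: second-order Taylor surrogate of a limit state about the mean point.
import numpy as np

def quadratic_surrogate(g, mu, h=1e-4):
    """Return g(mu) + grad.(x-mu) + 0.5 (x-mu).H.(x-mu) via finite differences."""
    n = mu.size
    g0 = g(mu)
    grad = np.zeros(n)
    H = np.zeros((n, n))
    for i in range(n):
        e = np.zeros(n); e[i] = h
        grad[i] = (g(mu + e) - g(mu - e)) / (2.0 * h)
        H[i, i] = (g(mu + e) - 2.0 * g0 + g(mu - e)) / h**2
        for j in range(i + 1, n):
            f = np.zeros(n); f[j] = h
            H[i, j] = H[j, i] = (g(mu + e + f) - g(mu + e - f)
                                 - g(mu - e + f) + g(mu - e - f)) / (4.0 * h * h)
    return lambda x: g0 + (x - mu) @ grad + 0.5 * (x - mu) @ H @ (x - mu)

g = lambda x: 4.0 - x[0] ** 2 - 0.5 * x[1]       # invented limit state
mu = np.array([1.5, 1.0])
q = quadratic_surrogate(g, mu)

rng = np.random.default_rng(2)
X = rng.normal(mu, 0.5, size=(100_000, 2))       # assumed input distribution
print("Pf on surrogate:", np.mean([q(x) < 0.0 for x in X]))
```
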
2017-03-28
Journal Article
2017-01-0194
Santosh Patil, Dimitrios Papadimitriou, Zissimos Mourelatos
Abstract A methodology for time-dependent reliability-based design optimization of vibratory systems with random parameters under stationary excitation is presented. The time-dependent probability of failure is computed using an integral equation which involves up-crossing and joint up-crossing rates. The total probability theorem addresses the presence of the system random parameters and a sparse grid quadrature method calculates the integral of the total probability theorem efficiently. The sensitivity derivatives of the time-dependent probability of failure with respect to the design variables are computed using finite differences. The Modified Combined Approximations (MCA) reanalysis method is used to reduce the overall computational cost from repeated evaluations of the system frequency response or equivalently impulse response function. The method is applied to the shape optimization of a vehicle frame under stochastic loading.
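
For orientation, a simpler baseline than the paper's integral equation is the Poisson (independent up-crossings) approximation Pf(T) ≈ 1 − exp(−νT), with the up-crossing rate ν counted from simulated response records. A sketch with an invented lightly damped oscillator response; the joint up-crossing rates in the paper correct exactly this approximation:

```python
# Sketch: estimate the mean up-crossing rate of a level from a response record,
# then apply the Poisson approximation Pf(T) ~ 1 - exp(-rate * T).
import numpy as np

def upcrossing_rate(x, dt, level):
    """Average rate at which x(t) crosses `level` from below."""
    up = (x[:-1] < level) & (x[1:] >= level)
    return up.sum() / (dt * (x.size - 1))

rng = np.random.default_rng(3)
dt, n = 0.01, 200_000
# Invented stationary response: lightly damped oscillator under random forcing.
x = np.zeros(n)
v = 0.0
wn, zeta = 2.0 * np.pi, 0.05
for i in range(1, n):
    acc = -2.0 * zeta * wn * v - wn**2 * x[i - 1] + rng.normal(0.0, 50.0)
    v += acc * dt
    x[i] = x[i - 1] + v * dt

level, T = 3.0 * x.std(), 10.0
rate = upcrossing_rate(x, dt, level)
print("up-crossing rate:", rate, " Poisson Pf(T):", 1.0 - np.exp(-rate * T))
```
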
2017-03-28
Journal Article
2017-01-0209
Themistoklis Koutsellis, Zissimos Mourelatos, Mohammad Hijawi, Huairui Guo, Matthew Castanier
Abstract Warranty forecasting of repairable systems is very important for manufacturers of mass-produced systems. It is desired to predict the Expected Number of Failures (ENF) after a censoring time using failure data collected before the censoring time. Moreover, systems may be produced with a defective component, resulting in extensive warranty costs even after the defective component is detected and replaced with a new design. In this paper, we present a forecasting method to predict the ENF of a repairable system using observed data to calibrate a Generalized Renewal Process (GRP) model. Manufacturing of products may exhibit different production patterns with different failure statistics through time. For example, vehicles produced in different months may have different failure intensities because of supply chain differences or different skills of production workers.
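
A forward-simulation sketch of one common GRP formulation (Kijima Type I virtual age) with an assumed underlying Weibull distribution and restoration factor; the calibration to observed warranty data, which is the paper's focus, is not shown:

```python
# Sketch: expected number of failures (ENF) up to a horizon for a Kijima
# Type I generalized renewal process with an assumed Weibull baseline.
import numpy as np

def simulate_grp(beta, eta, q, horizon, n_runs, rng):
    enf = 0
    for _ in range(n_runs):
        t, v = 0.0, 0.0              # calendar time, virtual age
        while True:
            u = rng.uniform()
            # Inverse transform of the conditional Weibull survival S(v+x)/S(v):
            x = eta * ((v / eta) ** beta - np.log(u)) ** (1.0 / beta) - v
            t += x
            if t > horizon:
                break
            v += q * x               # q=0: as good as new, q=1: as bad as old
            enf += 1
    return enf / n_runs

rng = np.random.default_rng(4)
print("ENF:", simulate_grp(beta=1.8, eta=1000.0, q=0.4,
                           horizon=3000.0, n_runs=5000, rng=rng))
```
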
2017-03-28
Technical Paper
2017-01-0205
Annette Skowronska, Vijitashwa Pandey, Kevin Weinert, David Gorsich, Zissimos Mourelatos
Abstract Reliability and resiliency (R&R) definitions differ depending on the system under consideration. Generally, each engineering sector defines relevant R&R metrics pertinent to their system. While this can impede cross-disciplinary engineering projects as well as research, it is a necessary strategy to capture all the relevant system characteristics. This paper highlights the difficulties associated with defining the performance of such systems, using smart microgrids as an example. Further, it develops metrics and definitions, based on utility theory, that are useful in assessing their performance. A microgrid must not only anticipate load conditions but also tolerate partial failures and continue to operate optimally. Many of these failures happen infrequently but unexpectedly and are therefore hard to plan for. We discuss real-life failure scenarios and show how the proposed definitions and metrics are beneficial.
2016-04-05
Journal Article
2016-01-0316
Dorin Drignei, Zissimos Mourelatos, Ervisa Kosova, Jingwen Hu, Matthew Reed, Jonathan Rupp, Rebekah Gruber, Risa Scherer
Abstract We have recently obtained experimental data and used them to develop computational models to quantify occupant impact responses and injury risks for military vehicles during frontal crashes. The number of experimental tests and model runs is, however, relatively small due to their high cost. While this is true across the auto industry, it is particularly critical for the Army and other government agencies operating under tight budget constraints. In this study we investigate, through statistical simulations, how the injury risk would vary if a large number of experimental tests were conducted. We show that the injury risk distribution is skewed to the right, implying that, although most physical tests result in a small injury risk, there are occasional physical tests for which the injury risk is extremely large. We compute the probabilities of such events and use them to identify optimum design conditions to minimize such probabilities.
2016-04-05
Journal Article
2016-01-1395
Syed F. Haider, Zissimos Mourelatos
Abstract To improve fuel economy, there is a trend in the automotive industry to use lightweight, high-strength materials. Automotive body structures are composed of several panels which must be downsized to reduce weight. Because this affects NVH (Noise, Vibration and Harshness) performance, engineers are challenged to recover the panel stiffness lost to down-gaging in order to reduce the structure-borne noise transmitted through the lightweight panels in the frequency range of 100-300 Hz, where most of the booming and low-to-medium frequency noise occurs. The loss in performance can be recovered by optimizing the panel geometry using beading or damping treatment. Topography optimization is a special class of shape optimization for changing sheet-metal shapes by introducing beads. A large number of design variables can be handled and the process is easy to set up in commercial codes. However, optimization methods are computationally intensive because of repeated full-order analyses.
2016-04-05
Journal Article
2016-01-1318
Syed F. Haider, Zissimos Mourelatos
Abstract Finite element analysis is a standard tool for deterministic or probabilistic design optimization of dynamic systems. The optimization process requires repeated eigenvalue analyses, which can be computationally expensive. Several reanalysis techniques have been proposed to reduce the computational cost, including Parametric Reduced Order Modeling (PROM), Combined Approximations (CA), and the Modified Combined Approximations (MCA) method. Although the cost of reanalysis is substantially reduced, it can still be high for models with a large number of degrees of freedom and a large number of design variables. Reanalysis methods use a basis composed of eigenvectors from both the baseline and the modified designs, which are in general linearly dependent. To eliminate the linear dependency and improve accuracy, Gram-Schmidt orthonormalization is employed, which is itself costly.
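
The orthonormalization step can be pictured as follows: the combined basis of baseline and modified eigenvectors is nearly linearly dependent, and the dependent directions must be removed. This sketch uses QR factorization as the numerically stable equivalent of Gram-Schmidt, on an invented basis made almost dependent on purpose:

```python
# Sketch: orthonormalize a combined reanalysis basis (baseline + modified
# eigenvectors) and drop near-dependent directions via QR factorization.
import numpy as np

rng = np.random.default_rng(5)
phi_base = rng.normal(size=(500, 8))                     # invented baseline modes
phi_mod = phi_base + 1e-12 * rng.normal(size=(500, 8))   # nearly dependent set

B = np.hstack([phi_base, phi_mod])                       # combined basis
Q, R = np.linalg.qr(B)
keep = np.abs(np.diag(R)) > 1e-8 * np.abs(R[0, 0])       # drop dependent columns
T = Q[:, keep]                                           # orthonormal reduced basis
print("columns kept:", keep.sum(), "of", B.shape[1])
```
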
2016-04-05
Journal Article
2016-01-1338
Syed F. Haider, Zissimos Mourelatos
Abstract Weight reduction is very important in automotive design because of stringent demands on fuel economy. Structural optimization of dynamic systems using finite element (FE) analysis plays an important role in reducing weight while simultaneously delivering a product that meets all functional requirements for durability, crash and NVH. With advancing computer technology, the demand for solving large FE models has grown. Optimization is, however, costly due to repeated full-order analyses. Reanalysis methods can be used in structural vibrations to reduce the analysis cost from repeated eigenvalue analyses for both deterministic and probabilistic problems. Several reanalysis techniques have been introduced over the years, including Parametric Reduced Order Modeling (PROM), Combined Approximations (CA) and the Epsilon algorithm, among others.
2015-04-14
Journal Article
2015-01-0425
Monica Majcher, Zissimos P. Mourelatos, Vasileios Geroulas, Igor Baseski, Amandeep Singh
Abstract Using the total probability theorem, we propose a method to calculate the failure rate of a linear vibratory system with random parameters excited by stationary Gaussian processes. The response of such a system is non-stationary because of the randomness of the input parameters. A space-filling design, such as optimal symmetric Latin hypercube sampling or maximin, is first used to sample the input parameter space. For each design point, the output process is stationary and Gaussian. We present two approaches to calculate the corresponding conditional probability of failure. A Kriging metamodel is then created between the input parameters and the output conditional probabilities allowing us to estimate the conditional probabilities for any set of input parameters. The total probability theorem is finally applied to calculate the time-dependent probability of failure and the failure rate of the dynamic system. The proposed method is demonstrated using a vibratory system.
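
An end-to-end toy version of this workflow, with an invented conditional failure-probability function standing in for the per-design-point stationary analysis, scikit-learn's Gaussian process regressor as the kriging metamodel, and a plain Monte Carlo average implementing the total probability theorem:

```python
# Sketch: total probability theorem with a kriging metamodel over the random
# system parameters. The per-point "conditional Pf" is an invented stand-in
# for the stationary-response calculation done at each design point.
import numpy as np
from scipy.stats import qmc
from sklearn.gaussian_process import GaussianProcessRegressor

def conditional_pf(theta):                    # stand-in for the real analysis
    return 1.0e-3 * np.exp(1.5 * theta[:, 0] - 0.8 * theta[:, 1])

# 1) Space-filling design over two random parameters (scaled to [0,1]^2):
design = qmc.LatinHypercube(d=2, seed=6).random(n=30)
y = np.log(conditional_pf(design))            # fit the log-probability

# 2) Kriging metamodel: parameters -> conditional Pf.
gp = GaussianProcessRegressor(normalize_y=True).fit(design, y)

# 3) Total probability theorem: average conditional Pf over the parameter PDF.
rng = np.random.default_rng(6)
theta = rng.uniform(0.0, 1.0, size=(50_000, 2))   # assumed parameter distribution
pf = np.mean(np.exp(gp.predict(theta)))
print("time-dependent Pf via total probability:", pf)
```
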
2014-04-01
Journal Article
2014-01-0717
Igor Baseski, Dorin Drignei, Zissimos Mourelatos, Monica Majcher
We propose a new metamodeling method to characterize the output (response) random process of a dynamic system with random parameters, excited by input random processes. The metamodel can then be used to efficiently estimate the time-dependent reliability of the dynamic system using analytical or simulation-based methods. The metamodel is constructed by decomposing the input random processes using principal components or wavelets and then using a few simulations to estimate the distributions of the decomposition coefficients. A similar decomposition is also performed on the output random process. A kriging model is then established between the input and output decomposition coefficients and subsequently used to quantify the output random process corresponding to a realization of the input random parameters and random processes. What distinguishes our approach from others in metamodeling is that the system input is not deterministic but random.
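
A toy version of the decomposition step: an ensemble of input-process realizations is decomposed by principal components (here via the SVD), so that a few coefficients summarize each path. The ensemble, smoothing kernel, and truncation level are all invented, and the kriging map between input and output coefficients is omitted:

```python
# Sketch: principal-component decomposition of an ensemble of input-process
# realizations; a few coefficients per realization capture most of the energy.
import numpy as np

rng = np.random.default_rng(7)
n, m = 200, 500                               # realizations x time points
w = rng.normal(size=(n, m))
kernel = np.exp(-0.5 * (np.arange(-25, 26) / 8.0) ** 2)
kernel /= kernel.sum()                        # invented correlation structure
X = np.apply_along_axis(lambda r: np.convolve(r, kernel, mode="same"), 1, w)

Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
k = 10
coeffs = U[:, :k] * s[:k]                     # per-realization coefficients
recon = X.mean(axis=0) + coeffs @ Vt[:k]      # truncated reconstruction
print("retained energy:", (s[:k] ** 2).sum() / (s ** 2).sum())
```
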
2014-04-01
Journal Article
2014-01-0719
Vijitashwa Pandey, Zissimos Mourelatos, Matthew Castanier
Implications of decision analysis (DA) on engineering design are important and well-documented. However, widespread adoption has not occurred. To that end, the authors recently proposed decision topologies (DT) as a visual method for representing decision situations and proved that they are entirely consistent with normative decision analysis. This paper addresses the practical issue of assessing the DTs of a designer using their responses. As in classical DA, this step is critical to encoding the decision maker's (DM's) preferences so that further analysis and mathematical optimization can be performed on the correct set of preferences. We show how multi-attribute DTs can be directly assessed from DM responses. Furthermore, we show that preferences under uncertainty can be trivially incorporated and that topologies can be constructed from single-attribute topologies, similar to multilinear functions in utility analysis. This incremental construction simplifies the process of topology construction.
2014-04-01
Journal Article
2014-01-0716
Vijitashwa Pandey, Annette Skowronska, Zissimos Mourelatos, David Gorsich, Matthew Castanier
The reliability theory of repairable systems is vastly different from that of non-repairable systems. The authors have recently proposed a ‘decision-based’ framework to design and maintain repairable systems for optimal performance and reliability using a set of metrics such as minimum failure free period, number of failures in planning horizon (lifecycle), and cost. The optimal solution includes the initial design, the system maintenance throughout the planning horizon, and the protocol to operate the system. In this work, we extend this idea by incorporating flexibility and demonstrate our approach using a smart charging electric microgrid architecture. The flexibility is realized by allowing the architecture to change with time. Our approach “learns” the working characteristics of the microgrid. We use actual load and supply data over a short time to quantify the load and supply random processes and also establish the correlation between them.
2013-04-08
Journal Article
2013-01-0606
Vijitashwa Pandey, Zissimos Mourelatos
The classical definition of reliability may not be readily applicable to repairable systems. Commonly used concepts such as the Mean Time Between Failures (MTBF) and availability can be misleading because they only report limited information about the system functionality. In this paper, we discuss a set of metrics that can help with the design of repairable systems. Based on a set of desirable properties for these metrics, we select a minimal set of metrics (MSOM) which provides the most information about a system with the smallest number of metrics. The metric of Minimum Failure Free Period (MFFP) with a given probability generalizes MTBF because the latter is simply the MFFP with a 0.5 probability. It also generalizes availability because, coupled with repair times, it provides a clearer picture of the length of the expected uninterrupted service. Two forms of MFFP are used: transient and steady state.
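
The MFFP with probability p can be read directly off the time-to-first-failure distribution as its (1 − p) quantile: the period survived with probability at least p. A sketch with assumed Weibull failure times:

```python
# Sketch: Minimum Failure Free Period (MFFP) at probability p as the (1-p)
# quantile of time-to-failure samples.
import numpy as np

rng = np.random.default_rng(8)
ttf = rng.weibull(1.5, size=100_000) * 2000.0   # assumed TTF samples, hours

def mffp(samples, p):
    """Failure-free period guaranteed with probability p."""
    return np.quantile(samples, 1.0 - p)

print("MFFP@0.9:", mffp(ttf, 0.9),
      " MFFP@0.5:", mffp(ttf, 0.5),
      " sample mean TTF:", ttf.mean())
```
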
2013-04-08
Journal Article
2013-01-0943
Efstratios Nikolaidis, Mahdi Norouzi, Zissimos Mourelatos, Vijitashwa Pandey
Importance Sampling is a popular method for reliability assessment. Although it is significantly more efficient than standard Monte Carlo simulation if a suitable sampling distribution is used, in many design problems it is too expensive. The authors have previously proposed a method to manage the computational cost in standard Monte Carlo simulation that views design as a choice among alternatives with uncertain reliabilities. Information from simulation has value only if it helps the designer make a better choice among the alternatives. This paper extends their method to Importance Sampling. First, the designer estimates the prior probability density functions of the reliabilities of the alternative designs and calculates the expected utility of the choice of the best design. Subsequently, the designer estimates the likelihood function of the probability of failure by performing an initial simulation with Importance Sampling.
2013-04-08
Journal Article
2013-01-0947
Vijitashwa Pandey, Zissimos Mourelatos
Our recent work has shown that the representation of systems using a reliability block diagram can be used as a decision-making tool. In decision making, we call these block diagrams decision topologies. In this paper, we generalize the results and show that decision topologies can be used to make many engineering decisions and can in fact replace decision analysis for most decisions. We also provide a meta-proof that the proposed method using decision topologies is entirely consistent with decision analysis in the limit. The main advantages of the method are that (1) it provides a visual representation of a decision situation, (2) it can easily model tradeoffs, (3) it can incorporate binary attributes, (4) it can model preferences with limited information, and (5) it can be used in a low-fidelity sense to quickly make a decision.
2013-04-08
Technical Paper
2013-01-1385
Dorin Drignei, Zissimos Mourelatos, Vijitashwa Pandey, Igor Baseski, Michael Kokkolaras, Amandeep Singh, David Lamb
Design optimization often relies on computational models, which are subjected to a validation process to ensure their accuracy. Because validation of computer models in the entire design space can be costly, we have previously proposed an approach where design optimization and model validation are concurrently performed using a sequential approach with variable-size local domains. We used test data and statistical bootstrap methods to size each local domain, where the prediction model is considered validated and where design optimization is performed. The method proceeds iteratively until the optimum design is obtained. This method, however, requires test data to be available in each local domain along the optimization path. In this paper, we refine our methodology by using polynomial regression to predict the size and shape of a local domain at some steps along the optimization process without using test data.
2012-04-16
Journal Article
2012-01-0070
Jing Li, Zissimos Mourelatos, Amandeep Singh
Reliability is an important engineering requirement for consistently delivering acceptable product performance through time. It also affects the scheduling of preventive maintenance. Reliability usually degrades with time, increasing the lifecycle cost due to more frequent failures, which result in increased warranty costs, costly repairs and loss of market share. In a lifecycle-cost-based design, we must account for product quality and preventive maintenance using time-dependent reliability. Quality is a measure of our confidence that the product conforms to specifications as it leaves the factory. For a repairable system, preventive maintenance is scheduled to avoid failures, unnecessary production loss and safety violations. This article proposes a methodology to obtain the optimal scheduling for preventive maintenance using time-dependent reliability principles.
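
As one concrete instance of maintenance scheduling from time-dependent reliability, the sketch below uses the classic age-replacement cost-rate criterion C(T) = [Cp R(T) + Cf (1 − R(T))] / ∫₀ᵀ R(t) dt with an assumed Weibull reliability model. This is a textbook stand-in, not the paper's methodology:

```python
# Sketch: preventive-maintenance interval minimizing the long-run cost rate
# under the age-replacement policy, with an assumed Weibull reliability model.
import numpy as np

beta, eta = 2.5, 1000.0                      # assumed Weibull degradation model
Cp, Cf = 1.0, 10.0                           # preventive vs corrective cost
R = lambda t: np.exp(-(t / eta) ** beta)

def cost_rate(T, n_grid=2000):
    grid = np.linspace(0.0, T, n_grid)
    uptime = np.mean(R(grid)) * T            # ~ integral of R(t) over [0, T]
    return (Cp * R(T) + Cf * (1.0 - R(T))) / uptime

Ts = np.linspace(50.0, 2500.0, 500)
rates = np.array([cost_rate(T) for T in Ts])
print("optimal interval:", Ts[rates.argmin()], " cost rate:", rates.min())
```
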
2012-04-16
Journal Article
2012-01-0064
Vijitashwa Pandey, Zissimos Mourelatos
In this article, we present an approach to identify the system topology using simulation for reliability calculations. The system topology describes how all components in a system are functionally connected. Most reliability engineering literature assumes either that the system topology is known, and therefore all failure modes can be deduced, or, if the system topology is not known, that we are only interested in identifying the dominant failure modes. The authors contend that we should try to extract as much information about the system topology from failure or success information of a system as possible. This will not only identify the dominant failure modes but will also provide an understanding of how the components are functionally connected, allowing for more complicated analyses, if needed. We use an evolutionary approach where system topologies are generated at random and then tested against failure or success data. The topologies evolve based on how consistent they are with the test data.
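
A stripped-down stand-in for the evolutionary search: with three components, candidate structure functions can simply be enumerated and scored by their consistency with observed component-state/system-state records. All structures and data here are invented:

```python
# Sketch: score candidate system topologies (structure functions) against
# observed pass/fail data; the best-scoring topology explains the system.
import numpy as np

rng = np.random.default_rng(9)
candidates = {
    "series":             lambda s: s.all(axis=1),
    "parallel":           lambda s: s.any(axis=1),
    "2-of-3":             lambda s: s.sum(axis=1) >= 2,
    "1 series (2 par 3)": lambda s: s[:, 0] & (s[:, 1] | s[:, 2]),
}
true_phi = candidates["1 series (2 par 3)"]   # hidden "true" topology

states = rng.uniform(size=(300, 3)) < 0.8     # observed component states
sys_obs = true_phi(states)                    # observed system outcomes
for name, phi in candidates.items():
    print(name, (phi(states) == sys_obs).mean())   # consistency with test data
```
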
2012-04-16
Technical Paper
2012-01-0914
Vijitashwa Pandey, Zissimos Mourelatos, Efstratios Nikolaidis, Matthew Castanier, David Lamb
Reaching a system-level reliability target is an inverse problem: component-level reliabilities are determined for a required system-level reliability. Because this inverse problem does not have a unique solution, one approach is to trade off system reliability against cost and to allow the designer to select a design with a target system reliability using his/her preferences. In this case, the component reliabilities are readily available from the calculation of the reliability-cost tradeoff. Arriving at the set of solutions to be traded off presents two problems. First, the system reliability calculation is based on repeated system simulations in which each system state, indicating which components work and which have failed, is tested to determine whether it causes system failure. Second, eliciting and encoding the decision maker's preferences is extremely difficult because of uncertainty in modeling those preferences.
2012-04-16
Journal Article
2012-01-0226
Dorin Drignei, Zissimos Mourelatos, Vijitashwa Pandey, Michael Kokkolaras
Design optimization often relies on computational models, which are subjected to a validation process to ensure their accuracy. Because validation of computer models in the entire design space can be costly, a recent approach was proposed where design optimization and model validation were concurrently performed using a sequential approach with both fixed and variable-size local domains. The variable-size approach used parametric distributions such as Gaussian to quantify the variability in test data and model predictions, and a maximum likelihood estimation to calibrate the prediction model. Also, a parametric bootstrap method was used to size each local domain. In this article, we generalize the variable-size approach, by not assuming any distribution such as Gaussian. A nonparametric bootstrap methodology is instead used to size the local domains. We expect its generality to be useful in applications where distributional assumptions are difficult to verify, or not met at all.
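
The nonparametric ingredient in isolation: a bootstrap percentile interval on the test-minus-model bias, computed by resampling the residuals themselves with no distributional assumption. Interval widths of this kind are what drive the local-domain sizing; the residuals below are invented:

```python
# Sketch: nonparametric bootstrap interval on model bias -- no Gaussian (or
# any other parametric) assumption on the validation residuals.
import numpy as np

rng = np.random.default_rng(10)
resid = rng.standard_t(df=5, size=30) * 0.1      # invented test-minus-model residuals
boot = np.array([rng.choice(resid, resid.size, replace=True).mean()
                 for _ in range(5000)])
lo, hi = np.percentile(boot, [2.5, 97.5])        # percentile bootstrap interval
print("95% bootstrap interval on model bias:", lo, hi)
```
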
2012-04-16
Technical Paper
2012-01-0915
Efstratios Nikolaidis, Vijitashwa Pandey, Zissimos Mourelatos
Monte Carlo simulation is a popular tool for reliability assessment because of its robustness and ease of implementation. A major concern with this method is its computational cost; standard Monte Carlo simulation requires quadrupling the number of replications for halving the standard deviation of the estimated failure probability. Efforts to increase efficiency focus on intelligent sampling procedures and methods for efficient calculation of the performance function of a system. This paper proposes a new method to manage cost that views design as a decision among alternatives with uncertain reliabilities. Information from a simulation has value only if it enables the designer to make a better choice among the alternative options. Consequently, the value of information from the simulation is equal to the gain from using this information to improve the decision. Using the method, a designer can determine the number of replications that are worth performing.
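
The quadrupling claim follows from the binomial standard error sqrt(p(1 − p)/N) of an estimated failure probability, as a quick check shows (the value of p is assumed):

```python
# Check: the standard error of an estimated failure probability scales as
# 1/sqrt(N), so halving it requires quadrupling the replications.
import numpy as np

p = 1.0e-3                                   # assumed true failure probability
for n in (10_000, 40_000, 160_000):
    print(n, np.sqrt(p * (1.0 - p) / n))     # halves at each 4x step
```
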
2011-05-17
Technical Paper
2011-01-1628
Hejie Lin, Turgay Bengisu, Zissimos Mourelatos
Styrene-Butadiene Rubber (SBR), a copolymer of butadiene and styrene, is widely used in the automotive industry due to its high durability and resistance to abrasion, oils and oxidation. Some of the common applications include tires, vibration isolators, and gaskets, among others. This paper characterizes the dynamic behavior of SBR and discusses the suitability of a viscoelastic model of elastomers, known as the Kelvin model, from a mathematical and physical point of view. An optimization algorithm is used to estimate the parameters of the Kelvin model. The resulting model was shown to produce reasonable approximations of the measured dynamic stiffness. The model was also used to calculate the self-heating of the elastomer due to energy dissipation by the viscous damping components in the model. Developing such a predictive capability is essential in understanding the dynamic behavior of elastomers, considering that their dynamic stiffness can in general depend on temperature.
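
A sketch of the parameter-estimation step, assuming the Kelvin model's complex dynamic stiffness K*(ω) = k + iωc and invented "measured" data; a least-squares fit recovers the spring and dashpot constants:

```python
# Sketch: least-squares fit of Kelvin-model parameters (spring k, dashpot c)
# to a measured dynamic stiffness magnitude |K*(w)| = |k + i*w*c|.
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(11)
omega = 2.0 * np.pi * np.linspace(5.0, 200.0, 40)    # rad/s, assumed test band
k_true, c_true = 2.0e5, 150.0                        # invented "true" rubber
measured = np.abs(k_true + 1j * omega * c_true) \
           * (1.0 + 0.02 * rng.normal(size=omega.size))

fit = least_squares(lambda p: np.abs(p[0] + 1j * omega * p[1]) - measured,
                    x0=[1.0e5, 50.0])
k_hat, c_hat = fit.x
print("k =", k_hat, " c =", c_hat)
# Per-cycle dissipation at amplitude X0 (the self-heating source): pi*c*w*X0**2
```
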
2011-04-12
Journal Article
2011-01-0728
Amandeep Singh, Zissimos Mourelatos, Efstratios Nikolaidis
Reliability is an important engineering requirement for consistently delivering acceptable product performance through time. As time progresses, the product may fail due to time-dependent operating conditions and material properties, component degradation, etc. The reliability degradation with time may increase the lifecycle cost due to potential warranty costs, repairs and loss of market share. Reliability is the probability that the system will perform its intended function successfully for a specified time interval. In this work, we consider the first-passage reliability, which accounts for the first-time failure of non-repairable systems. Methods available in the literature provide an upper bound on the true reliability, which may overestimate the true value considerably. Monte Carlo simulations are accurate but computationally expensive.
2011-04-12
Journal Article
2011-01-0725
Zissimos Mourelatos, Jing Li, Vijitashwa Pandey, Amandeep Singh, Matthew Castanier, David A. Lamb
Understanding reliability is critical in design, maintenance and durability analysis of engineering systems. A reliability simulation methodology is presented in this paper for vehicle fleets using limited data. The method can be used to estimate the reliability of non-repairable as well as repairable systems. It can optimally allocate, based on a target system reliability, individual component reliabilities using a multi-objective optimization algorithm. The algorithm establishes a Pareto front that can be used for optimal tradeoff between reliability and the associated cost. The method uses Monte Carlo simulation to estimate the system failure rate and reliability as a function of time. The probability density functions (PDF) of the time between failures for all components of the system are estimated using either limited data or a user-supplied MTBF (mean time between failures) and its coefficient of variation.
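
A sketch of the user-supplied-MTBF path: a time-between-failures distribution is matched to a given MTBF and coefficient of variation (the lognormal choice and all values are assumptions), and fleet failures are simulated by superposing renewal processes:

```python
# Sketch: build a TBF distribution from MTBF and CoV (lognormal assumed),
# then Monte Carlo the fleet failure rate over a time horizon.
import numpy as np

mtbf, cov = 500.0, 0.8                       # assumed user inputs, hours
sigma = np.sqrt(np.log(1.0 + cov ** 2))      # lognormal parameters matching
mu = np.log(mtbf) - 0.5 * sigma ** 2         # the given mean and CoV

rng = np.random.default_rng(12)
horizon, fleet = 2000.0, 300
failures = []
for _ in range(fleet):                       # one renewal process per vehicle
    t = rng.lognormal(mu, sigma)
    while t < horizon:
        failures.append(t)
        t += rng.lognormal(mu, sigma)
print("fleet failure rate (per vehicle-hour):",
      len(failures) / (fleet * horizon))
```
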
2011-04-12
Journal Article
2011-01-1080
Vijitashwa Pandey, Efstratios Nikolaidis, Zissimos Mourelatos
Multi-attribute decision making and multi-objective optimization complement each other. Often, while making design decisions involving multiple attributes, a Pareto front is generated using a multi-objective optimizer. The end user then chooses the optimal design from the Pareto front based on his/her preferences. This seemingly simple methodology requires significant modification if uncertainty is present. We explore two kinds of uncertainty in this paper: uncertainty in the decision variables, which we call inherent design problem (IDP) uncertainty, and uncertainty in the knowledge of the preferences of the decision maker, which we refer to as preference assessment (PA) uncertainty. From a purely utility-theoretic perspective, a rational decision maker maximizes his or her expected multi-attribute utility.
2011-04-12
Journal Article
2011-01-1081
Yibo Li, Efstratios Nikolaidis, Zissimos Mourelatos
This paper presents a methodology to evaluate and optimize discrete event systems, such as an assembly line or a call center. First, the methodology estimates the performance of a system for a single probability distribution of the inputs. Probabilistic Reanalysis (PRRA) uses this information to evaluate the effect of changes in the system configuration on its performance. PRRA is integrated with a program to optimize the system. The proposed methodology is dramatically more efficient than one requiring a new Monte Carlo simulation each time we change the system. We demonstrate the approach on a drilling center and an electronic parts factory.
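
The mechanism behind PRRA can be pictured as likelihood-ratio reweighting: one Monte Carlo run is reused to score modified input distributions without resampling. A minimal sketch with an invented scalar input and failure threshold:

```python
# Sketch: probabilistic reanalysis by likelihood-ratio reweighting -- a single
# baseline Monte Carlo run scores several modified system configurations.
import numpy as np
from scipy import stats

rng = np.random.default_rng(13)
x = rng.normal(0.0, 1.0, size=500_000)       # one baseline simulation
fail = x > 3.0                               # fixed performance evaluations

for mu in (0.0, 0.2, 0.5):                   # candidate modified input means
    w = stats.norm.pdf(x, mu, 1.0) / stats.norm.pdf(x, 0.0, 1.0)
    print("mu =", mu, " reweighted Pf =", np.mean(w * fail))
```
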
2011-04-12
Journal Article
2011-01-0243
Dorin Drignei, Zissimos Mourelatos, Michael Kokkolaras, Jing Li, Grzegorz Koscik
A common approach to the validation of simulation models focuses on validation throughout the entire design space. A more recent methodology validates designs as they are generated during a simulation-based optimization process. The latter method relies on validating the simulation model in a sequence of local domains. To improve its computational efficiency, this paper proposes an iterative process, where the size and shape of local domains at the current step are determined from a parametric bootstrap methodology involving maximum likelihood estimators of unknown model parameters from the previous step. Validation is carried out in the local domain at each step. The iterative process continues until the local domain does not change from iteration to iteration during the optimization process ensuring that a converged design optimum has been obtained.
2010-04-12
Journal Article
2010-01-0645
Ramon Kuczera, Zissimos Mourelatos, Efstratios Nikolaidis
A simulation-based, system reliability-based design optimization (RBDO) method is presented that can handle problems with multiple failure regions and correlated random variables. Copulas are used to represent the correlation. The method uses a Probabilistic Re-Analysis (PRRA) approach in conjunction with a trust-region optimization approach and local metamodels covering each trust region. PRRA calculates very efficiently the system reliability of a design by performing a single Monte Carlo (MC) simulation per trust region. Although PRRA is based on MC simulation, it calculates “smooth” sensitivity derivatives, allowing therefore, the use of a gradient-based optimizer. The PRRA method is based on importance sampling. It provides accurate results, if the support of the sampling PDF contains the support of the joint PDF of the input random variables. The sequential, trust-region optimization approach satisfies this requirement.