Abstract
Epistemic uncertainty is widespread in the reliability analysis of practical engineering products. Evidence theory is regarded as a powerful model for quantifying and analyzing epistemic uncertainty. However, its heavy computational burden, which essentially stems from the repeated extreme-value analyses of the limit-state function (LSF), has severely hindered its application to practical engineering problems. To address this issue, this paper proposes a novel method for evidence-theory-based reliability analysis (ETRA). It transforms the conventional ETRA problem into the classification of three classes of joint focal elements (JFEs) and then solves the classification problem effectively through a deep learning approach. The core of solving an ETRA problem is to determine whether a joint focal element is located in the reliable region, is located in the failure region, or intersects the LSF. A spatial position feature reduction and arrangement method is proposed to classify the JFEs, which effectively reduces the feature dimension while preserving the integrity and correlation of the features. A stacked autoencoders model is then constructed and updated by extracting the spatial position features of the sampled JFEs to achieve high-accuracy classification of the remaining JFEs, and the reliability interval is calculated efficiently from the classification results. Finally, the effectiveness of the proposed method is demonstrated using several numerical examples.
1 Introduction
Due to limitations in measurement technology and cognition, uncertainties widely exist in practical engineering problems, for example in external loads, material parameters, and boundary conditions, and they should be appropriately quantified and controlled to ensure the reliability and safety of a product. According to their nature and source, uncertainties fall into two categories: aleatory uncertainty and epistemic uncertainty [1,2]. Aleatory uncertainty is irreducible and describes the inherent variability of a physical system, which can be modeled by random variables or processes using probability theory [3,4]. At present, probability theory is the prevailing approach to aleatory uncertainty and has been successfully applied in a variety of industrial fields [5]. However, a probabilistic model requires a large number of statistical samples, which are usually not readily available [6]. In this case, epistemic uncertainty becomes involved. Epistemic uncertainty is defined as a lack of knowledge or information at some stage or activity of the modeling process [7]. Several representative theories, including convex models [8–10], possibility theory [11–13], fuzzy sets [14–16], and evidence theory [17–19], can be used to deal with epistemic uncertainty. Evidence theory is considered a promising complement to probability theory in epistemic uncertainty analysis [20].
Evidence theory was proposed by Dempster and advanced by his student Shafer [21]. It was first applied to the parameter uncertainty analysis of simple systems, where its strengths and weaknesses relative to probability theory were discussed thoroughly, and was then extended to more complex systems [22]. Bae et al. [23] proposed a highly efficient reliability analysis method for structures with epistemic uncertainty by establishing a multipoint approximation of the limit-state function (LSF). Cao et al. [24] established a basic solution framework for inverse uncertainty quantification based on evidence theory and estimated the influence of uncertainty on unknown parameters in imprecise structures. Bai et al. [25] compared different metamodeling techniques in evidence-theory-based reliability analysis (ETRA), including quadratic polynomials, radial basis functions, and other surrogate models, to explain the differences among them. In the above studies, the evidence variables were assumed to be independent, an assumption that may not be consistent with engineering practice. Several studies have therefore addressed the quantification of correlations between evidence variables in ETRA [26,27]. Jiang et al. [28] introduced an ellipsoid model to properly represent the correlation between evidence variables. Liu et al. [29] proposed a new evidence theory model to quantify both interdependent and independent evidence variables.
Although the important achievements introduced above exist, applying evidence theory to the uncertainty analysis of practical engineering problems remains a challenge. One of the most critical issues is the high computational cost caused by the repeated extreme-value analyses [30]. To address this problem, three mainstream approaches have been proposed. Yin et al. [31] proposed a response-surface-based method that approximates the LSF, so that an explicit approximate function can be established to solve the ETRA efficiently. However, in practical engineering problems the implicit LSF is generally strongly nonlinear, which severely hinders the application of the response surface method. Mourelatos et al. [32] proposed a focal element reduction method that improves computational efficiency by reducing the number of joint focal elements (JFEs) that require function evaluations. However, this method does not perform well in high-dimensional problems. Zhang et al. [33] proposed a probabilistic equivalence method that finds the most probable focal element (MPFE) and applies first-order and second-order Taylor series expansions to approximate the actual LSF around the MPFE. This method has high computational efficiency, but its accuracy is not satisfactory for strongly nonlinear LSFs.
In this paper, we solve the ETRA problem from the novel perspective of JFE classification, and a deep learning method based on the stacked autoencoders (SAE) model is proposed to classify the JFEs. The remainder of this paper is organized as follows. The conventional reliability analysis method based on evidence theory is introduced in Sec. 2. The proposed reliability analysis method is formulated in Sec. 3. Three numerical examples and an engineering example are investigated in Sec. 4. Finally, conclusions are summarized in Sec. 5.
2 The Conventional Method for ETRA
The conventional reliability analysis using evidence theory includes three main steps:
definition of the frame of discernment;
construction of joint basic probability assignment;
computation of reliability interval.
2.1 Definition of the Frame of Discernment.
2.2 Construction of Joint Basic Probability Assignment.
2.3 Computation of Belief Function and Plausibility Function.
In this paper, the matlab function fmincon, which implements the sequential quadratic programming (SQP) algorithm, is adopted to obtain gmin and gmax. For a JFE with gmin and gmax both larger than g0, the JBPA contributes to the calculation of both Bel and Pl. For a JFE with only gmax larger than g0, the JBPA contributes only to Pl. Otherwise, the JFE contributes to neither Bel nor Pl. A minimal sketch of this classification logic is given below.
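The following Python sketch illustrates the extreme-value classification of a single JFE. It is not the authors' implementation: the paper uses MATLAB's fmincon, whereas SciPy's SLSQP solver serves here as an analogue, and the LSF g is a hypothetical example.

```python
import numpy as np
from scipy.optimize import minimize

def g(x):
    # Hypothetical limit-state function, used for illustration only
    return x[0] ** 2 - x[1] - 2.0

def classify_jfe(lower, upper, g0=0.0):
    """Classify one JFE (the hypercube [lower, upper]) by its LSF extrema."""
    bounds = list(zip(lower, upper))
    x0 = 0.5 * (np.asarray(lower) + np.asarray(upper))  # start at the centroid
    g_min = minimize(g, x0, bounds=bounds, method="SLSQP").fun
    g_max = -minimize(lambda x: -g(x), x0, bounds=bounds, method="SLSQP").fun
    if g_min > g0:
        return "safe: JBPA counts toward both Bel and Pl"
    if g_max > g0:
        return "cross: JBPA counts toward Pl only"
    return "failure: JBPA counts toward neither"

print(classify_jfe([3.0, 1.0], [3.5, 1.5]))
```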
3 The Proposed Method
As introduced previously, the core of solving an ETRA problem is to determine whether a JFE is located in the reliable region, is located in the failure region, or intersects the LSF. With this information, the reliability interval can be calculated efficiently through Eqs. (9) and (10). As shown in Fig. 1, a JFE is located entirely in the failure region when its maximum LSF value is smaller than zero; this type is denoted as a failure JFE and labeled class I. A JFE intersects the LSF when its minimum LSF value is smaller than zero but its maximum LSF value is larger than zero; this type is denoted as a cross JFE and labeled class II. A JFE is located entirely in the reliable region when its minimum LSF value is larger than zero; this type is denoted as a safe JFE and labeled class III.
In this paper, we propose a deep learning approach to classify the JFEs in the uncertainty domain into these three classes with high accuracy and efficiency. First, a point cloud down-sampling technique is adopted to uniformly sample a subset of JFEs in the uncertainty domain. Second, the spatial position features are selected to represent each JFE, and a spatial position feature reduction and arrangement method is proposed to reduce the feature dimension while preserving the integrity and correlation of the features. Third, the SAE model is constructed and updated by extracting the spatial position features of the sampled JFEs. Finally, the SAE model is used to predict the classes of the remaining JFEs, and the reliability interval is calculated efficiently from the classification results. The proposed algorithm therefore includes four main parts:
Point cloud down-sampling of JFEs.
Spatial position features reduction and arrangement of JFEs.
Construction of the SAE model.
Calculation of reliability interval.
3.1 Point Cloud Down-Sampling of Joint Focal Elements.
In order to construct the SAE model, a set of experimental JFEs must first be obtained. However, a JFE is a hypercube without any distribution information, so the probability-based sampling methods used to select experimental points in a random uncertainty domain cannot be used to select experimental JFEs in the epistemic uncertainty domain. The uniform sampling of JFEs in the epistemic uncertainty domain can therefore only be approached from a geometrical point of view.
Farthest point sampling (FPS) is a point cloud down-sampling method based on the max-min distance criterion [35]. As mentioned earlier, consider an $n$-dimensional vector of independent evidence variables $\mathbf{X} = (X_1, X_2, \ldots, X_n)$, where each evidence variable contains $N$ interval focal elements, $X_i = \{A_i^1, A_i^2, \ldots, A_i^N\}$, $i = 1, 2, \ldots, n$. The point cloud down-sampling of the JFEs proceeds in the following steps:
- Step 1. The centroid coordinates of all JFEs are regarded as the initial point cloud, whose size is $N^n$:
$$P = \{p_1, p_2, \ldots, p_{N^n}\} \quad (12)$$
- Step 2. Divide the initial point cloud into two sets, the sampled point set $T$ and the un-sampled point set $C$, with $T$ initialized as an empty set:
$$T = \varnothing, \quad C = P \quad (13)$$
- Step 3. Randomly select a seed point $p_s$ from the initial point cloud and put it into $T$; the remaining $N^n - 1$ points form $C$:
$$T = \{p_s\}, \quad C = P \setminus \{p_s\} \quad (14)$$
- Step 4. Calculate the distance from every point in $C$ to the set $T$, where the distance from a point to a set is defined as the minimum distance from the point to all points in the set:
$$d(c, T) = \min_{t \in T} \lVert c - t \rVert, \quad c \in C \quad (15)$$
- Step 5. Find the point in $C$ with the largest distance to $T$; this is the newly added point, which is moved into $T$:
$$p^{*} = \arg\max_{c \in C} d(c, T), \quad T = T \cup \{p^{*}\}, \quad C = C \setminus \{p^{*}\} \quad (16)$$
- Step 6. Repeat Steps 4 and 5 until the desired number of points has been obtained. A minimal sketch of this procedure is given below.
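The following Python sketch implements the FPS steps above on an array of JFE centroids. It is an illustrative reconstruction, not the authors' code; the 20 × 20 grid mirrors the Fig. 2 example.

```python
import numpy as np

def farthest_point_sampling(points, n_samples, seed=0):
    """Select n_samples indices from points (shape (N, n)) by farthest point sampling."""
    rng = np.random.default_rng(seed)
    selected = np.empty(n_samples, dtype=int)
    selected[0] = rng.integers(points.shape[0])            # Step 3: random seed point
    # d[j] holds the current distance from point j to the sampled set T (Eq. (15))
    d = np.linalg.norm(points - points[selected[0]], axis=1)
    for k in range(1, n_samples):
        selected[k] = np.argmax(d)                          # Step 5: farthest point (Eq. (16))
        d_new = np.linalg.norm(points - points[selected[k]], axis=1)
        d = np.minimum(d, d_new)                            # Step 4: update point-to-set distances
    return selected

# Fig. 2 setting: 20 focal elements per variable, 20 x 20 = 400 centroids, sample 50
grid = np.linspace(0.025, 0.975, 20)
centroids = np.array([[x, y] for x in grid for y in grid])
sampled = farthest_point_sampling(centroids, 50)
```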
Figure 2 shows the point cloud down-sampling process for a two-dimensional problem. In this problem, each evidence variable is assumed to contain 20 interval focal elements, so the number of JFEs is 20 × 20 = 400, and the initial point cloud therefore also contains 400 points. The point cloud down-sampling method is used to sample 50 JFEs. First, a seed point is randomly sampled, and its corresponding JFE (red square) is shown in Fig. 2(a). Based on this seed point, the next sampled point is determined according to Eqs. (15) and (16), as shown in Fig. 2(b). This process is repeated until 50 JFEs are sampled. The results of the next three steps are shown in Figs. 2(c), 2(d), and 2(e), and the final sampling result is shown in Fig. 2(f).
3.2 Spatial Position Features Reduction and Arrangement of Joint Focal Elements.
A JFE is a hypercube with many features, such as shape, size, and spatial position. In order to classify the JFEs with a deep learning model, the features of the JFEs must be chosen. Since the spatial position of each JFE in the uncertainty domain is unique, the spatial position features are taken as the main features.
For each JFE, the full set of vertices represents its spatial position exactly, so all vertex coordinates could be used as its spatial position features. This idea, however, is severely limited by the dimension of the evidence variables, because the number of vertices grows exponentially with that dimension. To resolve this issue, a spatial position feature reduction method is proposed.
Comparing Eqs. (22) and (26), for a three-dimensional JFE the number of vertices required to represent its spatial position is reduced from 8 (2^3) to 4 (3 + 1), and the number of spatial position features is reduced from 24 (2^3 × 3) to 12 ((3 + 1) × 3). In general, for an n-dimensional JFE, the proposed method reduces the number of vertices that represent its spatial position from 2^n to n + 1, and thus the number of spatial position features from 2^n × n to (n + 1) × n, which greatly accelerates the training of the SAE model.
Two considerations govern the arrangement of the spatial position features. The first is their integrity: the spatial position features are a subset of the vertex coordinates of the JFE, so the integrity of each vertex coordinate should be preserved in the arrangement. The second is their correlation: adjacent vertices are connected by edges of the JFE, so the correlation between vertices should be preserved to a certain extent. To illustrate this explicitly, Fig. 4 shows the reduction and arrangement of the spatial position features of a three-dimensional JFE. The spatial position features are reduced from all vertex coordinates to the red vertex coordinates. The red vertex connected to all other red vertices by edges of the JFE is regarded as the common vertex (the point in the purple circle), and its coordinates are placed in the first column. The coordinates of the remaining red vertices are arranged in the order of the dimensions. After the arrangement, a coordinate matrix is obtained that can be interpreted as an "image": the spatial position features of the JFE are the pixel values of the "image," and the class of the JFE is the class of the "image." The classification of JFEs is thereby transformed into the classification of images, a problem for which deep learning has achieved great success [36–38]. A sketch of this construction is given below.
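The sketch below builds the reduced (n + 1) × n coordinate matrix for one JFE. It assumes, for illustration, that the lower corner of the hypercube serves as the common vertex and that the matrix is stored row-wise; the paper's column-wise arrangement is the transpose.

```python
import numpy as np

def spatial_position_features(lower, upper):
    """Reduced spatial position features of the JFE [lower, upper] (a hypercube).

    Returns an (n + 1) x n matrix: the common vertex (assumed here to be the
    lower corner) followed by the n vertices that differ from it in exactly
    one dimension, listed in the order of the dimensions.
    """
    lower = np.asarray(lower, dtype=float)
    upper = np.asarray(upper, dtype=float)
    n = lower.size
    feats = np.tile(lower, (n + 1, 1))   # every row starts at the common vertex
    for i in range(n):
        feats[i + 1, i] = upper[i]       # neighbor vertex along dimension i
    return feats

# A 3D JFE: 4 vertices x 3 coordinates = 12 features instead of 8 x 3 = 24
print(spatial_position_features([3.0, 1.0, 0.5], [3.5, 1.5, 1.0]))
```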
3.3 Construction of the Stacked Autoencoder Model.
Neural networks with multiple hidden layers can be used for classification problems with complex data such as images, with each layer learning features at a different level of abstraction. In practice, however, training networks with multiple hidden layers can be difficult. An efficient strategy is to train one layer at a time, by training a special type of network, called an autoencoder, for each desired hidden layer. As shown in Fig. 5, a neural network with two hidden layers is trained in this paper to classify the JFEs. First, each hidden layer is trained individually in an unsupervised manner using an autoencoder. Then the final softmax layer is trained, the layers are connected to form a stacked network, and the whole network is finally trained in a supervised manner.
As mentioned earlier, the spatial position features of the JFEs have been reduced and arranged, so each JFE corresponds to an "image," and the classification of the JFE corresponds to the classification of the "image." When the autoencoder processes an image, it stores the pixel values column by column, converting the image into a column vector. Each experimental JFE therefore corresponds to a sample x^(i), i = 1, 2, …, n_t, where x^(i) is a column vector with (n + 1) × n rows and one column. The dimensions of the input and output layers of the first autoencoder are both (n + 1) × n.
To improve the performance of the stacked network, backpropagation (BP) is performed on the entire multilayer network. This process is often called fine-tuning: the network is retrained on the training data in a supervised manner. A minimal sketch of the whole training procedure follows.
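The sketch below reproduces the greedy layer-wise training described above in PyTorch; it is an illustrative reconstruction rather than the authors' implementation. The layer widths (100 and 50) follow Table 2, but the sparsity and L2 regularization terms listed there are omitted for brevity, and the training data are placeholders.

```python
import torch
import torch.nn as nn

def train(model, loss_fn, data, target, epochs, lr=1e-3):
    # Full-batch training loop used for every stage below
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(data), target)
        loss.backward()
        opt.step()

n_feat = 12                        # (n + 1) * n flattened features, e.g., n = 3
X = torch.randn(200, n_feat)       # placeholder spatial position features
y = torch.randint(0, 3, (200,))    # placeholder class labels (I, II, III)

# 1) First autoencoder: unsupervised, reconstructs the input through 100 units
ae1 = nn.Sequential(nn.Linear(n_feat, 100), nn.Sigmoid(), nn.Linear(100, n_feat))
train(ae1, nn.MSELoss(), X, X, epochs=400)
h1 = ae1[:2](X).detach()           # 100-d codes from the trained encoder

# 2) Second autoencoder: unsupervised, compresses the codes to 50 units
ae2 = nn.Sequential(nn.Linear(100, 50), nn.Sigmoid(), nn.Linear(50, 100))
train(ae2, nn.MSELoss(), h1, h1, epochs=100)
h2 = ae2[:2](h1).detach()          # 50-d codes

# 3) Softmax layer: supervised classification of the 50-d codes
clf = nn.Linear(50, 3)
train(clf, nn.CrossEntropyLoss(), h2, y, epochs=400)

# 4) Stack the trained encoders and fine-tune end-to-end with backpropagation
sae = nn.Sequential(ae1[0], nn.Sigmoid(), ae2[0], nn.Sigmoid(), clf)
train(sae, nn.CrossEntropyLoss(), X, y, epochs=400)
probs = torch.softmax(sae(X), dim=1)   # predicted class probabilities m_K
```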
3.4 Calculation of Reliability Interval.
The detailed steps of calculating the reliability interval are the following:
- Step 1.
Calculate the number of JFEs and their centroid coordinates.
The centroid coordinates of all JFEs form the initial point cloud for point cloud down-sampling in the epistemic uncertainty domain. The JFE population can be expressed as
$$P = \{p_1, p_2, \ldots, p_{N^n}\} \quad (31)$$
- Step 2.
Find candidate JFEs in the JFE population using the SAE model of the previous iteration.
The set of candidate JFEs is defined by
$$S_c^{\tau} = \left\{ p_j \in P \mid \max_{K} m_K(p_j) < \xi \right\} \quad (32)$$
where $S_c^{\tau}$ is the set of candidate JFEs of iteration $\tau$; $\mathbf{m}(p_j) = [m_1, m_2, m_3]^T$ is the class prediction of the SAE model for the $j$th JFE, in which $m_K$, $K = 1, 2, 3$, is the probability that the JFE belongs to the $K$th class; and $\xi$ is a threshold. Since this is a three-class problem, the threshold must satisfy $1/3 \leq \xi \leq 1$; in this paper, $\xi$ is set to 0.6. A minimal selection sketch is given below.
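A one-line NumPy version of the candidate selection in Eq. (32), with illustrative variable names:

```python
import numpy as np

def candidate_indices(probs, xi=0.6):
    """Indices of candidate JFEs: all SAE class probabilities fall below xi.

    probs: (N, 3) array of predictions m = [m_1, m_2, m_3], one row per JFE.
    """
    return np.where(probs.max(axis=1) < xi)[0]
```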
- Step 3.
Point cloud down-sampling of JFEs.
In the first sampling, only spatial uniformity is considered. Point cloud down-sampling is applied until all three classes of JFEs have been sampled; the number of JFEs sampled at this stage is denoted Ninitial.
In each subsequent iteration, both spatial uniformity and uniformity of the sample quantities are considered. The number of candidate JFEs of each class is obtained from the classes predicted by the SAE model, with the numbers of safe, cross, and failure JFEs recorded as NS, NC, and Nf, respectively. Then, M new experimental JFEs are selected by point cloud down-sampling within each class of candidate JFEs.
- Step 4.
Update the training set with the new experimental JFEs.
The training set of each iteration is the union of the experimental JFEs selected in all iterations,
$$S_t = \bigcup_{\tau} S_e^{\tau} \quad (33)$$
where $S_e^{\tau}$ denotes the experimental JFEs selected in iteration $\tau$. To obtain the true class of each experimental JFE, the SQP optimization algorithm is used to calculate its extreme values. The training set $S_t$ is then
$$S_t = \left\{ \left( x^{(i)}, y^{(i)} \right) \right\}, \quad i = 1, 2, \ldots, n_t \quad (34)$$
where $x^{(i)}$ is the feature vector of an experimental JFE, $y^{(i)}$ is its true class label, and $n_t$ is the number of experimental JFEs.
- Step 5.
Spatial position features reduction and arrangement of JFEs.
The specific reduction and arrangement method is discussed in Sec. 3.2. When the numerical differences between the evidence variables are not significant, the original spatial position features can be used directly for training. Otherwise, the evidence variables should be normalized to improve the training accuracy and convergence speed. As shown in Fig. 3, the endpoint coordinates of the interval focal elements of each evidence variable can be expressed as
$$\mathbf{x}_i = \left[ x_i^1, x_i^2, \ldots, x_i^{N+1} \right] \quad (35)$$
and the max-min normalization can be expressed as
$$\tilde{x}_i^{j} = \frac{x_i^{j} - \min(\mathbf{x}_i)}{\max(\mathbf{x}_i) - \min(\mathbf{x}_i)} \quad (36)$$
where $x_i^{j}$ is an element of the vector $\mathbf{x}_i$ and $\tilde{x}_i^{j}$ is the corresponding normalized element. A small sketch of this normalization follows.
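A minimal sketch of Eq. (36), applied to the 17 endpoints of the variable X1 from Example 1 (Table 1):

```python
import numpy as np

def min_max_normalize(x):
    """Max-min normalization of one evidence variable's endpoint vector."""
    x = np.asarray(x, dtype=float)
    return (x - x.min()) / (x.max() - x.min())

# The 16 focal elements of X1 in Example 1 have 17 endpoints: 3.0000, ..., 4.0000
print(min_max_normalize(np.linspace(3.0, 4.0, 17)))
```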
- Step 6.
Construct and update the SAE model.
In the first iteration, the SAE model is constructed and trained with the initial training set, and then, the SAE model is updated with new experimental JFEs in each iteration.
- Step 7.
Predict reliability interval by SAE model.
The class of every JFE in the population is predicted by the SAE model, and the reliability interval is evaluated by Eqs. (9) and (10). The lower bound of the reliability interval is the sum of the JBPAs of the safe JFEs, and the upper bound is the sum of the JBPAs of the safe and cross JFEs, as sketched below.
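A sketch of this final accumulation, assuming class labels 1, 2, and 3 for failure, cross, and safe JFEs and a matching array of JBPAs:

```python
import numpy as np

def reliability_interval(classes, jbpa):
    """[Bel, Pl] from predicted classes (1 = failure, 2 = cross, 3 = safe)."""
    bel = jbpa[classes == 3].sum()        # safe JFEs only
    pl = bel + jbpa[classes == 2].sum()   # safe + cross JFEs
    return bel, pl
```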
- Step 8.
Check the convergence.
The differences between the estimated lower and upper bounds of the reliability interval in two consecutive iterations are used to check convergence [40],
$$e_L^{\tau} = \frac{\left| Bel^{\tau} - Bel^{\tau - 1} \right|}{Bel^{\tau - 1}}, \quad e_B^{\tau} = \frac{\left| Pl^{\tau} - Pl^{\tau - 1} \right|}{Pl^{\tau - 1}} \quad (37)$$
If the relative errors of the lower and upper bounds are below the thresholds ecrL and ecrB (e.g., 0.005), respectively, the iteration stops and the result is output; otherwise, the process returns to Step 2.
More details of the algorithm flowchart are shown in Fig. 7.
4 Numerical Examples
4.1 Example 1: Two-Dimensional Mathematical Problem.
This example is a two-dimensional problem [7]. The LSF is described by
The BPA structure for evidence variables in Example 1
X1 focal elements | X1 BPA | X2 focal elements | X2 BPA
---|---|---|---
[3.0000,3.0625] | 0.0625 | [1.0000,1.0625] | 0.0625 |
[3.0625,3.1250] | 0.0625 | [1.0625,1.1250] | 0.0625 |
[3.1250,3.1875] | 0.0625 | [1.1250,1.1875] | 0.0625 |
[3.1875,3.2500] | 0.0625 | [1.1875,1.2500] | 0.0625 |
[3.2500,3.3125] | 0.0625 | [1.2500,1.3125] | 0.0625 |
[3.3125,3.3750] | 0.0625 | [1.3125,1.3750] | 0.0625 |
[3.3750,3.4375] | 0.0625 | [1.3750,1.4375] | 0.0625 |
[3.4375,3.5000] | 0.0625 | [1.4375,1.5000] | 0.0625 |
[3.5000,3.5625] | 0.0625 | [1.5000,1.5625] | 0.0625 |
[3.5625,3.6250] | 0.0625 | [1.5625,1.6250] | 0.0625 |
[3.6250,3.6875] | 0.0625 | [1.6250,1.6875] | 0.0625 |
[3.6875,3.7500] | 0.0625 | [1.6875,1.7500] | 0.0625 |
[3.7500,3.8125] | 0.0625 | [1.7500,1.8125] | 0.0625 |
[3.8125,3.8750] | 0.0625 | [1.8125,1.8750] | 0.0625 |
[3.8750,3.9375] | 0.0625 | [1.8750,1.9375] | 0.0625 |
[3.9375,4.0000] | 0.0625 | [1.9375,2.0000] | 0.0625 |
In this paper, the conventional method refers to the classical optimization method. The SQP optimization algorithm [41] is first used to calculate the extreme values of all JFEs and thereby obtain their class labels. The sum of the JBPAs of the safe JFEs then gives Bel, and the sum of the JBPAs of the safe and cross JFEs gives Pl. Since the SQP algorithm may not find the global optimum, each optimization problem is solved from several different initial points, and the mean value is taken as the final result. The Bel and Pl results obtained with the conventional method are [Bel, Pl] = [0.7422, 0.8281], which are treated as the reference values.
In the proposed method, the point cloud down-sampling method is first used to uniformly sample a subset of JFEs in the epistemic uncertainty domain. Starting with 10 initial training JFEs (Ninitial = 10 in this example), the SQP optimization algorithm is used to obtain the true class labels of the sampled JFEs. The positions of the sampled JFEs relative to the LSF are shown in Fig. 8(a); the sampled JFEs are uniformly distributed in the uncertainty domain.

Fig. 8: (a) The position relationship between the initial training JFEs and the LSF; (b) the position relationship between the final training JFEs and the LSF.
Second, the spatial position features of the sampled JFEs are reduced and arranged as discussed in Sec. 3.2. The reduction and arrangement results for several sampled JFEs, enclosed in the black circle of Fig. 8(a), are shown in Fig. 9. The color outside each circle indicates the class, and a common color inside a circle indicates the vertex coordinates of one sampled JFE.
Third, the SAE model is constructed. It is worth noting that the original features, rather than the normalized features, are used to train the SAE model directly, because the numerical differences between the evidence variables are not significant in this example. Several important parameters of the SAE training process are listed in Table 2; they can be adjusted for specific problems.
Hyper-parameters during the training process of SAE
Name | Item | Value
---|---|---
First autoencoder | Width of hidden layer | 100
 | Activation function | Logistic function
 | Max epoch | 400
 | L2 weight regularization | 0.004
 | Sparsity regularization | 4
 | Sparsity proportion | 0.15
 | Learning rate | 1 × 10−6
Second autoencoder | Width of hidden layer | 50
 | Activation function | Logistic function
 | Max epoch | 100
 | L2 weight regularization | 0.002
 | Sparsity regularization | 4
 | Sparsity proportion | 0.10
 | Learning rate | 1 × 10−6
Softmax | Max epoch | 400
 | Learning rate | 1 × 10−6
Stacked autoencoders | Max epoch | 400
 | Learning rate | 1 × 10−6
Finally, the SAE model is updated, and the reliability interval is calculated. In each iteration, the JFEs whose maximum predicted class probability is less than 0.6 are selected as candidate JFEs, and the number of JFEs added per class from the candidates is set to 3; that is, ξ = 0.6 and M = 3. The iterative updating process stops after six iterations, approximating the reliability interval as [Bel, Pl] = [0.7461, 0.8242]. The positions of the sampled JFEs relative to the LSF at this point are shown in Fig. 8(b), with the class label of each sampled JFE indicated by color. The added JFEs are uniformly distributed near the LSF. The relative error curves of the Bel and Pl functions over the iterations are shown in Fig. 10.
The accuracy and efficiency of the proposed method and the conventional method are compared in Table 3. The relative error of Bel is 0.52% and that of Pl is 0.47%, which is acceptable.
Comparison of the computational cost and accuracy between the proposed method and the conventional method
Method | Ncall | [Bel, Pl] | Deviations
---|---|---|---
Conventional method | 256 | [0.7422,0.8281] | – |
Proposed method | 19 | [0.7461,0.8242] | [0.52%,0.47%] |
4.2 Example 2: Ten-Dimensional Mathematical Problem.
The BPA structure for evidence variables in Example 2
Focal elements of Xi (i = 1, …, 10) | BPA
---|---
[0.4,0.7] | 0.1 |
[0.7,1.0] | 0.4 |
[1.0,1.3] | 0.4 |
[1.3,1.6] | 0.1 |
The Bel and Pl results obtained with the SQP optimization algorithm are [Bel, Pl] = [0.7221, 1.0000]. In the proposed method, the point cloud down-sampling method is first used to uniformly sample a subset of JFEs in the uncertainty domain. Second, the spatial position features of the sampled JFEs are reduced and arranged as described in Sec. 3.2. For each JFE, the number of spatial position features is reduced from 10,240 (2^10 × 10) to 110 ((10 + 1) × 10), a reduction of two orders of magnitude, which shows that the proposed feature reduction method remains effective as the dimension increases. Third, the SAE model is constructed from the spatial position features of the JFEs and their true class labels. Finally, the SAE model is used to predict the classes of the remaining JFEs and is updated according to the prediction results. Note that the true class label of each JFE is obtained with the SQP optimization algorithm, and the original features are used to train the SAE model directly.
In this example, the number of initial training JFEs is 100, and the number of JFEs added per class from the candidates is set to 4; that is, Ninitial = 100 and M = 4. The other configurations of the SAE model are the same as in Example 1. The iterative updating process stops after 10 iterations, approximating the reliability interval as [Bel, Pl] = [0.7223, 1.0000]. The relative error curves of the Bel and Pl functions over the iterations are shown in Fig. 11.
The accuracy and efficiency of the proposed method and the conventional method are compared in Table 5. The proposed method requires only 190 function evaluations to obtain high-precision reliability results, compared with 1,048,576 for the conventional method, roughly four orders of magnitude more. Moreover, relative to the conventional method, the relative error of Bel obtained by the proposed method is less than 0.03%, and that of Pl is essentially zero. The proposed method thus achieves higher computational efficiency with comparable accuracy.
Comparison of the computational cost and accuracy between the proposed method and the conventional method
Method | Ncall | [Bel, Pl] | Deviations
---|---|---|---
Conventional method | 1,048,576 | [0.7221,1.0000] | – |
Proposed method | 190 | [0.7223,1.0000] | [0.028%,0.000%] |
4.3 Example 3: Burst Margin of Disk.
The BPA structure for evidence variables in Example 3
Column pairs give the focal elements (FE) and BPA of f, S (lb/in.²), ω (rpm), R (in.), ρ (lb/in.³), and R0 (in.), respectively.

FE | BPA | FE | BPA | FE | BPA | FE | BPA | FE | BPA | FE | BPA
---|---|---|---|---|---|---|---|---|---|---|---
[0.9000,0.9125] | 0.125 | [219,800,219,850] | 0.04 | [20,900,20,925] | 0.04 | [23.00,23.25] | 0.04 | [0.2800,0.2825] | 0.125 | [6.4,6.8] | 0.04 |
[0.9125,0.9250] | 0.125 | [219,850,219,900] | 0.06 | [20,925,20,950] | 0.06 | [23.25,23.50] | 0.06 | [0.2825,0.2850] | 0.125 | [6.8,7.2] | 0.06 |
[0.9250,0.9375] | 0.125 | [219,900,219,950] | 0.16 | [20,950,20,975] | 0.16 | [23.50,23.75] | 0.16 | [0.2850,0.2875] | 0.125 | [7.2,7.6] | 0.16 |
[0.9375,0.9500] | 0.125 | [219,950,220,000] | 0.24 | [20,975,21,000] | 0.24 | [23.75,24.00] | 0.24 | [0.2875,0.2900] | 0.125 | [7.6,8.0] | 0.24 |
[0.9500,0.9625] | 0.125 | [220,000,220,050] | 0.24 | [21,000,21,025] | 0.24 | [24.00,24.25] | 0.24 | [0.2900,0.2925] | 0.125 | [8.0,8.4] | 0.24 |
[0.9625,0.9750] | 0.125 | [220,050,220,100] | 0.16 | [21,025,21,050] | 0.16 | [24.25,24.50] | 0.16 | [0.2925,0.2950] | 0.125 | [8.4,8.8] | 0.16 |
[0.9750,0.9875] | 0.125 | [220,100,220,150] | 0.06 | [21,050,21,075] | 0.06 | [24.50,24.75] | 0.06 | [0.2950,0.2975] | 0.125 | [8.8,9.2] | 0.06 |
[0.9875,1.0000] | 0.125 | [220,150,220,200] | 0.04 | [21,075,21,100] | 0.04 | [24.75,25.00] | 0.04 | [0.2975,0.3000] | 0.125 | [9.2,9.6] | 0.04 |
This is a six-dimensional, strongly nonlinear problem; each evidence variable contains eight interval focal elements, so there are 8^6 = 262,144 JFEs in total. For comparison, the SQP optimization algorithm is used to calculate the extreme values of each JFE, from which its class is determined, and the Bel and Pl functions are then calculated from the classes of all JFEs. The results obtained with the conventional method are [Bel, Pl] = [0.8400, 0.8871]; these are treated as the reference values, and the same JFE population is used to test the proposed method. In this example, the number of initial training JFEs is 30, and the number of JFEs added per class from the candidates is set to 4; that is, Ninitial = 30 and M = 4. The other configurations of the SAE model are the same as in Example 1.
The iterative updating process stops after 11 iterations, approximating the reliability interval as [Bel, Pl] = [0.8406, 0.8822]. The relative error curves of the Bel and Pl functions over the iterations are shown in Fig. 12.
The comparison with the conventional method is shown in Table 7: the proposed method achieves similar accuracy with far fewer function calls. It requires only 180 function evaluations to obtain high-precision reliability results, compared with 262,144 for the conventional method, roughly three orders of magnitude more. The relative errors of Bel and Pl are 0.07% and 0.55%, respectively, which is acceptable. The results also show that the proposed method is advantageous for strongly nonlinear problems.
Comparison of the computational cost and accuracy between the proposed method and the conventional method
Method | Ncall | [Bel, Pl] | Deviations
---|---|---|---
Conventional method | 262,144 | [0.8400,0.8871] | – |
Proposed method | 180 | [0.8406,0.8822] | [0.07%,0.55%] |
4.4 Example 4: A Space Nuclear Reactor Core.
A space nuclear reactor power supply system has the advantages of not relying on the sun, generating energy independently, and offering high energy density, making it an effective solution to the energy supply bottleneck in extreme scenarios such as deep space exploration [44–46]. The reactor core is generally regarded as the most important subsystem of such a power supply system, since the nuclear energy is generated by fission reactions in the core. During service, the reactor core operates under extreme nuclear-physical and thermal-hydraulic conditions. It is therefore critical to carry out reliability analysis and design for the reactor core to ensure its safety and reliability.
The Young's modulus of Inconel 718, E1, the Young's modulus of U-Mo alloy, E2, the Poisson's ratio of Inconel 718, υ1, the Poisson's ratio of U-Mo alloy, υ2, and the external loads F1 and F2 are treated as evidence variables; their BPA information is presented in Table 8. A finite element model containing 139,538 elements is constructed to compute ΔH, as shown in Fig. 13.
The BPA structure for evidence variables in Example 4
Each cell lists a focal element (FE) followed by its BPA.

E1 | E2 | υ1 | υ2 | F1 | F2
---|---|---|---|---|---
[140.00,155.00] 0.02 | [175.00,193.75] 0.01 | [0.2100,0.2325] 0.04 | [0.1960,0.2170] 0.04 | [686.00,759.50] 0.125 | [6.8600,7.5950] 0.04 |
[155.00,170.00] 0.08 | [193.75,212.50] 0.05 | [0.2325,0.2550] 0.06 | [0.2170,0.2380] 0.06 | [759.50,833.00] 0.125 | [7.5950,8.3300] 0.06 |
[170.00,185.00] 0.16 | [212.50,231.25] 0.16 | [0.2550,0.2775] 0.16 | [0.2380,0.2590] 0.16 | [833.00,906.50] 0.125 | [8.3300,9.0650] 0.16 |
[185.00,200.00] 0.24 | [231.25,250.00] 0.28 | [0.2775,0.3000] 0.24 | [0.2590,0.2800] 0.24 | [906.50,980.00] 0.125 | [9.0650,9.8000] 0.24 |
[200.00,215.00] 0.24 | [250.00,268.78] 0.28 | [0.3000,0.3225] 0.24 | [0.2800,0.3010] 0.24 | [980.00,1053.5] 0.125 | [9.8000,10.535] 0.24 |
[215.00,230.00] 0.16 | [268.78,287.50] 0.16 | [0.3225,0.3450] 0.16 | [0.3010,0.3220] 0.16 | [1053.5,1127.0] 0.125 | [10.535,11.270] 0.16 |
[230.00,245.00] 0.08 | [287.50,306.25] 0.05 | [0.3450,0.3675] 0.06 | [0.3220,0.3430] 0.06 | [1127.0,1200.5] 0.125 | [11.270,12.005] 0.06 |
[245.00,260.00] 0.02 | [306.25,325.00] 0.01 | [0.3675,0.3900] 0.04 | [0.3430,0.3640] 0.04 | [1200.5,1274.0] 0.125 | [12.005,12.740] 0.04 |
The reliability interval obtained by the conventional method is [Bel, Pl] = [0.8354, 0.8392], which is used as the reference results.
For the proposed method, the number of initial training JFEs is 30 (Ninitial = 30). To accelerate convergence and test the stability of the proposed method, the JFEs whose maximum predicted class probability is less than 0.6 are taken as candidate JFEs, and the number of JFEs added per class from the candidates in each iteration is set to 18; that is, ξ = 0.6 and M = 18. The other configurations of the SAE model are the same as in Example 1.
The iterative updating process stops after nine iterations, yielding the reliability interval [Bel, Pl] = [0.8355, 0.8406]. The relative error curves of the belief and plausibility functions over the iterations are shown in Fig. 14.
Table 9 presents the computational cost of both reliability analysis methods. The conventional method requires 262,144 function evaluations, which is hardly acceptable for engineering designers. In contrast, the proposed method requires only 410, about three orders of magnitude fewer. Moreover, relative to the conventional method, the error of Bel obtained by the proposed method is essentially negligible, and the error of Pl is less than 0.2%. It is therefore reasonable to conclude that the proposed method achieves high computational efficiency and accuracy for this problem, and that it will help extend the application of evidence theory to the reliability analysis and design of practical engineering problems.
Comparison of the computational cost and accuracy between the proposed method and the conventional method
Method | Ncall | [Bel, Pl] | Deviations
---|---|---|---
Conventional method | 262,144 | [0.8354,0.8392] | – |
Proposed method | 410 | [0.8355,0.8406] | [0.012%,0.167%] |
5 Conclusions
This paper proposes a novel method for solving the ETRA problem. It transforms the conventional ETRA problem into the classification of three types of JFEs and then solves the classification problem effectively through a deep learning approach. The computational efficiency and accuracy of the proposed method are compared with those of the conventional method, and the numerical examples support the following conclusions. (1) The proposed method is much more efficient than the conventional method, for two main reasons. First, the extreme-value-based classification of JFEs in the conventional method is transformed into a feature-based classification. Second, only the extreme values of the experimental JFEs need to be calculated to obtain their labels, rather than the extreme values of all JFEs, which greatly reduces the computational cost of ETRA. (2) The proposed method achieves accuracy comparable to the conventional method, again for two main reasons. First, the unique spatial position of each JFE is captured as the main feature, and the proposed feature reduction and arrangement method alleviates the curse of dimensionality while preserving the integrity and correlation of the features. Second, the newly added experimental JFEs are distributed near the LSF, which effectively improves the classification accuracy of the deep learning model. However, the point cloud down-sampling method may occupy a large amount of memory for high-dimensional problems, which limits the applicability of the proposed method. In future work, we will therefore develop a more memory-efficient spatially uniform sampling scheme for high-dimensional reliability analysis, so as to extend the proposed method to complex engineering problems.
Acknowledgment
This work is supported by the National Key R&D Program of China (Grant Nos. 2020YFB1901800 and 2022YFB3403803) and the Fundamental Research Funds for the Central Universities (Grant No. 531118010677).
Conflict of Interest
There are no conflicts of interest.
Data Availability Statement
The authors attest that all data for this study are included in the paper.