## Abstract

This paper presents an adaptive Kriging based method to perform uncertainty quantification (UQ) of the photoelectron sheath and dust levitation on the lunar surface. The objective of this study is to identify the upper and lower bounds of the electric potential and that of dust levitation height, given the intervals of model parameters in the one-dimensional (1D) photoelectron sheath model. To improve the calculation efficiency, we employ the widely used adaptive Kriging method (AKM). A task-oriented learning function and a stopping criterion are developed to train the Kriging model and customize the AKM. Experiment analysis shows that the proposed AKM is both accurate and efficient.

## 1 Introduction

Lacking an atmosphere and a global magnetic field, the Moon is directly exposed to solar radiation and solar wind plasma (drifting protons and electrons). Consequently, the lunar surface is electrically charged by the bombardment of solar wind plasma and the emission/collection of photoelectrons. Near the illuminated lunar surface, the plasma sheath is dominated by photoelectrons and is thus usually referred to as the "photoelectron sheath." Additionally, dust grains on the lunar surface may become charged and levitate from the surface under the influence of the electric field within the plasma sheath as well as gravity. This work is motivated by the high computational cost associated with uncertainty quantification (UQ) analysis of plasma simulations using high-fidelity kinetic models such as particle-in-cell. The main quantities of interest (QoI) of this study are the vertical structure of the photoelectron sheath and its effects on the levitation of dust grains with different sizes and electric charges.

Both the electric potential ($\phi$) and the electric field ($E$) on the lunar surface are determined by many parameters, such as the solar wind drifting velocity ($v_d$), electron temperature ($T_e$), photoelectron temperature ($T_p$), density of ions at infinity ($n_{i,\infty}$), and density of photoelectrons ($n_p$). Due to uncertain factors in the lunar environment, the electric potential, electric field, dust levitation height, etc., are also uncertain. While many sources of uncertainty may exist, they are generally categorized as either aleatory or epistemic. Uncertainties are characterized as epistemic if the modeler sees a possibility to reduce them by gathering more data or by refining models. Uncertainties are categorized as aleatory if the modeler does not foresee the possibility of reducing them [1]. An example of aleatory uncertainty in the lunar environment is the solar wind parameters, and an example of epistemic uncertainty is the photoelectron temperature, which is obtained from limited measurement data from the Apollo missions. For lunar landing missions, one needs to take into consideration the uncertainties of the electrostatic and dust environment near the lunar surface. For example, the upper and lower bounds of the electric field and of dust grain levitation heights in the photoelectron sheath should be considered when determining whether a certain area is safe for landing a spacecraft.

Determining the bounds of the electric potential, electric field, and dust levitation height, however, is computationally expensive, because the particle-based kinetic models such as particle-in-cell simulations are time-consuming to evaluate. To address this issue, we develop an adaptive Kriging method (AKM), which can determine those bounds with a small number of calculations of the model. It is straightforward to train and obtain an accurate Kriging model [2] to replace the actual model and then calculate the bounds with the model. However, it is not necessary for the Kriging model to be accurate everywhere in its input space, because it will need more training samples and hence decrease the efficiency. Since the objective is to determine those bounds, we only need the Kriging model to be partially accurate near the regions of interest, as long as it can help find those bounds accurately. This way, we can save more computational efforts. To this end, we develop a task-oriented learning function and a stopping criterion to adaptively train the Kriging model. We start with an analytic model for the one-dimensional (1D) photoelectron sheath near the lunar surface [3,4]. This model is computationally cheap, and hence, the accurate results can be obtained by brute force. With the accurate results, we can test the accuracy of the proposed method. It is noted here that the ultimate application of this method is not the simple 1D problem presented in this work, but more complicated or computationally expensive models such as three-dimensional (3D) fully kinetic particle-in-cell plasma simulations.

The rest of this paper is organized as follows. Section 2 presents the 1D photoelectron sheath and dust levitation problem on lunar surface, as well as the 1D analytic model. Section 3 briefly introduces the Kriging method and general AKM. Section 4 presents the proposed AKM. Section 5 presents the results. Conclusions are given in Sec. 6.

## 2 Problem Statement

### 2.1 One-Dimensional Photoelectron Sheath Model on the Lunar Surface.

We employ the recently derived 1D photoelectron sheath model for the lunar surface [3,4]. As detailed in Refs. [3] and [4], there are three types of electric potential profiles [3–6] in the photoelectron sheath: type A, type B, and type C, as shown in Fig. 1, where $\phi$ is the electric potential and $Z$ is the vertical coordinate. In this study, we focus on the type C sheath profile, as it is expected at the polar regions of the Moon, where the next lunar landing mission will likely occur.

Fig. 1
Both the electric potential $\phi$ and the corresponding electric field $E$ are functions of $Z$ with a set of parameters $P = (v_d, T_e, T_p, n_{i,\infty}, n_p)$. To obtain $\phi(Z;P)$ and $E(Z;P)$, we need to solve an ordinary differential equation (ODE) [3]. Once the potential profile $\phi$ is obtained, it is straightforward to calculate the electric field $E$ by
$E(Z;P) = -\dfrac{d\phi(Z;P)}{dZ}$
(1)

A typical type C sample curve of $E(Z;P)$ is shown in Fig. 2. Note that both $\phi$ and $E$ converge to zero at large values of $Z$, which is taken as the reference (zero potential and zero field).

Fig. 2

### 2.2 Dust Levitation.

Subjected to the electric field force, a charged dust grain on the lunar surface may be levitated [7,8]. Above the lunar surface, there is a position where the upward electric field force balances the downward gravity [4]. This position is referred to as the equilibrium levitation height, denoted as $Z^*$. $Z^*$ can be solved from the following equation of static equilibrium of a charged dust grain in an electric field
$qE(Z;P)=mg$
(2)
where $q$ is the dust charge, $m$ is the mass of the dust grain, and $g = 1.62\ \mathrm{m/s^2}$ is the gravitational acceleration on the lunar surface [9]. With the assumption of spherical dust grains, $m$ is given by
$m = \dfrac{4}{3}\pi r^3 \rho$
(3)

where $r$ is the radius of the lunar dust grain and $\rho = 1.8\ \mathrm{g/cm^3}$ is the mass density of dust grains [10].

For simplicity, Eq. (2) is rewritten as
$E(Z;P)=w$
(4)

where $w=mg/q$. Once both $E(Z;P)$ and $w$ have been given or determined, a root-finding scheme is employed to solve Eq. (4) for $Z*$. Figure 3 shows an example of how to obtain $Z*$ graphically.
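
The graphical procedure of Fig. 3 amounts to a one-dimensional root-finding problem. The sketch below solves Eq. (4) by bisection for a toy, exponentially decaying field; the field profile `E_toy` and the grain charge `q` are illustrative assumptions, not the sheath solution.

```python
import math

def levitation_height(E, w, z_lo, z_hi, tol=1e-10):
    """Solve E(z) = w on [z_lo, z_hi] by bisection; assumes E(z) - w
    changes sign exactly once on the bracket (cf. Fig. 3)."""
    f_lo = E(z_lo) - w
    if f_lo * (E(z_hi) - w) > 0:
        raise ValueError("E(z) = w has no root in the bracket")
    while z_hi - z_lo > tol:
        z_mid = 0.5 * (z_lo + z_hi)
        if (E(z_mid) - w) * f_lo <= 0:
            z_hi = z_mid
        else:
            z_lo, f_lo = z_mid, E(z_mid) - w
    return 0.5 * (z_lo + z_hi)

# Toy exponentially decaying field; NOT the sheath solution, just a stand-in
E_toy = lambda z: 4.0 * math.exp(-z / 2.0)     # V/m
r, rho, g = 5e-7, 1800.0, 1.62                 # 0.5 um grain; kg/m^3; m/s^2
q = 1e-15                                      # assumed grain charge, C
m = 4.0 / 3.0 * math.pi * r**3 * rho           # Eq. (3)
w = m * g / q                                  # w = mg/q, Eq. (4)
z_star = levitation_height(E_toy, w, 0.0, 50.0)
```

For a monotonically decaying field such as the one in Fig. 3, the bracket $[0, Z_{\max}]$ guarantees a unique crossing; in practice a library root finder (e.g., a Brent-type method) can replace the hand-written bisection.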

Fig. 3

### 2.3 Objective.

Due to the lack of information, it is almost impossible to obtain the distribution functions of $P$. The bounds of $P$, however, are much easier to obtain. In some engineering designs for the lunar surface, we need to determine the bounds of $\phi(Z;P)$ and/or $E(Z;P)$, given the bounds of $P$. In this study, all the parameters in $P$ are modeled as interval variables, whose domain is denoted as $\Omega$. For a given realization $p$ of $P$, both $\phi(Z;p)$ and $E(Z;p)$, $Z\in[Z_{\min}, Z_{\max}]$, are obtained by solving the ODE.

The upper bound $\overline{\phi}(Z)$ of the electric potential is defined as
$\overline{\phi}(z) = \max_{p\in\Omega} \phi(z;p)$
(5)
where $z$ is a given value of the variable $Z$. Note that the entire upper bound curve $\overline{\phi}(Z)$ is not necessarily determined by a specific $p$. In other words, at different values of $z$, $\overline{\phi}(z)$ may be determined by different realizations of $P$. Similarly, the lower bound $\underline{\phi}(Z)$ of the electric potential, the upper bound $\overline{E}(Z)$ of the electric field, and the lower bound $\underline{E}(Z)$ are defined as
$\underline{\phi}(z) = \min_{p\in\Omega} \phi(z;p)$
(6)
$\overline{E}(z) = \max_{p\in\Omega} E(z;p)$
(7)
$\underline{E}(z) = \min_{p\in\Omega} E(z;p)$
(8)
Since $P$ are modeled as interval variables and the intervals (lower and upper bounds) of the output are desired, we in fact cope with interval propagation problems in this work. The most straightforward way to determine $\overline{\phi}(Z)$, $\underline{\phi}(Z)$, $\overline{E}(Z)$, and $\underline{E}(Z)$ is through Monte Carlo simulation (MCS) [11] in the following steps. First, evenly generate a large number $N_{MCS}$ of samples of $P$. For convenience, we denote those samples as $p_{MCS}$. Second, obtain the corresponding $N_{MCS}$ samples of $\phi(Z;P)$ and $E(Z;P)$ by solving the ODE $N_{MCS}$ times. Finally, calculate $\overline{\phi}(Z)$, $\underline{\phi}(Z)$, $\overline{E}(Z)$, and $\underline{E}(Z)$ using the $N_{MCS}$ samples of $\phi(Z;P)$ and $E(Z;P)$
$\overline{\phi}(z) = \max_{p\in p_{MCS}} \phi(z;p)$
(9)
$\underline{\phi}(z) = \min_{p\in p_{MCS}} \phi(z;p)$
(10)
$\overline{E}(z) = \max_{p\in p_{MCS}} E(z;p)$
(11)
$\underline{E}(z) = \min_{p\in p_{MCS}} E(z;p)$
(12)

However, this method is too expensive or even unaffordable. One reason is that solving the ODE a large number $NMCS$ of times is time-consuming, even when the analytic solution to the ODE is available for the 1D problem. Another reason is that there is no analytic solution to complex two-dimensional or 3D problems where kinetic particle-in-cell simulations are usually employed to solve the electrostatic field through Poisson's equation.
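
The three-step brute-force procedure of Eqs. (9)–(12) can be sketched as follows, with a cheap analytic stand-in (`phi_model`, an assumption for illustration) in place of the ODE solution:

```python
import numpy as np

rng = np.random.default_rng(0)

def phi_model(z, p):
    """Stand-in for the ODE solution phi(Z; p): a decaying profile whose
    amplitude and decay length depend on p (illustrative only)."""
    amplitude, decay = 1.0 + p[0], 1.0 + p[1]
    return amplitude * np.exp(-z / decay)

# Step 1: sample the parameter box (here two toy parameters in [0, 1]^2)
N_mcs = 2000
p_mcs = rng.random((N_mcs, 2))

# Step 2: one model evaluation per sample
z_grid = np.linspace(0.0, 10.0, 50)
phi_samples = np.array([phi_model(z_grid, p) for p in p_mcs])   # (N_mcs, N_z)

# Step 3: pointwise envelopes over all samples, Eqs. (9) and (10)
phi_upper = phi_samples.max(axis=0)
phi_lower = phi_samples.min(axis=0)
```

The cost is exactly $N_{MCS}$ model evaluations, which is what the adaptive method below is designed to avoid when each evaluation is an expensive ODE or particle-in-cell solve.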

The objective of this study is to develop a method to determine $\overline{\phi}(Z)$, $\underline{\phi}(Z)$, $\overline{E}(Z)$, and $\underline{E}(Z)$ accurately and then calculate $Z^*$ of dust grains. It is noted here that the ultimate application of this method is not the relatively simple 1D problem presented in this work but more complicated or computationally expensive models such as 3D fully kinetic particle-in-cell plasma simulations. For computationally expensive models, evaluating the model consumes the majority of the computational resources, so we will use the number $N_{ODE}$ of ODEs that we need to solve as a measure of the computational cost.

## 3 Introduction to Kriging Model and Adaptive Kriging Method

Before presenting the proposed method, we briefly introduce the Kriging model [12,13] and AKM [13–28], on which the proposed method is based.

### 3.1 Overview of Kriging Model.

The Kriging model performs regression on a black-box function (BBF) using a training sample set, or a design of experiments (DoE). The main idea of Kriging is to treat the BBF as a realization of a Gaussian random field indexed by the input variables of the BBF. The theoretical foundation of the Kriging model is Bayesian inference [29]. From the Bayesian perspective, a prior Gaussian random field is trained by the DoE, and hence a posterior Gaussian random field is generated. The mean value function of the posterior random field, also indexed by the input variables of the BBF, is then the Kriging prediction of the BBF. In addition, the variance function of the posterior random field, likewise indexed by the input variables of the BBF, quantifies the local prediction uncertainty or prediction error.

The randomness, or uncertainty, of the posterior random field mainly comes from the fact that only a limited number of samples of the BBF are used to train the prior random field. In other words, only part of the information about the BBF is available, and the missing part of the information leads to epistemic uncertainty in the random field. Generally, the more training samples we use, the less epistemic uncertainty remains and the stronger the confidence with which we predict the BBF.

### 3.2 Formulation for Kriging Model.

A simple yet widely used prior random field is the stationary Gaussian random field given by
$K(X) = \mu + \eta(X;\xi^2,\theta)$
(13)

where $\mu$ is an unknown parameter representing the mean value of the random field $K(X)$ and $\eta(X;\xi^2,\theta)$ is a zero-mean stationary Gaussian random field indexed by $X$, the input variables of a BBF $k(X)$. Both the variance parameter $\xi^2$ and the correlation parameters $\theta$ of $\eta(X;\xi^2,\theta)$ are unknown. The parameters $\mu$, $\xi^2$, and $\theta$ fully define the prior random field $K(X)$. A DoE, or a training sample set, of $k(X)$ is used to train $K(X)$ and then determine those parameters.

The correlation function $C(x^{(i)}, x^{(j)})$ of $\eta(X;\xi^2,\theta)$ is given by
$C(x^{(i)}, x^{(j)}) = \xi^2 R(x^{(i)}, x^{(j)};\theta)$
(14)
where $R(x^{(i)}, x^{(j)};\theta)$ is the correlation coefficient function of $\eta(X;\xi^2,\theta)$ at two points $x^{(i)}$ and $x^{(j)}$ of $X$. There are many models for $R(x^{(i)}, x^{(j)};\theta)$. A widely used one is the Gaussian model, or squared exponential model, given by
$R(x^{(i)}, x^{(j)};\theta) = \prod_{d=1}^{D} \exp\left[-\theta_d \left(x_d^{(i)} - x_d^{(j)}\right)^2\right]$
(15)

where $D$ is the dimension of $X$, $x_d^{(i)}$ is the $d$th component of $x^{(i)}$, and $\theta_d$ is the $d$th component of $\theta$.

For a BBF $k(X)$, the Kriging model predicts $k(x)$ as $\tilde{k}(x)$, a normal variable whose mean value and variance are $\hat{k}(x)$ and $\sigma^2(x)$, respectively. Note that $\sigma^2(x)$ is also termed the mean squared error. Generally, $\hat{k}(x)$ is regarded as the deterministic prediction of $k(x)$, since a deterministic prediction is usually needed. $\sigma^2(x)$ measures the prediction uncertainty, or prediction error, and therefore we can validate a Kriging model simply using $\hat{k}(x)$ and $\sigma^2(x)$ without employing traditional validation methods, such as cross validation [30]. Because of this advantage, many algorithms have been proposed to adaptively train a Kriging model for expensive BBFs [14–27,31–36]. When sufficient training samples have been used for training, $\sigma^2(x)$ converges to 0, and the normal variable $\tilde{k}(x)$ degenerates to a deterministic value, i.e., the exact value of $k(x)$.
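
A minimal sketch of these predictions for a constant-mean Kriging model follows, with the correlation parameters $\theta$ held fixed for brevity (in practice $\theta$ and $\xi^2$ are estimated, e.g., by maximum likelihood, as in the DACE toolbox; the test function and DoE are illustrative):

```python
import numpy as np

def gauss_corr(X1, X2, theta):
    """Squared-exponential correlation of Eq. (15)."""
    d2 = (X1[:, None, :] - X2[None, :, :]) ** 2
    return np.exp(-(d2 * theta).sum(axis=-1))

def kriging_fit_predict(X, y, Xq, theta):
    """Posterior mean k_hat(x) and variance sigma^2(x) of a constant-mean
    Kriging model with FIXED theta. The mu-estimation term of the variance
    is omitted for brevity."""
    n = len(y)
    R = gauss_corr(X, X, theta) + 1e-10 * np.eye(n)   # small nugget for conditioning
    Rinv = np.linalg.inv(R)
    ones = np.ones(n)
    mu = (ones @ Rinv @ y) / (ones @ Rinv @ ones)     # generalized least-squares mean
    xi2 = ((y - mu) @ Rinv @ (y - mu)) / n            # MLE of the process variance
    r = gauss_corr(Xq, X, theta)                      # query-to-DoE correlations
    mean = mu + r @ Rinv @ (y - mu)
    var = xi2 * (1.0 - np.einsum('ij,jk,ik->i', r, Rinv, r))
    return mean, np.maximum(var, 0.0)

# Toy 1D BBF with a 5-point DoE, loosely mimicking the Fig. 4 setting
k = lambda x: x * np.sin(x)
X = np.array([[0.0], [1.0], [2.0], [5.0], [10.0]])
y = k(X).ravel()
Xq = np.linspace(0.0, 10.0, 101)[:, None]
mean, var = kriging_fit_predict(X, y, Xq, theta=np.array([0.5]))
```

Because Kriging interpolates exactly, `mean` reproduces `y` at the DoE points and `var` vanishes there, growing in between, which is precisely the behavior exploited by the adaptive training below.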

### 3.3 An Example of Kriging Model.

Figure 4 shows a 1D example of a Kriging model. In total, five initial training samples are used to train the Kriging model. The vertical distance between $\hat{k}(x)+\sigma(x)$ and $\hat{k}(x)-\sigma(x)$ graphically quantifies the prediction error at $x$. The larger the distance, the larger the prediction error. On the interval $[0, 2]$, the training samples are denser than on $[2, 10]$. Consequently, the prediction error is smaller on $[0, 2]$ than on $[2, 10]$. It is noted that the prediction error depends not only on the density of the training samples but also on the nonlinearity of the BBF. With the prediction error shown in Fig. 4, it is obvious that in order to improve the prediction accuracy, we need to add training samples somewhere near $x=4$ and $x=8$. Figure 5 shows the updated Kriging model with one more training sample added at $x=8$. The overall prediction accuracy is improved significantly.

Fig. 4
Fig. 5

The main idea of AKM is to adaptively add training samples to update the Kriging model iteratively until an expected accuracy is achieved. Figure 6 shows a brief flowchart of AKM. The QoI is what we aim to calculate, such as $\overline{\phi}(Z)$ and $\underline{\phi}(Z)$. Since the QoI is calculated through the Kriging model instead of the BBF itself, there is an inevitable error caused by the Kriging model. The error metric is used to measure this error. The stopping criterion, which is based on the error metric, is used to determine when to stop adding training samples. Once the error of the QoI is sufficiently small, it is reasonable to return the QoI and stop the algorithm. If the error is large in an iteration, we must add one or more training samples to update the Kriging model. Determining the new training samples is the task of the learning function. A good learning function should be robust and lead to a high convergence rate.

Fig. 6

Given a specific engineering problem, the key to employing an AKM is to make good use of all available information, such as the features of the BBF and QoI, and then design a customized, or task-oriented, error metric, stopping criterion, and learning function.

In the UQ community, a great number of AKMs have been developed to solve various kinds of problems, such as reliability analysis [15,17–24,26,31–33,36], robustness analysis [14], sensitivity analysis [34], robust design [25,35], and reliability-based design [16,27].

## 4 The Proposed Method

In this section, we present the detailed procedures for calculating $\overline{\phi}(Z)$ and $\underline{\phi}(Z)$. Similar procedures also apply to the calculation of $\overline{E}(Z)$ and $\underline{E}(Z)$.

### 4.1 Overview of the Proposed Method.

The main idea of the proposed method is to employ the framework of AKM and customize it to calculate $\overline{\phi}(Z)$ and $\underline{\phi}(Z)$ (as well as $\overline{E}(Z)$ and $\underline{E}(Z)$). Figure 7 shows the brief flowchart of the proposed method. In step 1, we evenly generate $N_{in}$ initial samples of $P$. Generally, $N_{in}$ is much smaller than $N_{MCS}$. Details of this step will be given in Subsec. 4.2. In step 2, the ODE (1D Poisson's equation) is solved $N_{in}$ times, with the $N_{in}$ samples of $P$, to obtain $N_{in}$ samples of $\phi(Z;P)$. In step 3, the samples of $\phi(Z;P)$ are used to build a Kriging model $\hat{\phi}(Z;P)$. Both $Z$ and $P$ are treated as input variables, so the dimension of $\hat{\phi}(Z;P)$ is $1+5=6$. In step 4, $\overline{\phi}(Z)$ and $\underline{\phi}(Z)$ are estimated through
$\overline{\phi}(z) = \max_{p\in p_{MCS}} \hat{\phi}(z;p)$
(16)
$\underline{\phi}(z) = \min_{p\in p_{MCS}} \hat{\phi}(z;p)$
(17)
Fig. 7

In step 5, an error metric is developed to measure the error of $\overline{\phi}(Z)$ and $\underline{\phi}(Z)$ estimated by Eqs. (16) and (17). Step 6 concerns the stopping criterion. Details about steps 5 and 6 will be given in Subsec. 4.4. The learning function involved in step 7 will be given in Subsec. 4.3. The implementation of the proposed method will be given in Subsec. 4.5.

There are two significant differences between most existing AKMs and the proposed method. First, the former aim at estimating a constant value, such as the structural reliability and robustness, while the latter aims at estimating two functions, i.e., $\overline{\phi}(Z)$ and $\underline{\phi}(Z)$. Second, for a given input value, the output of the BBFs involved in the former methods is a single value. In this work, however, for a given realization $p$ of $P$, the output of solving the ODE is a function $\phi(Z;p)$. Given those differences, we cannot use the existing error metrics, stopping criteria, or learning functions. Instead, we take those differences into consideration and design a new error metric, stopping criterion, and learning function to fit the problem. This is the main contribution of the proposed algorithm.

### 4.2 Candidate Samples and Initial Training Samples.

For numerical computation, we need to evenly discretize $\Omega$ into a finite number of points. Suppose $P_i$, the $i$th component of $P$, is discretized into $N_i$ points; then $\Omega$ will be discretized into a total of $N_P = \prod_{i=1}^{5} N_i$ points. For convenience, we denote the set of those points as $p_{MCS}$. Similarly, $Z$ is discretized into $N_Z$ points (denoted as $z_{MCS}$) in its range $[Z_{\min}, Z_{\max}]$. Theoretically, any $p\in\Omega$ could be selected as a training sample for $\hat{\phi}(Z;P)$. However, we do not want any two training samples to be clustered together, because we use exact interpolation in Kriging, and clustered training samples may impair the training and the convergence rate of the proposed AKM. Therefore, we only select training samples of $P$ from $p_{MCS}$ and call $p_{MCS}$ the candidate samples or candidate points.

The $N_{in}$ initial training samples $p_{in}$ of $P$ are selected such that they are distributed in $\Omega$ as evenly as possible. Commonly used sampling methods include random sampling, Latin hypercube sampling, and Hammersley sampling [37]. In this study, we employ the Hammersley sampling method because it has better uniformity properties over a multidimensional space [38]. The Hammersley sampling method first generates initial training samples in a five-dimensional hypercube $[0,1]^5$, and then they are mapped into $\Omega$ to get the initial training samples of $P$. Note that the five dimensions of the hypercube are assumed to be independent, consistent with the assumption that all variables in $P$ are independent. Those initial training samples, however, are not necessarily among $p_{MCS}$, so we need to round them to the nearest ones in $p_{MCS}$. Since the components of $P$ do not necessarily share the same unit, the distances used to find the nearest samples should be normalized. For example, the distance $d$ between a sample $p^{(h)}$ generated by Hammersley sampling and a candidate sample $p^{(c)}$ in $p_{MCS}$ is given by
$d(p^{(h)}, p^{(c)}) = \sum_{i=1}^{5} \left(\dfrac{p_i^{(h)} - p_i^{(c)}}{P_{i,\max} - P_{i,\min}}\right)^2$
(18)

where $p_i^{(h)}$ is the $i$th component of $p^{(h)}$, $p_i^{(c)}$ is the $i$th component of $p^{(c)}$, $P_{i,\max}$ is the maximal value of $P_i$ (the $i$th component of $P$), and $P_{i,\min}$ is the minimal value of $P_i$. Then $p^{(h)}$ is rounded to $p^* = \operatorname{argmin}_{p^{(c)}\in p_{MCS}} d(p^{(h)}, p^{(c)})$. When all the initial training samples generated by Hammersley sampling have been rounded to the nearest ones in $p_{MCS}$, we get the initial training sample set $p_{in}\subset p_{MCS}$ of $P$.
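
The sampling and rounding steps can be sketched as follows. The parameter box and grid sizes are illustrative; the Hammersley construction uses the common convention (first coordinate $i/n$, remaining coordinates radical inverses in successive prime bases) with an index offset chosen so that it reproduces Table 3.

```python
import numpy as np

def radical_inverse(i, base):
    """Van der Corput radical inverse of integer i in the given base."""
    inv, f = 0.0, 1.0 / base
    while i > 0:
        inv += f * (i % base)
        i //= base
        f /= base
    return inv

def hammersley(n, dim):
    """n Hammersley points in [0,1]^dim: coordinate 0 is i/n, the others are
    radical inverses of i+1 in the first dim-1 primes."""
    primes = [2, 3, 5, 7, 11]
    pts = np.empty((n, dim))
    for i in range(n):
        pts[i, 0] = i / n
        for d in range(1, dim):
            pts[i, d] = radical_inverse(i + 1, primes[d - 1])
    return pts

def round_to_candidates(p_h, p_cand, p_min, p_max):
    """Round each sample to the nearest candidate point under the
    range-normalized distance of Eq. (18)."""
    scale = p_max - p_min
    d2 = (((p_h[:, None, :] - p_cand[None, :, :]) / scale) ** 2).sum(axis=-1)
    return p_cand[d2.argmin(axis=1)]

# Illustrative 5D parameter box with a 5-point-per-dimension candidate grid
p_min = np.zeros(5)
p_max = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
grids = [np.linspace(lo, hi, 5) for lo, hi in zip(p_min, p_max)]
p_cand = np.stack(np.meshgrid(*grids, indexing='ij'), axis=-1).reshape(-1, 5)
p_h = p_min + hammersley(5, 5) * (p_max - p_min)     # map [0,1]^5 into the box
p_in = round_to_candidates(p_h, p_cand, p_min, p_max)
```

After rounding, every initial sample coincides with a grid point of $p_{MCS}$, which avoids near-duplicate training samples when the adaptive loop later adds candidates from the same grid.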

Solving the ODE $N_{in}$ times, each time with a sample in $p_{in}$, we get $N_{in}$ samples of $\phi(Z;P)$. Note that each sample of $\phi(Z;P)$ has $N_Z$ points, since we discretized $Z$ into $N_Z$ points. Then, we have $N_Z N_{in}$ input training points $z_{MCS}\times p_{in}$. Except for the $N_{in}$ points at $Z_{\max}$, we select the other $(N_Z-1)N_{in}$ points to form the first part of the input training sample set of $\hat{\phi}(Z;P)$. We denote those $(N_Z-1)N_{in}$ input training points as $x_{inp}^{1}$, where the subscript inp of $x$ represents input, and the superscript 1 means that $x_{inp}^{1}$ is only the first part of the entire input training sample set. The other part, $x_{inp}^{2}$, is given below.

Since for any $p\in p_{MCS}$, it is known that $\phi(Z_{\max};p)\equiv 0$ (Fig. 1), theoretically we should also add all the $N_P$ points $Z_{\max}\times p_{MCS}$ as input training samples so as to make good use of all known information. However, it is not practical to do so. For example, if $N_i=10,\ i=1,2,\ldots,5$, we would need to add $N_P=10^5$ points as input training samples. So many training samples would make $\hat{\phi}(Z;P)$ complex, expensive, and inaccurate, losing its expected properties. To balance the need to add them against the drawback of adding all of them, we add only part of them. Specifically, we evenly generate $N_{P'}$ samples $p'$ of $P$ using procedures similar to those used to generate $p_{in}$. Then $x_{inp}^{2}$ is given by
$x_{inp}^{2} = \{(Z_{\max}, p) \mid p\in p'\}$
(19)

The input training sample set is $x_{inp} = x_{inp}^{1}\cup x_{inp}^{2}$. Denote the corresponding electric potential $\phi$ at $x_{inp}$ as $\phi_{out}$. The input–output training sample pairs $(x_{inp}, \phi_{out})$ are used to build the initial $\hat{\phi}(Z;P)$. More training samples will be added to update $\hat{\phi}(Z;P)$ later.

### 4.3 Learning Function.

Generally, the initial Kriging model is not accurate enough to obtain $\overline{\phi}(Z)$ or $\underline{\phi}(Z)$ accurately through Eqs. (5) and (6). To improve the accuracy of $\hat{\phi}(Z;P)$, and hence of $\overline{\phi}(Z)$ and $\underline{\phi}(Z)$, we need to add training samples of $\phi(Z;P)$ to refine $\hat{\phi}(Z;P)$. A learning function is used to determine which sample of $P$, and hence of $\phi(Z;P)$, should be added.

In our previous work [3], we used the learning function given by
$p^{(next)} = \operatorname{argmax}_{p\in p_{MCS}} \sum_{z\in z_{MCS}} \left|\dfrac{\sigma(z;p)}{\hat{\phi}(z;p)}\right|$
(20)

where $p^{(next)}$ is the next to-be-added sample of $P$, $\hat{\phi}(z;p)$ is the value of $\phi(z;p)$ predicted by the Kriging model $\hat{\phi}(Z;P)$, and $\sigma(z;p)$ is the standard deviation of the prediction. Both $\hat{\phi}(z;p)$ and $\sigma(z;p)$ are calculated by the Kriging toolbox. The ratio $\sigma(z;p)/\hat{\phi}(z;p)$ is the deviation coefficient of the prediction at $(z;p)$, and thus the learning function in Eq. (20) selects the training sample $p^{(next)}$ at which the summation of the absolute deviation coefficients of the predictions along the $Z$ coordinate is maximal. The summation $\sum_{z\in z_{MCS}} \left|\sigma(z;p)/\hat{\phi}(z;p)\right|$ measures the overall prediction error at $p$. Adding a sample of $\phi(Z;P)$ at $p$ to update $\hat{\phi}(Z;P)$ will drive $\sum_{z\in z_{MCS}} \left|\sigma(z;p)/\hat{\phi}(z;p)\right|$ to zero, and therefore adding a sample of $\phi(Z;P)$ at $p^{(next)}$ to update $\hat{\phi}(Z;P)$ will decrease the overall prediction error of $\phi(Z;P)$ to the largest extent. This is the basic mechanism of the learning function in Eq. (20).

However, we do not necessarily need $\hat{\phi}(Z;P)$ to be accurate everywhere. Since the objective is to estimate $\overline{\phi}(Z)$ and $\underline{\phi}(Z)$ accurately and efficiently, we only need $\hat{\phi}(Z;P)$ to be partially or locally accurate enough to help estimate $\overline{\phi}(Z)$ and $\underline{\phi}(Z)$ accurately. With this idea, we can further improve the efficiency of updating $\hat{\phi}(Z;P)$ by adding training samples more judiciously.

A widely used learning function in AKMs that aim at calculating extreme values is the expected improvement function [28]. The expected improvement function $\overline{\xi}(z,p)$ of $\overline{\phi}(z)$ is given by
$\overline{\xi}(z,p) = \left(\hat{\phi}(z;p) - \overline{\phi}(z)\right)\Phi\left(\dfrac{\hat{\phi}(z;p) - \overline{\phi}(z)}{\sigma(z;p)}\right) + \sigma(z;p)\,\varphi\left(\dfrac{\hat{\phi}(z;p) - \overline{\phi}(z)}{\sigma(z;p)}\right)$
(21)
where $\Phi(\cdot)$ and $\varphi(\cdot)$ are the cumulative distribution function and probability density function of the standard Gaussian variable, respectively. A simple interpretation of the expected improvement function $\overline{\xi}(z,p)$ is that, if we added a training point at $(z,p)$, we could expect to improve the current $\overline{\phi}(z)$ to $\overline{\phi}(z) + \overline{\xi}(z,p)$, with an improvement rate of $\overline{\xi}(z,p)/\overline{\phi}(z)$. If the objective were to estimate $\overline{\phi}(z)$, a maximal value at a single $z$, instead of $\overline{\phi}(Z)$, an entire function, we could determine the next training sample $p^{(next)}$ of $P$ using the learning function given by
$p^{(next)} = \operatorname{argmax}_{p\in p_{MCS}} \left|\overline{\xi}(z,p)/\overline{\phi}(z)\right|$
(22)
However, since the objective is to determine the entire function $\overline{\phi}(Z)$, and one ODE solution contributes $N_Z$ training points, we need a learning function that aims at improving the calculation accuracy of the entire function $\overline{\phi}(Z)$. Therefore, we propose a learning function given by
$p^{(next)} = \operatorname{argmax}_{p\in p_{MCS}} \sum_{z\in z_{MCS}} \left|\overline{\xi}(z,p)/\overline{\phi}(z)\right|$
(23)

where we sum up the absolute values of the improvement rates. This learning function means that if we added a training sample $\phi(Z;p^{(next)})$, which has $N_Z$ points, to update $\hat{\phi}(Z;P)$, we could expect to get the best improvement of $\overline{\phi}(Z)$.

Similarly, the expected improvement function $\underline{\xi}(z,p)$ of $\underline{\phi}(z)$ is given by
$\underline{\xi}(z,p) = \left(\underline{\phi}(z) - \hat{\phi}(z;p)\right)\Phi\left(\dfrac{\underline{\phi}(z) - \hat{\phi}(z;p)}{\sigma(z;p)}\right) + \sigma(z;p)\,\varphi\left(\dfrac{\underline{\phi}(z) - \hat{\phi}(z;p)}{\sigma(z;p)}\right)$
(24)
To estimate $\underline{\phi}(Z)$, we likewise propose a learning function given by
$p^{(next)} = \operatorname{argmax}_{p\in p_{MCS}} \sum_{z\in z_{MCS}} \left|\underline{\xi}(z,p)/\underline{\phi}(z)\right|$
(25)
In order to estimate both $\overline{\phi}(Z)$ and $\underline{\phi}(Z)$ simultaneously, we combine Eqs. (23) and (25) into the learning function
$p^{(next)} = \operatorname{argmax}\left\{\max_{p\in p_{MCS}} \sum_{z\in z_{MCS}} \left|\dfrac{\overline{\xi}(z,p)}{\overline{\phi}(z)}\right|,\ \max_{p\in p_{MCS}} \sum_{z\in z_{MCS}} \left|\dfrac{\underline{\xi}(z,p)}{\underline{\phi}(z)}\right|\right\}$
(26)

Once $p^{(next)}$ has been determined, we solve the ODE to numerically obtain a function $\phi(Z;p^{(next)})$, from which we take $(N_Z-1)$ points (the remaining one at $Z_{\max}$, where $\phi\equiv 0$, is excluded) and add them into $(x_{inp}, \phi_{out})$ to enrich the training samples.
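
A sketch of the learning functions of Eqs. (21)–(26) follows, assuming Kriging predictions (mean and standard deviation) are already available on the candidate grid; the grid values and names are illustrative, not the sheath model.

```python
import numpy as np
from math import erf, sqrt, pi

# Standard-normal CDF and PDF, playing Phi and varphi in Eqs. (21) and (24)
Phi = np.vectorize(lambda t: 0.5 * (1.0 + erf(t / sqrt(2.0))))
phi_pdf = lambda t: np.exp(-0.5 * t * t) / sqrt(2.0 * pi)

def expected_improvement_max(pred, sigma, best):
    """Expected improvement of raising a current pointwise maximum, Eq. (21).
    pred, sigma: Kriging mean/std on an (N_p, N_z) grid; best: current bound, (N_z,)."""
    s = np.maximum(sigma, 1e-12)     # guard against zero std at training points
    u = (pred - best) / s
    return (pred - best) * Phi(u) + s * phi_pdf(u)

def next_sample_index(pred, sigma, ub, lb):
    """Learning function of Eq. (26): index of the candidate p whose summed
    absolute improvement rate, for either bound, is largest."""
    ei_up = expected_improvement_max(pred, sigma, ub)     # Eq. (21)
    ei_lo = expected_improvement_max(-pred, sigma, -lb)   # Eq. (24), via min = -max(-.)
    score_up = np.abs(ei_up / ub).sum(axis=1)             # Eq. (23)
    score_lo = np.abs(ei_lo / lb).sum(axis=1)             # Eq. (25)
    return int(np.maximum(score_up, score_lo).argmax())   # Eq. (26)

# Toy grid: 2 candidate p's x 2 z-points (values illustrative)
pred = np.array([[1.0, 2.0], [3.0, 1.0]])
sigma = np.full((2, 2), 0.5)
ub, lb = pred.max(axis=0), pred.min(axis=0)
idx = next_sample_index(pred, sigma, ub, lb)
```

Summing the improvement rates along $z$ (rather than maximizing at a single $z$) is what orients the selection toward the entire bound curves, since each ODE solve delivers a whole profile rather than a single output value.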

### 4.4 Error Metric and Stopping Criterion.

Since Eqs. (16) and (17) cannot yield absolutely accurate $\overline{\phi}(Z)$ and $\underline{\phi}(Z)$ due to the prediction error of $\hat{\phi}(Z;P)$, we need an error metric to measure the error of the currently estimated $\overline{\phi}(Z)$ and $\underline{\phi}(Z)$. Since $\left|\overline{\xi}(z,p)/\overline{\phi}(z)\right|$ measures the absolute expected improvement rate of $\overline{\phi}(z)$, if $\left|\overline{\xi}(z,p)/\overline{\phi}(z)\right|$ is small for every $z\in z_{MCS}$ and $p\in p_{MCS}$, then $\overline{\phi}(Z)$ is expected to be sufficiently accurate. Therefore, we propose to use $\max_{z\in z_{MCS},\,p\in p_{MCS}} \left|\overline{\xi}(z,p)/\overline{\phi}(z)\right|$ to quantify the error of $\overline{\phi}(Z)$. Similarly, $\max_{z\in z_{MCS},\,p\in p_{MCS}} \left|\underline{\xi}(z,p)/\underline{\phi}(z)\right|$ is used to quantify the error of $\underline{\phi}(Z)$. Combining both, we have the error metric $\Gamma$, which measures the error of both $\overline{\phi}(Z)$ and $\underline{\phi}(Z)$, given by
$\Gamma = \max\left\{\max_{z\in z_{MCS},\,p\in p_{MCS}} \left|\dfrac{\overline{\xi}(z,p)}{\overline{\phi}(z)}\right|,\ \max_{z\in z_{MCS},\,p\in p_{MCS}} \left|\dfrac{\underline{\xi}(z,p)}{\underline{\phi}(z)}\right|\right\}$
(27)
Once $\Gamma$ is small enough, the estimated $\overline{\phi}(Z)$ and $\underline{\phi}(Z)$ are expected to be sufficiently accurate. Therefore, the stopping criterion shown in Fig. 7 is defined as
$Γ<γ$
(28)

where $γ$ is a threshold that controls the efficiency and accuracy of the proposed AKM. Generally speaking, a smaller $γ$ will lead to higher accuracy but lower efficiency.
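
The error metric and stopping criterion can be sketched directly from precomputed expected-improvement grids (the grid values below are illustrative placeholders):

```python
import numpy as np

def error_metric(ei_up, ei_lo, ub, lb):
    """Gamma of Eq. (27): largest absolute expected-improvement rate of either
    bound over all (z, p) grid points.
    ei_up, ei_lo: improvement functions on an (N_p, N_z) grid; ub, lb: (N_z,)."""
    return max(np.abs(ei_up / ub).max(), np.abs(ei_lo / lb).max())

def converged(ei_up, ei_lo, ub, lb, gamma=0.01):
    """Stopping criterion of Eq. (28): Gamma < gamma."""
    return error_metric(ei_up, ei_lo, ub, lb) < gamma

# Toy grids: 3 candidates x 4 z-points, with bounds translated away from zero
ei_up = np.full((3, 4), 1e-4)
ei_lo = np.full((3, 4), 2e-4)
ub = np.full(4, 5.0)
lb = np.full(4, 1.0)
```

Here $\Gamma = \max(10^{-4}/5,\ 2\times 10^{-4}/1) = 2\times 10^{-4}$, so the criterion is met for the paper's threshold $\gamma = 0.01$. The division by the bound curves is why the translation by $\epsilon$ in Subsec. 4.5 matters: bounds near zero would blow up these ratios.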

### 4.5 Implementation.

As shown in Fig. 1, $\phi(Z;P)$ approaches zero as $Z$ takes large values. As a result, $\overline{\phi}(z)$ and $\underline{\phi}(z)$ in Eqs. (26) and (27) are likely to take very small values close to zero. This causes singularity in the evaluation of Eqs. (26) and (27), harming the robustness of the proposed algorithm. To solve this issue, we translate all training samples of $\phi(Z;P)$ simply by adding a negative constant $\epsilon$. This way, the translated $\phi(Z;P)$ never approaches zero, and the singularity issue is avoided. Trained on the translated samples of $\phi(Z;P)$, the Kriging model $\hat{\phi}(Z;P)$ also yields translated $\overline{\phi}(Z)$ and $\underline{\phi}(Z)$. We can translate $\overline{\phi}(Z)$ and $\underline{\phi}(Z)$ back simply by subtracting $\epsilon$ from them. Note that there is no rigorous theory quantifying how $\epsilon$ affects the properties of the proposed AKM. We suggest determining $\epsilon$ using
$\epsilon = \operatorname{mean}\{\phi(0;p) \mid p\in p_{in}\}$
(29)

where $\operatorname{mean}(\cdot)$ denotes the mean value.

Based on all the procedures given above, the pseudocode of the proposed AKM is given in Algorithm 1.


Algorithm 1: Pseudo-codes of the proposed method

1. Evenly discretize $\Omega$ into $N_P$ points $p_{MCS}$.
2. Evenly discretize the interval $[Z_{\min}, Z_{\max}]$ into $N_Z$ points $z_{MCS}$.
3. Generate $N_{in}$ samples $p_{in}$ of $P$ with the procedures given in Subsec. 4.2.
4. Solve the ODE $N_{in}$ times to get the $N_{in}$ samples $\phi(Z;p),\ p\in p_{in}$, of $\phi(Z;P)$; calculate $\epsilon$ with Eq. (29); set $N_{ODE} = N_{in}$.
5. Determine $(x_{inp}, \phi_{out})$ with the procedures given in Subsec. 4.2; $\phi_{out} = \phi_{out} + \epsilon$.
6. WHILE TRUE DO
7. Build the Kriging model $\hat{\phi}(Z;P)$ using $(x_{inp}, \phi_{out})$.
8. Calculate $\overline{\phi}(Z)$ and $\underline{\phi}(Z)$ with Eqs. (16) and (17); $\overline{\phi}(Z) = \overline{\phi}(Z) - \epsilon$; $\underline{\phi}(Z) = \underline{\phi}(Z) - \epsilon$.
9. Calculate $\Gamma$ with Eq. (27).
10. IF $\Gamma \ge \gamma$ DO
11. Solve Eq. (26) for $p^{(next)}$; $N_{ODE} = N_{ODE} + 1$.
12. Solve the ODE to get a new sample $\phi(Z;p^{(next)})$; $\phi(Z;p^{(next)}) = \phi(Z;p^{(next)}) + \epsilon$; all points of $\phi(Z;p^{(next)})$ except the one at $Z_{\max}$ are added into $(x_{inp}, \phi_{out})$.
13. ELSE
14. BREAK WHILE
15. END IF
16. END WHILE
17. RETURN $\overline{\phi}(Z)$, $\underline{\phi}(Z)$, and $N_{ODE}$.

### 4.6 Validation Discussion.

Theoretically, it is vital to validate the Kriging model to make sure that it has been trained accurately. An explicit validation, however, is not involved in the proposed AKM, for two main reasons. First, the adaptive training focuses on the accuracy of the QoI instead of the accuracy of the Kriging model. Once there is an indication that the accuracy of the QoI in the current training iteration is sufficient, i.e., the stopping criterion in Eq. (28) is satisfied, the algorithm stops adding training samples, regardless of whether the Kriging model is globally accurate. As a result, when the algorithm has converged, it is very likely that the Kriging model is accurate only on some subdomains and inaccurate on others. Therefore, explicit cross validation is not suitable. Second, the error metric $\Gamma$ measures the accuracy of the QoI, so we in fact perform validation implicitly. As long as the accuracy of the QoI is sufficient, it does not matter whether the Kriging model is accurate on the entire domain.

## 5 Results

In this section, we illustrate the proposed AKM. MCS is used to solve the same problems by brute force, and the MCS results are treated as the reference to verify the proposed AKM. We build the Kriging model and calculate the Kriging predictions using the DACE toolbox [39]. The anisotropic Gaussian kernel is used.

### 5.1 Sheath Profile.

We consider the 1D photoelectron sheath problem discussed in Sec. 2. The sun elevation angle is 9 deg. The minimal and maximal values of $P = (v_d, T_e, T_p, n_{i,\infty}, n_p)$ are given in Table 1. We use both MCS and the proposed AKM to estimate the bounds $\overline{\phi}(Z)$ and $\underline{\phi}(Z)$. The values of all other involved parameters are given in Table 2.

Table 1

Variables of uncertainty

| Variable | $v_d$ (m/s) | $T_e$ (eV) | $T_p$ (eV) | $n_{i,\infty}$ (cm$^{-3}$) | $n_p$ (cm$^{-3}$) |
|---|---|---|---|---|---|
| Minimum | 421,200 | 10.8 | 1.8 | 7.83 | 57.6 |
| Maximum | 514,800 | 13.2 | 2.2 | 9.57 | 70.4 |
Table 2

Parameter values

| Parameter | $N_1$–$N_5$ | $N_P$ | $N_{in}$ | $N_P'$ | $N_Z$ | $\gamma$ |
|---|---|---|---|---|---|---|
| Value | 5 | $5^5 = 3125$ | 5 | 100 | 50 | 0.01 |

The domain $\Omega$ of $P$ is discretized into $N_P = 5^5 = 3125$ points, which are assembled into $p_{MCS}$. The $N_{in} = 5$ samples in the hypercube $[0,1]^5$, generated by the Hammersley sampling method, are given in Table 3. These five samples are then mapped into $\Omega$, as given in Table 4. Rounding the five samples in $\Omega$ to the nearest points in $p_{MCS}$, we obtain the initial samples $p_{in}$ of $P$, given in Table 5. Solving the ODE five times, each time with one sample in $p_{in}$, we obtain five samples of $\phi(Z;P)$, as shown in Fig. 8.
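The Hammersley construction is simple enough to sketch: in one common form (which matches Table 3 here), the first coordinate of the $i$-th point is $i/n$ and the remaining coordinates are radical inverses of $i+1$ in successive prime bases; the unit-hypercube points are then scaled linearly into $\Omega$. The bounds below are those of Table 1, with the upper $v_d$ bound of $514{,}800$ m/s consistent with the mapping in Table 4.

```python
def radical_inverse(i, base):
    # Reflect the base-`base` digits of i about the radix point.
    inv, f = 0.0, 1.0 / base
    while i > 0:
        inv += f * (i % base)
        i //= base
        f /= base
    return inv

def hammersley(n, primes=(2, 3, 5, 7)):
    # i-th point: first coordinate i/n; remaining coordinates are radical
    # inverses of i+1 in successive prime bases (reproduces Table 3 here).
    return [[i / n] + [radical_inverse(i + 1, b) for b in primes]
            for i in range(n)]

# Scale the unit-hypercube samples into the parameter domain Omega.
lo = [421200.0, 10.8, 1.8, 7.83, 57.6]   # lower bounds of (vd, Te, Tp, ni, np)
hi = [514800.0, 13.2, 2.2, 9.57, 70.4]   # upper bounds
omega = [[l + u * (h - l) for u, l, h in zip(row, lo, hi)]
         for row in hammersley(5)]
print(round(omega[1][0], 4))   # 439920.0, matching Table 4
```

Rounding each row of `omega` to the nearest grid point of $p_{MCS}$ then yields the initial designs of Table 5.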

Fig. 8
Table 3

Samples generated by the Hammersley sampling method

| Sample number | Dimension 1 | Dimension 2 | Dimension 3 | Dimension 4 | Dimension 5 |
|---|---|---|---|---|---|
| 1 | 0 | 0.5000 | 0.3333 | 0.2000 | 0.1429 |
| 2 | 0.2 | 0.2500 | 0.6667 | 0.4000 | 0.2857 |
| 3 | 0.4 | 0.7500 | 0.1111 | 0.6000 | 0.4286 |
| 4 | 0.6 | 0.1250 | 0.4444 | 0.8000 | 0.5714 |
| 5 | 0.8 | 0.6250 | 0.7778 | 0.0400 | 0.7143 |
Table 4

Samples mapped into $\Omega$

| Sample number | $v_d$ (m/s) | $T_e$ (eV) | $T_p$ (eV) | $n_{i,\infty}$ (cm$^{-3}$) | $n_p$ (cm$^{-3}$) |
|---|---|---|---|---|---|
| 1 | 421,200 | 12.0000 | 1.9333 | 8.1780 | 59.4286 |
| 2 | 439,920 | 11.4000 | 2.0667 | 8.5260 | 61.2571 |
| 3 | 458,640 | 12.6000 | 1.8444 | 8.8740 | 63.0857 |
| 4 | 477,360 | 11.1000 | 1.9778 | 9.2220 | 64.9143 |
| 5 | 496,080 | 12.3000 | 2.1111 | 7.8996 | 66.7429 |
Table 5

Initial samples of $P$

| Sample number | $v_d$ (m/s) | $T_e$ (eV) | $T_p$ (eV) | $n_{i,\infty}$ (cm$^{-3}$) | $n_p$ (cm$^{-3}$) |
|---|---|---|---|---|---|
| 1 | 421,200 | 12.0000 | 1.9000 | 8.2650 | 60.8000 |
| 2 | 444,600 | 11.4000 | 2.1000 | 8.7000 | 60.8000 |
| 3 | 468,000 | 12.6000 | 1.8000 | 8.7000 | 64.0000 |
| 4 | 468,000 | 11.4000 | 2.0000 | 9.1350 | 64.0000 |
| 5 | 491,400 | 12.0000 | 2.1000 | 7.8300 | 67.2000 |

Each sample of $\phi(Z;P)$ contains $N_Z = 50$ numerical points. Excluding the five points at $Z_{max}$, we have $N_Z N_{in} - 5 = 245$ training points in $(x_{inp}^1, \phi_{out}^1)$. With the Hammersley sampling method, we generate $N_P' = 100$ samples of $P$ and hence 100 training points in $(x_{inp}^2, \phi_{out}^2)$. Note that all points in $(x_{inp}^2, \phi_{out}^2)$ have $Z = Z_{max}$ and $\phi = 0$. Combining $(x_{inp}^1, \phi_{out}^1)$ and $(x_{inp}^2, \phi_{out}^2)$, we have 345 training points in $(x_{inp}, \phi_{out})$. To perform the translation mentioned in Subsec. 4.5, we update $\phi_{out}$ simply by $\phi_{out} = \phi_{out} + \epsilon$, where $\epsilon = -6.97$ V is obtained with Eq. (29). With the updated $(x_{inp}, \phi_{out})$, we build an initial Kriging model and then estimate $\overline{\phi}(Z)$ and $\underline{\phi}(Z)$ through Eqs. (16) and (17). Finally, we translate the bounds back by $\overline{\phi}(Z) = \overline{\phi}(Z) - \epsilon$ and $\underline{\phi}(Z) = \underline{\phi}(Z) - \epsilon$. Figure 9 compares the $\overline{\phi}(Z)$ and $\underline{\phi}(Z)$ estimated by MCS and by the proposed AKM with the initial Kriging model; the initial Kriging model is not able to predict $\overline{\phi}(Z)$ or $\underline{\phi}(Z)$ with sufficient accuracy.
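The bookkeeping of the initial training set can be checked with a short sketch; the curve contents below are placeholders, and only the counts matter.

```python
NZ, N_in, NP_prime = 50, 5, 100        # values from Table 2
eps = -6.97                            # translation offset from Eq. (29), volts

# Placeholder curves: N_in sampled curves of NZ points each; the last point of
# every curve lies at Z_max, where phi = 0 by construction.
curves = [[(p, z, 0.0) for z in range(NZ)] for p in range(N_in)]

set1 = [pt for curve in curves for pt in curve[:-1]]      # drop the Z_max points
set2 = [(p, NZ - 1, 0.0 + eps) for p in range(NP_prime)]  # boundary points, shifted

training = set1 + set2
print(len(set1), len(set2), len(training))   # 245 100 345
```

The 100 boundary points pin the translated surrogate to $\phi + \epsilon = \epsilon$ at $Z_{max}$ across the whole parameter domain.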

Fig. 9

To improve the accuracy, the proposed method indicates adding a sample at $p^{(next)} = (514{,}800,\ 13.2,\ 2.2,\ 9.57,\ 57.6)$. With this $p^{(next)}$, we solve the ODE and obtain a new sample of $\phi(Z;P)$, which contains $N_Z = 50$ numerical points. We translate all these points and add them, excluding the one at $Z_{max}$, into $(x_{inp}, \phi_{out})$; the point at $Z_{max}$ is discarded because $(x_{inp}^2, \phi_{out}^2)$ already contains enough points at $Z_{max}$. With the updated $(x_{inp}, \phi_{out})$, we build a new $\hat{\phi}(Z;P)$, which in turn indicates another $p^{(next)}$. Following the same procedure, more samples of $\phi(Z;P)$ are added to refine $\hat{\phi}(Z;P)$ until the stopping criterion given in Eq. (28) is satisfied.

The final estimates of $\overline{\phi}(Z)$ and $\underline{\phi}(Z)$ are shown in Fig. 10, which shows that the proposed AKM estimates both bounds very accurately. Sixteen more samples of $\phi(Z;P)$ were added to refine $\hat{\phi}(Z;P)$, so in total $N_{ODE} = N_{in} + 16 = 21$ ODE solutions are needed. Compared with the $N_P = 3125$ ODE solutions needed by MCS, the proposed method is very efficient.

Fig. 10

### 5.2 Dust Levitation.

In this example, we consider the same 1D photoelectron sheath problem as in Subsec. 5.1, but the objective is to estimate the bounds $\overline{E}(Z)$ and $\underline{E}(Z)$ and then calculate the dust levitation height. The values of all involved parameters are given in Table 6.

Table 6

Parameter values

| Parameter | $N_1$–$N_5$ | $N_P$ | $N_{in}$ | $N_P'$ | $N_Z$ | $\gamma$ |
|---|---|---|---|---|---|---|
| Value | 5 | $5^5 = 3125$ | 5 | 100 | 50 | 0.01 |

The procedure used to estimate $\overline{E}(Z)$ and $\underline{E}(Z)$ is almost the same as that used to estimate $\overline{\phi}(Z)$ and $\underline{\phi}(Z)$; the only difference is that samples of $E(Z;P)$ instead of $\phi(Z;P)$ are used. The final estimates of $\overline{E}(Z)$ and $\underline{E}(Z)$ are shown in Fig. 11, which shows that the proposed AKM is very accurate. As for efficiency, the proposed method needs only $N_{ODE} = N_{in} + 18 = 23$ ODE solutions, compared with the $N_P = 3125$ ODE solutions needed by MCS.

Fig. 11

Once the upper and lower bounds of the electric field have been determined, we can use them to determine the levitation heights of dust grains. Suppose there are two types of dust grains, A and B, in the electric field. The relevant parameters of the grains are given in Table 7, where $e = 1.602 \times 10^{-19}$ C is the elementary charge. The dust levitation heights are shown in Fig. 12 and given in Table 8. Due to the uncertainty of $P$, the levitation heights of both A and B are also uncertain. The levitation height of A may take any value in the interval [0 m, 9.33 m] estimated by the proposed method, while the interval determined by MCS is [0 m, 9.26 m]; hence the proposed method estimates the levitation height of grain A with sufficient accuracy. A similar conclusion applies to the levitation height of grain B.
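Reading a levitation height off a sampled bound curve amounts to locating where the field profile $E(Z)$ crosses the grain's balance level $w$. A minimal linear-interpolation sketch (the profile values are hypothetical and `crossing_heights` is an illustrative helper, not the paper's code):

```python
def crossing_heights(Zs, Es, w):
    # Heights where the sampled profile E(Z) crosses the balance level w,
    # located by linear interpolation between adjacent grid points.
    hits = []
    for (z0, e0), (z1, e1) in zip(zip(Zs, Es), zip(Zs[1:], Es[1:])):
        if (e0 - w) * (e1 - w) <= 0.0 and e0 != e1:
            hits.append(z0 + (w - e0) * (z1 - z0) / (e1 - e0))
    return hits

# Hypothetical monotone field profile; w = -0.4658 V/m is grain A's balance
# level from Table 7.
Zs = [0.0, 5.0, 10.0, 15.0]
Es = [-1.0, -0.6, -0.3, -0.1]
print(crossing_heights(Zs, Es, -0.4658))
```

Applying such a crossing search to the upper and lower bound curves of $E(Z)$ yields the levitation-height intervals reported in Table 8.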

Fig. 12
Table 7

Parameters of grains A and B

| Grain | $r$ ($\mu$m) | $m$ (g) | $q/e$ | $w$ (V/m) |
|---|---|---|---|---|
| A | 0.5 | $1.5268 \times 10^{-12}$ | 50,000 | $-0.4658$ |
| B | 0.3 | $3.2979 \times 10^{-13}$ | 45,000 | $-0.1118$ |
Table 8

Dust levitation heights: the proposed AKM obtained accurate levitation heights

| Grain | Quantity | AKM | MCS | Relative error (%) |
|---|---|---|---|---|
| A | $Z_{min}^{*}$ (m) | 0.00 | 0.00 | 0.0 |
| A | $Z_{max}^{*}$ (m) | 9.33 | 9.26 | 0.8 |
| B | $Z_{min}^{*}$ (m) | 10.88 | 11.00 | $-1.1$ |
| B | $Z_{max}^{*}$ (m) | 25.55 | 25.55 | 0.0 |

Given any dust grain with known $w$ value, we can easily determine its levitation height interval using the method shown in Fig. 12. This will help to evaluate the risk or damage caused by the levitated dust grains for lunar exploration missions.

## 6 Conclusions

We presented an adaptive Kriging-based method to perform UQ analysis of the 1D photoelectron sheath and dust levitation on the lunar surface. A recently derived 1D photoelectron sheath model was used as the high-fidelity physics-based model and the black-box function. An adaptive Kriging method, with a task-oriented learning function and stopping criterion, was utilized to improve the efficiency in calculating the upper and lower bounds of the electric potential as well as the dust levitation height, given the intervals of the model parameters. Numerical experiments show that the proposed AKM is both accurate and efficient. Current and ongoing efforts focus on building adaptive Kriging models for two-dimensional and three-dimensional kinetic particle simulations of the lunar plasma/dust environment and performing UQ analysis.

## Acknowledgment

We would like to acknowledge the support from NASA-Missouri Space Grant Consortium through NASA-EPSCoR Missouri and NSF-CMMI No. 1923799.

## Funding Data

• National Aeronautics and Space Administration (NASA-Missouri Space Grant Consortium) (Funder ID: 10.13039/100000104).

• National Science Foundation (CMMI No. 1923799; Funder ID: 10.13039/100000001).

## References

1. Der Kiureghian, A., and Ditlevsen, O., 2009, "Aleatory or Epistemic? Does It Matter?," Struct. Saf., 31(2), pp. 105–112. 10.1016/j.strusafe.2008.06.020
2. Williams, C. K., and Rasmussen, C. E., 2006, Gaussian Processes for Machine Learning, MIT Press, Cambridge, MA.
3. Zhao, J., Wei, X., Hu, Z., He, X., Han, D., Hu, Z., and Du, X., 2020, "Photoelectron Sheath Near the Lunar Surface: Fully Kinetic Modeling and Uncertainty Quantification Analysis," AIAA Paper No. 2020-1548. 10.2514/6.2020-1548
4. Zhao, J., Wei, X., Du, X., He, X., and Han, D., 2020, "Photoelectron Sheath on the Lunar Surface, Analytic Solutions and Fully-Kinetic Particle-in-Cell Simulations for Maxwellian and Kappa Distributions" (under review).
5. Nitter, T., Havnes, O., and Melandsø, F., 1998, "Levitation and Dynamics of Charged Dust in the Photoelectron Sheath Above Surfaces in Space," J. Geophys. Res.: Space Phys., 103(A4), pp. 6605–6620. 10.1029/97JA03523
6. Fu, J. H., 1971, "Surface Potential of a Photoemitting Plate," J. Geophys. Res., 76(10), pp. 2506–2509. 10.1029/JA076i010p02506
7. Poppe, A., and Horányi, M., 2010, "Simulations of the Photoelectron Sheath and Dust Levitation on the Lunar Surface," J. Geophys. Res.: Space Phys., 115(A8), p. A08106. 10.1029/2010JA015286
8. Wang, J., He, X., and Cao, Y., 2008, "Modeling Electrostatic Levitation of Dust Particles on Lunar Surface," IEEE Trans. Plasma Sci., 36(5), pp. 2459–2466. 10.1109/TPS.2008.2003016
9. Hirt, C., and Featherstone, W., 2012, "A 1.5 km-Resolution Gravity Field Model of the Moon," Earth Planet. Sci. Lett., 329–330, pp. 22–30. 10.1016/j.epsl.2012.02.012
10. Costes, N. C., Carrier, W. D., III, Mitchell, J. K., and Scott, R. F., 1970, "Apollo 11: Soil Mechanics Results," J. Soil Mech. Found. Div., 96(SM6), pp. 2045–2080. https://trid.trb.org/view/128268
11. Mooney, C. Z., 1997, Monte Carlo Simulation, Sage Publications, Thousand Oaks, CA.
12. Rasmussen, C. E., 2004, Gaussian Processes in Machine Learning, Springer, Berlin, pp. 63–71.
13. Jeong, S., Murayama, M., and Yamamoto, K., 2005, "Efficient Optimization Design Method Using Kriging Model," J. Aircr., 42(2), pp. 413–420. 10.2514/1.6386
14. Wei, X., and Du, X., 2020, "Robustness Metric for Robust Design Optimization Under Time- and Space-Dependent Uncertainty Through Metamodeling," ASME J. Mech. Des., 142(3), p. 031110. 10.1115/1.4045599
15. Shi, Y., Lu, Z., Xu, L., and Chen, S., 2019, "An Adaptive Multiple-Kriging-Surrogate Method for Time-Dependent Reliability Analysis," Appl. Math. Modell., 70, pp. 545–571. 10.1016/j.apm.2019.01.040
16. Moustapha, M., and Sudret, B., 2019, "Surrogate-Assisted Reliability-Based Design Optimization: A Survey and a Unified Modular Framework," Struct. Multidiscip. Optim., 60(5), pp. 2157–2176. 10.1007/s00158-019-02290-y
17. Yun, W., Zhenzhou, L., Zhou, Y., and Jiang, X., 2018, "AK-SYSi: An Improved Adaptive Kriging Model for System Reliability Analysis With Multiple Failure Modes by a Refined U Learning Function," Struct. Multidiscip. Optim., 59(1), pp. 263–278.
18. Sun, Z., Wang, J., Li, R., and Tong, C., 2017, "LIF: A New Kriging Based Learning Function and Its Application to Structural Reliability Analysis," Reliab. Eng. Syst. Saf., 157, pp. 152–165. 10.1016/j.ress.2016.09.003
19. Ma, J., Ren, Z., Zhao, G., Zhang, Y., and Koh, C.-S., 2017, "A New Reliability Analysis Method Combining Adaptive Kriging With Weight Index Monte Carlo Simulation," IEEE Trans. Magn., 54(3), pp. 1–4. 10.1109/TMAG.2017.2763198
20. Zhu, Z., and Du, X., 2016, "A System Reliability Method With Dependent Kriging Predictions," ASME Paper No. DETC2016-59030. 10.1115/DETC2016-59030
21. Zhu, Z., and Du, X., 2016, "Reliability Analysis With Monte Carlo Simulation and Dependent Kriging Predictions," ASME J. Mech. Des., 138(12), p. 121403. 10.1115/1.4034219
22. Hu, Z., and Mahadevan, S., 2016, "A Single-Loop Kriging Surrogate Modeling for Time-Dependent Reliability Analysis," ASME J. Mech. Des., 138(6), p. 061406. 10.1115/1.4033428
23. Zhang, L., Lu, Z., and Wang, P., 2015, "Efficient Structural Reliability Analysis Method Based on Advanced Kriging Model," Appl. Math. Modell., 39(2), pp. 781–793. 10.1016/j.apm.2014.07.008
24. Lv, Z., Lu, Z., and Wang, P., 2015, "A New Learning Function for Kriging and Its Applications to Solve Reliability Problems in Engineering," Comput. Math. Appl., 70(5), pp. 1182–1197. 10.1016/j.camwa.2015.07.004
25. Cheng, J., Liu, Z., Wu, Z., Li, X., and Tan, J., 2015, "Robust Optimization of Structural Dynamic Characteristics Based on Adaptive Kriging Model and CNSGA," Struct. Multidiscip. Optim., 51(2), pp. 423–437. 10.1007/s00158-014-1140-9
26. Echard, B., Gayton, N., and Lemaire, M., 2011, "AK-MCS: An Active Learning Reliability Method Combining Kriging and Monte Carlo Simulation," Struct. Saf., 33(2), pp. 145–154. 10.1016/j.strusafe.2011.01.002
27. Dubourg, V., Sudret, B., and Bourinet, J.-M., 2011, "Reliability-Based Design Optimization Using Kriging Surrogates and Subset Simulation," Struct. Multidiscip. Optim., 44(5), pp. 673–690. 10.1007/s00158-011-0653-8
28. Jones, D. R., Schonlau, M., and Welch, W. J., 1998, "Efficient Global Optimization of Expensive Black-Box Functions," J. Global Optim., 13(4), pp. 455–492. 10.1023/A:1008306431147
29. Box, G. E., and Tiao, G. C., 2011, Bayesian Inference in Statistical Analysis, Wiley.
30. Browne, M. W., 2000, "Cross-Validation Methods," J. Math. Psychol., 44(1), pp. 108–132. 10.1006/jmps.1999.1279
31. Huang, X., Chen, J., and Zhu, H., 2016, "Assessing Small Failure Probabilities by AK–SS: An Active Learning Method Combining Kriging and Subset Simulation," Struct. Saf., 59, pp. 86–95. 10.1016/j.strusafe.2015.12.003
32. Wen, Z., Pei, H., Liu, H., and Yue, Z., 2016, "A Sequential Kriging Reliability Analysis Method With Characteristics of Adaptive Sampling Regions and Parallelizability," Reliab. Eng. Syst. Saf., 153, pp. 170–179. 10.1016/j.ress.2016.05.002
33. Yu, Z., Sun, Z., Wang, J., and Chai, X., 2018, "A New Kriging-Based DoE Strategy and Its Application to Structural Reliability Analysis," Adv. Mech. Eng., 10(3), pp. 1–13. 10.1177/1687814018767682
34. Cheng, K., Lu, Z., Ling, C., and Zhou, S., 2020, "Surrogate-Assisted Global Sensitivity Analysis: An Overview," Struct. Multidiscip. Optim., 61(3), pp. 1–27. 10.1007/s00158-019-02413-5
35. Zhou, H., Zhou, Q., Liu, C., and Zhou, T., 2018, "A Kriging Metamodel-Assisted Robust Optimization Method Based on a Reverse Model," Eng. Optim., 50(2), pp. 253–272. 10.1080/0305215X.2017.1307355
36. Wu, H., Zhu, Z., and Du, X., 2020, "System Reliability Analysis With Autocorrelated Kriging Predictions," ASME J. Mech. Des., 142(10), p. 101702. 10.1115/1.4046648
37. Chen, W., Tsui, K.-L., Allen, J. K., and Mistree, F., 1995, "Integration of the Response Surface Methodology With the Compromise Decision Support Problem in Developing a General Robust Design Procedure," Proceedings of the 1995 ASME Design Engineering Technical Conference, Boston, MA, pp. 485–492.
38. Hosder, S., Walters, R., and Balch, M., 2007, "Efficient Sampling for Non-Intrusive Polynomial Chaos Applications With Multiple Uncertain Input Variables," AIAA Paper No. 2007-1939. 10.2514/6.2007-1939
39. Lophaven, S. N., Nielsen, H. B., and Søndergaard, J., 2002, "Aspects of the Matlab Toolbox DACE," Technical University of Denmark, Lyngby, Denmark, Technical Report No. IMM-REP-2002-13.