An Adaptive EM Algorithm for NHPP Software Reliability Models

Vidhyashree Nagaraju, University of Massachusetts-Dartmouth

Lance Fiondella, PhD, University of Massachusetts-Dartmouth

Key Words: Expectation-Maximization algorithm, Non-homogeneous Poisson process, Software reliability

SUMMARY & CONCLUSIONS

Non-homogeneous Poisson process (NHPP) software reliability growth models (SRGM) enable several quantitative metrics that can be used to guide important decisions during the software engineering life cycle such as testing resource allocation and release planning. However, many of these SRGM possess complex mathematical forms that make them difficult to apply in practice because traditional statistical procedures such as maximum likelihood estimation must solve a system of non-linear equations to identify the numerical parameters that best characterize a set of failure data. Recently, researchers have made significant progress in overcoming this difficulty by developing an expectation-maximization (EM) algorithm that exhibits better convergence properties and can therefore find the maximum likelihood estimates of complex SRGM with greater ease. This EM algorithm, however, assumes that some model parameters are constant and thus the approach is not capable of identifying the set of numerical parameters that maximize the likelihood function.

This paper presents an adaptive EM algorithm to identify the maximum likelihood estimates of all parameters of multiple NHPP SRGM with complex mathematical forms. We illustrate our enhanced algorithm through a series of examples. The results show that the algorithm can efficiently identify the set of numerical parameters that globally maximizes the likelihood function. Thus, the adaptive algorithm can significantly simplify the application of complex SRGM.

1 INTRODUCTION

Software reliability growth models [1] are a well-established methodology based on the non-homogeneous Poisson process [2]. These SRGM enable the estimation of useful metrics including the number of faults remaining, failure rate, and reliability, which is defined as the probability of failure-free operation in a specified environment for a specified period of time. SRGM are also used in optimization problems to determine the amount of testing required to achieve a desired level of reliability [3] and to minimize testing costs, while considering the risk of post-release failures [4].

Despite the multitude of valuable metrics and optimization applications SRGM offer, it is often a significant challenge to estimate the parameters of a model with traditional fitting procedures such as maximum likelihood estimation (MLE) [5]. This difficulty arises because traditional numerical procedures to find the maximum likelihood estimate of a software failure data set such as the Newton-Raphson method [6] are sensitive to initial parameter estimates and can fail to converge to the MLE if the initial parameter estimates are not sufficiently close to the MLE. The sensitivity of existing model fitting procedures requires a relatively high level of experience, which can deter potential users from applying NHPP-based SRGM to quantitatively assess the reliability of their software. Given the increasing demand for reliable software, a model fitting procedure that is less sensitive to initial parameter estimates is needed so that software reliability growth models can be fit to data with relatively little effort. Such a procedure will simplify the application of NHPP-based SRGM and encourage their widespread use.

Early software tools to automate the application of software reliability growth models such as SMERFS (Statistical Modeling and Estimation of Reliability Functions for Software) [7] implement model fitting with least squares estimation (LSE) and maximum likelihood estimation. However, it can be difficult to estimate the parameters of a SRGM with these traditional methods even for models with just two parameters [8].

Recently, Okamura et al. [9] applied the expectation-maximization algorithm [10] to identify the maximum likelihood estimates of several software reliability growth models characterized by the non-homogeneous Poisson process. The major advantage of the EM algorithm over traditional numerical methods such as the Newton-Raphson method is that it is dramatically less sensitive to initial parameter estimates. This simplifies the model fitting step and also works well for fitting some SRGM with many parameters [11]. This is an extremely promising development for SRGM. However, there is one major drawback to the standard EM algorithm: it can only be applied to the subset of model parameters for which the likelihood function possesses a closed-form solution. As a result, these existing EM algorithms maximize likelihood while imposing the unrealistic and impractical assumption that some of the model parameters are constant. Thus, the parameter estimates will not be globally maximal and will therefore not be the parameters that best fit the data set. An enhanced EM algorithm that maximizes the likelihood with respect to all model parameters is needed to unlock the full potential of the EM algorithm for fitting NHPP-based SRGM.

This paper presents an adaptive expectation-maximization algorithm for non-homogeneous Poisson process software reliability growth models. Our enhanced approach employs the standard EM algorithm within an efficient search procedure to identify the MLE of all model parameters. We illustrate the steps of this adaptive approach through a detailed example, which demonstrates improved flexibility over the standard EM algorithm.

The remainder of the paper is organized as follows: Section 2 provides an overview of software reliability growth modeling and estimation. Section 3 presents an adaptive expectation-maximization algorithm for NHPP SRGM. Section 4 illustrates the effectiveness of the algorithm through a numerical example. Section 5 offers conclusions and directions for future research.

2 SOFTWARE RELIABILITY GROWTH MODELING AND PARAMETER ESTIMATION

This section provides an overview of NHPP software reliability growth models and describes two parameter estimation techniques, namely maximum likelihood estimation and the expectation-maximization algorithm.

2.1 NHPP software reliability growth models

The non-homogeneous Poisson process is a stochastic process [2] that counts the number of events observed as a function of time. In the context of software reliability, the NHPP counts the number of faults detected by time t. This counting process is characterized by a mean value function (MVF) m(t), which can assume a variety of functional forms. Okamura et al. [9] noted that the MVF of several SRGM can be written m(t) = a × F(t), where a denotes the number of latent faults at the beginning of testing and F(t) is the cumulative distribution function of a continuous probability distribution. For example, substituting the Weibull distribution [5] for F(t) produces the Weibull MVF

m(t) = a(1 - e^{-bt^c})    (1)

where b and c are the scale and shape parameters respectively. Setting c = 1 in Equation (1) simplifies to the exponential distribution, also known as the Goel-Okumoto model [4].

Similarly, the Gamma SRGM is

m(t) = (a/Γ(c)) ∫_0^t b^c x^{c-1} e^{-bx} dx    (2)

where the identity F(t) = ∫_0^t f(x) dx has been used and Γ(c) = ∫_0^∞ x^{c-1} e^{-x} dx is the gamma function. Here, b and c are the scale and shape parameters. Setting c = 2 in Equation (2) simplifies to the delayed S-shaped model [12].
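To make these forms concrete, the following Python sketch evaluates both mean value functions. It assumes SciPy is available; the function names are ours for illustration.

```python
import math

from scipy.stats import gamma


def weibull_mvf(t, a, b, c):
    """Weibull MVF of Equation (1): m(t) = a*(1 - exp(-b*t^c))."""
    return a * (1.0 - math.exp(-b * t ** c))


def gamma_mvf(t, a, b, c):
    """Gamma MVF of Equation (2): a times the gamma CDF with shape c
    and rate b (SciPy parameterizes the rate via scale = 1/b)."""
    return a * gamma.cdf(t, c, scale=1.0 / b)
```

Setting c = 1 in weibull_mvf recovers the exponential (Goel-Okumoto) MVF, while c = 2 in gamma_mvf recovers the delayed S-shaped MVF.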

2.2 Maximum likelihood estimation

Maximum likelihood estimation is a procedure to identify the numerical values of the parameters of a model such that the plot of the MVF curve closely matches a scatter plot of the empirical failure data. Two common types of failure data are failure time and failure count data. In failure time data, the vector of individual failure times T = {t_1, t_2, ..., t_n} is given. Failure count data consists of a pair of vectors of the form T = {(t_1, k_1), (t_2, k_2), ..., (t_n, k_n)}, where t_i is the time at the end of the i-th observation interval and k_i is the number of faults detected in interval i.

Maximum likelihood estimation maximizes the likelihood function, also known as the joint distribution of the failure data. Commonly, the log-likelihood function is maximized because the monotonicity of the logarithm ensures that the maximum of the log-likelihood function coincides with the maximum of the likelihood function. The log-likelihood expression of a failure times data set T is

LL(Θ|T) = -m(t_n) + ∑_{i=1}^{n} log λ(t_i)    (3)

where Θ is the vector of model parameters and λ(t_i) is the instantaneous failure rate at time t_i, which is defined as

λ(t) = (d/dt) m(t)    (4)

The system of equations to maximize is obtained by substituting a mean value function such as Equation (1) or (2) into Equation (3), where λ(t_i) is determined by Equation (4).

After simplification with algebraic identities, a system of equations is obtained by computing partial derivatives with respect to each model parameter and equating these partial derivatives to zero. The general form of this system of equations is

(∂/∂Θ) LL(Θ) = 0    (5)

which must then be solved numerically to identify the MLEs that best fit the data set.
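As an illustration of Equations (3) and (4), the following sketch computes the Weibull log-likelihood for failure time data. The intensity λ(t) = abc t^{c-1} e^{-bt^c} follows by differentiating Equation (1); the function name is ours.

```python
import math


def weibull_log_likelihood(times, a, b, c):
    """Log-likelihood of Equation (3) for the Weibull SRGM, given the
    sorted failure times t_1 <= ... <= t_n."""
    t_n = times[-1]
    ll = -a * (1.0 - math.exp(-b * t_n ** c))  # the -m(t_n) term
    for t in times:
        # log lambda(t) with lambda(t) = a*b*c*t^(c-1)*exp(-b*t^c), Equation (4)
        ll += math.log(a * b * c) + (c - 1.0) * math.log(t) - b * t ** c
    return ll
```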

2.3 Expectation-maximization algorithm

The EM algorithm also maximizes the log-likelihood function. Unlike maximum likelihood estimation, however, it performs this maximization with respect to the complete data, which consists of both observed and unobserved data. In the context of software reliability modeling, the missing data are the faults that would be discovered if testing continued beyond t_n. Thus, the EM algorithm maximizes the log-likelihood function of the complete data, which can be expressed as

Θ̂ = argmax {E[log f(X;Θ) | u(X) = y; Θ′]}    (6)

where X is the complete data and Y = u(X) the observed data, both possessing probability density f(·;Θ), and Θ′ denotes the previous iteration's parameter estimates.

Okamura et al. [9] showed that for a mean value function of the form m(t) = a × F(t), an initial estimate of the number of faults (a) is simply the number of observed faults (n), while the remaining initial parameter estimates can be determined by setting the partial derivatives of the log-likelihood function of the probability density f(·;Θ) to zero and solving to obtain closed-form expressions for these additional parameters. For example, the initial estimate of the scale parameter of the Weibull SRGM is

b = n / ∑_{i=1}^{n} t_i^c    (7)

However, the parameter c lacks a closed-form solution in both the Weibull and gamma SRGM. Thus, no analytical expression for the initial estimate of these shape parameters exists. Moreover, this lack of a closed-form expression also prevents the derivation of an update rule for the shape parameter, which means that c must be held constant while parameters a and b are iteratively updated to maximize the log-likelihood function for this fixed value of c.

For example, the Weibull SRGM update rules are [9]

a := n + a′e^{-b′t_n^c}    (8)

b := (n + a′e^{-b′t_n^c}) / (∑_{i=1}^{n} t_i^c + a′(t_n^c + 1/b′)e^{-b′t_n^c})    (9)

where a′ and b′ are the estimates from the previous iteration. These updates are computed iteratively until some convergence criterion is satisfied, such as |a - a′| + |b - b′| ≤ ε.
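The non-adaptive iteration is straightforward to implement. A minimal sketch, assuming the initialization of Equation (7) and the update rules of Equations (8) and (9), follows; weibull_em is our name for this routine.

```python
import math


def weibull_em(times, c, eps=1e-9, max_iter=1_000_000):
    """Non-adaptive EM for the Weibull SRGM with the shape c held fixed.

    Returns (a, b) maximizing the log-likelihood for this value of c.
    """
    n = len(times)
    t_n = times[-1]
    sum_tc = sum(t ** c for t in times)
    a, b = float(n), n / sum_tc  # initial estimates, Equation (7)
    for _ in range(max_iter):
        tail = a * math.exp(-b * t_n ** c)  # expected faults undetected at t_n
        a_new = n + tail                                              # Equation (8)
        b_new = (n + tail) / (sum_tc + tail * (t_n ** c + 1.0 / b))   # Equation (9)
        if abs(a_new - a) + abs(b_new - b) <= eps:  # convergence criterion
            break
        a, b = a_new, b_new
    return a_new, b_new
```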

Similarly, the update rules for the Gamma SRGM are [9]

a := n + a′F̄(t_n; b′, c)    (10)

b := c(n + a′F̄(t_n; b′, c)) / (∑_{i=1}^{n} t_i + a′(c/b′)F̄(t_n; b′, c+1))    (11)

where F̄(·; b, c) is the survivor function of the gamma distribution.
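A corresponding sketch of one gamma-SRGM update is given below, using SciPy's gamma.sf for the survivor function F̄. The leading factor c in Equation (11), which matches the complete-data gamma maximum likelihood estimate of the rate, is our reading; the function name is ours.

```python
from scipy.stats import gamma


def gamma_em_step(times, a, b, c):
    """One EM update of Equations (10) and (11) for the gamma SRGM,
    with the shape c held fixed; a and b are the previous estimates."""
    n = len(times)
    t_n = times[-1]
    tail = a * gamma.sf(t_n, c, scale=1.0 / b)  # expected undetected faults
    a_new = n + tail                            # Equation (10)
    # Expected total detection time of undetected faults: a*(c/b)*Fbar(t_n; b, c+1)
    denom = sum(times) + a * (c / b) * gamma.sf(t_n, c + 1, scale=1.0 / b)
    b_new = c * a_new / denom                   # Equation (11)
    return a_new, b_new
```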

Clearly, maximizing the log-likelihood function with respect to just two of the three parameters will not find the maximum likelihood estimates of the Weibull or Gamma SRGM without resorting to repeatedly applying this EM algorithm over a range of values of c to identify the values of a, b, and c that are globally maximal.

3 ADAPTIVE EXPECTATION-MAXIMIZATION ALGORITHM

This section proposes an adaptive generalization of the EM algorithm. Figure 1 shows the flow chart for this adaptive EM algorithm.

The inputs to the algorithm are: the vector of individual failure times T = {t_1, t_2, ..., t_n}, a step size Δc > 0 to adjust the shape parameter in each iteration, and two convergence constants for the adaptive and non-adaptive EM algorithms respectively, namely ε_A > 0 and ε_NA > 0. These constants define the maximum change between two iterations before the adaptive or non-adaptive algorithm terminates. The constant ε_NA governs the non-adaptive algorithm's estimation of parameters a and b for a fixed value of c, while ε_A ensures that the adaptive algorithm identifies a value of c arbitrarily close to the maximum likelihood estimate with respect to parameters a, b, and c.

(A.1) The first step of the algorithm performs preprocessing on the inputs for use in the later steps, computing the number of observed faults (n), the time of the last failure (t_n), as well as the sum of the failure times (∑_{i=1}^{n} t_i) and the sum of the logarithms of the failure times (∑_{i=1}^{n} log t_i).

(A.2) Step two initializes the shape parameter of the Weibull SRGM to c = 1, which reduces to the special case of the exponential SRGM. This initial choice of c is arbitrary. However, c = 1 is a logical initial value because the user may desire to perform a statistical hypothesis test on the significance of c [13] compared to the reduced model, where c = 1. Thus, this initial value ensures that the MLEs of both the exponential and Weibull models are identified. The non-adaptive EM algorithm is then run using Equations (8) and (9) for three separate values of the shape parameter, namely x ∈ {c - Δc, c, c + Δc}. These runs of the non-adaptive algorithm produce the estimates â and b̂ as well as the log-likelihood LL(Θ̂) for each of these values of c.

(A.3) Step three tests if the log-likelihood estimate of the decrement c - Δc, denoted LL_d(Θ̂_d), is an improvement over the present log-likelihood estimate LL_p(Θ̂_p), evaluated at c = 1 in the first iteration. If so, the present shape parameter is revised to c = c - Δc and the algorithm branches to step (A.4). This update moves the present estimate of parameter c toward the value of c that will maximize the likelihood with respect to a, b, and c. Otherwise, the algorithm branches to step (A.6).

(A.4) Step four runs the non-adaptive algorithm with c = c - Δc to identify parameter estimates â_d and b̂_d as well as LL_d(Θ̂_d), which can be used in subsequent iterations of the adaptive algorithm to determine if additional decrements to c will further improve the log-likelihood function.

Figure 1: Adaptive EM algorithm

(A.5) Step five compares the present error ε_p, defined as

ε_p = |â_p - â_{p-1}| + |b̂_p - b̂_{p-1}|    (12)

with the convergence constant ε_A to determine if the change between the parameters is small, which would indicate that the maximum likelihood value of parameter c has been identified, because small changes in parameters a and b will only occur when the step size Δc is small and the present estimate of c is very close to the value of c that is globally optimal. Thus, if the present error is smaller than the convergence constant ε_A, the adaptive algorithm terminates and returns Θ̂_p, which contains the present vector of parameter estimates that maximizes the log-likelihood function of the Weibull SRGM with respect to a, b, and c. Otherwise, the present estimate of c is not sufficiently close to the maximum and control returns to step (A.3), where the search for c continues.

Note that the function to compute the present error ε_p given in Equation (12) is arbitrary; other functions consisting of changes in parameter c and in the log-likelihood function are also possible.

(A.6) Similar to step (A.3), step six tests if the log-likelihood estimate of the increment c + Δc, denoted LL_i(Θ̂_i), achieves an improvement over the present log-likelihood estimate LL_p(Θ̂_p). If so, c is revised to c = c + Δc and the algorithm branches to step (A.7). Otherwise, the algorithm branches to step (A.8).

(A.7) Similar to step four, step seven executes the non-adaptive algorithm with c = c + Δc to identify â_i, b̂_i, and LL_i(Θ̂_i), for use in subsequent iterations of the adaptive algorithm to determine if additional increments to c will further improve the log-likelihood function.

(A.8) Step eight is reached if neither incrementing nor decrementing c by Δc improves the log-likelihood function. In this case, it is necessary to reduce the step size to continue the search for a value of c that can identify a direction that will improve the log-likelihood function. Thus, the step size is halved (Δc = Δc/2) and the likelihoods LL_i(Θ̂_i) and LL_d(Θ̂_d) at the reduced increment (c = c + Δc) and decrement (c = c - Δc) are computed. Control then transfers directly to step (A.3) because the present estimate LL_p(Θ̂_p) has not changed and therefore could not have improved. The next iteration then checks if the reduced increment or decrement improves the log-likelihood function.
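For concreteness, the following sketch condenses steps (A.2) through (A.8) into a single search loop. It recomputes the neighboring fits on each pass rather than caching them as the flow chart does, and it relies on the weibull_em and weibull_log_likelihood routines sketched earlier; all names are ours for illustration.

```python
def adaptive_em(times, dc=0.1, eps_a=1e-9, eps_na=1e-9):
    """Sketch of the adaptive EM search over the Weibull shape parameter c.

    Assumes dc < c throughout, so the decrement c - dc stays positive.
    """
    def fit(c):
        a, b = weibull_em(times, c, eps=eps_na)  # non-adaptive EM at fixed c
        return a, b, weibull_log_likelihood(times, a, b, c)

    c = 1.0                  # (A.2) start from the exponential special case
    present = fit(c)
    while True:
        down = fit(c - dc)   # candidate decrement
        up = fit(c + dc)     # candidate increment
        if down[2] > present[2]:    # (A.3)/(A.4) decrement improves LL
            c, previous, present = c - dc, present, down
        elif up[2] > present[2]:    # (A.6)/(A.7) increment improves LL
            c, previous, present = c + dc, present, up
        else:                       # (A.8) neither improves: halve the step
            dc /= 2.0
            continue
        # (A.5) terminate once the parameter estimates stabilize, Equation (12)
        eps_p = abs(present[0] - previous[0]) + abs(present[1] - previous[1])
        if eps_p <= eps_a:
            return present[0], present[1], c
```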

The algorithm in Figure 1 was presented in the context of the Weibull SRGM, but can also be applied to the gamma SRGM by using Equations (10) and (11) for each execution of the non-adaptive algorithm.

4 ILLUSTRATIONS

This section illustrates the adaptive EM algorithm through two examples. The first provides details of the steps executed by the algorithm when applied to a specific data set. The second example studies the performance of the algorithm for a range of values of the step size Δc in order to assess the impact of this parameter on the speed of the algorithm's convergence to the maximum likelihood estimate.

4.1 Example one: Adaptive EM algorithm application

This example provides the details of the adaptive EM algorithm when applied to the SYS1 data set [1], which consists of n = 136 failure times. A value of Δc = 0.1 was used as the step size for the shape parameter, and the convergence constants for the adaptive and non-adaptive EM algorithms were set to ε_A = ε_NA = 10^-9.

Step (A.2) runs the non-adaptive algorithm for the values c = {0.9, 1.0, 1.1}, which achieve maxima of -970.3011, -974.8065, and -980.7785 respectively. This set of values indicates that decrementing the shape parameter to c = 0.9 improves the log-likelihood, which corresponds to an error of ε_p = 4.505 over the present value c = 1. It is also seen that the increment to c = 1.1 produces an even worse maximum log-likelihood value. Thus, since LL_d(Θ̂_d) > LL_p(Θ̂_p), step (A.3) transfers control to (A.4), where the non-adaptive algorithm is run with c = 0.8. Now, since ε_p > ε_A, control is passed from (A.5) to (A.3), where it is observed that c = 0.8 improved the maximum log-likelihood to -967.3777. Hence, c is decremented again to c = 0.7 and the non-adaptive algorithm is run for this value of c, which improves the log-likelihood further. However, the log-likelihoods at c = 0.6 and c = 0.8 are both lower than the log-likelihood at c = 0.7, so control of the algorithm transfers from (A.3) to (A.6) to (A.8), where the step size is reduced to Δc = 0.05 and the non-adaptive algorithm is run for c = 0.65 and c = 0.75. Neither of these points improves the likelihood, so the step size is reduced again to Δc = 0.025, where it is subsequently determined that c = 0.675 is an improvement over c = 0.7. This adaptive procedure continues until the error between two successive values of the maximum likelihood is less than the convergence constant ε_A.
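Under the assumptions of the sketches above, the run just described would correspond to a call such as the following; the data file name is hypothetical.

```python
# Hypothetical driver for Example 4.1 (SYS1, n = 136 failure times).
times = sorted(float(line) for line in open("sys1_failure_times.txt"))

a_hat, b_hat, c_hat = adaptive_em(times, dc=0.1, eps_a=1e-9, eps_na=1e-9)
print(a_hat, b_hat, c_hat)  # expect c_hat near 0.676739 (Table 1)
```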

Figure 2 shows the maximum log-likelihood for the range of values of c in the interval (0.6, 1.0) along with the improvements made to the log-likelihood function by the first few iterations of the adaptive algorithm.

Figure 2: Improvements of adaptive EM algorithm

Figure 2 illustrates how each iteration of the adaptive EM algorithm improves the log-likelihood function monotonically.

Table 1 reports the value of c, the maximum log-likelihood, and the error for each iteration of the adaptive EM algorithm. Table 1 reveals that iterations six through 10 take smaller and smaller steps toward the value of c that maximizes the log-likelihood function, incrementing or decrementing c in the direction that increases LL_p(Θ̂_p) until the improvement in the log-likelihood function is less than ε_A, at which point the algorithm terminates. This value of c is sufficiently close to the value that maximizes the log-likelihood with respect to parameters a, b, and c. Thus, the parameter estimates obtained in the final iteration may be taken as the maximum likelihood estimates of the parameters of the Weibull SRGM, confirming that the adaptive EM algorithm can successfully identify the global maximum of the log-likelihood function over the entire parameter space and is not restricted to maximization with respect to parameters a and b only, as was required by the non-adaptive EM algorithm.

Table 1: Iterations of adaptive EM algorithm

Iteration | Shape (c) | Log-likelihood | Error
1  | 1.0        | -974.8065331 | --
2  | 0.9        | -970.3011469 | 4.505
3  | 0.8        | -967.3777432 | 2.923
4  | 0.7        | -966.1265464 | 1.251
5  | 0.675      | -966.0805926 | 0.0459
6  | 0.67812500 | -966.0804987 | 9.385x10^-5
7  | 0.67656250 | -966.0803375 | 0.00016
8  | 0.67675781 | -966.0803349 | 2.616x10^-6
9  | 0.67673340 | -966.0803348 | 2.868x10^-8
10 | 0.67673950 | -966.0803348 | 2.358x10^-9

Figure 3 shows the observed failure data (jagged line) along with the mean value functions for the Exponential (thick smooth line) and Weibull models (thin smooth line) based on the parameters identified by the adaptive EM algorithm.

Figure 3: Model fit by Adaptive EM algorithm

Figure 3 illustrates that the exponential model underpredicts the observed failures during the first 25,000 time units and overpredicts until the end of testing. Although the exponential model achieves a rough fit to the observed failure data, the Weibull model clearly matches the observed failure data more precisely. The favorable performance of the Weibull model thus highlights the benefits derived from the efficient adaptive EM algorithm.

4.2 Example two: Assessment of step size parameter

This example explores the impact of the step-size parameter (Δc) on the performance of the adaptive EM algorithm. Figure 4 shows the runtime of the adaptive EM algorithm for a range of initial step sizes.

Figure 4: Adaptive EM algorithm performance

Careful examination of Figure 4 for values of Δc close to zero reveals that very small steps require more time to complete, whereas values of Δc in the interval (0.001, 0.115) converge rapidly. This decreasing trend agrees with intuition because very small values of Δc require many iterations to reach the MLE, yet a slightly larger initial step size progresses toward c = 0.676739 relatively quickly. However, a step size Δc in the interval (0.115, 0.13) takes noticeably longer to converge, while values in the interval (0.13, 0.16) exhibit performance similar to the interval (0.001, 0.115). Moreover, the algorithm converges more and more slowly as the step size increases beyond Δc ≥ 0.16.

To explain these irregularities in the runtime of the adaptive algorithm, Figure 5 shows the runtime of the non-adaptive EM algorithm for a range of values of the shape parameter. Figure 5 indicates that the non-adaptive algorithm converges very quickly for values of c ≥ 0.5. However, for values of c ≤ 0.5 the algorithm requires several seconds to complete. To understand why the adaptive algorithm takes noticeably longer to complete when the step size Δc is in the interval (0.115, 0.13), we refer the reader to Figure 2. For example, when Δc = 0.125, the adaptive algorithm computes the maximum likelihood for c = 0.875 and c = 1.125. Since c = 0.875 improves the log-likelihood over the present value c = 1.0, the algorithm proceeds to compute the maximum likelihood for c = 0.75, which further improves the log-likelihood. Thus, the algorithm continues by running the non-adaptive algorithm at c = 0.625. From Figure 2, it can be seen that c = 0.625 achieves a small improvement over c = 0.75. As a result, the adaptive algorithm next attempts to further improve the log-likelihood by running the non-adaptive algorithm with c = 0.5. However, this is the portion of Figure 5 where the non-adaptive algorithm converges more slowly, which contributes to the slower runtime of the adaptive algorithm. This explains why a step size Δc in the interval (0.115, 0.13) exhibits a slight degradation in performance. Values of Δc ≥ 0.16 also overshoot the MLE, producing an even more severe increase in the runtime of the adaptive algorithm.

This example shows that a range of values of Δc avoids the inefficiency of the non-adaptive algorithm. However, this particular set of values may not generalize to other data sets, especially if the time scale or number of failures differs significantly. Future studies will apply the adaptive algorithm to additional data sets to identify a range of values for Δc that performs well on a wide variety of data sets.
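A sweep such as the one behind Figure 4 can be reproduced, under the same assumptions, with a simple timing loop; the step sizes listed are illustrative and the `times` list is the one loaded in the earlier driver.

```python
import time

# Hypothetical timing sweep over initial step sizes (cf. Figure 4).
for dc in (0.001, 0.05, 0.1, 0.125, 0.15, 0.2):
    start = time.perf_counter()
    adaptive_em(times, dc=dc)
    print(f"dc = {dc}: {time.perf_counter() - start:.3f} s")
```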

Figure 5: Non-adaptive EM algorithm performance

5 CONCLUSIONS AND FUTURE RESEARCH

This paper presents an adaptive expectation-maximization algorithm to find the parameters of a non-homogeneous Poisson process software reliability growth model. To accomplish this, we incorporated a non-adaptive EM algorithm into a generalized procedure that searches the subset of the parameter space that the non-adaptive algorithm held constant for the sake of tractability. The algorithm was presented in the context of the Weibull and Gamma SRGM. The examples illustrated that the adaptive algorithm finds the parameter values that maximize the log-likelihood function and can therefore be used for model fitting. Additional experiments examined the performance of the adaptive algorithm with respect to the step size parameter. This identified a range of values that may avoid the inefficiencies of the non-adaptive approach.

Future research will generalize the adaptive EM algorithm to two or more parameters. Specifically, we are developing adaptive EM algorithms for NHPP-based SRGM where two or more parameters must be held constant in the corresponding non-adaptive approach [14]. We will also experiment with a genetic algorithm as a pre-conditioning step to identify an initial estimate for the adaptive EM algorithm.

REFERENCES

1. M. Lyu (ed.), Handbook of Software Reliability Engineering, McGraw-Hill, New York, NY, 1996.
2. S. Ross, Introduction to Probability Models, Academic Press: New York, NY, 8th edition, 2003.
3. S. Yamada, H. Ohtera, and H. Narihisa, "Software reliability growth models with testing-effort," IEEE Transactions on Reliability, vol. 35, no. 1, (Apr.) pp. 19-23, 1986.
4. K. Okumoto and A. Goel, "Optimum release time for software systems based on reliability and cost criteria," Journal of Systems and Software, vol. 1, (Sep.) pp. 315-318, 1980.
5. E. Elsayed, Reliability Engineering, Wiley: Hoboken, NJ, 2nd edition, 2012.
6. R. Burden and J. Faires, Numerical Analysis, Brooks/Cole: Belmont, CA, 8th edition, 2004.
7. W. Farr and O. Smith, "Statistical Modeling and Estimation of Reliability Functions for Software (SMERFS) Users Guide," NAVSWC TR-84-373, Revision 2, Naval Surface Warfare Center, Dahlgren, VA, 1984.
8. S. Hossain and R. Dahiya, "Estimating the Parameters of a Non-homogeneous Poisson-Process Model for Software Reliability," IEEE Transactions on Reliability, vol. 42, no. 4, (Dec.) pp. 605-612, 1993.
9. H. Okamura, Y. Watanabe, and T. Dohi, "An Iterative Scheme for Maximum Likelihood Estimation in Software Reliability Modeling," Proc. International Symposium on Software Reliability Engineering, (Nov.), pp. 246-256, 2003.
10. A. Dempster, N. Laird, and D. Rubin, "Maximum Likelihood from Incomplete Data via the EM Algorithm," Journal of the Royal Statistical Society: Series B, vol. 39, no. 1, (Jan.) pp. 1-38, 1977.
11. H. Okamura and T. Dohi, "Hyper-Erlang Software Reliability Model," Proc. Pacific Rim International Symposium on Dependable Computing, (Dec.), pp. 232-239, 2008.
12. H. Pham, Software Reliability, Springer: New York, NY, 1999.
13. L. Leemis, Reliability: Probabilistic Models and Statistical Methods, Prentice-Hall: Englewood Cliffs, NJ, 1995.
14. L. Fiondella and S. Gokhale, "Software Reliability Model with Bathtub-shaped Fault Detection Rate," Proc. Annual Reliability & Maintainability Symposium, (Jan.), Orlando, FL, Session 9D-2, 2011.

BIOGRAPHIES

Vidhyashree Nagaraju
Department of Electrical and Computer Engineering
University of Massachusetts - Dartmouth
285 Old Westport Road
North Dartmouth, MA 02747, USA
e-mail: vnagaraju@umassd.edu

Vidhyashree Nagaraju is a Master's student in the Department of Electrical & Computer Engineering at the University of Massachusetts Dartmouth. She received her BE (2011) in Electronics and Communication Engineering from Visvesvaraya Technological University in India.

Lance Fiondella, PhD
Department of Electrical and Computer Engineering
University of Massachusetts - Dartmouth
285 Old Westport Road
North Dartmouth, MA 02747, USA
e-mail: lfiondella@umassd.edu

Lance Fiondella is an assistant professor in the Department of Electrical & Computer Engineering at the University of Massachusetts Dartmouth. He received his PhD (2012) in Computer Science & Engineering from the University of Connecticut. Dr. Fiondella was a 2007 recipient of a scholarship from the IEEE Reliability Society and has received several conference paper awards, including 2nd Place in the 2011 Tom Fagan Student Paper Competition. He serves as vice-chair of IEEE Standard 1633-2008, IEEE Recommended Practice on Software Reliability.
