
Nonlinear process monitoring based on maximum variance unfolding projections


Ji-Dong Shao,Gang Rong *

State Key Laboratory of Industrial Control Technology,Zhejiang University,Hangzhou 310027,China

Article info

Keywords: Process monitoring; Dimensionality reduction; Kernel matrix learning; Linear regression; Maximum variance unfolding projections

Abstract

Kernel principal component analysis (KPCA) has recently proven to be a powerful dimensionality reduction tool for monitoring nonlinear processes with numerous mutually correlated measured variables. However, the performance of the KPCA-based monitoring method largely depends on its kernel function, which can only be empirically selected from finite candidates, assuming that some faulty process samples are available in the off-line modeling phase. Moreover, KPCA works at high computational cost in the on-line monitoring phase due to its dense expansions in terms of kernel functions. To overcome these deficiencies, this paper proposes a new process monitoring technique comprising fault detection and identification based on a novel dimensionality reduction method named maximum variance unfolding projections (MVUP). MVUP first applies the recently proposed manifold learning method maximum variance unfolding (MVU) on training samples, which can be seen as a special variation of KPCA whose kernel matrix is automatically learned such that the underlying manifold structure of the training samples is "unfolded" in the reduced space, and hence the boundary of the distribution region of the training samples is preserved. MVUP then uses linear regression to find the projection that best approximates the implicit mapping from the training samples to their lower dimensional embedding learned by MVU. Simulation results on the benchmark Tennessee Eastman process show that the MVUP-based process monitoring method is a good alternative to the KPCA-based monitoring method.

© 2009 Elsevier Ltd. All rights reserved.

1. Introduction

Measurements on numerous process variables are routinely collected in modern large-scale chemical processes. However, since measured process variables are often mutually correlated due to the mass/energy balances and other operational restrictions (Qin, 2003), they are probably driven by much fewer degrees of freedom. In such cases, measured process data samples actually lie on or near a low-dimensional structure (such as subspaces and manifolds) embedded in the high-dimensional input space. This complicates the task of developing a predictive model and performing process monitoring in the input space.

Various dimensionality reduction methods have been used in process monitoring to find a reduced space, where the underlying low-dimensional structure of the data is revealed, and its complementary residual space, where noises and outliers are located. Monitoring is then performed in these two spaces to detect variations both inside and outside the model. Principal component analysis (PCA), which finds the linear subspace capturing most of the variance in the data, is the most popular dimensionality reduction method for various settings of linear processes (Bakshi, 1998; Ku, Storer, & Georgakis, 1995; MacGregor & Kourti, 1995; Nomikos & MacGregor, 1994). However, its performance degenerates for nonlinear processes, where the underlying low-dimensional structures are nonlinear manifolds (such as nonlinear curves and surfaces) rather than linear subspaces.

Kernel principal component analysis (KPCA) (Schölkopf, Smola, & Müller, 1998) extends PCA to nonlinear cases by performing PCA in a higher or even infinite dimensional feature space transformed from the implicit mapping involved in a kernel function, and has proven to be effective for monitoring nonlinear processes in various settings (Choi & Lee, 2004; Choi, Lee, Lee, Park, & Lee, 2005; Lee, Yoo, Choi, Vanrolleghem, & Lee, 2004; Lee, Yoo, & Lee, 2004). Compared with other nonlinear dimensionality reduction methods such as the autoassociative neural network (Kramer, 1991) and principal curves (Dong & McAvoy, 1996), KPCA has the advantages that the dimension of its reduced space need not be specified before training and no nonlinear optimization is involved. However, KPCA has the following deficiencies. (1) KPCA does not explicitly consider the underlying manifold structure of the data, and the performance of the KPCA-based monitoring method largely depends on its kernel function, which can only be selected empirically from popular kernels (such as Gaussians and polynomials) with a given finite parameter set (the combination of kernel function form and parameters that provides the best performance on a validation data set containing faulty process samples is selected).

0957-4174/$ - see front matter © 2009 Elsevier Ltd. All rights reserved. doi:10.1016/j.eswa.2009.03.042

* Corresponding author. Tel.: +86 571 87953145.

E-mail addresses: jdshao.zju@[…], jdshao_zju@[…] (J.-D. Shao), grong@[…] (G. Rong).

Expert Systems with Applications 36 (2009) 11332–11340

Contents lists available at ScienceDirect

Expert Systems with Applications

The selection procedure is computationally expensive due to the numerous candidates, and cannot be performed if we only have normal process samples in the off-line modeling phase, which is often the case in practice. (2) It is difficult to specify a proper reduced dimension for KPCA (as will be illustrated in Section 2). (3) KPCA works at high computational cost in the on-line monitoring phase, when prompt response is crucial, due to its dense expansions in terms of kernel functions.

The manifold learning method maximum variance unfolding (MVU) (Weinberger & Saul, 2006; Weinberger, Sha, & Saul, 2004) has recently been proposed as a special variation of KPCA whose kernel matrix is automatically learned from training samples. The kernel matrix is constructed by maximizing the variance in the kernel feature space implicitly defined by the kernel matrix, subject to local constraints that preserve the angles and distances between k-nearest neighbors. As a result, the underlying data manifold is "unfolded" in its reduced space. More importantly, the boundary of the distribution region of the training samples in input space is faithfully preserved (see Fig. 1 for an illustration). This feature facilitates modeling normal operating conditions using training samples for process monitoring. However, the direct application of MVU only provides a lower dimensional embedding of the training samples, whereas process monitoring requires a functional mapping from input space to reduced space to repeatedly map newly observed process samples onto the reduced space.

In this paper, a new process monitoring technique comprising fault detection and identification based on a novel dimensionality reduction method named maximum variance unfolding projections (MVUP) is proposed. MVUP uses MVU and linear regression as building blocks and overcomes the limitations of KPCA. MVUP first performs MVU on the training samples to estimate the intrinsic dimension of the data and obtain a lower dimensional embedding of the training samples, then applies linear regression to find the projection that best approximates the implicit mapping from the training samples to the embedding. Dimensionality reduction is finally performed using the learned projection. MVUP inherits the manifold unfolding and boundary preserving features of MVU (as shown in Fig. 1) and is computationally efficient in the on-line monitoring phase due to its linear form. In fact, MVUP can be seen as a special variation of the spectral regression method (Cai, He, & Han, 2007), which casts the problem of learning a mapping function for a spectral embedding method into a regression framework.

The rest of the paper is organized as follows. KPCA is reviewed and analyzed in Section 2. The proposed MVUP dimensionality reduction method is presented in Section 3. MVUP-based process monitoring is developed in Section 4. Section 5 compares the fault detection performance of the MVUP-based monitoring method with that of the KPCA-based monitoring method, and shows the effectiveness of our method for fault identification on the benchmark Tennessee Eastman (TE) process. Finally, we conclude our work in Section 6.

2. Kernel principal component analysis

KPCA (Schölkopf et al., 1998) was proposed to generalize PCA to nonlinear cases by mapping input samples to a higher or even infinite dimensional feature space $F$ and performing PCA there. Specifically, let mean-centered training samples $x_1,\ldots,x_N \in \mathbb{R}^D$ be mapped to $\Phi(x_1),\ldots,\Phi(x_N) \in F$ by some nonlinear mapping $\Phi:\mathbb{R}^D \to F$. PCA is then performed to find the principal components of the mapped samples $\Phi(x_1),\ldots,\Phi(x_N)$. KPCA is based on the insight that, by formulating PCA exclusively in terms of dot products, we can replace the dot product by a kernel function $\kappa(a,b) = \langle \Phi(a), \Phi(b)\rangle$ that implicitly defines the mapping $\Phi$ and the feature space $F$.

Assuming $\Phi(x_1),\ldots,\Phi(x_N)$ have been mean-centered, PCA is performed in $F$ by diagonalizing the empirical covariance matrix

$$C = \frac{1}{N}\sum_{i=1}^{N} \Phi(x_i)\Phi(x_i)^{\mathrm{T}}. \quad (1)$$

Equivalently, we need to solve the eigenvalue problem

$$Cv = \lambda v. \quad (2)$$

All the solutions $v$ lie in $\mathrm{span}(\Phi(x_1),\ldots,\Phi(x_N))$ (as can be seen by substituting (1) into (2)) and can be expanded as

$$v = \sum_{i=1}^{N} \alpha_i \Phi(x_i). \quad (3)$$

The problem is then reduced to that of finding the coefficients $\alpha_i$, which can be formulated as the following eigenvalue problem by substituting (3) into (2):

$$K\alpha = N\lambda\alpha,$$

where $K$ is the $N \times N$ kernel matrix of the training samples, $K_{ij} = \kappa(x_i,x_j)$, and $\alpha = [\alpha_1,\ldots,\alpha_N]^{\mathrm{T}}$.

Letting $\lambda_l$ be the $l$th largest eigenvalue of $K$ and $\alpha^l$ be the corresponding normalized eigenvector, an input sample $x$ can be mapped onto the $l$th dimension of the KPCA space with coordinate value

$$\langle v^l, \Phi(x)\rangle = \frac{1}{\sqrt{\lambda_l}}\sum_{i=1}^{N}\alpha_i^l\,\kappa(x_i,x), \quad (4)$$

where the factor $1/\sqrt{\lambda_l}$ ensures $\langle v^l, v^l\rangle = 1$. Since the complexity of evaluating a kernel function is usually $O(D)$ and the eigenvectors of $K$ are usually dense, the complexity of computing the $d$-dimensional embedding of an input sample is $O(dDN)$. But for a training sample $x_i$, since (Ham, Lee, Mika, & Schölkopf, 2004)

$$\langle v^l, \Phi(x_i)\rangle = \frac{1}{\sqrt{\lambda_l}}\left(K\alpha^l\right)_i = \frac{1}{\sqrt{\lambda_l}}\lambda_l\alpha_i^l = \sqrt{\lambda_l}\,\alpha_i^l, \quad (5)$$

its $d$-dimensional embedding $y_i$ can be directly computed as

$$y_i = \left[\sqrt{\lambda_1}\,\alpha_i^1,\ldots,\sqrt{\lambda_d}\,\alpha_i^d\right]^{\mathrm{T}}. \quad (6)$$

Finally, to release the assumption that $\Phi(x_1),\ldots,\Phi(x_N)$ have been mean-centered in $F$, we only need to replace the kernel matrix $K$ and the kernel evaluation $\kappa(x_i,x)$ in (4) by their centered versions

$$\tilde{K} = (I - \mathbf{1}_N)K(I - \mathbf{1}_N),$$

where $I$ is the identity matrix and $\mathbf{1}_N$ is the $N \times N$ matrix with all entries set to $1/N$, and

$$\tilde{\kappa}(x_i,x) = \kappa(x_i,x) - \frac{1}{N}\sum_{j=1}^{N}\kappa(x_j,x) - \frac{1}{N}\sum_{j=1}^{N}\kappa(x_i,x_j) + \frac{1}{N^2}\sum_{j=1}^{N}\sum_{k=1}^{N}\kappa(x_j,x_k).$$

The reduced dimension $d$ of a dimensionality reduction method is often set to the intrinsic dimension of the data estimated by the method. It can be checked that the eigenvalues of $K$ are proportional to the variance along the corresponding principal components in $F$. Therefore, a large gap between the $d$th and $(d+1)$th eigenvalues indicates that the training samples lie on a $d$-dimensional subspace in $F$, or equivalently, a $d$-dimensional manifold in the input space (Weinberger & Saul, 2006). However, the existence of the large gap is based on the assumption that the training samples are mapped onto a linear subspace in $F$. Using popular kernels such as Gaussians $\kappa(a,b) = \exp(-\|a-b\|^2/t)$, polynomials $\kappa(a,b) = (1 + \langle a,b\rangle)^d$ and sigmoids $\kappa(a,b) = \tanh(\langle a,b\rangle + \theta)$ does not explicitly consider the underlying manifold structure of the training samples and cannot ensure this assumption.
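Equations (4)–(6) and the kernel centering above can be illustrated with a short numerical sketch (not from the paper; plain NumPy on arbitrary toy data with an arbitrary Gaussian kernel width `t`). It computes the KPCA embedding of the training samples directly from the eigenvectors via (6), and checks that the out-of-sample formula (4), evaluated at a training sample, reproduces the same coordinates, as identity (5) asserts:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(30, 5))          # 30 toy training samples in R^5 (hypothetical data)
N, d, t = X.shape[0], 3, 10.0         # t: arbitrary Gaussian kernel width

def kernel(a, b):
    return np.exp(-np.sum((a - b) ** 2) / t)

# Kernel matrix and its centered version K~ = (I - 1_N) K (I - 1_N)
K = np.array([[kernel(xi, xj) for xj in X] for xi in X])
one_N = np.full((N, N), 1.0 / N)
Kc = (np.eye(N) - one_N) @ K @ (np.eye(N) - one_N)

lam, V = np.linalg.eigh(Kc)           # eigh returns ascending eigenvalues
lam, V = lam[::-1], V[:, ::-1]        # reorder to descending
Y_train = V[:, :d] * np.sqrt(lam[:d]) # eq. (6): y_i^l = sqrt(lambda_l) * alpha_i^l

def embed(x):
    """Out-of-sample embedding via eq. (4) with centered kernel evaluations.
    Cost is O(dDN): N kernel evaluations, each O(D), reused for d dimensions."""
    k = np.array([kernel(xi, x) for xi in X])
    kc = k - k.mean() - K.mean(axis=1) + K.mean()   # centered kappa~(x_i, x)
    return kc @ V[:, :d] / np.sqrt(lam[:d])
```

For a training sample, `embed(X[i])` coincides with row `i` of `Y_train`, which is exactly the shortcut that (5) exploits.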

3. Maximum variance unfolding projections

3.1. Maximum variance unfolding

MVU (Weinberger et al., 2004) has recently been proposed as a special variation of KPCA whose kernel matrix $K$ is learned from the training samples such that the data manifold in input space is unfolded in the kernel feature space $F$ implicitly defined by $K$. This also makes the manifold unfolded in the reduced space of MVU (Weinberger & Saul, 2006), since the reduced space is essentially the PCA subspace in $F$. The problem of learning $K$ is cast as an instance of semidefinite programming (Vandenberghe & Boyd, 1996), which is convex and does not suffer from local optima. After learning $K$ and performing its eigen-decomposition, the $d$-dimensional embedding of the training samples is obtained by the KPCA embedding (6). The following two subsections respectively review the constraints and the whole optimization problem of learning $K$.

3.1.1. Constraints

3.1.1.1. Positive semidefiniteness. First, $K$ should be positive semidefinite. This guarantees that the elements of $K$ can be interpreted as dot products of the training samples in the feature space $F$ implicitly defined by $K$.

3.1.1.2. Centering. Second, $K$ should store the dot products of mapped training samples in $F$ that are mean-centered:

$$\sum_{i=1}^{N}\Phi(x_i) = 0 \iff \left\|\sum_{i=1}^{N}\Phi(x_i)\right\|^2 = \sum_{i=1}^{N}\sum_{j=1}^{N}\langle\Phi(x_i),\Phi(x_j)\rangle = \sum_{i=1}^{N}\sum_{j=1}^{N} K_{ij} = 0. \quad (7)$$

This constraint ensures that the eigenvalues of $K$ can be interpreted as measures of variance along principal components in $F$.

3.1.1.3. Isometry. The final constraint is based on the following intuition. Imagine each training sample $x_i$ as a steel ball connected to its $k$ nearest neighbors by rigid rods. The lattice of steel balls formed in this way can be seen as a discrete approximation of the underlying manifold structure. Unfolding the manifold should not break or stretch the rigid rods, and hence the distances and angles between points and their neighborhoods should be preserved, i.e. the mapping from input space to $F$ should be an isometry (cf. Weinberger et al., 2004). Since the triangle formed by any sample and its neighbors can be determined by specifying the lengths of all its sides, the constraint can be formally stated as follows. Let the $N \times N$ binary adjacency matrix $S$ indicate the neighborhood relation by setting $S_{ij} = 1$ if $x_i$ is a $k$-nearest neighbor of $x_j$. Whenever $x_i$ and $x_j$ are $k$-nearest neighbors ($S_{ij} = 1$) or are common $k$-nearest neighbors of another sample ($[S^{\mathrm{T}}S]_{ij} > 0$), the equation

$$\|\Phi(x_i) - \Phi(x_j)\|^2 = \|x_i - x_j\|^2 \iff K_{ii} + K_{jj} - 2K_{ij} = \|x_i - x_j\|^2 \quad (8)$$

should hold.

3.1.2. Optimization

The objective function is based on the observation that any "fold" between two samples on a manifold serves to decrease the Euclidean distance between them. Hence, to unfold the manifold in $F$, an objective function that measures the sum of pairwise squared distances between the mapped training samples is maximized:

$$\mathcal{C} = \frac{1}{2N}\sum_{i=1}^{N}\sum_{j=1}^{N}\|\Phi(x_i)-\Phi(x_j)\|^2 = \frac{1}{2N}\sum_{i=1}^{N}\sum_{j=1}^{N}\left(K_{ii}+K_{jj}-2K_{ij}\right) = \mathrm{Tr}(K). \quad (9)$$

The last equality follows from the centering constraint (7). The objective function is also equal to the variance of the mapped training samples in $F$. Hence, the unfolding procedure can be interpreted as pulling $\Phi(x_1),\ldots,\Phi(x_N)$ as far apart as possible by maximizing their variance, which is bounded from above due to the isometry constraint (cf. Weinberger et al., 2004).

Combining the objective function and the three constraints defines an instance of semidefinite programming, which optimizes a linear function of the elements of a positive semidefinite matrix subject to linear equality constraints:

$$K^{*} = \arg\max_{K}\ \mathrm{Tr}(K)$$
$$\text{s.t.}\quad 1.\ K \succeq 0;$$
$$\qquad 2.\ \sum_{i=1}^{N}\sum_{j=1}^{N} K_{ij} = 0;$$
$$\qquad 3.\ K_{ii} + K_{jj} - 2K_{ij} = \|x_i - x_j\|^2\ \text{for all}\ i, j\ \text{such that}\ S_{ij} = 1\ \text{or}\ [S^{\mathrm{T}}S]_{ij} > 0. \quad (10)$$

There are several efficient general-purpose toolboxes for solving semidefinite programming problems, such as the SeDuMi (Sturm, 1999) and CSDP (Borchers, 1999) toolboxes.
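The data that problem (10) consumes can be assembled mechanically. The sketch below (not from the paper; it assumes a precomputed adjacency matrix `S`) enumerates the constrained pairs and their target distances; the resulting list, together with the positive semidefiniteness and centering conditions, is what would be handed to an SDP solver such as CSDP or SeDuMi:

```python
import numpy as np

def isometry_constraints(X, S):
    """Constraint data for problem (10): for every pair (i, j) with S_ij = 1
    or [S^T S]_ij > 0, the learned K must satisfy
    K_ii + K_jj - 2 K_ij = ||x_i - x_j||^2.
    Returns (i, j, squared distance) triples; X holds samples as rows."""
    N = X.shape[0]
    StS = S.T @ S
    triples = []
    for i in range(N):
        for j in range(i + 1, N):
            if S[i, j] or S[j, i] or StS[i, j] > 0:
                triples.append((i, j, float(np.sum((X[i] - X[j]) ** 2))))
    return triples
```

For three near-collinear points whose middle point is the nearest neighbor of both ends, the outer pair is constrained through the $[S^{\mathrm{T}}S]_{ij} > 0$ clause even though neither end is a neighbor of the other.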

3.2. The usage of MVU

This section describes the usage of MVU as a building block of MVUP.

The only parameter in MVU is $k$, the number of nearest neighbors. The original MVU method assumes that the $k$-nearest neighbor graph of the training samples is connected; otherwise, each connected component of the graph is analyzed separately, which leads to a separate reduced space for each component. However, process monitoring applications need a single, coherent reduced space that all process samples can be mapped onto and monitoring can be performed in. We use the simple strategy of setting $k$ to the smallest integer that makes the $k$-nearest neighbor graph of the training samples connected.
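This neighbor-count selection can be sketched as follows (not from the paper; a brute-force NumPy/SciPy version that symmetrizes the k-nearest-neighbor relation and grows k until the graph has a single connected component):

```python
import numpy as np
from scipy.spatial.distance import cdist
from scipy.sparse.csgraph import connected_components

def smallest_connecting_k(X):
    """Smallest k for which the symmetrized k-nearest-neighbor graph of the
    rows of X is connected, plus the adjacency matrix S (S[i, j] = 1 iff
    x_i is one of the k nearest neighbors of x_j)."""
    N = X.shape[0]
    order = np.argsort(cdist(X, X), axis=1)   # order[j, 0] is j itself
    for k in range(1, N):
        S = np.zeros((N, N), dtype=int)
        for j in range(N):
            S[order[j, 1:k + 1], j] = 1       # k nearest neighbors of x_j
        graph = np.maximum(S, S.T)            # undirected version of the graph
        n_components, _ = connected_components(graph, directed=False)
        if n_components == 1:
            return k, S
    return N - 1, S
```

On two well-separated clusters of three points each, k must grow to 3 before an edge bridges the clusters, which is the kind of behavior this selection rule trades against solver cost (larger k means more constraints in (10)).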

Specifying the reduced dimension $d$ for process monitoring is still an open issue and currently there is no dominant technique (Chiang, Russell, & Braatz, 2001). Setting $d$ to the intrinsic dimension of the data is a good heuristic. MVU unfolds the data manifold in $F$ (while keeping it mean-centered), and hence $\Phi(x_1),\ldots,\Phi(x_N)$ are very likely to lie on a linear subspace in $F$. Since PCA is performed in $F$ and the eigenvalues of $K$ are proportional to the variance along principal components, the intrinsic dimension of the data can be effectively determined according to the eigenvalues of the $K$ learned by MVU, as shown in Fig. 2.

3.3. Learning the projection

After computing the $d$-dimensional MVU embedding $y_1,\ldots,y_N$ of the training samples, we find the linear projection $A = [a^1,\ldots,a^d] \in \mathbb{R}^{D\times d}$ that best approximates the implicit mapping from the training samples to the embedding, that is, the $A$ that satisfies $Y = A^{\mathrm{T}}X$ in the least squares sense (an exact solution might not exist), where $Y = [y_1,\ldots,y_N]$ and $X = [x_1,\ldots,x_N]$. Each basis vector $a^l$ ($l = 1,\ldots,d$) is obtained by solving the linear least squares regression problem (and normalizing the solution to unit length)

$$a^l = \arg\min_{a}\sum_{i=1}^{N}\left(a^{\mathrm{T}}x_i - y_i^l\right)^2, \quad (11)$$

where $y_i^l$ is the $l$th component of $y_i$.

In the case where $X^{\mathrm{T}}$ has full column rank, $a^l$ can be solved as (Ben-Israel & Greville, 2003)

$$a^l = (X^{\mathrm{T}})^{+}y^l = (XX^{\mathrm{T}})^{-1}Xy^l,$$

where $(X^{\mathrm{T}})^{+}$ is the Moore–Penrose pseudoinverse (Ben-Israel & Greville, 2003) of $X^{\mathrm{T}}$ and $y^l = [y_1^l,\ldots,y_N^l]^{\mathrm{T}}$.

If $X^{\mathrm{T}}$ is column rank deficient (e.g. in the case where the training sample number $N$ is smaller than the input dimension $D$), problem (11) is ill posed, i.e., we may have infinitely many solutions to the linear equation system $X^{\mathrm{T}}a^l = y^l$. To make the learned projection generalize well to previously unseen samples, an effective way to tackle this issue is to use ridge regression (Tikhonov, 1977), which minimizes a regularized least squares objective function

$$a^l = \arg\min_{a}\left(\sum_{i=1}^{N}\left(a^{\mathrm{T}}x_i - y_i^l\right)^2 + \alpha\|a\|^2\right), \quad (12)$$

where $\alpha > 0$ is the regularization parameter. Then $a^l$ can be solved as (Cai et al., 2007)

$$a^l = \left(XX^{\mathrm{T}} + \alpha I\right)^{-1}Xy^l.$$
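The closed-form ridge solution for all $d$ basis vectors at once can be sketched as follows (not from the paper; `alpha` is the regularization parameter of (12), and the unit-length normalization follows the note under (11)):

```python
import numpy as np

def learn_projection(X, Y, alpha=0.1):
    """Ridge-regression estimate of the projection A (D x d) with Y ~= A^T X.
    X: D x N matrix of training samples (columns); Y: d x N embedding matrix.
    Solves a^l = (X X^T + alpha I)^{-1} X y^l for all l at once, then
    normalizes each basis vector to unit length."""
    D = X.shape[0]
    A = np.linalg.solve(X @ X.T + alpha * np.eye(D), X @ Y.T)
    return A / np.linalg.norm(A, axis=0)
```

When the embedding really is a linear function of the inputs and `alpha` is small, the learned basis vectors recover the underlying (unit-length) projection directions.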

When the training sample matrix $X$ is very large, efficient iterative algorithms such as LSQR (Paige & Saunders, 1982) can be used to directly solve (11) or (12), as pointed out in Cai et al. (2007).

The algorithm of MVUP is summarized in the following.

(A) MVU embedding.
(a) Specify $k$ to be the smallest integer that makes the $k$-nearest neighbor graph of the training samples $x_1,\ldots,x_N$ connected.
(b) Construct the $N \times N$ binary adjacency matrix $S$ of the training samples: set $S_{ij} = 1$ if $x_i$ is a $k$-nearest neighbor of $x_j$; otherwise, set $S_{ij} = 0$.
(c) Learn the kernel matrix $K$ by solving (10).
(d) Perform the eigen-decomposition of $K$ and set the reduced dimension $d$ according to the eigenvalues of $K$.
(e) Compute the $d$-dimensional MVU embedding $y_1,\ldots,y_N$ of the training samples as in (6).
(B) Learn each basis vector of the linear projection $A = [a^1,\ldots,a^d]$ that best approximates the implicit mapping from $x_1,\ldots,x_N$ to $y_1,\ldots,y_N$ by linear regression. The $d$-dimensional embedding $y$ in the MVUP subspace of an input sample $x$ is computed as $y = A^{\mathrm{T}}x$.

4. Process monitoring based on MVUP

4.1. Fault detection

The fault detection method applies MVUP to mean-centered training samples collected under normal operating conditions to compute the projection from input space to the MVUP subspace. Monitoring is performed in the MVUP subspace as well as in the complementary residual space to capture the variations inside and outside the model.

To measure the variation inside the MVUP subspace, the $T^2$ statistic is used, which is simple and the most popular in process monitoring applications. Let $Y = [y_1,\ldots,y_N]$ be the matrix of the embeddings of the training samples in the MVUP subspace, $\Lambda$ be the sample covariance matrix of $y_1,\ldots,y_N$, and $y$ be the embedding of an input sample $x$ in the MVUP subspace. The $T^2$ statistic associated with $x$ can be computed as

$$T^2 = y^{\mathrm{T}}\Lambda^{-1}y = y^{\mathrm{T}}\left[YY^{\mathrm{T}}/(N-1)\right]^{-1}y = (N-1)\,x^{\mathrm{T}}A\left[A^{\mathrm{T}}XX^{\mathrm{T}}A\right]^{-1}A^{\mathrm{T}}x. \quad (13)$$

The squared prediction error (SPE) statistic is used to measure the variation in the residual space. The SPE statistic associated with an input sample $x$ is computed as

$$\mathrm{SPE} = \|x - \hat{x}\|^2, \quad (14)$$

where $\hat{x}$ is the reconstruction of $x$ according to its embedding $y$ in the MVUP subspace. The optimal reconstruction $\hat{x}$ in the least squares sense can be computed as (Ben-Israel & Greville, 2003)

$$\hat{x} = (A^{\mathrm{T}})^{+}y = A(A^{\mathrm{T}}A)^{-1}A^{\mathrm{T}}x, \quad (15)$$

where $(A^{\mathrm{T}})^{+}$ is the Moore–Penrose pseudoinverse of $A^{\mathrm{T}}$.
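Equations (13)–(15) reduce to a few linear solves. A minimal sketch (not from the paper; it assumes a projection matrix `A` from MVUP and mean-centered data):

```python
import numpy as np

def monitoring_stats(x, A, X_train):
    """T^2 (eq. 13) and SPE (eqs. 14-15) for a mean-centered sample x.
    A: D x d MVUP projection; X_train: D x N mean-centered training data."""
    N = X_train.shape[1]
    Y = A.T @ X_train                        # training embeddings
    cov = (Y @ Y.T) / (N - 1)                # their sample covariance
    y = A.T @ x                              # embedding of the new sample
    T2 = float(y @ np.linalg.solve(cov, y))  # y^T cov^{-1} y
    xhat = A @ np.linalg.solve(A.T @ A, y)   # least-squares reconstruction (15)
    SPE = float(np.sum((x - xhat) ** 2))     # eq. (14)
    return T2, SPE
```

A sample lying exactly in the column span of `A` has zero SPE, so all of its deviation shows up in the $T^2$ chart; the two statistics thus split the variation inside and outside the model, as the text describes.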

The upper control limits for the $T^2$ and SPE statistics are computed by performing kernel density estimation (KDE) (Martin & Morris, 1996) on the $T^2$ and SPE statistic values associated with the training samples. In this way, we do not need to assume particular distributions for the $T^2$ and SPE statistics.
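A KDE-based limit of this kind can be sketched with SciPy (not from the paper; `confidence` is an assumed tail level, e.g. 99%, and the root-finding bracket is a heuristic):

```python
import numpy as np
from scipy.stats import gaussian_kde
from scipy.optimize import brentq

def kde_control_limit(train_stats, confidence=0.99):
    """Upper control limit c with P(statistic <= c) = confidence under a
    Gaussian KDE fitted to the training-phase statistic values."""
    kde = gaussian_kde(train_stats)
    lo = train_stats.min() - 5 * train_stats.std()   # heuristic bracket: the KDE
    hi = train_stats.max() + 5 * train_stats.std()   # CDF is ~0 at lo and ~1 at hi
    return brentq(lambda c: kde.integrate_box_1d(-np.inf, c) - confidence, lo, hi)
```

Because the limit comes from the estimated density of the observed statistic values, skewed distributions (typical for SPE) are handled without assuming a chi-squared or F form.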

The off-line modeling procedure is summarized below.

(A) Normalize the training samples $x_1,\ldots,x_N$ to zero mean and unit variance.
(B) Apply MVUP on $x_1,\ldots,x_N$, obtaining the projection $A$.
(C) Compute the embeddings of the training samples in the MVUP subspace.
(D) Compute the $T^2$ and SPE statistics associated with the training samples. Compute the upper control limits for the $T^2$ and SPE statistics using KDE.

The on-line monitoring procedure is listed below.

(A) Scale the newly observed input sample $x$ with the mean and variance obtained at step (A) of the modeling procedure.
(B) Compute its embedding in the MVUP subspace as $y = A^{\mathrm{T}}x$.
(C) Compute the $T^2$ and SPE statistics associated with $x$. Monitor whether they exceed their upper control limits.

We can see that the complexity of computing the $d$-dimensional embedding of a previously unseen input sample in the on-line monitoring phase is $O(dD)$, in contrast to $O(dDN)$ in the KPCA-based monitoring method.

4.2. Fault identification

When an out-of-control state is detected, in order to find out the source of the fault, the measured variables that are not consistent with the normal operating conditions should be identified according to the faulty process samples. Contribution plots (Miller, Swanson, & Heckler, 1998; Westerhuis, Gurden, & Smilde, 2000) are popular methods for fault identification. The measured variables with the largest contributions to the out-of-control monitoring statistics are considered abnormal.

Since the basis vectors in $A$ are not necessarily mutually orthogonal, we use the generalized contribution plot method (Westerhuis et al., 2000) as follows, which can be applied to nonorthogonal cases.

By rewriting (13) as

$$T^2 = \sum_{l=1}^{D} y^{\mathrm{T}}\Lambda^{-1}A^{\mathrm{T}}_{(l)}x_l,$$

where $x_l$ is the $l$th component of $x$, $A^{\mathrm{T}}_{(l)}$ is the $l$th column of $A^{\mathrm{T}}$, and $\Lambda$ is the sample covariance matrix of the training embeddings, the contribution of the $l$th variable to the $T^2$ statistic associated with an input sample $x$ is defined as $C_{T^2,l} = y^{\mathrm{T}}\Lambda^{-1}A^{\mathrm{T}}_{(l)}x_l$.

By rewriting (14) as

$$\mathrm{SPE} = \sum_{l=1}^{D}\left(x_l - \hat{x}_l\right)^2,$$

where $\hat{x}_l$ is the $l$th component of $\hat{x}$ computed as in (15), the contribution of the $l$th variable to the SPE statistic associated with an input sample $x$ is defined as $C_{\mathrm{SPE},l} = (x_l - \hat{x}_l)^2$.
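Both contribution vectors can be computed in a few lines (not from the paper; `cov` plays the role of the sample covariance of the training embeddings in (13)). A useful sanity check is that the contributions sum exactly to the corresponding statistic:

```python
import numpy as np

def contributions(x, A, cov):
    """Per-variable contributions to T^2 and SPE for the generalized
    contribution plot; by construction each vector sums to its statistic."""
    y = A.T @ x
    w = np.linalg.solve(cov, y)              # cov^{-1} y
    C_T2 = (A @ w) * x                       # C_{T2,l} = y^T cov^{-1} A^T_(l) x_l
    xhat = A @ np.linalg.solve(A.T @ A, y)   # reconstruction, eq. (15)
    C_SPE = (x - xhat) ** 2                  # C_{SPE,l} = (x_l - xhat_l)^2
    return C_T2, C_SPE
```

Ranking the variables by these values is what the contribution plots in Section 5.2 visualize.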

5. Simulation studies

The Tennessee Eastman (TE) process simulator has been widely used as a benchmark simulation for comparing various process monitoring methods (e.g. Ku et al., 1995; Raich & Cinar, 1996; Russell, Chiang, & Braatz, 2000). The simulator is based on a nonlinear industrial process consisting of five unit operations: an exothermic two-phase reactor, a condenser, a flash separator, a reboiler stripper and a recycle compressor. A flowsheet of the process together with its implemented control structure (Lyman & Georgakis, 1995) is shown in Fig. 3. The process has 12 manipulated variables and 41 measurements. The simulator includes a set of programmed fault modes, listed in Table 1. Further details on the TE process can be found in Chiang et al. (2001). The simulation data of the TE process used in our study was downloaded from […]. The sampling interval is 3 min. The semidefinite programming solver used in MVUP was the CSDP v4.9 toolbox (Borchers, 1999) in MATLAB.

5.1. Fault detection of the TE process

We compared the fault detection performance of the MVUP-based monitoring method with that of the KPCA-based monitoring method (Choi et al., 2005; Lee et al., 2004). Fault detection performance was evaluated by the missing alarm rate ($\delta$) and detection delay ($\gamma$) of each method. The missing alarm rate of one method is defined as $\delta = \min(\delta_{T^2}, \delta_{\mathrm{SPE}})$, where $\delta_{T^2}$ and $\delta_{\mathrm{SPE}}$ are respectively the missing alarm rates of the $T^2$ and SPE charts in this method. Similarly, the detection delay of one method is defined as $\gamma = \min(\gamma_{T^2}, \gamma_{\mathrm{SPE}})$. The detection delay of a monitoring chart is defined as the time gap between the introduction of the fault and three consecutive statistic values exceeding its upper control limit for the first time. Since it is unfair to compare missing alarm rates and detection delays of two methods when they have different false alarm rates, in computing the above indices, the upper control limit for each monitoring statistic in each method was adjusted to the 5% highest value for the normal operating conditions of the testing data set. In this way, the false alarm rate of each method is adjusted to be equal (5%).
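The two indices can be computed as follows (not from the paper; the delay is reported here as the index, in samples after the fault, of the first value of the first run of three consecutive limit violations, which is one concrete reading of the definition above):

```python
import numpy as np

def missing_alarm_rate(stat, limit, fault_start):
    """Fraction of post-fault statistic values that fail to exceed the limit."""
    post = stat[fault_start:]
    return float(np.mean(post <= limit))

def detection_delay(stat, limit, fault_start, run=3):
    """Index (in samples after the fault) of the first sample of the first
    run of `run` consecutive values above the limit; None if never detected."""
    count = 0
    for t, exceeded in enumerate(stat[fault_start:] > limit):
        count = count + 1 if exceeded else 0
        if count == run:
            return t - run + 1
    return None
```

The method-level indices are then the minima of these chart-level values over the $T^2$ and SPE charts.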

The training data set consists of 500 samples collected under normal operating conditions, and each testing data set for one fault mode consists of 960 samples. The fault was introduced at sample 160 in each testing data set. All variables except the uncontrolled agitation speed, for a total of 52 variables, were used for monitoring.

5.1.1. Fault detection without validation data

The first experiment was performed under the setting that only normal samples are available in the off-line modeling phase, which is often the case in practice. For KPCA, Lee et al. (2004) proposed using Gaussian kernels $\kappa(a,b) = \exp(-\|a-b\|^2/t)$ with the parameter $t$ fixed to $10D\sigma^2$, where $D$ is the input dimension and $\sigma^2$ is the variance of the training samples.

As shown in Fig. 4, the ninth largest normalized eigenvalue (each eigenvalue is divided by the sum of all eigenvalues) of the matrix learned by MVU suddenly decreases to near zero. This indicates that the intrinsic dimension of the training data is 8. However, we cannot draw a similar conclusion on the intrinsic dimension according to the eigenvalues from KPCA, because there is no sharp "turning point" in the plot. The reduced dimension of KPCA was set to 43 by counting the eigenvalues larger than the average eigenvalue; this rule was recommended for KPCA in Lee et al. (2004).

The missing alarm rates and detection delays of KPCA and MVUP for all 21 fault modes are shown in Fig. 5. We can see that MVUP provides an evidently lower missing alarm rate than KPCA for faults 5, 10, 16, 19 and 21, and a lower detection delay than KPCA for faults 9, 15, 16, 20 and 21. They show similar performance for the other fault modes. Both MVUP and KPCA provide very high missing alarm rates for fault 3 and fault 9, because these two faults impose very little disturbance on the monitored variables (Russell et al., 2000).

It is interesting to investigate the monitoring charts of MVUP and KPCA for fault 5. When fault 5 occurred at sample 160, a positive step change was introduced in the temperature of the condenser cooling water, which caused an increase in its flow rate and fluctuations in the flow rate of the outlet stream from the condenser to the separator, the temperature in the separator and the temperature of the separator cooling water. As shown in Fig. 6, all monitoring charts detect the fault promptly. After sample 360, when the control loops had compensated for the change and most variables had returned to their setpoints, the KPCA $T^2$ chart, KPCA SPE chart and MVUP SPE chart indicate that the process had returned to a near normal state. However, the condenser cooling water temperature and its flow rate were still consistently higher than their normal values. Only the MVUP $T^2$ chart correctly indicates that the fault persists throughout the simulation. Combining the results of the MVUP $T^2$ and SPE charts, we can infer that the process was in a magnified state inside the model space after sample 360, which to some extent coincides with the true situation that the system had returned to steady state, but under a heavier load than normal, after sample 360.
5.1.2.Fault detection with validation data

In the case where validation data is available in the off-line mod-eling phase,we can select kernel function for KPCA by choosing the one that provides the best performance for the validation data.We merged 21data sets each of which contains 480samples collected under one of the 21fault modes and one extra data set which con-tains 480normal samples as the validation data set.Since Gaussian kernels empirically provides better performance than polynomial kernels and sigmoid kernels,we used Gaussian kernels j ea ;b T?

exp àk a àb k 2=t

with the kernel width t chosen from the candi-date sequence 2à2ts =2D r 2,s ?0;1;...;30,where D is the dimen-sion of input space and r 2is the variance of training samples.

Table 1

Process faults for the Tennessee Eastman process simulator.Fault mode Description

Type 1A/C feed ratio,B composition constant (Stream 4)Step 2B composition,A/C ratio constant (Stream 4)Step 3D feed temperature (Stream 2)

Step 4Reactor cooling water inlet temperature Step 5Condenser cooling water inlet temperature (Stream 2)

Step 6A feed loss (Stream 1)

Step 7C header pressure loss reduced availability (Stream 4)

Step 8A,B,C feed composition (Stream 4)Random variation 9D feed temperature (Stream 2)Random variation 10C feed temperature (Stream 4)

Random variation 11Reactor cooling water inlet temperature Random variation 12Condenser cooling water inlet temperature Random variation 13Reaction kinetics

Slow drift 14Reactor cooling water valve Sticking 15Condenser cooling water valve Sticking 16–20Unknown

Unknown 21

Valve (Stream 4)

Constant position

J.-D.Shao,G.Rong /Expert Systems with Applications 36(2009)11332–11340

11337

In fact,using validation data,we may specify a better reduced dimension d for MVUP and KPCA(e.g.set d to be the value that pro-vides the best performance for validation data).Therefore,we investigated the average missing alarm rates of MVUP and KPCA over21fault modes at each possible reduced dimensions.For each reduced dimension,the kernel width of KPCA was set to the value that provides the lowest missing alarm rate for the validation data. As shown in Fig.7,the average missing alarm rate of KPCA decreases slowly before dimension36and then drops quickly until dimension43,followed by some?uctuations.In contrast,the aver-age missing alarm rate of MVUP drops sharply in the beginning from dimension1to8and decreases very slowly with small?uc-tuations afterwards(this corroborates that setting the reduced dimension to the previously found intrinsic dimension8is a good choice when there is no validation data).In summary,with valida-tion data,MVUP still provides much lower average missing alarm rate than KPCA over a large range of possible reduced dimensions (dimension2–42)and the lowest average missing alarm rates of MVUP(0.176at dimension48)and KPCA(0.178at dimension 50)over all possible dimensions are comparable.The detection de-lay results give the similar conclusion.

5.2. Fault identification of the TE process

We investigated the fault identification performance of our method (with d = 8) for four typical fault modes in the TE process (faults 5, 6, 12, and 14). In fault 5, the positive step change in condenser cooling water temperature leads to a sharp increase in its flow rate, which is measured by variable 52. In fault 6, there is a step loss in A feed, which is associated with variables 1 and 44. In fault 12, there is a random variation in condenser cooling water temperature, which causes abnormality in many variables. Among them, the variables associated with the temperature and pressure of the separator, i.e. variables 11, 13, and 22, show the largest fluctuations. In fault 14, the reactor cooling water valve sticks, which causes large fluctuations in reactor temperature (variable 9), reactor cooling water outlet temperature (variable 21), and the flow rate of reactor cooling water (variable 51).

Fig. 8 shows the averaged contribution plots of MVUP for these faults over the first 5 hours of simulation data after the detection of the faults (faults 5 and 6 are effectively detected by the T2 charts, and faults 12 and 14 are effectively detected by the SPE charts). From Fig. 8, we can see that the abnormal variables in each fault mode are at the top ranks in the contribution plots and can easily be distinguished from the other variables.
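The ranking step behind an averaged contribution plot can be sketched as below. This is a hedged illustration, assuming the per-sample, per-variable contribution values have already been computed by the monitoring model; the toy contribution matrix is hypothetical.

```python
import numpy as np

def top_contributing_variables(contributions, k=3):
    """Rank variables by their contribution averaged over the samples
    collected after fault detection (e.g. the first 5 h of faulty data).

    contributions: array of shape (n_samples, n_variables).
    Returns the 1-based indices of the k largest averaged contributions.
    """
    avg = contributions.mean(axis=0)
    order = np.argsort(avg)[::-1]            # descending by averaged value
    return [int(i) + 1 for i in order[:k]]   # 1-based variable numbers

# Toy data: variable 52 dominates, as in fault 5.
rng = np.random.default_rng(0)
c = rng.random((100, 52)) * 0.1
c[:, 51] += 5.0                              # boost variable 52 (1-based)
top = top_contributing_variables(c, k=1)
```

Averaging before ranking smooths out sample-to-sample noise, so the genuinely abnormal variables surface at the top of the plot, as observed in Fig. 8.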

6. Conclusion

This paper proposes a new nonlinear process monitoring technique comprising fault detection and identification based on a novel dimensionality reduction method named maximum variance unfolding projections (MVUP). Compared with the KPCA-based monitoring method, the MVUP-based monitoring method has the following notable features. (1) MVUP inherits the manifold unfolding and boundary preserving power of MVU by learning its implicit mapping and does not involve an empirical kernel function selection procedure in its training phase. (2) The intrinsic dimension of the data, to which the dimension of the model space is set, can be effectively determined.

(3) Performing dimensionality reduction for a new observed process sample in the on-line monitoring phase is computationally efficient like PCA, because the mapping function is a linear projection (the complexity of computing the d-dimensional embedding of an input sample is O(dD), in contrast to O(dDN) in the KPCA-based method, where D is the input dimension and N is the number of training samples). The effectiveness of the MVUP-based monitoring method for fault detection and identification is demonstrated on the benchmark TE process.
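The on-line cost difference can be illustrated with a minimal sketch. The projection matrix W, the training set, and the kernel expansion coefficients are hypothetical placeholders for quantities produced in the off-line modeling phase: MVUP maps a new sample with one D-by-d matrix–vector product, whereas a KPCA-style mapping must first evaluate the kernel against all N training samples.

```python
import numpy as np

def mvup_project(W, x):
    """MVUP on-line step: a single linear projection, O(dD)."""
    return W.T @ x                          # (D, d).T @ (D,) -> (d,)

def kpca_project(X_train, alpha, x, width=1.0):
    """KPCA-style on-line step: evaluate a Gaussian kernel against all N
    training samples before projecting, O(dDN)."""
    sq = ((X_train - x) ** 2).sum(axis=1)   # N squared distances, O(DN)
    k = np.exp(-sq / width)                 # kernel vector, shape (N,)
    return alpha.T @ k                      # (N, d).T @ (N,) -> (d,)

D, d, N = 52, 8, 480                        # dimensions as in the TE study
rng = np.random.default_rng(1)
W = rng.standard_normal((D, d))
X_train = rng.standard_normal((N, D))
alpha = rng.standard_normal((N, d))
x = rng.standard_normal(D)
y_lin = mvup_project(W, x)                  # cost independent of N
y_ker = kpca_project(X_train, alpha, x)     # cost grows linearly with N
```

Both mappings return a d-dimensional embedding, but only the kernel version touches every stored training sample, which is why the linear MVUP projection scales better as the training set grows. (Centering terms used in a full KPCA implementation are omitted here for brevity.)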

Acknowledgements

This work was supported by the National Natural Science Foundation of China (60421002) and the National High Technology R&D Program of China (2007AA04Z191).

References

Bakshi, B. R. (1998). Multiscale PCA with application to multivariate statistical process monitoring. AIChE Journal, 44(7), 1596–1610.

Ben-Israel, A., & Greville, T. N. E. (2003). Generalized inverses: Theory and applications. New York: Springer.

Borchers, B. (1999). CSDP, a C library for semidefinite programming. Optimization Methods and Software, 11(1), 613–623.

Cai, D., He, X., & Han, J. (2007). Spectral regression for efficient regularized subspace learning. In IEEE 11th international conference on computer vision (ICCV 2007), Rio de Janeiro, Brazil (pp. 1–8).

Chiang, L. H., Russell, E., & Braatz, R. D. (2001). Fault detection and diagnosis in industrial systems. New York: Springer.

Choi, S. W., & Lee, I. B. (2004). Nonlinear dynamic process monitoring based on dynamic kernel PCA. Chemical Engineering Science, 59(24), 5897–5908.

Choi, S. W., Lee, C., Lee, J. M., Park, J. H., & Lee, I. B. (2005). Fault detection and identification of nonlinear processes based on kernel PCA. Chemometrics and Intelligent Laboratory Systems, 75(1), 55–67.

Paige, C. C., & Saunders, M. A. (1982). LSQR: An algorithm for sparse linear equations and sparse least squares. ACM Transactions on Mathematical Software, 8(1), 43–71.

Dong, D., & McAvoy, T. J. (1996). Nonlinear principal component analysis based on principal curves and neural networks. Computers and Chemical Engineering, 20(1), 65–78.

Ham, J., Lee, D. D., Mika, S., & Schölkopf, B. (2004). A kernel view of the dimensionality reduction of manifolds. In Proceedings of the 21st international conference on machine learning, Banff, Canada (p. 47).

Kramer, M. A. (1991). Nonlinear principal component analysis using autoassociative neural networks. AIChE Journal, 37(2), 233–243.

Ku, W., Storer, R. H., & Georgakis, C. (1995). Disturbance detection and isolation by dynamic principal component analysis. Chemometrics and Intelligent Laboratory Systems, 30(1), 179–196.

Lee, J. M., Yoo, C. K., Choi, S. W., Vanrolleghem, P. A., & Lee, I. B. (2004). Nonlinear process monitoring using kernel principal component analysis. Chemical Engineering Science, 59(1), 223–234.

Lee, J. M., Yoo, C. K., & Lee, I. B. (2004). Fault detection of batch processes using multiway kernel principal component analysis. Computers and Chemical Engineering, 28(9), 1837–1847.

Lyman, P. R., & Georgakis, C. (1995). Plant-wide control of the Tennessee Eastman problem. Computers and Chemical Engineering, 19(3), 321–331.

MacGregor, J. F., & Kourti, T. (1995). Statistical process control of multivariate processes. Control Engineering Practice, 3(3), 403–414.

Martin, E. B., & Morris, A. J. (1996). Non-parametric confidence bounds for process performance monitoring charts. Journal of Process Control, 6(6), 349–358.

Miller, P., Swanson, R. E., & Heckler, C. F. (1998). Contribution plots: A missing link in multivariate quality control. Applied Mathematics and Computer Science, 8(4), 775–792.

Nomikos, P., & MacGregor, J. F. (1994). Monitoring batch processes using multiway principal component analysis. AIChE Journal, 40(8), 1361–1375.

Qin, S. J. (2003). Statistical process monitoring: Basics and beyond. Journal of Chemometrics, 17(8–9), 480–502.

Raich, A., & Cinar, A. (1996). Statistical process monitoring and disturbance diagnosis in multivariable continuous processes. AIChE Journal, 42(4), 995–1009.

Russell, E. L., Chiang, L. H., & Braatz, R. D. (2000). Fault detection in industrial processes using canonical variate analysis and dynamic principal component analysis. Chemometrics and Intelligent Laboratory Systems, 51(1), 81–93.

Saul, L. K., Roweis, S. T., & Singer, Y. (2004). Think globally, fit locally: Unsupervised learning of low dimensional manifolds. Journal of Machine Learning Research, 4(2), 119–155.


Schölkopf, B., Smola, A., & Müller, K. R. (1998). Nonlinear component analysis as a kernel eigenvalue problem. Neural Computation, 10(5), 1299–1319.

Sturm, J. F. (1999). Using SeDuMi 1.02, a MATLAB toolbox for optimization over symmetric cones. Optimization Methods and Software, 11(1), 625–653.

Tikhonov, A. N. (1977). Solutions of ill-posed problems. New York: Wiley.

Vandenberghe, L., & Boyd, S. (1996). Semidefinite programming. SIAM Review, 38(1), 49–95.

Weinberger, K. Q., Sha, F., & Saul, L. K. (2004). Learning a kernel matrix for nonlinear dimensionality reduction. In Proceedings of the 21st international conference on machine learning, Banff, Canada (p. 106).

Weinberger, K. Q., & Saul, L. K. (2006). Unsupervised learning of image manifolds by semidefinite programming. International Journal of Computer Vision, 70(1), 77–90.

Westerhuis, J. A., Gurden, S. P., & Smilde, A. K. (2000). Generalized contribution plots in multivariate statistical process monitoring. Chemometrics and Intelligent Laboratory Systems, 51(1), 95–114.
