
Incorporating Audio Cues into Dialog and Action Scene Extraction


Lei Chen†, Shariq J. Rizvi‡∗ and M. Tamer Özsu†

† School of Computer Science, University of Waterloo, Waterloo, Canada
Email: {l6chen, tozsu}@uwaterloo.ca

‡ Computer Science and Engineering Department, Indian Institute of Technology, Mumbai, India
Email: rizvi@cse.iitb.ac.in

∗ Work performed while the author was visiting the University of Waterloo.

ABSTRACT

In this paper, we present an approach to extract scenes from video. The approach is top-down and uses video editing rules and audio cues to extract simple dialog and action scenes. The underlying model is a finite state machine coupled with audio cues that are determined using an audio classifier.

Keywords: shot, scene, editing rules, finite state machine, support vector machine, audio classification

1. INTRODUCTION

The increasing availability and use of video has raised demands for better modeling of video and for more sophisticated indexing and retrieval techniques. However, compared to text or images, video data are much more complicated. A one-minute movie clip may contain about 2,000 video frames (image), a mixture of three types of sounds (audio), and several lines of closed captions (text). How to efficiently represent and index video data remains a challenging problem. Early video database systems segment video into shots,1–3 and extract key frames from each shot to represent it.4–6 Such systems have been criticized for three reasons:

• The number of shots becomes very large with the growth of video data, which makes the data difficult to browse;

• A simple shot does not convey much semantics, since it is produced by a single camera operation;

• Using key frames may ignore the temporal characteristics of the video.

There have been several attempts7–10 to cluster semantically related shots into scenes. All of the scene construction algorithms follow similar steps:

1. Visual features, such as color histograms, textures, and shapes, are extracted from shots.

2. Shots are clustered based on a similarity measure computed from the extracted visual features.

3. Clusters that are temporally close to each other are grouped into scenes.

All of these approaches use a “bottom-up” strategy, clustering the shots into “general” scenes without any knowledge about the semantics and structure of the scenes. They employ only low-level visual features, which may cause semantically unrelated shots to be clustered into one unit merely because they are “similar” in terms of those features. Furthermore, users may not be interested in the “general” scenes constructed in this way, but may focus on particular scenes. In particular, dialog and action scenes have special importance in video, since they constitute the basic “sentences” of a movie, which consists of three basic types of scenes11: dialogs without action, dialogs with action, and actions without dialog. Automatic extraction of dialog and action scenes from a video is therefore an important issue for the practical use of video.

There is another shortcoming of previous approaches: most of them ignore the accompanying audio data. Although the audio in a movie is quite an important factor in how humans understand the movie, most previous works7–10,12 focus only on visual features. The audio information contains a number of clues about the semantics of the video data; for example, the audio clips extracted from dialog scenes are mainly speech with some music or environmental background sound. Therefore, automatic scene extraction models should take the accompanying audio data into consideration in order to bring the extraction results closer to human understanding.

Based on video editing rules and audio cues, we develop a Finite State Machine (FSM) model to extract simple dialog and action scenes§ from movies. In previous work,13 we summarized our approach and reported preliminary results involving only video editing rules. In this paper, we describe the complete model and discuss our experiments with it.

Audio data of a movie are not simply pure audio, such as pure music or pure speech, but are always a mixture of several types of sounds, such as speech with music or environment background sound, or environment sound with music. Compared to pure audio classification,14–17 there has not been much work on mixed audio classification.18,19 The audio classification model that we develop in this paper can differentiate between speech with music or environment background sound, and environment sound mixed with music. The incorporation of audio cues into the FSM model based on editing rules results in a powerful extraction system. The experiments that we report indicate that the precision of the prediction model increases significantly with the use of audio data.

The rest of the paper is arranged as follows: Section 2 briefly presents the basic video editing rules, which are deduced from an analysis of dialog and action scenes from the editor’s point of view; these rules guide our model in extracting scenes from video. Section 3 explains the audio features that we use for classifying mixed-type audio. In Section 4, we present a support vector machine-based classification model to classify mixed audio data. The complete FSM model is presented in Section 5. Experimental results on the efficiency and accuracy of the FSM model coupled with the audio classification model are given in Section 6, followed, in Section 7, by a summary of some related work on audio classification and a comparison of our work with earlier proposals on video scene detection. We conclude in Section 8 and indicate some further directions that we are pursuing.

2. VIDEO EDITING RULES FOR CONSTRUCTING DIALOG AND ACTION SCENES

A given video clip may be (and commonly is) interpreted differently by different users; this is an example of semantic heterogeneity20 within the context of video. However, there is only one meaning that a video editor tries to convey to the audience when constructing a semantically meaningful scene from video shots. There are some basic video editing rules that video editors follow, and developing a scene extraction model based on these rules indirectly solves the semantic heterogeneity problem.

Through an analysis of actor arrangement and camera placement, we find that there are only three basic types of video shot patterns in a two-person (call them a and b) dialog scene:

• a shot in which only actor a’s face is visible throughout the shot (Type A shot);

• a shot in which only actor b’s face is visible throughout the shot (Type B shot); and

• a shot with both actors a and b, with both of their faces visible (Type C shot).

In Figure 1(a), three representative frames from the three types of shots are shown. These shots are extracted from a short dialog scene between Randall and his ex-girlfriend in the movie “Gone in 60 Seconds”. In this example dialog, actor a is Randall and actor b is his ex-girlfriend. Thus, in Figure 1(a), the first frame shows Randall’s face; it represents an A type shot. Similarly, in the second frame, Randall’s ex-girlfriend appears; the shot represented by this frame is a B type shot. In the third frame, both actors appear with their faces visible; according to the definition above, it represents a C type shot.

§ In this paper, the term action scene is used, for simplicity, to refer to a one-on-one fighting action scene.

Figure 1. Four types of shots in a dialog scene: (a) three types of shots; (b) cut-away shot

In addition to the above three types of shots, an insert or cut-away shot is usually introduced to depict something related to the dialog but not covered by those three types of shots. We use the symbol # to represent it. Figure 1(b) shows a dialog which contains a cut-away shot. In this dialog, we start with a C type shot showing two actors having a dialog about a wooden sword. After that, a cut-away shot is inserted to show the sword, and finally we get back to a C type shot to re-establish the dialog scenario.

The rules governing actor arrangement and camera placement in simple action scenes are the same as those for producing simple dialog scenes. This is true even though, in an action scene, the actors move rapidly and the cameras follow them.

After a set of video shots is obtained from the cameras used to film a dialog, the issue becomes how these shots can be used to construct a dialog scene that expresses a conversation. Basically, two steps are followed in editing a dialog scene.11,21

1. Video editors use shots involving both actors (type C shots) or showing alternating actors (i.e., either AB or BA) to set up the dialog scenario. These shots give the audience an early impression of who is involved in the dialog.

During this setting-up process, the basic building blocks of dialog scenes are constructed. We call these basic building blocks elementary dialog scenes. An elementary dialog scene consists of a set of video shots, and can itself be a dialog scene or be expanded into a longer dialog scene. The set of elementary dialog scenes is determined empirically, based on an analysis of the editing rules used to establish dialog scenes and on observations of the dialog scenes of five movies‖. As a result, we have identified eighteen types of elementary dialog scenes, depicted in Table 1 along with statistics about their occurrence frequency in the five movies under consideration. Note that AB and BA do not appear in Table 1 as elementary dialog scenes.

elementary dialog scenes    appearance percentage
ABAB or BABA                41.21%
CAB or CBA                  21.21%
C or C#C                    19.39%
ABC or BAC                   6.06%
CAC or CBC                   3.63%
ABAC or BABC                 2.42%
ACC or BCC                   2.42%
ACA or BCB                   2.42%
ACB or BCA                   1.21%

Table 1. Statistical data on elementary dialog scenes

Through the analysis of the five movies, we find that a single appearance of AB or BA usually acts as a separator connecting two different scenes rather than as a dialog on its own; a pair of shot/reverse shots or a C type shot has to be appended to AB or BA to construct an elementary dialog.

‖ 1. “Con Air”, 1998; 2. “Life is Beautiful”, 2000; 3. “First Knight”, 1998; 4. “Deconstruction”, 1990; 5. “What Dreams May Come”, 1998.

2. Each elementary dialog scene can be expanded by appending the three types of shots.

During this editing process, the basic rule that an editor follows is to give a contrasting impression to the audience. For example, if the ending shot of a scene is an A type shot, usually a B type shot is appended to expand the scene. Similarly, the editor can append a C type shot as a re-establishing shot from time to time, to remind the audience of the whole scenario surrounding the dialog scene. Table 2 lists the expansion rules.

type of end shot in the scene    types of shots that may follow
A                                B or C
B                                A or C
C                                A, B, or C

Table 2. Possible types of shots to be appended

Based on these basic techniques, we derive two rules for the construction of dialog and action scenes:

Rule 1: A dialog or action scene must start with an elementary dialog scene.

Rule 2: An elementary dialog scene is expanded by appending different types of shots.

3. CHARACTERISTICS OF MIXED-TYPE AUDIO DATA

Through observation of the five test movies‖, we find an interesting fact: in most dialog scenes, the background audio is music or environment sound, but, compared to the foreground speech signals, these music signals are hidden and very easily ignored. In most action scenes, the background audio is also music, but it can be noticed together with the action effect sounds. Therefore, the audio data coming from dialog and action scenes can be classified into two audio types: speech mixed with music or environment sound (speech mixture), and environment sound mixed with music (env-mus mixture). Incorporating an audio classifier that can differentiate these two mixed types of audio data is expected to increase the accuracy of our FSM model.

A number of audio features provide separability between the different classes involved in pure audio classification. For example, the variance of the zero crossing rate (ZCR) is relatively higher for speech than for music, because of the considerable difference between the ZCR values of voiced and unvoiced speech. For mixed-type audio data, however, the values of these audio features are clustered together, which makes mixed-type audio classification difficult. In the following subsections, three audio features commonly used to differentiate music from speech are presented, together with their characteristics on mixed-type audio data. We extract 80 sample clips (5 seconds long) of speech mixture and env-mus mixture from the movie “Gladiator”, 40 of each type. A 1-second window is used to divide each clip into subsegments.

3.1. Variance of Zero Crossing Rate

Due to the sharp difference between the ZCR values of voiced and unvoiced components, speech tends to have a high variance in its ZCR values; because of the absence of any such phenomenon, music tends to have a lower variance. We calculate a measure of the ZCR variance for a window of N frames as the ratio of frames having a ZCR value more than 1.5 times the average ZCR of the window (the high zero-crossing rate ratio, HZCRR17). In our experiments, we divide the 1-second window into 100 frames. Figure 2(a) shows the average HZCRRs of speech mixture and env-mus mixture.
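To make the feature concrete, here is a minimal sketch of HZCRR, assuming the audio samples for one 1-second window sit in a NumPy array; the function name and implementation details are our illustration, not the authors' code, but the 100-frame layout and the 1.5x threshold follow the text.

```python
import numpy as np

def hzcrr(window: np.ndarray, n_frames: int = 100) -> float:
    """High zero-crossing rate ratio: the fraction of frames whose ZCR
    exceeds 1.5x the average ZCR of the 1-second window."""
    frames = np.array_split(window, n_frames)
    # ZCR of a frame: fraction of adjacent sample pairs with a sign change.
    zcr = np.array([np.mean(np.abs(np.diff(np.sign(f.astype(float)))) > 0)
                    for f in frames])
    return float(np.mean(zcr > 1.5 * zcr.mean()))
```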


Figure 2. Characteristics of mixed-type audio: (a) average HZCRRs; (b) average SRs; (c) average HRs

3.2. Silence Ratio

The silence ratio (SR) is defined as the ratio between the amount of silence in an audio piece and the length of the piece. As stated in Ref. 22, SR is a useful statistical feature for audio classification; it is usually used to differentiate music from speech, and normally speech has a higher SR than music. We divide a 1-second window into 50 frames. For each frame, the root mean square (RMS) is computed and compared to the RMS of the whole window; if the frame RMS is less than 50% of the window RMS, we consider it a silence frame. With respect to speech mixture, silent periods can be defined as the time intervals in which only the background music or environment sound is playing; therefore, a threshold of 50% is used instead of the 10% used in Ref. 23 for pure speech. Figure 2(b) shows the average SRs of speech mixture and env-mus mixture.
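A matching sketch for SR, again our own illustration, with the 50-frame layout and the 50% RMS threshold taken from the text:

```python
import numpy as np

def silence_ratio(window: np.ndarray, n_frames: int = 50) -> float:
    """Silence ratio of a 1-second window: the fraction of frames whose RMS
    is below 50% of the whole window's RMS (the paper's threshold for
    mixed audio, versus the 10% used for pure speech in Ref. 23)."""
    samples = window.astype(float)
    window_rms = np.sqrt(np.mean(samples ** 2))
    frames = np.array_split(samples, n_frames)
    frame_rms = np.array([np.sqrt(np.mean(f ** 2)) for f in frames])
    return float(np.mean(frame_rms < 0.5 * window_rms))
```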

3.3. Harmonic Ratio

Spectrum analysis shows that pure music is more harmonic than speech, since pure speech contains a sequence of tonal (vowel) and noise (consonant) components.19 A harmonic sound is defined as one which contains a series of frequencies derived from a fundamental or original frequency as multiples of it. We divide each 1-second window into 10 frames and compute the harmonic frequency of each frame using the algorithm in Ref. 24. The harmonic ratio (HR) is defined as the ratio of the number of frames having a harmonic frequency to the total number of frames in the window; HR is a measure of the harmony of the clip. Figure 2(c) shows the average HRs of speech mixture and env-mus mixture.
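The paper computes the harmonic frequency with the algorithm of Ref. 24, which we do not reproduce; as a hedged stand-in, the sketch below calls a frame harmonic when its normalized autocorrelation has a strong peak at a plausible pitch lag. The function name, the pitch range, and the peak threshold are our own assumptions.

```python
import numpy as np

def harmonic_ratio(window: np.ndarray, sample_rate: int = 44100,
                   n_frames: int = 10, peak_threshold: float = 0.3) -> float:
    """Fraction of frames judged to have a detectable fundamental frequency."""
    frames = np.array_split(window.astype(float), n_frames)
    harmonic = 0
    for frame in frames:
        frame = frame - frame.mean()
        ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
        if ac[0] <= 0:
            continue              # an all-zero (silent) frame has no pitch
        ac = ac / ac[0]           # normalize so the zero-lag value is 1
        # Look for a strong autocorrelation peak at a lag corresponding to
        # a fundamental between roughly 50 Hz and 1 kHz.
        lo, hi = sample_rate // 1000, sample_rate // 50
        if hi < len(ac) and ac[lo:hi].max() >= peak_threshold:
            harmonic += 1
    return harmonic / n_frames
```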

As shown in Figure 2(b), with our defined threshold, the average SRs of speech mixture are much higher than those of env-mus mixture. Only a few speech-mixture points have lower values; analysis of those clips shows that their background music or environment sound is loud and their speech periods are short. Although Figures 2(a) and 2(c) show that the HZCRRs and HRs of the two mixed audio types are not separable in their original space, we still choose these two features for our audio classifier, based on the known traits of HZCRR and HR in differentiating speech from music. In the next section, we present a classifier which can transform these mixed data into a higher-dimensional space and separate them there.

4. SVM-BASED AUDIO CLASSIFIER

In this section, we propose an audio classifier which can differentiate speech mixture from env-mus mixture. The support vector machine (SVM) is selected as the classifier. We select the SVM because, compared with other classifiers such as neural networks,25 it always finds a global minimum, and it is capable of learning in sparse high-dimensional spaces with relatively few training examples.26 Basically, the SVM seeks the separating hyperplane that produces the largest separation margin, based on the principle of structural risk minimization.26 If the data vectors are linearly separable in the input space, a simple linear SVM can be used for their classification. In general, however, the input data vectors are not linearly separable. The SVM uses a nonlinear transformation to map the input data vectors X into a higher-dimensional space (the feature space) and attempts to linearly separate the mapped data there. The nonlinear transformation is embedded in the kernel function of the SVM; therefore, the kernel function plays an important role in the transformation and separation in feature space. As shown in Figures 2(a), 2(b), and 2(c), our mixed-type audio data are not linearly separable; we expect the kernel function of the SVM to transform these data into a higher dimension and make them separable. Two kernel functions are frequently used:

1. The polynomial kernel:

K(X, X_i) = (X^T X_i + 1)^d

where the X_i are support vectors determined from the training data, and d = 1, 2, ... is the degree of the polynomial.

2. The Gaussian radial basis function (RBF) kernel:

K(X, X_i) = exp(-||X - X_i||^2 / (2δ^2))

where δ > 0 is the global basis function width.
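Both kernels translate directly into code. The sketch below is our own illustration, with delta defaulting to the δ = 5 setting reported in the next paragraph:

```python
import numpy as np

def poly_kernel(x: np.ndarray, xi: np.ndarray, d: int = 2) -> float:
    """Polynomial kernel K(X, X_i) = (X^T X_i + 1)^d."""
    return float((np.dot(x, xi) + 1.0) ** d)

def rbf_kernel(x: np.ndarray, xi: np.ndarray, delta: float = 5.0) -> float:
    """Gaussian RBF kernel K(X, X_i) = exp(-||X - X_i||^2 / (2 delta^2))."""
    return float(np.exp(-np.sum((x - xi) ** 2) / (2.0 * delta ** 2)))
```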

Besides d and δ, there is one more parameter, C, a positive penalty component used to control the amount of overlap allowed between classes. We select mixed-type audio clips from two movies¶, which are transformed into 16-bit, 44 kHz, single-channel raw audio clips of 5-second duration. With these clips, we generate a database of around 1600 clips, manually annotated as “speech mixture” or “env-mus mixture”. Half of the clips are used as training data and the rest as test data. We define the accuracy of an SVM-based classifier as the proportion of correctly identified clips to the total number of clips. SVM Light27 is used to build our classifier, and both kernel functions defined above are tested. In Figure 3, we show the accuracy of the trained SVM classifier with the polynomial kernel: with various parameter settings, around 86% accuracy can be achieved. These results demonstrate that the SVM classifier is rather robust to the choice of model parameters. Similar accuracy can be achieved with the Gaussian RBF kernel with δ = 5; due to space limitations, these results are not shown.

Figure 3. Accuracy versus C using the polynomial kernel with d = 2, 3, 5

We also test the accuracy of the SVM-based classifier with different combinations of audio features in action; the results are reported in Table 3. As we expected, the accuracy increases as more features are “turned on”. Table 3 also shows that SR is a “better” feature for differentiating speech mixture from env-mus mixture than the other two, which corresponds exactly to the analysis of the three features in Section 3.

¶ 1. “Crouching Tiger, Hidden Dragon”, 2000; 2. “Gladiator”, 2000.

HZCRR   SR   HR     Accuracy
  √     √    √      85.94%
  √     √           85.29%
  √          √      72.15%
        √    √      85.42%
  √                 69.35%
        √           84.17%
             √      63.25%

Table 3. Classification accuracy with different combinations of audio features
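The paper builds its classifier with SVM Light27. Purely to illustrate the same protocol (one (HZCRR, SR, HR) triple per 5-second clip, half of the ~1600 annotated clips for training and half for testing, a degree-2 polynomial kernel), here is a sketch using scikit-learn's SVC as a stand-in; the file names and the C value are hypothetical.

```python
import numpy as np
from sklearn.svm import SVC

# Hypothetical files holding one (HZCRR, SR, HR) vector and one label
# (0 = speech mixture, 1 = env-mus mixture) per 5-second clip.
X = np.load("features.npy")          # shape: (n_clips, 3)
y = np.load("labels.npy")            # shape: (n_clips,)

n = len(X) // 2                      # half for training, half for testing
# kernel="poly" with gamma=1, coef0=1 gives (x^T xi + 1)^d, as in Section 4.
clf = SVC(kernel="poly", degree=2, gamma=1.0, coef0=1.0, C=10.0)
clf.fit(X[:n], y[:n])

# Accuracy as defined in the paper: correctly identified clips / all clips.
print(f"test accuracy: {clf.score(X[n:], y[n:]):.2%}")
```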

5. EXTENDED FSM MODEL

In this section, we present how to build the dialog and action scene extraction model based on the video editing rules discussed in Section 2. In order to overcome the shortcomings of the basic FSM model, we then extend the FSM model by incorporating audio cues.

5.1. Video Shot String

We introduce the concept of a video shot string (VSS) to represent the temporal presentation sequence of the different types of shots in a video.

Definition 1: V = {A, B, C, #} is the set of video shot types, whose members are the types of video shots defined in Section 2.

Definition 2: A VSS is a string composed of symbols from V. Each symbol in a VSS represents a shot in a video. The ordering of the symbols in the string, from left to right, represents the shot presentation sequence.

Based on the analysis of the two editing steps discussed above, we define the VSS of a dialog scene as a VSSDS:

Definition 3: A VSSDS is a VSS whose prefix is one of the elementary dialog scenes, expanded by the rules given in Table 2.

The starting elementary dialog scene also classifies a VSSDS; consequently, there are eighteen types of VSSDSs, corresponding to the eighteen types of elementary dialog scenes. It is easy to prove that these are regular languages over the set V. We do not give a complete proof due to lack of space, but the following is the proof for one of the cases, namely the VSSDS whose prefix is ABAB; the proofs of the other cases are similar.

{A}, {B}, and {C} are regular languages over V. {ABAB} is a product of the regular languages {A} and {B}: {ABAB} = {A} ∘ {B} ∘ {A} ∘ {B}, so {ABAB} is a regular language over V, too. A VSSDS that starts with ABAB includes the string ABAB and all the strings expanded from ABAB by appending A, B, or C using the rules in Table 2. Appending a shot to a scene is a concatenation operation (∘). Therefore, by the definition of a regular language,28 a VSSDS of a dialog scene that starts with ABAB is a regular language over V.

By taking the union of the eighteen types of VSSDSs, we again obtain a regular language over the set V. Therefore, the VSSDSs that are used to represent the temporal appearance patterns of video shots in dialog scenes are regular languages over the set V.
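Since each VSSDS family is a regular language, it can be written down as an ordinary regular expression. The sketch below (our own illustration, not from the paper) encodes the ABAB family, using lookbehind alternatives that mirror Table 2; the other seventeen families and the cut-away symbol # would be handled analogously.

```python
import re

# Prefix ABAB followed by any expansion allowed by Table 2
# (A -> B or C, B -> A or C, C -> A, B, or C).
VSSDS_ABAB = re.compile(r"^ABAB(?:(?<=A)[BC]|(?<=B)[AC]|(?<=C)[ABC])*$")

assert VSSDS_ABAB.match("ABAB")        # the elementary scene itself
assert VSSDS_ABAB.match("ABABCAB")     # expanded with C, A, B
assert not VSSDS_ABAB.match("ABABB")   # B may not follow B (Table 2)
assert not VSSDS_ABAB.match("CABA")    # wrong elementary prefix
```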

5.2. Finite State Machine to Extract VSSDSs of Dialog Scenes

Since the VSSDSs of dialog scenes are regular languages, the next issue is how to automatically extract the VSSDSs corresponding to dialog scenes from the VSS of a whole video; in other words, how to extract specified regular languages from VSSs. In this paper, we propose a finite state machine (FSM) model to extract dialog scenes from videos. Note that we are not using the FSM to determine whether a language is regular over V, but constructively, to extract those parts of a VSS that form regular languages with certain properties. In our proposed FSM, a VSS is used as the input. A state represents the status after a number of shots have been processed. An edge between states determines an allowable transition from the current state to another state under a labelled condition; the label of the arc is a symbol representing a type of shot. A substring of the VSS is extracted by the FSM if and only if there exists a path from the initial state to one of the final states; the symbols on the path correspond to the sequence of shots in that substring of the VSS. Figure 4 shows the transition diagram of our proposed FSM, which is used to extract the VSSDSs of simple dialog scenes between two actors.

Figure 4. An FSM that extracts VSSDSs of dialog scenes between actor a and actor b

A video editor follows rules similar to those used for constructing dialog scenes when composing simple action scenes;11,21 therefore, the temporal appearance patterns of video shots in simple action scenes are similar to those of dialog scenes, and the FSM model discussed above is also suitable for extracting simple action scenes (one-on-one fighting).
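The exact states of Figure 4 cannot be recovered from the text, so the sketch below is a simplified stand-in of our own: it scans a VSS for an elementary-dialog prefix from Table 1 (a subset shown, longest first) and then extends the match greedily under the Table 2 expansion rules.

```python
FOLLOWS = {"A": "BC", "B": "AC", "C": "ABC"}           # Table 2
PREFIXES = ("ABAB", "BABA", "CAB", "CBA", "C#C", "C")  # subset of Table 1

def extract_scenes(vss: str):
    """Return (start index, substring) pairs for each extracted scene."""
    scenes, i = [], 0
    while i < len(vss):
        prefix = next((p for p in PREFIXES if vss.startswith(p, i)), None)
        if prefix is None:      # no elementary scene starts here (Rule 1)
            i += 1
            continue
        j = i + len(prefix)
        while j < len(vss) and vss[j] in FOLLOWS.get(vss[j - 1], ""):
            j += 1              # append shots allowed by Table 2 (Rule 2)
        scenes.append((i, vss[i:j]))
        i = j
    return scenes

print(extract_scenes("##ABABCAC#CBA"))  # [(2, 'ABABCAC'), (10, 'CBA')]
```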

5.3. Incorporating Audio Cues into the FSM Model

Since we use the same FSM (which implements the same set of rules) to detect both types of scenes, we must find a mechanism to separate the dialog scenes from the action scenes. In Ref. 13, we used the average shot length. However, this metric does not perform very well on some action movies (see the results of experiment 1 in Section 6). The main reason is that we used only a visual-effect feature (shot length) of dialog and action scenes to differentiate them; here, the visual effect refers to the impression that a video editor wants to give the audience by using different shot lengths in constructing a scene.11 In some action movies, in order to show the action clearly, the video editor selects longer shots, and this introduces errors into our model, causing it to misclassify these action scenes as dialog scenes. Based on observation of the five test movies mentioned in Section 3, we derive another rule for dialog and action scene detection in movies.

Rule 3: The audio clips accompanying dialog scenes are usually speech mixture, while the audio clips accompanying action scenes are usually env-mus mixture.

Using Rule 3, we incorporate audio cues into our FSM model as follows:

Step 1: For each video clip Vi, extract the accompanying audio clip Ai.

Step 2: Construct a VSSi for each video clip Vi.

Step 3: Divide each audio clip Ai into segments, each of which lasts 5 seconds. The averages of the three audio features (HZCRR, SR, and HR) are computed for each audio segment and used as inputs to the trained SVM audio classifier.

Step 4: Create an audio metadata file Mi for each Ai by storing the classification results for each segment in Ai.

Step 5: Feed each VSSi to the FSM.

Step 6: For each scene (dialog or action) extracted by the FSM, the corresponding audio metadata of the accompanying audio clip is checked. If the audio clip has more speech-mixture segments than env-mus segments, it is classified as a speech clip and the corresponding video scene is considered a dialog scene; if the audio clip has more env-mus segments, the corresponding video scene is detected as an action scene.
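Putting Steps 1–6 together, a sketch of the full pipeline. Here extract_audio, build_vss, and classify_segment are assumed placeholders for the shot/audio tooling and the SVM of Section 4 (none of these names come from the paper), and extract_scenes is the FSM stand-in sketched in Section 5.2.

```python
SPEECH_MIX, ENV_MUS_MIX = 0, 1   # labels produced by the SVM of Section 4

def label_scenes(video_clip):
    audio = extract_audio(video_clip)                  # Step 1
    vss, bounds = build_vss(video_clip)                # Step 2
    # bounds[k] = start time (s) of shot k; len(bounds) == len(vss) + 1.
    meta = [classify_segment(audio, t, t + 5)          # Steps 3-4
            for t in range(0, int(bounds[-1]), 5)]
    labelled = []
    for start, scene in extract_scenes(vss):           # Step 5 (FSM above)
        s0 = int(bounds[start]) // 5                   # first 5 s segment
        s1 = int(bounds[start + len(scene)]) // 5 + 1  # one past the last
        votes = meta[s0:s1]                            # Step 6: majority vote
        kind = ("dialog" if votes.count(SPEECH_MIX) > votes.count(ENV_MUS_MIX)
                else "action")
        labelled.append((scene, kind))
    return labelled
```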

6. EXPERIMENTAL RESULTS

In this section, we present the results of extraction experiments conducted using our extended model. We use the same two movies that were used in Ref. 13, shown in Table 4, since both of them contain dialog and action scenes. The videos are first segmented into shots, and the appearances of the actors are manually marked.

movie title                       genre    year    duration (min)    no. of shots
Gladiator                         Action   2000         154              1363
Crouching Tiger, Hidden Dragon    Action   2000         120              1575

Table 4. The experiment data

We use precision and recall, two well-known metrics originally defined in the information retrieval literature, to measure our retrieval results. Precision measures the proportion of recognized scenes that are correct, while recall measures the proportion of true scenes that are recognized. The correctness of the detection results, and the missed detections of correct scenes, are judged by humans. In order to test the effect of incorporating the audio classifier, two approaches have been designed:

1. Apply the FSM using shot length, without any audio cues (the approach used in Ref. 13);

2. Apply the FSM coupled with the SVM-based audio classifier.
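In symbols, writing TP for correctly detected scenes, FP for false detections, and FN for missed true scenes (our notation, not the paper's):

    precision = TP / (TP + FP)
    recall    = TP / (TP + FN)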

Table 5 shows the results of extracting simple dialog scenes and simple action scenes using approach 1; this is the baseline for the other experiments.

movie title                       no. of detected dialogs   precision (%)   recall (%)   no. of detected actions   precision (%)   recall (%)
Gladiator                                  95                   89.47          96.60               25                    84             84
Crouching Tiger, Hidden Dragon            154                   80.52          90.51               64                    76.56          81.6

Table 5. Dialog and action scenes extracted by the FSM with shot-length checking

These results indicate that several of the scenes extracted by the FSM model are neither dialog nor action scenes; examples are scenes in which two people stare at each other, and scenes in which two people appear in an interleaving pattern to show the occurrence of two parallel events. However, most of the errors come from the misclassification of dialog and action scenes. From Table 5, we can see that the precision of the retrieval results for the movie “Crouching Tiger, Hidden Dragon” is low. In this movie, there are several action scenes that dwell on the action effect, which raises the average shot length above the threshold. There are also several scenes that mix action and dialog, on which the FSM model does not perform very well.

movie title                       no. of detected dialogs   precision (%)   recall (%)   no. of detected actions   precision (%)   recall (%)
Gladiator                                  91                   93.40          96.60               29                    86            100
Crouching Tiger, Hidden Dragon            144                   86.11          90.51               74                    81.08         100

Table 6. Dialog and action scenes extracted by the FSM coupled with the audio classifier

In our second experiment, the audio classifier is coupled with the FSM model to differentiate dialog and action scenes (approach 2). Table 6 shows the extraction results of this approach. Comparing the results shown in Table 6 with those in Table 5, we find that the FSM coupled with the audio classifier achieves both higher precision and higher recall. Investigation of the results for both movies reveals that the dialog and action scenes misclassified in experiment 1 are correctly extracted with the help of the audio cues.

7. RELATED WORK

There is very limited work on extracting semantic scenes using a “top-down” approach. Yoshitaka et al.12 propose an approach similar to ours to extract semantic scenes (conversation, tension rising, and action) based on the grammar of film. However, in their approach, only the repetition of similar shots is employed to detect conversation scenes.

Lienhart et al.29 develop a technique to extract dialog scenes with the aid of a face detection algorithm. However, they only extract dialog scenes that show shot/reverse-shot patterns.

Neither of these approaches addresses the extraction of action scenes. Compared to their models, our model has the following advantages:

• We can, in addition to shot/reverse-shot dialogs, detect single-shot dialogs, dialogs with insertions and cut-aways, and dialogs with shot/reverse shots and recovering shots.

• Our model is rule-based, which makes it very suitable for on-line content-based query processing.

• Our model can easily be extended to extract group conversations.

• Our model uses audio cues, which improves system performance.

• Our model is based on the editor’s point of view, rather than relying on individual users’ interpretations of the video, which solves the semantic heterogeneity problems caused by differing interpretations.

As indicated by our experiments, audio cues play an essential part in an extraction model, and the accuracy of the audio classifier directly affects the extraction results. Several audio classification models have been proposed for classifying pure-type audio. Saunders14 addresses pure-type audio classification for FM radio; the idea is to allow automatic switching of channels when music is interrupted by advertisements. ZCR and short-time energy are used to classify input audio into two classes, speech and music. Scheirer and Slaney15 use thirteen features in the time, frequency, spectrum, and cepstrum domains and achieve better classification, concluding that not all of the audio features are necessary for an accurate classification; in addition, they claim to improve the error rate to 1.4% for a 2.4 s window, compared to 2.8% for Saunders’ approach. Based on Scheirer’s conclusion, Carey et al.30 present a comparative study of audio features for speech and music discrimination; they find that simple audio features, such as pitch and amplitude, show significant differences between music and speech. Since then, many approaches have been proposed to classify pure-type audio using different audio features and classifiers.16,17,30–33 Very few attempts have been made to classify mixed-type audio. Srinivasan et al.19 propose a rule-based model with empirically determined thresholds to classify mixed audio into discrete classes. Zhang and Kuo34 propose a method for classification into speech, music, song, environmental sound, speech with background music, silence, etc.; again, empirically determined classification thresholds are used. Our SVM-based audio classification model does not require setting empirical thresholds between classes, and it focuses on differentiating speech mixture from env-mus mixture.

8. CONCLUSION AND FUTURE WORK

Many approaches have been proposed to cluster video shots into scenes by computing the similarity of the shots based on their low-level visual features. However, the clustered shots may not be similar in the semantics they convey, which can make the resulting scenes meaningless. These “bottom-up” approaches do not consider the knowledge that is used to construct scenes, which may also cause the constructed scenes to be too “general” for normal users of a video database. In this paper, we extend our previous work on rule-based detection with an FSM model by incorporating audio cues. The extended model considers not only the editing rules that a video editor follows, but also human understanding of the audio cues of dialog and action scenes. An SVM-based audio classification model is created to differentiate speech with a music or environment-sound background from environment sound mixed with music. The proposed audio classification model is evaluated on manually annotated audio data, and nearly 86% accuracy is achieved. The experimental results on dialog and action scene detection indicate that the FSM model, coupled with the audio classifier, achieves much better results, confirming the advantage of incorporating audio cues.

We believe that, with the help of domain knowledge, our model can easily be extended to detect more types of semantically meaningful scenes, such as car chases and violence scenes. Our future work will focus on developing more models to extract these scenes, as well as a more general audio classification model which can classify more mixed types of audio data.

ACKNOWLEDGMENTS

This research is funded by Intelligent Robotics and Information Systems (IRIS), a Network of Centres of Excellence of the Government of Canada.

REFERENCES

1. T. Kikukawa and S. Kawafuchi, “Development of an automatic summary editing system for the audio-visual resources,” Transactions on Electronics and Information J72(A), pp. 204–212, 1992.
2. B. Shahraray, “Scene change detection and content-based sampling of video sequences,” in Proceedings of IS&T/SPIE 2419, pp. 2–13, 1995.
3. H. Zhang, A. Kankanhalli, and S. Smoliar, “Automatic partitioning of full-motion video,” Multimedia Systems 1, pp. 10–28, 1993.
4. A. Nagasaka and Y. Tanaka, “Automatic video indexing and full video search for object appearances,” in Proceedings of the 2nd Working Conference on Visual Database Systems, pp. 119–133, 1991.
5. Y. Tonomura, A. Akutsu, K. Otsuji, and T. Sadakata, “VideoMAP and VideoSpaceIcon: Tools for anatomizing video content,” in Proceedings of ACM INTERCHI, pp. 131–141, 1993.
6. H. J. Zhang, C. Y. Low, S. W. Smoliar, and J. H. Wu, “Video parsing, retrieval and browsing: An integrated and content-based solution,” in Proceedings of the ACM International Conference on Multimedia, pp. 15–24, San Francisco, CA, 1995.
7. M. Yeung and B.-L. Yeo, “Time-constrained clustering for segmentation of video into story units,” in Proceedings of the 13th International Conference on Pattern Recognition, 3, pp. 375–380, 1996.
8. Y. Rui, T. S. Huang, and S. Mehrotra, “Exploring video structure beyond the shots,” in Proceedings of the IEEE International Conference on Multimedia Computing and Systems, pp. 237–240, 1992.
9. A. Hanjalic, R. L. Lagendijk, and J. Biemond, “Automatically segmenting movies into logical story units,” in Proceedings of the International Conference on Visual Information Systems, pp. 229–236, 1999.
10. W. Mahdi, M. Ardebilian, and L. M. Chen, “Automatic video scene segmentation based on spatial-temporal clues and rhythm,” Networking and Information Systems Journal 2(5), pp. 1–25, 2000.
11. D. Arijon, Grammar of the Film Language, Focal Press, 1976.
12. A. Yoshitaka, T. Ishii, and A. Hirakawa, “Content-based retrieval of video data by the grammar of film,” in Proceedings of the IEEE Symposium on Visual Languages, 3, pp. 310–317, 1997.
13. L. Chen and M. T. Özsu, “Rule-based scene extraction from video,” in Proceedings of the IEEE International Conference on Image Processing, pp. 737–740, September 2002.
14. J. Saunders, “Real-time discrimination of broadcast speech/music,” in Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing, pp. 993–996, May 1996.
15. E. Scheirer and M. Slaney, “Construction and evaluation of a robust multifeature speech/music discriminator,” in Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing, pp. 21–24, April 1997.
16. K. El-Maleh, M. Klein, G. Petrucci, and P. Kabal, “Speech/music discrimination for multimedia applications,” in Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing, pp. 2445–2448, June 2000.
17. L. Lu, H. Jiang, and H. J. Zhang, “A robust audio classification and segmentation method,” in Proceedings of the ACM International Conference on Multimedia, 2001.
18. T. Zhang and C. C. J. Kuo, “Audio content analysis for online audiovisual data,” IEEE Transactions on Speech and Audio Processing 9(5), pp. 619–625, 2001.
19. S. Srinivasan, D. Petkovic, and D. Ponceleon, “Towards robust features for classifying audio in the CueVideo system,” in Proceedings of the ACM International Conference on Multimedia, 1999.
20. Y. Day, S. Dagtas, M. Iino, A. Khokhar, and A. Ghafoor, “Object-oriented conceptual modeling of video data,” in Proceedings of the Eleventh International Conference on Data Engineering, pp. 401–408, March 1995.
21. S. D. Katz, Film Directing Shot by Shot: Visualizing from Concept to Screen, Michael Wiese Productions, 1991.
22. G. J. Lu and T. Hankinson, “A technique towards automatic audio classification and retrieval,” in Proceedings of the IEEE International Conference on Signal Processing, pp. 1142–1145, October 1998.
23. M. C. Liu and C. Wan, “A study on content-based classification and retrieval of audio database,” in Proceedings of the IEEE International Symposium on Database Engineering and Applications, pp. 339–345, July 2001.
24. S. Pfeiffer, S. Fischer, and W. Effelsberg, “Automatic audio content analysis,” in Proceedings of the ACM International Conference on Multimedia, 1996.
25. C. J. C. Burges, “A tutorial on support vector machines for pattern recognition,” Data Mining and Knowledge Discovery 2(2), pp. 121–167, 1998.
26. V. Vapnik, Statistical Learning Theory, Wiley, 1998.
27. T. Joachims, “SVMlight support vector machine.” http://svmlight.joachims.org/.
28. J. L. Hein, Theory of Computation: An Introduction, Jones and Bartlett Publishers, 1996.
29. R. Lienhart, S. Pfeiffer, and W. Effelsberg, “Scene determination based on video and audio features,” in Proceedings of the International Conference on Visual Information Systems, pp. 685–690, 1999.
30. M. Carey, E. S. Parris, and H. Lloyd-Thomas, “A comparison of features for speech, music discrimination,” in Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing, pp. 149–152, April 1999.
31. G. Lu and H. Templar, “An investigation of automatic audio classification and segmentation,” in Proceedings of WCCC-ICSP 2000, 5th International Conference on Signal Processing, pp. 776–781.
32. E. Wold, T. Blum, D. Keislar, and J. Wheaten, “Content-based classification, search, and retrieval of audio,” IEEE Multimedia 3(3), pp. 27–36, 1996.
33. S. Z. Li, “Content-based audio classification and retrieval using the nearest feature line method,” IEEE Transactions on Speech and Audio Processing 8(5), 2000.
34. T. Zhang and C. C. J. Kuo, “Audio content analysis for online audiovisual data,” IEEE Transactions on Speech and Audio Processing 9(5), pp. 619–625, 2001.
