单片机 英文参考文献 翻译

天津科技大学毕业生

外文资料翻译

姓名:

学院:电子信息与自动化学院

专业:测控技术与仪器

一.英文原文

Progress in Computers

Prestige Lecture delivered to IEE, Cambridge, on 5 February 2009

Maurice Wilkes

The first stored program computers began to work around 1950. The one we built in Cambridge, the EDSAC, was first used in the summer of 1949.

These early experimental computers were built by people like myself with varying backgrounds. We all had extensive experience in electronic engineering and were confident that that experience would stand us in good stead. This proved true, although we had some new things to learn. The most important of these was that transients must be treated correctly; what would cause a harmless flash on the screen of a television set could lead to a serious error in a computer.

As far as computing circuits were concerned, we found ourselves with an embarras de richesse. For example, we could use vacuum tube diodes for gates, as we did in the EDSAC, or pentodes with control signals on both grids, a system widely used elsewhere. This sort of choice persisted and the term families of logic came into use. Those who have worked in the computer field will remember TTL, ECL and CMOS. Of these, CMOS has now become dominant.

In those early years, the IEE was still dominated by power engineering and we had to fight a number of major battles in order to get radio engineering, along with the rapidly developing subject of electronics (dubbed in the IEE "light current electrical engineering"), properly recognised as an activity in its own right. I remember that we had some difficulty in organising a conference because the power engineers' ways of doing things were not our ways. A minor source of irritation was that all IEE published papers were expected to start with a lengthy statement of earlier practice, something difficult to do when there was no earlier practice.

Consolidation in the 1960s

By the late 1950s or early 1960s, the heroic pioneering stage was over and the computer field was starting up in real earnest. The number of computers in the world had increased and they were much more reliable than the very early ones. To those years we can ascribe the first steps in high level languages and the first operating systems. Experimental time-sharing was beginning, and ultimately computer graphics was to come along.

Above all, transistors began to replace vacuum tubes. This change presented a formidable challenge to the engineers of the day. They had to forget what they knew about circuits and start again. It can only be said that they measured up superbly well to the challenge and that the change could not have gone more smoothly.

Soon it was found possible to put more than one transistor on the same bit of silicon, and this was the beginning of integrated circuits. As time went on, a sufficient level of integration was reached for one chip to accommodate enough transistors for a small number of gates or flip flops. This led to a range of chips known as the 7400 series. The gates and flip flops were independent of one another and each had its own pins. They could be connected by off-chip wiring to make a computer or anything else.

These chips made a new kind of computer possible. It was called a minicomputer. It was something less than a mainframe, but still very powerful, and much more affordable. Instead of having one expensive mainframe for the whole organisation, a business or a university was able to have a minicomputer for each major department.

Before long minicomputers began to spread and become more powerful. The world was hungry for computing power and it had been very frustrating for industry not to be able to supply it on the scale required and at a reasonable cost. Minicomputers transformed the situation.

The fall in the cost of computing did not start with the minicomputer; it had always been that way. This was what I meant when I referred in my abstract to inflation in the computer industry ‘going the other way’. As time goes on people get more for their money, not less.

Research in Computer Hardware

The time that I am describing was a wonderful one for research in computer hardware. The user of the 7400 series could work at the gate and flip-flop level and yet the overall level of integration was sufficient to give a degree of reliability far above that of discrete transistors. The researcher, in a university or elsewhere, could build any digital device that a fertile imagination could conjure up. In the Computer Laboratory we built the Cambridge CAP, a full-scale minicomputer with fancy capability logic.

The 7400 series was still going strong in the mid 1970s and was used for the Cambridge Ring, a pioneering wide-band local area network. Publication of the design study for the Ring came just before the announcement of the Ethernet. Until these two systems appeared, users had mostly been content with teletype-based local area networks.

Rings need high reliability because, as the pulses go repeatedly round the ring, they must be continually amplified and regenerated. It was the high reliability provided by the 7400 series of chips that gave us the courage needed to embark on the project for the Cambridge Ring.

The RISC Movement and Its Aftermath

Early computers had simple instruction sets. As time went on designers of commercially available machines added additional features which they thought would improve performance. Few comparative measurements were done and on the whole the choice of features depended upon the designer’s intuition.

In 1980, the RISC movement that was to change all this broke on the world. The movement opened with a paper by Patterson and Ditzel entitled The Case for the Reduced Instruction Set Computer.

Apart from leading to a striking acronym, this title conveys little of the insights into instruction set design which went with the RISC movement, in particular the way it facilitated pipelining, a system whereby several instructions may be in different stages of execution within the processor at the same time. Pipelining was not new, but it was new for small computers. The RISC movement benefited greatly from methods which had recently become available for estimating the performance to be expected from a computer design without actually implementing it. I refer to the use of a powerful existing computer to simulate the new design. By the use of simulation, RISC advocates were able to predict with some confidence that a good RISC design would be able to out-perform the best conventional computers using the same circuit technology. This prediction was ultimately borne out in practice.
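To make the pipelining argument concrete, the following toy timing model (a sketch only; the pipeline depth and program length are made-up numbers, and no stalls or hazards are modelled) shows why overlapping instruction execution pays off:

```python
# Toy timing model of an ideal instruction pipeline (illustrative assumptions only).

def unpipelined_cycles(n_instructions: int, n_stages: int) -> int:
    # Each instruction occupies the whole processor for n_stages cycles.
    return n_instructions * n_stages

def pipelined_cycles(n_instructions: int, n_stages: int) -> int:
    # Once the pipeline is full, one instruction completes every cycle.
    return n_stages + (n_instructions - 1)

if __name__ == "__main__":
    n, k = 1_000_000, 5  # hypothetical program length and pipeline depth
    print("unpipelined:", unpipelined_cycles(n, k), "cycles")
    print("pipelined:  ", pipelined_cycles(n, k), "cycles")
    print("speed-up:   ", unpipelined_cycles(n, k) / pipelined_cycles(n, k))
```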

Simulation made rapid progress and soon came into universal use by computer designers. In consequence, computer design has become more of a science and less of an art. Today, designers expect to have a roomful of computers available to do their simulations, not just one. They refer to such a roomful by the attractive name of computer farm.

The x86 Instruction Set

Little is now heard of pre-RISC instruction sets with one major exception, namely that of the Intel 8086 and its progeny, collectively referred to as x86. This has become the dominant instruction set and the RISC instruction sets that originally had a considerable measure of success are having to put up a hard fight for survival.

This dominance of x86 disappoints people like myself who come from the research wings, both academic and industrial, of the computer field. No doubt, business considerations have a lot to do with the survival of x86, but there are other reasons as well. However much we research-oriented people would like to think otherwise, high level languages have not yet eliminated the use of machine code altogether. We need to keep reminding ourselves that there is much to be said for strict binary compatibility with previous usage when that can be attained. Nevertheless, things might have been different if Intel's major attempt to produce a good RISC chip had been more successful. I am referring to the i860 (not the i960, which was something different). In many ways the i860 was an excellent chip, but its software interface did not fit it to be used in a workstation.

There is an interesting sting in the tail of this apparently easy triumph of the x86 instruction set. It proved impossible to match the steadily increasing speed of RISC processors by direct implementation of the x86 instruction set as had been done in the past. Instead, designers took a leaf out of the RISC book; although it is not obvious on the surface, a modern x86 processor chip contains hidden within it a RISC-style processor with its own internal RISC coding. The incoming x86 code is, after suitable massaging, converted into this internal code and handed over to the RISC processor where the critical execution is performed.
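The sketch below illustrates the idea in miniature: a complex external instruction is cracked into simpler internal operations that a RISC-style core executes. The instruction names and micro-operations here are invented for the example and do not correspond to Intel's real internal coding.

```python
# Illustrative only: translating complex external instructions into simple internal ops.

CRACK_TABLE = {
    "ADD r1, [mem]": ["LOAD t0, [mem]", "ADD r1, r1, t0"],
    "PUSH r1":       ["SUB sp, sp, 4", "STORE r1, [sp]"],
    "INC [mem]":     ["LOAD t0, [mem]", "ADD t0, t0, 1", "STORE t0, [mem]"],
}

def decode(external_program):
    """Translate incoming complex instructions into internal RISC-style operations."""
    internal = []
    for insn in external_program:
        internal.extend(CRACK_TABLE.get(insn, [insn]))  # simple ops pass through unchanged
    return internal

if __name__ == "__main__":
    for op in decode(["PUSH r1", "ADD r1, [mem]", "INC [mem]"]):
        print(op)
```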

In this summing up of the RISC movement, I rely heavily on the latest edition of Hennessy and Patterson’s books on computer design as my supporting authority; see in particular Computer Architecture, third edition, 2003, pp 146, 151-4, 157-8.

The IA-64 Instruction Set

Some time ago, Intel and Hewlett-Packard introduced the IA-64 instruction set. This was primarily intended to meet a generally recognised need for a 64 bit address space. In this, it followed the lead of the designers of the MIPS R4000 and Alpha. However, one would have thought that Intel would have stressed compatibility with the x86; the puzzle is that they did the exact opposite.

Moreover, built into the design of IA-64 is a feature known as predication, which makes it incompatible in a major way with all other instruction sets. In particular, it needs 6 extra bits with each instruction. This upsets the traditional balance between instruction word length and information content, and it changes significantly the brief of the compiler writer.

In spite of having an entirely new instruction set, Intel made the puzzling claim that chips based on IA-64 would be compatible with earlier x86 chips. It was hard to see exactly what was meant.

Chips for the latest IA-64 processor, namely, the Itanium, appear to have special hardware for compatibility. Even so, x86 code runs very slowly.

Because of the above complications, implementation of IA-64 requires a larger chip than is required for more conventional instruction sets. This in turn implies a higher cost. Such, at any rate, is the received wisdom, and, as a general principle, it was repeated as such by Gordon Moore when he visited Cambridge recently to open the Betty and Gordon Moore Library. I have, however, heard it said that the matter appears differently from within Intel. This I do not understand. But I am very ready to admit that I am completely out of my depth as regards the economics of the semiconductor industry.

AMD have defined a 64 bit instruction set that is more compatible with x86 and they appear to be making headway with it. The chip is not a particularly large one. Some people think that this is what Intel should have done. [Since the lecture was delivered, Intel have announced that they will market a range of chips essentially compatible with those offered by AMD.]

The Relentless Drive towards Smaller Transistors

The scale of integration continued to increase. This was achieved by shrinking the original transistors so that more could be put on a chip. Moreover, the laws of physics were on the side of the manufacturers. The transistors also got faster, simply by getting smaller. It was therefore possible to have, at the same time, both high density and high speed.

There was a further advantage. Chips are made on discs of silicon, known as wafers. Each wafer has on it a large number of individual chips, which are processed together and later separated. Since shrinkage makes it possible to get more chips on a wafer, the cost per chip goes down.
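A back-of-the-envelope calculation makes the point. The wafer cost, wafer size and die areas below are hypothetical, and edge losses and yield are ignored; the sketch only shows the direction of the effect.

```python
# Illustrative cost-per-chip arithmetic under assumed numbers.

import math

def chips_per_wafer(wafer_diameter_mm: float, die_area_mm2: float) -> int:
    wafer_area = math.pi * (wafer_diameter_mm / 2) ** 2
    return int(wafer_area // die_area_mm2)

def cost_per_chip(wafer_cost: float, wafer_diameter_mm: float, die_area_mm2: float) -> float:
    return wafer_cost / chips_per_wafer(wafer_diameter_mm, die_area_mm2)

if __name__ == "__main__":
    wafer_cost = 1000.0  # hypothetical cost of one processed wafer
    for die_area in (100.0, 50.0):  # shrinking the same design to half the area
        print(f"{die_area:5.1f} mm^2 die -> "
              f"{chips_per_wafer(300, die_area):5d} chips, "
              f"{cost_per_chip(wafer_cost, 300, die_area):.2f} per chip")
```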

Falling unit cost was important to the industry because, if the latest chips are cheaper to make as well as faster, there is no reason to go on offering the old ones, at least not indefinitely. There can thus be one product for the entire market.

However, detailed cost calculations showed that, in order to maintain this advantage as shrinkage proceeded beyond a certain point, it would be necessary to move to larger wafers. The increase in the size of wafers was no small matter. Originally, wafers were one or two inches in diameter, and by 2000 they were as much as twelve inches. At first, it puzzled me that, when shrinkage presented so many other problems, the industry should make things harder for itself by going to larger wafers. I now see that reducing unit cost was just as important to the industry as increasing the number of transistors on a chip, and that this justified the additional investment in foundries and the increased risk.

The degree of integration is measured by the feature size, which, for a given technology, is best defined as half the distance between wires in the densest chips made in that technology. At the present time, production of 90 nm chips is still building up.

Suspension of Law

In March 1997, Gordon Moore was a guest speaker at the celebrations of the centenary of the discovery of the electron held at the Cavendish Laboratory. It was during the course of his lecture that I first heard the fact that you can have silicon chips that are both fast and low in cost described as a violation of Murphy's law, or Sod's law as it is usually called in the UK. Moore said that experience in other fields would lead you to expect to have to choose between speed and cost, or to compromise between them. In fact, in the case of silicon chips, it is possible to have both.

In a reference book available on the web, Murphy is identified as an engineer working on human acceleration tests for the US Air Force in 1949. However, we were perfectly familiar with the law in my student days, when we called it by a much more prosaic name than either of those mentioned above, namely, the Law of General Cussedness. We even had a mock examination question in which the law featured. It was the type of question in which the first part asks for a definition of some law or principle and the second part contains a problem to be solved with the aid of it. In our case the first part was to define the Law of General Cussedness and the second was the problem: a cyclist sets out on a circular cycling tour; derive an equation giving the direction of the wind at any time.

The single-chip computer

At each shrinkage the number of chips was reduced and there were fewer wires going from one chip to another. This led to an additional increment in overall speed, since the transmission of signals from one chip to another takes a long time.

Eventually, shrinkage proceeded to the point at which the whole processor except for the caches could be put on one chip. This enabled a workstation to be built that out-performed the fastest minicomputer of the day, and the result was to kill the minicomputer stone dead. As we all know, this had severe consequences for the computer industry and for the people working in it.

From the above time the high density CMOS silicon chip was Cock of the Roost. Shrinkage went on until millions of transistors could be put on a single chip and the speed went up in proportion.

Processor designers began to experiment with new architectural features designed to give extra speed. One very successful experiment concerned methods for predicting the way program branches would go. It was a surprise to me how successful this was. It led to a significant speeding up of program execution, and other forms of prediction followed.

Equally surprising is what it has been found possible to put on a single chip computer by way of advanced features. For example, features that had been developed for the IBM Model 91, the giant computer at the top of the System 360 range, are now to be found on microcomputers.

Murphy's Law remained in a state of suspension. No longer did it make sense to build experimental computers out of chips with a small scale of integration, such as that provided by the 7400 series. People who wanted to do hardware research at the circuit level had no option but to design chips and seek ways to get them made. For a time, this was possible, if not easy.

Unfortunately, there has since been a dramatic increase in the cost of making chips, mainly because of the increased cost of making masks for lithography, a photographic process used in the manufacture of chips. It has, in consequence, again become very difficult to finance the making of research chips, and this is currently a cause for some concern.

The Semiconductor Road Map

The extensive research and development work underlying the above advances has been made possible by a remarkable cooperative effort on the part of the international semiconductor industry.

At one time US monopoly laws would probably have made it illegal for US companies to participate in such an effort. However, about 1980 significant and far-reaching changes took place in the laws. The concept of pre-competitive research was introduced. Companies can now collaborate at the pre-competitive stage and later go on to develop products of their own in the regular competitive manner.

The agent by which the pre-competitive research in the semi-conductor industry is managed is known as the Semiconductor Industry Association (SIA). This has been active as a US organisation since 1992 and it became international in 1998. Membership is open to any organisation that can contribute to the research effort.

Every two years SIA produces a new version of a document known as the International Technological Roadmap for Semiconductors (ITRS), with an update in the intermediate years. The first volume bearing the title ‘Roadmap’ was issued in 1994 but two reports, written in 1992 and distributed in 1993, are regarded as the true beginning of the series.

Successive roadmaps aim at providing the best available industrial consensus on the way that the industry should move forward. They set out in great detail, over a 15-year horizon, the targets that must be achieved if the number of components on a chip is to be doubled every eighteen months, that is, if Moore's law is to be maintained, and if the cost per chip is to fall.
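As a rough illustration of what the doubling target compounds to over such a horizon, the calculation below (a back-of-the-envelope sketch with an arbitrary starting count, not a figure from the Roadmap itself) shows the effect of doubling every eighteen months for fifteen years:

```python
# Back-of-the-envelope compounding of "doubling every eighteen months".

def transistors(n0: float, years: float, doubling_period_years: float = 1.5) -> float:
    return n0 * 2 ** (years / doubling_period_years)

if __name__ == "__main__":
    n0 = 1.0e6  # hypothetical starting transistor count
    print(f"after 15 years: {transistors(n0, 15):,.0f}")  # 2**10 = 1024 times the start
```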

In the case of some items, the way ahead is clear. In others, manufacturing problems are foreseen and solutions to them are known, although not yet fully worked out; these areas are coloured yellow in the tables. Areas for which problems are foreseen, but for which no manufacturable solutions are known, are coloured red. Red areas are referred to as Red Brick Walls.

The targets set out in the Roadmaps have proved realistic as well as challenging, and the progress of the industry as a whole has followed the Roadmaps closely. This is a remarkable achievement and it may be said that the merits of cooperation and competition have been combined in an admirable manner.

It is to be noted that the major strategic decisions affecting the progress of the industry have been taken at the pre-competitive level in relative openness, rather than behind closed doors. These include the progression to larger wafers.

By 1995, I had begun to wonder exactly what would happen when the inevitable point was reached at which it became impossible to make transistors any smaller. My enquiries led me to visit ARPA headquarters in Washington DC, where I was given a copy of the recently produced Roadmap for 1994. This made it plain that serious problems would arise when a feature size of 100 nm was reached, an event projected to happen in 2007, with 70 nm following in 2010. The year for which the coming of 100 nm (or rather 90 nm) was projected was in later Roadmaps moved forward to 2004, and in the event the industry got there a little sooner.

I presented the above information from the 1994 Roadmap, along with such other information as I could obtain, in a lecture to the IEE in London entitled "The CMOS end-point and related topics in Computing", delivered on 8 February 1996.

The idea that I then had was that the end would be a direct consequence of the number of electrons available to represent a one being reduced from thousands to a few hundred. At this point statistical fluctuations would become troublesome, and thereafter the circuits would either fail to work, or if they did work would not be any faster. In fact the physical limitations that are now beginning to make themselves felt do not arise through shortage of electrons, but because the insulating layers on the chip have become so thin that leakage due to quantum mechanical tunnelling has become troublesome.

There are many problems facing the chip manufacturer other than those that arise from fundamental physics, especially problems with lithography. In an update to the 2001 Roadmap published in 2002, it was stated that "the continuation of progress at present rate will be at risk as we approach 2005 when the roadmap projects that progress will stall without research break-throughs in most technical areas". This was the most specific statement about the Red Brick Wall that had so far come from the SIA, and it was a strong one. The 2003 Roadmap reinforces this statement by showing many areas marked red, indicating the existence of problems for which no manufacturable solutions are known.

It is satisfactory to report that, so far, timely solutions have been found to all the problems encountered. The Roadmap is a remarkable document and, for all its frankness about the problems looming above, it radiates immense confidence. Prevailing opinion reflects that confidence and there is a general expectation that, by one means or another, shrinkage will continue, perhaps down to 45 nm or even less.

However, costs will rise steeply and at an increasing rate. It is cost that will ultimately be seen as the reason for calling a halt. The exact point at which an industrial consensus is reached that the escalating costs can no longer be met will depend on the general economic climate as well as on the financial strength of the semiconductor industry itself.

Insulating layers in the most advanced chips are now approaching a thickness equal to that of 5 atoms. Beyond finding better insulating materials, and that cannot take us very far, there is nothing we can do about this. We may also expect to face problems with on-chip wiring as wire cross sections get smaller. These will concern heat dissipation and atom migration. The above problems are very fundamental. If we cannot make wires and insulators, we cannot make a computer, whatever improvements there may be in the CMOS process or improvements in semiconductor materials. It is no good hoping that some new process or material might restart the merry-go-round of the density of transistors doubling every eighteen months.

I said above that there was a general expectation that shrinkage would continue by one means or another to 45 nm or even less. What I had in mind was that at some point further scaling of CMOS as we know it will become impracticable, and the industry will need to look beyond it.

Since 2001 the Roadmap has had a section entitled Emerging Research Devices, covering non-conventional forms of CMOS and the like. Vigorous and opportunist exploitation of these possibilities will undoubtedly take us a useful way further along the road, but the Roadmap rightly distinguishes such progress from the traditional scaling of conventional CMOS that we have been used to.

Advances in Memory Technology

Unconventional CMOS could revolutionise memory technology. Up to now, we have relied on DRAMs for main memory. Unfortunately, these are only increasing in speed marginally as shrinkage continues, whereas processor chips and their associated cache memory continue to double in speed every two years. The result is a growing gap in speed between the processor and the main memory. This is the memory gap and is a current source of anxiety. A breakthrough in memory technology, possibly using some form of unconventional CMOS, could lead to a major advance in overall performance on problems with large memory requirements, that is, problems which fail to fit into the cache.
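The widening of the gap can be sketched with assumed growth rates: processor speed doubling every two years, as stated above, and DRAM improving only a few per cent a year. Both figures are assumptions chosen for illustration, not measured data.

```python
# Illustrative sketch of the widening processor/memory speed gap (assumed rates).

def cpu_speed(years: float) -> float:
    return 2 ** (years / 2)   # assumed: doubling every two years

def dram_speed(years: float) -> float:
    return 1.05 ** years      # assumed: roughly 5% improvement per year

if __name__ == "__main__":
    for years in (2, 6, 10):
        gap = cpu_speed(years) / dram_speed(years)
        print(f"after {years:2d} years: processor x{cpu_speed(years):5.1f}, "
              f"DRAM x{dram_speed(years):4.2f}, gap x{gap:5.1f}")
```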

Perhaps this, rather than attaining marginally higher basic processor speed, will be the ultimate role for non-conventional CMOS.

Shortage of Electrons

Although shortage of electrons has not so far appeared as an obvious limitation, in the long term it may become so. Perhaps this is where the exploitation of non-conventional CMOS will lead us. However, some interesting work has been done, notably by Haroon Ahmed and his team working in the Cavendish Laboratory, on the direct development of structures in which a single electron more or less makes the difference between a zero and a one. However, very little progress has been made towards practical devices that could lead to the construction of a computer. Even with exceptionally good luck, many tens of years must inevitably elapse before a working computer based on single electron effects can be contemplated.

二.中文翻译

计算机发展简史

2009年2月5日在剑桥向IEE发表的特邀演讲

莫里斯·威尔克斯

第一批存储程序的计算机出现于1950年前后。我们在剑桥建造的那一台,即延迟存储自动电子计算机(EDSAC),于1949年夏天首次投入使用。

最初的实验型计算机是由像我这样背景各异的人建造的。我们在电子工程方面都有丰富的经验,并且深信这些经验会对我们大有裨益。后来证明确实如此,尽管我们还要学习很多新东西。其中最重要的是必须正确处理瞬态:在电视机荧幕上只会引起一个无害闪光的东西,在计算机中却可能导致严重的错误。

在计算电路方面,我们的选择多得让人眼花缭乱。举例来说,我们可以像在EDSAC中那样用真空二极管做门电路,也可以采用在两个栅极上都加控制信号的五极管,后一种方式在其他地方被广泛使用。这类选择一直延续下来,于是出现了“逻辑系列”这个术语。在计算机领域工作过的人都会记得TTL、ECL和CMOS。如今,CMOS已经占据了主导地位。

在最初的几年里,IEE(电气工程师学会)仍然由电力工程占据主导地位。为了让无线电工程连同迅速发展的电子学(在IEE被称为“弱电工程”)作为一门独立的学科得到应有的承认,我们不得不进行多次重大的斗争。我记得由于电力工程师们做事的方式与我们不同,我们在组织一次会议时遇到了一些困难。让人有些恼火的是,IEE发表的所有论文都被要求以一段冗长的对先前实践的陈述开头,而在根本不存在先前实践的情况下,这很难做到。

60年代的巩固阶段

到20世纪50年代末60年代初,英雄般的开拓阶段结束了,计算机领域真正蓬勃发展起来。世界上计算机的数量大大增加,而且比最早的那些可靠得多。高级语言的起步和第一批操作系统的诞生都可以追溯到那些年。分时系统的实验开始起步,计算机图形学最终也随之而来。

最重要的是,晶体管开始取代真空管。这一变化对当时的工程师们是一个严峻的挑战。他们必须忘掉自己熟悉的电路,从头再来。只能说,他们出色地经受住了这一挑战,这一转变进行得再顺利不过了。

小规模集成电路和小型机

很快,人们发现可以在同一块硅片上放置不止一个晶体管,集成电路由此诞生。随着时间的推移,集成度达到了足以让一块芯片容纳少量门电路或触发器所需晶体管的水平。由此出现了被称为7400系列的一类芯片。芯片上的门电路和触发器彼此独立,各自有自己的引脚,可以通过片外导线连接起来,构成一台计算机或其他任何东西。

这些芯片使一种新型计算机成为可能,它被称为小型机。它比大型机逊色一些,但仍然功能强大,而且便宜得多。企业或大学不必为整个机构配备一台昂贵的大型机,而是可以为每个主要部门配备一台小型机。

不久,小型机开始普及,功能也越来越强。世界急需计算能力,而工业界以前一直无法以所需的规模和合理的价格提供这种能力,这曾令人非常沮丧。小型机彻底改变了这种局面。

计算成本的下降并非始于小型机;情况一向如此。这就是我在摘要中提到的计算机工业中的“通货膨胀”朝相反方向发展的含义:随着时间的推移,人们花同样的钱得到的东西越来越多,而不是越来越少。

硬件的研究

我所描述的那个年代,对于从事计算机硬件研究的人来说是一段美好的时光。7400系列的用户可以在门电路和触发器级别上工作,而整体集成度又足以提供远高于分立晶体管的可靠性。无论在大学还是在其他地方,研究人员都可以构造出丰富想象力所能设想的任何数字设备。在计算机实验室里,我们建造了剑桥CAP,一台带有精巧的权能(capability)逻辑的完整小型机。

到20世纪70年代中期,7400系列仍然很流行,并被用于剑桥环网(Cambridge Ring),一个先驱性的宽带局域网。环网设计研究报告的发表恰好在以太网公布之前。在这两种系统出现之前,人们大多满足于基于电传打字机的本地网络。

环网需要很高的可靠性,因为脉冲在环中不断循环,必须持续地被放大和再生。正是7400系列芯片提供的高可靠性,给了我们着手剑桥环网项目所需的勇气。

精简指令计算机的诞生

早期的计算机只有简单的指令集。随着时间的推移,商用计算机的设计者们不断增加他们认为可以提高性能的特性。人们很少进行对比测量,总的来说,特性的取舍主要依赖设计者的直觉。

1980年,注定要改变这一切的RISC运动出现在世人面前。该运动始于Patterson和Ditzel发表的一篇题为《精简指令集计算机的理由》的论文。

除了留下一个引人注目的缩略词之外,这个标题并没有传达出RISC运动所带来的关于指令集设计的深刻见解,特别是它如何促进了流水线技术的应用。流水线是指处理器中同一时间可以有几条指令处于不同执行阶段的机制。流水线并不是新事物,但对小型计算机来说却是新的。

RISC运动极大地受益于当时刚刚出现的一种方法:无需真正实现一个计算机设计,就可以估计其预期性能。我指的是利用一台现有的功能强大的计算机来模拟新的设计。通过模拟,RISC的倡导者能够相当有把握地预言,一个好的RISC设计在采用相同电路技术的情况下,性能将超过最好的传统计算机。这一预言最终在实践中得到了证实。

模拟仿真进展迅速,很快被计算机设计者普遍采用。其结果是,计算机设计变得更像一门科学,而不再那么像一门艺术。今天,设计者们希望有满屋子的计算机可供仿真使用,而不只是一台。他们给这样的一屋子计算机起了一个动听的名字:计算机农场。

X86指令集

如今,RISC之前的指令集已很少被人提起,但有一个重要的例外,那就是Intel 8086及其后代,统称为x86。x86已经成为占主导地位的指令集,而当初曾取得相当成功的各种RISC指令集,如今不得不为生存而苦苦挣扎。

x86的这种统治地位,让我这样来自计算机领域研究阵营(包括学术界和工业界)的人感到失望。毫无疑问,x86的存活在很大程度上出于商业上的考虑,但也还有其他原因。不管我们这些搞研究的人多么希望事实并非如此,高级语言至今仍未完全消除机器代码的使用。我们需要不断提醒自己:在能够做到的情况下,与既有用法保持严格的二进制兼容有很多好处。尽管如此,如果Intel生产优秀RISC芯片的那次重大尝试取得了更大的成功,情况也许会有所不同。我指的是i860(不是i960,那是另一回事)。从许多方面来说,i860都是一款卓越的芯片,但它的软件接口不适合用在工作站上。

在x86指令集这场看似轻而易举的胜利背后,还有一个有意思的转折。事实证明,像过去那样直接实现x86指令集,已无法跟上RISC处理器不断提高的速度。于是,设计者们借鉴了RISC的做法:虽然从表面上看不出来,现代的x86处理器芯片内部其实隐藏着一个RISC风格的处理器,使用自己的内部RISC编码。送入的x86代码经过适当的加工后被转换成这种内部编码,再交给RISC处理器去完成关键的执行工作。

在对RISC运动做上述总结时,我主要以Hennessy和Patterson所著计算机设计书籍的最新版本作为依据,特别参见《计算机体系结构》第三版,2003年,第146页、第151至154页、第157至158页。

IA-64指令集

前些时候,Intel和Hewlett-Packard推出了IA-64指令集。这主要是为了满足公认的对64位地址空间的需求。在这一点上,它跟随了MIPS R4000和Alpha设计者的脚步。然而,人们本以为Intel会强调与x86的兼容性;令人疑惑的是,他们恰恰反其道而行之。

此外,IA-64的设计中内置了一种称为“谓词执行”(predication)的特性,这使它与所有其他指令集产生了重大的不兼容。特别是,每条指令需要额外的6位。这打破了指令字长与信息量之间的传统平衡,也显著改变了编译器编写者的任务。

尽管IA-64是个全新的指令集,但Intel发表了一个令人困惑的声明:基于IA-64的芯片将与早期的x86芯片保持兼容。很难弄懂它所指的是什么。

最新的IA-64处理器,即Itanium,其芯片似乎带有专门用于兼容的硬件。尽管如此,x86代码在其上运行得仍然很慢。

由于上述种种复杂因素,实现IA-64所需的芯片比实现较传统指令集的芯片更大,这也就意味着更高的成本。无论如何,这是公认的看法;Gordon Moore最近来剑桥为Betty and Gordon Moore图书馆揭幕时,也把它作为一般原则重申过。不过,我也听说从Intel内部看,情况似乎并非如此。这一点我无法理解,但我很乐意承认,在半导体产业的经济学方面,我完全是个外行。

AMD已经定义了一种与x86兼容性更好的64位指令集,而且他们似乎正在取得进展。这种芯片并不算特别大。很多人认为这才是Intel当初应该做的。(在这次演讲发表之后,Intel已经宣布将销售一系列在本质上与AMD芯片兼容的产品。)

更小晶体管的出现

集成度还在不断提高。这是通过缩小原有的晶体管,使一块芯片上可以容纳更多晶体管来实现的。而且,物理定律站在了制造商一边:晶体管仅仅因为变小就变得更快了。因此,可以同时获得高密度和高速度。

这还带来另一个好处。芯片是在称为晶圆的硅片上制造的。每块晶圆上有大量独立的芯片,它们被一起加工,然后再分割开来。由于缩小使得每块晶圆上可以容纳更多芯片,每块芯片的成本就下降了。

单位成本的下降对这个产业很重要,因为如果最新的芯片不仅更快而且制造成本更低,就没有理由继续提供老产品,至少不会无限期地提供。这样,整个市场就可以只有一种产品。

然而,详细的成本计算表明,当缩小进行到一定程度之后,为了保持这一优势,就必须改用更大的晶圆。晶圆尺寸的增大绝非小事:最初晶圆的直径只有一两英寸,到2000年已经达到了12英寸。起初我不明白,既然缩小本身已经带来那么多其他问题,业界为什么还要通过改用更大的晶圆来给自己增加难度。现在我明白了,降低单位成本对业界的重要性丝毫不亚于增加芯片上晶体管的数量,而这也就使得在晶圆厂上追加投资并承担更大风险成为合理的选择。

集成度用特征尺寸来衡量。对于一种给定的技术,特征尺寸最好定义为用该技术制造的最高密度芯片中导线间距的一半。目前,90纳米芯片的生产仍在逐步扩大。

Murphy定律的失效

1997年3月,在卡文迪什实验室举行的电子发现一百周年纪念庆典上,Gordon Moore应邀发表演讲。正是在他的演讲中,我第一次听到有人把“硅芯片可以既快又便宜”这一事实说成是对Murphy定律(在英国通常称为Sod定律)的违背。Moore说,按照其他领域的经验,你会以为必须在速度和成本之间做出取舍或折中;而事实上,就硅芯片而言,两者可以兼得。

在网上可以查到的一本参考书中,Murphy被认定为1949年在美国空军从事人体加速度试验的一名工程师。然而,早在我们的学生时代,我们就对这条定律十分熟悉,当时我们给它起的名字比上面提到的两个都要平淡得多,叫做“普遍不如意定律”(Law of General Cussedness)。我们甚至有一道以该定律为题的模拟考题。这类题目的第一部分要求给出某条定律或原理的定义,第二部分则是一道要借助它来解答的问题。我们的题目是:一、给出普遍不如意定律的定义;

二、一位骑车人出发进行环形骑行,试推导出一个给出任一时刻风向的方程。

单片机

每一次缩小都会使芯片的数量减少,芯片之间的连线也随之减少。这使整体速度进一步提高,因为信号在芯片之间的传输需要很长的时间。

最终,缩小进行到了这样的程度:除高速缓存之外,整个处理器都可以放在一块芯片上。这使得人们能够造出性能超过当时最快小型机的工作站,其结果是把小型机彻底扼杀了。正如我们都知道的,这对计算机工业和业内人士产生了严重的影响。

从那时起,高密度CMOS硅芯片便独占鳌头。缩小一直进行下去,直到数百万个晶体管可以放在一块芯片上,速度也成比例地提高。

为了获得额外的速度,处理器设计者开始试验旨在提速的新体系结构特性。一项非常成功的试验涉及预测程序分支走向的方法。它的成功程度令我吃惊。它使程序执行速度显著加快,随后又出现了其他形式的预测。

同样令人惊讶的是,人们发现可以把许多高级特性放进单芯片计算机中。例如,当年为IBM Model 91(System 360系列顶端的巨型计算机)开发的特性,如今已经出现在微型计算机上。

Murphy定律仍然处于失效状态。用小规模集成芯片(例如7400系列)来搭建实验计算机已不再有意义。想在电路级上做硬件研究的人别无选择,只能自己设计芯片,并设法找到把它们制造出来的途径。在一段时间里,这虽然不容易,但还是可能的。

不幸的是,此后制造芯片的费用急剧上升,主要原因是制造光刻掩模(光刻是芯片制造中使用的一种照相工艺)的成本增加了。结果,为研究用芯片的制造筹措资金再次变得十分困难,这是当前令人担忧的一个问题。

半导体前景规划

支撑上述进展的大量研究与开发工作,得益于国际半导体业界一项了不起的合作努力。

曾几何时,美国的反垄断法很可能会使美国公司参与这种合作成为非法。然而,大约在1980年,法律发生了重大而深远的变化,引入了“预竞争研究”的概念。现在,各公司可以在预竞争阶段开展合作,之后再按常规的竞争方式各自开发自己的产品。

在半导体工业中,管理预竞争研究的机构是半导体工业协会(SIA)。它从1992年起作为美国国内组织开展这项活动,1998年成为国际性组织。任何能够为研究工作做出贡献的组织都可以加入。

SIA每两年发布一版名为《国际半导体技术路线图》(ITRS)的文件,并在间隔的年份进行更新。第一卷以“路线图”为题的文件发布于1994年,但写于1992年、于1993年分发的两份报告被认为是这一系列的真正开端。

历次路线图旨在就产业应当如何向前发展提供目前所能达成的最佳产业共识。它们在15年的时间跨度内,详细列出了要使芯片上的元件数量每18个月翻一番(即保持Moore定律)、同时使每块芯片的成本不断下降所必须实现的各项目标。

在某些项目上,前进的道路是清楚的。在另一些项目上,可以预见到制造方面的问题,而且解决办法已经知道,只是尚未完全成熟;这样的领域在表格中标为黄色。那些可以预见到问题、但尚无可制造解决方案的领域则标为红色。红色区域被称为“红砖墙”。

路线图确立的目标既现实又富有挑战性,整个产业的进展一直紧跟着路线图。这是一项了不起的成就,可以说合作与竞争的优点在这里得到了极好的结合。

值得注意的是,影响产业进程的重大战略决策是在预竞争层面上相对公开地做出的,而不是关起门来做出的,其中就包括向更大晶圆的过渡。

到1995年,我开始琢磨,当不可避免地到达晶体管无法再缩小的那一点时,究竟会发生什么。带着这个问题,我访问了位于华盛顿特区的ARPA(美国高级研究计划局)总部,在那里我得到了一份刚刚完成的1994年路线图。它清楚地表明,当特征尺寸达到100纳米时将出现严重的问题,而这预计发生在2007年,70纳米则预计在2010年到来。在后来的路线图中,100纳米(更准确地说是90纳米)到来的年份被提前到了2004年,而实际上业界到达这一节点还要稍早一些。

1996年2月8日,我在伦敦向IEE做了题为“CMOS的终点及计算领域的相关问题”的演讲,其中介绍了来自1994年路线图的上述信息,以及我所能获得的其他资料。

我当时的想法是,终点将直接源于表示“1”的可用电子数从几千个减少到几百个。到那时,统计涨落将成为麻烦,此后电路要么无法工作,要么即使能工作也不会更快。而事实上,目前开始显现的物理限制并非来自电子的短缺,而是因为芯片上的绝缘层已经变得如此之薄,以致量子力学隧道效应引起的漏电成了麻烦。

除了源自基础物理的问题之外,芯片制造商还面临许多其他问题,尤其是光刻方面的问题。在2002年发布的对2001年路线图的更新中写道:“按目前速度持续进步的势头将面临风险,当我们临近2005年时,如果多数技术领域没有取得研究突破,路线图预计进步将会停滞。”这是SIA迄今为止对“红砖墙”做出的最明确的表述,而且措辞强硬。2003年的路线图进一步强化了这一说法,许多领域被标为红色,表示在这些领域还不存在可制造的解决方案。

令人欣慰的是,到目前为止,所遇到的各种问题都及时找到了解决办法。路线图是一份非凡的文件:尽管它对迫在眉睫的问题直言不讳,却又洋溢着巨大的信心。主流观点也反映了这种信心,人们普遍预期,通过这样或那样的途径,缩小将会继续下去,也许会到45纳米甚至更小。

然而,成本将急剧上升,而且上升速度会越来越快。最终被视为叫停理由的将是成本。业界究竟在哪一点上达成“不断攀升的成本已无法承受”的共识,将取决于总体经济环境以及半导体产业自身的财力。

最先进芯片中的绝缘层厚度现在已接近5个原子。除了寻找更好的绝缘材料(而这也走不了多远)之外,我们对此无能为力。随着导线截面越来越小,我们还将面临片上布线的问题,涉及散热和原子迁移。上述问题都是非常根本性的:如果做不出导线和绝缘层,我们就造不出计算机,不论CMOS工艺或半导体材料取得怎样的改进。指望某种新工艺或新材料能让晶体管密度每18个月翻一番的好戏重新开场,是不现实的。

我在上文说过,人们普遍预期缩小会通过这样或那样的途径继续到45纳米甚至更小。我的意思是,到某个时刻,我们所熟知的CMOS的进一步缩小将变得不可行,业界将需要另寻出路。

自2001年以来,路线图中有了一个题为“新兴研究器件”的章节,讨论非常规形式的CMOS及类似技术。对这些可能性进行积极而善于把握时机的开发,无疑会使我们在这条路上再前进一段有益的距离,但路线图正确地把这种进步与我们习以为常的传统CMOS缩小区分开来。

内存技术的进步

非常规CMOS有可能给存储器技术带来革命。迄今为止,我们一直依靠DRAM作为主存。不幸的是,随着缩小的继续,DRAM的速度只有微小的提高,而处理器芯片及其相关的高速缓存的速度仍然每两年翻一番。其结果是处理器与主存之间的速度差距越来越大。这就是“存储器鸿沟”,是当前令人担忧的一个根源。存储技术上的突破(也许会采用某种形式的非常规CMOS)可能使那些存储需求很大、即高速缓存装不下的问题在整体性能上取得重大进展。

也许这一点,而不是使处理器基本速度略有提高,才是非常规CMOS的最终用武之地。

电子的不足

尽管到目前为止,电子的短缺还没有表现为明显的限制,但从长远看它可能会成为限制。也许非常规CMOS的开发最终会把我们引向这里。不过,在直接研制“单个电子的有无大致决定0和1之差”的结构方面,已经有一些有意义的工作,特别是卡文迪什实验室Haroon Ahmed及其团队的工作。然而,在能够用于构造计算机的实用器件方面,进展还很小。即使运气特别好,也必然要再过许多个十年,基于单电子效应的实用计算机才有可能被提上日程。
