Towards Nanocomputer Architecture

Paul Beckett, Andrew Jennings

School of Electrical & Computer Systems Engineering

RMIT University

PO Box 2476V Melbourne, Australia

pbeckett@rmit.edu.au, ajennings@rmit.edu.au

Abstract.

At the nanometer scale, the focus of micro-architecture will move from processing to communication. Most general computer architectures to date have been based on a “stored program” paradigm that differentiates between memory and processing and relies on communication over busses and other (relatively) long distance mechanisms. Nanometer-scale electronics – nano-electronics - promises to fundamentally change the ground-rules. Processing will be cheap and plentiful, interconnection expensive but pervasive. This will tend to move computer architecture in the direction of locally-connected, reconfigurable hardware meshes that merge processing and memory. If the overheads associated with reconfigurability can be reduced or even eliminated, architectures based on non-volatile, reconfigurable, fine-grained meshes with rich, local interconnect offer a better match to the expected characteristics of future nanoelectronic devices.

Keywords: computer architecture, nanocomputer architecture, micro-architecture, nanoelectronic technology, device scaling, array architecture, future trends, QCA, SIMD, MIMD.

1 Introduction

Computer designers have traditionally had to trade the performance of a machine for the area occupied by its component switches. However, when the first practical "nano" scale devices - those with dimensions between one and ten nanometers (10 to 100 atomic diameters) - start to emerge from research laboratories within two or three years, they will mandate a new approach to computer design. Montemerlo et al (1996) have described the greatest challenge in nanoelectronics as the development of logic designs and computer architectures necessary to link small, sensitive devices together to perform useful calculations efficiently. Ultimately, the objective is to construct a useful "Avogadro computer" (Durbeck 2001) - one with an architecture that makes efficient use of in the order of 10^23 switches to perform computations. In the more immediate term, it is forecast that by 2012 a CMOS (or possibly SiGe) chip will comprise almost 10^10 transistors and will operate at speeds in the order of 10-15 GHz (IST 2000).

Copyright © 2002, Australian Computer Society, Inc. This paper appeared at the Seventh Asia-Pacific Computer Systems Architecture Conference (ACSAC'2002), Melbourne, Australia. Conferences in Research and Practice in Information Technology, Vol. 6. Feipei Lai and John Morris, Eds. Reproduction for academic, not-for-profit purposes permitted provided this text is included.

The design challenges will be formidable. For example, amongst a long list of major technical difficulties, the SIA roadmap (which refers particularly to CMOS) identifies the following major issues (SIA 1999):

- power management at all levels;
- new architectures to overcome bottlenecks at interconnects;
- ultimate short channel limitations (e.g. at 30nm) requiring more complex gate structures such as SOI or dual-gate transistors;
- the spiralling costs of both lithography and fabrication.

In addition to the fundamental problems caused by high power density (Borkar 1999), physical problems such as leakage, threshold voltage control, tunnelling, electro-migration, high interconnect resistance, crosstalk and the need for robust and flexible error management become significant as device features shrink (Montemerlo et al 1996). These problems, in turn, affect the way that devices may be connected together and will ensure that the performance of future architectures will come to be dominated by interconnection constraints rather than by the performance of the logic (Ghosh et al 1999, Timp, Howard and Swami 1999).

It is likely, therefore, that the physics of nanoelectronic devices will conspire to eliminate the classical stored-program (von Neumann) architecture as a contender at nanoelectronic device densities. This organisation, which has driven the development of computer architecture for almost 50 years, differentiates between the functions of memory and logic processing and tends to be built around system-wide constructs such as busses and global control signals. It is hard to imagine how any form of globally connected stored-program architecture could be built in a technology where communication even between adjacent switches is difficult.

Nevertheless, if the progress implied by Moore's Law is to continue (Borkar 2000), nanocomputer architectures must eventually supersede conventional, general-purpose microprocessor machines. They will therefore need to perform the same functions as their predecessors as well as sharing many of their overall characteristics. They will (ideally) need to be small, fast, cheap and robust, work at room temperature and run code from a standard compiler, including legacy code. This legacy requirement is often overlooked. It is likely that computing functions will continue to be described in terms of software with its inherently linear control flow. General purpose computing is dominated by control dependencies and tends to rely on dynamic data structures (Mangione-Smith and Hutchings 1997). How the temporal "control-flow" and dynamic data allocation of such a software description might be mapped efficiently onto the hardware circuits of a nanocomputer is not yet clear. Margolus (1998) offered one vision when he forecast that “…our most powerful large-scale general purpose computers will be built out of macroscopic crystalline arrays of identical … elements. These will be the distant descendants of today’s SIMD and FPGA computing devices: … architectural ideas that are used today in physical hardware will reappear as data structures within this new digital medium”.

This paper will discuss the major issues that will influence computer architecture in the nanoelectronic domain. The paper is organised as follows: Section 2 covers the problems of device scaling and how the characteristics of nanoelectronic devices will constrain future architectural development. In Section 3 we look at a small selection of novel architectures that have been developed to deal with these constraints. Finally, we speculate on some paths forward for nanocomputers that can accommodate the legacy code requirements.

2 Scaling Limits of CMOS

CMOS has been the work-horse technology in commercial VLSI systems for about 10 years, after superseding nMOS in the early 1990s. During that time, transistor channel lengths have shrunk from microns down to today's typical dimensions of 150 to 180nm (Gelsinger 2001) and are certain to scale further to 70-100nm in the near future. Such devices have already been built on research lines – for example by Asai and Wada (1997), Taur et al (1997) and Tang et al (2001) – and these experiments have demonstrated that mass-production is possible.

In order to contain an escalating power-density and at the same time maintain adequate reliability margins, traditional CMOS scaling has relied on the simultaneous reduction of device dimensions, isolation, interconnect dimensions, and supply voltages (Davari 1999). However, FET scaling will be ultimately limited by high fields in the gate oxide and the channels, short channel effects that reduce device thresholds and increased sub-threshold leakage currents (McFarland 1997). As a result, Davari has suggested that gains in FET device performance will eventually stall as the minimum effective channel length approaches 30nm at a supply voltage of 1.0V and a gate oxide thickness of about 1.5nm. Beyond this point, any further performance growth will need to rely on increased functional integration with an emphasis on circuit and architectural innovations.

2.1 Defect and Reliability Limits

The probability of failure for transistors in current CMOS manufacturing processes ranges from 10^-9 to 10^-7 (Forshaw, Nikolic and Sadek 2001) and it appears certain that currently available processes will not be suitable for providing defect-free device structures at sub-100nm scales (Parihar, Singh and Poole 1998). Thus any architecture built from large numbers of nanoscale components will necessarily contain a significant number of defects. An understanding of the role of these defects and how they affect yield will be important to future architectures. Novel low-temperature, 3-D integrated manufacturing technologies such as that proposed by Ohmi et al (2001) might eventually result in reliable, defect-free, high-performance gigahertz-rate systems. However, given the investment in current silicon processing lines, there is no reason to expect that these will be available soon, or that defect rates on typical process lines will improve by more than an order of magnitude moving into the nanometre region. Thus, defects are guaranteed to remain a major technical issue at the architectural level.

A closely related problem is the longer term reliability of nanoelectronic technology. The reliability curve developed for ULSI logic by Shibayama et al (1997) (Figure 1) indicates that at gate densities in the order of 10^7 almost half of all systems can be expected to have failed within 10 years (based on the assumption that a single gate failure results in the failure of the entire system). Extrapolating these curves for transistor densities in the order of 10^9 (the IST forecast for 2006) would imply a 90% failure rate within about 1.3 years (using the IST average of 5 transistors per gate). To maintain the same reliability as a 1 million gate chip would require an error rate in the order of 10^-16/hour-gate, four orders of magnitude better than current technology. How these curves might eventually be extended to a system with 10^23 devices is unclear. What is clear, however, is that nanocomputer architectures will certainly need to be dynamically defect tolerant - with an ability to find defects as they develop and to reconfigure around them (Koren and Koren 1998).
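
The 1.3-year figure can be reproduced with a simple constant-failure-rate model. The sketch below is illustrative only: it assumes independent gate failures at 10^-12 per gate-hour - a value inferred from the "four orders of magnitude" comparison above rather than taken from Shibayama et al - and applies the same single-gate-kills-the-system assumption.

```python
import math

# Illustrative reliability sketch: independent gate failures at a constant rate,
# and a single gate failure is assumed to kill the whole system, so
# R_system(t) = exp(-N * lam * t).

def time_to_failure_fraction(n_gates, lam_per_hour, fraction_failed):
    """Hours until the given fraction of systems is expected to have failed."""
    return -math.log(1.0 - fraction_failed) / (n_gates * lam_per_hour)

lam = 1e-12                # assumed per-gate failure rate [1/hour]
n_gates = 1e9 / 5          # 10^9 transistors at ~5 transistors per gate
hours = time_to_failure_fraction(n_gates, lam, 0.90)
print(f"90% of systems failed after ~{hours / 8766:.1f} years")   # ~1.3 years
```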

Figure 1: ULSI reliability curves - from Shibayama et al (1997). (The original plot shows system failure against operating time in years for several gate densities.)

As a result, testing will represent a major issue in nanoelectronic systems. Currently, testing can account for up to 60% of the total costs of production for an ASIC - even for 250nm CMOS (SIA 1999), and this figure will become worse at higher densities. Run-time self-test regimes will therefore be increasingly important in the nanocomputer domain.

2.2 Wiring Delay

At a basic level, the wiring delay problem is simple to articulate: as interconnection width and thickness decrease, resistance per unit length increases, while as interconnections become denser (and oxide layers thinner), capacitance also tends to increase (Borkar 1999). For example, if the RC delay of a 1mm metal line in 0.5μm technology is 15ps then at 100nm (in the same materials) the delay would be 340ps (Sylvester and Keutzer 2001).
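
The trend can be illustrated with a first-order distributed-RC model. In the sketch below the resistivity, aspect ratio and capacitance per unit length are assumed values chosen for illustration (they are not the parameters used by Sylvester and Keutzer), but they land in the same ballpark as the 15ps and 340ps figures quoted above.

```python
# First-order distributed-RC sketch of a fixed-length wire as its cross-section
# shrinks. RHO and C_PER_M are assumed, illustrative constants.
RHO = 2.7e-8           # ohm*m, assumed aluminium-like resistivity
C_PER_M = 0.35e-9      # F/m, assumed total wire capacitance per metre

def rc_delay(length_m, width_m, thickness_m):
    """Elmore delay of a distributed RC line, ~0.38*R*C."""
    r_total = RHO * length_m / (width_m * thickness_m)
    c_total = C_PER_M * length_m
    return 0.38 * r_total * c_total

for width in (0.5e-6, 0.1e-6):                   # 0.5um vs 0.1um wires
    delay = rc_delay(1e-3, width, width)         # 1mm long, square cross-section
    print(f"{width * 1e6:.1f} um wide: ~{delay * 1e12:.0f} ps")
```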

Ho, Mai and Horowitz (2001) have performed a detailed analysis of the performance of wires in scaled technologies and have identified two distinct characteristics. For short connections (those that tend to dominate current chip wiring) the ratio of local interconnection delay to gate delay stays very close to unity - i.e. interconnection delay closely tracks gate delay with scaling. For metal interconnections, this will be true down to approximately 10nm at which point the simple resistance relationship breaks down and the resistance increases due to quantum effects (Hadley and Mooij 2000).

On the other hand, global wiring tends to increase in length with increasing levels of integration, implying that the interconnection delay of these wires will increase relative to the basic gate delay. Sylvester and Keutzer (2001) conclude that the scaling of global wires will not be sustainable beyond about 180nm due to the rising RC delays of scaled-dimension conductors. However, as interconnect delay will be tolerably small in blocks of 50-100K gates, they argue for hierarchical implementation methodologies based on macro-blocks of this size.

In addition, at future gigahertz operating speeds, signal "time-of-flight" and attenuation will become significant limiting factors. As both of these depend on the dielectric constant of the propagation material, solving them will require significant changes to processing technology. For example, Ohmi et al (2001) have developed processes that use a gas-isolated, high-k gate dielectric, metal-gate, metal-substrate SOI scheme with thermally conducting through-holes to reduce temperature variations and increase interconnection reliability. These complex, aggressive fabrication schemes contrast markedly with the intrinsic self-assembly mechanisms proposed by Merkle (1996) and others.

2.3 Emerging Devices

The problems associated with the scaling of CMOS devices have led to a search for alternative transistor and circuit configurations. Proposals for silicon-based technologies include silicon-on-insulator (Taur et al 1997), single electron devices (Nakajima et al 1997), resonant-tunnelling devices (RTDs) (Capasso et al 1989, Frazier et al 1993), double layer tunnelling transistors (Geppert 2000) and Schottky Barrier MOSFETS (Tucker 1997). Of these, RTDs appear to hold the most promise as a short to medium-term solution, although most of the implementations in the literature to date are based on relatively complex heterostructure technologies - predominantly based on GaAs.

RTDs are inherently fast and have been known and used for more than a decade. Their negative differential resistance (NDR) characteristics directly support multi-value logic styles (Waho, Chen and Yamamoto 1996) that can result in significantly simpler circuit designs. RTD circuits are typically based on one or more tunnelling diodes integrated with a conventional (often hetero-structure) FET (Mohan et al 1993). The main problem with this approach has been the need to match the current of the FET and the peak current of the diode(s) although a recent configuration avoids this problem by surrounding a vertical resonant diode structure with a Schottky control gate (Stock et al 2001).

Ultimately, electronic devices may simply cease to be an option at the scale of 1 or 2 nm. A number of molecular based technologies have been suggested as potential alternatives (Goldhaber-Gordon et al 1997, Reed et al 1999) as well as some computing architectures that might exploit them (Ellenbogen 1997, Ellenbogen and Love 1999). There have even been suggestions for nano-mechanical devices - somewhat reminiscent of Shannon's original (1949) relay logic (Drexler 1992, Merkle and Drexler 1996) as well as computational DNA systems (Young and Sheu 1997).

Finally, semiconductor behaviour has recently been demonstrated within very narrow carbon nanotube (fullerene) based structures (Wilson et al 2000). Nano-tube technology may eventually support the construction of non-volatile RAM and logic functions at integration levels approaching 10^12 elements/cm^2, operating frequencies in excess of 100GHz (Rueckes et al 2000) and, as electron flow can be ballistic in short nanotube wires, supporting current densities in the order of 10^9 A/cm^2 - well above the figure that would vaporize traditional interconnect metals. Simple logical operations with nanotubes have just been demonstrated (Liu et al 2001). Rueckes et al (2000) have built a bistable bit and designs for electromechanical logic and memory have been proposed (Ami and Joachim 2001). Further, high band-gap materials such as boron nitride (Chen et al 1999) may also offer interesting nanotube building blocks capable of working at significantly higher temperatures than carbon.

Although some of these emerging device technologies have been demonstrated in the laboratory, it is not at all clear which of them have the potential for robust, high-speed operation at room temperature - at least in the near future.

3 Nanocomputer Architecture Candidates

To date, architecture research has responded to the opportunities and challenges offered by device scaling in two ways. The first approach simply increases existing machine resources - more or larger caches; more on-chip processors, often including local DRAM (Kozyrakis and Patterson 1998), direct multi-threading support (i.e. exploiting parallelism between concurrently running tasks rather than within a single algorithm) and other similar techniques. While being effective for some applications, these can quickly run into all of the physical limitations outlined previously, especially the wire-length problems that can result in unacceptable memory and I/O latency, although the 50 to 100K-gate hierarchical design blocks suggested by Sylvester and Keutzer (2001) are certainly large enough to contain a small RISC processor or other quite sophisticated processing elements. Durbeck and Macias (2000) put it this way: "... there is no clear way for CPU/memory architectures to tap into the extremely high switch counts … available with atomic-scale manufacture, because there is no clear way to massively scale up the (CPU) architecture. … there is no such thing as "more" Pentium®. There is such a thing as more Pentiums®, however."

The second approach uses modular and hierarchical architectures to improve the performance of traditional single-thread architectures (Vajapeyam and Valero 2001). Table 1, reproduced from Fountain et al (1998), compares the three main classes of parallel architectures in terms of characteristics applicable to the nanocomputer domain. They conclude that highly regular, locally connected, peripherally interfaced, data-parallel architectures offer a good match to the characteristics of nanoelectronic devices. However, it is worth noting that data-parallel architectures represent only a small portion of the interesting problems in computer architecture and are a poor match for most general purpose computing problems.

Future computer architectures may well be market application driven (Ronen et al 2001), with the characteristics of each market segment resulting in its own optimised parallel microarchitecture. Ronen et al, like Durbeck and Macias, clearly rule out the possibility of today's high-end microprocessor being tomorrow's low-power/low-cost solution.

Parameter              | Data | Function | Neural
Degree of parallelism  | High | Low      | High
Processor complexity   | Low  | High     | Medium
Interconnect density   | Low  | High     | High
Amount of interfacing  | Low  | High     | Low
Extensibility          | High | Low      | Low

Table 1: A Comparison of Three Parallel Architecture Classes (Fountain et al 1998)

3.1 Quantum Cellular Array Architectures

Cellular Arrays (CAs) have been known and studied for almost 40 years (von Neumann 1966). Their architecture is based on the replication of identical processing elements with nearest neighbour connection. The fundamental idea behind the operation of Quantum Cellular Automata (QCA) devices is that the energy state of a suitable assembly of electrons, initially in a specific ground state, will alter as a result of changed boundary conditions (Maccuci et al 1999).

Lent et al (1993) and more recently Porod (1998) have proposed specific realizations of this idea using two-electron cells composed of four quantum dots in which the polarization of one cell induces a polarization in a neighbouring cell through Coulomb interaction in a very non-linear fashion. If left alone, the two electrons will seek the configuration corresponding to the ground state of the cell by tunnelling ("hopping") between the dots. Lent et al have demonstrated that AND gates, OR gates, and inverters can be constructed and interconnected. Fountain et al (1998) comment that circuits built from QCA elements would form extremely coherent computing systems, although some concerns remain about their theoretical validity, and the optimum implementation of memory.

As the coulomb interactions in QCA are based on a small number of electrons (as low as one) they tend to be swamped by thermal noise unless they are operated at very low temperatures (in the milliKelvin range). This will very likely prevent them having a serious impact on the mainstream computing domain. An interesting variation on the QCA - based on magnetism - is described by Cowburn and Welland (2000). In the Magnetic QCA (MQCA), networks of interacting submicron magnetic dots are used to perform logic operations and propagate information. As MQCA energies are in the order of 1eV they will work well at room temperature. Cowburn and Welland suggest that MQCA technology may eventually offer active device densities in the order of 2.5 x 10^11/cm^2 with a power-delay product that is 10^4 times less than current CMOS.
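
The temperature argument can be made concrete by comparing the switching energy of a cell with the thermal energy kT. In the sketch below the few-meV value assumed for an electrostatic QCA cell is illustrative only; the ~1eV value for MQCA follows Cowburn and Welland.

```python
# Compare switching energy with thermal energy kT to see which technologies can
# tolerate room temperature. The 1 meV electrostatic-QCA figure is assumed.
K_B_EV = 8.617e-5                      # Boltzmann constant [eV/K]

def thermal_margin(e_switch_ev, temp_k):
    """Ratio of switching energy to kT; >> 1 is needed for reliable operation."""
    return e_switch_ev / (K_B_EV * temp_k)

print(thermal_margin(1e-3, 300))       # ~0.04: few-meV QCA cell swamped at 300K
print(thermal_margin(1e-3, 0.05))      # ~230:  the same cell is workable at 50mK
print(thermal_margin(1.0, 300))        # ~39:   ~1eV MQCA is workable at 300K
```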

3.2 Synthetic Neural Systems

Synthetic Neural Network (SNN) systems, also called artificial neural networks, connectionist networks, or parallel distributed processing networks, are concerned with the synthesis, design, fabrication, training and analysis of neuromorphic (i.e. brain-inspired) electronic systems (Ferry, Grondin and Akers 1989). These systems achieve high performance via the adaptive interconnection of simple switching elements that process information in parallel. Arrays of simple neural processing elements show features such as association, fault tolerance and self-organisation. However, while the complexity of neural processing is low, the interconnection density is high (see Table 1) so there is still a question as to their applicability in the nanocomputer domain.

So far, most of the work in neural networks relates to static networks - classifier systems or associative networks (Glösekötter, Pacha and Goser 1998) that learn to map data by modifying their internal configuration. For example, in addition to employing QCA cells to encode binary information, Porod (1998) has proposed an analogue Quantum-Dot Cellular Neural Network (Q-CNN) in which each cell is described by appropriate state variables, and the dynamics of the whole array is given by the dynamics governing each cell plus the influence exerted by its neighbours.

The alternative approaches - time dependent, biologically inspired networks that process data using a dynamical systems approach - exhibit more interesting emergent behaviour. They require vast numbers of devices to implement but these are likely to be available in the nanocomputing domain. However, as in all CNN systems, each neural node has to be connected to at least 10 to 100 synapses for useful computation, so it is questionable whether nanoelectronic devices, with their low drive capability, will be suitable building blocks for these systems.

3.3 Locally Connected Machines

A common example of regular, locally connected, data-parallel architectures is the Single Instruction Multiple Data machine. SIMD machines exploit the inherent data parallelism in many algorithms - especially those targeting signal and image processing (Gayles et al 2000). Fountain et al (1998) identify the characteristics that may make the SIMD topology suited to nanocomputer architecture as:

- a regular and repetitive structure;
- local connections between all system elements;
- all external connections made at the array edge;
- the existence of feasible strategies for fault tolerance.

However, SIMD architectures still suffer from two major problems - global instruction issue as well as global control and clock signals. Global clocking is required by SIMD machines not only to drive each individual (synchronous) element but also to manage inter-element synchronisation.

It is clear from the analysis of Fountain et al (1998) that the interconnection costs of SIMD in the nano-domain are very high - with the majority of the die area in their experiments being taken up by control signal distribution. Numerous asynchronous design techniques (e.g. Hauck 1995) have been proposed to overcome the need for a global clock in SIMD machines. While it is still unclear whether, in practice, these asynchronous techniques actually offer improved performance, they are at least as good as the conventional synchronous approach and may offer the only means to overcome global communication constraints in the nanocomputer domain.

The same considerations appear to constrain other multi-processor architectures such as MIMD. Crawley (1997) has performed a series of experiments on various MIMD architectures and concluded that inter-processor communications will be limited by the availability of wider metal tracks on upper layers (called "fat" wiring by Crawley). The tradeoff here is between track resistance (and therefore delay) and interconnection density. Crawley also notes that more complex computational structures such as carry look-ahead begin to lose their advantages over simpler and smaller structures once wiring delays are factored in.
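
Crawley's observation about carry look-ahead can be illustrated with a toy delay model in which the look-ahead network is charged a wire delay proportional to the adder width. All of the constants below are assumptions chosen for illustration, not values from Crawley's experiments; the point is only that the log-depth advantage disappears once long-wire delay dominates.

```python
import math

# Toy adder-delay model: ripple carry uses only nearest-neighbour wires, while
# carry look-ahead saves logic depth but drives a wire spanning the whole adder.
# T_GATE and the per-bit wire delays are assumed values for illustration.
T_GATE = 5e-12                                   # assumed gate delay [s]

def ripple(n_bits):
    return n_bits * T_GATE                       # n gate delays, local wires only

def lookahead(n_bits, t_wire_per_bit):
    return 2 * math.log2(n_bits) * T_GATE + n_bits * t_wire_per_bit

for t_wire in (0.0, 10e-12):                     # ideal wires vs. scaled RC wires
    rows = [(n, round(ripple(n) * 1e12), round(lookahead(n, t_wire) * 1e12))
            for n in (16, 64)]
    print(f"wire cost {t_wire * 1e12:.0f} ps/bit (bits, ripple ps, CLA ps):", rows)
```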

3.3.1 Propagated Instruction Processor

The Propagated Instruction Processor was proposed by Fountain (1997) as a way of avoiding the interconnection problem in SIMD arising from its global instruction flow characteristics. In the PIP architecture, instructions are pipelined in a horizontal direction such that the single-bit functional units can operate simultaneously on multiple algorithms. The technique shares many of the characteristics of SIMD, pipelined processors and systolic arrays. One of the primary advantages of the architecture is its completely local interconnection scheme that results in high performance on selected applications. However, the architecture is still basically SIMD and thus will work best with algorithms from which significant data parallelism can be extracted - e.g. Fountain's examples of point-wise 1-bit AND of two images, an 8-bit local median filter, 32-bit point-wise floating point division and an 8-bit global matrix multiplication (Fountain 1997). In addition, the fault tolerance of the PIP may ultimately depend on an ability to bypass faulty processors without upsetting the timing relationship between propagating instructions - something that has not been reported to date.

3.3.2 Merged Processor/Memory Systems - IRAM and RAW

The structure and performance of memory chips are becoming a liability to computer architecture. There are two basic problems: firstly the so-called "memory wall" (or gap) resulting from a divergence in the relative speed of processor and DRAM that is growing at 50% per year (Flynn 1999). Secondly, while DRAM size is increasing by 60% per year, its fundamental organisation – a single DRAM chip with a single access port - is becoming increasingly difficult to use effectively. This observation has led to the development of a number of merged memory/processor architectures. Two notable examples of this approach are the Intelligent RAM (IRAM) system (Patterson et al 1997), and the Reconfigurable Architecture Workstation (RAW) (Waingold et al 1997). The IRAM system merges processing and memory onto a single chip. The objective is to lower memory latency, increase memory bandwidth, and at the same time improve energy efficiency. The IRAM scheme revives the vector architecture originally found in supercomputers and implements it by merging at least 16MB of DRAM, a 64-bit two-way superscalar processor core with caches, variable width vector units, and a high-performance memory switch onto a single chip.

The RAW microprocessor chip comprises a set of replicated tiles, each tile containing a simple RISC like processor, a small amount of configurable logic, and a portion of memory for instructions and data. Each tile has an associated programmable switch which connects the tiles in a wide-channel point-to-point interconnect. The compiler statically schedules multiple streams of computations, with one program counter per tile. The interconnect provides register-to-register communication with very low latency and can also be statically scheduled. The compiler is thus able to schedule instruction-level parallelism across the tiles and exploit the large number of registers and memory ports.

3.4 Reconfigurable and Defect Tolerant Hardware

Reconfigurable hardware can be used in a number of ways: to provide reconfigurable functional units within a host processor; as a reconfigurable coprocessor unit; as an attached reconfigurable processor in a multiprocessor system; or as a loosely coupled external standalone processing unit (Compton and Hauck 2000). One of the primary variations between these architectures is the degree of coupling (if any) with a host microprocessor. For example, the OneChip architecture (Carrillo and Chow 2001) integrates a Reconfigurable Functional Unit (RFU) into the pipeline of a superscalar Reduced Instruction Set Computer (RISC). The reconfigurable logic appears as a set of Programmable Function Units that operate in parallel with the standard processor. The Berkeley hybrid MIPS architecture, Garp (Hauser and Wawrzynek 1997), includes a reconfigurable coprocessor that shares a single memory hierarchy with the standard processor, while the Chimaera system (Hauck et al 1997) integrates reconfigurable logic into the host processor itself with direct access to the host's register file.

3.4.1 Reconfigurable Logic and FPGAs

When FPGAs were first introduced they were primarily considered to be just another form of (mask programmed) gate array - albeit without the large start-up costs and lead times. Since then FPGAs have moved beyond the simple implementation of digital (glue) logic and into general-purpose computation. Although offering flexibility and the ability to optimise an architecture for a particular application, programmable logic tends to be inefficient at implementing certain types of operations, such as loop and branch control (Hartenstein 2001). In addition, there is a perception that fine-grained architectures (those with path widths of one or two bits) exhibit high routing overheads and poor routability (Hartenstein 1997). It is probably true that field-programmable gate arrays (FPGAs) will always be slower and less dense than the equivalent function in full custom, standard cell or mask programmed gate arrays as the configuration nodes take up significant space as well as adding extra capacitance and resistance (and thus delay) to the signal lines. The challenge will be to find new organisations that optimise the balance between reconfigurability and performance.

FPGAs exhibit a range of block granularity. Very fine-grained logic blocks have been applied to bit level manipulation of data in applications such as encryption, image processing and filters (Ohta et al 1998) while coarse-grained architectures are primarily used for word-width datapath circuits. Again, the tradeoff is between flexibility and performance - as coarse-grained blocks are optimized for large computations, they can be faster and smaller overall than a set of smaller cells connected to form the same type of structure. However, they also tend to be less flexible, forcing the application to be adapted to the architecture just as for conventional processors.

New techniques are required that maintain the flexibility of the FPGA structure while minimising the effects of its configuration overheads. In particular, the serial re-configuration mechanisms of current FPGAs will clearly not scale indefinitely - a device with 10^23 programmable elements could take a few million years to configure! One of the few current proposals to directly address this issue is the Cell Matrix architecture (Macias 1999).

3.4.2 Defect Tolerant Hardware

As identified previously, defect tolerant architectures will be the only way to economically build computing systems with hundreds of billions of devices because any system using nanoscale components will contain significant numbers of defects. One example of an existing defect tolerant custom configurable system is the Teramac (Heath et al 1998). The basic idea was to build a system out of cheap but imperfect components (FPGAs in this case), find the defects and configure the available good resources using software. The high routability of the Teramac is based on the availability of excessive interconnections - due to its "fat-tree" routing configuration. However, it is possible that current methods for detecting defects such as those used in Teramac will not scale to devices with 10^10 configuration bits (Goldstein 2001). Thus, novel parallel defect mapping techniques will need to be developed - most probably built-in, and coupled with self-configuration mechanisms of the type suggested by Macias (1999) or Gericota et al (2001).
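
The basic configure-around-defects idea can be sketched as follows. This is a toy model only: the array size, defect rate and greedy allocation are assumptions for illustration and bear no relation to Teramac's actual fat-tree routing or defect-mapping algorithms.

```python
import random

# Toy defect-avoiding configuration: test every cell (here, simulated), build a
# defect map, then place logical resources only on physically good cells.
# SIZE and DEFECT_RATE are assumed values for illustration.
random.seed(0)
SIZE, DEFECT_RATE = 32, 1e-3

defect_map = [[random.random() < DEFECT_RATE for _ in range(SIZE)]
              for _ in range(SIZE)]              # True = faulty cell (from BIST)

def allocate(n_needed):
    """Map logical cell ids onto good physical (row, col) locations."""
    good = [(r, c) for r in range(SIZE) for c in range(SIZE)
            if not defect_map[r][c]]
    if len(good) < n_needed:
        raise RuntimeError("not enough working cells")
    return dict(enumerate(good[:n_needed]))

placement = allocate(900)
print(len(placement), "logical cells placed on a", SIZE * SIZE, "cell physical array")
```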

"Embryonics" (Mange et al 2000) is a biologically inspired scheme that aims to produce highly robust integrated circuits with self-repair and self-replication properties. In this case, the reconfiguration algorithm is performed on-chip in the form of an "artificial genome" containing the basic configuration of the cell. Its fault tolerance relies on fault detection and location via built-in self-test plus an ability to bypass faulty cells and to substitute spare cells in their place. However, the simplistic system employed – substituting entire columns of cells if just one cell is faulty - has too many limitations to scale successfully, not the least of which is the need to estimate the number of standby logic cells that might be required in a typical implementation.

While the various demonstration systems have their limitations, they do illustrate that it is possible to build a computer system that contains defective components as long as there is sufficient communication bandwidth to support the discovery and use of working components plus the capacity to perform such a rearrangement of working components. An ability to perform self-test will be critical. It is possible that the most important components in nanocomputer architecture might turn out to be its configuration switches and controls.

4 Nanocomputer Architecture

Having surveyed the current challenges and opportunities in the nanoelectronic domain, it is now possible to make some predictions about likely characteristics of future nanocomputer architectures. As has been seen, these characteristics lead to: the need for extremely localised interconnect; the use of homogenous arrays that are able to support heterogenous processing structures; the ability to exploit parallelism at multiple levels (e.g. instruction level, multi-threaded etc.); a requirement for dynamic reconfigurability with low reconfiguration overheads (in both space and time) as well as defect and/or fault tolerance - at both the commissioning/configuration stage and at run-time.

4.1.1 Reconfiguring the Memory Gap

Although the fabrication of RAM and digital logic are completely separated at present, and there is a vast and expensive infrastructure supporting both, the functions of logic and memory must eventually merge if the increasing gap in performance between the two is to be overcome.

Non-volatility will be the key. When DRAM is finally superseded by non-volatile memory, it will be possible to envisage a computing system in which all storage – disk, main memory and caches – merges into the processing mesh. Figure 2 illustrates one reason why this would be a good idea. In the memory hierarchy of a conventional processor, it is possible for code and data items to be duplicated in more than five places in the system (e.g. disk, disk cache, memory, memory cache(s), registers). It is fairly easy to argue that this is not a good use of the available machine resources (Flynn 1999).

Figure 2: Conventional memory hierarchy (CPU registers and ALU, caches, main memory and disk).

At present, the two main contenders for non-volatile technology are floating-gate structures and magnetics. The roadmap for non-volatile Magnetic RAM (MRAM) shows it reaching a density of 1Gbit by 2006 (Inomata 2001) and “nano-magnetic” technology (Cowburn and Welland 2000) may eventually support densities of 10^12 bits.

Figure 3: Reconfigurable threshold logic cell. (In the original figure, the programmed level V_C selects the gate function: 0 gives constant logic '1', 0.2 gives A.B, 0.4 gives A+B and 0.6 gives constant '0'.)

Although floating-gate devices have been under development for around 30 years (Ohnakado and Ajika 2001), they are unlikely to reach the same densities as magnetic based systems. Figure 3 illustrates an example of a non-volatile reconfigurable logic cell that merges the variable threshold νMOS logic of Kotani, Shibata and Ohmi (1992) with the multi-valued memory cell of Wei and Lin (1992). In this cell, the initial “c-circuit” acts as a simple D/A converter, producing a voltage that is proportional to the two input values. The RTD-based 4-valued memory is used to adjust the offset of this voltage, shown as V_GATE in Figure 3, thereby reconfiguring the function of the (two-input) logic gate.
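
The behaviour of such a cell can be sketched at a purely functional level. The mapping below from the stored configuration level to the gate function follows the listing in Figure 3, but the threshold windows and the simple averaging "D/A" model are assumptions for illustration; they are not the circuit equations of the νMOS/RTD implementation.

```python
# Purely behavioural model of a reconfigurable two-input cell: the "c-circuit"
# averages the inputs and a stored (non-volatile) level selects the function.
# The threshold windows below are assumed; they simply reproduce the Figure 3
# listing (0 -> '1', 0.2 -> A.B, 0.4 -> A+B, 0.6 -> '0').

def cell(a: int, b: int, v_config: float) -> int:
    v_in = 0.5 * (a + b)                    # simple D/A: 0.0, 0.5 or 1.0
    if v_config <= 0.1:
        return 1                            # constant logic '1'
    if v_config <= 0.3:
        return int(v_in >= 1.0)             # A AND B
    if v_config <= 0.5:
        return int(v_in >= 0.5)             # A OR B
    return 0                                # constant logic '0'

for v in (0.0, 0.2, 0.4, 0.6):
    print(v, [cell(a, b, v) for a in (0, 1) for b in (0, 1)])
```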

The low-overhead reconfigurability offered by this type of circuit – or by alternatives such as nano-magnetics – may eventually support the creation of a merged memory/processing structure, in which the idea of “mass storage” is replaced by “mass reconfiguration” as program and data become indistinguishable from the processing mesh.

4.1.2 Coding Around the Defects

As outlined previously, all nanocomputer systems will contain faulty components. Defect/fault tolerance, supporting the ability to detect and avoid defects at both the commissioning/configuration stage and at run-time, will therefore be of critical importance. Forshaw et al (2001) have shown that it is theoretically possible to produce working systems with defect rates as high as 10^-5 to 10^-4 if reconfigurable techniques are used to bypass the defects.
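
A rough yield calculation shows why this is so. Assuming independent defects, the probability that every one of N devices works is (1-p)^N, which collapses for chip-sized N but remains high for a modest repairable block - so a reconfigurable architecture only needs each block to contain enough good cells.

```python
import math

# Yield sketch assuming independent defects: the probability that all N devices
# work is (1 - p)**N. Computed via log1p to avoid rounding problems.

def defect_free_probability(n_devices, p_defect):
    return math.exp(n_devices * math.log1p(-p_defect))

print(defect_free_probability(1e9, 1e-5))   # ~0: no defect-free 10^9-device chips
print(defect_free_probability(1e4, 1e-5))   # ~0.9: most 10^4-device blocks are clean
```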

Existing static fault mapping techniques (such as are used in hard disk systems, for example) may represent a good starting point, but it is likely that built-in self test (BIST) will be necessary to maintain system integrity in the presence of soft-errors and noise. There have been some initial studies into how to optimally configure BIST in an extremely large cellular array (Goldstein 2000) but no general solutions have been developed as yet.

4.1.3 “Grain Wars”

At least in the short term, the outcome of the coarse-grain vs. fine-grain argument is difficult to predict as there are strong arguments for both styles. Eventually, however, all nanocomputer architectures will be formed from arrays of simple cells with highly localised interconnect. This will be an inevitable outcome of shrinking geometries as devices evolve towards molecular and quantum/single electron technologies.

At present, the tendency towards coarse-grained architectures (e.g. multiple CPU blocks, ALU arrays etc.) is being driven by the high overheads imposed by reconfiguration techniques in devices such as FPGAs. If this can be reduced, for example by the use of multi-value techniques such as was illustrated previously, then fine-grained structures offer a much more general solution to the creation of flexible computing platforms.

4.1.4 A Processor for Every Process

It appears, then, that the ultimate computing fabric will be a homogenous, fine-grained, non-volatile, fault tolerant, reconfigurable processing array, exhibiting adjacent or nearest neighbour interconnect only and supporting heterogenous structures that are derived by compiling a HLL program. The processing fabric will be reconfigurable in a way that maximises the system's ability to exploit parallelism - consisting of as many individual processing meshes as are necessary, each configured in an optimal manner for the particular function.

This scheme takes advantage of the future abundance of processing with a scarcity of interconnect. Instead of a large number of constructed programs, we may instead try to store (close to) all possible programs in the device. In this organisation, programs would be continuously configured within the non-volatile memory/logic - available to respond to an input stimulus by generating an output whenever required. The concept of memory hierarchy would be completely eliminated – if the logic structure is large enough to store and execute all “programs” for that machine. In the more immediate term, configuration “context switching” (Kearney 1998) will replace the loader of conventional operating systems. As this architecture effectively merges processor logic, RAM and disk into one structure, the only remaining potential performance bottleneck will be the input/output channel. Although I/O bandwidth tends not to be as great a problem as the memory/processor interface, current processors working in domains such as multimedia already have some difficulty maintaining high data throughputs and this will continue to be an issue (for example with 3D multimedia). The challenge will be to develop flexible parallel I/O configurations that will allow the internal processes to operate at peak performance.

4.1.5 Legacy Software

General purpose computing is largely sequential, dominated by control dependencies and tends to rely on dynamic data structures that currently do not map well to array architectures (Mangione-Smith and Hutchings 1997). However, a nanocomputer will inherit a vast quantity of legacy software that cannot be ignored (it could be said that the Y2K issue revealed just how extraordinarily long-lived some types of software are).

There is no doubt that the translation from a source program to systems with billions of gates will be an extremely complex task. But, ironically, the very availability of a large number of gates makes the task easier. In this case, the synthesis process has access to the resources necessary to create all possible computation paths in the “program” and then simply select the single correct result at the end. This aggressive form of speculation is the basis of the synthesis process for the PipeRench architecture (Goldstein et al 2000).

Figure 4: Processing graph fragment - if (x>0) { add a,b,c; add b,a,c } else { sub a,b,c; sub b,a,c }.

The graph fragment in Figure 4 illustrates this point. All arithmetic functions are duplicated as required, as are the intermediate variables - without concern for the hazards that would occur in a typical pipelined system. Note that in this simplified diagram, no data synchronisation mechanism is shown. A number of ideas have been proposed that would be applicable to the nanocomputer domain, from Asynchronous Wave Pipelines (Hauck, Katoch and Huss 2000) through to gate-level nano-pipelined computation using RTDs (Mazumder et al 1998).
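
The select-at-the-end style of Figure 4 can be sketched in software as follows. The three-operand convention (destination, source, source) and the assumption that both arms read the original register values are illustrative choices; as in the figure, no synchronisation is modelled.

```python
# Behavioural sketch of "build every path, select at the end" for the Figure 4
# fragment. Operand order (dest, src1, src2) and the use of the original
# register values in both arms are illustrative assumptions.

def speculative_fragment(x, a, b, c):
    then_a, then_b = b + c, a + c           # add a,b,c ; add b,a,c (always evaluated)
    else_a, else_b = b - c, a - c           # sub a,b,c ; sub b,a,c (always evaluated)
    take_then = x > 0                       # the predicate only drives the final mux
    return (then_a, then_b) if take_then else (else_a, else_b)

print(speculative_fragment(1, a=1, b=2, c=3))    # (5, 4)
print(speculative_fragment(-1, a=1, b=2, c=3))   # (-1, -2)
```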

5 Conclusions

We argue that future nanocomputer architectures will be formed from non-volatile, reconfigurable, locally-connected hardware meshes that merge processing and memory. In this paper, we have highlighted the characteristics of nanoelectronic devices that make this most likely - primarily the severe limitations on the length of interconnection lines between devices. It appears that the current trend towards coarse-grained structures may not be supportable in the long term. If the overheads associated with reconfigurability can be reduced or even eliminated, architectures based on fine-grained meshes with rich, local interconnect offer a better match to the characteristics of nanoelectronic devices.

Of course, having access to a vast, reconfigurable computing platform is only the first step. The question still remains as to what use such an architecture might be put. Will it be necessary to own an "Avogadro computer" in order to run Windows® 2030? Moravec (1998) has suggested that, if the power of the human brain is in the synapses connecting neurons, then it would take the equivalent of 10^14 instructions/sec to mimic a brain with an estimated 10^13 - 10^15 synapses. Might the power of nanocomputer architecture finally release the ghost in the machine?

6 References

Ami, S., Joachim, C. (2001). Logic Gates and Memory Cells Based on Single C60 Electromechanical Transistors. Nanotechnology 12 (1):44-52.

Asai, S., Wada, Y. (1997). Technology Challenges for Integration Near and Below 0.1um. Proceedings of the IEEE 85(4):505-520.

Borkar, S. (1999). Design Challenges of Technology Scaling. IEEE Micro 19 (4):23-29.

Borkar, S. (2000). Obeying Moore's Law Beyond 0.18 Micron. Proc. 13th Annual IEEE International ASIC/SOC Conference, 2000, IEEE, pp:26-31.

Capasso, F., Sen, S., Beltram, F., Lunardi, L., Vangurlekar, A.S., Smith, P., Shah, N. J., Malik, R. J., Cho, A. Y. (1989). Quantum Functional Devices: Resonant-Tunneling Transistors, Circuits with Reduced Complexity, and Multi-Valued Logic. IEEE Transactions on Electron Devices 36 (10).

Carrillo, J. E., Chow, P. (2001). The Effect of Reconfigurable Units in Superscalar Processors. Proc. Ninth International Symposium on Field Programmable Gate Arrays, FPGA 2001, Monterey, CA, USA, ACM, February 11-13, 2001, pp:141-150.

Chen, Y., Chadderton, L.T., Fitz Gerald, J., Williams, J.S. (1999). A Solid State Process for Formation of Boron Nitride Nanotubes. Applied Physics Letters 74 (20):2960-2962.

Compton, C., Hauck, S. (2000). An Introduction to Reconfigurable Computing. IEEE Computer (April 2000).

Cowburn, R. P., Welland, M. E. (2000). Room Temperature Magnetic Quantum Cellular Automata. Science 287:1466-1468.

Crawley, D. (1997). An Analysis of MIMD Processor Node Designs for Nanoelectronic Systems. Internal Report. Image Processing Group, Department of Physics & Astronomy, University College London.

Davari, B. (1999). CMOS Technology: Present and Future. Proc. IEEE Symposium on VLSI Circuits, Digest of Technical Papers, IEEE, 1999, pp:5-9.

Drexler, K. E. (1992). Nanosystems: Molecular Machinery, Manufacturing and Computation, Wiley & Sons, Inc.

Durbeck, L. J. K. (2001). An Approach to Designing Extremely Large, Extremely Parallel Systems. Abstract of a talk given at The Conference on High Speed Computing, Salishan Lodge, Gleneden, Oregon, U.S.A., April 26 2001. Conference sponsored by Los Alamos, Lawrence Livermore, and Sandia National Laboratories. Cell Matrix Corporation, accessed: 12 August, 2001.

Durbeck, L. J. K., Macias, N. J. (2000). The Cell Matrix: An Architecture for Nanocomputing. Cell Matrix Corporation, accessed: 12 August, 2001.

Ellenbogen, J. C. (1997). Matter as Software. The Mitre Corporation.

Ellenbogen, J. C., Love, J. C. (1999). Architectures for Molecular Electronic Computers: 1. Logic Structures and an Adder Built from Molecular Electronic Diodes. The Mitre Corporation.

Ferry, D. K., Grondin, R. O., Akers, L. A. (1989). Two-Dimensional Automata in VLSI. In Sub-Micron Integrated Circuits. R. K. Watts (ed), John Wiley & Sons.

Flynn, M. J. (1999). Basic Issues in Microprocessor Architecture. Journal of Systems Architecture 45 (12-13):939-948.

Forshaw, M. R. B., Nikolic, K., Sadek, A. (2001). 3rd Annual Report, Autonomous Nanoelectronic Systems With Extended Replication and Signalling, ANSWERS. Technical Report. University College London, Image Processing Group. London, U.K.

Fountain, T. J. (1997). The Propagated Instruction Processor. Proc. Workshop on Innovative Circuits and Systems for Nanoelectronics, Delft, pp:69-74.

Fountain, T. J., Duff, M. J. B. D., Crawley, D. G., Tomlinson, C. and Moffat, C. (1998). The Use of Nanoelectronic Devices in Highly-Parallel Computing Systems. IEEE Transactions on VLSI Systems 6 (1):31-38.

Frazier, G., Taddiken, A., Seabaugh, A., Randall, J. (1993). Nanoelectronic Circuits using Resonant Tunneling Transistors and Diodes. Proc. Solid-State Circuits Conference, 1993. Digest of Technical Papers. 40th ISSCC., 1993, IEEE International, 24-26 Feb. 1993, pp:174 - 175.

Gayles, E. S., Kelliher, T. P., Owens, R. M., Irwin, M. J. (2000). The Design of the MGAP-2: A Micro-Grained Massively Parallel Array. IEEE Transactions on Very Large Scale Integration (VLSI) Systems 8 (6):709-716.

Gelsinger, P. P. (2001). Microprocessors for the New Millennium: Challenges, Opportunities, and New Frontiers. Proc. International Solid-State Circuits Conference ISSCC2001, San Francisco, USA, IEEE.

Geppert, L. (2000). Quantum Transistors: Toward Nanoelectronics. IEEE Spectrum:46-51, September 2000.

Gericota, M. G., Alves, G.R., Silva, M.L., Ferreira, J.M. (2001). DRAFT: An On-line Fault Detection Method for Dynamic & Partially Reconfigurable FPGAs. Proc. Seventh International On-Line Testing Workshop, IEEE, pp:34-36.

Ghosh, P., Mangaser, R., Mark, C., Rose, K. (1999). Interconnect-Dominated VLSI Design. Proc. Proceedings of 20th Anniversary Conference on Advanced Research in VLSI, 21-24 March 1999, pp:114 - 122.

Glösekötter, P., Pacha, C., Goser, K. (1998). Associative Matrix for Nano-Scale Integrated Circuits. Proc. Seventh International Conference on Microelectronics for Neural, Fuzzy and Bio-Inspired Systems, IEEE.

Goldhaber-Gordon, D., Montemerlo, M. S., Love, J. C., Opiteck, G. J., Ellenbogen, J. C. (1997). Overview of Nano-electronic Devices. Proceedings of the IEEE 85 (4):521-540.

Goldstein, S. C. (2001). Electronic Nanotechnology and Reconfigurable Computing. Proc. IEEE Computer Society Workshop on VLSI, Orlando, Florida, IEEE.

Goldstein, S. C., Schmit, H., Budiu, M., Cadambi, S., Moe, M., Taylor, R. R. (2000). PipeRench: A Reconfigurable Architecture and Compiler. IEEE Computer (April 2000):70-77.

Hadley, P., Mooij, J. E. (2000). Quantum Nanocircuits: Chips of the Future? Internal Report. Delft Institute of Microelectronics and Submicron Technology DIMES and Department of Applied Physics. Delft, NL. http://vortex.tn.tudelft.nl/publi/2000/quantumdev/qdevices.html

Hanyu, T., Teranishi, K., Kameyama, M. (1998). Multiple-Valued Logic-in-Memory VLSI Based on a Floating-Gate-MOS Pass-Transistor Network. Proc. IEEE International Solid-State Circuits Conference, pp:194-195, 437.

Hartenstein, R. (1997). The Microprocessor is No Longer General Purpose: Why Future Reconfigurable Platforms Will Win. Proc. Second Annual IEEE International Conference on Innovative Systems in Silicon, pp:2 -12.

Hartenstein, R. W. (2001). Coarse Grain Reconfigurable Architectures. Proc. ASP-DAC 2001 Design Automation Conference, 30 Jan.-2 Feb. 2001, pp:564-569.

Hauck, O., Katoch, A., Huss, S.A. (2000). VLSI System Design Using Asynchronous Wave Pipelines: A 0.35μ CMOS 1.5GHz Elliptic Curve Public Key Cryptosystem Chip. Proc. Sixth International Symposium on Advanced Research in Asynchronous Circuits and Systems, ASYNC 2000, 2-6 April 2000, pp:188-197.

Hauck, S. (1995). Asynchronous Design Methodologies: An Overview. Proceedings of the IEEE 83 (1):69-93.

Hauck, S., Fry, T.W., Hosler, M.M., Kao, J.P. (1997). The Chimaera Reconfigurable Functional Unit. Proc. IEEE Symposium on Field-Programmable Custom Computing Machines, FCCM'97, pp:87 - 96.

Hauser, J. R., Wawrzynek, J. (1997). Garp: A MIPS Processor with a Reconfigurable Coprocessor. Proc. IEEE Symposium on FPGAs for Custom Computing Machines, FCCM'97, pp:12-21.

Heath, J. R., Kuekes, P. J., Snider, G. S., Williams, S. (1998). A Defect-Tolerant Computer Architecture: Opportunities for Nanotechnology. Science 280 (12 June 1998):1716-21.

Ho, R., Mai, K.W., Horowitz, M.A. (2001). The Future of Wires. Proceedings of the IEEE 89 (4):490-504.

Inomata, K. (2001). Present and Future of Magnetic RAM Technology. IEICE Transactions on Electronics E84-C (6):740 - 746.

IST (2000). Technology Roadmap for Nanoelectronics, European Commission IST Programme - Future and Emerging Technologies. Compano, R., Molenkamp, L., Paul, D. J. (eds).

Kearney, D., Keifer, R. (1998). Hardware Context Switching in a Signal Processing Application for an FPGA Custom Computer. Advanced Computing Research Centre, School of Computer and Information Science, University of SA.

Koren, I., Koren, Z. (1998). Defect Tolerance in VLSI Circuits: Techniques and Yield Analysis. Proceedings of the IEEE 86 (6):1819-1836.

Kotani, K., Shibata, T., Ohmi, T. (1992). Neuron-MOS Binary Logic Circuits Featuring Dramatic Reduction in Transistor Count and Interconnections. Proc. International Electron Devices Meeting, 1992, 13-16 Dec. 1992, pp:431-434.

Kozyrakis, C. E., Patterson, D. A. (1998). A New Direction for Computer Architecture Research. IEEE Computer 31 (11):24-32.

Lent, C. S., Tougaw, P. D., Porod, W., Bernstein, G. H. (1993). Quantum Cellular Automata. Nanotechnology 4 (1):49-57.

Liu, X., Lee, C., Zhou, C., Han, J. (2001). Carbon Nanotube Field-Effect Inverters. Applied Physics Letters 79 (20):3329-3331.

Maccuci, M., Francaviglia, S., Luchetti, G., Iannaccone, G. (1999). Quantum-Dot Cellular Automata Circuit Simulation. Technical Report. University of Pisa, Department of Information Engineering.

Macias, N. J. (1999). The PIG Paradigm: The Design and Use of a Massively Parallel Fine Grained Self-Reconfigurable Infinitely Scalable Architecture. Proc. First NASA/DoD Workshop on Evolvable Hardware, 1999.

Mange, D., Sipper, M., Stauffer, A., Tempesti, G. (2000). Toward Robust Integrated Circuits: The Embryonics Approach. Proceedings of the IEEE 88 (4):516-543.

Mangione-Smith, W. H., Hutchings, B. L. (1997). Configurable Computing: The Road Ahead. In Reconfigurable Architectures: High Performance by Configware. R. Hartenstein, V. Prasanna (eds). Chicago, IT Press:81-96.

Margolus, N. (1998). Crystalline Computation. In The Feynman Lecture Series on Computation, Volume 2. A. Hey (ed), Addison-Wesley.

Mazumder, P., Kulkarni, S., Bhattacharya, M., Jian Ping Sun, Haddad, G.I. (1998). Digital Circuit Applications of Resonant Tunneling Devices. Proceedings of the IEEE 86 (4):664-686.

McFarland, G. W. (1997). CMOS Technology Scaling and its Impact on Cache Delay. PhD Thesis. Stanford University.

Merkle, R. C. (1996). Design Considerations for an Assembler. Nanotechnology 7 (3):210-215.

Merkle, R. C., Drexler, K. E. (1996). Helical Logic. Nanotechnology 7 (4):325-339.

Mohan, S., Mazumder, P., Haddad, G. I., Mains, R. K., Sun, J. P. (1993). Logic Design Based on Negative Differential Resistance Characteristics of Quantum Electronic Devices. IEE Proceedings-G: Electronic Devices 140 (6):383-391.

Montemerlo, M., Love, C., Opiteck, G., Goldhaber-Gordon, D., Ellenbogen, J. (1996). Technologies and Designs for Electronic Nanocomputers. The Mitre Corporation.

Moravec, H. (1998). When Will Computer Hardware Match the Human Brain? Journal of Transhumanism, Vol. 1, accessed: 4 March 2001.

Nakajima, A., Futatsugi, T., Kosemura, K., Fukano, T., Yokoyama, N. (1997). Room Temperature Operation of Si Single-electron Memory with Self-aligned Floating Dot Gate. Applied Physics Letters 70 (13):1742-1744.

Ohmi, T., Sugawa, S., Kotani, K., Hirayama, M., Morimoto, A. (2001). New Paradigm of Silicon Technology. Proceedings of the IEEE 89 (3):394-412.

Ohnakado, T., Ajika, N. (2001). Review of Device Technologies of Flash Memories. IEICE Transactions on Electronics E84-C (6):724-733.

Ohta, A., Isshiki, T., Kunieda, H. (1998). A New High Logic Density FPGA For Bit-Serial Pipeline Datapath. Proc. IEEE Asia-Pacific Conference on Circuits and Systems, APCCAS 1998, 24-27 Nov. 1998, pp:755 - 758.

Parihar, V., Singh, R., Poole, K. F. (1998). Silicon Nanoelectronics: 100nm Barriers and Potential Solutions. Proc. IEEE/SEMI Advanced Semiconductor Manufacturing Conference, IEEE, 1998, pp:427-121.

Patterson, D., Anderson, T., Cardwell, N., Fromm, R., Keeton, K., Kozyrakis, C., Thomas, R., Yelick, K. (1997). A Case for Intelligent RAM. IEEE Micro 17 (2):34-44.

Porod, W. (1998). Towards Nanoelectronics: Possible CNN Implementations Using Nanoelectronic Devices. Proc. 5th IEEE International Workshop on Cellular Neural Networks and their Applications, London, England, 14-17 April 1998, pp:20-25.

Reed, M. A., Bennett, D.W., Chen, J., Grubisha, D.S., Jones, L., Rawlett, A.M., Tour, J.M., Zhou, C. (1999). Prospects for Molecular-Scale Devices. Proc. IEEE Electron Devices Meeting, IEEE.

Ronen, R., Mendelson, A., Lai, K., Shih-Lien Lu, Pollack, F., Shen, J.P. (2001). Coming Challenges in Microarchitecture and Architecture. Proceedings of the IEEE 89 (3):325-340.

Rueckes, T., Kim, K., Joselevich, E., Tseng, G. Y., Cheung, C-L., Lieber, C. M. (2000). Carbon Nanotube-Based Nonvolatile Random Access Memory for Molecular Computing. Science 289:94-97.

Shibayama, A., Igura, H., Mizuno, M., Yamashina, M. (1997). An Autonomous Reconfigurable Cell Array for Fault-Tolerant LSIs. Proc. IEEE International Solid State Circuits Conference, ISSCC97, IEEE, February 7, 1997, pp:230 - 231, 462.

SIA (1999). International Technology Roadmap for Semiconductors, Semiconductor Industry Association.

Stock, J., Malindretos, J., Indlekofer, K.M., Pottgens, M., Forster, A., Luth, H. (2001). A Vertical Resonant Tunneling Transistor for Application in Digital Logic Circuits. IEEE Transactions on Electron Devices 48 (6):1028 - 1032.

Sylvester, D., Keutzer, K. (2001). Impact of Small Process Geometries on Microarchitectures in Systems on a Chip. Proceedings of the IEEE 89 (4):467 - 489.

Tang, S. H., Chang, L., Lindert, N., Choi, Y-K., Lee, W-C., Huang, X., Subramanian, V., Bokor, J., King, T-J., Hu, C. (2001). FinFET — A Quasi-Planar Double-Gate MOSFET. Proc. IEEE International Solid State Circuits Conference, ISSCC 2001, San Francisco, USA, February 2001.

Taur, Y., Buchanan, D.A., Wei Chen, Frank, D.J., Ismail, K.E., Shih-Hsien Lo, Sai-Halasz, G.A., Viswanathan, R.G., Wann, H.-J. C., Wind, S.J., Hon-Sum Wong (1997). CMOS Scaling into the Nanometer Regime. Proceedings of the IEEE 85 (4):486 - 504.

Timp, G. L., Howard, R. E. and Mankiewich, P. M. (1999). Nano-electronics for Advanced Computation and Communication. In Nanotechnology. G. L. Timp (ed). New York: Springer-Verlag Inc.

Tucker, J. R. (1997). Schottky Barrier MOSFETS for Silicon Nanoelectronics. Proc. Advanced Workshop on Frontiers in Electronics, WOFE '97, 6-11 Jan. 1997, pp:97-100.

Vajapeyam, S., Valero, M. (2001). Early 21st Century Processors. IEEE Computer 34 (4):47-50.

von Neumann, J. (1966). Theory of Self-Reproducing Automata, University of Illinois Press.

Waho, T., Chen, K.J., Yamamoto, M. (1996). A Novel Multiple-Valued Logic Gate Using Resonant Tunneling Devices. IEEE Electron Device Letters 17 (5):223-225.

Waingold, E., Taylor, M., Srikrishna, D., Sarkar, V., Lee, W., Lee, V., Kim, J., Frank, M., Finch, P., Barua, R., Babb, J., Amarasinghe, S., Agarwal, A. (1997). Baring It All to Software: Raw Machines. IEEE Computer 30:83 - 96.

Wei, S.-J., Lin, H.C. (1992). Multivalued SRAM Cell Using Resonant Tunneling Diodes. IEEE Journal of Solid-State Circuits 27 (2):212-216.

Wilson, M., Patney, H., Lee, G., Evans, L. (2000). New Wave Electronics and the Perfect Tube. Monitor, March-May 2001:14-17.

Young, W. C., Sheu, B. J. (1997). Unraveling the Future of Computing. IEEE Circuits and Devices Magazine 13:14 - 21, November 1997.
