
Charting Past, Present, and Future Research in Ubiquitous Computing

GREGORY D. ABOWD and ELIZABETH D. MYNATT

Georgia Institute of Technology

The proliferation of computing into the physical world promises more than the ubiquitous availability of computing infrastructure; it suggests new paradigms of interaction inspired by constant access to information and computational capabilities. For the past decade, application-driven research in ubiquitous computing (ubicomp) has pushed three interaction themes: natural interfaces, context-aware applications, and automated capture and access. To chart a course for future research in ubiquitous computing, we review the accomplishments of these efforts and point to remaining research challenges. Research in ubiquitous computing implicitly requires addressing some notion of scale, whether in the number and type of devices, the physical space of distributed computing, or the number of people using a system. We posit a new area of applications research, everyday computing, focussed on scaling interaction with respect to time. Just as pushing the availability of computing away from the traditional desktop fundamentally changes the relationship between humans and computers, providing continuous interaction moves computing from a localized tool to a constant companion. Designing for continuous interaction requires addressing interruption and resumption of interaction, representing passages of time and providing associative storage models. Inherent in all of these interaction themes are difficult issues in the social implications of ubiquitous computing and the challenges of evaluating ubiquitous computing research. Although cumulative experience points to lessons in privacy, security, visibility, and control, there are no simple guidelines for steering research efforts. Akin to any efforts involving new technologies, evaluation strategies form a spectrum from technology feasibility efforts to long-term use studies—but a user-centric perspective is always possible and necessary.

Categories and Subject Descriptors: H.5.2 [Information Interfaces and Presentation]: User Interfaces—Evaluation/methodology; Interaction styles; Prototyping; H.5.m [Information Interfaces and Presentation]: Miscellaneous; J.m [Computer Applications]: Miscellaneous; K.4.2 [Computers and Society]: Social Issues

General Terms: Human Factors

Additional Key Words and Phrases: Augmented reality, context-aware applications, capture and access, evaluation, everyday computing, natural interfaces, social implications, ubiquitous computing, user interfaces

Authors’ address: College of Computing & GVU Center, Georgia Institute of Technology, Atlanta, GA 30332-0280; email: abowd@cc.gatech.edu; mynatt@cc.gatech.edu.

Permission to make digital/hard copy of part or all of this work for personal or classroom use is granted without fee provided that the copies are not made or distributed for profit or commercial advantage, the copyright notice, the title of the publication, and its date appear, and notice is given that copying is by permission of the ACM, Inc. To copy otherwise, to republish, to post on servers, or to redistribute to lists, requires prior specific permission and/or a fee.

© 2000 ACM 1073-0516/00/0300–0029 $5.00

ACM Transactions on Computer-Human Interaction, Vol. 7, No. 1, March 2000, Pages 29–58.


1. INTRODUCTION

Weiser introduced the area of ubiquitous computing (ubicomp) and put forth a vision of people and environments augmented with computational resources that provide information and services when and where desired [Weiser 1991]. For the past decade, ubicomp researchers have attempted this augmentation with the implicit goal of assisting everyday life and not overwhelming it. Weiser’s vision described a proliferation of devices at varying scales, ranging in size from hand-held “inch-scale” personal devices to “yard-scale” shared devices. This proliferation of devices has indeed occurred, with commonly used devices such as hand-held personal digital assistants (PDAs), digital tablets, laptops, and wall-sized electronic whiteboards. The development and deployment of necessary infrastructure to support continuous mobile computation is arriving.

Another aspect of Weiser’s vision was that new applications would emerge that leverage off these devices and infrastructure. Indeed, ubicomp promises more than just infrastructure, suggesting new paradigms of interaction inspired by widespread access to information and computational capabilities. In this article, we explore how this applications perspective has evolved in the decade since the start of the Ubiquitous Computing project at Xerox PARC. Specifically, we review the accomplishments and outline remaining challenges for three themes:

—We desire natural interfaces that facilitate a richer variety of communications capabilities between humans and computation. It is the goal of these natural interfaces to support common forms of human expression and leverage more of our implicit actions in the world. Previous efforts have focused on speech input and pen input, but these interfaces still do not robustly handle the errors that naturally occur with these systems; also, these interfaces are too difficult to build.

—Ubicomp applications need to be context-aware, adapting their behavior based on information sensed from the physical and computational environment. Many applications have leveraged simple context, primarily location and identity, but numerous challenges remain in creating reusable representations of context, and in creating more complex context from sensor fusion and activity recognition.

—Finally, a large number of ubicomp applications strive to automate the capture of live experiences and provide flexible and universal access to those experiences later on.

Undertaking issues of scale is implicit in the definition of ubicomp research. Weiser defined the notion of scale as a broad space of computational devices [Weiser 1991]. Likewise, scaling systems with respect to distribution of computation into physical space reinforces the desire to break the human away from desktop-bound interaction. Requirements for critical-mass acceptance and collaboration imply scaling with respect to people. A final dimension, time, presents new challenges for scaling a system. Pushing the availability of interaction to a “24-by-7” (24 hours a day, 7 days a week) basis uncovers another class of largely unexplored interactions that will also push ubicomp research into the next century. To address scaling with respect to time, in Section 5, we introduce a new theme, called everyday computing, that promotes informal and unstructured activities typical of much of our everyday lives. These activities are continuous in time, a constant ebb and flow of action that has no clear starting or ending point. Familiar examples are orchestrating tasks, communicating with family and friends, and managing information.

The structure of this article follows the evolutionary path of past work in ubicomp. The first step in this evolution, demonstrated by the PARCTab [Want et al. 1995] and Liveboard [Elrod et al. 1992], is computers encased in novel form factors. Often these computational appliances push on traditional areas in computer science such as networking and operating systems. Since these new form factors often do not work well with traditional input devices such as the keyboard and mouse, developing new, and more natural, input capabilities is the next step. An example of this work is the pen-based shorthand language Unistroke for the PARCTab [Goldberg and Richardson 1993]. After some initial demonstrations, infrastructure is needed to deploy these devices for general use. For example, numerous tour guide systems that mimic the first use of Active Badges [Want et al. 1992] have been built and deployed for real use.

It is at this point that application designers begin working with these new systems to develop novel uses, often focusing on implicit user input to minimize the intrusion of technology into everyday life. The objective of this application-centered research is to understand how everyday tasks can be better supported, and how they are altered by the introduction of ubiquitous technologies. For example, ubicomp applications in support of common meeting tasks at PARC (through the Tivoli project) have resulted in new ways to scribe and organize materials during meetings. Capture environments in educational settings have provided more opportunities to understand the patterns of longer-term reviewing tasks over large multimedia records. Applications of wearable computers initially emphasized constant access to traditional individual tasks, such as accessing email. More recent applications have attempted to augment an individual’s memory and provide implicit information sharing between groups. The direction of applications research, what Weiser himself deemed the ultimate purpose for ubicomp research, is deeply influenced by authentic and extended use of ubicomp systems.

Today we are just starting to understand the implications of continuous immersion in computation. The future will hold much more than constant availability of tools to assist with traditional, computer-based tasks. Whether we wear computers on our body, or have them embedded in our environment, the ability of computers to alter our perception of the physical world, to support constant connectivity to distant people and places, to provide information at our fingertips, and to continuously partner with us in our thoughts and actions offers much more than a new “killer app”—it offers the possibility of a killer existence.

Overview. In this article, we investigate the brief history of ubiquitous computing through exploration of the above-mentioned interaction themes—natural interfaces, context-aware computing, and automated capture and access for live experiences. In addition to reviewing the research accomplishments in these application research themes, we also outline some of the remaining research challenges for HCI researchers to pursue in the new millennium. We then explain the necessity for ubicomp research to explore continuous everyday activities. This area of research motivates applications that build off of the three earlier themes and moves ubicomp more into the realm of everyday computing characterized by continuously present, integrative, and unobtrusive interaction. Inherent in all of these interaction themes are difficult issues in the social implications of ubiquitous computing and the challenges of evaluating ubiquitous computing research. We conclude with our reflections on these issues and a description, via case studies, of our current strategies for evaluation of ubicomp systems.

2. COMPUTING WITH NATURAL INTERFACES

Ubiquitous computing inspires application development that is “off the desktop.” Implicit in this mantra is the assumption that physical interaction between humans and computation will be less like the current desktop keyboard/mouse/display paradigm and more like the way humans interact with the physical world. Humans speak, gesture, and use writing utensils to communicate with other humans and alter physical artifacts. These natural actions can and should be used as explicit or implicit input to ubicomp systems.

Computer interfaces that support more natural human forms of communication (e.g., handwriting, speech, and gestures) are beginning to supplement or replace elements of the GUI interaction paradigm. These interfaces are lauded for their learnability and general ease of use, and their ability to support tasks such as authoring and drawing without drastically changing the structure of those tasks. Additionally, they can be used by people with disabilities for whom the traditional mouse and keyboard are less accessible.

There has been work for many years in speech-related interfaces, and the emerging area of perceptual interfaces is being driven by a long-standing research community in computer vision and computational perception [Turk 1997; 1998]. Pen-based or free-form interaction is also realizing a resurgence after the failure of the first generation of pen computing. More recently, researchers have suggested techniques for using objects in the physical world to manipulate electronic artifacts, creating so-called graspable [Fitzmaurice et al. 1995] or tangible user interfaces [Ishii and Ullmer 1997]. Harrison et al. [1998] have attached sensors to computational devices in order to provide ways for physical manipulations of those devices to be interpreted appropriately by the applications running on those devices. Applications that support natural interfaces will leverage off of all of these input and output modalities. Instead of attempting to review the impressive amount of work in natural interfaces, we focus on two issues that are important for enabling the rapid development of effective natural interfaces. One important area we will not discuss is that of multimodal integration, a theme with its own conferences and journals already.

2.1 First-Class Natural Data Types

To ease the development of more applications with natural interfaces, we must be able to handle other forms of input as easily as keyboard and mouse input. The raw data or signals that underlie these natural interfaces—audio, video, ink, and sensor input—need to become first-class types in interactive system development. As programmers, we expect that any user interface toolkit for development provides a basic level of support for “fundamental” operations for textual manipulation, and primitives for keyboard and mouse interaction. Similarly, we need basic support for manipulating speech—such as providing speaker pause cues or selection of speech segments or speaker identification—as well as for video and ink and other signals, such as physical device manipulations detected by sensors.

Take, for example, freeform, pen-based interaction. Much of the interest in pen-based computing has focussed on recognition techniques to convert the “ink” from pen input to text. However, some applications, such as personal note-taking, do not require conversion from ink to text. In fact, it can be intrusive to the user to convert handwriting into some other form. Relatively little effort has been put into standardizing support for freeform pen input. Some formats for exchanging pen input between platforms exist, but little effort has gone into defining effective mechanisms for manipulating the freeform ink data type within programs.

What kinds of operations should be supported for a natural data type such as ink? The Tivoli system provided basic support for creating ink data and distinguishing between uninterpreted, freeform ink data and special, implicitly structured gestures [Minneman et al. 1995; Moran et al. 1995; 1996; 1997a]. Another particularly useful feature of freeform ink is the ability to merge independent strokes together as they form letters, words, and other segments of language. In producing Web-based notes in Classroom 2000 (discussed in more detail below), for example, we wanted annotations written with a pen by a lecturer to link to the audio or video of what was said or seen at that same time during a lecture [Abowd 1999]. The annotations are timestamped, but it is not all that useful to associate an individual penstroke to the exact time it was written in class. We used a temporal and spatial heuristic to statically merge penstrokes together and assign them a more meaningful, word-level timestamp [Abowd et al. 1998b]. Chiu and Wilcox [1998] have produced a more general and dynamic algorithm, based on hierarchical agglomeration, to selectively link audio and ink. These structuring techniques need to become standard and available to all applications developers who wish to create freeform, pen-based interfaces. And as the work of Chiu and Wilcox demonstrates, some of the structuring techniques can apply to more than one natural data type. We must also think about primitive operations that combine different natural data types.
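To make the stroke-merging idea concrete, the following sketch shows one greedy way that a temporal and spatial heuristic might group timestamped strokes and assign each group a word-level timestamp. It is an illustration only, not the Classroom 2000 implementation; the field names, thresholds, and distance test are our own assumptions.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Stroke:
    """A single pen stroke with a completion timestamp (seconds) and a bounding box."""
    t: float       # time the stroke was completed, relative to the session start
    x_min: float
    x_max: float
    y_min: float
    y_max: float

@dataclass
class StrokeGroup:
    """A word-level cluster of strokes sharing one representative timestamp."""
    strokes: List[Stroke] = field(default_factory=list)

    @property
    def timestamp(self) -> float:
        # Use the earliest stroke time as the index into the audio/video record.
        return min(s.t for s in self.strokes)

def merge_strokes(strokes: List[Stroke],
                  max_gap_s: float = 1.5,
                  max_dist: float = 40.0) -> List[StrokeGroup]:
    """Greedy temporal+spatial merge: a stroke joins the previous group when it
    was written within max_gap_s seconds and max_dist units of the last stroke."""
    groups: List[StrokeGroup] = []
    for s in sorted(strokes, key=lambda s: s.t):
        if groups:
            last = groups[-1].strokes[-1]
            close_in_time = s.t - last.t <= max_gap_s
            close_in_space = (abs(s.x_min - last.x_max) <= max_dist
                              and abs(s.y_min - last.y_min) <= max_dist)
            if close_in_time and close_in_space:
                groups[-1].strokes.append(s)
                continue
        groups.append(StrokeGroup([s]))
    return groups
```

A dynamic variant, in the spirit of Chiu and Wilcox's hierarchical agglomeration, would repeatedly merge the closest pair of groups rather than making a single greedy pass.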

2.2 Error-Prone Interaction for Recognition-Based Interaction

When used for recognition-based tasks, natural interfaces come with a new set of problems: they permit new and more kinds of mistakes. When recognition errors occur, the initial reaction of system designers is to try to eliminate them, e.g., by improving recognition accuracy. However, Van Buskirk and LaLomia [1995] found that a reduction of 5–10% in the absolute error rate is necessary before the majority of people will even notice a difference in a speech recognition system.

Worse yet, eliminating errors may not be possible. Even humans make mistakes when dealing with these same forms of communication. For an example, consider handwriting recognition. Even the most expert handwriting recognizers (humans) can have a recognition accuracy as low as 54% [Schomaker 1994]. Human accuracy increases to 88% for cursive handwriting [Schomaker 1994], and 96.8% for printed handwriting [Frankish et al. 1992], but it is never perfect. This evidence all suggests that computer handwriting recognition will never be perfect. Indeed, computer-based recognizers are even more error-prone than humans. The data they can use is often less fine-grained than what humans are able to sense. They have less processing power. And variables such as fatigue can cause usage data to differ significantly from training data, causing reduced recognition accuracy over time [Frankish et al. 1992].

On the other hand, recognition accuracy is not the only determinant of user satisfaction. Both the complexity of error recovery dialogues [Zajicek and Hewitt 1990] and the value-added benefit for any given effort [Frankish et al. 1995] affect user satisfaction. For example, Frankish et al. [1995] found that users were less frustrated by recognition errors when the task was to enter a command in a form than when they were writing journal entries. They suggest that the pay-back for entering a single word in the case of a command is much larger when compared with the effort of entering the word in a paragraph of a journal entry.

Error handling is not a new problem. In fact, it is endemic to the design of computer systems that attempt to mimic human abilities. Research in the area of error handling for recognition technologies must assume that errors will occur, and then answer questions about the best ways to deal with them. Several research areas for error handling of recognition-based interfaces have emerged:

—Error reduction: This involves research into improving recognition technology in order to eliminate or reduce errors. It has been the focus of extensive research, and could easily be the subject of a whole paper on its own. Evidence suggests that its holy grail, the elimination of errors, is probably not achievable.


—Error discovery: Before either the system or the user can take any action related to a given error, one of them has to know that the error has occurred. The system may be told of an error through explicit user input, and can help the user to find errors through effective output of uncertain interpretations of recognized input. Three techniques are used to automate such error discovery—thresholding of confidence measures, historical statistics [Marx and Schmandt 1994], and explicit rule specification [Baber and Hone 1993].

—Reusable infrastructure for error correction: Toolkits provide reusable components and are most useful when a class of common, similar problems exists. Interfaces for error handling would benefit tremendously from a toolkit that presents a library of error-handling techniques for recognition-based input. Such a toolkit would have to handle the inherent ambiguities that arise when multiple interpretations are generated for some raw input (a sketch of this idea follows this list). A prototype toolkit has been proposed by Mankoff et al. [2000] to support reusable recovery techniques, but many challenges remain.
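The sketch below illustrates how a reusable component might combine confidence thresholding with retained ambiguity: a clearly dominant interpretation is accepted, hopeless input is rejected, and otherwise the competing candidates are surfaced for the user to choose among. The n-best representation, thresholds, and three-way policy are illustrative assumptions, not the design of Mankoff et al.'s toolkit.

```python
from typing import List, Tuple

# An n-best list: candidate interpretations of one piece of recognized input,
# each paired with a recognizer confidence score in [0, 1].
NBest = List[Tuple[str, float]]

def resolve(nbest: NBest, accept: float = 0.9, reject: float = 0.4):
    """Three-way policy: auto-accept a clearly dominant candidate, discard
    hopeless input, otherwise keep the ambiguity and ask the user to choose."""
    nbest = sorted(nbest, key=lambda c: c[1], reverse=True)
    best_text, best_conf = nbest[0]
    if best_conf >= accept and (len(nbest) == 1 or best_conf - nbest[1][1] > 0.2):
        return ("accept", best_text)                         # silently accept
    if best_conf < reject:
        return ("reject", None)                              # ask the user to re-enter
    return ("choose", [text for text, _ in nbest[:3]])       # surface top candidates

# Example: a handwriting recognizer returns several readings of one word.
print(resolve([("ubiquitous", 0.62), ("ubiquity", 0.58), ("obliquitous", 0.11)]))
# -> ('choose', ['ubiquitous', 'ubiquity', 'obliquitous'])
```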

3. CONTEXT-AWARE COMPUTING

Two compelling early demonstrations of ubicomp were the Olivetti Research Lab’s Active Badge [Want et al. 1992] and the Xerox PARCTab [Want et al. 1995], both location-aware appliances. These devices leverage a simple piece of context, user location, and provide valuable services (automatic call forwarding for a phone system, automatically updated maps of user locations in an office). Whereas the connection between computational devices and the physical world is not new—control systems and autonomously guided satellites and missiles are other examples—these simple location-aware appliances are perhaps the first demonstration of linking implicit human activity with computational services that serve to augment general human activity.

Location is a common piece of context used in application development. The most widespread applications have been GPS-based car navigation systems and handheld “tour guide” systems that vary the content displayed (video or audio) by a hand-held unit given the user’s physical location in an exhibit area [Abowd et al. 1997; Bederson 1995; Cheverst et al. 1998; Opperman and Specht 1998]. Another important piece of context is recognizing individual objects. Earlier systems focused on recognizing some sort of barcode or identifying tag, while recent work includes the use of vision-based recognition. Fitzmaurice et al. [1993; 1995] demonstrated using a hand-held device to “see inside” walls and pieces of machinery. Rekimoto and Nagao’s [1995] NaviCam (see Figure 1) recognized color barcodes, overlaying additional information about objects on a hand-held video display. Recent efforts [Jebara et al. 1997] are attempting to substitute visual object recognition strategies so that objects do not have to be individually tagged.


Although numerous systems that leverage a person’s identity and/or location have been demonstrated, these systems are still difficult to implement. Salber et al. [1999] created a “context toolkit” that simplifies designing, implementing, and evolving context-aware applications. This work emphasizes the strict separation of context sensing and storage from application-specific reaction to contextual information, and this separation facilitates the construction of context-aware applications. Mynatt et al. [1998] point to the common design challenge of creating a believable experience with context-aware interfaces, noting that the responsiveness of the interface is key to the person associating additional displays with their movements in the physical world.
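The separation of concerns just described can be illustrated with a minimal sketch: sensing is encapsulated behind a subscription interface, and application-specific reaction lives entirely on the subscriber side. This is not the API of Salber et al.'s context toolkit; the class, callback, and example values are our own illustrative assumptions.

```python
from typing import Callable, Dict, List

class ContextWidget:
    """Encapsulates one source of sensed context (e.g., a location beacon).
    Applications subscribe to updates; they never talk to the sensor directly."""
    def __init__(self, name: str):
        self.name = name
        self._subscribers: List[Callable[[Dict], None]] = []

    def subscribe(self, callback: Callable[[Dict], None]) -> None:
        self._subscribers.append(callback)

    def update(self, attributes: Dict) -> None:
        # Called by the sensing layer; fan the new context out to applications.
        for cb in self._subscribers:
            cb(attributes)

# Application-specific reaction: forward phone calls to wherever the person is.
location_widget = ContextWidget("room-level location")

def forward_calls(ctx: Dict) -> None:
    print(f"Forwarding {ctx['person']}'s calls to the phone in {ctx['room']}")

location_widget.subscribe(forward_calls)
location_widget.update({"person": "abowd", "room": "office 253"})
```

Because the application only sees attribute dictionaries, the underlying sensor (badge network, GPS, vision) can be swapped without changing the application code, which is the point of the separation.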

In many ways, we have just scratched the surface of context-aware computing, with many issues still to be addressed. Here we will discuss challenges in incorporating more context information, representing context, ubiquitous access to context sensing and context fusion, and the coupling of context and natural interaction to provide effective augmented reality.

3.1 What Is Context?

Fig. 1. In Rekimoto and Nagao’s [1995] NaviCam system, a hand-held device recognizes tagged objects and then overlays context-sensitive information.

There is more to context than position and identity. Most context-aware systems still do not incorporate knowledge about time, history (recent or long past), other people than the user, as well as many other pieces of information often available in our environment. Although a complete definition of context is elusive, the “five W’s” of context are a good minimal set of necessary context:

—Who: Current systems focus their interaction on the identity of one particular user, rarely incorporating identity information about other people in the environment. As human beings, we tailor our activities and recall events from the past based on the presence of other people.

—What: The interaction in current systems either assumes what the user is doing or leaves the question open. Perceiving and interpreting human activity is a difficult problem. Nevertheless, interaction with continuously worn, context-driven devices will likely need to incorporate interpretations of human activity to be able to provide useful information.

—Where: In many ways, the “where” component of context has been explored more than the others. Of particular interest is coupling notions of “where” with other contextual information, such as “when.” Some tour guide systems have theorized about learning from a history of movements in the physical world, perhaps to tailor information display based on the perceived path of interest by the user. Again these ideas need fuller exploration.

—When: With the exception of using time as an index into a captured record or summarizing how long a person has been at a particular location, most context-driven applications are unaware of the passage of time. Of particular interest is understanding relative changes in time as an aid for interpreting human activity. For example, brief visits at an exhibit could be indicative of a general lack of interest. Additionally, when a baseline of behavior can be established, action that violates a perceived pattern would be of particular interest. For example, a context-aware home might notice when an elderly person deviated from a typically active morning routine.

—Why: Even more challenging than perceiving “what” a person is doing is understanding “why” that person is doing it. Sensing other forms of contextual information that could give an indication of a person’s affective state [Picard 1997], such as body temperature, heart rate, and galvanic skin response, may be a useful place to start.

3.2 Representations of Context

Related to the definition of context is the question of how to represent context. Without good representations for context, applications developers are left to develop ad hoc and limited schemes for storing and manipulating this key information. The evolution of more sophisticated representations will enable a wider range of capabilities and a true separation of sensing context from the programmable reaction to that context.
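As a starting point for discussion, the sketch below shows one deliberately minimal representation organized around the five W's above, with an attached confidence value. The field names, types, and example values are illustrative assumptions, not a proposed standard.

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional, Tuple

@dataclass
class ContextRecord:
    """One observation of context, organized around the 'five W's'.
    Every field is optional because no sensor suite covers them all, and the
    confidence field acknowledges that sensing is never fully reliable."""
    who: Optional[str] = None                    # identity of the person(s) present
    what: Optional[str] = None                   # interpreted activity, if any
    where: Optional[Tuple[float, float]] = None  # position, e.g., (latitude, longitude)
    when: Optional[datetime] = None              # time of the observation
    why: Optional[str] = None                    # inferred intent or affect (hardest to sense)
    confidence: float = 1.0                      # how much an application should trust this record

# Example (values are illustrative): a badge sighting with no activity interpretation.
reading = ContextRecord(who="mynatt", where=(33.7772, -84.3963),
                        when=datetime.now(), confidence=0.8)
```

A richer representation would also record the sensing source and history of each field, which is what allows the reaction logic to be written against context rather than against particular sensors.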



3.3 The Ubiquity of Context Sensing—Context Fusion

An obvious challenge of context-aware computing is making it truly ubiquitous. Having certain context, in particular positioning information, has been shown useful. However, there are few truly ubiquitous, single-source context services. Positioning is a good example. GPS does not work indoors and is even suspect in some urban regions as well. There are a variety of indoor positioning schemes as well, with differing characteristics in terms of cost, range, granularity, and requirements for tagging, and no single solution is likely to ever meet all requirements.

The solution for obtaining ubiquitous context is to assemble context information from a combination of related context services. Such context fusion is similar in intent to the related, and well-researched, area of sensor fusion. Context fusion must handle the seamless handing off of sensing responsibility between boundaries of different context services. Negotiation and resolution strategies need to integrate information from competing context services when the same piece of context is concurrently provided by more than one service. This fusion is also required because sensing technologies are not 100% reliable or accurate. Combining measures from multiple sources could increase the confidence value for a particular interpretation. In short, context fusion assists in providing reliable ubiquitous context by combining services in parallel, to offset noise in the signal, and sequentially, to provide greater coverage.
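The parallel and sequential aspects of context fusion can be illustrated with a small sketch for location: concurrent estimates are resolved by confidence, and a degraded GPS reading effectively hands off to an indoor badge network. The service names and confidence values are illustrative assumptions, and a real fusion service would combine estimates statistically rather than simply picking a winner.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class LocationEstimate:
    source: str           # e.g., "gps", "ir-badge", "rf-tag"
    place: Optional[str]  # symbolic location, if the service provides one
    confidence: float     # self-reported reliability in [0, 1]

def fuse(estimates: List[LocationEstimate]) -> Optional[LocationEstimate]:
    """Parallel fusion: when several services report concurrently, prefer the
    most confident one after discarding readings that are pure noise."""
    usable = [e for e in estimates if e.confidence > 0.2]
    return max(usable, key=lambda e: e.confidence, default=None)

# Sequential handoff: indoors GPS degrades, so the badge network takes over.
outside = fuse([LocationEstimate("gps", "parking lot", 0.9)])
inside = fuse([LocationEstimate("gps", None, 0.1),
               LocationEstimate("ir-badge", "room 383", 0.85)])
print(outside.source, inside.source)   # gps  ir-badge
```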

3.4 Coupling Context-Aware and Natural Interaction—Augmented Reality

The goal of many context-aware applications is to allow the user to receive, in real-time, information based on actions in the physical world. The tour guide systems are a good example—the user’s movements in an exhibit triggered the display of additional, context-sensitive information. These applications typically used separate, albeit portable, devices that require attention away from the rest of the physical world. The best metaphor to describe these interactions is that the user is “probing the world with a tool,” similar to tools such as electronic stud finders and geiger counters. By incorporating augmented vision and augmented hearing displays, as well as natural input such as voice and gesture, we will more closely integrate context-aware interaction with the physical world in which it resides [MacIntyre and Feiner 1996; MacIntyre and Mynatt 1998; Starner et al. 1997]. In these interactions, the system is modifying how a user perceives the physical world. This tighter integration of information and perception should allow for more natural, seamless, hands-busy, and serendipitous interaction (see Figure 2).

4. AUTOMATED CAPTURE AND ACCESS TO LIVE EXPERIENCES

Much of our life in business and academia is spent listening to and recording, more or less accurately, the events that surround us, and then trying to remember the important pieces of information from those events. There is clear value, and potential danger, in using computational resources to augment the inefficiency of human record-taking, especially when there are multiple streams of related information that are virtually impossible to capture as a whole manually. Tools to support automated capture of and access to live experiences can remove the burden of doing something humans are not good at (i.e., recording) so that they can focus attention on activities they are good at (i.e., indicating relationships, summarizing, and interpreting).

Fig. 2. In the KARMA system (on the left) augmented views required heavy, clunky, head-mounted displays [Feiner et al. 1993]. Now lightweight glasses, such as the ones shown on the right above from MicroOptical, provide similar display capabilities.

There has been a good deal of research related to this general capture-and-access theme, particularly for meeting-room/classroom environments and personal note-taking. Early work by Schmandt and Arons [1985] and Hindus and Schmandt [1992] captured audio from phone conversations and provided ways to access the content of the recorded conversations. The two systems, PhoneSlave and Xcapture, treated audio as uninterpreted data and were successful using simple techniques to provide informative overviews of live conversations. More recent research efforts have tried to capture other types of input, such as freeform ink. The Tivoli system used a suite of software tools to support a scribe at a meeting [Minneman et al. 1995; Moran et al. 1996; 1997b] as well as some electronic whiteboard technology—the LiveBoard [Elrod et al. 1992]—to support group discussion. Artifacts produced on the electronic whiteboard during the meeting are timestamped. This temporal information is used after the meeting to index into recorded audio or video, thus providing the scribe a richer set of notes from the meeting. Similar integration between recorded ink annotations and audio/video is supported in Classroom 2000 for university lectures [Abowd 1999; Abowd et al. 1998a; 1998b], with a greater emphasis on automating the postproduction of captured material into universally accessible interfaces for a large population of students. Other capture systems, such as Authoring on the Fly [Bacher and Ottmann 1996] and Cornell’s Lecture Browser [Mukhopadhyay and Smith 1999], also focus on capture of presentations, with attention to capturing arbitrary program interactions and production-quality video capture from multiple sources.

These systems focus on the capture of a public, group experience. Other capture systems, such as Marquee [Weber and Poon 1994], Filochat [Whittaker et al. 1994], We-Met [Wolf and Rhyne 1992], the Audio Notebook [Stifelman 1996; 1997], Dynomite [Wilcox et al. 1997], NotePals [Davis et al. 1999], and MRAS [White et al. 1998], focus on capture for the individual. StuPad [Truong et al. 1999] was the first system to provide mixed public and personal capture.

Most of the above efforts produce some sort of multimedia interface to review the captured experience. By focusing on this postproduction phase, some systems provide automated support for multiple camera fusion, integration of various presentation media, and content-based retrieval mechanisms to help search through a large repository of captured information. The postproduction results can then be accessed through a multimedia interface, typically distributed via the Web. Abowd [1999] provides a review of some of these research and commercial systems.

In all of these cases, the emphasis on ubiquity is clearly seen in the separate capture and access phases. Electronic capture is moved away from traditional devices, like the keyboard, and brought closer to the user in the form of pen-based interfaces or actual pen and paper. Input in the form of voice and gesture is also accepted and is either treated as raw data or further interpreted to provide more understanding of the captured experience.
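The temporal indexing shared by many of these capture systems can be illustrated with a small sketch: a timestamped annotation maps to a playback offset into the recorded audio or video, and an inverse query finds the notes written near a given media position. The lead-in window and helper names are illustrative assumptions, not the implementation of any cited system.

```python
from bisect import bisect_right
from dataclasses import dataclass
from typing import List

@dataclass
class Annotation:
    text: str
    t: float      # seconds since the start of the captured session

def seek_offset(annotation: Annotation, lead_in: float = 5.0) -> float:
    """Map a timestamped whiteboard annotation to a playback offset in the
    recorded audio/video, backing up a few seconds of lead-in context."""
    return max(0.0, annotation.t - lead_in)

def annotations_near(annotations: List[Annotation], t: float, window: float = 30.0):
    """Inverse query: which notes were written around a given media position?
    Assumes the annotation list is sorted by time."""
    times = [a.t for a in annotations]
    hi = bisect_right(times, t + window)
    return [a for a in annotations[:hi] if a.t >= t - window]

notes = [Annotation("definition of ubicomp", 62.0),
         Annotation("three interaction themes", 480.5)]
print(seek_offset(notes[1]))          # 475.5 -> start playback just before the note
print(annotations_near(notes, 70.0))  # the note written around minute one
```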

4.1 Challenges in Capture and Access

Despite substantial research and advances in automated capture systems, there are a number of open research issues, which we summarize here. We separate out issues primarily associated with capture from those primarily associated with access.

4.1.1 Capture. We have mentioned earlier the importance of having a good driving application for ubicomp research. In the capture domain, the main compelling applications have been for meeting support and education/training. These are indeed compelling application areas. In particular, our evidence in Classroom 2000 points to overwhelming acceptance of capture from the student perspective [Abowd 1999]. There are many more possibilities, however, for exploring capture in equally compelling domains:

—Many of us record the special events in our lives—vacations, birthday parties, visits from relatives and friends—and we often spend time, years later, reflecting and remembering the events through the recordings on film and in diaries. How many times have we wished we had a camera at a particularly precious moment in our lives (a child’s first steps) only to fumble for the recording device and miss the moment? How difficult is it sometimes to find the picture or film of a significant event?


—In many collaborative design activities, the critical insights or decisions are often made in informal settings and are usually not documented properly. Technical exchanges often flow quite freely in opportunistic encounters. Even in more formal design meetings, the rich exchange of information and discussions around artifacts, such as storyboards or architectural recommendations, is often very poorly captured. Recently, we have begun experimenting with support to capture both informal brainstorming activities [Brotherton et al. 1999] and structured design meetings [Richter et al. 1999].

—Maintenance of a building might be better supported if we captured a record of the actual construction of the building—in contrast to the building plans. When repairs are needed, the appropriate technician could “replay” the construction and maintenance history of the relevant building artifact in order to determine the right course of repair.

With the exception of the Audio Notebook, NotePals, and Cornell’s Lecture Browser, there has been little work on capturing artifacts in the physical world and making them easily accessible in the access phase. The emergence of low-cost capture hardware, such as the CrossPad and the mimio from Virtual Ink, will lead more researchers to work in this area.

Much of the capture currently being done is for what we would call raw streams of information that are captured mainly for the purpose of direct playback. No further analysis on those streams is done. However, it is often useful to derive additional information from a simple stream to provide a greater understanding of the live event. For example, Stifelman used results from discourse analysis to further segment the captured audio stream and make better predictions about when new topics commenced in a discussion [Stifelman 1997]. Similarly, Chiu and Wilcox [1998] proposed a hierarchical agglomeration technique for using pause detection to segment and associate both ink and audio. Other computational perception techniques can be used to analyze the simple audio, ink, or video signals.

Another application of signal analysis is to improve the recording of raw streams. How can we automate certain well-known production practices that merge multiple camera feeds into a single, coherent, high-quality video that can be viewed later? Single, fixed camera angles are not sufficient to capture the salient parts of a live experience, but when we scale a system like Classroom 2000 to an entire campus, we cannot afford to pay technicians to sit in each of the classrooms. The single biggest challenge here is being able to determine the focus of attention for the group, and more difficult, for each individual at a live event.
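As an illustration of the kind of signal analysis discussed above, the sketch below segments a captured audio stream at long pauses using per-frame energy. It is a simple stand-in for the discourse-based and hierarchical agglomeration techniques cited, with frame size and thresholds chosen arbitrarily for the example.

```python
from typing import List, Optional, Tuple

def segment_by_pauses(energy: List[float],
                      frame_s: float = 0.05,
                      silence_thresh: float = 0.02,
                      min_pause_s: float = 1.0) -> List[Tuple[float, float]]:
    """Split a captured audio stream into (start, end) segments, in seconds,
    wherever a run of quiet frames lasts at least min_pause_s."""
    min_run = int(min_pause_s / frame_s)
    segments: List[Tuple[float, float]] = []
    start: Optional[int] = 0    # frame index where the open segment began
    run = 0                     # length of the current run of silent frames
    for i, e in enumerate(energy):
        run = run + 1 if e < silence_thresh else 0
        if run == min_run and start is not None:
            end = i - min_run + 1           # frame where the pause began
            if end > start:
                segments.append((start * frame_s, end * frame_s))
            start = None                    # inside a pause; no segment is open
        elif run == 0 and start is None:
            start = i                       # speech resumed: open a new segment
    if start is not None and start < len(energy):
        segments.append((start * frame_s, len(energy) * frame_s))
    return segments
```

The resulting segment boundaries could then be aligned with ink timestamps, in the spirit of Chiu and Wilcox, so that a pause in speech also closes the corresponding group of penstrokes.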

4.1.2 Access. In the access phase, we need to provide a number of playback capabilities. The simplest is to play back in real time, but there are often situations in which this is inappropriate or overly inefficient. In reviewing a lecture for an exam, a student does not always want to sit through an entire lecture again, but he or she might want to pinpoint a particular topic of discussion and replay only that portion. Alternatively, a summarization of the experience which gleans salient points from across an entire captured session might be more appropriate.

Synchronization of multiple captured streams during playback is vital. Commercial streaming products, such as RealNetworks G2/SMIL and Microsoft’s MediaPlayer/ASF, are emerging standards to allow for powerful synchronization of programmer-defined media streams. However, it is not clear that any of these products will support the foreshadowing of streams so that a user can see what lies ahead in reviewing a stream. Such foreshadowing can help a user skim more quickly to a point of interest.

In most of the systems, the captured material is static upon reaching the access phase. Of course, there are often cases where annotating or revising captured material is appropriate, as well as then revising revised notes and so on. Although versioning is not a new problem to computer scientists, there are numerous challenges in providing an intuitive interface to multiple versions of captured material, especially when some of the material is already time-based, such as audio and video. A timeline is an effective interface for manipulating and browsing a captured session, but when the time associated with a captured artifact is split up into a number of non-contiguous time segments, the usefulness of the timeline is at least questionable (a sketch of one possible mapping appears at the end of this section). Newer time-based interaction techniques, such as Lifestreams [Fertig et al. 1996], Timewarp [Edwards and Mynatt 1997], and time-machine computing [Rekimoto 1999], are good starting points.

Finally, and perhaps most challenging, as these systems move from personalized systems to capturing events in more public settings, privacy concerns for the capture and later access of this material increase. Although these issues must be addressed in the specific design of each system, we still need general techniques for tagging material and authenticating access. We will discuss these issues later in this article.
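One way to keep a timeline usable when an artifact's history occupies non-contiguous time segments is to condense those segments onto a single contiguous slider and translate slider positions back to session time. The sketch below is an illustrative assumption about how such a mapping might work, not a description of any of the cited systems.

```python
from typing import Callable, List, Tuple

Segment = Tuple[float, float]   # (start, end) in session time, seconds

def condensed_timeline(segments: List[Segment]) -> Tuple[float, Callable[[float], float]]:
    """Map non-contiguous session segments onto one contiguous timeline so a
    single slider can browse them; returns (total length, mapping function)."""
    segments = sorted(segments)
    offsets = []        # (slider offset, segment start, segment end)
    total = 0.0
    for start, end in segments:
        offsets.append((total, start, end))
        total += end - start

    def to_session_time(t: float) -> float:
        # Translate a position on the condensed slider back to session time.
        for offset, start, end in offsets:
            if t < offset + (end - start):
                return start + (t - offset)
        return segments[-1][1]

    return total, to_session_time

length, seek = condensed_timeline([(0, 120), (600, 780), (2400, 2460)])
print(length)        # 360.0 seconds of actual material on the slider
print(seek(130.0))   # 610.0 -> ten seconds into the second segment
```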

5. TOWARD EVERYDAY COMPUTING

Earlier, we described an emerging area of interaction research, everyday computing, which results from considering the consequences of scaling ubiquitous computing with respect to time. Just as pushing the availability of computing away from the traditional desktop fundamentally changes the relationship between humans and computers, providing continuous interaction moves computing from a localized tool to a constant presence. Our motivations for everyday computing stem from wanting to support the informal and unstructured activities typical of much of our everyday lives. These activities are continuous in time, a constant ebb and flow of action that has no clear starting or ending point. Familiar examples are orchestrating tasks, communicating with family and friends, and managing information.

Designing for everyday computing requires addressing these features of informal, daily activities:

—They rarely have a clear beginning or end: Either as a fundamental activity, such as communication, or as a long-term endeavor, such as research in human-computer interaction, these activities have no point of closure. Information from the past is often recycled. Although new names may appear in an address book or new items on a to-do list, the basic activities of communication or information management do not cease. A basic tenet in HCI is designing for closure. Given a goal, such as spell-checking a document, the steps necessary to accomplish that goal should be intuitively ordered, with the load on short-term memory held to a reasonable limit. The dialogue is constrained so that the goal is accomplished before the user begins the next endeavor. When designing for an activity, principles such as providing visibility of the current state, freedom in dialogue, and overall simplicity in features play a prominent role.

—Interruption is expected: Thinking of these activities as continuous, albeit possibly operating in the background, is a useful conceptualization. One side-effect is that resumption of an activity does not start at a consistent point, but is related to the state prior to interruption. Interaction must be modeled as a sequence of steps that will, at some point, be resumed and built upon. In addition to representing past interaction, the interface can remind the user of actions left uncompleted (see the sketch following this list).

—Multiple activities operate concurrently: Since these activities are continuous, the need for context-shifting among multiple activities is assumed. Application interfaces can allow the user to monitor a background activity, assisting the user in knowing when he or she should resume that activity. Resumption may be opportunistic, based on the availability of other people, or on the recent arrival of needed information. For example, users may want to resume an activity based on the number of related events that have transpired, such as reading messages in a newsgroup only after a reasonable number of messages have been previously posted. To design for background awareness, interfaces should support multiple levels of “intrusiveness” in conveying monitoring information that matches the relative urgency and importance of events. Current desktop interfaces only provide a small beginning in addressing these issues with multiple windows in a desktop interface. With minimal screen real estate, users must manage opening, closing, and restacking the many windows associated with a variety of tasks. Simple awareness cues are included in some desktop icons, indicating that new email has been received for example, but there are few controls for creating levels of notification to meet different awareness needs. The Rooms interface presented a compelling interface for spatially organizing documents and applications in multiple persistent working spaces [Card et al. 1999; Henderson et al. 1986]. This standard has yet to be met by current commercial “task” bars for changing application focus. A useful extension to Rooms would be both to provide awareness of “background” rooms, and to assist the user in remembering past activity when returning to a room.



—Time is an important discriminator: Time is a fundamental human measuring stick, although it is rarely represented in computer interfaces. Whether the last conversation with a family member was last week or five minutes ago is relevant when interpreting an incoming call from that person. When searching for a paper on a desk, whether it was last seen yesterday or last month informs the search. There are numerous ways to incorporate time into human-computer interfaces [Edwards and Mynatt 1997; Fertig et al. 1996; Rekimoto 1999]. As we try to regain our working state, interfaces can represent past events contingent on the length of time (minutes, hours, days) since the last interaction. As applications interpret real-world events, such as deciding how to handle an incoming phone call or to react to the arrival at the local grocery store, they can utilize timing information to tailor their interaction.

—Associative models of information are needed: Hierarchical models of information are a good match for well-defined tasks, while models of information for activities are principally associative, since information is often reused on multiple occasions, from multiple perspectives. For example, assume you have been saving email from colleagues, friends, and family for a long time. When dealing with current mail, you may attempt to organize it into a hierarchy of folders on various topics. Over time, this organization has likely changed, resulting in a morass of messages that can be searched with varying degrees of success. Likewise, interfaces for to-do lists are often failures given the difficulty in organizing items in well-defined lists. Associative and context-rich models of organization support activities by allowing the user to reacquire the information from numerous points of view. These views are inherent in the need to resume an activity in many ways, for many reasons. For example, users may want to retrieve information based on current context, such as when someone enters their office or when they arrive at the grocery store. They may also remember information relative to other current information, e.g., a document last edited some weeks ago or the document that a colleague circulated about some similar topic.
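The sketch below, referenced from the list above, illustrates how an interface model for everyday computing might represent an activity that is suspended and resumed rather than closed, and how accumulated events could drive different levels of peripheral "intrusiveness." The class, thresholds, and levels are illustrative assumptions rather than a proposed design.

```python
from dataclasses import dataclass, field
from datetime import datetime
from enum import Enum
from typing import List

class Intrusiveness(Enum):
    """Levels of peripheral awareness, from ignorable to demanding attention."""
    AMBIENT = 1
    NOTICEABLE = 2
    INTERRUPTING = 3

@dataclass
class Activity:
    """A long-lived activity with no clear end: it is suspended and resumed,
    and its pending events determine how loudly it asks for attention."""
    name: str
    last_touched: datetime
    pending_events: int = 0
    unfinished_steps: List[str] = field(default_factory=list)

    def suspend(self, remaining: List[str]) -> None:
        self.unfinished_steps = remaining            # remember where we left off
        self.last_touched = datetime.now()

    def resumption_summary(self) -> str:
        idle = datetime.now() - self.last_touched
        return (f"'{self.name}' idle for {idle.days} day(s); "
                f"left off at: {self.unfinished_steps[:1] or ['(nothing pending)']}")

    def awareness_level(self, threshold: int = 10) -> Intrusiveness:
        # Escalate only after enough related events accumulate, e.g., unread
        # newsgroup messages reaching a number worth a context shift.
        if self.pending_events >= threshold:
            return Intrusiveness.INTERRUPTING
        return Intrusiveness.NOTICEABLE if self.pending_events else Intrusiveness.AMBIENT
```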

As computing becomes more ubiquitously available, it is imperative that the tools offered reflect their role in longer-term activities. Although principles in everyday computing can be applied to desktop interfaces, these design challenges are most relevant given a continuously changing user context. In mobile scenarios, users shift between activities while the computing resources available to them also vary for different environments. Even in an office setting, various tools and objects play multiple roles for different activities. For example, use of a computer-augmented whiteboard varies based on contextual information such as people present. Different physical objects such as a paper file or an ambient display can provide entry points and background information for activities. This distribution of interaction in the physical world is implicit in the notion of everyday computing, and thus clearly relevant to research in ubiquitous computing.



5.1 Synergy Among Themes

Research in everyday computing also continues to explore the three earlier interaction themes, but with the focus of designing a continuously available environment. Ishii’s work in tangible media explores using natural interfaces to support communication and background awareness [Ishii and Ullmer 1997]. Current efforts in “Roomware” [Streitz et al. 1999] aim to create wall-sized and table-like interaction areas that support a greater range of informal human activity.

With respect to context-aware interaction, Audio Aura [Mynatt et al. 1998] is clearly related to previous tour guide systems, as a change in location triggers information delivery on a portable device. The motivation for Audio Aura, however, is to continuously augment the background auditory periphery of the user. By adding dynamic information about the activity of colleagues and communication channels (e.g., email), Audio Aura enhances the perceptible sphere of information available while the user continues with daily activities.

Likewise, applications for automated capture and access are moving into less structured environments. The Remembrance Agent [Rhodes 1997; Rhodes and Starner 1996] retrieves information based on physical context information, including visual recognition. As the user can instruct the system about what to remember, the agent becomes a storehouse of everyday information that is continuously available, but indexed based on physical location. An unmet goal, first proposed by Bush [1945], is the design of personal memory containers that record continuously and later try to provide useful indices and summaries of the daily information they capture (see Lamming and Flynn [1994]).

5.2 Research Directions in Everyday Computing

Everyday computing offers many challenges to the HCI research community. In our current and future work, we are focusing on the following challenges:

—Design a continuously present computer interface: There are multiple models for how to portray computers that are ubiquitous, although none of these models is wholly satisfying. The notion of an information appliance [Norman 1998] typically reflects a special-purpose device that sits dumbly in the background without any knowledge of on-going activity. These interfaces often borrow from traditional GUI concepts and from consumer electronics. Computational systems that continue to operate in the background, perhaps learning from past activity and acting opportunistically, are typically represented as anthropomorphized agents. However, it is doubtful that every interface should be based on dialogue with a talking head or human-oriented personality. Research in wearables explores continually worn interfaces [Starner et al. 1997], but these are limited by the current input and display technologies and are typically rudimentary text-based interfaces.



—Presenting information at different levels of the periphery of human attention: Despite increasing interest in tangible media and peripheral awareness, especially in computer-supported collaborative work (CSCW) and wearable computing, current interfaces typically present a generic peripheral backdrop with no mechanism for the user, or the background task, to move the peripheral information into the foreground of attention. Our current design experiments are aimed at creating peripheral interfaces that can operate at different levels of the user’s periphery.

—Connecting events in the physical and virtual worlds: People operate in two disconnected spaces: the virtual space of email, documents, and Web pages and the physical space of face-to-face interactions, books, and paper files. Yet human activity is coordinated across these two spaces. Despite efforts as early as the Digital Desk [Wellner 1993], there is much work left to be done to understand how to combine information from these spaces to better match how people conceptualize their own endeavors.

—Modifying traditional HCI methods to support designing for informal, peripheral, and opportunistic behavior: There is no one methodology for understanding the role of computers in our everyday lives. However, combining information from methods as different as laboratory experiments and ethnographic observations is far from simple. In our research and classroom projects, our goal is to learn by doing, by interrogating the results we derive from different evaluation strategies. We have consciously chosen a spectrum of methods that we believe match the questions we are asking. Learning how these methods inform each other and how their results can be combined will be an on-going effort throughout our work. We continue this discussion in the next section on evaluating ubicomp systems.

6. ADDITIONAL CHALLENGES FOR UBICOMP

Two important topics for ubicomp research—evaluation and social implications—cut across all themes of research, so we address them here.

6.1 Evaluating Ubicomp Systems

In order to understand the impact of ubiquitous computing on everyday life, we navigate a delicate balance between prediction of how novel technologies will serve a real human need and observation of authentic use and subsequent coevolution of human activities and novel technologies [Carroll and Rosson 1991]. Formative and summative evaluation of ubicomp systems is difficult for several reasons, which we will discuss. These challenges are why we see relatively little published from an evaluation or end-user perspective in the ubicomp community. A notable exception is the work published by Xerox PARC researchers on the use of the Tivoli capture system in the context of technical meetings [Moran et al. 1996]. Since research in ubiquitous computing will have limited impact in the HCI community until it respects the need for evaluation, we have some advice for those wishing to undertake the challenges.

6.1.1 Finding a Human Need. The first major difficulty in evaluating a ubicomp system is simply having a reliable system to evaluate. The technology used to create ubicomp systems is often on the cutting edge and not well understood by developers, so it is difficult to create reliable and robust systems that support some activity on a continuous basis. Consequently, a good portion of reported ubicomp work remains at this level of unrobust demonstrational prototypes. This kind of research is often criticized as being technocentric, but as we will show, it is still possible to do good user-centered feasibility research with cutting-edge technology.

It is important in doing ubicomp research that a researcher build a compelling story, from the end-user’s perspective, on how any system or infrastructure to be built will be used. The technology must serve a real or perceived human need, because, as Weiser [1993] noted, the whole purpose of ubicomp is to provide applications that serve humans. The purpose of the compelling story is not simply to provide a demonstration vehicle for research results. It is to provide the basis for evaluating the impact of a system on the everyday life of its intended population. The best situation is to build the compelling story around activities that you are exposed to on a continuous basis. In this way, you can create a living laboratory for your work that continually motivates you to “support the story” and provides constant feedback that leads to better understanding of the use.

Designers of a system are not perfect, and mistakes will be made. Since it is already a difficult challenge to build robust ubicomp systems, you should not pay the price of building a sophisticated infrastructure only to find that it falls far short of addressing the goals set forth in the compelling story. You must do some sort of feasibility study of cutting-edge applications before sinking substantial effort into engineering a robust system that can be scrutinized with deeper evaluation. However, these feasibility evaluations must still be driven from an informed, user-centric perspective—the goal is to determine how a system is being used, what kinds of activities users are engaging in with the system, and whether the overall reactions are positive or negative. Answers to these questions will both inform future design as well as future evaluation plans. It is important to understand how a new system is used by its intended population before performing more quantitative studies on its impact.

Case Study: Xerox PARC’s Flatland. Designing ubiquitous computing applications requires designers to project into the future how users will employ these new technologies. Although designing for a currently impossible interaction is not a new HCI problem, this issue is exacerbated by the implied paradigm shift in HCI resulting from the distribution of computing capabilities into the physical environment.

In our design work for Flatland [Mynatt et al. 1999], we employed ethnographic observations of whiteboard use in the office, coupled with questionnaires and interviews, to understand how people used their whiteboards on a daily basis (see Figure 3). The richness of the data from the observations was both inspirational in our design work and a useful constraint. For example, the notion of “hot spots,” portions of the board that users expect to change frequently, was the result of day-to-day observations of real whiteboard use. The data from the observations were key in grounding more in-depth user studies through questionnaires and interviews. Without these data, discussions would too easily slip into what users think they might do. By referring to two weeks of observational data, we were able to uncover and examine the details of daily practice.

Although the technology for our augmented whiteboard was not ready for deployment, or even user testing, we were able to gather a wealth of information from observations and interviews that critically informed our design.

Case Study: Audio Aura. The affordances and usability issues of novel input and output technologies are not well understood when they are first introduced. Often these technologies are still unusable for any real, long-term use setting. Nevertheless, user-centric evaluations are needed to influence subsequent designs. In the design of Audio Aura [Mynatt et al. 1998], we were interested in exploring how peripheral awareness of relevant office activities could be enhanced through use of ambient sound in a mobile setting. Our combination of active badges, wireless headphones, and audio generation was too clunky for real adoption by long-term users. The headphones were socially prohibitive as they covered the ears with large, black shells. The capabilities for the development language, Java, to control sound presentation were too limited for creating rich auditory spaces. Nevertheless, we wanted to understand the potential interaction, knowing that these technological limitations would be removed in the future.

We employed scenarios of interaction, based on informal observations of the Xerox PARC work environment, to guide our design and evaluation. These scenarios incorporated information about how people at PARC work together, including practices such as gathering at the coffee bistro, often dropping by people’s offices for impromptu conversations, and even the physical oddities of the building, such as the long hallways that are the backbone of the layout. By grounding our scenarios in common practices, potential users could reflect on their daily activities when evaluating our designs. The scenarios also helped us understand a particular interaction issue: timing. In one of our scenarios, the communication path between the component technologies was not fast enough to meet the interaction demands. Although the speed could be increased, this modification required balancing a set of trade-offs, namely speed versus scalability, both important for our design goals. In short, the scenarios helped us understand the design space for further exploration.

6.1.2 Evaluating in the Context of Authentic Use. Deeper evaluation results require real use of a system, and this, in turn, requires a deployment into an authentic setting. The scaling dimensions that characterize ubicomp systems—device, space, people, or time—make it impossible to use
