A flexible architecture for Virtual Humans in Networked Collaborative Virtual Environments

Igor Pandzic (1), Tolga Capin (2), Elwin Lee (1), Nadia Magnenat Thalmann (1), Daniel Thalmann (2)

(1) MIRALab - CUI, University of Geneva, 24 rue du Général-Dufour, CH1211 Geneva 4, Switzerland
{Igor.Pandzic,Nadia.Thalmann}@cui.unige.ch
http://miralabwww.unige.ch/

(2) Computer Graphics Laboratory, Swiss Federal Institute of Technology (EPFL), CH1015 Lausanne, Switzerland
{capin, thalmann}@lig.di.epfl.ch
http://ligwww.epfl.ch/
Abstract
Although a lot of research has been going on in the field of Networked Collaborative Virtual Environments, most existing systems still use fairly simple embodiments to represent participants in the environment. Such systems can greatly benefit from a more sophisticated human representation. The users' more natural perception of each other (and of autonomous actors) increases their sense of being together, and thus the overall sense of presence in the environment. Communication facilities are also improved, because a realistic embodiment allows for gestural and facial communication. We present a flexible framework for the integration of virtual humans in Networked Collaborative Virtual Environments. It is based on a modular architecture that allows flexible representation and control of the virtual humans, whether they are controlled by a physical user using all sorts of tracking and other devices, or by an intelligent control program turning them into autonomous actors. The modularity of the system allows for fairly easy extensions and integration with new techniques, making it interesting also as a testbed for various domains, from "classic" VR to psychological experiments. We present results in terms of functionalities, applications tried, and measurements of performance and network traffic with an increasing number of participants in the simulation.
1. Introduction
Networked Collaborative Virtual Environments (NCVEs) are often described as systems that permit users to feel as if they were together in a shared Virtual Environment. Indeed, the feeling of "being together" is extremely important for collaboration, as well as for the sense of presence felt by the participants. A very important factor for the feeling of being together in a virtual world is the way users perceive each other: their embodiment. In a broader sense, this includes not just the graphical appearance but also the way movements, actions and emotions are represented. Although Networked Collaborative Virtual Environments have been a topic of research for quite some time, in most existing systems the embodiments are fairly simple, ranging from primitive cube-like appearances and non-articulated human-like or cartoon-like avatars to articulated body representations using rigid body segments [Barrus 96, Carlsson 93, Macedonia 94, Singh 95]. Ohya et al. [Ohya 95] report the use of human representations with animated bodies and faces in a virtual teleconferencing application. In the approach we adopted when developing the Virtual Life Network (VLNET) system [Capin 95, Thalmann 95, Pandzic 96], more sophisticated virtual humans are simulated, including full anatomically based body articulation, skin deformations and facial animation (figure 1).

Managing multiple instances of such complex representations, and the data flow involved, in the context of a Networked Collaborative Virtual Environment, while maintaining versatile input and interaction options, requires careful consideration of the software architecture. In view of the complexity of the task and the versatility we wanted to achieve, a highly modular, multi-process architecture was a logical choice. Some of the modules are fixed as part of the system core, while others are replaceable external processes, allowing greater flexibility in controlling the events in the environment, in particular the movements and facial expressions of the virtual humans. We first introduce the general problems linked with involving Virtual Humans in NCVEs. The next section presents the overall properties of the VLNET system, followed by a detailed description of the software architecture adopted for the system: the VLNET server, then the VLNET client with its processes, engines, interfaces and drivers. The results section covers some experimental applications of VLNET, as well as performance and network measurements with an increasing number of users. Finally we present our conclusions and discuss possibilities for future work.
Figure 1: An example session of the Virtual Life Network
2. Introducing Virtual Humans in NCVEs
The participant representation in a networked VE system has several functions:

• perception (to see if anyone is around)
• localization (to see where the person is)
• identification (to recognize the person)
• visualization of interest focus (to see where the person's attention is directed)
• visualization of actions (to see what the person is doing)
• communication (lip movement in sync with speech, facial expressions, gestures)

Virtual Humans can fulfill all these functions in an intuitive, natural way, resembling the way we achieve these tasks in real life. Even with limited sensor information, a virtual human frame can be
constructed in the virtual world, reflecting the activities of the real user. Slater and Usoh [Slater 94] indicate that such a body, even if crude, already increases the sense of presence that the participants feel. The participants visualize the environment through the eyes of their virtual actor, and move their virtual body by different means of body control. In addition, introducing human-like autonomous actors for various tasks increases the level of interaction within the virtual environment.

Introducing virtual humans in the distributed virtual environment is a complex task combining several fields of expertise [Capin 97]. The principal components are:

• virtual human simulation, involving real-time animation/deformation of bodies and faces
• virtual environment simulation, involving visual database management and rendering techniques with real-time optimizations
• networking, involving communication of various types of data with varying requirements in terms of bitrate, error resilience and latency
• interaction, involving support of different devices and paradigms
• artificial intelligence (in case autonomous virtual humans are involved), involving decision-making processes and autonomous behaviors

Each of the involved components represents in itself an area of research, and most of them are very complex. When combining them, the interactions between the components and their impact on each other have to be considered. For example, using virtual humans sets new requirements on interaction, which has to allow not only simple interaction with the environment but at the same time the visualization of the actions through the body and face representing the user; the necessity to communicate data through the network forces a more compact representation of face and body animation data. Considering the total complexity of the above components, a divide-and-conquer approach is a logical choice. By splitting the complex task into modules (see figure 3), each with a precise function and with well-defined interfaces between them, several advantages are achieved:

• high flexibility
• easier software management, especially in a teamwork environment
• higher performance
• leveraging the power of multiprocessor hardware or distributed environments when available

Flexibility is particularly important because of the multitude of emerging hardware and software technologies that can potentially be linked with NCVE systems (various input devices and techniques, AI algorithms, real-time data sources driving multi-user applications). This is especially interesting in a research environment, where an NCVE system can be used as a testbed for research in fields such as AI, psychology and medical information systems. In general, a good NCVE system must allow the implementation of different applications while transparently performing its basic tasks (networking, user representation, interaction, rendering...) and letting the application programmer concentrate on the application-specific problems. From the software management point of view, a monolithic system of this complexity would be extremely difficult to manage, in particular by a team of programmers. By carefully assigning tasks to processes and synchronizing them intelligently, higher performance can be achieved [Rohlf 94]. Finally, a multi-process system will naturally harness the power of multiprocessor hardware if it is available. It is also possible to distribute modules on several hosts.
Once it is decided to split the system into modules, roughly corresponding to the components listed above, it is necessary to define in detail the task of each module and the means of communication between them. The next section describes in detail how this is done in the case of the Virtual Life Network system.
3. Virtual Life Network
Based on the considerations from the previous section, we have developed the Virtual Life Network (VLNET) system. From the networking point of view, VLNET is based on a fairly simple client/server architecture. The next two subsections discuss the server and the client architecture in more detail.

3.1. VLNET Server

A VLNET server site consists of an HTTP server and a VLNET Connection Server. They can serve several worlds, which can be either VLNET files or VRML 1.0 files. For each world, a World Server is spawned as necessary, i.e. when a client requests a connection to that particular world. The life of a World Server ends when all clients are disconnected. Figure 2 schematically depicts a VLNET server site with several connected clients.

A VLNET session is initiated by a Client connecting to a particular world designated by a URL. The Client first fetches the world database from the HTTP server using the URL. After that it extracts the host name from the URL and connects to the VLNET Connection Server on the same host. The Connection Server spawns the World Server for the requested world if one is not already running, and sends the Client the port address of the World Server. Once the connection is established, all communication between the clients in a particular world passes through the World Server. In order to reduce the total network load, the World Server filters messages by checking the users' viewing frusta in the virtual world and distributing messages only on an as-needed basis. Clients keep the possibility of circumventing this mechanism by requesting a higher delivery assurance level for a particular message, e.g. for heartbeat messages of a dead-reckoning algorithm [Capin 97-1].
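As a concrete illustration, the connection sequence just described could be coded along the following lines. This is a minimal sketch in C under stated assumptions: the Connection Server port, the request/reply format and the helper names are invented for illustration, not the actual VLNET wire protocol.

```c
/* Minimal sketch of the VLNET client connection sequence (section 3.1).
 * The port number, request/reply format and helper names are assumptions
 * made for illustration; the real VLNET protocol is not published here. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <netdb.h>
#include <netinet/in.h>
#include <sys/socket.h>

static int connect_tcp(const char *host, int port)
{
    struct hostent *he = gethostbyname(host);
    if (!he) return -1;
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) return -1;
    struct sockaddr_in addr;
    memset(&addr, 0, sizeof addr);
    addr.sin_family = AF_INET;
    addr.sin_port = htons((unsigned short)port);
    memcpy(&addr.sin_addr, he->h_addr_list[0], (size_t)he->h_length);
    if (connect(fd, (struct sockaddr *)&addr, sizeof addr) < 0) {
        close(fd);
        return -1;
    }
    return fd;
}

/* Returns a socket connected to the World Server for world_url, or -1. */
int vlnet_join_world(const char *host, const char *world_url)
{
    /* 1. The world database itself is fetched from the HTTP server first
     *    (an ordinary HTTP GET, not shown). */

    /* 2. Connect to the Connection Server on the same host. */
    int cs = connect_tcp(host, 4000 /* assumed well-known port */);
    if (cs < 0) return -1;

    /* 3. Ask for the world; the Connection Server spawns a World Server
     *    if one is not already running and replies with its port number. */
    dprintf(cs, "WORLD %s\n", world_url);
    char reply[64];
    ssize_t n = read(cs, reply, sizeof reply - 1);
    close(cs);
    if (n <= 0) return -1;
    reply[n] = '\0';

    /* 4. From here on, all traffic goes through the World Server. */
    return connect_tcp(host, atoi(reply));
}
```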
Figure 2: Connection of several clients to a VLNET server site (the diagram shows the HTTP server and the VLNET Connection Server sharing a server site hosting several worlds, with one VLNET World Server per active world serving the connected VLNET Clients)

3.2. VLNET Client

The design of the VLNET Client is highly modular, with functionalities split into a number of processes. Figure 3 presents an overview of the modules and their connections. VLNET has an open architecture, with a set of interfaces allowing a user with some programming knowledge to access the system core and change or extend the system by plugging custom-made modules, called drivers, into the VLNET interfaces. In the next subsections we explain in some detail the VLNET Core with its various processes, as well as the drivers and the possibilities for system extension they offer.

3.2.1. VLNET Core

The VLNET Core is a set of processes, interconnected through shared memory, that perform the basic VLNET functions. The Main Process performs the higher-level tasks, like object manipulation, navigation and body representation, while the other processes provide services for networking (Communication Process), database loading and maintenance (Database Process) and rendering (Cull Process and Draw Process).

3.2.1.1. The Main Process

The Main Process consists of five logical entities, called engines, covering different aspects of VLNET. It also initializes the session and spawns all the other processes and drivers. Each engine is equipped with an interface for the connection of external drivers.

The Object Behavior Engine takes care of predefined object behaviors, like rotation or falling, and has an interface allowing different behaviors to be programmed using external drivers.

The Navigation and Object Manipulation Engine takes care of the basic user input: navigation, picking and displacement of objects. It provides an interface for the navigation driver. If no navigation driver is activated, standard mouse navigation is provided internally.

The Body Representation Engine is responsible for the deformation of the body. Given any body posture (defined by a set of joint angles), this engine provides a deformed body ready to be rendered. The body representation is based on the Humanoid body model [Boulic 95]. This engine provides the interface for changing the body posture. A standard Body Posture Driver is provided; it also connects to the navigation interface to get the navigation information, then uses the Walking Motor and the Arm Motor [Boulic 90, Pandzic 96] to generate natural body movements based on the navigation.

The Facial Representation Engine provides the synthetic faces with the possibility to change expressions or the facial texture. The Facial Expression Interface is used for this task. It can be used to animate a set of parameters defining the facial expression. The facial representation is a polygon mesh model with Free Form Deformations simulating muscle actions [Kalra 92].
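To make the engine/interface split concrete, the Body Posture Interface could look roughly like the sketch below. This is an assumption-laden illustration: the struct layout, the frame-counter convention and the function names are invented; only the 72-degree-of-freedom figure comes from the paper (section 3.2.2.1).

```c
/* Hypothetical shape of the Body Posture Interface: a shared-memory
 * record that a driver writes and the Body Representation Engine reads
 * once per frame. The 72 joint angles match the skeleton DOF count
 * mentioned in the paper; everything else is invented for illustration. */
#define BODY_DOF 72

typedef struct {
    unsigned long frame;          /* bumped by the driver on each update */
    float joint_angle[BODY_DOF];  /* one angle per skeleton DOF, radians */
} BodyPostureInterface;

/* Engine side: pick up a new posture only when the driver published one. */
void body_engine_step(BodyPostureInterface *ifc, unsigned long *last_seen)
{
    if (ifc->frame != *last_seen) {
        *last_seen = ifc->frame;
        /* deform_body(ifc->joint_angle);  -- skin deformation as in
         * [Boulic 95]; the deformed mesh is then handed to the renderer */
    }
}
```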

Figure 3: Virtual Life Network system overview. The diagram shows the VLNET Core: the VLNET Main Process with its engines (Object Behavior, Navigation and Object Manipulation, Body Representation, Facial Representation, Video), each exposing an interface to an external driver and its devices (mouse, spaceball, trackers, camera, VCR, screen, HMD), together with the Communication, Data Base, Cull and Draw Processes connected through the main shared memory and the message queue. Legend: internal VLNET processes can be changed only by recompiling VLNET; engines are logical entities within the VLNET Main Process; internal shared memory segments for data exchange between internal processes are not accessible to users; external shared memory interfaces are accessible to users through defined APIs; external processes (drivers) can be programmed by the user using the defined APIs and are replaceable and sometimes optional; external devices are sometimes optional or replaceable. If any of the drivers runs on a remote host, a network interface is automatically installed.

The Video Engine manages the streaming of dynamic textures to the objects in the environment and their correct mapping. Its interface gives the user the possibility to stream video textures onto any object(s) in the environment. The facial texture mentioned in connection with the Facial Representation Engine is a special case, handled by both engines.

All the engines in the VLNET Core process are coupled to the main shared memory and to the message queue. They use the data from the culling process in order to perform "computation culling": operations are performed for user embodiments and other objects only when they are within the field of view, e.g. there is no need to compute facial expressions if the face is not visible at the moment. A substantial speedup is achieved using this technique.
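In code, computation culling might reduce to a guard like the one below; the types and the visibility flag set by the cull pass are hypothetical stand-ins for the actual VLNET data structures.

```c
/* Sketch of computation culling: expensive per-user work (facial and
 * body deformation) is skipped for embodiments that the cull process
 * has marked as outside the viewing frustum. Types are illustrative. */
typedef struct {
    int visible;              /* set by the cull process each frame */
    /* ... face and body state would live here ... */
} UserEmbodiment;

void facial_engine_step(UserEmbodiment *users, int nusers)
{
    for (int i = 0; i < nusers; i++) {
        if (!users[i].visible)
            continue;         /* no deformation for faces nobody sees */
        /* apply_facial_expression(&users[i]);  -- FFD-based deformation */
    }
}
```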

3.2.1.2. Cull and Draw Processes

The Cull and Draw Processes access the main shared memory and perform the functions of culling and drawing, as their names suggest. These are standard SGI Performer [Rohlf 94] processes.

3.2.1.3. The Communication Process

The Communication Process handles all network connections and communication in VLNET. All other processes and engines are connected to it through the Message Queue. They fill the queue with outgoing messages for the Communication Process to send. The messages are sent to the server, distributed, and received by the Communication Processes of the other clients. The Communication Process puts these incoming messages into the Message Queue, from where the other processes and engines can read them and react. All messages in VLNET use the standard message packet. The packet has a standard header, determining the sender and the message type, and the message body. The content of the message body depends on the message type, but it is always of the same size (80 bytes), satisfying all message types in VLNET. For certain types of messages (positions, body postures, facial expressions) a dead-reckoning algorithm is implemented within the Communication Process [Capin 97-1]. The video data from the Video Engine is a special case and is handled using a separate communication channel; it is given lower priority than the other data. By isolating the communications in this separate process, and by keeping the VLNET Server relatively simple, we keep the possibility of switching relatively easily to a completely different network topology, e.g. multicasting instead of client/server.

3.2.1.4. The Data Base Process

The Data Base Process takes care of the off-line fetching and loading of objects and user representations. By keeping these time-consuming operations in a separate process, non-blocking operation of the system is assured.

3.2.2. The Drivers

The drivers provide simple and flexible means to access and control all the complex functionalities of VLNET. Simple, because each driver is programmed using a very small API that essentially consists of exchanging crucial data with VLNET through shared memory. Flexible, because using various combinations of drivers it is possible to support all sorts of input devices (from the mouse to a camera with complex gesture recognition software), to control all the movements of the body and face using those devices, to control objects in the environment and stream video textures onto them, and to build any amount of artificial intelligence in order to produce autonomous or semi-autonomous virtual humans in the networked virtual environment. The drivers are directly tied to the engines in the VLNET Main Process. Each engine provides a shared memory interface to which a driver can connect. Most drivers are optional, and the system provides minimal functionality (plain navigation and manipulation of objects) without any drivers. The drivers are spawned by the VLNET Main Process at the beginning of the session, based on the command line, where any combination of drivers can be specified. The drivers can be spawned on the local host or on a remote host, in which case transparent networking interface processes are inserted on both hosts. In a simple case, as with most drivers shown in figure 3, a driver controls only one engine. However, it is possible to control more than one engine with a single driver, ensuring synchronization and cooperation.
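Returning to the message packet of the Communication Process (section 3.2.1.3): a fixed-size header-plus-body layout might be declared as follows. The field widths and names are assumptions; only the 80-byte body size is taken from the text.

```c
/* Hypothetical VLNET message packet. The paper specifies a standard
 * header (sender, message type) and a fixed 80-byte body shared by all
 * message types; the exact field widths here are assumptions. */
#include <stdint.h>

#define VLNET_BODY_SIZE 80

typedef struct {
    uint32_t sender_id;             /* which client sent the message    */
    uint32_t type;                  /* position, posture, expression... */
    uint8_t  body[VLNET_BODY_SIZE]; /* interpretation depends on type   */
} VlnetMessage;

/* Note: 72 joint angles as raw floats would need 288 bytes, so a posture
 * update must use a more compact encoding to fit the 80-byte body, which
 * is consistent with the paper's remark that networking forces a more
 * compact representation of body and face animation data. */
```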
3.2.2.1. Driver types

The Facial Expression Driver is used to control the expressions of the user's face. The expressions are defined using Minimal Perceptible Actions (MPAs) [Kalra 93]. The MPAs provide a complete set of basic facial actions, and using them it is possible to define any facial expression. Examples of existing facial expression drivers include a driver that uses the video signal from a camera to track facial features and map them into the MPAs describing expressions [Pandzic 94], and a driver that lets the user choose from a menu of expressions or emotions to show on his face. The facial expression driver is optional.

The Body Posture Driver controls the motion of the user's body. The postures are defined using a set of joint angles corresponding to the 72 degrees of freedom of the skeleton model used in VLNET. An obvious example of using this driver is direct motion control using magnetic trackers [Molet 96]. A more complex driver is used to control body motion in the general case when trackers are not used. This driver also connects to the Navigation Interface and uses the navigation trajectory to generate the walking motion and arm motion. It also imposes constraints on the Navigation Driver, e.g. not allowing the hand to move farther than arm length or take an unnatural posture. This is the standard body posture driver, which is spawned by the system unless another driver is explicitly requested.

The Navigation Driver is used for navigation, hand movement, head movement, basic object manipulation and basic system control. The basic manipulation includes picking objects up, carrying them and letting them go, as well as grouping and ungrouping of objects. The system control provides access to some system functions that are usually accessed by keystrokes, e.g. changing drawing modes, toggling texturing, displaying statistics. Typical examples are a spaceball driver, a tracker+glove driver and an extended mouse driver (with a GUI console). There is also an experimental facial navigation driver letting the user navigate using his head movements and facial expressions tracked by a camera [Pandzic 94]. If no navigation driver is used, internal mouse navigation is activated within the Navigation Engine.

The Object Behavior Driver is used to control the behavior of objects. Currently it is limited to controlling motion and scaling. Examples include the control of a ball in a tennis game and the control of the graphical representation of stock values in a virtual stock exchange.

The Video Driver is used to stream video textures (but possibly also static textures) onto any object in the environment. An alpha channel can be used for blending, achieving effects of mixing real and virtual objects/persons. This type of driver is also used to stream facial video on the user's face for facial communication [Pandzic 96-1].
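As an illustration of how small such a driver can be, here is a toy facial expression driver. It is a sketch under explicit assumptions: the interface struct, the MPA count and index, and the vlnet_attach_facial() call are all invented, since the actual VLNET driver API is not given in this paper.

```c
/* Toy facial expression driver: attaches to the engine's shared-memory
 * interface and cycles a single "smile" action. The interface layout,
 * the MPA count and index, and vlnet_attach_facial() are invented for
 * illustration; they are not the real VLNET driver API. */
#include <math.h>
#include <unistd.h>

#define N_MPA 64                  /* assumed number of MPAs, cf. [Kalra 93] */

typedef struct {
    unsigned long frame;          /* bumped to signal new data to the engine */
    float mpa[N_MPA];             /* minimal perceptible actions, 0..1 */
} FacialExpressionInterface;

/* Assumed helper that maps the engine's shared memory into the driver. */
extern FacialExpressionInterface *vlnet_attach_facial(void);

int main(void)
{
    FacialExpressionInterface *ifc = vlnet_attach_facial();
    double t = 0.0;
    for (;;) {
        ifc->mpa[12] = 0.5f + 0.5f * (float)sin(t); /* hypothetical smile MPA */
        ifc->frame++;
        t += 0.1;
        usleep(100000);           /* ~10 expression updates per second */
    }
}
```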

Figure 4: Snapshots from applications
4. Results
4.1. Applications

Several experimental applications have been developed using the VLNET system. Some snapshots are presented in figure 4.

4.1.1. Entertainment

NCVE systems lend themselves to the development of all sorts of multi-user games. We have had successful demonstrations of chess and other games played between Switzerland and Singapore, as well as between Switzerland and several European countries. A virtual tennis game has been developed [Noser 96] where the user plays against an opponent who is an autonomous virtual human. The referee is also an autonomous virtual human, capable of refereeing the game and communicating the points, faults etc. by voice. Currently a multi-user adventure game is under development.

4.1.2. Teleshopping

In collaboration with Chopard, a watch company in Geneva, a teleshopping application was successfully demonstrated between Geneva and Singapore. The users were able to communicate with each other within a rich virtual environment representing a watch gallery with several watches on display. They could examine the watches together, exchange bracelets, and finally choose a watch.

4.1.3. Medical education

In collaboration with our colleagues working on a medical project, we have used 3D data of human organs, muscles and skeleton reconstructed from MRI images [Beylot 96] to build a small application in the field of medical education. The goal is to teach a student the position, properties and function of a particular muscle. In the virtual classroom, several tools are at the professor's disposal. MRI, CT and anatomical slice images on the walls show the slices where the muscle is visible. A 3D reconstruction of the skeleton is available together with the muscle, allowing the student to examine the shape of the muscle and its points of attachment to the skeleton. Finally, an autonomous virtual human with a simple behavior is there to demonstrate the movements resulting from a contraction of the discussed muscle.

4.1.4. Stock exchange

Currently an application is being developed to visualize in 3D the real-time updates of stock exchange data and to allow interactions of users with the data and with each other.

4.2. Performance and networking

Considering the graphical and computational complexity of the human representations used in VLNET, we are currently not aiming for a system scalable to a large number of users, but rather trying to obtain a high-quality experience for a small number of users. The graphs of performance and network traffic (figures 6 and 7) show that the system is indeed not scalable to a large number of participants. Nevertheless, the results are reasonable for a small number of users and, more importantly, their analysis gives insight into the steps needed to ensure better scalability.

4.2.1. Experiment design

In order to conveniently simulate a growing number of users, we have designed simple drivers that generate reasonable motion and facial expressions. The navigation driver we used generates a random motion within a room, and the facial expression driver repeatedly plays a facial animation sequence; a sketch of such a simulation driver is given below. By launching several clients on different hosts using these drivers, we can easily simulate a session with a number of persons walking in the room and emoting with their faces. Although we looked for a way to simulate something close to a real multi-user session in a controlled way, there is a difference in the fact that the simulated users move their bodies and faces all the time. In a real session, the real users would probably make pauses and thus generate less activity. Therefore we expect somewhat better results in a real session than the ones shown here, although for practical reasons we did not make such tests with real users.

In order to evaluate the overhead induced by the use of a high-level body representation in the NCVE, we have undertaken three series of measurements: with a full body representation, a simplified body representation and without a body representation. The full body representation involves a complex graphical representation (approx. 10 000 polygons) and deformation algorithms. The simplified body representation consists of a body with reduced graphical complexity (approx. 1500 polygons), with facial deformations and with a simplified body animation based on the displacement of rigid body elements (no deformation). The tests without a body representation were made for the sake of comparison; to mark the positions of users we used simple geometric shapes, with no facial or body animation involved. Figure 5 shows some snapshots from the measurement sessions. The network traffic (figure 7) was measured on the server.
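A simulated-user navigation driver of the kind just described can be very small. The sketch below assumes a hypothetical shared-memory navigation interface; the struct layout, the 10 m x 10 m room and the update rate are invented choices.

```c
/* Sketch of a simulated-user navigation driver: a random walk inside a
 * room, as used to generate load in the measurement sessions. The
 * interface layout and the room dimensions are assumptions. */
#include <math.h>
#include <stdlib.h>
#include <unistd.h>

typedef struct {
    unsigned long frame;       /* bumped to publish a new position */
    float x, z;                /* position on the floor plane, meters */
    float heading;             /* walking direction, radians */
} NavigationInterface;

static float frand(float lo, float hi)
{
    return lo + (hi - lo) * (float)rand() / (float)RAND_MAX;
}

void random_walk(NavigationInterface *ifc)
{
    for (;;) {
        ifc->heading += frand(-0.3f, 0.3f);       /* wander a little */
        ifc->x += 0.1f * cosf(ifc->heading);      /* step forward */
        ifc->z += 0.1f * sinf(ifc->heading);
        if (ifc->x < 0.5f || ifc->x > 9.5f ||     /* turn back at walls */
            ifc->z < 0.5f || ifc->z > 9.5f)
            ifc->heading += (float)M_PI;
        ifc->frame++;
        usleep(100000);                           /* ~10 updates per second */
    }
}
```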
We measured the incoming and outgoing traffic in kilobits per second. For the performance measurements (figure 6), we connected one standard, user-controlled client to be able to control the point of view. The rest of the clients were user simulations as described above. Performance is measured in frames per second. Since it varies depending on the point of view, because of rendering and computation culling, we needed to ensure consistent results. We chose to take measurements with a view where all the user embodiments are within the field of view, which represents the minimal performance for a given number of users in the scene. The performance was measured on a Silicon Graphics Indigo Maximum Impact workstation with a 200 MHz R4400 processor and 128 MB RAM.

4.2.2. Analysis of performance results

Figure 6 shows the variation of performance with respect to the number of users in the simulation, with the different complexities of body representation explained in the previous subsection. It is important to notice that this is the minimal performance, i.e. the one measured at the moment when all the users' embodiments are actually within the field of view. Rendering and computation culling boost the performance when the user is not looking at all the other embodiments, because they are not rendered and the deformation calculations are not performed for them. The embodiments that are out of the field of view do not decrease performance significantly, which means that the graph in figure 6 can also be interpreted as the peak performance when looking at N users, regardless of the total number of users in the scene. For example, even if the total number of participants is 9, performance can still be 9 frames per second at the moment when only one user is in the field of view. This makes the system much more usable than the initial look at the graphs might suggest.

Figure 5: Snapshots from performance and network measurements: a) full bodies; b) simplified bodies; c) no body representation

We have also measured the percentage of time spent on the two main tasks: rendering and application-specific computation. With any number of users, rendering takes around 65% of the time, and the rest is spent on application-specific computation, i.e. mostly the deformation of faces and bodies. For the simplified body representation, the rendering percentage is 58%. On machines with less powerful graphics hardware, an even greater percentage of the total time is dedicated to rendering. The results show that the use of a complex body representation induces a considerable overhead on the rendering side and somewhat less on the computation side. The fact that the performance can be boosted significantly by simply changing the body representation shows that the system framework does not induce an overhead in itself. The system is scalable in the sense that, for particular hardware and requirements, a setup can be found that produces satisfying performance. Most importantly, this shows that the system should lend itself to the implementation of extended Level of Detail [Rohlf 94] techniques, automatically managing the balance between performance and quality of the human representation.
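A minimal sketch of such an extended LOD policy, where both the mesh and the animation technique degrade with distance; the thresholds and the load heuristic are invented, while the polygon counts echo the representations measured above.

```c
/* Sketch of an extended Level of Detail policy for virtual humans: the
 * representation, and with it the animation/deformation technique, is
 * chosen per embodiment. Thresholds and the load heuristic are
 * illustrative; the polygon counts echo the measured representations. */
typedef enum {
    LOD_FULL,    /* ~10000 polygons, full skin deformation */
    LOD_SIMPLE,  /* ~1500 polygons, rigid-segment animation */
    LOD_MARKER   /* simple geometric shape, position only */
} BodyLod;

BodyLod choose_body_lod(float distance, int visible_users)
{
    /* degrade faster when many embodiments are in view at once */
    float d = distance * (visible_users > 4 ? 2.0f : 1.0f);
    if (d < 5.0f)  return LOD_FULL;
    if (d < 20.0f) return LOD_SIMPLE;
    return LOD_MARKER;
}
```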
Figure 6: Minimal performance (frames per second, 0 to 30) with respect to the number of users in the session (1 to 9), for the full body, simple body and no body representations

4.2.3. Analysis of the network results

Figure 7 shows the bit rates measured on the VLNET server during a session with a varying number of simulated users walking within a room, as described in the subsection on experiment design. We measured the incoming and outgoing traffic separately, then calculated the total.
Figure 7: Network traffic (Kbit/sec, 0 to 2500) with respect to the number of users in the session (1 to 9); the curves show incoming (In), outgoing (Out), Total, Projected, Total DR and No body traffic

Obviously, the incoming traffic grows linearly with the number of users, each user generating the same bitrate. It can be remarked that the bitrate generated per user is roughly 30 Kbit/sec, covering the transmission of body positions, body postures and facial expressions.

It is worth remarking that the In traffic curve measured at the server also corresponds to the maximal incoming traffic on a client. The client receives the incoming bitstream corresponding to the graph in situations where all the embodiments of the other users are in the viewing frustum of the local user, i.e. in the worst case. Otherwise, when looking at N users, the incoming traffic corresponds to N users on the graph. This is similar to the situation with the performance measurements.

The outgoing traffic represents the distribution of the data to all the clients that need it. The Total traffic is the sum of the incoming and outgoing traffic. The Projected traffic curve represents the traffic calculated mathematically for the case of total distribution (all to all). The filtering technique at the server (see subsection 3.1.) ensures that messages are distributed only on an as-needed basis, and keeps the Total curve well below the Projected curve. A further reduction is achieved using the dead-reckoning technique [Capin 97-1], as illustrated by the curve labeled "Total DR".

The network traffic is the same when using the full and the simplified body representations, because the same messages are transferred. When no body representation is used, less network traffic is generated because there is no need to send messages about body postures and facial expressions. A user without a body representation sends only position updates, generating approximately 8 Kbit/sec. The total traffic curve without a body representation is shown in figure 7 with the label "No body".
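The dead-reckoning reduction comes from the algorithm in [Capin 97-1], which extends the idea to body postures and facial expressions. As a generic illustration of the principle for positions only (the types, threshold test and names are our own, not the cited algorithm):

```c
/* Generic dead reckoning for position: the sender transmits an update
 * only when the receivers' extrapolation would deviate from the true
 * state by more than a threshold. [Capin 97-1] extends this idea to
 * body postures and facial expressions; this sketch covers position only. */
#include <math.h>

typedef struct { float x, y, z; } Vec3;

typedef struct {
    Vec3  pos, vel;     /* last transmitted state */
    float last_t;       /* time of the last update */
} DrState;

/* State the remote side will have extrapolated at time t. */
static Vec3 dr_extrapolate(const DrState *s, float t)
{
    float dt = t - s->last_t;
    Vec3 p = { s->pos.x + s->vel.x * dt,
               s->pos.y + s->vel.y * dt,
               s->pos.z + s->vel.z * dt };
    return p;
}

/* Returns 1 if an update must be sent (and records it), 0 otherwise. */
int dr_maybe_send(DrState *s, Vec3 true_pos, Vec3 true_vel,
                  float t, float threshold)
{
    Vec3 guess = dr_extrapolate(s, t);
    float dx = true_pos.x - guess.x;
    float dy = true_pos.y - guess.y;
    float dz = true_pos.z - guess.z;
    if (sqrtf(dx*dx + dy*dy + dz*dz) > threshold) {
        s->pos = true_pos; s->vel = true_vel; s->last_t = t;
        return 1;  /* transmit pos+vel; receivers reset their extrapolation */
    }
    return 0;
}
```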
5. Conclusions
We have discussed the problems involved in introducing high-level human representations in Networked Collaborative Virtual Environments and presented one solution to this problem: the Virtual Life Network. The system's modular architecture, the function of each component and their interconnections were described in detail. The various experimental applications highlighted in the results section show that this kind of system lends itself to a wide variety of collaborative applications. The performance and network measurements quantify the overhead induced on the CPU and the network by the introduction of complex virtual humans. They show that the overhead is considerable, but at the same time they show the system's potential for scalability.
6. Future work
Work on more realistic real-time body representations and performance improvements, as well as on methods for capturing and interpreting body and face movements, is ongoing. The performance results indicate a path for future research in the direction of more scalable body representations. The Level of Detail technique can be extended to the body representation not only in the graphical but also in the procedural sense, by switching to simpler animation/deformation techniques at lower levels of detail. This method can be further extended to reduce network traffic as well.
7. Acknowledgments
This research is financed by "Le Programme Prioritaire en Telecommunications du Fonds National Suisse de la Recherche Scientifique" and the TEN-IBC project VISINET. Numerous colleagues at LIG and MIRALab have directly or indirectly helped this research by providing libraries, body and environment models, and scenarios for applications; in particular Eric Chauvineau, Hansrudi Noser, Marlene Poizat, Laurence Suhner and Jean-Claude Moussaly. Dr. Jean Fasel, CMU, University of Geneva, provided advice for the medical application.
8. References
[Barrus 96] Barrus J.W., Waters R.C., Anderson D.B., "Locales and Beacons: Efficient and Precise Support For Large Multi-User Virtual Environments", Proceedings of IEEE VRAIS, 1996.
[Beylot 96] Beylot P., Gingins P., Kalra P., Magnenat-Thalmann N., Maurel W., Thalmann D., Fasel J., "3D Interactive Topological Modeling using Visible Human Dataset", Proceedings of EUROGRAPHICS 96, Poitiers, France, 1996.
[Boulic 90] Boulic R., Magnenat-Thalmann N., Thalmann D., "A Global Human Walking Model with Real Time Kinematic Personification", The Visual Computer, Vol. 6(6), 1990.
[Boulic 95] Boulic R., Capin T., Huang Z., Kalra P., Lintermann B., Magnenat-Thalmann N., Moccozet L., Molet T., Pandzic I., Saar K., Schmitt A., Shen J., Thalmann D., "The Humanoid Environment for Interactive Animation of Multiple Deformable Human Characters", Proceedings of Eurographics '95, 1995.

[Capin 95] Capin T.K., Pandzic I.S., Magnenat-Thalmann N., Thalmann D., "Virtual Humans for Representing Participants in Immersive Virtual Environments", Proceedings of FIVE '95, London, 1995.
[Capin 97] Capin T.K., Pandzic I.S., Noser H., Magnenat Thalmann N., Thalmann D., "Virtual Human Representation and Communication in VLNET Networked Virtual Environments", IEEE Computer Graphics and Applications, Special Issue on Multimedia Highways, 1997 (to appear).
[Capin 97-1] Capin T.K., Pandzic I.S., Thalmann D., Magnenat Thalmann N., "A Dead-Reckoning Algorithm for Virtual Human Figures", Proc. VRAIS '97, IEEE Press, 1997 (to appear).
[Carlsson 93] Carlsson C., Hagsand O., "DIVE - a Multi-User Virtual Reality System", Proceedings of IEEE VRAIS '93, Seattle, Washington, 1993.
[Kalra 92] Kalra P., Mangili A., Magnenat Thalmann N., Thalmann D., "Simulation of Facial Muscle Actions Based on Rational Free Form Deformations", Proc. Eurographics '92, pp. 59-69, 1992.
[Kalra 93] Kalra P., "An Interactive Multimodal Facial Animation System", PhD Thesis nr. 1183, EPFL, 1993.
[Macedonia 94] Macedonia M.R., Zyda M.J., Pratt D.R., Barham P.T., Zeswitz S., "NPSNET: A Network Software Architecture for Large-Scale Virtual Environments", Presence: Teleoperators and Virtual Environments, Vol. 3, No. 4, 1994.
[Molet 96] Molet T., Boulic R., Thalmann D., "A Real Time Anatomical Converter for Human Motion Capture", Proc. of Eurographics Workshop on Computer Animation and Simulation, 1996.
[Noser 96] Noser H., Pandzic I.S., Capin T.K., Magnenat Thalmann N., Thalmann D., "Playing Games through the Virtual Life Network", Proceedings of Artificial Life V, Nara, Japan, 1996.
[Ohya 95] Ohya J., Kitamura Y., Kishino F., Terashima N., "Virtual Space Teleconferencing: Real-Time Reproduction of 3D Human Images", Journal of Visual Communication and Image Representation, Vol. 6, No. 1, pp. 1-25, 1995.
[Pandzic 94] Pandzic I.S., Kalra P., Magnenat-Thalmann N., Thalmann D., "Real-Time Facial Interaction", Displays, Vol. 15, No. 3, 1994.
[Pandzic 96] Pandzic I.S., Capin T.K., Magnenat Thalmann N., Thalmann D., "Motor functions in the VLNET Body-Centered Networked Virtual Environment", Proc. of 3rd Eurographics Workshop on Virtual Environments, Monte Carlo, 1996.
[Pandzic 96-1] Pandzic I.S., Capin T.K., Magnenat Thalmann N., Thalmann D., "Towards Natural Communication in Networked Collaborative Virtual Environments", Proc. FIVE '96, Pisa, Italy, 1996.
[Rohlf 94] Rohlf J., Helman J., "IRIS Performer: A High Performance Multiprocessing Toolkit for Real-Time 3D Graphics", Proc. SIGGRAPH '94, 1994.
[Semwal 96] Semwal S.K., Hightower R., Stansfield S., "Closed Form and Geometric Algorithms for Real-Time Control of an Avatar", Proc. VRAIS '96, pp. 177-184.
[Singh 95] Singh G., Serra L., Png W., Wong A., Ng H., "BrickNet: Sharing Object Behaviors on the Net", Proceedings of IEEE VRAIS '95, 1995.
[Slater 94] Slater M., Usoh M., "Body Centered Interaction in Immersive Virtual Environments", in Artificial Life and Virtual Reality, N. Magnenat Thalmann, D. Thalmann, eds., John Wiley, pp. 1-10, 1994.
[Thalmann 95] Thalmann D., Capin T.K., Magnenat Thalmann N., Pandzic I.S., "Participant, User-Guided, Autonomous Actors in the Virtual Life Network VLNET", Proc. ICAT/VRST '95, pp. 3-11.
[Thalmann 96] Thalmann D., Shen J., Chauvineau E., "Fast Realistic Human Body Deformations for Animation and VR Applications", Proc. Computer Graphics International '96, Pohang, Korea, 1996.
