usenix01 A toolkit for user-level file systems

A toolkit for user-level file systems

David Mazières
Department of Computer Science, NYU

dm@cs.nyu.edu

Abstract

This paper describes a C++ toolkit for easily extending the Unix file system. The toolkit exposes the NFS interface, allowing new file systems to be implemented portably at user level. A number of programs have implemented portable, user-level file systems. However, they have been plagued by low performance, deadlock, restrictions on file system structure, and the need to reboot after software errors. The toolkit makes it easy to avoid the vast majority of these problems. Moreover, the toolkit also supports user-level access to existing file systems through the NFS interface—a heretofore rarely employed technique. NFS gives software an asynchronous, low-level interface to the file system that can greatly benefit the performance, security, and scalability of certain applications. The toolkit uses a new asynchronous I/O library that makes it tractable to build large, event-driven programs that never block.

1 Introduction

Many applications could reap a number of benefits from a richer, more portable file system interface than that of Unix. This paper describes a toolkit for portably extending the Unix file system—both facilitating the creation of new file systems and granting access to existing ones through a more powerful interface. The toolkit exploits both the client and server sides of the ubiquitous Sun Network File System [15]. It lets the file system developer build a new file system by emulating an NFS server. It also lets application writers replace file system calls with networking calls, permitting lower-level manipulation of files and working around such limitations as the maximum number of open files and the synchrony of many operations.

We used the toolkit to build the SFS distributed file system [13], and thus refer to it as the SFS file system development toolkit. SFS is relied upon for daily use by several people, and thus shows by example that one can build production-quality NFS loopback servers. In addition, other users have picked up the toolkit and built functioning Unix file systems in a matter of a week. We have even used the toolkit for class projects, allowing students to build real, functioning Unix file systems.

Developing new Unix file systems has long been a difficult task. The internal kernel API for file systems varies significantly between versions of the operating system, making portability nearly impossible. The locking discipline on file system data structures is hair-raising for the non-expert. Moreover, developing in-kernel file systems has all the complications of writing kernel code. Bugs can trigger a lengthy crash and reboot cycle, while kernel debugging facilities are generally less powerful than those for ordinary user code.

At the same time, many applications could benefit from an interface to existing file systems other than POSIX. For example, non-blocking network I/O permits highly efficient software in many situations, but any synchronous disk I/O blocks such software, reducing its throughput. Some operating systems offer asynchronous file I/O through the POSIX aio routines, but aio is only for reading and writing files—it doesn't allow files to be opened and created asynchronously, or directories to be read.

Another shortcoming of the Unix file system interface is that it foments a class of security holes known as time-of-check-to-time-of-use, or TOCTTOU, bugs [2]. Many conceptually simple tasks are actually quite difficult to implement correctly in privileged software—for instance, removing a file without traversing a symbolic link, or opening a file on condition that it be accessible to a less privileged user. As a result, programmers often leave race conditions that attackers can exploit to gain greater privilege.

The next section summarizes related work. Section 3 describes the issues involved in building an NFS loopback server. Section 4 explains how the SFS toolkit facilitates the construction of loopback servers. Section 5 discusses loopback clients. Section 6 describes applications of the toolkit and discusses performance. Finally, Section 7 concludes.

2 Related work

A number of file system projects have been implemented as NFS loopback servers. Perhaps the first example is the Sun automount daemon [5]—a daemon that mounts remote NFS file systems on-demand when their pathnames are referenced. Neither automount nor a later, more advanced automounter, amd [14], was able to mount file systems in place—to turn a pathname referenced by a user into a mount point on-the-fly. Instead, they took the approach of creating mount points outside of the directory served by the loopback server, and redirecting file accesses using symbolic links. Thus, for example, amd might be a loopback server for directory /home. When it sees an access to the path /home/am2, it will mount the corresponding file system somewhere else, say on /a/amsterdam/u2, then produce a symbolic link, /home/am2 → /a/amsterdam/u2. This symbolic link scheme complicates life for users. For this and other reasons, Solaris and Linux pushed part of the automounter back into the kernel. The SFS toolkit shows they needn't have done so for mounting in place: one can in fact implement a proper automounter as a loopback server.

Another problem with previous loopback automounters is that one unavailable server can impede access to other, functioning servers. In the example from the previous paragraph, suppose the user accesses /home/am2 but the corresponding server is unavailable. It may take amd tens of seconds to realize the server is unavailable. During this time, amd delays responding to an NFS request for file am2 in /home. While the lookup is pending, the kernel's NFS client will lock the /home directory, preventing access to all other names in the directory as well.

Loopback servers have been used for purposes other than automounting. CFS [3] is a cryptographic file system implemented as an NFS loopback server. Unfortunately, CFS suffers from deadlock. It predicates the completion of loopback NFS write calls on writes through the file system interface, which, as discussed later, leads to deadlock. The Alex ftp file system [7] is implemented using NFS. However, Alex is read-only, which avoids any deadlock problems. Numerous other file systems are constructed as NFS loopback servers, including the semantic file system [9] and the Byzantine fault-tolerant file system [6]. The SFS toolkit makes it considerably easier to build such loopback servers than before. It also helps avoid many of the problems previous loopback servers have had. Finally, it supports NFS loopback clients, which have advantages discussed later on.

New file systems can also be implemented by replacing system shared libraries or even intercepting all of a process's system calls, as the UFO system does [1]. Both methods are appealing because they can be implemented by a completely unprivileged user. Unfortunately, it is hard to implement complete file system semantics using these methods (for instance, you can't hand off a file descriptor using sendmsg()). Both methods also fail in some cases. Shared libraries don't work with statically linked applications, and neither approach works with setuid utilities such as lpr. Moreover, having different namespaces for different processes can cause confusion, at least on operating systems that don't normally support this.

FiST [19] is a language for generating stackable file systems, in the spirit of Ficus [11]. FiST can output code for three operating systems—Solaris, Linux, and FreeBSD—giving the user some amount of portability. FiST outputs kernel code, giving it the advantages and disadvantages of being in the operating system. FiST's biggest contributions are really the programming language and the stackability, which allow simple and elegant code to do powerful things. That is somewhat orthogonal to the SFS toolkit's goals of allowing file systems at user level (though FiST is somewhat tied to the VFS layer—it unfortunately couldn't be ported to the SFS toolkit very easily). Aside from its elegant language, the big trade-off between FiST and the SFS toolkit is performance vs. portability and ease of debugging. Loopback servers will run on virtually any operating system, while FiST file systems will likely offer better performance.

Finally, several kernel device drivers allow user-level programs to implement file systems using an interface other than NFS. The now defunct UserFS [8] exports an interface similar to the kernel's VFS layer to user-level programs. UserFS was very general, but only ran on Linux. Arla [17], an AFS client implementation, contains a device, xfs, that lets user-level programs implement a file system by sending messages through /dev/xfs0. Arla's protocol is well-suited to network file systems that perform whole-file caching, but not as general-purpose as UserFS. Arla runs on six operating systems, making xfs-based file systems portable. However, users must first install xfs. Similarly, the Coda file system [12] uses a device driver, /dev/cfs0.

3 NFS loopback server issues

NFS loopback servers allow one to implement a new file system portably, at user level, through the NFS protocol rather than some operating-system-specific kernel-internal API (e.g., the VFS layer). Figure 1 shows the architecture of an NFS loopback server. An application accesses files using system calls. The operating system's NFS client implements the calls by sending NFS requests to the user-level server. The server, though treated by the kernel's NFS code as if it were on a separate machine, actually runs on the same machine as the applications. It responds to NFS requests and implements a file system using only standard, portable networking calls.

Figure 1: A user-level NFS loopback server

3.1 Complications of NFS loopback servers

Making an NFS loopback server perform well poses a few challenges. First, because it operates at user level, a loopback server inevitably imposes additional context switches on applications. There is no direct remedy for the situation. Instead, the loopback file system implementer must compensate by designing the rest of the system for high performance.

Fortunately for loopback servers, people are willing to use file systems that do not perform optimally (NFS itself being one example). Thus, a file system offering new functionality can be useful as long as its performance is not unacceptably slow. Moreover, loopback servers can exploit ideas from the file system literature. SFS, for instance, manages to maintain performance competitive with NFS by using leases [10] for more aggressive attribute and permission caching. An in-kernel implementation could have delivered far better performance, but the current SFS is a useful system because of its enhanced security.

Another performance challenge is that loopback servers must handle multiple requests in parallel. Otherwise, if, for instance, a server waits for a request of its own over the network or waits for a disk read, multiple requests will not overlap their latencies and the overall throughput of the system will suffer.

Worse yet, any blocking operation performed by an NFS loopback server has the potential for deadlock. This is because of typical kernel buffer allocation strategy. On many BSD-derived Unixes, when the kernel runs out of buffers, the buffer allocation function can pick some dirty buffer to recycle and block until that particular buffer has been cleaned. If cleaning that buffer requires calling into the loopback server and the loopback server is waiting for the blocked kernel thread, then deadlock will ensue.

To avoid deadlock, an NFS loopback server must never block under any circumstances. Any file I/O within a loopback server is obviously strictly prohibited. However, the server must avoid page faults, too. Even on operating systems that rigidly partition file cache and program memory, a page fault needs a struct buf to pass to the disk driver. Allocating the structure may in turn require that some file buffer be cleaned. In the end, a mere debugging printf can deadlock a system; it may fill the queue of a pseudo-terminal handled by a remote login daemon that has suffered a page fault (an occurrence observed by the author). A large piece of software that never blocks requires fundamentally different abstractions from most other software. Simply using an in-kernel threads package to handle concurrent NFS requests at user level isn't good enough, as the thread that blocks may be the one cleaning the buffer everyone is waiting for.

NFS loopback servers are further complicated by the kernel NFS client's internal locking. When an NFS request takes too long to complete, the client retransmits it. After some number of retransmissions, the client concludes that the server or network has gone down. To avoid flooding the server with retransmissions, the client locks the mount point, blocking any further requests, and periodically retransmitting only the original, slow request. This means that a single "slow" file on an NFS loopback server can block access to other files from the same server.

Another issue faced by loopback servers is that a lot of software (e.g., Unix implementations of the ANSI C getcwd() function) requires every file on a system to have a unique (st_dev, st_ino) pair. st_dev and st_ino are fields returned by the POSIX stat() function. Historically, st_dev was a number designating a device or disk partition, while st_ino corresponded to a file within that disk partition. Even though the NFS protocol has a field equivalent to st_dev, that field is ignored by Unix NFS clients. Instead, all files under a given NFS mount point are assigned a single st_dev value, made up by the kernel. Thus, when stitching together files from various sources, a loopback server must ensure that all st_ino fields are unique for a given mount point.

A loopback server can avoid some of the problems of slow files and st_ino uniqueness by using multiple mount points—effectively emulating several NFS servers. One often would like to create these mount points on-the-fly—for instance, to "automount" remote servers as the user references them. Doing so is non-trivial because of vnode locking on file name lookups. While the NFS client is looking up a file name, one cannot in parallel access the same name to create a new mount point. This drove previous NFS loopback automounters to create mount points outside of the loopback file system and serve only symbolic links through the loopback mount.

As user-level software, NFS loopback servers are easier to debug than kernel software. However, a buggy loopback server can still hang a machine and require a reboot. When a loopback server crashes, any reference to the loopback file system will block. Hung processes pile up, keeping the file system in use and, on many operating systems, preventing unmounting. Even the unmount command itself sometimes does things that require an NFS RPC, making it impossible to clean up the mess without a reboot. If a loopback file system uses multiple mount points, the situation is even worse, as there is no way to traverse higher-level directories to unmount the lower-level mount points.

In summary, while NFS loopback servers offer a promising approach to portable file system development, a number of obstacles must be overcome to build them successfully. The goal of the SFS file system development toolkit is to tackle these problems and make it easy for people to develop new file systems.

4 NFS loopback server toolkit

This section describes how the SFS toolkit supports building robust user-level loopback servers. The toolkit has several components, illustrated in Figure 2. nfsmounter is a daemon that creates and deletes mount points. It is the only part of the SFS client that needs to run as root, and the only part of the system that must function properly to prevent a machine from getting wedged. The SFS automounter daemon creates mount points dynamically as users access them. Finally, a collection of libraries and a novel RPC compiler simplify the task of implementing entirely non-blocking NFS loopback servers.

4.1 Basic API

The basic API of the toolkit is effectively the NFS 3 protocol [4]. The server allocates an nfsserv object, which might, for example, be bound to a UDP socket. The server hands this object a dispatch function. The object then calls the dispatch function with NFS 3 RPCs. The dispatch function is asynchronous. It receives an argument of type pointer to nfscall, and it returns nothing. To reply to an NFS RPC, the server calls the reply method of the nfscall object. This needn't happen before the dispatch routine returns, however. The nfscall can be stored away until some other asynchronous event completes.

4.2 The nfsmounter daemon

The purpose of nfsmounter is to clean up the mess when other parts of the system fail. This saves the loopback file system developer from having to reboot the machine, even if something goes horribly wrong with his loopback server. nfsmounter runs as root and calls the mount and unmount (or umount) system calls at the request of other processes. However, it aggressively distrusts these processes. Its interface is carefully crafted to ensure that nfsmounter can take over and assume control of a loopback mount whenever necessary.

nfsmounter communicates with other daemons through Unix domain sockets. To create a new NFS mount point, a daemon first creates a UDP socket over which to speak the NFS protocol. The daemon then passes this socket and the desired pathname for the mount point to nfsmounter (using Unix domain socket facilities for passing file descriptors across processes). nfsmounter, acting as an NFS client to existing loopback mounts, then probes the structure of any loopback file systems traversed down to the requested mount point. Finally, nfsmounter performs the actual mount system call and returns the result to the invoking daemon.

After performing a mount, nfsmounter holds onto the UDP socket of the NFS loopback server. It also remembers enough structure of traversed file systems to recreate any directories used as mount points. If a loopback server crashes, nfsmounter immediately detects this by receiving an end-of-file on the Unix domain socket connected to the server. nfsmounter then takes over any UDP sockets used by the crashed server, and begins serving the skeletal portions of the file system required to clean up underlying mount points. Requests to other parts of the file system return stale file handle errors, helping ensure most programs accessing the crashed file system exit quickly with an error, rather than hanging on a file access and therefore preventing the file system from being unmounted.

nfsmounter was built early in the development of SFS. After that point, we were able to continue development of SFS without any dedicated "crash boxes." No matter what bugs cropped up in the rest of SFS, we rarely needed a reboot. This mirrors the experience of students, who have used the toolkit for class projects without ever knowing the pain that loopback server development used to cause.

On occasion, of course, we have turned up bugs in kernel NFS implementations. We have suffered many kernel panics trying to understand these problems, but, strictly speaking, that part of the work qualifies as kernel development, not user-level server development.

Figure 2: Architecture of the user-level file system toolkit

4.3 Automounting in place

The SFS automounter shows that loopback automounters can mount file systems in place, even though no previous loopback automounter has managed to do so. SFS consists of a top-level directory, /sfs, served by an automounter process, and a number of subdirectories of /sfs served by separate loopback servers. Subdirectories of /sfs are created on-demand when users access the directory names. Since subdirectories of /sfs are handled by separate loopback servers, they must be separate mount points.

The kernel's vnode locking strategy complicates the task of creating mount points on-demand. More specifically, when a user references the name of an as-yet-unknown mount point in /sfs, the kernel generates an NFS LOOKUP RPC. The automounter cannot immediately reply to this RPC, because it must first create a mount point. On the other hand, creating a mount point requires a mount system call during which the kernel again looks up the same pathname. The client NFS implementation will already have locked the /sfs directory during the first LOOKUP RPC. Thus the lookup within the mount call will hang.

Worse yet, the SFS automounter cannot always immediately create a requested mount point. It must validate the name of the directory, which involves a DNS lookup and various other network I/O. Validating a directory name can take a long time, particularly if a DNS server is down. The time can be sufficient to drive the NFS client into retransmission and have it lock the mount point, blocking all requests to /sfs. Thus, the automounter cannot sit on any LOOKUP request for a name in /sfs. It must reply immediately.

The SFS automounter employs two tricks to achieve what previous loopback automounters could not. First, it tags nfsmounter, the process that actually makes the mount system calls, with a reserved group ID (an idea first introduced by HLFSD [18]). By examining the credentials on NFS RPCs, then, the automounter can differentiate NFS calls made on behalf of nfsmounter from those issued for other processes. Second, the automounter creates a number of special ".mnt" mount points on directories with names of the form /sfs/.mnt/0/, /sfs/.mnt/1/, .... The automounter never delays a response to a LOOKUP RPC in the /sfs directory. Instead, it returns a symbolic link redirecting the user to another symbolic link in one of the .mnt mount points. There it delays the result of a READLINK RPC. Because the delayed readlink takes place under a dedicated mount point, however, no other file accesses are affected.

Meanwhile, as the user's process awaits a READLINK reply under /sfs/.mnt/n, the automounter actually mounts the remote file system under /sfs. Because nfsmounter's NFS RPCs are tagged with a reserved group ID, the automounter responds differently to them—giving nfsmounter a different view of the file system from the user's. While users referencing the pathname in /sfs see a symbolic link to /sfs/.mnt/..., nfsmounter sees an ordinary directory on which it can mount the remote file system. Once the mount succeeds, the automounter lets the user see the directory, and responds to the pending READLINK RPC redirecting the user to the original pathname in /sfs, which has now become a directory.

A final problem faced by automounters is that the commonly used getcwd() library function performs an lstat system call on every entry of a directory containing mount points, such as /sfs. Thus, if any of the loopback servers mounted on immediate subdirectories of /sfs become unresponsive, getcwd() might hang, even when run from within a working file system. Since loopback servers may depend on networked resources that become transiently unavailable, a loopback server may well need to become unavailable. When this happens, the loopback server notifies the automounter, and the automounter returns temporary errors to any process attempting to access the problematic mount point (or rather, to any process except nfsmounter, so that unavailable file systems can still be unmounted).

4.4 Asynchronous I/O library

Traditional I/O abstractions and interfaces are ill-suited to completely non-blocking programming of the sort required for NFS loopback servers. Thus, the SFS file system development toolkit contains a new C++ non-blocking I/O library, libasync, to help write programs that avoid any potentially blocking operations. When a function cannot complete immediately, it registers a callback with libasync, to be invoked when a particular asynchronous event occurs. At its core, libasync supports callbacks when file descriptors become ready for I/O, when child processes exit, when a process receives signals, and when the clock passes a particular time. A central dispatch loop polls for such events to occur through the system call select—the only blocking system call a loopback server ever makes.

Two complications arise from this style of event-driven programming in a language like C or C++. First, in languages that do not support closures, it can be inconvenient to bundle up the necessary state one must preserve to finish an operation in a callback. Second, when an asynchronous library function takes a callback and buffer as input and allocates memory for its results, the function's type signature does not make clear which code is responsible for freeing what memory when. Both complications easily lead to programming errors, as we learned bitterly in the first implementation of SFS, which we entirely scrapped.

libasync makes asynchronous library interfaces less error-prone through aggressive use of C++ templates. A heavily overloaded template function, wrap, produces callback objects through a technique much like function currying: wrap bundles up a function pointer and some initial arguments to pass the function, and it returns a function object taking the function's remaining arguments. In other words, given a function:

res_t function (a1_t, a2_t, a3_t);

a call to wrap (function, a1, a2) produces a function object with type signature:

res_t callback (a3_t);

This wrap mechanism permits convenient bundling of code and data into callback objects in a type-safe way. Though the example shows the wrapping of a simple function, wrap can also bundle an object and method pointer with arguments. wrap handles functions and arguments of any type, with no need to declare the combination of types ahead of time. The maximum number of arguments is determined by a parameter in a perl script that actually generates the code for wrap.

    class foo : public bar {
      /* ... */
    };

    void
    function ()
    {
      ref<foo> f = new refcounted<foo>
        (/* constructor arguments */);
      ptr<bar> b = f;
      f = new refcounted<foo>
        (/* constructor arguments */);
      b = NULL;
    }

Figure 3: Example usage of reference-counted pointers

To avoid the programming burden of tracking which of a caller and callee is responsible for freeing dynamically allocated memory, libasync also supports reference-counted garbage collection. Two template types offer reference-counted pointers to objects of type T—ptr<T> and ref<T>. ptr and ref behave identically and can be assigned to each other, except that a ref cannot be NULL. One can allocate a reference-counted version of any type T with the template type refcounted<T>, which takes the same constructor arguments as type T. Figure 3 shows an example use of reference-counted garbage collection. Because reference-counted garbage collection deletes objects as soon as they are no longer needed, one can also rely on destructors of reference-counted objects to release resources more precious than memory, such as open file descriptors.

libasync contains a number of support routines built on top of the core callbacks. It has asynchronous file handles for input and formatted output, an asynchronous DNS resolver, and asynchronous TCP connection establishment. All were implemented from scratch to use libasync's event dispatcher, callbacks, and reference counting. libasync also supplies helpful building blocks for objects that accumulate data and must deal with short writes (when no buffer space is available in the kernel). Finally, it supports asynchronous logging of messages to the terminal or system log.

4.5 Asynchronous RPC library and compiler

The SFS toolkit also supplies an asynchronous RPC library, libarpc, built on top of libasync, and a new RPC compiler, rpcc. rpcc compiles Sun XDR data structures into C++ data structures. Rather than directly output code for serializing the structures, however, rpcc uses templates and function overloading to produce a generic way of traversing data structures at compile time. This allows one to write concise code that actually compiles to a number of functions, one for each NFS data type. Serialization of data in RPC calls is but one application of this traversal mechanism. The ability to traverse NFS data structures automatically turned out to be useful in a number of other situations.

As an example, one of the loopback servers constituting the client side of SFS uses a protocol very similar to NFS for communicating with remote SFS servers. The only difference is that the SFS protocol has more aggressive file attribute caching and lets the server call back to the client to invalidate attributes. Rather than manually extract attribute information from the return structures of 21 different NFS RPCs, the SFS client uses the RPC library to traverse the data structures and extract attributes automatically. While the compiled output consists of numerous functions, most of these are C++ template instantiations automatically generated by the compiler. The source needs only a few functions to overload the traversal's behavior on attribute structures. Moreover, any bug in the source will likely break all 21 NFS functions.

4.6 Stackable NFS manipulators

An SFS support library, libsfsmisc, provides stackable NFS manipulators. Manipulators take one nfsserv object and produce a different one, manipulating any calls from and replies to the original nfsserv object. A loopback NFS server starts with an initial nfsserv object, generally nfsserv_udp, which accepts NFS calls from a UDP socket. The server can then push a bunch of manipulators onto this nfsserv. For example, over the course of developing SFS we stumbled across a number of bugs that caused panics in NFS client implementations. We developed an NFS manipulator, nfsserv_fixup, that works around these bugs. SFS's loopback servers push nfsserv_fixup onto their NFS manipulator stack, and then don't worry about the specifics of any kernel bugs. If we discover another bug to work around, we need only put the workaround in a single place to fix all loopback servers.

Another NFS manipulator is a demultiplexer that breaks a single stream of NFS requests into multiple streams. This allows a single UDP socket to be used as the server side for multiple NFS mount points. The demultiplexer works by tagging all NFS file handles with the number of the mount point they belong to. Though file handles are scattered throughout the NFS call and return types, the tagging was simple to implement using the traversal feature of the RPC compiler.

4.7 Miscellaneous features

The SFS toolkit has several other features. It supplies a small, user-level module, mallock.o, that loopback servers must link against to avoid paging. On systems supporting the mlockall() system call, this is easily accomplished. On other systems, mallock.o manually pins the text and data segments and replaces the malloc() library function with a routine that always returns pinned memory.

Finally, the SFS toolkit contains a number of debugging features, including aggressive memory checking, type checking for accesses to RPC union structures, and easily toggleable tracing and pretty-printing of RPC traffic. Pretty-printing of RPC traffic in particular has proven an almost unbeatable debugging tool. Each NFS RPC typically involves a limited amount of computation. Moreover, separate RPC calls are relatively independent of each other, making most problems easily reproducible. When a bug occurs, we turn on RPC tracing, locate the RPC on which the server is returning a problematic reply, and set a conditional breakpoint to trigger under the same conditions. Once in the debugger, it is generally just a matter of stepping through a few functions to understand how we arrive from a valid request to an invalid reply.

Despite the unorthodox structure of non-blocking daemons, the SFS libraries have made SFS's 60,000+ lines of code (including the toolkit and all daemons) quite manageable. The trickiest bugs we have hit were in NFS implementations. At least the SFS toolkit's tracing facilities let us quickly verify that SFS was behaving correctly and pin the blame on the kernel.

4.8 Limitations of NFS loopback servers

Despite the benefits of NFS loopback servers and their tractability given the SFS toolkit, there are two serious drawbacks that must be mentioned. First, the NFS 2 and 3 protocols do not convey file closes to the server. There are many reasons why a file system implementor might wish to know when files are closed. We have implemented a close simulator as an NFS manipulator, but it cannot be 100% accurate and is thus not suitable for all needs. The NFS 4 protocol [16] does have file closes, which will solve this problem if NFS 4 is deployed.

The other limitation is that, because of the potential risk of deadlock in loopback servers, one can never predicate the completion of an NFS write on that of a write issued to local disk. Loopback servers can access the local disk, provided they do so asynchronously. libasync offers support for doing so using helper processes, or one can do so as an NFS loopback client. Thus, one can build a loopback server that talks to a remote server and keeps a cache on the local disk. However, the loopback server must return from an NFS write once the corresponding remote operation has gone through; it cannot wait for the write to go through in the local cache. Thus, techniques that rely on stable, crash-recoverable writes to local disk, such as those for disconnected operation in CODA [12], cannot easily be implemented in loopback servers; one would need to use raw disk partitions.

5 NFS loopback clients

In addition to implementing loopback servers, libarpc allows applications to behave as NFS clients, making them loopback clients. An NFS loopback client accesses the local hard disk by talking to an in-kernel NFS server, rather than using the standard POSIX open/close/read/write system call interface. Loopback clients have none of the disadvantages of loopback servers. In fact, a loopback client can still access the local file system through system calls. NFS simply offers a lower-level, asynchronous alternative from which some aggressive applications can benefit.

The SFS server software is actually implemented as an NFS loopback client using libarpc. It reaps a number of benefits from this architecture. The first is performance. Using asynchronous socket I/O, the loopback client can have many parallel disk operations outstanding simultaneously. This in turn allows the operating system to achieve better disk arm scheduling and get higher throughput from the disk. Though POSIX does offer optional aio system calls for asynchronous file I/O, the aio routines only operate on open files. Thus, without the NFS loopback client, directory lookups, directory reads, and file creation would all still need to be performed synchronously.
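The limitation is visible in the aio interface itself: every request takes an already-open descriptor. A minimal, illustrative sketch of an asynchronous read with POSIX aio (this is unrelated to SFS's own RPC-based approach; it only shows what the aio routines can and cannot cover):

```cpp
#include <aio.h>
#include <cerrno>
#include <cstdio>
#include <cstring>
#include <fcntl.h>
#include <string>
#include <unistd.h>

// Read the first 'len' bytes of a file asynchronously.  Note that aio_read
// operates on a file descriptor: obtaining that descriptor (open), creating
// a file, or reading a directory still requires a synchronous system call.
std::string aio_read_prefix(const char *path, size_t len) {
    int fd = open(path, O_RDONLY);      // the open itself is synchronous
    if (fd < 0)
        return "";
    std::string buf(len, '\0');
    struct aiocb cb;
    memset(&cb, 0, sizeof(cb));
    cb.aio_fildes = fd;
    cb.aio_buf = &buf[0];
    cb.aio_nbytes = len;
    cb.aio_offset = 0;
    if (aio_read(&cb) < 0) {            // queue the read; returns immediately
        close(fd);
        return "";
    }
    const struct aiocb *list[1] = { &cb };
    while (aio_error(&cb) == EINPROGRESS)
        aio_suspend(list, 1, nullptr);  // wait for the request to complete
    ssize_t n = aio_return(&cb);
    close(fd);
    buf.resize(n > 0 ? static_cast<size_t>(n) : 0);
    return buf;
}
```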

The second benefit of the SFS server as a loopback client is security. The SFS server is of course trusted, as it may be called upon to serve or modify any file. The server must therefore be careful not to perform any operation not permitted to the requesting users. Had the server been implemented on top of the normal file system interface, it would also have needed to perform access control—deciding on its own, for instance, whether or not to honor a request to delete a file. Making such decisions correctly without race conditions is actually quite tricky to do given only the Unix file system interface.¹ As an NFS client, however, it is trivial to do. Each NFS request explicitly specifies the credentials with which to execute the request (which will generally be less privileged than the loopback client itself). Thus, the SFS server simply tags NFS requests with the appropriate user credentials, and the kernel's NFS server makes the access control decision and performs the operation (if approved) atomically.

¹In the example of deleting a file, the server would need to change its working directory to that of the file. Otherwise, between the server's access check and its unlink system call, a bad user could replace the directory with a symbolic link, thus tricking the server into deleting a file in a different (unchecked) directory. Cross-directory renames are even worse—they simply cannot be implemented both atomically and securely. An alternative approach might be for the server to drop privileges before each file system operation, but then unprivileged users could send signals to the server and kill it.

The final benefit of having used a loopback client is in avoiding limits on the number of open files. The total number of open files on SFS clients connected to an SFS server may exceed the maximum number of open files allowed on the server. As an NFS loopback client, the SFS server can access a file without needing a dedicated file descriptor.

A user of the SFS toolkit actually prototyped an SFS server that used the POSIX interface rather than act as a loopback client. Even without implementing leases on attributes, user authentication, or unique st_ino fields, the code was almost as large as the production SFS server and considerably more bug-prone. The POSIX server had to jump through a number of hoops to deal with such issues as the maximum number of open files.

5.1 Limitations of NFS loopback clients

The only major limitation on NFS loopback clients is that they must run as root. Unprivileged programs cannot access the file system with the NFS interface. A related concern is that the value of NFS file handles must be carefully guarded. If even a single file handle of a directory is disclosed to an untrusted user, the user can access any part of the file system as any user. Fortunately, the SFS RPC compiler provides a solution to this problem. One can easily traverse an arbitrary NFS data structure and encrypt or decrypt all file handles encountered. The SFS toolkit contains a support routine for doing so.

A final annoyance of loopback clients is that the file systems they access must be exported via NFS. The actual mechanics of exporting a file system vary significantly between versions of Unix. The toolkit does not yet have a way of exporting file systems automatically. Thus, users must manually edit system configuration files before the loopback client will run.

#Lines   Function
  19     includes and global variables
  22     command-line argument parsing
   8     locate and spawn nfsmounter
   5     get server's IP address
  11     ask server's portmap for NFS port
  14     ask server for NFS file handle
  20     call nfsmounter for mount
  20     relay NFS calls
 119     Total

Figure 4: Lines of code in dumbfs

6 Applications of the toolkit

The SFS toolkit has been used to build a number of systems. SFS itself is a distributed file system consisting of two distinct protocols, a read-write and a read-only protocol. On the client side, each protocol is implemented by a separate loopback server. On the server side, the read-write protocol is implemented by a loopback client. (The read-only server uses the POSIX interface.) A number of non-SFS file systems have been built, too, including a file system interface to CVS, a file system to FTP/HTTP gateway, and file system interfaces to several databases.

The toolkit also lets one develop distributed file systems and fit them into the SFS framework. The SFS read-write protocol is very similar to NFS 3. With few modifications, therefore, one can transform a loopback server into a network server accessible from any SFS client. SFS's libraries automatically handle key management, encryption and integrity checking of session traffic, user authentication, and mapping of user credentials between local and remote machines. Thus, a distributed file system built with the toolkit can without much effort provide a high level of security against network attacks.

Finally, the asynchronous I/O library from the toolkit has been used to implement more than just file systems. People have used it to implement TCP proxies, caching web proxies, and TCP to UDP proxies for networks with high loss. Asynchronous I/O is an extremely efficient tool for implementing network servers. A previous version of the SFS toolkit was used to build a high performance asynchronous SMTP mail server that survived a distributed mail-bomb attack. (The "hybris worm" infected thousands of machines and made them all send mail to our server.)

6.1 dumbfs – A simple loopback server

To give a sense for the complexity of using the SFS toolkit, we built dumbfs, the simplest possible loopback file system. dumbfs takes as arguments a server name, a pathname, and a mount point. It creates a loopback NFS mount on the mount point, and relays NFS calls to a remote NFS server. Though this functionality may sound worthless, such a utility does actually have a use. Because the SFS RPC libraries will trace and pretty-print RPC traffic, dumbfs can be used to analyze exactly how an NFS client and server are behaving.

The implementation of dumbfs required 119 lines of code (including blank lines for readability), as shown in Figure 4. Notably missing from the breakdown is any clean-up code. dumbfs does not need to catch any signals. If it dies for any reason, nfsmounter will clean up the mount point.

Figure 5 shows the implementation of the NFS call relaying code. Function dispatch is called for each NFS RPC. The first two lines manipulate the "auth unix parameters" of the NFS call—RPC terminology for user and group IDs. To reuse the same credentials in an outgoing RPC, they must be converted to an AUTH* type. The AUTH* type is defined by the RPC library in the operating system's C library, but the authopaque routines are part of the SFS toolkit.

The third line of dispatch makes an outgoing NFS RPC. nfsc is an RPC handle for the remote NFS server. nc->getvoidarg returns a pointer to the RPC argument structure, cast to void*. nc->getvoidres similarly returns a pointer to the appropriate result type, also cast to void*. Because the RPC library is asynchronous, nfsc->call will return before the RPC completes. dispatch must therefore create a callback, using wrap to bundle together the function reply with the argument nc.

When the RPC finishes, the library makes the callback, passing an additional argument of type clnt_stat to indicate any RPC-level errors (such as a timeout). If such an error occurs, it is logged and propagated back as the generic RPC failure code SYSTEM_ERR. warn is the toolkit's asynchronous logging facility. The syntax is similar to C++'s cout, but warn additionally converts RPC and NFS enum error codes to descriptive strings. If there is no error, reply simply returns the result data structure just filled in by the outgoing NFS RPC.
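The wrap primitive is essentially currying: it bundles a function with some leading arguments and returns a callback expecting the remaining ones. With modern C++ the idea can be sketched in a few lines (the real wrap predates lambdas and is generated by templates, so this is an illustration of the behavior, not the toolkit's implementation):

```cpp
#include <cassert>
#include <utility>

// A wrap-like helper: bind the leading arguments now, and return a callable
// that takes whatever trailing arguments the event library supplies later.
template <typename F, typename... Bound>
auto wrap(F f, Bound... bound) {
    return [f, bound...](auto &&...rest) {
        return f(bound..., std::forward<decltype(rest)>(rest)...);
    };
}

// Example mirroring dumbfs: reply(request, status) is registered as
// wrap(reply, request), and the library later invokes the callback with
// just the completion status.
static int last_request, last_status;
void reply(int request_id, int rpc_status) {
    last_request = request_id;
    last_status = rpc_status;
}
```

Calling `auto cb = wrap(reply, 42); cb(7);` invokes reply(42, 7), just as the RPC library invokes wrap(reply, nc) with the clnt_stat result when the call completes.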

6.1.1 dumbfs performance

To analyze the inherent overhead of a loopback server, we measured the performance of dumbfs. We ran experiments between two 800 MHz Pentium III machines, each with 256 MBytes of PC133 RAM and a Seagate ST318451LW disk. The two machines were connected with 100 Mbit/sec switched Ethernet. The client ran FreeBSD 4.2, and the server OpenBSD 2.8.

    void
    dispatch(nfscall *nc)
    {
      static AUTH *ao = authopaque_create();
      authopaque_set(ao, nc->getaup());
      nfsc->call(nc->proc(), nc->getvoidarg(), nc->getvoidres(),
                 wrap(reply, nc), ao);
    }

    static void
    reply(nfscall *nc, enum clnt_stat stat)
    {
      if (stat) {
        warn << "NFS server: " << stat << "\n";
        nc->reject(SYSTEM_ERR);
      }
      else
        nc->reply(nc->getvoidres());
    }

Figure 5: dumbfs dispatch routine

To isolate the loopback server's worst characteristic, namely its latency, we measured the time to perform an operation that requires almost no work from the server—an unauthorized fchown system call. NFS required an average of 186 μsec, dumbfs 320 μsec. Fortunately, raw RPC latency is a minor component of the performance of most real applications. In particular, the time to process any RPC that requires a disk seek will dwarf dumbfs's latency. As shown in Figure 7, the time to compile emacs 20.7 is only 4% slower on dumbfs than NFS. Furthermore, NFS version 4 has been specifically designed to tolerate high latency, using such features as batching of RPCs. In the future, latency should present even less of a problem for NFS 4 loopback servers.

To evaluate the impact of dumbfs on data movement, we measured the performance of sequentially reading a 100 MByte sparse file that was not in the client's buffer cache. Reads from a sparse file cause an NFS server to send blocks of zeros over the network, but do not require the server to access the disk. This represents the worst case scenario for dumbfs, because the cost of accessing the disk is the same for both file systems and would only serve to diminish the relative difference between NFS 3 and dumbfs. NFS 3 achieved a throughput of 11.2 MBytes per second (essentially saturating the network), while dumbfs achieved 10.3. To get a rough idea of CPU utilization, we used the top command to examine system activity while streaming data from a 50 GByte sparse file through dumbfs. The idle time stayed between 30–35%.

6.2 cryptfs – An encrypting file system

As a more useful example, we built an encrypting file system in the spirit of CFS [3]. Starting from dumbfs, it took two evenings and an additional 600 lines of C++ to build cryptfs—a cryptographic file system with a very crude interface. cryptfs takes the same arguments as dumbfs, prompts for a password on startup, then encrypts all file names and data.

cryptfs was inconvenient to use because it could only be run as root and unmounting a file system required killing the daemon. However, it worked so well that we spent an additional week building cryptfsd—a daemon that allows multiple encrypted directories to be mounted at the request of non-root users. cryptfsd uses almost the same encrypting NFS translator as cryptfs, but is additionally secure for use on machines with untrusted non-root users, provides a convenient interface, and works with SFS as well as NFS.

6.2.1 Implementation

The final system is 1,920 lines of code, including cryptfsd itself, a utility cmount to mount encrypted directories, and a small helper program pathinfo invoked by cryptfsd. Figure 6 shows a breakdown of the lines of code. By comparison, CFS is over 6,000 lines (though of course it has different features from cryptfs). CFS's NFS request handling code (its analogue of nfs.C) is 2,400 lines.

#Lines  File           Function
223     cryptfs.h      Structure definitions, inline & template functions, global declarations
90      cryptfsd.C     main function, parse options, launch nfsmounter
343     findfs.C       Translate user-supplied pathname to NFS/SFS server address and file handle
641     nfs.C          NFS dispatch routine, encrypt/decrypt file names & contents
185     afs.C          NFS server for /cfs, under which encrypted directories are mounted
215     adm.C          cryptfsadm dispatch routine—receive user mount requests
26      cryptfsadm.x   cryptfsadm RPC protocol for requesting mounts from cryptfsd
63      cmount.C       Utility to mount an encrypted directory
134     pathinfo.c     Helper program—run from findfs.C to handle untrusted requests securely
1,920   Total

Figure 6: Lines of code in cryptfsd. The last two source files are for stand-alone utilities.

cryptfsd encrypts file names and contents using Rijndael, the AES encryption algorithm. File names are encrypted in CBC mode, first forwards then backwards to ensure that every byte of the encrypted name depends on every byte of the original name. Symbolic links are treated similarly, but have a 48-bit random prefix added so that two links to the same file will have different encryptions.

The encryption of file data does not depend on other parts of the file. To encrypt a 16-byte block of plaintext file data, cryptfsd first encrypts the block's byte offset in the file and a per-file "initialization vector," producing 16 bytes of random-looking data. It exclusive-ors this data with the plaintext block and encrypts again. Thus, any data blocks repeated within or across files will have different encryptions.
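The structure of the data-block scheme can be sketched as follows. The block cipher here is a deliberately insecure toy standing in for Rijndael (a real implementation would use AES), and the way the byte offset is folded into the IV is an assumption made for illustration; only the two-step structure—whiten = E(offset, IV), ciphertext = E(plaintext XOR whiten)—mirrors the text.

```cpp
#include <cassert>
#include <cstdint>
#include <cstring>

// Toy invertible 16-byte block cipher standing in for Rijndael.
// NOT secure -- it exists only to make the example self-contained.
typedef uint8_t Block[16];

static inline uint8_t rotl8(uint8_t v, int s) { return (uint8_t)((v << s) | (v >> (8 - s))); }
static inline uint8_t rotr8(uint8_t v, int s) { return (uint8_t)((v >> s) | (v << (8 - s))); }

void encrypt_block(Block b, const Block key) {
    for (int r = 0; r < 4; r++) {
        for (int i = 0; i < 16; i++)
            b[i] = rotl8((uint8_t)(b[i] ^ key[(i + r) & 15]), 3);
        for (int i = 1; i < 16; i++)   // forward diffusion chain
            b[i] ^= b[i - 1];
        for (int i = 14; i >= 0; i--)  // backward diffusion chain
            b[i] ^= b[i + 1];
    }
}

void decrypt_block(Block b, const Block key) {
    for (int r = 3; r >= 0; r--) {
        for (int i = 0; i < 15; i++)   // undo backward chain
            b[i] ^= b[i + 1];
        for (int i = 15; i >= 1; i--)  // undo forward chain
            b[i] ^= b[i - 1];
        for (int i = 0; i < 16; i++)
            b[i] = (uint8_t)(rotr8(b[i], 3) ^ key[(i + r) & 15]);
    }
}

// Encrypt one 16-byte block of file data in place, at byte offset 'off':
// whiten = E(offset folded into IV); ciphertext = E(plaintext XOR whiten).
void encrypt_data_block(Block data, uint64_t off, const Block iv, const Block key) {
    Block w;
    memcpy(w, iv, 16);
    for (int i = 0; i < 8; i++)        // fold the byte offset into the IV copy
        w[i] ^= (uint8_t)(off >> (8 * i));
    encrypt_block(w, key);             // 16 bytes of random-looking data
    for (int i = 0; i < 16; i++)
        data[i] ^= w[i];               // whiten the plaintext
    encrypt_block(data, key);          // and encrypt again
}

void decrypt_data_block(Block data, uint64_t off, const Block iv, const Block key) {
    Block w;
    memcpy(w, iv, 16);
    for (int i = 0; i < 8; i++)
        w[i] ^= (uint8_t)(off >> (8 * i));
    encrypt_block(w, key);
    decrypt_block(data, key);
    for (int i = 0; i < 16; i++)
        data[i] ^= w[i];
}
```

Because the whitening value depends on both the offset and the per-file IV, identical plaintext blocks at different offsets (or in different files) encrypt to different ciphertexts, as the text describes.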

cryptfsd could have used a file's inode number or a hash of its NFS file handle as the initialization vector. Unfortunately, such vectors would not survive a traditional backup and restore. Instead, cryptfsd stores the initialization vector at the beginning of the file. All file contents are then shifted forward 512 bytes. We used 512 bytes though 8 would have sufficed because applications may depend on the fact that modern disks write aligned 512-byte sectors atomically.

Many NFS servers use 8 KByte aligned buffers. Shifting a file's contents can therefore hurt the performance of random, aligned 8 KByte writes to a large file; the server may end up issuing disk reads to produce complete 8 KByte buffers. Fortunately, reads are not necessary in the common case of appending to files. Of course, cryptfs could have stored the initialization vector elsewhere. CFS, for instance, stores initialization vectors outside of files in symbolic links. This technique incurs more synchronous disk writes for many metadata operations, however. It also weakens the semantics of the atomic rename operation. We particularly wished to avoid deviating from traditional crash-recovery semantics for operations like rename.

Like CFS, cryptfs does not handle sparse files properly. When a file's size is increased with the truncate system call or with a write call beyond the file's end, any unwritten portions of the file will contain garbage rather than 0-valued bytes.

The SFS toolkit simplified cryptfs's implementation in several ways. Most importantly, every NFS 3 RPC (except for NULL) can optionally return file attributes containing file size information. Some operations additionally return a file's old size before the RPC executed. cryptfs must adjust these sizes to hide the fact that it shifts file contents forward. (Also, because Rijndael encrypts blocks of 16 bytes, cryptfs adds 16 bytes of padding to files whose length is not a multiple of 16.)
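The adjustment itself is simple arithmetic. A sketch, with two caveats: the helper names are ours, and we simplify the padding rule to rounding the plaintext up to a whole number of 16-byte blocks (the paper's exact rule may differ slightly), assuming the precise plaintext length is recorded in the 512-byte header so padding can be distinguished from data.

```cpp
#include <cassert>
#include <cstdint>

const uint64_t kHeader = 512;   // initialization vector lives here
const uint64_t kBlock  = 16;    // Rijndael block size

// Size of the file as stored on the server: the header, plus the plaintext
// rounded up to a whole number of cipher blocks.
uint64_t physical_size(uint64_t logical) {
    return kHeader + ((logical + kBlock - 1) / kBlock) * kBlock;
}

// Upper bound on the plaintext size recovered from a server-side size.
// The exact length must come from the header, since up to 15 bytes of the
// final block may be padding.
uint64_t logical_size_bound(uint64_t physical) {
    return physical <= kHeader ? 0 : physical - kHeader;
}
```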

Manually writing code to adjust file sizes wherever they appear in the return types of 21 different RPC calls would have been a daunting and error-prone task. However, the SFS toolkit uses the RPC compiler's data structure traversal functionality to extract all attributes from any RPC data structure. Thus, cryptfs only needs a total of 15 lines of code to adjust file sizes in all RPC replies.

cryptfs's implementation more generally benefited from the SFS toolkit's single dispatch routine architecture. Traditional RPC libraries call a different service function for every RPC procedure defined in a protocol. The SFS toolkit, however, does not demultiplex RPC procedures. It passes them to a single function like dispatch in Figure 5. When calls do not need to be demultiplexed (as was the case with dumbfs), this vastly simplifies the implementation.

File name encryption in particular was simplified by the single dispatch routine architecture. A total of 9 NFS 3 RPCs contain file names in their argument types. However, for 7 of these RPCs, the file name and directory file handle are the first thing in the argument structure. These calls can be handled identically. cryptfs therefore implements file name encryption in a switch statement with 7 cascaded "case" statements and two special cases.

(Had the file names been less uniformly located in argument structures, of course, we could have used the RPC traversal mechanism to extract pointers to them.)

Grouping code by functionality rather than RPC procedure also results in a functioning file system at many more stages of development. That in turn facilitates incremental testing. To develop cryptfs, we started from dumbfs and first just fixed the file sizes and offsets. Then we special-cased read and write RPCs to encrypt and decrypt file contents. Once that worked, we added file name encryption. At each stage we had a working file system to test.

A function to "encrypt file names whatever the RPC procedure" is easy to test and debug when the rest of the file system works. With a traditional demultiplexing RPC library, however, the same functionality would have been broken across 9 functions. The natural approach in that case would have been to implement one NFS RPC at a time, rather than one feature at a time, thus arriving at a fully working system only at the end.
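Under assumed, simplified types, the cascaded-case pattern looks like the following. The real NFS 3 procedure names and argument structures differ in detail, and the name transform here is a placeholder, not cryptfs's Rijndael CBC encryption; the sketch only shows how 7 uniform procedures share one case body.

```cpp
#include <cassert>
#include <cstdint>
#include <string>

// Simplified stand-in for the NFS 3 argument types: for most name-bearing
// RPCs, the argument structure begins with a directory handle and a name.
struct diropargs { uint64_t dir_fh; std::string name; };

enum nfs_proc { LOOKUP, CREATE, MKDIR, REMOVE, RMDIR, SYMLINK, MKNOD,
                RENAME_PROC, READ, WRITE };

// Placeholder name transform (an involution, so applying it twice restores
// the original); the real code encrypts with Rijndael in CBC mode.
std::string transform_name(const std::string &n) {
    std::string out(n);
    for (char &c : out)
        c = (char)(c ^ 0x55);
    return out;
}

// One dispatch routine: the 7 uniform procedures share a cascaded case.
void encrypt_names(nfs_proc proc, void *argp) {
    switch (proc) {
    case LOOKUP:
    case CREATE:
    case MKDIR:
    case REMOVE:
    case RMDIR:
    case SYMLINK:
    case MKNOD: {
        diropargs *a = static_cast<diropargs *>(argp);
        a->name = transform_name(a->name);
        break;
    }
    case RENAME_PROC:
        // special case: two (directory, name) pairs in the arguments
        break;
    default:
        break;   // no file name in the arguments
    }
}
```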

6.2.2 cryptfs performance

To evaluate cryptfs's performance, we measured the time to untar the emacs 20.7 software distribution, configure the software, compile it, and then delete the build tree. Figure 7 shows the results. The white bars indicate the performance on FreeBSD's local FFS file system. The solid gray bars show NFS 3 over UDP. The solid black bars show the performance of cryptfs. The leftward slanting black bars give the performance of dumbfs for comparison.

The untar phase stresses data movement and latency. The delete and configure phases mostly stress latency. The compile phase additionally requires CPU time—it consumes approximately 57 seconds of user-level CPU time as reported by the time command. In all phases, cryptfs is no more than 10% slower than NFS 3. Interestingly enough, cryptfs actually outperforms NFS 3 in the delete phase. This does not mean that cryptfs is faster at deleting the same files. When we used NFS 3 to delete an emacs build tree produced by cryptfs, we observed the same performance as when deleting it with cryptfs. Some artifact of cryptfs's file system usage—perhaps the fact that almost all file names are the same length—results in directory trees that are faster to delete.

For comparison, we also ran the benchmark on CFS using the Blowfish cipher.² The black diagonal stripes labeled "CFS-async" show its performance. CFS outperforms even straight NFS 3 on the untar phase. It does so by performing asynchronous file writes when it should perform synchronous ones. In particular, the fsync system call does nothing on CFS. This is extremely dangerous—particularly since CFS runs a small risk of deadlocking the machine and requiring a reboot. Users can lose the contents of files they edit.

²Actually, CFS did not execute the full benchmark properly—some directories could not be removed, putting rm into an infinite loop. Thus, for CFS only, we took out the benchmark's rm -rf command, instead deleting as much of the build tree as possible with find/xargs and then renaming the emacs-20.7 directory to a garbage name.

We fixed CFS's dangerous asynchronous writes by changing it to open all files with the O_FSYNC flag. The change did not affect the number of system calls made, but ensured that writes were synchronous, as required by the NFS 2 protocol CFS implements. The results are shown with gray diagonal stripes, labeled "CFS-sync." CFS-sync performs worse than cryptfs on all phases. For completeness, we also verified that cryptfs can beat NFS 3's performance by changing all writes to unstable and giving fake replies to COMMIT RPCs—effectively what CFS does.
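The type of change involved can be sketched as follows; the helper name is ours, and we use the POSIX spelling O_SYNC where it exists (BSD systems historically spelled it O_FSYNC), so every write() reaches stable storage before returning.

```cpp
#include <fcntl.h>
#include <unistd.h>

// On FreeBSD the synchronous-write flag is named O_FSYNC; POSIX calls it
// O_SYNC.  Accept either spelling.
#ifndef O_FSYNC
#define O_FSYNC O_SYNC
#endif

// Open a file such that every write() is synchronous, as the NFS 2
// protocol requires of a server's stable writes.
int open_for_stable_writes(const char *path) {
    return open(path, O_WRONLY | O_CREAT | O_FSYNC, 0600);
}
```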

Because of the many architectural differences between cryptfs and CFS, the performance measurements should not be taken as a head-to-head comparison of SFS's asynchronous RPC library with the standard libc RPC facilities that CFS uses. However, CFS is a useful package with performance that many people find acceptable given its functionality. These experiments show that, armed with the SFS toolkit, the author could put together a roughly equivalent file system in just a week and a half.

Building cryptfsd was a pleasant experience, because every little piece of functionality added could immediately be tested. The code that actually manipulates NFS RPCs is less than 700 lines. It is structured as a bunch of small manipulations to NFS data structures being passed between an NFS client and server. There is a mostly one-to-one mapping from RPCs received to those made. Thus, it is easy for cryptfs to provide traditional crash-recovery semantics.

7 Summary

User-level software stands to gain much by using NFS over the loopback interface. By emulating NFS servers, portable user-level programs can implement new file systems. Such loopback servers must navigate several tricky issues, including deadlock, vnode and mount point locking, and the fact that a loopback server crash can wedge a machine and require a reboot. The SFS file system development toolkit makes it relatively easy to avoid these problems and build production-quality loopback NFS servers.

Figure 7: Execution time of emacs untar, configure, compile, and delete. (Bar chart; the y-axis is execution time in seconds, with bars for Local, NFS 3, and cryptfs.)

The SFS toolkit also includes an asynchronous RPC library that lets applications access the local file system using NFS. For aggressive applications, NFS can actually be a better interface than the traditional open/close/read/write. NFS allows asynchronous access to the file system, even for operations such as file creation that are not asynchronously possible through system calls. Moreover, NFS provides a lower-level interface that can help avoid certain race conditions in privileged software.

Acknowledgments

The author is grateful to Frans Kaashoek for implementing the POSIX-based SFS server mentioned in Section 5, for generally using and promoting the SFS toolkit, and for his detailed comments on several drafts of this paper. The author also thanks Marc Waldman, Benjie Chen, Emmett Witchel, and the anonymous reviewers for their helpful feedback. Finally, thanks to Yoonho Park for his help as shepherd for the paper.

Availability

The SFS file system development toolkit is free software, released under version 2 of the GNU General Public License. The toolkit comes bundled with SFS, available from http://www.fs.net/.

References

[1] Albert D. Alexandrov, Maximilian Ibel, Klaus E. Schauser, and Chris J. Scheiman. Extending the operating system at the user level: the Ufo global file system. In Proceedings of the 1997 USENIX, pages 77–90. USENIX, January 1997.

[2] Matt Bishop and Michael Dilger. Checking for race conditions in file accesses. Computing Systems, 9(2):131–152, Spring 1996.

[3] Matt Blaze. A cryptographic file system for Unix. In 1st ACM Conference on Communications and Computing Security, pages 9–16, November 1993.

[4] B. Callaghan, B. Pawlowski, and P. Staubach. NFS version 3 protocol specification. RFC 1813, Network Working Group, June 1995.

[5] Brent Callaghan and Tom Lyon. The automounter. In Proceedings of the Winter 1989 USENIX, pages 43–51. USENIX, 1989.

[6] Miguel Castro and Barbara Liskov. Practical Byzantine fault tolerance. In Proceedings of the 3rd Symposium on Operating Systems Design and Implementation, pages 173–186, February 1999.

[7] Vincent Cate. Alex—a global filesystem. In Proceedings of the USENIX File System Workshop, May 1992.

[8] Jeremy Fitzhardinge. UserFS. http://www.goop.org/~jeremy/userfs/.

[9] David K. Gifford, Pierre Jouvelot, Mark Sheldon, and James O'Toole. Semantic file systems. In Proceedings of the 13th ACM Symposium on Operating Systems Principles, pages 16–25, Pacific Grove, CA, October 1991. ACM.

[10] Cary G. Gray and David R. Cheriton. Leases: An efficient fault-tolerant mechanism for distributed file cache consistency. In Proceedings of the 12th ACM Symposium on Operating Systems Principles. ACM, 1989.

[11] John S. Heidemann and Gerald J. Popek. File system development with stackable layers. ACM Transactions on Computer Systems, 12(1):58–89, February 1994.

[12] James J. Kistler and M. Satyanarayanan. Disconnected operation in the Coda file system. ACM Transactions on Computer Systems, 10(1):3–25, 1992.

[13] David Mazières, Michael Kaminsky, M. Frans Kaashoek, and Emmett Witchel. Separating key management from file system security. In Proceedings of the 17th ACM Symposium on Operating Systems Principles, pages 124–139, Kiawah Island, SC, 1999. ACM.

[14] Jan-Simon Pendry and Nick Williams. Amd – The 4.4BSD Automounter Reference Manual. London, SW7 2BZ, UK. Manual comes with amd software distribution.

[15] Russel Sandberg, David Goldberg, Steve Kleiman, Dan Walsh, and Bob Lyon. Design and implementation of the Sun network filesystem. In Proceedings of the Summer 1985 USENIX, pages 119–130, Portland, OR, 1985. USENIX.

[16] S. Shepler, B. Callaghan, D. Robinson, R. Thurlow, C. Beame, M. Eisler, and D. Noveck. NFS version 4 protocol. RFC 3010, Network Working Group, December 2000.

[17] Assar Westerlund and Johan Danielsson. Arla—a free AFS client. In Proceedings of the 1998 USENIX, Freenix track, New Orleans, LA, June 1998. USENIX.

[18] Erez Zadok and Alexander Dupuy. HLFSD: Delivering email to your $HOME. In Proceedings of the Systems Administration Conference (LISA VII), Monterey, CA, 1993. USENIX.

[19] Erez Zadok and Jason Nieh. FiST: A language for stackable file systems. In Proceedings of the 2000 USENIX, June 2000.

英语选修六课文翻译Unit5 The power of nature An exciting job的课文原文和翻译

AN EXCITING JOB I have the greatest job in the world. I travel to unusual places and work alongside people from all over the world. Sometimes working outdoors, sometimes in an office, sometimes using scientific equipment and sometimes meeting local people and tourists, I am never bored. Although my job is occasionally dangerous, I don't mind because danger excites me and makes me feel alive. However, the most important thing about my job is that I help protect ordinary people from one of the most powerful forces on earth - the volcano. I was appointed as a volcanologist working for the Hawaiian V olcano Observatory (HVO) twenty years ago. My job is collecting information for a database about Mount Kilauea, which is one of the most active volcanoes in Hawaii. Having collected and evaluated the information, I help other scientists to predict where lava from the volcano will flow next and how fast. Our work has saved many lives because people in the path of the lava can be warned to leave their houses. Unfortunately, we cannot move their homes out of the way, and many houses have been covered with lava or burned to the ground. When boiling rock erupts from a volcano and crashes back to earth, it causes less damage than you might imagine. This is because no one lives near the top of Mount Kilauea, where the rocks fall. The lava that flows slowly like a wave down the mountain causes far more damage because it

八年级下册3a课文

八年级下学期全部长篇课文 Unit 1 3a P6 In ten years , I think I'll be a reporter . I'll live in Shanghai, because I went to Shanfhai last year and fell in love with it. I think it's really a beautiful city . As a reporter, I think I will meet lots of interesting people. I think I'll live in an apartment with my best friends, because I don' like living alone. I'll have pets. I can't have an pets now because my mother hates them, and our apartment is too small . So in ten yers I'll have mny different pets. I might even keep a pet parrot!I'll probably go skating and swimming every day. During the week I'll look smart, and probably will wear a suit. On the weekend , I'll be able to dress more casully. I think I'll go to Hong Kong vacation , and one day I might even visit Australia. P8 Do you think you will have your own robot In some science fiction movies, people in the future have their own robots. These robots are just like humans. They help with the housework and do most unpleasant jobs. Some scientists believe that there will be such robots in the future. However, they agree it may take hundreds of years. Scientist ae now trying to make robots look like people and do the same things as us. Janpanese companies have already made robts walk and dance. This kond of roots will also be fun to watch. But robot scientist James White disagrees. He thinks that it will be difficult fo a robot to do the same rhings as a person. For example, it's easy for a child to wake up and know where he or she is. Mr White thinks that robots won't be able to do this. But other scientists disagree. They think thast robots will be able t walk to people in 25 to 50tars. Robots scientists are not just trying to make robots look like people . For example, there are already robots working in factories . These robots look more like huge arms. They do simple jobs over and over again. People would not like to do such as jobs and would get bored. But robots will never bored. 
In the futhre, there will be more robots everwhere, and humans will have less work to do. New robots will have different shapes. Some will look like humans, and others might look like snakes. After an earthquake, a snake robot could help look for people under buildings. That may not seem possibe now, but computers, space rockets and even electric toothbrushes seemed

Elective 6 English textbook original texts

Senior English Elective 6, Unit 1: A SHORT HISTORY OF WESTERN PAINTING

Art is influenced by the customs and faith of a people. Styles in Western art have changed many times. As there are so many different styles of Western art, it would be impossible to describe all of them in such a short text. Consequently, this text will describe only the most important ones, starting from the sixth century AD.

The Middle Ages (5th to the 15th century AD)

During the Middle Ages, the main aim of painters was to represent religious themes. A conventional artist of this period was not interested in showing nature and people as they really were. A typical picture at this time was full of religious symbols, which created a feeling of respect and love for God. But it was evident that ideas were changing in the 13th century when painters like Giotto di Bondone began to paint religious scenes in a more realistic way.

The Renaissance (15th to 16th century)

During the Renaissance, new ideas and values gradually replaced those held in the Middle Ages. People began to concentrate less on

English Elective 6 text translation, Unit 5 (Word version)

English Elective 6 text translation, Unit 5

English Elective 6, Unit 5 Reading: An Exciting Job

I have the greatest job in the world. I travel to unusual places and work alongside people from all over the world, sometimes working outdoors, sometimes in an office, sometimes using scientific equipment and sometimes meeting local people and tourists. I am never bored. Although my job is occasionally dangerous, I don't mind, because danger excites me and makes me feel alive. However, the most important thing about my job is that I help protect ordinary people from one of the most powerful forces on earth - the volcano.

I was appointed as a volcanologist working for the Hawaiian Volcano Observatory (HVO) twenty years ago. My job is collecting information for a database about Mount Kilauea, which is one of the most active volcanoes in Hawaii. Having collected and evaluated the information, I help other scientists to predict where lava from the volcano will flow next and how fast. Our work has saved many lives, because people in the path of the lava can be warned to leave their houses. Unfortunately, we cannot move their homes out of the way, and many houses have been covered with lava or burned to the ground.

When boiling rock erupts from a volcano and crashes back to earth, it causes less damage than you might imagine. This is because no one lives near the top of Mount Kilauea, where the rocks fall. The lava that flows slowly like a wave down the mountain causes far more damage because it buries everything in its path under the molten rock. However, the eruption itself is really exciting to watch and I shall never forget my first sight of one.

It was in the second week after I arrived in Hawaii. Having worked hard all day, I went to bed early. I was fast asleep when suddenly my bed began shaking and I heard a strange sound, like a railway train passing my window. Having experienced quite a few earthquakes in Hawaii already, I didn't take much notice. I was about to go back to sleep when suddenly my bedroom became as bright as day. I ran out of the house into the back garden, where I could see Mount Kilauea in the distance.
There had been an eruption from the side of the mountain, and red hot lava was fountaining hundreds of metres into the air. It was an absolutely fantastic sight. The day after this eruption I was lucky enough to have a much closer look at it. Two other scientists and I were driven up the mountain and dropped as close as possible to the crater that had been formed during the eruption. Having earlier collected special clothes from the observatory, we put them on before we went any closer. All three of us looked like spacemen. We had white protective suits that covered our whole body, helmets, big boots and special gloves. It was not easy to walk in these suits, but we slowly made our way to the edge of the crater and looked down into the red, boiling centre. The other two climbed down into the crater to collect some lava for later study, but this being my first experience, I stayed at the top and watched them.

PEP English Elective 6, Unit 5: The Power of Nature (An Exciting Job)

Senior 2 English teaching design, Book 6 Unit 5 Reading: An Exciting Job

1. Teaching Goals:
a. To know how to read some words and phrases.
b. To grasp and remember the detailed information of the reading material.
c. To understand the general idea of the passage.
d. To develop some basic reading skills.

2. Teaching key and difficult points:
a. To understand the general idea of the passage.
b. To develop some basic reading skills.

Step I: Lead-in and Pre-reading
Let's share a movie.
T: What happened in the movie?
S: A volcano was erupting. All of them felt frightened/surprised/astonished/scared...
T: What do you think of volcano eruptions, and what can we do about them?
S: A volcano eruption can do great damage to human beings. It seems that we human beings are powerless in front of these natural forces. But eruptions can be predicted and damage can be reduced.
T: Who does this kind of job, and what do you think of the job?
S: A volcanologist. It's dangerous.
T: I think it's exciting. OK, this class, let's learn An Exciting Job. First, I want to show you the goals of this class.

Step II: Pre-reading
Let the students take out their papers and check them in groups, and then write their answers on the blackboard (self-learning). Some words and phrases: volcano, erupt, alongside, appoint, equipment, volcanologist, database, evaluate, excite, fantastic, fountain, absolutely, unfortunately, potential, be compared with..., protect...from..., be appointed as, burn to the ground, be about to do sth., make one's way. Check their answers and then let them lead the reading.

Step III: Fast-reading
This is a narrative text, a volcanologist's first-person account. The writer first describes the nature of his work, explaining that the main reason he loves it is that it helps protect people from volcanic attacks. He then recounts the experience of going to a crater with two other scientists. Finally, he expresses his enthusiasm for his work: many years later, volcanoes still fascinate him.

Skimming
I. Read the passage and answer (Group 4):
1. Does the writer like his job? (Yes.)
2. Where is Mount Kilauea? (It is in Hawaii.)
3. What is the volcanologist wearing when getting close to the crater? (He is wearing a white protective suit that covers his whole body, a helmet, big boots and

Unit 5 Reading: An Exciting Job (lesson presentation script)

Unit 5 Reading: An Exciting Job — lesson presentation script
Liu Baowei

Part 1: My understanding of this lesson

The analysis of the teaching material: This lesson is a reading passage. It plays a very important part in the English teaching of this unit. It tells us about the writer's exciting job as a volcanologist. From studying the passage, students can gain basic knowledge about volcanoes and get a feel for the occupation of volcanologist. So here are my teaching goals:
1. Ability goal: Enable the students to learn about the powerful natural force of the volcano and the work of a volcanologist.
2. Learning ability goal: Help the students learn how to analyze the way the writer describes his exciting job.
3. Emotional goal: Make the students love nature and love their jobs. Learn how to express fear and anxiety.

Teaching important points: sentence structures
1. I was about to go back to sleep when suddenly my bedroom became as bright as day.
2. Having studied volcanoes now for more than twenty years, I am still amazed at their beauty as well as their potential to cause great damage.

Teaching difficult points:
1. Use your own words to retell the text.
2. Discuss natural disasters and the students' love for their future jobs.

Something about the students:
1. The students know something about volcanoes, but they don't know the detailed information.
2. They lack vocabulary.
3. They don't often use English to express themselves and communicate with others.

An Exciting Job (Chinese translation)

My job is the greatest in the world. The places I travel to are rare and unusual, and the people I meet are interesting people from all over the world. Sometimes I work outdoors, sometimes in an office; sometimes my work requires scientific instruments, and sometimes I meet local people and tourists. But I am never bored. Although my work occasionally brings danger, I don't mind, because danger stimulates me and makes me feel alive. Most important of all, through my work I can protect people from one of the greatest natural forces in the world - the volcano.

I am a volcanologist working at the Hawaiian Volcano Observatory (HVO). My main task is to collect information about Mount Kilauea, one of the most active volcanoes in Hawaii. After collecting and evaluating this information, I help other scientists predict where the lava will flow next and how fast. Our work has saved many lives, because people living where the lava will pass can be notified to leave their homes. Unfortunately, we cannot move their homes out of the lava's path, so many houses have been buried by lava or burned to the ground. When boiling hot rock erupts from the volcano and crashes back to the ground, the damage it causes is smaller than you might imagine, because no one lives near the top of Mount Kilauea, where the rocks fall. The lava that flows down the mountainside causes much greater damage, because everything in its path is buried under the molten rock. Yet a volcanic eruption itself is truly spectacular, and I will never forget the scene the first time I saw one. It was in my second week after arriving in Hawaii. Having worked hard all day, I went to bed early. In my sleep I suddenly felt the bed shaking, and then I heard a strange sound, as if a train were passing outside my window. Because I had already experienced many earthquakes in Hawaii, I paid no attention to the sound. I was about to go back to sleep when suddenly my bedroom became as bright as day. I hurried out of the room into the back garden, from where I could see Mount Kilauea in the distance. On the mountainside the volcano had erupted, and red-hot lava was shooting up into the sky like a fountain, hundreds of metres high. What a marvelous sight!

The day after this eruption, I was lucky enough to make a close-up observation. Two other scientists and I were driven to the top of the mountain and dropped at the spot closest to the crater that had formed during the eruption. When we set out from the observatory, we had brought special safety suits, so we put them on before approaching the crater. The three of us looked like astronauts: we wore white protective suits covering our whole bodies, helmets, special gloves and big boots. Walking in these clothes was not easy, but we still made our way slowly to the edge of the crater and looked down into the red, boiling centre. The other two climbed down into the crater to collect lava for later study; as this was my first such experience, I stayed at the top and watched them.


New Target English Grade 7 (second semester) text, Unit 4

New Target English Grade 7 (second semester) text, Unit 4

Conversation 1
A: What does your father do?
B: He's a reporter.
A: Really? That sounds interesting.

Conversation 2
A: What does your mother do, Ken?
B: She's a doctor.
A: Really? I want to be a doctor.

Conversation 3
A: What does your cousin do?
B: You mean my cousin, Mike?
A: Yeah, Mike. What does he do?
B: He is a shop assistant.

2a, 2b
Conversation 1
A: Anna, does your mother work?
B: Yes, she does. She has a new job.
A: What does she do?
B: Well, she is a bank clerk, but she wants to be a policewoman.

Conversation 2
A: Is that your father here, Tony?

B: No, he isn't. He's working.
A: But it's Saturday night. What does he do?
B: He's a waiter. Saturday is busy for him.
A: Does he like it?
B: Yes, but he really wants to be an actor.

Conversation 3
A: Susan, is that your brother?
B: Yes, it is.
A: What does he do?
B: He's a student. He wants to be a doctor.

Section B, 2a, 2b
Jenny: So, Betty, what does your father do?
Betty: He's a policeman.
Jenny: Do you want to be a policewoman?
Betty: Oh, yes. Sometimes it's a little dangerous, but it's also an exciting job. Jenny, your father is a bank clerk, right?
Jenny: Yes, he is.
Sam: Do you want to be a bank clerk, too?
Jenny: No, not really. I want to be a reporter.
Sam: Oh, yeah? Why?
Jenny: It's very busy, but it's also fun. You meet so many interesting people. What about your father, Sam?
Sam: He's a reporter at the TV station. It's an exciting job, but it's also very difficult. He always has a lot of new things to learn. I want to be a reporter, too.

Senior English: An Exciting Job — teaching design, learner analysis, teaching material analysis and post-lesson reflection

PEP Elective, Unit 5 The Power of Nature: reading lesson teaching design

Learning aims
Knowledge aims: Master the useful words and expressions related to the text.
Ability aims: Learn about some disasters that are caused by natural forces, and how people feel in dangerous situations.
Emotion aims: Learn the ways in which humans protect themselves from natural disasters.

Step 1: Leading-in
1. Where would you like to spend your holiday?
2. What about a volcano?
3. What about doing such dangerous work as part of your job?
4. Label every part of a volcano: ash cloud / volcanic ash / crater / lava / magma chamber.

[Design intent] The pictures stimulate students' interest and lead in to the topic of this unit. Through discussion, students learn basic information about volcanoes, and the cultural background of volcanoes is introduced, paving the way for the reading that follows. Brainstorming is used to collect students' predictions about the text, which are written on the blackboard; predicting the content develops students' ability to anticipate what they will read, and the predictions also spark further inquiry.

Step 2: Skimming
What's the main topic of the article?

New Target English Grade 7 (second semester) original texts, Units 1-6

Unit 1, Section A
1a: Canada, France, Japan, the United States, Australia, Singapore, the United Kingdom, China

1b
Boy 1: Where is your pen pal from, Mike?
Boy 2: He's from Canada.
Boy 1: Really? My pen pal's from Australia. How about you, Lily? Where's your pen pal from?
Girl 1: She's from Japan. Where is Tony's pen pal from?
Girl 2: I think she's from Singapore.

2b, 2c
Conversation 1
A: Where's your pen pal from, John?
B: He's from Japan.
A: Oh, really? Where does he live?
B: Tokyo.

Conversation 2
A: Where's your pen pal from, Jodie?
B: She's from France.
A: Oh, she lives in Paris.

Conversation 3
A: Andrew, where's your pen pal from?
B: She's from France.
A: Uh-huh. Where does she live?
B: Oh, she lives in Paris.
