Context-Aware Design of Adaptable Multimodal Documents

Augusto Celentano and Ombretta Gaggi

Dipartimento di Informatica, Università Ca' Foscari di Venezia
Via Torino 155, 30172 Mestre (VE), Italia
{auce,gaggi}@dsi.unive.it

Abstract. In this paper we present a model and an adaptation architecture for context-aware multimodal documents. A compound virtual document describes the different ways in which multimodal information can be structured and presented. Physical features are associated to media instances, while properties describe the context. Concrete documents are instantiated from virtual documents by selecting and synchronizing proper media instances based on the user context: the situation, the environment, the device and the available communication resources. The relations between the context features and the media properties are described by a rule-based system.

Keywords: Hypermedia design, adaptation, rule system

1. Introduction

The growth of information service providers and the wide spread of features in network performance (e.g., Ethernet, Wi-Fi, GPRS), in user devices (e.g., desktop, notebook, PDA, smart phone), in user context (e.g., location, profile and environment variants), and in user interfaces (e.g., visual, auditory and embedded interfaces) create a compelling demand for models and systems for documents able to adapt themselves to the user environment.

In this paper we present a model and an adaptation architecture for designing and instantiating context-aware multimodal documents, considering in a unified way media features, device, user and environment properties, through a rule-based definition of the relationships between the context and the document components.

Adaptation and context-awareness are emerging features in several information technology areas, such as human computer interaction, applications and services; the two terms are tightly related, and sometimes are used as synonyms. However, they denote two distinct concepts. In the document domain, adaptation means the ability to provide different versions or different presentations of a document in order to suit the needs of a user or the features of a device; context-awareness means the capability of perceiving the user situation in its many aspects, and of adapting the document content and presentation to it.


Adaptation is the ultimate outcome of context-awareness, which is able to drive it without explicit user tuning operations. In the human-computer interaction (HCI) area, a further distinction is made between adaptable and adaptive systems [11]: in adaptable systems the user chooses among different behaviors, while adaptive systems provide automatic modification of the behavior according to the user needs. The goal of context-awareness is therefore the design of adaptive, rather than adaptable, systems. However, in other fields, such as in the software engineering community, the term adaptable is used more frequently to cover both meanings, and we shall use it in this paper.

Adaptable documents are characterized by being multimedia, multichannel and multimodal. In the scope of this paper we'll assume the following meaning for such terms:

- a multimedia document is a time-varying document (sometimes called a multimedia presentation) containing several synchronized media items, with different degrees of user control, from simple VCR-style commands to hypermedia navigation;

- a multichannel document can be delivered, in different formats and with different content versions, through different communication channels, such as wide band or GPRS networks, and different presentation devices, such as desktop screens or portable phone displays;

- a multimodal document content belongs to several communication codes, and is perceived by a human with different senses, in parallel or in alternative, such as vision, audio and touch (e.g., vibration).

The three properties above usually co-exist to some extent: a multimodal document has a multimedia content, and is also multichannel since the different communication codes are delivered through different channels. In the following we'll use the term multimodal document to subsume also the multimedia and multichannel properties.

2. Related Work

A lot of work has been done in recent years in the area of context-awareness and adaptation. The term context has initially been associated to the concept of location, but it is far richer, as underlined by several works [13, 21, 22]. The context is defined by Dey as "any information that can be used to characterize the situation of an entity. An entity is a person, place or object that is considered relevant to the interaction between a user and an application" [9]. Such information includes several facets, related to the user situation (location and time, mobility, profile), to the environment (indoor vs. outdoor, light vs. darkness, sound vs. silence), to the communication (wide vs. narrow band, plugged vs. wireless connection, continuous vs. discontinuous transmission), to the device (screen size and resolution, audio fidelity, input devices, user attention), etc. Some properties are relevant in specific application fields for deciding how to adapt a document; e.g., heavy traffic conditions prevent a driving user from receiving unanticipated visual messages on the navigator screen, while audio messages can be received because they are less distracting.

Adaptation has been studied in domains like mobile computing, Web applications, user interfaces and hypermedia [5, 7, 16, 23]. Several proposals exist for automatic adaptation of multimedia documents, but they are generally limited to some aspects of the context (e.g., location, device features, processing resources) and do not involve the whole process of document design and delivery.

Adaptation of multimedia documents to user location is typical in tourism and cultural heritage applications (see for example [1, 8, 19]). In such applications, adaptation consists in selecting from a repository the information relevant for the user location, sensed by sensor systems like GPS, GSM, infrared beams or similar technologies. User motion can trigger the delivery of new information, which is presented according to a template which may also depend on the user device and on the environment (e.g., indoor vs. open space).

Content adaptation is performed to satisfy the compliance of a multimedia document with devices and resources. The content of the document is processed to fit the constraints imposed by the available resources, and the dynamic behavior of multimedia components is adapted to overcome delays and shifts due to resource fluctuations, according to suitable temporal models. In [32] a temporal model is discussed for multimedia presentations which adapt themselves to a context described in terms of network bandwidth, device and available resources; the presentation behavior can vary within specified limits. Euzenat et al. [14] discuss how to constrain the temporal aspects of a multimedia document to device and network resources, by introducing a notion of semantics upon which two different concepts of adaptation (preserving or not preserving all the properties of the original document) are defined.

Adaptation is also performed by transforming an original multimedia information with operations which change physical properties like frame size and rate, audio sampling, color depth, in order to fit the device constraints and the network limitations. The transformation can be done "on the fly" at delivery time, or pre-processed alternative versions of the same information can be computed in advance and selected upon delivery. In both cases, the overall dynamic structure of the presented information does not change.

Adaptation shares a number of features with the automatic generation of multimedia presentations, which is performed by evaluating a set of constraints about the presentation content and layout. Such systems (see for example [2, 28]) build different presentations with the same "conceptual" content by selecting different, semantically equivalent media files, and adapting them to the layout and to the resources.

More complex adaptation techniques, like the one discussed in this paper, introduce a higher level of adaptation which applies to the logical structure of the multimedia document, not only to its content, while preserving the meaning of the information delivered. In [4], the authors propose an adaptation system in which the alternatives evaluated during the adaptation process include media items of different types, with different composition schemas. E.g., a video file can be replaced by a sequence of images and a separate audio comment, as long as they refer to the same meaning. Adaptation can also be done during the presentation playback, upon a context change.

In [10] adaptation is approached from a different viewpoint. Assuming that different combinations of media could convey the same meaning to the user, the authors present adaptation as the selection and combination of the media which have the greatest cognitive impact on the user, according to psychology studies about human perception.

The use of standard languages for describing documents and context-related information, such as the ones based on XML, makes document adaptation easier due to a formal ground for transforming and validating documents along all the processing phases.

SMIL [27] has been the first standard language devoted to the description of synchronized multimedia documents. It provides a simple mechanism for adapting parts of a document to a part of the playback context. The switch tag can be used to test a restricted set of features at the client side, such as the operating system used, the network bandwidth and the screen features, and to consequently select different components of the document. In [24], Steele et al. present a system for mobile devices which dynamically adapts presentations depending on a set of context properties that can be checked through the SMIL language.

A general framework for adapting XML documents to the device and to the user preferences is presented in [29, 30], based on the separation of the document content from the presentation with a suitable DTD, and on the transformations supported by an XSLT processor.


In [17, 18] a device independent model based on the Universal Profiling Schema (UPS) is defined in order to achieve automatic adaptation of the content based on its semantics and on the capabilities of the target device.

The reviewed works, while exploring a wide range of context features, do not approach in a unified framework all the facets of multimodal document adaptation. Most of them analyze only a part of the context, generally the part related to equipment and resources, and do not provide a comprehensive, uniform framework for adapting the document content, structure and presentation to the variants of user, equipment and environment properties. Moreover, context analysis is in most cases limited to the handling of independent features, while in real situations different context facets should be considered in their mutual relations. For example, audio content might not be appropriate for very different reasons: because the environment is noisy, like a crowded street, or because it requires silence, like a library. It might also be inappropriate because the equipment does not support it, or because the user is hearing impaired. The same kind of adaptation could therefore be suggested by very different, even opposite, contexts.

The same problem arises with the user profile, i.e., preferences, language and physical abilities. In general, context interpretation should be a complex, logic-based process generating constraints to which the multimodal document content and structure are bound.

Our point of view is in fact more general, and is closer to the approach of Human Computer Interaction, where the ultimate goal of context-awareness is to allow a user to receive information with content and presentation suitable for the situation in which he/she is, spending an adequate interaction effort [31].

3. Document design

Adaptable document design concerns both the document structure and the adaptation parameters. In the multimodal domain three different descriptions must be provided:

- the static architecture, defining the structure of the document's content;

- the dynamic architecture, describing the media behavior along time, their mutual synchronization relationships and the reactions to user interaction;

- the context dependencies, linking the document components to the context dimensions, thus allowing the document management system to deliver the proper combination of media, content and presentation in any context.

Several document models provide the foundations for the above issues. We ground our presentation on a multimedia document model we have defined in a previous work [12] and experimented in several application scenarios, which is here briefly surveyed and extended to consider adaptation. Only the basic properties of the model will be presented, so the discussion applies also to other multimodal document models with minor changes.

3.1. Static architecture

A multimodal document is a collection of media items, which can be atomic or composite. An atomic item is a self-contained chunk of information, materialized in a media file. Continuous media (audio and video) have a temporal behavior, therefore they can act as triggers in the dynamic architecture. Static media (text and images) can be constrained to follow the dynamic behavior of continuous media. A composite item is a collection of items which has observable dynamic properties as a whole. Composite items can contain other composite items in a hierarchical schema. The document itself is a composite item, and can be used as a building block to design more complex documents.

Since an adaptable document can be delivered in different contexts and can be played with different modalities on different devices, generally we need to define more component items than are effectively used in each context.

In adaptable documents virtual components are defined, which can be instantiated into concrete components (atomic and composite items) according to the context, as described in Section 5.
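The atomic/composite hierarchy above can be sketched as a small class structure; the class and media names below are our own illustration, not part of the model's formal definition:

```python
from dataclasses import dataclass, field

# Illustrative sketch of the static architecture: atomic items are
# materialized media files, composite items group items hierarchically.
@dataclass
class Item:
    name: str

@dataclass
class AtomicItem(Item):
    media_file: str = ""
    continuous: bool = False  # audio/video are continuous, text/images are static

@dataclass
class CompositeItem(Item):
    children: list = field(default_factory=list)

    def all_continuous(self):
        # True if every enclosed atomic item is a continuous medium
        return all(
            c.all_continuous() if isinstance(c, CompositeItem) else c.continuous
            for c in self.children
        )

# The document itself is a composite item and can nest other composites.
doc = CompositeItem("forecast", children=[
    CompositeItem("situation", children=[
        AtomicItem("sat-anim", "sat.mpg", continuous=True),
        AtomicItem("comment", "comment.mp3", continuous=True),
    ]),
    AtomicItem("bigmap", "map.png", continuous=False),
])
print(doc.all_continuous())  # False: the bigmap image is a static medium
```

The `all_continuous` test anticipates a constraint used later: a component that is itself the source of a synchronization relation must contain only continuous items.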

3.2. Dynamic architecture

Temporal constraints define the dynamic behavior of the multimodal document. The model we have defined describes the synchronization relationships among the document components in an event-based fashion.

A small number of synchronization relations define media object reactions to the start and end of the continuous media items which rule the presentation timing, and to user actions on media. In order to settle a simple background for the discussion of context-awareness, we limit our discussion to the basic behavior of the two relations describing parallel and sequential execution of media items. The forthcoming discussion can therefore be applied as well to documents described by other models like, e.g., SMIL, even if some differences exist [12] which are not relevant in the context of this paper.

Figure 1. The graphical symbols used to represent adaptable multimodal documents (media items such as video, audio, text and image, and interaction buttons such as next and exit).

The parallel composition of media items a and b is described by the relation "a plays with b", symbolically written a ⇔ b: if a or b is activated by the user or by some event, the two items play together. Item a acts as a "master" in this relation: its termination forces the termination of b.

The sequential composition of media items a and b is described by the relation "a activates b", symbolically written a ⇒ b: when object a ends, object b begins playing.

Since static media have no timing properties, they cannot appear on the left side of the above relations, and their actual duration is ruled by synchronizing them with continuous media. Two special items, the timer and the button, are defined to give them a dynamic behavior independent from other media: a timer is a continuous media item with a predefined time length, while a button is a continuous media item which ends when the user activates it, e.g., with a mouse click.

Finally, composite items can be source and destination of synchronization relationships; in particular, a composite item ends when all the enclosed media end.

Figure 1 summarizes the symbols used for representing the components of a multimodal document.
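As a minimal sketch of how the two relations could drive playback, the event-based behavior can be simulated as follows; the engine, its method names and the media names are our own illustration, not the paper's implementation:

```python
# Illustrative event-based interpreter for the two relations:
# "a plays with b" (a <=> b): b starts and ends together with its master a;
# "a activates b" (a => b): b starts when a ends.
class Engine:
    def __init__(self):
        self.plays_with = {}   # master item -> list of slave items
        self.activates = {}    # item -> list of successor items
        self.playing = set()
        self.log = []

    def start(self, item):
        if item in self.playing:
            return
        self.playing.add(item)
        self.log.append(("start", item))
        for slave in self.plays_with.get(item, []):
            self.start(slave)          # a <=> b: b plays together with a

    def end(self, item):
        if item not in self.playing:
            return
        self.playing.discard(item)
        self.log.append(("end", item))
        for slave in self.plays_with.get(item, []):
            self.end(slave)            # master's termination forces slaves to end
        for nxt in self.activates.get(item, []):
            self.start(nxt)            # a => b: b begins when a ends

e = Engine()
e.plays_with["audio1"] = ["sat-anim"]  # audio1 <=> sat-anim
e.activates["audio1"] = ["audio2"]     # audio1 => audio2
e.start("audio1")
e.end("audio1")  # audio1 ends: sat-anim stops, audio2 starts
print(e.log)
```

A timer or a button would appear here as an ordinary continuous item whose `end` event is raised by a clock or by the user instead of by the medium itself.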

3.3. Context constraints

Context constraints describe how media items are selected for building an instance of a multimodal document. A set of features is associated to each component, which describes the media requirements for delivery and presentation. Similarly, the context is described by a set of properties concerning the device, the network, the user and the environment. The compatibility between the context properties and the media features is defined by a rule system, whose evaluation identifies the media suitable for adapting the document to the context.

A virtual component is compatible with the union of the contexts with which the media items it contains are compatible; a composite is compatible with the contexts corresponding to the intersection of the compatible contexts of the media items it contains. A context-aware multimodal document can be instantiated in a given context if all its components are compatible with that context, in a recursive process.

Due to context variants, an item can be defined as mandatory or optional, and can be context independent, context dependent or context selectable; these terms will be discussed in Section 5.
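The union/intersection rule for compatible contexts can be sketched as a recursive computation; the tuple encoding of the document tree is our own illustration:

```python
# Illustrative recursive compatibility computation. A node is one of:
#   ("atomic", set_of_compatible_contexts)
#   ("virtual", [children])   -> union: any alternative may fit the context
#   ("composite", [children]) -> intersection: all parts must fit together
def compatible_contexts(node):
    kind, payload = node
    if kind == "atomic":
        return payload
    child_sets = [compatible_contexts(c) for c in payload]
    if kind == "virtual":
        return set().union(*child_sets)
    return set.intersection(*child_sets)

# A virtual component with two alternatives, nested in a composite.
view = ("virtual", [("atomic", {"a", "b"}), ("atomic", {"c"})])
page = ("composite", [view, ("atomic", {"a", "b", "c"})])
print(sorted(compatible_contexts(page)))  # ['a', 'b', 'c']
```

A document is then instantiable in a given context exactly when that context belongs to the set computed for its root composite.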

4. A case study

As a case study we analyze a document presenting a meteorological forecast. Even if it is a very simple document, it can be adapted to several situations with non-trivial variants in content, dynamics and presentation.

The document is divided in two parts, a description of the current meteorological situation, and the forecast for the following hours. The two parts are delivered and presented to the user in sequence.

We consider five situations involving different environments and devices:

(a) a desktop or notebook computer in a home or office environment;

Figure 2. Different versions of a meteorological forecast document (the five versions (a)-(e) combine the sat-anim animation, the sat-map, bigmap and smallmap images, audio comments, text pages, and the next and exit buttons).

(b) a computer as above, but without audio capability, e.g., because the environment requires silence and no personal audio device is available;

(c) a PDA provided with a wireless connection with limited bandwidth, used in an environment requiring silence;


(d) a cellular phone able to receive only audio and short text messages, receiving an audio message;

(e) the same cellular phone receiving a series of SMSs.

Figure 2 illustrates five different versions of the meteorological forecast document, adapted to the situations (a)-(e) above. Each version is adapted in content (short/long descriptions, large/small images), media type (video animations, images, audio, text), user device (a desktop computer, a PDA, a cellular phone), and situation (silence). The cases we discuss and the related documents are therefore significantly different.

Other adaptations concerning the presentation language and the physical media formats could contribute to tuning the document content and format to the user needs, to the communication infrastructure and to the device capabilities, but the differences are less relevant and do not affect its conceptual design. In Section 8 we shall introduce some issues about this further level of adaptation.

In the five situations above, five different documents should be issued:

(a) A multimedia document with full audio and animated video. An animated satellite view loops as long as an audio describes the current situation; when the audio ends, a large forecast map is displayed and described by another audio comment. The synchronization is based on the time length of the audio. At the end of the forecast audio comment the whole document ends.

(b) In lack of audio capabilities, comments must be displayed as texts. The document is therefore suitable for silent environments. In this case the synchronization structure of the document cannot be based on the intrinsic timing properties of media: in fact, the text has no defined time length (in our model it is infinite), and the length of the animated satellite view cannot be used as a timer, because it is not related to the time needed for reading the text, which depends on the user pace. Therefore the synchronization must be based on an explicit user interaction, through a button pressed when the user has finished reading the text, which terminates the text display and the animation. The user manually advances to the second part where the text and the map describing the forecast are displayed; when the user presses an exit button, the whole presentation ends.

(c) In the PDA version, due to the limited capabilities of the PDA and of the network, the animation is replaced by a small image; the forecast map of the second part of the document is also a small image, hence different from cases (a) and (b). As in case (b), there is no audio and the user controls the timing by advancing manually from the first part of the document to the second, where the structure of case (b) above is replicated.

(d) An audio-only document can be delivered to a cellular phone as a sequence of two messages, which do not require user intervention because they have their own timing. The splitting in two parts is functional to having the same overall structure for the whole document, which is important at design time. At delivery time the distinction for the user is less evident. The comments are plausibly different, in length and level of detail, from the ones delivered in case (a), due to the lack of accompanying images.

(e) A text-only version is designed for being delivered to a cellular phone as a sequence of two short messages (SMS). The user steps through the messages with a button. The texts are plausibly different from the ones delivered in cases (b) and (c) due to the size limitations of SMS messages.

It is easy to understand that the documents presented do not represent the only possible solutions. E.g., the synchronization schema of the forecast composite in case (b) could be the same as the situation composite; their behavior as perceived by the user would be the same, and different design strategies could naturally lead to either choice.

It is also obvious that simpler documents could be issued in rich contexts too. Cases (b)-(e) are compatible with the context of case (a); they simply do not use all the features of the device and of the environment, giving more limited, but anyway correct, information to the user. The choice of the "correct" (or, better, "most suitable") version is therefore the result of an evaluation process which maximizes some function of the information carried by the media against the constraints expressed by the context.

The five variants described above can be designed as a unique compound document (called virtual document) whose components are selectively instantiated under different contexts, leading to adapted documents which differ in content and presentation, and may differ in structure according to the mutual relationships among the instantiated components. Each component is tagged by features that can be related to the context in which the document is delivered. In Figure 3 a pictorial symbolic representation of a virtual meteorological forecast document is shown. Letters in grey circles identify the relevant cases of Figure 2 to which each media item applies; dashed rectangles enclose alternative components, that are selected according to the context for instantiating a specific adapted document. The structure of the situation composite is the sum of the structures defined in cases (a)-(e); in the forecast composite a uniform control structure is defined, which associates the exit button to the end of the forecast in all the cases. The meaning of the virtual document components and how they are related with the context will be described in the following two sections.

Figure 3. An adaptable meteorological forecast document.

Figure 4. A variant of the document of Figure 3.

Like the design of a concrete document, the design of a virtual document is also subject to variants. For example, in Figure 4 a variant of the virtual document of Figure 3 is shown: in the situation composite, the different alternatives of the comment part, both for text and audio, are exposed at the same level, while in Figure 3 the differences in the synchronization structure are represented at a level higher than the differences between the text page contents. The structure of the forecast composite replicates the structure of the situation part, while in Figure 3 a more uniform control structure was designed. We do not approach here the problem of making a "good" design of adaptable documents, an important issue that should be supported by suitable guidelines and methodologies. Different virtual component structures and organizations should be considered equivalent with respect to the instantiation process if the concrete documents generated by different virtual documents have the same content and dynamics, i.e., they are behaviorally equivalent as defined in [3].

5. Adaptation design

An adaptable multimodal document is a virtual document, made of virtual components, according to a structure which is independent from the context at the macroscopic level, but may depend on it in its details. Each virtual component is a collection of composite and atomic media items whose features allow the adaptation system to select the proper ones as a function of the context. The instantiation of a virtual document into a concrete document consists in the identification of the document virtual components and, for each of them, in the selection of the proper instances compatible with the given context.

In order to make a translation from a virtual to a concrete document, three descriptions must be given:

- how to characterize the different variants, i.e., how to describe the features of the component media items;

- how to characterize the context;

- how to identify, for each context, the correct or relevant variant.

Virtual documents are described as XML documents conforming to a schema which defines both the static and the dynamic structure of the documents. The XML schema for context-aware multimodal documents extends the one we have defined for multimedia documents. The XML document is made of three sections: the layout section contains the definition of the channels used by the media objects, e.g., the position and size on the screen of the visual components; the components section describes the media and the static structure of the document; and the relationships section contains the temporal constraints between the media items. Media features are not described in the XML document. They are associated to media in a media repository, as described in Section 6. The most relevant section for the focus of this paper is the components section, which describes the virtual components of the document and their instances.

A virtual component is defined by a dedicated tag and encloses a list of items with different features, which apply to different contexts. Figure 5 shows a fragment of the XML document describing the virtual document of Figure 3, where non-relevant details are omitted. The pictorial representation of Figure 3 in fact mirrors the XML description: virtual components are denoted by dashed frames which enclose the composite and atomic media subject to context dependence. From the synchronization point of view, a virtual component behaves as a black box: synchronization relationships can be established only between the virtual component and other components external to it; they will be inherited by the concrete instance during the adaptation process. As a consequence, if relations such as vc ⇔ i or vc ⇒ i exist between a virtual component vc and a generic item i, all the items in the virtual component must be continuous.

A virtual component can be mandatory or optional, according to its role in the document semantics. This property is set by the attribute optional, which holds the values yes or no. A mandatory virtual component carries a content which is necessary to understand the document. Therefore, at least one of its internal items must be instantiated, unless the virtual component itself is contained, directly or indirectly, in another virtual component with alternatives for different contexts. If the virtual components at the outermost level of the document structure cannot be instantiated, the concrete document is incomplete and cannot be delivered. An optional virtual component can be instantiated or not, according to its compatibility with the context. Its absence does not prevent the document from being understandable.

In Figure 3, the virtual component comment is mandatory, while the virtual component view is optional, because it can be suppressed without loss of document significance. In fact, the information represented by it is not instantiated in cases (d) and (e).
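The instantiation rule for mandatory and optional components can be sketched as follows; the dictionary encoding and the candidate ranking are our own illustration, not the paper's XML schema:

```python
# Illustrative instantiation of a virtual component: a mandatory component
# must yield at least one compatible item, an optional one may be dropped.
def instantiate(component, context):
    """Return a chosen item, None (optional component dropped), or raise."""
    compatible = [item for item, contexts in component["items"] if context in contexts]
    if compatible:
        return compatible[0]  # a real system would rank context-selectable candidates
    if component["optional"]:
        return None           # absence does not harm understandability
    raise ValueError(f"mandatory component {component['name']!r} cannot be instantiated")

comment = {"name": "comment", "optional": False,
           "items": [("audio-comment", {"a", "d"}), ("text-comment", {"b", "c", "e"})]}
view = {"name": "view", "optional": True,
        "items": [("sat-anim", {"a", "b"}), ("sat-map", {"c"})]}

print(instantiate(comment, "e"))  # text-comment
print(instantiate(view, "e"))     # None: the view is suppressed in case (e)
```

Propagating the `ValueError` upward models the failure of the whole instantiation process when a mandatory component at the outermost level has no compatible alternative.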


Figure 5. A fragment of the XML virtual document (the listing, not fully recoverable in this copy, shows the virtual components comment and view, the latter marked optional="yes", enclosing media items such as map with a file attribute).

The virtual component text in the composite page is also mandatory; if it cannot be instantiated because of context conflicts, the whole composite page cannot be instantiated. This does not prevent a concrete document from being generated, since alternatives are defined through the audio comments. However, if none of the media contained in the comment virtual component is compatible with the context, the instantiation process fails.

Generally speaking, media items are context dependent, since they exist in several instances for the different contexts, selected at document instantiation time. Some media, however, could be context independent, i.e., they could appear in any instance of a virtual document, either because their properties are compatible with any context (e.g., a short constant text message), or because they are strictly bound to other components; e.g., buttons and timers inherit the context dependencies from the media items they control.

Finally, in some cases several media items, instances of a same virtual component, could be compatible with the same context. We call them context selectable items. For example, texts at several degrees of detail and images with several resolution and size values are in principle compatible with contexts whose equipment has large display capabilities. In this case the user could be asked (but this solution is not optimal from the interaction point of view), or some intelligence could be put in the system in order to make a "good" choice. A wide range of approaches, from simple priority schemas up to intelligent agents, could give support to this issue, but we do not discuss them here for space reasons.

6. Context-awareness design

Figure6shows the proposed architecture of a context-aware document adaptation system,which is implemented in a demonstration prototype written in Java and SICStus Prolog[26].The prototype implements a visual interface which allows the document designer to simulate di?er-ent contexts and evaluate the corresponding concrete document.The resolver engine,as discussed in the following,is implemented in Prolog, using the Jasper Java interface interface to load the SICStus runtime kernel from the Java program which executes the virtual document analysis and the concrete document instantiation.The XML descrip-tion of the virtual and concrete document is processed through the JAXB library(Java API for XML processing)[25].

A media repository holds the instances of all the media items and the description of their features.

Figure 6. The architecture of a context-aware document adaptation system.

The context is represented by a set of context properties. Several formalisms can be used for describing media and context properties, which are in principle equivalent as long as they can describe in a correct and complete way the relevant features and their values. The W3C CC/PP proposal [15] is a standard which allows designers to describe in an extensible way the context and media features in a simple structured attribute/value fashion. Other formalisms may be used as well, e.g., based on MPEG-7 or on logic assertions. For example, Ranganathan et al. [20] propose a context model based on first order logic predicate calculus. We do not elaborate on this issue here, noting that the different formalisms should be considered semantically equivalent, and that translations are possible among them.
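As an illustration of the CC/PP attribute/value style, a device profile could look like the following RDF fragment; the ex: vocabulary and its property names are invented for the example, not taken from the prototype or from the standard itself.

```xml
<?xml version="1.0"?>
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
         xmlns:ccpp="http://www.w3.org/2002/11/08-ccpp-schema#"
         xmlns:ex="http://example.com/schema#">
  <!-- A CC/PP profile groups attribute/value pairs into components -->
  <rdf:Description rdf:about="http://example.com/profile#PDA">
    <ccpp:component>
      <rdf:Description rdf:about="http://example.com/profile#HardwarePlatform">
        <ex:displayWidth>240</ex:displayWidth>
        <ex:displayHeight>320</ex:displayHeight>
        <ex:supportsVideo>no</ex:supportsVideo>
      </rdf:Description>
    </ccpp:component>
  </rdf:Description>
</rdf:RDF>
```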

The association between the context properties and the media features is managed by a resolver, a rule-based system (RBS) which defines how media can be used in different contexts: given the set of features of the media contained in a virtual document and the set of context properties, it is able to select the media items that are compatible with that context.
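In a much simplified attribute/value setting, the resolver's selection step can be sketched as follows; the Java names and the flat feature encoding are illustrative assumptions, not the prototype's actual rule base, which is expressed in Prolog.

```java
import java.util.List;
import java.util.Set;

// A toy stand-in for the rule-based resolver: a media item survives if the
// context supports its format and its size fits the screen. The names and
// the flat attribute/value encoding are illustrative only.
public class Resolver {
    public record Media(String id, String format, int width, int height) {}
    public record Context(Set<String> supportedFormats, int scrWidth, int scrHeight) {}

    // Keep the media items whose features are all acceptable for the context.
    public static List<Media> compatible(List<Media> items, Context ctx) {
        return items.stream()
                .filter(m -> ctx.supportedFormats().contains(m.format()))
                .filter(m -> m.width() <= ctx.scrWidth() && m.height() <= ctx.scrHeight())
                .toList();
    }

    public static void main(String[] args) {
        Context pda = new Context(Set.of("gif", "ascii"), 240, 320);
        List<Media> items = List.of(
                new Media("smallmap", "gif", 200, 150),
                new Media("bigmap", "gif", 800, 600),      // too large
                new Media("sat-anim", "mpeg1", 320, 240)); // unsupported format
        compatible(items, pda).forEach(m -> System.out.println(m.id())); // prints smallmap
    }
}
```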

The list of selected media items is used by the system to instantiate a concrete document, which contains only the compatible media, correctly structured. The layout and relationships sections of the XML virtual document are processed according to the media items that survive the adaptation process. Without entering into too much detail, it is sufficient to say that in the layout section only the channels used by the selected media are retained, while in the relationships section only the temporal relations connecting components that have been instantiated are retained. An example will be discussed later in this section.
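This pruning of the layout and relationships sections can be sketched as follows; the encodings of channels and temporal relations are illustrative assumptions, not the actual XML schema.

```java
import java.util.List;
import java.util.Map;
import java.util.Set;
import java.util.stream.Collectors;

// A sketch of concrete-document pruning: keep only the channels used by
// the selected media and the temporal relations whose components were both
// instantiated. Data encodings are illustrative, not the XML schema itself.
public class DocumentPruner {
    public record Relation(String from, String to) {} // e.g. "from activates to"

    // Layout section: retain only the channels the selected media play on.
    public static Set<String> usedChannels(Map<String, String> mediaToChannel,
                                           Set<String> selected) {
        return selected.stream().map(mediaToChannel::get).collect(Collectors.toSet());
    }

    // Relationships section: retain only relations between surviving items.
    public static List<Relation> keptRelations(List<Relation> rels, Set<String> selected) {
        return rels.stream()
                .filter(r -> selected.contains(r.from()) && selected.contains(r.to()))
                .toList();
    }

    public static void main(String[] args) {
        Set<String> selected = Set.of("smallmap", "text1");
        Map<String, String> channels = Map.of(
                "smallmap", "imgChannel", "text1", "textChannel",
                "sat-anim", "videoChannel");
        List<Relation> rels = List.of(
                new Relation("smallmap", "text1"),
                new Relation("sat-anim", "text1")); // dropped: sat-anim not selected
        System.out.println(usedChannels(channels, selected));
        System.out.println(keptRelations(rels, selected).size()); // prints 1
    }
}
```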

The resolver is implemented with a Prolog engine, therefore media features and context properties should be translated into Prolog predicates if they are expressed in other formalisms. In the current version of the prototype, since we are focused on the adaptation process rather than on media description, context properties and media features are expressed as logical assertions. Figure 7 shows a sample of media features and context properties.

18Augusto Celentano and Ombretta Gaggi

Media features:
type(sat-anim, video)        format(sat-anim, mpeg1)
type(comment3, audio)        format(comment3, mp3)
type(text1, text)            format(text1, ascii)
type(smallmap, image)        format(smallmap, gif)

Context properties:
screen_size(desktop, 1024, 768)
scr_format(desktop, [..., mpeg1, gif, ascii, mp3, ...])

Rules:
supported_video(D, U, M) :-
    ...,
    adapt_size(M, SX, SY), ...

scr_screen(D, M) :-
    graphic_format(D, SF), member(F, SF),
    ....

Figure 7. A sample of media features, context properties and adaptation rules.

The rules in Figure 7 check whether the size and the format of an image (M) are compatible with the screen size of the device (D) and, through the function adapt_size, whether the image can be adapted to it.

2. Process information about the environment. This step can be considered a refinement of the selection made on the device: e.g., if the user is in a room which requires silence, or in a very noisy environment, audio files cannot be delivered unless the device is provided with headsets (function adapt_...).

3 The details are not relevant since the instances are assumed logically equivalent for that context.

Figure 8. Context compatible media instances in the virtual meteorological forecast document.

As an example of adaptation, we consider the meteorological forecast document introduced in Figure 2, adapted to a PDA with a small screen size and no animated video capabilities (situation (c) of Figure 2). We assume the user is in a library, i.e., a silent environment, and does not wear headsets, and therefore cannot receive audio information even if the device supports it.

The instantiation proceeds according to the following steps:

1. Pick up the media compatible with the device. Considering the structure depicted in Figure 3, the system selects the media items compatible with the PDA device: the text files, the satellite image (sat-map) and the small forecast image (smallmap). The animation (sat-anim) is discarded because the PDA does not support it, and the large forecast image (bigmap) is discarded because it does not fit the size of the PDA screen.

2. Consider information about the environment. Since the library requires silence and the user has no headsets, the system discards the four audio comments.

3. Select the items compatible with user preferences. In our example the context is not differentiated on user preferences, therefore no media item is discarded. In a real life case, language, level of detail, narration style, etc., could be analyzed for selecting text items, while user abilities, such as the ability to perceive colors correctly, could affect the selection of images.
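The three steps can be played out on the example with a small sketch. The feature values and the PDA capabilities below are assumptions made for illustration, but the outcome matches the selection described above: the text files, sat-map and smallmap survive, while sat-anim, bigmap and the audio comments are discarded.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Set;

// Step-by-step filtering for the PDA-in-a-library example. Feature values
// and device capabilities are illustrative assumptions.
public class ForecastExample {
    public record Media(String id, String type, String format, int w, int h) {}

    public static List<String> instantiate(List<Media> items) {
        Set<String> pdaFormats = Set.of("ascii", "gif", "mp3"); // assumed PDA support
        int scrW = 240, scrH = 320;                             // assumed PDA screen
        boolean silentRoom = true, headsets = false;            // the library context

        List<String> selected = new ArrayList<>();
        for (Media m : items) {
            // 1. Device: format and size compatibility.
            if (!pdaFormats.contains(m.format())) continue;     // drops sat-anim
            if (m.w() > scrW || m.h() > scrH) continue;         // drops bigmap
            // 2. Environment: no audio in a silent room without headsets.
            if (m.type().equals("audio") && silentRoom && !headsets) continue;
            // 3. User preferences: not differentiated in this example.
            selected.add(m.id());
        }
        return selected;
    }

    public static void main(String[] args) {
        List<Media> items = List.of(
                new Media("sat-anim", "video", "mpeg1", 320, 240),
                new Media("sat-map", "image", "gif", 200, 150),
                new Media("bigmap", "image", "gif", 800, 600),
                new Media("smallmap", "image", "gif", 200, 150),
                new Media("text1", "text", "ascii", 0, 0),
                new Media("comment1", "audio", "mp3", 0, 0));
        System.out.println(instantiate(items)); // prints [sat-map, smallmap, text1]
    }
}
```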
