
Multi-task and multi-view learning of user state


Melih Kandemir a,*, Akos Vetek b, Mehmet Gönen c, Arto Klami d, Samuel Kaski d,e,**

a Heidelberg University, HCI, Speyerer Str. 6, D-69115 Heidelberg, Germany

b Nokia Research Center Otaniemi, Espoo, Finland

c Sage Bionetworks, Seattle, WA, USA

d Helsinki Institute for Information Technology HIIT, Department of Computer Science, University of Helsinki, Finland

e Helsinki Institute for Information Technology HIIT, Department of Information and Computer Science, Aalto University, P.O. Box 15400, FI-00076 Aalto, Finland

Article info

Article history:

Received 30 September 2013

Received in revised form 28 January 2014

Accepted 21 February 2014

Communicated by Christos Dimitrakakis

Available online 18 April 2014

Keywords:

Affect recognition

Machine learning

Multi-task learning

Multi-view learning

Abstract

Several computational approaches have been proposed for inferring the affective state of the user, motivated for example by the goal of building improved interfaces that can adapt to the user's needs and internal state. While fairly good results have been obtained for inferring the user state under highly controlled conditions, a considerable amount of work remains to be done for learning high-quality estimates of subjective evaluations of the state in more natural conditions. In this work, we discuss how two recent machine learning concepts, multi-view learning and multi-task learning, can be adapted for user state recognition, and demonstrate them on two data collections of varying quality. Multi-view learning enables combining multiple measurement sensors in a justified way while automatically learning the importance of each sensor. Multi-task learning, in turn, tells how multiple learning tasks can be learned together to improve the accuracy. We demonstrate the use of two types of multi-task learning: learning both multiple state indicators and models for multiple users together. We also illustrate how the benefits of multi-task learning and multi-view learning can be effectively combined in a unified model by introducing a novel algorithm.

© 2014 Elsevier B.V. All rights reserved.

1. Introduction

Affective computing seeks to develop more efficient and pleasant user interfaces by taking into account the affective state of the user. For example, the information flow can be tailored by managing interruptions from e-mail alerts and phone calls when the user is in deep thought [7], and the affective state can be used to determine the most suitable time to intervene during a pedagogical game [8]. Apart from adapting the interface, information on the affective state can be used to gain a deeper understanding of how users and computers interact. A prerequisite of affective computing is the ability to recognize users' states of interest, either by observing the users' actions [26] or by analyzing physiological signals measured from the user [25,15,6]. In this work, we study the latter approach and discuss machine learning solutions for inferring the affective state of the user from physiological signals in unobtrusive and loosely controlled user setups.

During recent years, several databases of physiological measurements in affective computing tasks have been released [13,22,31], in an attempt to provide high-quality data for learning and benchmarking state inference models. The state of the art in the field is that the user's state can be inferred relatively accurately in highly controlled experiment setups where the stimuli evoke strong emotional responses [20,24,32]. For less controlled setups, where the ground truth labels come from user evaluations, some recent works have obtained positive results [22,2,9,11,33], but in many cases the prediction accuracies are not yet sufficiently high for practical use in adaptive interfaces.

We introduce two elements from the machine learning literature to help improve user state estimation: multi-view learning and multi-task learning. Both ideas can be incorporated into many of the current state estimation methods (for a recent review see [34]) to obtain better estimates of the user's affective states. We motivate these concepts for affective computing tasks and demonstrate their usefulness in learning user states, especially when used in combination.

Multi-view learning studies how data sets having co-occurring observations can be combined. Most affective computing studies monitor the user with several sensors or sensor channels, which

Neurocomputing

doi: 10.1016/j.neucom.2014.02.057

0925-2312/© 2014 Elsevier B.V. All rights reserved.

* Corresponding author.

** Corresponding author at: Helsinki Institute for Information Technology HIIT, Department of Information and Computer Science, Aalto University, P.O. Box 15400, FI-00076 Aalto, Finland.

E-mail addresses: melih.kandemir@iwr.uni-heidelberg.de (M. Kandemir), akos.vetek@… (A. Vetek), mehmet.gonen@… (M. Gönen), arto.klami@cs.helsinki.fi (A. Klami), samuel.kaski@aalto.fi (S. Kaski).

Neurocomputing 139 (2014) 97–106

can be considered as such co-occurring sets. Multi-view learning refers to various strategies for learning a joint model over all sensor data, to learn how the sources should be combined for building optimal models. In this paper, we work with a specific multi-view learning technique called multiple kernel learning (MKL) [16], which allows using multiple sensors in any kernel-based learning algorithm while automatically revealing which sensors are useful for solving the task. Even though considerable effort has been put into finding out which physiological sensors are related to which affective dimensions, automatically learning the sensor importance is still useful for practical applications with specific sensor hardware, and especially so when developing practical systems for out-of-laboratory conditions.

The other concept, multi-task learning (MTL), studies learning of several prediction tasks together [5]. Within the scope of state inference, MTL takes advantage of the data of other users by learning from the cross-user similarities, without assuming that the users are identical. This helps particularly when the amount of labeled training data is limited. Alternatively, learning each output label, such as arousal and valence, could be considered as a task. Learning predictive models for all of the labels together is then useful, assuming that all labels are one-dimensional summaries of a more complex unknown state of the user. The approach will be particularly useful if the dimensions are not independent.

We present a novel kernel-based model that combines both multi-view and multi-task aspects. It can be applied to both of the aforementioned MTL scenarios, and it uses the MKL formulation to make the approach multi-view. We then apply the model to two different data collections to study the accuracy of state recognition. The first collection, taken from Koelstra et al. [22], is an example of laboratory-quality data. We collected the other data set ourselves under less constrained conditions.

The main goal of the paper is to illustrate the benefits of the two aforementioned general-purpose machine learning techniques in affective computing applications. To this end, we show how combining MTL and MKL within a unified model improves the prediction performance, and also highlight how MKL automatically learns the importance of individual sensors even when solving multiple inference tasks simultaneously. We demonstrate the models with generic features instead of carefully selecting the sensors and features to match the particular affective inference tasks. This highlights the main advantage of the proposed strategy: it allows working with a wide set of sensors and tasks, without requiring much manual labor for incorporating domain-specific knowledge into the solutions.

2. Inferring the user state

Given the input data from P sensors, the user state inference task consists of inferring for each data point a set of labels that jointly characterize the state of the user. We do not assume any particular emotional model, such as [28]. Instead, we simply require the states to be represented by a collection of numerical labels. The labels do not have to be independent; in fact, as will become more apparent later, the multi-task formulation we introduce is specifically tailored to capture correlations between the labels. In the experimental section we use Likert-scale evaluations of valence, arousal, liking, and mental workload as the labels, but the underlying machine learning techniques would apply to any other numerical characterizations of the state dimensions. Even though we resort to binarization of multi-category state labels to overcome data scarcity, extension of the presented techniques to multi-class setups is straightforward.

We study user-specific and user-independent setups for each learning model. The former is trained on data recorded from a single user and assumes this person to be the eventual user of the system, whereas the latter learns the models from M earlier users and assumes the eventual user to be a new user. User-specific models need to be separately customized for target users. On the other hand, user-independent models do not require any training data from the eventual user, and hence can be pre-trained on large data collections.

For both scenarios, each data sample $x_i$ is represented as a collection of vectors $x_i = \{x_i^{(m)}\}_{m=1}^{P}$, one for each of the $P$ views (here sensors), where $x_i^{(m)} \in \mathbb{R}^{D_m}$ and $D_m$ is the dimensionality of the feature representation for sensor $m$. The output, the characterization of the user's state, is given as a (here binary) vector of labels $y_i = [y_i(1), \ldots, y_i(T)]$, where $y_i(j) \in \{\pm 1\}$ and $T$ is the number of labels.

All learning setups considered in this paper are multi-view, due to the input data coming from P different sensors. MTL, in turn, can be applied in two different ways. When considering the different users as different but related tasks, we can learn user-specific models for all users at the same time, separately for each label. In this case, each task takes as input the measurements of a different user x and predicts the corresponding label. Even though the models are learned together in the spirit of multi-task learning, the output is a separate model for each user. Alternatively, we can learn a single user-independent model for all T labels at once, resulting in an MTL setup where the inputs x are the same for all tasks but the output labels are different.

In this paper, we formulate a novel kernel-based algorithm that performs multi-task and multi-view learning in a coupled and efficient manner. In Sections 2.1–2.3 we review the basics of kernel-based learning and explain the earlier kernel-based multi-task and multi-view algorithms. Finally, in Section 2.4 we introduce our new model that combines both approaches.

2.1. Support vector machines (SVMs)

We take the standard support vector machine (SVM) [30] as a single-task and single-view building block on which we develop our novel multi-task multi-view learning algorithm. We denote by $\{(x_i, y_i)\}_{i=1}^{N}$ a sample of $N$ independent training instances, where $x_i$ is a $D$-dimensional input vector with the target output $y_i$, and by $\Phi: \mathbb{R}^D \to \mathbb{R}^S$ a function that maps the input patterns to a preferably higher dimensional space. The support vector machine learns a linear discriminant that predicts the target output of an unseen test instance $x$ as

$$f(x) = w^\top \Phi(x) + b,$$

where $w$ contains the hyperplane parameters and $b$ is the bias term. Using the representer theorem, the discriminant in the dual form becomes

$$f(x) = \sum_{i=1}^{N} \alpha_i \underbrace{\Phi(x_i)^\top \Phi(x)}_{k(x_i,\, x)} + b,$$

where $N$ is the training set size, $k: \mathbb{R}^D \times \mathbb{R}^D \to \mathbb{R}$ is the kernel function that defines a similarity metric for pairs of data instances, and $\alpha$ is the vector of Lagrange multipliers defined in the domain

$$\mathcal{A} = \left\{ \alpha : \sum_{i=1}^{N} \alpha_i y_i = 0,\; \alpha_i \in \mathbb{R},\; \forall i \right\}. \quad (1)$$

For binary classification $y_i \in \{-1, +1\}$ and squared loss, the corresponding objective function is

$$J(\alpha) = \sum_{i=1}^{N} \alpha_i - \frac{1}{2} \sum_{i=1}^{N} \sum_{j=1}^{N} \alpha_i \alpha_j y_i y_j \left( k(x_i, x_j) + \frac{\delta_i^j}{2C} \right),$$

where $\delta_i^j = 1$ if $i = j$ and $0$ otherwise. In the training phase, $J(\alpha)$ is maximized with respect to $\alpha$.
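As a concrete illustration, the dual objective above can be evaluated directly with a few lines of NumPy. This is only a sketch: the RBF kernel choice, the toy data, and the uniform α below are made-up placeholders, not the paper's setup.

```python
import numpy as np

def rbf_kernel(X, gamma=1.0):
    # Gram matrix of a Gaussian (RBF) kernel: k(x_i, x_j) = exp(-gamma * ||x_i - x_j||^2)
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2 * X @ X.T
    return np.exp(-gamma * d2)

def dual_objective(alpha, y, K, C=1.0):
    # J(alpha) = sum_i alpha_i - 0.5 * sum_ij alpha_i alpha_j y_i y_j (K_ij + delta_ij / (2C))
    Keff = K + np.eye(len(y)) / (2 * C)  # the squared-loss term adds delta_ij/(2C) to the kernel
    return alpha.sum() - 0.5 * (alpha * y) @ Keff @ (alpha * y)

# toy usage with random data
rng = np.random.default_rng(0)
X = rng.normal(size=(6, 3))
y = np.array([1, -1, 1, -1, 1, -1])
K = rbf_kernel(X)
alpha = np.full(6, 0.1)
J = dual_objective(alpha, y, K)
```

In the actual training phase this objective would be maximized over α within the domain of Eq. (1); standard quadratic-programming or gradient-based solvers apply.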

2.2. Multiple kernel learning (MKL)

A good affective computing model utilizes information from all available sensors, correctly weighting each of the sensors according to how useful it is. Instead of manually selecting only a small subset of the most useful sensors, we propose to automatically infer the best sensors amongst a possibly very rich set of sensors.

Multiple kernel learning is a multi-view learning solution that automatically learns the importance of the sensors to maximize the predictive accuracy of kernel methods (see [16] for a survey). The idea is to represent each sensor (view) $m$ by one kernel $k_m$, and combine them into a single kernel $k_\eta$ by using a function $f_\eta: \mathbb{R}^P \to \mathbb{R}$ parameterized by $\eta$:

$$k_\eta(x_i, x_j; \eta) = f_\eta(\{k_m(x_i^{(m)}, x_j^{(m)})\}_{m=1}^{P}; \eta).$$

An optimal $\eta$ is learned from data. The different multiple kernel learning models differ in the way they put restrictions on the kernel weights $\eta$. In this paper, we take a weighted average of the kernels, with nonnegative weights that sum up to one (i.e., a convex sum): $k_\eta(x_i, x_j; \eta) = \sum_{m=1}^{P} \eta_m k_m(x_i^{(m)}, x_j^{(m)})$.

When learning the kernel weights, one could also consider some form of regularization for them, for example to favor sparse solutions. There is no conclusive evidence that sparse solutions would be more accurate (see [21]), and hence we learn the weights of regular MKL without sparsity-inducing regularization.
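As a minimal sketch of the convex kernel combination used here (the Gram matrices below are made up for illustration):

```python
import numpy as np

def combine_kernels(kernels, eta):
    # k_eta = sum_m eta_m * K_m, with eta on the probability simplex
    # (eta_m >= 0 and sum_m eta_m = 1, i.e. a convex combination)
    eta = np.asarray(eta, dtype=float)
    assert np.all(eta >= 0) and abs(eta.sum() - 1.0) < 1e-9, "eta must be a convex combination"
    return sum(w * K for w, K in zip(eta, kernels))

# toy usage: two 3x3 Gram matrices, one per "sensor"
K1 = np.eye(3)
K2 = np.ones((3, 3))
K = combine_kernels([K1, K2], [0.25, 0.75])
```

Because the weights are nonnegative and sum to one, the combined matrix remains a valid (positive semi-definite) kernel, and the learned η directly indicates the relative importance of each sensor.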

2.3. Multi-task kernel machines

Multiple learning tasks can be solved more accurately if they are learned together, by encouraging the tasks to share knowledge through similar parameters [3]. This idea has been employed in SVMs by merging the training instances of all tasks and learning the following kernel function [14]:

$$\hat{k}(x_i, x_j) = \left( \frac{1}{\gamma} + \delta_i^j \right) k(x_i, x_j), \quad (2)$$

where $\gamma$ determines the similarity between the samples of different tasks and $\delta_i^j$ is $1$ if $x_i$ and $x_j$ are from the same task, and $0$ otherwise. Intuitively, the model assumes that samples from all other tasks can also be used for learning the model, but their similarity is discounted by a factor of $\gamma$. For $\gamma \to 0$ the solution reduces to assuming all tasks to be identical, whereas $\gamma \to \infty$ is equivalent to learning the tasks separately.
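A small NumPy sketch of the multi-task kernel in Eq. (2), with hypothetical task indices: cross-task similarities are discounted to 1/γ times the base kernel, while same-task pairs receive an additional unit weight.

```python
import numpy as np

def multitask_kernel(K, tasks, gamma=1.0):
    # k_hat(x_i, x_j) = (1/gamma + delta_ij) * k(x_i, x_j),
    # where delta_ij = 1 if x_i and x_j belong to the same task
    tasks = np.asarray(tasks)
    same_task = (tasks[:, None] == tasks[None, :]).astype(float)
    return (1.0 / gamma + same_task) * K

# toy usage: 4 samples, the first two from task 0, the last two from task 1
K = np.ones((4, 4))
K_hat = multitask_kernel(K, tasks=[0, 0, 1, 1], gamma=2.0)
```

With γ = 2, same-task entries are scaled by 1/2 + 1 = 1.5 and cross-task entries by 1/2, illustrating the discounting described above.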

The above multi-task formulation has three disadvantages: (a) it requires all tasks to be in a common input space; (b) it requires all tasks to have the same output space to be able to capture them in a single learner, which makes it not applicable for MTL over labels (multi-output learning); and (c) it requires more time than training separate (hence small-sample) learners for each task.

2.4. Multi-task multiple kernel machines (MT-MKL)

We could obtain a multi-task multi-kernel learning method by simply extending Eq. (2) to multiple kernels:

$$\hat{k}_\eta(x_i, x_j; \eta) = \left( \frac{1}{\gamma} + \delta_i^j \right) k_\eta(x_i, x_j; \eta),$$

and learning the weights $\eta$ as in standard MKL. However, the aforementioned disadvantages would still apply.

We propose a novel MT-MKL model that induces similarity across tasks via the kernel combination parameters $\eta$, instead of via the discriminant function as above. It learns a different $\eta_r$ for each task $r$ and regularizes them globally. Assuming a single $\eta$ common to all tasks as in Rakotomamonjy et al. [27] is then a special case of our model, which holds the risk of negative transfer if some of the tasks are only weakly correlated. Parameters of the model can be learned by solving the following min–max optimization problem:

$$\min_{\{\eta_r \in \mathcal{E}\}_{r=1}^{T}} \; \max_{\{\alpha_r \in \mathcal{A}_r\}_{r=1}^{T}} \; \underbrace{\Omega(\{\eta_r\}_{r=1}^{T}) + \sum_{r=1}^{T} J_r(\alpha_r, \eta_r)}_{O_\eta} \quad (3)$$

where $\mathcal{E} = \{\eta : \sum_{m=1}^{P} \eta_m = 1,\; \eta_m \geq 0\; \forall m\}$ denotes the domain of the kernel combination parameters, $\mathcal{A}_r$ is the domain of the Lagrange multipliers for task $r$ as in Eq. (1), and

$$J_r(\alpha_r, \eta_r) = \sum_{i=1}^{N} \alpha_i^r - \frac{1}{2} \sum_{i=1}^{N} \sum_{j=1}^{N} \alpha_i^r \alpha_j^r y_i^r y_j^r \left( k_\eta^r(x_i, x_j; \eta_r) + \frac{\delta_i^j}{2C} \right)$$

is the objective function of the kernel-based learner for task $r$. Similarity between the kernels is enforced by the regularization term $\Omega(\cdot)$, which makes the kernel combination parameters of different tasks related and penalizes their divergence from each other. Among many possible choices of regularizer, we illustrate two: (i) the inner-product regularizer

$$\Omega_1(\{\eta_r\}_{r=1}^{T}) = -\nu \sum_{r=1}^{T} \sum_{s=1}^{T} \eta_r^\top \eta_s,$$

and (ii) the $\ell_2$-norm regularizer

$$\Omega_2(\{\eta_r\}_{r=1}^{T}) = -\nu \sum_{r=1}^{T} \sum_{s=1}^{T} \|\eta_r - \eta_s\|^2.$$

The first regularizer, $\Omega_1(\cdot)$, corresponds to the negative total correlation between the kernel weights of the tasks. Although this term is concave, efficient optimization is possible thanks to the bounded feasible sets of the kernel weights. The second alternative, $\Omega_2(\cdot)$, is the standard $\ell_2$-norm regularizer that penalizes the distance between the kernel weights in Euclidean space.

The coefficient $\nu$ determines the influence of the regularizer on the cost function. A small $\nu$ value corresponds to assuming unrelated tasks (and with $\nu = 0$ the model reverts to an independent MKL learner for each task), whereas a large value enforces similar kernel weights across the tasks.

The min–max optimization problem in Eq. (3) can be solved using a two-step iterative algorithm in a similar way to previous work on MKL [35–37]. In the first step, the kernel weights $\{\eta_r\}_{r=1}^{T}$ are given, hence we have $T$ single-task single-kernel learning problems at hand. In the second step, where the single-task learners are given, we update $\{\eta_r\}_{r=1}^{T}$ with respect to $O_\eta$ by applying projected gradient-descent subject to two constraints on the kernel weights: (i) being positive ($\forall r, \forall m: \eta_r^m \geq 0$) and (ii) summing up to one ($\forall r: \sum_{m=1}^{P} \eta_r^m = 1$). The gradient of the joint objective function of all task learners $O_\eta$ is

$$\frac{\partial O_\eta}{\partial \eta_r^m} = -2\, \frac{\partial \Omega(\eta_r)}{\partial \eta_r^m} - \frac{1}{2} \sum_{i=1}^{N} \sum_{j=1}^{N} \alpha_i^r \alpha_j^r y_i^r y_j^r \left( k_m^r(x_i^r, x_j^r) + \frac{\delta_i^j}{2C} \right),$$

where the gradient of the regularizer is

$$\frac{\partial \Omega_1(\eta_r)}{\partial \eta_r^m} = -\nu \sum_{s=1}^{T} \eta_s^m$$

for the inner-product penalty, and

$$\frac{\partial \Omega_2(\eta_r)}{\partial \eta_r^m} = -\nu \sum_{s=1}^{T} 2\,(\eta_r^m - \eta_s^m)$$

for the $\ell_2$-norm penalty. For faster convergence, the step sizes of the gradient-descent can be tuned at each iteration by line search. The


iterations are then repeated until convergence. The proposed method is summarized in Algorithm 1. See Gönen et al. [17] for the empirical performance of the method on tasks other than affective state inference.

Algorithm 1. The proposed Multi-task Multiple Kernel Learning (MT-MKL) algorithm.

  Initialize $\eta_r = (1/P, \ldots, 1/P)$, $\forall r$
  repeat
    Calculate $K_\eta^r = [k_\eta^r(x_i, x_j)]_{i,j=1}^{N}$, $\forall r$
    Solve a single-kernel machine using $K_\eta^r$, $\forall r$
    Update $\eta_r$ in the direction of $-\partial O_\eta / \partial \eta_r$, $\forall r$
  until convergence
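The projection step of Algorithm 1 is not spelled out above; one standard choice (an assumption here, not necessarily the authors' exact implementation) is the Euclidean projection onto the probability simplex, which enforces both constraints, nonnegativity and summing to one, at once:

```python
import numpy as np

def project_to_simplex(v):
    # Euclidean projection of v onto {x : x >= 0, sum(x) = 1}
    # (standard sort-based algorithm, in the style of Duchi et al.'s simplex projection)
    v = np.asarray(v, dtype=float)
    u = np.sort(v)[::-1]                       # sorted in decreasing order
    css = np.cumsum(u)
    rho = np.nonzero(u + (1.0 - css) / (np.arange(len(v)) + 1) > 0)[0][-1]
    theta = (1.0 - css[rho]) / (rho + 1)
    return np.maximum(v + theta, 0.0)

# usage inside the eta-update of Algorithm 1 (grad stands for dO_eta/d eta_r):
# eta_r = project_to_simplex(eta_r - step_size * grad)
eta = project_to_simplex(np.array([0.8, 0.5, -0.1]))
```

After each gradient step the weights are shifted and clipped so that they again form a valid convex combination over the P kernels.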

3. Tasks, setups, and measures

We demonstrate off-line analysis with the kernel-based inference models in two different application scenarios. The first uses high-quality data from Koelstra et al. [22], and acts as an example of how good models can be learned when the stimuli are relatively carefully chosen, the user is monitored with an extensive set of sensors, and the labeling has been done with care. We then take a step towards a setup closer to what could be used for practical affective interfaces, using a smaller set of relatively unobtrusive sensors and letting computer scientists who are not experts in psychological experiments, such as ourselves, design the data collection and labeling schemes.

For assessing model performance, we use:

accuracy: the proportion of correct predictions,
AUC: the area under the receiver operating characteristic curve, and
macro-F1 score: the average of the harmonic mean of precision and recall over all output categories.
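For the binary labels used here, all three measures can be computed in a few lines of NumPy; the sketch below (with made-up conventions, e.g. labels in {−1, +1}) is an illustration, not the authors' evaluation code:

```python
import numpy as np

def accuracy(y_true, y_pred):
    # proportion of correct predictions
    return np.mean(y_true == y_pred)

def auc(y_true, score):
    # AUC as the probability that a random positive is scored above a random negative
    pos, neg = score[y_true == 1], score[y_true == -1]
    pairs = pos[:, None] - neg[None, :]
    return np.mean(pairs > 0) + 0.5 * np.mean(pairs == 0)

def macro_f1(y_true, y_pred):
    # harmonic mean of precision and recall per class, averaged over the two classes
    f1s = []
    for c in (-1, 1):
        tp = np.sum((y_pred == c) & (y_true == c))
        prec = tp / max(np.sum(y_pred == c), 1)
        rec = tp / max(np.sum(y_true == c), 1)
        f1s.append(0.0 if prec + rec == 0 else 2 * prec * rec / (prec + rec))
    return np.mean(f1s)
```

Library implementations (e.g. scikit-learn's metrics module) offer equivalent functionality for multi-class settings.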

For user-specific models we compute the leave-one-sample-out estimate, learning N different models using N−1 data points for training and evaluating with the left-out sample. For user-independent models we use a leave-one-user-out procedure, learning M different models using all the data from M−1 users and testing with the left-out user. For both setups, we compare the performance of three kernel-based learners: SVM, MKL, and MT-MKL. For MT-MKL, we consider the following four alternatives:

MT-MKL (U1): users are taken as tasks and $\Omega_1(\cdot)$ is used for kernel weight regularization.
MT-MKL (U2): users are taken as tasks and $\Omega_2(\cdot)$ is used for kernel weight regularization.
MT-MKL (L1): label categories are taken as tasks and $\Omega_1(\cdot)$ is used for kernel weight regularization.
MT-MKL (L2): label categories are taken as tasks and $\Omega_2(\cdot)$ is used for kernel weight regularization.

For MT-MKL (L1) and MT-MKL (L2) we evaluate both user-specific and user-independent learning setups, but for MT-MKL (U1) and MT-MKL (U2) only the user-specific setup is applicable.
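The leave-one-user-out procedure described above can be sketched as follows; `train_fn` and `eval_fn` are hypothetical stand-ins for fitting and scoring any of the learners compared here:

```python
import numpy as np

def leave_one_user_out(X, y, user_ids, train_fn, eval_fn):
    # Train M models, each holding out all samples of one user, and
    # evaluate every model on its held-out user (user-independent setup).
    user_ids = np.asarray(user_ids)
    scores = {}
    for u in np.unique(user_ids):
        test = user_ids == u
        model = train_fn(X[~test], y[~test])   # fit on the other M-1 users
        scores[u] = eval_fn(model, X[test], y[test])
    return scores
```

The user-specific leave-one-sample-out estimate follows the same pattern with individual samples, rather than users, as the held-out units.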

It would also be possible to consider multi-task learning over both the users and the label categories, so that each user × label pair would form a single task. However, such tasks would not be exchangeable; instead, the structure between the tasks should be taken into account in the learner. For instance, the tasks corresponding to the same user should be regularized more towards each other than the tasks corresponding to different users. Hence, we do not consider such a setup further in this paper, but instead focus on setups where all tasks are a priori equally related to each other.

We picked the hyperparameters C and ν by cross-validation. C was selected from the set $\{10^{-3}, 10^{-2}, \ldots, 10^{3}\}$ for all models. For the MT-MKL variants, the regularization parameter ν was picked from the set $\{10^{-4}, 10^{-3}, \ldots, 10^{4}\}$. We used the baseline method SVM to choose either a linear or a Gaussian kernel, using the same choice for all MKL methods as well. In both cases, the kernels were normalized to make the MKL weights more easily interpretable.
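The hyperparameter search described above amounts to exhaustive evaluation over a small grid; a sketch, where `cv_score` is a hypothetical cross-validation scoring function:

```python
import itertools

C_grid = [10.0**k for k in range(-3, 4)]   # 10^-3 ... 10^+3
nu_grid = [10.0**k for k in range(-4, 5)]  # 10^-4 ... 10^+4

def pick_hyperparameters(cv_score):
    # cv_score(C, nu) -> validation score; return the best (C, nu) pair on the grid
    return max(itertools.product(C_grid, nu_grid), key=lambda p: cv_score(*p))
```

For the plain SVM and MKL baselines only the C grid applies; the ν grid is specific to the MT-MKL variants.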

4. Experiment 1: high-quality laboratory data

The first data set, named DEAP by its authors, is taken from Koelstra et al. [22]. In the experiment, 32 healthy participants watched 40 music videos of 1 min each and self-reported their emotional response to each video in four dimensions: valence, arousal, dominance, and liking (where liking refers to whether the user liked the video). The original label scales (from 1 to 9) were binarized by thresholding at level 5. The subjects were monitored with an extensive set of sensors, including full-scalp EEG and six peripheral sensors. We extracted 216 features from the measurements for each video (see Table 1), a subset of the features used by Koelstra et al. [22]. We also utilized a dimensionality reduction procedure similar to Koelstra et al. [22]: we computed linear discriminant analysis (LDA) on the training data for each label separately, then selected the top 25% of features for each sensor, ranking them by the eigenvalue in the LDA solution.

4.1. Prediction performance

The user-specific learning setup is the same as the one used in Koelstra et al. [22]; hence, we are able to compare the performance of our methods also with the naive Bayes model used there. In particular, we compare our results against their best variant using only physiological signals as inputs (they obtained better results when incorporating also content-based features, which would not generalize to any other type of content). We also present the

Table 1
List of features extracted from the DEAP data set, which is a subset of the list given in Koelstra et al. [22].

Full-scalp EEG from 32 channels: spectral powers of the theta (4–8 Hz), slow alpha (8–10 Hz), alpha (10–12 Hz), beta (12–30 Hz), and gamma (30+ Hz) bands for each electrode
EOG (electro-oculogram) and EMG (electro-myogram): energy, mean, and variance
GSR (galvanic skin response): mean, mean of the derivative, mean of the positive derivatives, proportion of negatives in the derivative, number of local minima, and 10 spectral powers within 0–2.4 Hz
Respiration: band energy ratio, average respiration signal, mean of the derivative, standard deviation, range of greatest breath, 10 spectral powers within 0–2.4 Hz, and average and median peak-to-peak time
Plethysmograph: average and standard deviation of heart rate variability (HRV) and interbeat intervals, energy ratio between 0.04–0.15 Hz and 0.15–0.5 Hz, spectral power in the 0.1–0.2 Hz, 0.2–0.3 Hz, 0.3–0.4 Hz, 0.01–0.08 Hz, 0.08–0.15 Hz, and 0.15–0.5 Hz components of HRV
Skin temperature: mean, mean of the derivative, and spectral power in 0–0.1 Hz and 0.1–0.2 Hz


baseline results of majority voting (choosing the label that is most frequent in the training data¹) and random guessing according to the relative frequency of the labels in the training data.
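Both baselines are straightforward to implement from training data alone (a sketch; variable names are ours):

```python
import numpy as np

def majority_baseline(y_train, n_test):
    # predict the label that is most frequent in the training data
    values, counts = np.unique(y_train, return_counts=True)
    return np.full(n_test, values[np.argmax(counts)])

def random_baseline(y_train, n_test, seed=0):
    # guess labels according to their relative frequency in the training data
    values, counts = np.unique(y_train, return_counts=True)
    rng = np.random.default_rng(seed)
    return rng.choice(values, size=n_test, p=counts / counts.sum())
```

Note that both draw their statistics from the training split only, in line with the caveat in the footnote about majority voting computed on the whole data.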

We performed our analysis on the same three emotional dimensions as Koelstra et al. [22]: valence, arousal, and liking. Average (over the users) test accuracies, AUCs, and macro-F1 scores of our method and the baselines are given in Table 2 (top). Multi-tasking in either way (over labels or over users) using the inner-product regularizer brings a decent improvement over the simpler models for all labels except liking. MT-MKL (U1) and MT-MKL (L1) either outperform or are tied with the naive Bayes model introduced by Koelstra et al. [22]. While MT-MKL (U2) gives results comparable to MT-MKL (U1), the ℓ2-norm regularizer performs worse for multi-tasking over labels (MT-MKL (L2)).

We evaluated our methods also in the user-independent setup for completeness, even though the authors of Koelstra et al. [22] avoided this setup due to the high inter-user variation in their data. Table 2 (bottom) reveals that the accuracy is lower than in the user-specific case, as expected. Nevertheless, the relative performance of the models is roughly retained, and we still outperform the chance level.

4.2. Sensor importance

An advantage of MKL is that it gives a direct estimate of sensor importance in the form of the kernel weights η. This is particularly useful for relative ranking of the sensors.

Fig. 1(a) shows the kernel weights found by MT-MKL (U1) for arousal, averaged over the users. EEG is the dominant sensor, which is sensible considering that a 32-channel device is much more data-rich than the other individual sensors. The result is consistent with Koelstra et al. [22], who obtained better results with EEG than with all peripheral sensors combined. GSR and respiration are the two most informative peripheral sensors, supporting previous studies such as Alzoubi et al. [1] and Gunes et al. [18]. It is noteworthy that this is an automatic side result of the method, requiring no extra effort from the experimenter.

Fig. 1(b) shows the weights for individual users with ν = 0 (the regular MKL model), and Fig. 1(c) shows the weights obtained with the multi-task version that chooses the optimal regularization. We see that multi-task learning makes the weights more similar, regularizing the individual solutions learned from limited data, but it still allows the models of some users to rely more on GSR when it is useful for those particular users. The earlier multi-task solution by Rakotomamonjy et al. [27] would force those users to comply with the consensus.

5. Experiment 2: towards real-world usage

In this second example, we took a step towards the kind of data available in real-world applications. We designed an experiment with simpler sensors and a fairly low degree of control over the naturalistic stimulus, but still performed the experiments off-line in a controlled environment.

5.1. Experimental setup

We constructed an experiment where users performed a pre-specified set of tasks that reasonably resemble typical tasks of daily computer use. The tasks were presented as a series of HTML pages, and the users interacted with the system using a mouse and a keyboard. A typical page in the experiment showed a question or puzzle the user was asked to answer, inducing typical processes such as decision-making and problem solving. Submitting the answer took the user to the next page. The experimental setup and the web interface were designed from a user-centric perspective. To this end, we interviewed three pilot users and adjusted the setup based on the findings.

5.1.1. Measurements

We collected data from six healthy male university students with four devices (see Fig. 2): an accelerometer, a heart rate belt, an eye tracker, and an electroencephalograph (EEG). A 3D acceleration vector was measured from the nape of the user at 15 Hz. The

Table 2
Test accuracy, AUC, and macro-F1 score of the models on the DEAP data set. The top table shows the results for the user-specific setup and the bottom table for the user-independent setup. The value of the best performing model (not counting baselines) has been boldfaced in each column. 'Random' and 'Majority' are baselines, SVM is a traditional kernel-based learner, MKL denotes a multi-view SVM, and the MT-MKL variants are multi-task multi-view learners. For the user-specific setup, the third row shows the best results reported in Koelstra et al. [22, Table 7]. MT-MKL (L1) and MT-MKL (U1) correspond to multi-tasking over labels and over users using the regularizer Ω1(·), respectively. MT-MKL (L2) and MT-MKL (U2) denote the same but use the regularizer Ω2(·).

User-specific setup:

Model       | Valence           | Arousal           | Liking            | Average
            | Acc.  AUC   F1    | Acc.  AUC   F1    | Acc.  AUC   F1    | Acc.  AUC   F1
Random      | 0.50  0.52  0.49  | 0.45  0.50  0.45  | 0.42  0.50  0.42  | 0.46  0.51  0.45
Majority    | 0.51  0.50  0.32  | 0.58  0.50  0.35  | 0.65  0.50  0.39  | 0.58  0.50  0.35
DEAP        | 0.63  N/A   0.61  | 0.62  N/A   0.58  | 0.59  N/A   0.54  | 0.61  N/A   0.58
SVM         | 0.64* 0.60* 0.62* | 0.61  0.53  0.53* | 0.65  0.61* 0.57* | 0.63* 0.58* 0.57*
MKL         | 0.63* 0.60* 0.60* | 0.61  0.56* 0.54* | 0.64  0.54  0.53* | 0.63* 0.56* 0.56*
MT-MKL (U1) | 0.66* 0.64* 0.63* | 0.63  0.58* 0.57* | 0.64  0.56* 0.55* | 0.64* 0.59* 0.58*
MT-MKL (U2) | 0.62* 0.64  0.53* | 0.61  0.67  0.57* | 0.64  0.60  0.52* | 0.63* 0.64* 0.54*
MT-MKL (L1) | 0.65* 0.64* 0.61* | 0.65  0.55  0.57* | 0.65  0.53  0.56* | 0.65* 0.57* 0.58*
MT-MKL (L2) | 0.63* 0.61* 0.58* | 0.63  0.52  0.51* | 0.65  0.52  0.51* | 0.64* 0.55* 0.53*

User-independent setup:

Model       | Valence           | Arousal           | Liking            | Average
            | Acc.  AUC   F1    | Acc.  AUC   F1    | Acc.  AUC   F1    | Acc.  AUC   F1
Random      | 0.49  0.48  0.48  | 0.52  0.49  0.48  | 0.55  0.48  0.48  | 0.52  0.48  0.48
Majority    | 0.57  0.50  0.28  | 0.59  0.50  0.29  | 0.67  0.50  0.33  | 0.61  0.50  0.30
SVM         | 0.57  0.58* 0.55* | 0.56* 0.56* 0.51* | 0.67  0.54* 0.45* | 0.60  0.56* 0.50*
MKL         | 0.59  0.61* 0.55* | 0.56* 0.54* 0.53* | 0.66* 0.48  0.51* | 0.60  0.54* 0.53*
MT-MKL (L1) | 0.59  0.60* 0.55* | 0.58  0.55* 0.52* | 0.66  0.50  0.51* | 0.61  0.55* 0.53*
MT-MKL (L2) | 0.60* 0.60* 0.56* | 0.56  0.54  0.46* | 0.65  0.51  0.42* | 0.60  0.55* 0.48*

* Significantly above majority voting (paired t-test, p < 0.05). Not calculated for DEAP since performance scores for individual cross-validation trials are not publicly available.

¹ Note that Koelstra et al. [22] defined majority voting as the most frequent label in the whole data. This would not correspond to a valid classifier, since it uses test data.


heart rate belt recorded RR-intervals (the time between two consecutive R waves in the electrocardiogram (ECG)) at 2 Hz. The eye tracker followed the pupil diameter with an infrared camera attached to a PC monitor at 50 Hz. The EEG device measured one-channel EEG from the FP1 location of the International 10–20 system at 512 Hz.

5.1.2. Interface and user tasks

The experimental setup consisted of five different phases, as summarized in Fig. 3. The first and last phases were baseline measurements, where the participant was presented with no stimulus and was instructed to relax and sit still. In the second phase, the subject filled in a background survey with open-answer and multiple-choice questions about age, gender, and language proficiency. The third part contained eight multiple-choice preference questions, where the choices were presented as four images. The fourth phase consisted of 10 arithmetic and logic puzzles of increasing difficulty, designed to elicit mental workload. After each puzzle, the user was given feedback on whether his answer was correct. During the experiment, unexpected events and interruptions, such as simulated failures in submitting forms and incorrect performance feedback, were inserted to evoke frustration and arousal.

5.1.3. Labeling affective states and mental workload

We obtained the ground-truth state labels on a 7-point numerical scale. The scale is a simplified version of the Self-Assessment Manikin [4] for arousal and valence, and corresponds to the one-dimensional Mental Load sub-scale of NASA's Task Load Index (NASA TLX) [19] for mental workload. The labels were collected by self-evaluation, similar to D'Mello and Graesser [12]. The user was shown each page again immediately after the experiment, this time including three sets of radio button selectors, one for each label.

We analyzed this data set as similarly as possible to the DEAP data set to keep the outcomes comparable. We extracted one data point of 38 features (8 EEG and 30 peripheral) from the time period of each question/puzzle (see Table 3), and formed the views by grouping features according to the sensors they come from. As in the previous experiment, we binarized the output labels: we infer low vs. high level using the mid-point as the discretization threshold.

Fig. 1. (a) Average (over the users) kernel weights found by MT-MKL(U1) for inferring arousal, showing that EEG is clearly the most useful sensor. (b) Sensor weights found by MKL for each individual user, sorted by the weight of the EEG sensor. (c) Sensor weights found by MT-MKL(U1). Weight increases as the color goes from blue through yellow to red. This figure is best viewed in colors. (For interpretation of the references to color in this figure caption, the reader is referred to the web version of this article.)

Fig. 2. A test user wearing the sensors. The headset is a one-channel EEG device, the eye-tracker is integrated in the desktop monitor, and the accelerometer can be seen attached to the nape of the user. A heart rate belt is under the shirt.

Fig. 3. A flow diagram of the experiment, showing sample screenshots of the user interface. The experiment lasted 25 min on average, including transitions between the phases.

5.2. Prediction performance

Table 4 shows the accuracies, AUCs, and F1 scores of all models and baselines. For the user-specific setup (top), MT-MKL(L1), the multi-task solution over labels, outperforms the other models in the majority of the performance metrics. The fact that MT-MKL(L1) is better than MT-MKL(U1) could be because the inter-subject variance is too large to benefit from information transfer across users, given only 33 samples per user. Regularizing the kernel weights with Ω1(·) yields marginally better performance than with Ω2(·). The standard SVM performs fairly well, since it is less likely to overfit on small data sets compared to the more complex alternatives. For the user-independent setup (bottom) the results are similar; the MT-MKL variants outperform the rest in general, whereas the standard SVM is good for arousal. The choice of the kernel weight regularizer does not have a significant effect on performance. Again the accuracy of all methods is, on average, lower than in the user-specific case.
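The three reported metrics are standard; for concreteness, a dependency-free sketch of all of them (binary labels assumed; AUC is computed via the rank statistic, with half credit for tied scores):

```python
def accuracy(y_true, y_pred):
    """Fraction of correctly predicted labels."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def auc(y_true, scores):
    """Area under the ROC curve via the rank (Mann-Whitney) statistic:
    the probability that a random positive outscores a random negative."""
    pos = [s for t, s in zip(y_true, scores) if t == 1]
    neg = [s for t, s in zip(y_true, scores) if t == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def macro_f1(y_true, y_pred):
    """Macro-F1: unweighted average of the per-class F1 scores."""
    f1s = []
    for c in (0, 1):
        tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
        fp = sum(t != c and p == c for t, p in zip(y_true, y_pred))
        fn = sum(t == c and p != c for t, p in zip(y_true, y_pred))
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1s.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)
    return sum(f1s) / len(f1s)
```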

5.3. Sensor importance

The kernel weights of MT-MKL(U1), averaged over users for each task, are given in Fig. 4. The body motion and pupil diameter sensors contribute more to affect inference than EEG and ECG, supporting previous work [10,29]. The result provides further evidence towards using them in future real-world applications, especially as both are relatively unobtrusive. Another intuitive result is that the single-channel EEG is far less useful than the full-scalp EEG used in the first experiment.

To further illustrate how MKL automatically infers the sensor importance, we conducted a semi-artificial study where we complemented the four real sensor streams with artificial noise sensors. The weights given to the real sensors, the ones conveying information on the user state, should then be large, while the weights of the noise sensors should be driven towards zero. We created the noise sensors by randomly shuffling the indices of the actual sensor data, in order to break the correlation with the output labels while still retaining the nature of each sensor's data. We compare the total weight MKL gives to the true sensors with an alternative approach that directly assigns the sensor weights based on averaged feature weights of linear regression (implemented as Bayesian ℓ1-regularized regression [23]). Irrespective of the norm used for averaging the weights, the MKL solutions are superior, especially for a high number of noisy sensors, as demonstrated in Fig. 5.
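The noise-sensor construction — shuffling a sensor's sample order to break its association with the labels while preserving its marginal statistics — can be sketched as follows. The data here are synthetic and only the shuffling idea is illustrated, not the paper's MKL weighting:

```python
import random

random.seed(0)

# A synthetic "sensor" correlated with a binary label.
labels = [i % 2 for i in range(200)]
sensor = [l + random.gauss(0.0, 0.3) for l in labels]

def pearson(x, y):
    """Plain Pearson correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x) ** 0.5
    vy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (vx * vy)

# A noise sensor: the same values in a randomly permuted sample order,
# so the marginal distribution is retained but the label link is broken.
noise = sensor[:]
random.shuffle(noise)

print(abs(pearson(sensor, labels)))  # high: informative sensor
print(abs(pearson(noise, labels)))   # near zero after shuffling
```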

5.4. Computational time

For practical application of affective computing models, the computational time is also important. All of the models discussed in this paper are reasonably fast to train, especially compared to the time it takes to collect the sensor data, and after training the time needed for making predictions is negligible. Hence, all would be practically feasible for affective computing systems.

Table 5 reports average training durations per unit learning task. For the multi-task methods we divide the durations by the number of tasks they jointly learn, to provide a fair comparison to the single-task methods. These durations include the time taken by the cross-validation procedure needed for choosing the hyperparameters.

Table3

List of features for the second experiment.

3D body motion (calculated separately for each dimension) and pupil diameter: mean and standard deviation, mean of the derivative; mean, median, and maximum peak-to-peak interval; standard deviation of fixation duration

EEG: spectral power in the 0.5–2.75 Hz, 3.5–6.75 Hz, 7.5–9.20 Hz, 10.0–11.75 Hz, 13.0–16.75 Hz, 18.0–29.75 Hz, 31.0–39.75 Hz, and 41.0–49.75 Hz bands

ECG: mean and standard deviation of the HRV; energy ratio between 0.04–0.15 Hz and 0.15–0.5 Hz; spectral powers in the 0.1–0.2 Hz, 0.2–0.3 Hz, 0.3–0.4 Hz, 0.01–0.08 Hz, 0.08–0.15 Hz, and 0.15–0.5 Hz components of HRV
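The EEG band powers in Table 3 can be obtained from a periodogram; a dependency-free sketch using a plain DFT (the band edges are taken from the table, while the 512 Hz sampling rate is from the text and the one-second window is our assumption):

```python
import cmath
import math

FS = 512  # EEG sampling rate (Hz), from the text
BANDS = [(0.5, 2.75), (3.5, 6.75), (7.5, 9.20), (10.0, 11.75),
         (13.0, 16.75), (18.0, 29.75), (31.0, 39.75), (41.0, 49.75)]

def band_powers(x, fs=FS, bands=BANDS):
    """Sum the squared DFT magnitudes of the bins falling inside each band."""
    n = len(x)

    def bin_power(k):
        coef = sum(x[t] * cmath.exp(-2j * math.pi * k * t / n)
                   for t in range(n))
        return abs(coef) ** 2

    return [sum(bin_power(k)
                for k in range(math.ceil(lo * n / fs), int(hi * n / fs) + 1))
            for lo, hi in bands]

# A pure 10 Hz oscillation should land in the 10.0-11.75 Hz band (index 3).
x = [math.sin(2 * math.pi * 10 * i / FS) for i in range(FS)]  # 1 s window
p = band_powers(x)
print(p.index(max(p)))  # -> 3
```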

Table 4
Test accuracy, AUC, and macro-F1 score of the models for Experiment 2. The top table shows the results for the user-specific setup and the bottom table for the user-independent setup. The value of the best performing model in each column has been boldfaced. See Table 2 for explanations of the methods.

Models       Valence              Arousal              Mental Wkld          Average
             Acc.  AUC   F1       Acc.  AUC   F1       Acc.  AUC   F1       Acc.  AUC   F1

Random       0.51  0.51  0.50     0.47  0.47  0.46     0.52  0.47  0.46     0.50  0.49  0.47
Majority     0.58  0.50  0.29     0.47  0.50  0.35     0.73  0.50  0.37     0.67  0.50  0.33
SVM          0.63  0.65  0.58*    0.69  0.54  0.48*    0.75  0.74* 0.58*    0.69  0.64* 0.55*
MKL          0.62  0.66* 0.58*    0.69  0.45  0.46     0.78  0.78* 0.68*    0.70  0.63* 0.58*
MT-MKL(L1)   0.67  0.70* 0.64*    0.62  0.57  0.53*    0.79  0.79* 0.64*    0.69  0.69* 0.60*
MT-MKL(L2)   0.64  0.65* 0.60*    0.63  0.51  0.54     0.77  0.77* 0.65*    0.68  0.65* 0.60*
MT-MKL(U1)   0.61  0.66* 0.58*    0.70  0.49  0.51     0.77  0.79* 0.66*    0.69  0.65* 0.58*
MT-MKL(U2)   0.63  0.65* 0.60*    0.69  0.48  0.46     0.77  0.77* 0.65*    0.70  0.63* 0.57*

Random       0.46  0.46  0.45     0.48  0.48  0.47     0.52  0.52  0.48     0.49  0.49  0.47
Majority     0.55  0.50  0.27     0.48  0.50  0.33     0.58  0.50  0.29     0.60  0.50  0.30
SVM          0.53  0.50  0.52*    0.65  0.65* 0.49*    0.53  0.63  0.49*    0.57  0.59* 0.50*
MKL          0.54  0.53  0.52*    0.68  0.63* 0.40*    0.54  0.70* 0.50*    0.59  0.62* 0.47*
MT-MKL(L1)   0.60  0.58  0.58*    0.69  0.65* 0.40*    0.64  0.76* 0.59*    0.65  0.66* 0.52*
MT-MKL(L2)   0.58  0.57  0.55*    0.67  0.61* 0.42*    0.66  0.76* 0.62*    0.64  0.65* 0.53*

* Significantly above majority voting (paired t-test, p < 0.05).


The MT-MKL methods are generally the slowest because they need to validate over a two-dimensional grid to pick not only C but also the ν parameter.
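The two-dimensional validation grid itself is straightforward to enumerate; a schematic sketch (the grid values and the scoring function are placeholders, not the ones used in the paper):

```python
import itertools

C_GRID = [0.1, 1.0, 10.0]    # placeholder grid for the SVM trade-off C
NU_GRID = [0.25, 0.5, 1.0]   # placeholder grid for the MT-MKL nu parameter

def cv_score(C, nu):
    """Stand-in for cross-validated accuracy of a model trained at (C, nu).
    This toy surrogate simply peaks at (1.0, 0.5)."""
    return -((C - 1.0) ** 2 + (nu - 0.5) ** 2)

# Exhaustive search over the 2D grid, as in the validation procedure.
best = max(itertools.product(C_GRID, NU_GRID), key=lambda p: cv_score(*p))
print(best)  # -> (1.0, 0.5)
```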

7. Discussion

In this study, we investigated the benefits of multi-task and multi-view learning for pattern classification problems of affective computing and human–computer interaction. We believe that these concepts fit naturally to the needs of typical affective state recognition setups, especially when used together. We exemplified the concepts by introducing a new kernel-based learning model that combines the two aspects.

Multi-view learning tells how data coming from different sensors should be combined. The MKL technique used in this paper allows automatically learning the importance of individual sensors (or sensor channels), which simplifies the development of robust inference solutions with novel hardware. Multi-task learning, in turn, exploits the correlations between multiple state labels while learning the models. It is also useful when data are scarce, which is a common problem in user-specific modeling setups. Our new model combines both aspects, by mutually regularizing the kernel weights of multiple tasks towards each other.
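To make the multi-view idea concrete — one kernel per sensor, combined with nonnegative weights — the sketch below assigns weights by a simple kernel–target alignment heuristic. This is an illustration only, not the MT-MKL optimization used in the paper:

```python
def linear_kernel(X):
    """Gram matrix of linear kernels between all pairs of samples."""
    return [[sum(a * b for a, b in zip(xi, xj)) for xj in X] for xi in X]

def alignment(K, y):
    """Frobenius inner product of K with the label outer product y y^T."""
    n = len(y)
    return sum(K[i][j] * y[i] * y[j] for i in range(n) for j in range(n))

def combine(kernels, y):
    """Weight each view's kernel by its (clipped) alignment with the labels,
    normalize the weights to the simplex, and return the combined kernel."""
    a = [max(alignment(K, y), 0.0) for K in kernels]
    s = sum(a) or 1.0
    eta = [v / s for v in a]
    n = len(y)
    K = [[sum(e * Km[i][j] for e, Km in zip(eta, kernels)) for j in range(n)]
         for i in range(n)]
    return eta, K

# Two toy "sensor" views: one aligned with the labels, one pure noise.
y = [1, 1, -1, -1]
informative = [[1.0], [0.9], [-1.0], [-0.8]]   # tracks the labels
noisy = [[0.1], [-0.2], [0.15], [-0.05]]       # unrelated to the labels
eta, K = combine([linear_kernel(informative), linear_kernel(noisy)], y)
print(eta[0] > eta[1])  # -> True: the informative sensor dominates
```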

The primary empirical result of the paper is that the MKL strategies automatically reveal the importance of the sensors, providing an intuitive ranking of the sensors in both experiments. We also showed, in an artificially constructed example, that the MKL strategies are more efficient at ignoring faulty or noisy sensors than inferring the importance from a linear regression model. In terms of accuracy, the proposed computational methods are sufficient for inferring the state labels better than chance, but we were not able to demonstrate a statistically significant gain compared to Naive Bayes and SVM, both of which are accurate classifiers for these kinds of setups. The experiments still suggest that a reliable gain could be demonstrated under more extensive testing: the MT-MKL variants give the best accuracy, AUC, and macro-F1 scores averaged over all of the results. Among the two alternatives considered for regularizing the kernel weights of learning tasks, the inner-product regularizer Ω1(·) was observed to provide marginally more stable performance than Ω2(·). A possible reason for this outcome could be that, being a first-order term, Ω1(·) acts as a stronger regularizer than the second-order Ω2(·). This induces a stronger bias in the learner, making it less sensitive to high noise levels, which are typical of affective computing data sets, including the ones presented above. An interesting future direction is to consider real online inference of user states; for real-world use, automatic selection of sensor importance is even more critical as the sensors may not work in all conditions, and it is also possible to apply multi-task learning over more diverse setups, for example considering different contexts as tasks. The methods proposed here are computationally light in the inference stage, and the training algorithms are also fairly efficient and could possibly be extended to real-time learning as well.

Fig. 4. Sensor weights in the second experiment, averaged over the users for the MT-MKL(U1) model, reveal that body motion (acceleration) is the most important sensor, followed by pupil measurements. (a) Valence. (b) Arousal. (c) Mental Wkld.

Fig. 5. The relative importance assigned to the four true sensors when learning the model with Q noise sensors not associated with the affective labels. Both MT-MKL(U1) and MKL assign much higher weight to the true sensors than the alternative method estimating the sensor weights by averaging linear regression weights, irrespective of the norm (ℓ1, ℓ2, or ℓ∞) used for regularizing the model. The difference is particularly clear for large Q and statistically significant (paired t-test, p < 0.05) for Q ≥ 8. The black dashed line shows the chance level of assigning equal weight to each sensor.

Table 5
Average training durations of the algorithms in comparison, per unit learning task, in seconds. The MT-MKL variants are approximately 3.5 times slower than MKL.

             Experiment 1                        Experiment 2
             User-specific  User-independent     User-specific  User-independent
SVM          0.1            0.9                  0.04           0.5
MKL          0.3            14.9                 0.24           17.1
MT-MKL(L1)   1.6            93.0                 1.29           10.9
MT-MKL(L2)   1.7            90.4                 1.22           9.5
MT-MKL(U1)   2.5            N/A                  1.26           N/A
MT-MKL(U2)   2.4            N/A                  1.28           N/A

Acknowledgments

We acknowledge support from the Nokia Research Center, the Academy of Finland (project number 133818 and the Finnish Centre of Excellence in Computational Inference Research (COIN)), and the PASCAL2 European Network of Excellence. We gratefully thank Dr. Ville Ojanen, Dr. Jari Kangas, and Maija Nevala, MSc, for their help in designing the experiment and discussing the modeling aspects. Our special thanks go to Maija Nevala for her help with implementing the experimental setup.

References

[1] O. Alzoubi, R.A. Calvo, R.H. Stevens, Classification of EEG for affect recognition: an adaptive approach, in: Proceedings of 22nd Australasian Joint Conference on Advances in Artificial Intelligence, 2009, pp. 52–61.

[2]I.Arroyo,D.G.Cooper,W.Burleson,B.P.Woolf,K.Muldner,R.Christopherson,

Emotion sensors go to school, in: Proceedings of Conference on Artificial Intelligence in Education, IOS Press, Amsterdam, The Netherlands, 2009, pp. 17–24.

[3]J.Baxter,A Bayesian/information theoretic model of learning to learn via

multiple task sampling,Mach.Learn.28(1)(1997)7–39.

[4]M.M.Bradley,https://www.sodocs.net/doc/3d17332725.html,ng,Measuring emotion:the self-assessment manikin and

the semantic differential,J.Behav.Ther.Exp.Psychiatr.25(1)(1994)49–59.

[5]R.Caruana,Multitask learning,Mach.Learn.28(1)(1997)41–75.

[6]G.Chanel,J.J.M.Kierkels,M.Soleymani,T.Pun,Short-term emotion assess-

ment in a recall paradigm,Int.J.Human–Comput.Stud.67(8)(2009)607–627.

[7]D.Chen,R.Vertegaal,Using mental load for managing interruptions in

physiologically attentive user interfaces,in:Extended Abstracts on Human Factors in Computing Systems,2004,pp.1513–1516.

[8]C.Conati,H.Maclaren,Empirically building and evaluating a probabilistic

model of user affect,User https://www.sodocs.net/doc/3d17332725.html,er-Adapt.Interact.19(2009)267–303. [9]C.Conati,H.Maclaren,Modeling user affect from causes and effects,in:

Proceedings of International Conference on User Modeling,Adaptation,and Personalization,UMAP'09,Springer-Verlag,Berlin,Heidelberg,2009,pp.4–15.

[10]C.Conati, C.Merten,Eye-tracking for user modeling in exploratory learning

environments:an empirical evaluation,Knowl.-Based Syst.20(6)(2007)557–574.

[11]S.D'Mello, A.Graesser,Mind and body:dialogue and posture for affect

detection in learning environments, in: Proceedings of Conference on Artificial Intelligence in Education: Building Technology Rich Learning Contexts That Work, IOS Press, Amsterdam, The Netherlands, 2007, pp. 161–168.

[12]S.D'Mello,A.Graesser,Automatic detection of learner's affect from gross body

language,Appl.Artif.Intell.23(2)(2009)123–150.

[13]E.Douglas-Cowie,R.Cowie,I.Sneddon, C.Cox,O.Lowry,M.Mcrorie,

J.-C.Martin,L.Devillers,S.Abrilian,A.Batliner,N.Amir,K.Karpouzis,The humaine database:addressing the collection and annotation of naturalistic and induced emotional data,in:Proceedings of2nd International Conference on Affective Computing and Intelligent Interaction,2007,pp.488–500. [14]T.Evgeniou,M.Pontil,Regularized multi-task learning,in:Proceedings of

International Conference on Knowledge Discovery and Data Mining,ACM, New York,USA,2004,pp.109–117.

[15] A. Girouard, E. Solovey, L. Hirshfield, K. Chauncey, A. Sassaroli, S. Fantini, R. Jacob, Distinguishing difficulty levels with non-invasive brain activity measurements, in: Proceedings of 12th IFIP International Conference on Human–Computer Interaction: Part I, 2009, pp. 440–452.

[16] M. Gönen, E. Alpaydın, Multiple kernel learning algorithms, J. Mach. Learn. Res. 12 (2011) 2211–2268.

[17] M. Gönen, M. Kandemir, S. Kaski, Multitask learning using regularized multiple kernel learning, in: Proceedings of 18th International Conference on Neural Information Processing (ICONIP), Lecture Notes in Computer Science, 2011, pp. 500–509.

[18]H.Gunes,B.Schuller,M.Pantic,R.Cowie,Emotion representation,analysis and

synthesis in continuous space:a survey,in:Proceedings of IEEE International Conference on Automatic Face Gesture Recognition and Workshops,2011, pp.827–834.

[19]S.G.Hart,L.E.Stavenland,Development of NASA-TLX(Task Load Index):results

of empirical and theoretical research,in:Human Mental Workload,Elsevier, Amsterdam,The Netherlands,1988,pp.139–183.

[20]J.Kim,E.André,Emotion recognition based on physiological changes in music

listening,IEEE Trans.Pattern Anal.Mach.Intell.30(12)(2008)2067–2083.

[21] M. Kloft, U. Brefeld, S. Sonnenburg, A. Zien, Lp-norm multiple kernel learning, J. Mach. Learn. Res. 12 (3) (2011) 953–997.

[22] S. Koelstra, C. Mühl, M. Soleymani, A. Yazdani, J.-S. Lee, T. Ebrahimi, T. Pun, A. Nijholt, I. Patras, DEAP: a database for emotion analysis using physiological signals, IEEE Trans. Affect. Comput., in press.

[23]K.Murphy,Machine Learning:A Probabilistic Perspective,MIT Press,

Cambridge,MA,USA,2012.[24]R.Picard,E.Vyzas,J.Healey,Toward machine emotional intelligence:analysis

of affective physiological state,IEEE Trans.Pattern Anal.Mach.Intell.23(10) (2001)1175–1191.

[25]R.W.Picard,Affective Computing,MIT Press,Cambridge,MA,USA,1997.

[26]A.Piolat,T.Olive,J.Roussey,O.Thunin,J.Ziegler,SCRIPTKELL:a tool for

measuring cognitive effort and time processing in writing and other complex cognitive activities,Behav.Res.Methods31(1)(1999)113–121.

[27] A. Rakotomamonjy, R. Flamary, G. Gasso, S. Canu, ℓp–ℓq penalty for sparse linear and sparse multiple kernel multi-task learning, IEEE Trans. Neural Netw. 22 (8) (2011) 1307–1320.

[28]J.A.Russell,A circumplex model of affect,J.Personal.Soc.Psychol.39(6)

(1980)1161–1178.

[29]N.Savva,N.Bianchi-Berthouze,Automatic recognition of affective body

movement in a video game scenario,in:International Conference on Intelli-gent Technologies for Interactive Entertainment,2011,pp.149–158.

[30] B. Schölkopf, A.J. Smola, Learning with Kernels: Support Vector Machines, Regularization, Optimization, and Beyond, MIT Press, Cambridge, MA, USA, 2002.

[31]M.Soleymani,J.Lichtenauer,T.Pun,M.Pantic,A multi-modal affective

database for affect recognition and implicit tagging,IEEE Trans.Affect.

Comput.3(1)(2011)42–55.

[32] J. Wagner, J. Kim, E. André, From physiological signals to emotions: implementing and comparing selected methods for feature extraction and classification, in: Proceedings of IEEE International Conference on Multimedia and Expo, 2005, pp. 940–943.

[33]A.Yazdani,J.-S.Lee,J.-M.Vesin,T.Ebrahimi,Affect recognition based on physio-

logical changes during the watching of music videos,ACM Trans.Interact.Intell.

Syst.2(1)(2012)7:1–7:26.

[34]Z.Zeng,M.Pantic,G.Roisman,T.Huang,A survey of affect recognition

methods:audio,visual,and spontaneous expressions,IEEE Trans.Pattern Anal.Mach.Intell.31(1)(2009)39–58.

[35]A.Rakotomamonjy,F.Bach,S.Canu,Y.Grandvalet,et al.,SimpleMKL,J.Mach.

Learn.Res.9(2008)2491–2521.

[36] M. Varma, B.R. Babu, More generality in efficient multiple kernel learning, in:

Proceedings of International Conference on Machine Learning(ICML),ACM, New York,USA,2009,pp.1065–1072.

[37] Zenglin Xu, Rong Jin, Haiqin Yang, Irwin King, Michael R. Lyu, Simple and efficient multiple kernel learning by group lasso, in: Proceedings of International Conference on Machine Learning (ICML), 2010, pp. 1175–1182.

Melih Kandemir received his B.Sc. and M.Sc. degrees in computer engineering from Hacettepe University, Ankara, Turkey, in 2005 and Bilkent University, Ankara, Turkey, in 2008, respectively. He joined the Statistical Machine Learning and Bioinformatics research group of Aalto University School of Science, Espoo, Finland, in 2008 and earned his Ph.D. degree in 2013. Since 2013, he has been with the Heidelberg Collaboratory for Image Processing (HCI), Heidelberg University, Heidelberg, Germany. Bayesian modeling, weakly supervised learning, medical image analysis, digital pathology, and neuroinformatics are among his research interests.

Akos Vetek is a Principal Researcher at the Media Technologies Laboratory of Nokia Research Center. His research interests include multimodal interaction, intelligent user interfaces, sensors, and wearables.

Mehmet Gönen received the B.Sc. degree in industrial engineering, and the M.Sc. and Ph.D. degrees in computer engineering, from Boğaziçi University, Istanbul, Turkey, in 2003, 2005, and 2010, respectively. He did his postdoctoral work at the Helsinki Institute for Information Technology HIIT, Department of Information and Computer Science, Aalto University, Espoo, Finland. He is currently a Senior Research Scientist at Sage Bionetworks, Seattle, WA, USA. His research interests include support vector machines, kernel methods, Bayesian methods, optimization for machine learning, dimensionality reduction, information retrieval, and computational biology applications.


Arto Klami received his Ph.D. degree in computer science from Helsinki University of Technology in 2008 and worked as a postdoctoral researcher at Aalto University until 2012. Currently he works as an Academy Research Fellow (2013–2018) at the Department of Computer Science and the Helsinki Institute for Information Technology HIIT at the University of Helsinki. His research interests include statistical machine learning, nonparametric Bayesian models, and integrated analysis of heterogeneous data sources.

Samuel Kaski is the director of the Helsinki Institute for Information Technology HIIT, a joint research institute of Aalto University and the University of Helsinki, and a professor of computer science at Aalto University. His research field is statistical machine learning and computational data analysis, with current application areas in bioinformatics, neuroinformatics and proactive interfaces. He has published about 150 peer-reviewed articles in these fields.



星巴克swot分析

6月21日 85度C VS. 星巴克SWOT分析 星巴克SWOT分析 优势 1.人才流失率低 2.品牌知名度高 3.熟客券的发行 4.产品多样化 5.直营贩售 6.结合周边产品 7.策略联盟 劣势 1.店內座位不足 2.分店分布不均 机会 1.生活水准提高 2.隐藏极大商机 3.第三空间的概念 4.建立电子商务 威胁 1.WTO开放后,陆续有国际品牌进驻 2.传统面包复合式、连锁咖啡馆的经营 星巴克五力分析 1、供应商:休闲风气盛,厂商可将咖啡豆直接批给在家煮咖啡的消费者 2、购买者:消费者意识高涨、资讯透明化(比价方便) 3、同业內部竞争:产品严重抄袭、分店附近必有其他竞争者 4、潜在竞争者:设立咖啡店连锁店无进入障碍、品质渐佳的铝箔包装咖啡 5、替代品:中国茶点、台湾小吃、窜红甚快的日本东洋风...等 85度c市場swot分析: 【Strength优势】 具合作同业优势 产品精致 以价格进行市场区分,平价超值 服务、科技、产品、行销创新,机动性强 加盟管理人性化 【Weakness弱势】 通路品质控制不易 品牌偏好度不足 通路不广 财务能力不健全 85度c的历史资料,在他们的网页上的活动信息左邊的新聞訊息內皆有詳細資料,您可以直接上網站去查閱,皆詳述的非常清楚。

顧客滿意度形成品牌重於產品的行銷模式。你可以在上他們家網站找找看!【Opportunity机会】 勇于變革变革与创新的经营理念 同业策略联盟的发展、弹性空间大 【Threat威胁】 同业竞争对手(怡客、维多伦)门市面对面竞争 加盟店水准不一,品牌形象建立不易 直,间接竞争者不断崛起(壹咖啡、City Café...) 85度跟星巴克是不太相同的零售,星巴客应该比较接近丹堤的咖啡厅,策略形成的部份,可以从 1.产品线的宽度跟特色 2.市场区域与选择 3.垂直整合 4.规模经济 5.地区 6.竞争优势 這6點來做星巴客跟85的区分 可以化一个表,來比较他們有什么不同 內外部的話,內部就从相同产业來分析(有什麼优势跟劣势) 外部的話,一樣是相同产业但卖的东西跟服务不太同,来与其中一个产业做比较(例如星巴客) S(优势):點心精致平价,咖啡便宜,店面设计观感很好...等等 W(劣势):对于消费能力较低的地区点心价格仍然较高,服务人员素质不齐,點心种类变化較少 O(机会):对于点心&咖啡市场仍然只有少数的品牌独占(如:星XX,壹XX...等),它可以透过连锁店的开幕达成高市占率 T(威协):1.台湾人的模仿功力一流,所以必须保持自己的特色做好市场定位 2.消費者的口味变化快速,所以可以借助学者"麥XX"的做法保有主要的點心款式外加上几样周期变化的點心 五力分析 客戶讲价能力(the bargaining power of customers)、 供应商讲价能力(the bargaining power of suppliers)、 新进入者的竞争(the threat of new entrants)、 替代品的威协(the threat of substitute products)、 现有厂商的竞争(The intensity of competitive rivalry)

2019年温州一般公需科目《信息管理与知识管理》模拟题

2019年温州一般公需科目《信息管理与知识管理》模拟题 一、单项选择题(共30小题,每小题2分) 2.知识产权的时间起点就是从科研成果正式发表和公布的时间,但有期限,就是当事人去世()周年以内权利是保全的。 A、三十 B、四十 C、五十 D、六十 3.()国务院发布《中华人民共和国计算机信息系统安全保护条例》。 A、2005年 B、2000年 C、1997年 D、1994年 4.专利申请时,如果审察员认为人类现有的技术手段不可能实现申请者所描述的结果而不给予授权,这体现了()。 A、技术性 B、实用性 C、创造性 D、新颖性 5.根据本讲,参加课题的研究策略不包括()。 A、量力而行 B、逐步击破 C、整体推进 D、有限目标 6.本讲认为,传阅保密文件一律采取()。 A、横向传递 B、纵向传递 C、登记方式 D、直传方式 7.本讲提到,高达()的终端安全事件是由于配置不当造成。 A、15% B、35% C、65% D、95% 8.本讲认为知识产权制度本质特征是()。 A、反对垄断 B、带动创业 C、激励竞争 D、鼓励创新 9.本讲指出,以下不是促进基本 公共服务均等化的是()。 A、互联网+教育 B、互联网+医 疗 C、互联网+文化 D、互联网+工 业 10.根据()的不同,实验可以分 为定性实验、定量实验、结构分析实 验。 A、实验方式 B、实验在科研中 所起作用C、实验结果性质D、实验 场所 11.本讲认为,大数据的门槛包含 两个方面,首先是数量,其次是()。 A、实时性 B、复杂性 C、结构化 D、非结构化 12.2007年9月首届()在葡萄牙里 斯本召开。大会由作为欧盟轮值主席 国葡萄牙的科学技术及高等教育部 主办,由欧洲科学基金会和美国研究 诚信办公室共同组织。 A、世界科研诚信大会 B、世界学术诚信大会 C、世界科研技术大会 D、诺贝尔颁奖大会 13.()是2001年7月15日发现 的网络蠕虫病毒,感染非常厉害,能 够将网络蠕虫、计算机病毒、木马程 序合为一体,控制你的计算机权限, 为所欲为。 A、求职信病毒 B、熊猫烧香病 毒 C、红色代码病毒 D、逻辑炸弹 14.本讲提到,()是创新的基础。 A、技术 B、资本 C、人才 D、知 识 15.知识产权保护中需要多方协 作,()除外。 A、普通老百姓 B、国家 C、单位 D、科研人员 17.世界知识产权日是每年的()。 A、4月23日 B、4月24日 C、4月25日 D、4月26日 18.2009年教育部在《关于严肃 处理高等学校学术不端行为的通知》 中采用列举的方式将学术不端行为 细化为针对所有研究领域的()大类行 为。 A、3 B、5 C、7 D、9 19.美国公民没有以下哪个证件 ()。 A、护照 B、驾驶证 C、身份证 D、社会保障号 20.关于学术期刊下列说法正确 的是()。 A、学术期刊要求刊发的都是第 一手资料B、学术期刊不要求原发 C、在选择期刊时没有固定的套 式 D、对论文的专业性没有限制 22.知识产权的经济价值(),其 遭遇的侵权程度()。 A、越大,越低 B、越大,越高 C、越小,越高 D、越大,不变 23.()是一项用来表述课题研究 进展及结果的报告形式。 A、开题报告 B、文献综述 C、课题报告 D、序论 25.1998年,()发布《电子出版 物管理暂行规定》。

人教版高中语文必修必背课文

必修1 沁园春·长沙(全文)毛泽东 独立寒秋, 湘江北去, 橘子洲头。 看万山红遍, 层林尽染, 漫江碧透, 百舸争流。 鹰击长空, 鱼翔浅底, 万类霜天竞自由。 怅寥廓, 问苍茫大地, 谁主沉浮。 携来百侣曾游, 忆往昔峥嵘岁月稠。 恰同学少年, 风华正茂, 书生意气, 挥斥方遒。 指点江山, 激扬文字, 粪土当年万户侯。 曾记否, 到中流击水, 浪遏飞舟。 雨巷(全文)戴望舒 撑着油纸伞,独自 彷徨在悠长、悠长 又寂寥的雨巷, 我希望逢着 一个丁香一样地 结着愁怨的姑娘。 她是有 丁香一样的颜色, 丁香一样的芬芳, 丁香一样的忧愁, 在雨中哀怨, 哀怨又彷徨;

她彷徨在这寂寥的雨巷, 撑着油纸伞 像我一样, 像我一样地 默默彳亍着 冷漠、凄清,又惆怅。 她默默地走近, 走近,又投出 太息一般的眼光 她飘过 像梦一般地, 像梦一般地凄婉迷茫。 像梦中飘过 一枝丁香地, 我身旁飘过这个女郎; 她默默地远了,远了, 到了颓圮的篱墙, 走尽这雨巷。 在雨的哀曲里, 消了她的颜色, 散了她的芬芳, 消散了,甚至她的 太息般的眼光 丁香般的惆怅。 撑着油纸伞,独自 彷徨在悠长、悠长 又寂寥的雨巷, 我希望飘过 一个丁香一样地 结着愁怨的姑娘。 再别康桥(全文)徐志摩 轻轻的我走了,正如我轻轻的来;我轻轻的招手,作别西天的云彩。 那河畔的金柳,是夕阳中的新娘;波光里的艳影,在我的心头荡漾。 软泥上的青荇,油油的在水底招摇;

在康河的柔波里,我甘心做一条水草! 那榆荫下的一潭,不是清泉, 是天上虹揉碎在浮藻间,沉淀着彩虹似的梦。 寻梦?撑一支长篙,向青草更青处漫溯, 满载一船星辉,在星辉斑斓里放歌。 但我不能放歌,悄悄是别离的笙箫; 夏虫也为我沉默,沉默是今晚的康桥。 悄悄的我走了,正如我悄悄的来; 我挥一挥衣袖,不带走一片云彩。 记念刘和珍君(二、四节)鲁迅 二 真的猛士,敢于直面惨淡的人生,敢于正视淋漓的鲜血。这是怎样的哀痛者和幸福者?然而造化又常常为庸人设计,以时间的流驶,来洗涤旧迹,仅使留下淡红的血色和微漠的悲哀。在这淡红的血色和微漠的悲哀中,又给人暂得偷生,维持着这似人非人的世界。我不知道这样的世界何时是一个尽头! 我们还在这样的世上活着;我也早觉得有写一点东西的必要了。离三月十八日也已有两星期,忘却的救主快要降临了罢,我正有写一点东西的必要了。 四 我在十八日早晨,才知道上午有群众向执政府请愿的事;下午便得到噩耗,说卫队居然开枪,死伤至数百人,而刘和珍君即在遇害者之列。但我对于这些传说,竟至于颇为怀疑。我向来是不惮以最坏的恶意,来推测中国人的,然而我还不料,也不信竟会下劣凶残到这地步。况且始终微笑着的和蔼的刘和珍君,更何至于无端在府门前喋血呢? 然而即日证明是事实了,作证的便是她自己的尸骸。还有一具,是杨德群君的。而且又证明着这不但是杀害,简直是虐杀,因为身体上还有棍棒的伤痕。 但段政府就有令,说她们是“暴徒”! 但接着就有流言,说她们是受人利用的。 惨象,已使我目不忍视了;流言,尤使我耳不忍闻。我还有什么话可说呢?我懂得衰亡民族之所以默无声息的缘由了。沉默啊,沉默啊!不在沉默中爆发,就在沉默中灭亡。

相关主题