Typicality: An Improved Semantic Analysis
Galit W. Sassoon, Tel Aviv University
Abstract: Parts 1-3 present and criticize Partee and Kamp's (1995) well-known analysis of the typicality effects. The main virtue of this analysis is its use of supermodels, rather than fuzzy models, to represent vagueness in predicate meaning. The main problem is that the typicality of an item in a predicate is represented by a value assigned by a measure function, indicating the proportion of supervaluations in which the item falls under the predicate. A number of phenomena cannot be correctly represented by such a measure function, including the typicality effects in sharp predicates, the conjunction fallacy, and the context dependency of the typicality effects. In Parts 4-5, it is argued that these classical problems are solved if the typicality ordering is taken to be the order in which entities are learnt to be denotation members (or non-members) through contexts and their extensions. A modified formal model is presented, which clarifies the connections between the typicality effects, predicate meaning, and its acquisition.

Contents:
1. What are the typicality effects?
2. The Supermodel Theory (Partee and Kamp 1995)
2.1 Background: Multiple valued logic in the analysis of typicality
2.2 Supermodels
2.3 The representation of typicality in the Supermodel theory
3. Problems in the Supermodel Theory
3.1 Typicality degrees of denotation members
3.2 The sub-type effect
3.3 The conjunction effect / fallacy
3.4 Partial knowledge
3.5 Numerical degrees
3.6 Prototypes
3.7 Feature sets
3.8 Conclusions of part 3
4. My Proposal: Learning Models
4.1 Learning models
4.2 The typicality ordering
4.3 Deriving degrees
4.4 Intermediate degrees of denotation members
4.5 The sub-type effect
4.6 The conjunction effect / fallacy
4.7 The negation effect
4.8 Partial knowledge
4.9 Context dependency
4.10 Typicality Features
5. What exactly do Learning Models model? More findings
5.1 Corrections
5.2 Inferences: Indirect learning
5.3 Conclusions of part 5
6. Conclusions

1. What are the typicality effects?

Speakers order entities or sub-kinds (Dayal 2004; sub-kinds are also called exemplars) by their typicality in predicates. For example, a robin is often considered more typical of a bird than an ostrich or a penguin. These ordering judgments show up in an unconscious processing effect, namely in online categorization time: verification time for sentences like a robin is a bird, where subjects determine category membership for a typical item, is faster than for sentences like an ostrich is a bird, where subjects determine membership of an atypical item (Rosch 1973, Armstrong, Gleitman and Gleitman 1983). In addition, speakers consider features like feathers, small, flies and sings as typical of birds. Crucially, the more typical birds are more typical in these features (Rosch 1973).

These judgments are highly context dependent. For example, within the context of an utterance like the bird walked across the barnyard, a chicken is regarded as a typical bird, and categorization time is faster for the contextually appropriate item chicken than for the normally typical but contextually inappropriate item robin (Roth and Shoben 1983).

In addition to these basic effects, there are robust order-of-learning effects. In a nutshell, typical instances are acquired earlier than atypical ones, by children of various ages and by adults (Mervis and Rosch 1981, Rosch 1973, Murphy and Smith 1982); in recall tasks, typical instances are produced before atypical ones (Rosch 1973, Battig & Montague 1969); categories are learned faster if initial exposure is to a typical member (Mervis & Pani 1980) than if initial exposure is to an atypical member, or even to the whole denotation in a random order; and finally, typical (or early acquired) instances are remembered best (Heit 1997), and they affect future learning (encoding in memory) of entities and their features (Rips 1975, Osherson et al 1990).

In sum, typicality is deeply related to the order in which instances are learnt to be members in predicate denotations. These findings were replicated time and again (Mervis and Rosch 1981). Yet, the mental models underlying them and their relation to predicate meaning are still a puzzle. To see this, we will now review the typicality theory most frequently cited by formal semanticists, namely the Supermodel Theory. For a more detailed discussion of the typicality effects and other model types, see Sassoon 2005.

2. The Supermodel Theory (Partee and Kamp 1995)

2.1 Background: Multiple valued logic in the analysis of typicality

Partee and Kamp's main innovation in the analysis of typicality is the use of a logic with three truth values and the technique of supervaluations (van Fraassen 1969; Kamp 1975; Fine 1975; Veltman 1984; Landman 1991), as opposed to the standard use of a logic with multiple truth values (such as fuzzy logics) in the analysis of typicality in artificial intelligence, cognitive psychology, and linguistics (Zadeh 1965; Lakoff 1973; Osherson & Smith 1981; Lakoff 1987; Aarts et al 2004).

2.1.1 Fuzzy models

In classical logics, a proposition may take as a truth value either 0 or 1. In fuzzy logics, a proposition may take as a truth value any number in the real interval [0,1]. For example, such a model can assume the following facts:

[1] The truth value of the proposition a robin is a bird is 1;
The truth value of the proposition a goose is a bird is 0.7;
The truth value of the proposition an ostrich is a bird is 0.5;
The truth value of the proposition a butterfly is a bird is 0.3;
The truth value of the proposition a cow is a bird is 0.1.

These values indicate the typicality degrees of the individuals or kinds denoted by the subjects in the predicate bird. More precisely, in such models, predicates are not associated with sets as denotations. Rather, for every predicate P, a characteristic function cm(P,d) assigns to each entity d in the domain of individuals D a value in the real interval [0,1], its degree of membership in P. Moreover, each predicate is associated with a prototype p, i.e. the best member possible. Finally, a degree function cP (a distance metric) associates pairs of entities with values in the real interval [0,1]. If, for example, r is a robin, b a blue jay and o an ostrich, then cP(r,b) < cP(r,o), i.e. r is more similar to b than to o. The typicality of an entity d in P is represented as the distance of d from the prototype of P, cP(d,p). This distance function satisfies several constraints. For example, cP is such that any entity has zero distance from itself (∀d∈D: cP(d,d) = 0); cP is symmetric (∀d,e∈D: cP(d,e) = cP(e,d)); and cP has the property called the triangle inequality (∀d,e,f∈D: cP(d,e) + cP(e,f) ≥ cP(d,f)). Most important for our purposes is the monotonic decreasing relation
between cP and cm: the distance of entities from the prototype p of P inversely correlates with their membership degree in P:

[2] ∀d,e∈D: (cP(d,p) ≤ cP(e,p)) → (cm(P,d) ≥ cm(P,e)).

Typicality degrees are assumed to correspond to degrees, or probabilities, of membership in the category. This leading intuition shows up also in the rules that predict the typicality degrees in complex predicates. There are three composition rules for cm:

[3] 1. The complement rule for ¬: cm(¬P,d) = 1 − cm(P,d)
2. The minimal-degree rule for ∧: cm(P∧Q,d) = Min(cm(P,d), cm(Q,d))
3. The maximal-degree rule for ∨: cm(P∨Q,d) = Max(cm(P,d), cm(Q,d))

Consider, for instance, the complement rule for negated predicates in (3.1). The degree of a goose in not-a-bird is assumed to be the complement of its degree in bird (e.g. 1 − 0.7). This rule is directly inspired by the idea that the probability that p is the complement of the probability that not-p. Similarly, the minimal-degree rule for conjunctions in (3.2) states that an item's degree in a modified noun like brown apple is the minimal degree among the constituents, brown and apple. This rule, and other versions of the rule for conjunctions and modified nouns in fuzzy models, are directly inspired by the fact that the probability that p∧q cannot exceed the probability that just p, or just q.

2.1.2 Problems of fuzzy models

Osherson and Smith 1981 have shown a variety of shortcomings of fuzzy models. Following them, Partee and Kamp 1995 have argued at length against such models. The main problem for these models is that they generate wrong predictions. Consider, for example, the minimal-degree rule. This rule predicts that the typicality degree of, e.g., brown apples cannot be bigger in brown apple than in apple. Hence, this rule fails to predict the empirically well-established conjunction effect (Smith et al 1988) or fallacy (Tversky et al 1983), i.e. the finding that, according to speakers' intuitive judgments, both the typicality degree (Smith et al 1988) and the likelihood of category membership (Tversky et al 1983) of brown apples are bigger in brown apple than in apple.
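The composition rules in [3], and the problem just described, can be made concrete with a short sketch. The following Python fragment is purely illustrative (it is not part of Partee and Kamp's or any fuzzy theorist's formalism); the numerical degrees are the hypothetical values from [1]:

def complement(deg_p):
    # Rule 3.1: the degree in not-P is the complement of the degree in P.
    return 1.0 - deg_p

def conjunction(deg_p, deg_q):
    # Rule 3.2: the degree in P-and-Q is the minimum of the constituent degrees.
    return min(deg_p, deg_q)

def disjunction(deg_p, deg_q):
    # Rule 3.3: the degree in P-or-Q is the maximum of the constituent degrees.
    return max(deg_p, deg_q)

goose_in_bird = 0.7
print(round(complement(goose_in_bird), 2))    # 0.3: a goose is not a bird to degree 0.3

# The minimal-degree rule caps a conjunction at its lowest constituent degree,
# so an item's degree in 'brown apple' can never exceed its degree in 'apple';
# no conjunction effect can be represented:
print(conjunction(0.9, 0.5))                  # 0.5 (degree 0.9 in 'brown', 0.5 in 'apple')

# And, as discussed next, a contradiction does not come out as 0:
print(round(conjunction(goose_in_bird, complement(goose_in_bird)), 2))   # 0.3, not 0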

The minimal-degree rule is most problematic when it comes to contradictory and tautological predicates. Intuitively, the degree of all entities in P∧¬P and P∨¬P ought to be 0 and 1, respectively. But fuzzy models fail to predict this. For example, if a goose is a bird to degree 0.7, then according to the complement rule, a goose is not a bird to degree 0.3. Given this, the minimal-degree rule predicts that a goose is a bird and not a bird to degree 0.3, rather than to degree 0.

Another problem has to do with the fact that the degree function in these models is total, though knowledge about typicality is often partial. For example, if one bird sings and the other flies, which one is more typical? We cannot tell out of context. This problem highlights the need for more context dependency in the representation of typicality. Partee and Kamp 1995 have argued at length for the importance of this aspect. Yet, we will see in part 3 that their proposal is also insufficient in this respect.

A problem which usually goes unnoticed has to do with the complement rule. It is indeed true that the typicality orderings of negated predicates are essentially the reverse of the orderings of the predicates that are being negated (see, for instance, the findings reported in Smith et al 1988), yet exceptions to this rule are quite common. Why? Because negated predicates are often contextually restricted. For example, the set of non-birds is frequently assumed to consist only of animals. In such contexts, non-animals are intuitively assigned low typicality degrees both in the predicate bird and in the negated predicate non-bird (rather than a low degree in bird and a high degree in non-bird, as predicted by the complement rule). This judgment is not captured because the relevant contextual factors are not represented.

2.1.3 Intermediate summary

We saw that multiple truth values, or probability degrees, as means to indicate typicality degrees, are problematic in many respects. An alternative theory is the Supermodel Theory (Partee and Kamp 1995). This analysis uses the same types of mechanisms, namely a membership degree function cm, a prototype p, and a typicality degree function cP. However, it differs in two crucial respects. First, it replaces fuzzy logics with three-valued logics. Second, the typicality degrees are not always coupled with the membership degrees. With these two differences, the analysis is claimed to be significantly improved. However, while indeed improved in some respects, we will see in part 3 that this analysis is highly limited and problematic in other respects. In part 4 we will propose a novel analysis which completely abandons the use of membership degree functions, prototypes, and distance functions.

2.2 Supermodels

A supermodel M* consists of one partial model M, which I will call 'context' M. In M, denotations are only partially known. For example, the denotation of chair in a partial context M may consist of only one item, the prototypical chair, pchair. The denotation of non-chair may also consist of only one item which is very clearly not a chair, say the prototypical sofa, psofa. This means that in M we don't yet know whether anything else (an armchair, a stool, a chair with fewer than 4 legs, a chair without a back, a chair which is not used as a seat, a chair which is not of the normal size, etc.) is a chair or not. In addition, M is accompanied by a set T of total models (the supervaluations in van Fraassen 1969), i.e. a set of all the possibilities seen in M to specify the complete sets of chairs and non-chairs. In each t in T, each item is either in the denotation of chair or in the denotation of non-chair.

Figure 1: The context structure in a supermodel M*

Formally, a supermodel M* for a set of predicates A and a set of entities D is a tuple <M,T,m> such that:

[1] M is a partial model: predicates are associated with partial denotations in M, <[P]+M,[P]-M>. For example, if [chair]+M = {d1}, [chair]-M = {d3}, and d2 is in the gap, we don't yet know if d2 is a chair or not.

[2] T is a set of total models which are completions of M: predicates are associated with total denotations, which are monotonic extensions of their denotations in M. ∀t∈T, ∀P∈A:
2.1. Maximality: [P]+t ∪ [P]-t = D (denotations are total).
2.2. Monotonicity: [P]+M ⊆ [P]+t; [P]-M ⊆ [P]-t. E.g. in each t∈T, d2 is added to [chair]+t or [chair]-t.
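To make the definition concrete, here is a small illustrative sketch in Python, using the chair example above; it also anticipates the measure-based degrees defined in [3] and [4] below. For simplicity, m is taken to be the uniform proportion of total models, although the theory only requires some additive measure (so with a single gap entity the armchair gets degree 1/2 here, rather than the 1/3 mentioned in the text):

from itertools import product
from fractions import Fraction

# The partial model M: pchair is a known chair, psofa a known non-chair,
# and the armchair is in the gap. The negative denotation {psofa} is left
# implicit: psofa is never added to a completion's positive side.
pos_M = {"pchair"}
gap = ["armchair"]

# T: all total completions of M. Each gap entity is added either to the
# positive or to the negative denotation (Monotonicity and Maximality).
T = []
for choice in product([True, False], repeat=len(gap)):
    pos_t = set(pos_M) | {d for d, in_pos in zip(gap, choice) if in_pos}
    T.append(pos_t)

def cm(d):
    # Membership degree of d in 'chair': the proportion of total models t
    # such that d is in [chair]+t.
    return Fraction(sum(d in pos_t for pos_t in T), len(T))

print(cm("pchair"), cm("psofa"), cm("armchair"))   # 1 0 1/2

# Unlike in fuzzy models, contradictions and tautologies behave classically:
d = "armchair"
print(Fraction(sum((d in pos_t) and (d not in pos_t) for pos_t in T), len(T)))  # 0
print(Fraction(sum((d in pos_t) or (d not in pos_t) for pos_t in T), len(T)))   # 1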

Given this basic ontology, the membership degree of an individual d in a vague noun like chair is indicated by the size or measure of the set of total contexts in which d is a chair, m({t∈T: d∈[chair]+t}). For example, the prototypical chair, pchair, is a chair in all total possibilities, so its membership degree is 1. The prototypical sofa, psofa, is a chair in no possibility, so its membership degree is 0. If an armchair d is a chair in a third of the cases, its membership degree is 1/3, etc.:

[3] m is a measure function from sets of total models to real numbers between 0 and 1, i.e. a function which satisfies the following constraints (Partee and Kamp 1995, p. 153):
3.1 m(T) = 1;
3.2 m({}) = 0;
3.3 ∀T1,T2 ⊆ T s.t. T1 ⊆ T2: m(T2) = m(T1) + m(T2−T1); etc.

[4] The membership degree of d in P, cm(d,P), is given by the measure m of the set of total models in which d is P: cm(d,P) = m({t∈T: d∈[P]+t}), e.g. 1 = cm(d1,chair) > cm(d2,chair) > cm(d3,chair) = 0.

There is no doubt that this model is better suited to the representation of natural language than fuzzy models. For example, we now predict membership degrees 0 and 1 in contradictory and tautological predicates, respectively, as opposed to the prediction of the minimal-degree rule in fuzzy models (cf. 2.1). This is because for all total contexts t in T, it holds that no entity falls under P∧¬P, and all entities fall under P∨¬P. Thus, even if, say, a certain stool is a chair to degree 0.7 and not a chair to degree 0.3 (due to being regarded as a chair in 0.7 of the total contexts in T, and being regarded as a non-chair in the rest of T), it is a chair and not a chair to degree 0, and a chair or not a chair to degree 1.

2.3 The representation of typicality in the Supermodel Theory

2.3.1 Typicality in basic predicates

In this theory, a degree of membership and a degree of typicality are taken to be two separate things. The typicality degree of an entity in a predicate is represented by the entity's similarity to (or distance from) the predicate's prototype. Typicality and membership are assumed to be coupled only in vague nouns like chair. In sharp nouns like bird or grandmother, they may be dissociated. Thus:

[5] A predicate P is associated with a tuple <p,cm,cP> such that:
1. p is the prototype, the best possible P.
2. cm(d,P) is d's membership degree in P: the degree to which d is P. As explained in 2.2, it is given by the measure m of the set of total contexts in which d is P: cm(d,P) = m({t∈T: d∈[P]+t}).
3. cP(d,P) is d's typicality degree in P: d's distance from P's prototype.

How are the values of the typicality degree function cP(d,P) determined? Generally, they are given by the values of the membership function, cP ≈ cm: e.g. in chair, the more typical entities fall under [chair]+ in more of the total models t in T. However, Partee and Kamp distinguish between different predicate types in the following ways:

[6] Predicate types:
1. +/– Vague: The denotations of non-vague predicates like bird, unlike those of vague predicates like chair, are total already in M. That is, everything is either a bird or a non-bird. There is no gap: [bird]+M ∪ [bird]-M = D.
2. +/– Prototype: Predicates like tall or odd number, unlike bird, grandmother, red, etc., have no prototype (because there is no maximal tallness or oddness).
3. +/– Typicality-is-coupled-with-membership, cP ≈ cm (the original term is: +/– the-prototype-affects-the-denotation): In predicates like bird or grandmother, unlike predicates like chair, typicality and membership are separated (not coupled).
          –Prototype                        +Prototype (cm ≠ cP)    +Prototype (cm = cP)
+Vague    tall, wide, heavy, not red        adolescent, tall tree   red, chair, shy
–Vague    even, odd, inanimate, not a bird  bird, grandmother       ?

Table 1: Predicate types in Partee and Kamp's analysis

There are at least two reasons for the separation of typicality and membership in predicates like bird:

(1) Intuitively, an ostrich d is a bird even in M, i.e. cm(d,bird) = 1; but it is an atypical bird, i.e. cP(d,bird) < 1. Thus, cm ≠ cP.

(2) Intuitively, an ostrich is always a bird, i.e. for any entity d, the set of total contexts in which d is an ostrich, {t∈T: d∈[ostrich]+t}, is always a subset of the set of total contexts in which d is a bird, {t∈T: d∈[bird]+t}. So cm(d,ostrich) can never exceed cm(d,bird):

cm(d,ostrich) = m({t∈T: d∈[ostrich]+t}) ≤ m({t∈T: d∈[bird]+t}) = cm(d,bird)

But intuitively, d can be more typical of an ostrich than of a bird, so cP(d,ostrich) can be greater than cP(d,bird):

cP(d,ostrich) ≥ cP(d,bird)

Again, cm ≠ cP. Let us classify the fact that d can be more typical of an ostrich than of a bird, as stated in (2), under the name of the sub-type effect (Sassoon 2005).

2.3.2 Typicality in complex predicates

Recall the conjunction effect or fallacy, i.e. the intuitive judgment that, e.g., a brown apple is regarded as more typical, or more likely a member, in brown apple than in apple (see 2.1.2):

cP(d,brown apple) ≥ cP(d,apple)
This effect cannot be represented using Partee and Kamp's membership degree function cm(d,P). Why? Because in any total context in which an entity d is a brown apple, d is an apple, i.e. the set {t∈T: d∈[brown apple]+t} is always a subset of the set {t∈T: d∈[apple]+t}. Hence, the membership degree of d in brown apple can maximally reach d's degree in apple and not more:

cm(d,brown apple) = m({t∈T: d∈[brown apple]+t}) ≤ m({t∈T: d∈[apple]+t}) = cm(d,apple)

However, Partee and Kamp observe that modifiers like brown receive a distinct interpretation in each of the local contexts created by the noun they modify. For example, brown is interpreted differently when applied to apple, skin, shelf, dress, etc. Thus, Partee and Kamp propose to replace cm in modified nouns like brown apple by a new function, which may assign d a higher value than cm(d,apple) or cm(d,brown). The modified membership function for the modified noun brown apple, cm(d,brown/apple), is given by d's degree in brown, m(d,brown), minus a, the minimal brown degree that the measure function m assigns to an apple. This value is normalized by the distance between a, the minimal, and b, the maximal, brown degree assigned to apples. This normalization procedure ensures that the result ranges between 0 and 1:

[7] The modified membership function for modified nouns:
Let a and b be the minimal and maximal brown degrees among the apples in M, respectively:
cm(d,brown/apple) = (m(d,brown) − a) / (b − a)

For example, a brown apple may be assigned degree 0.9 in brown; the minimal brown degree existing among the apples may be 0, because some apples are not brown at all; the maximal brown degree existing among the apples may be 0.95, assuming that no apple is maximally brown. If so:

cm(d,brown/apple) = (0.9 − 0) / (0.95 − 0) ≈ 0.947

The value 0.947 indeed exceeds d's degree in brown, 0.9, and possibly also d's degree in apple, as desired. If, indeed, the proposed mechanism helped to capture the conjunction fallacy, it would seem that we could retain the idea that the typicality degrees in predicates like brown apple are coupled with the membership degrees, which in turn are indicated by the modified membership functions. However, we will now see that this is not the case.

3. Problems in the Supermodel Theory

The idea that measure functions which range over total contexts (supervaluations) can represent typicality has some fundamental problems.
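Before turning to the problems one by one, the modified membership function in [7] can be sketched in a few lines of illustrative Python (the numerical values are the hypothetical ones used in the text); the second call previews the failure discussed in 3.3 below:

def cm_modified(deg_in_modifier, a, b):
    # [7]: cm(d, brown/apple) = (m(d,brown) - a) / (b - a), where a and b are
    # the minimal and maximal brown degrees among the apples in M.
    return (deg_in_modifier - a) / (b - a)

# The worked example: d is brown to degree 0.9, some apples are not brown at
# all (a = 0), and no apple is maximally brown (b = 0.95):
print(round(cm_modified(0.9, a=0.0, b=0.95), 3))   # 0.947, exceeding 0.9 as desired

# The problem of 3.3: [7] orders brown apples by brownness alone. An apple of
# typicality degree 0.2 in 'apple' that is maximally brown (with a = 0 and
# b = 1 among the apples) still comes out as a brown apple to degree 1:
print(cm_modified(1.0, a=0.0, b=1.0))              # 1.0, intuitively wrong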

3.1 Typicality degrees of denotation members

The first problem has to do with the fact that the measure function m fails to account for the fact that denotation members are not necessarily associated with the maximal degree of typicality, 1; rather, they may take any degree of a whole range of typicality degrees. For example, within a certain context, I may consider three-legged seats with a back as chairs, but as less typical chairs than four-legged seats with a back.

This limitation of the measure function is particularly problematic in non-vague (sharp) nouns like bird. Even atypical examples like ostriches and penguins are known to be birds, i.e. already in M they are considered members in [bird]+M (Partee and Kamp 1995). The bird denotations are assumed to be completely specified, or in other words, not to vary across different total contexts. This is the standard way in which to represent the fact that predicates like bird are not, or are much less, vague than predicates like chair or tall. However, this is also the reason for which the measure function cannot indicate typicality in sharp predicates. Given that they are always known to be birds, the membership degree of atypical examples like ostriches and penguins in bird (i.e. the measure of the set of total contexts in which they are birds) is always 1. And for non-birds, whether butterflies and bats or stools and cows, since they are members in [bird]-M, their membership degree in bird is always 0. Intermediate typicality degrees in sharp nouns cannot be indicated using m. Since no other means to indicate them is given, i.e. no general mechanism to determine distance from the prototype is proposed, intermediate typicality degrees in sharp nouns are not accounted for. This is especially problematic given that the most prominent examples of the prototype theory are indeed sharp predicates.

3.2 The sub-type effect

Furthermore, the measure function m fails to predict the sub-type effect, namely, the intuition that the typicality of ostriches in ostrich exceeds their typicality in bird. A membership degree (or measure m) is never bigger in ostrich than in bird, because in any total context in which an entity is an ostrich, it is also a bird (see 2.3.1). This effect is identical to the so-called conjunction effect, but is found in lexical nouns, i.e. nouns without a modifier, like ostrich vs. bird.
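The point can be verified mechanically: since [ostrich]+t ⊆ [bird]+t in every total context t, any measure over total contexts preserves this ordering, whatever the completions look like. A small illustrative sketch with hypothetical completions:

from fractions import Fraction

# Four hypothetical total contexts; each maps a predicate to its positive
# denotation, respecting the constraint that every ostrich is a bird.
T = [
    {"ostrich": {"d1"}, "bird": {"d1", "d2"}},
    {"ostrich": {"d1"}, "bird": {"d1"}},
    {"ostrich": set(), "bird": {"d1"}},
    {"ostrich": set(), "bird": set()},
]

def cm(d, P):
    # Proportion of total contexts in which d falls under P.
    return Fraction(sum(d in t[P] for t in T), len(T))

for d in ("d1", "d2"):
    assert cm(d, "ostrich") <= cm(d, "bird")   # forced by the subset constraint

print(cm("d1", "ostrich"), cm("d1", "bird"))   # 1/2 3/4, never the reverse

For a sharp noun like bird, whose denotation is total already in M, the degrees moreover collapse to 0 or 1, which is exactly the problem of 3.1.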

Note that the modified membership function, which Partee and Kamp add to the model in order to capture the conjunction fallacy / effect (see 2.3.2), cannot help us here. Why? Because the minimal and maximal ostrich degrees in [bird]+M are 0 and 1: we can find both complete ostriches (of membership degree 1) and complete non-ostriches (of membership degree 0) among the birds. Consequently, cm(d,ostrich/bird) is identical to cm(d,ostrich):

cm(d,ostrich/bird) = (m(d,ostrich) − 0) / (1 − 0) = cm(d,ostrich)

Thus, we have to keep cm and cP separated in such lexical nouns. It is the values of cP which represent the intermediate typicality degrees and the sub-type effect / fallacy in bird. But, again, Partee and Kamp do not specify how exactly the values of cP are determined when cm and cP are dissociated. Thus, the sub-type effect in lexical nouns is not accounted for; in addition, the separation between cm and cP (in predicates like bird) forces us into an inelegant theory, which stipulates as primitives two unconnected sets of values for cm and cP. Finally, the typicality effects in basic and complex nouns are accounted for using separate measure functions (given in [5] in 2.3.1 and [7] in 2.3.2). But we would prefer an account using a single mechanism, given that certain complex nouns in English are basic lexical items in other languages. For example, 'male-nurse' translates into the basic noun ax in Hebrew.

3.3 The conjunction effect

Worse still, conjunction fallacies in modified nouns are also not dealt with correctly (see 2.3.2). Indeed, brown apples are allowed to have greater degrees in brown apple than in brown or in apple, as desired, but they are ordered only by how brown they are. This yields incorrect degrees. Intuitively, an apple of an unusual shape or size, which is therefore assigned, say, typicality degree 0.2 in apple, is considered an atypical brown apple even if it is maximally brown (of typicality and membership degree 1 in brown); it is not a maximally typical brown apple, or a brown apple to degree 1, as predicted by Partee and Kamp's analysis:

cm(d,brown/apple) = (m(d,brown) − a) / (b − a) = (1 − 0) / (1 − 0) = 1

Thus, assuming that the typicality degrees in brown apple are assigned by the modified degree function is incorrect. We have to assume that the typicality degrees in brown apple are assigned by another mechanism. For further empirical support for this argument, see Smith et al 1988.

There are many naturally occurring examples of utterances which refer to typicality in complex predicates. The following examples were found in a simple Google search on the Internet, and they contain references to typicality in negated and/or modified nouns:
1) What were some exercises you would do on a typical non-running day? I read that they are mainly variations of pushups and situps, but what exactly are...
2) ... there is one week where the format will be more typical of a non-seminar class...
3) Thought it [the interview] pretty much typical of a non-fan, non-entertainment, smart up-market British paper ... it gives you some sense of being there and imagine what it's like to interview a 'star'.
4) You counter with an anecdotal tale about a non-typical non-developer. How does your counter-argument apply to a typical non-developer?
5) ...her irritating non-performance is typical of a primarily young (read 'cheap') cast...
6) The music is typical of a non-CD game - that is to say, worthless. It's tinny and very electronic sounding.
Given these examples, we cannot dismiss the problems in predicting typicality in complex predicates on the grounds that typicality is inherently non-compositional. Though compositionality might be limited to some extent, we need an analysis which will more correctly predict speakers' intuitions about typicality in complex predicates when such intuitions exist.

3.4 Partial knowledge

Thus far, we have focused on problems related to the representation of the typicality effects in sharp predicates and in complex predicates. Let us now add to this picture another classical problem, concerning the representation of context dependency in the typicality judgments. This problem has to do with the fact that the measure functions (or the membership functions) are total (in every partial model M, every entity is assigned a degree in every predicate), though knowledge about typicality is often partial. If one bird sings and the other flies, which one is more typical? Which bird is more typical, an ostrich or a penguin? Many contexts are too partial to settle such questions. (Nor do speakers know every typicality feature in every partial context. For example, is 'in the home' typical of chairs?) The representation of knowledge about typicality needs to be more inherently context dependent and possibly partial. One way to do this is to define the typicality function so that it gives each entity a value in a predicate in each total context separately (like the interpretation function). In such a way, it would be possible that the typicality degree of an entity (just like its membership in a predicate) is unknown in a partial model M. It would be unknown if and only if this entity's degree varies across different total contexts. However, note that the measure function in Partee and Kamp 1995 is defined per supermodel (it is a measure of the proportion of valuations in T in which each item is a predicate member), so it is not easy to see how this measure function can be relativized to a total context.

3.5 Numerical degrees

Another problem, common both to fuzzy models and to supermodels, is that numerical degrees are not intuitive primitives. For example, why would a certain penguin have a degree 0.25, rather than, say, 0.242 in bird? Partee and Kamp notice this problem and sketch a general suggestion for a solution in terms of vagueness with regard to the correct measure function in each context. In this setting, a context is associated with a set of measure functions, such that we may only know in a certain context that, e.g., the degree of a penguin in bird ranges between 0.242 and 0.25. Working this idea out would have been a step towards the addition of more context dependency into the representation (cf. 3.4). However, Partee and Kamp admit that this is still complex and not quite a natural representation. It is true that in the languages of the world the comparative form more P than (or less P than) is derived from the predicate form P (which is assumed to stand for the concept P to a degree exceeding some standard) and not vice versa (Klein 1980; Kamp 1975). Nevertheless, conceptually, at least as far as typicality is concerned, representing the typicality ordering denoted by a typicality comparative (e.g. the intuition that penguins are less typical than ducks, which in turn are less typical than robins, etc.), and deriving the degrees from this ordering by some general strategy (such that, e.g., a penguin would have roughly zero typicality in bird), seems to be a more intuitive setting. Arguments can be given also for a difference between the linguistic and conceptual setting in predicates and comparatives without the typicality
operator (Fred Landman, personal communication), but these are beyond the scope of this paper.

3.6 Prototypes

The notion of a prototype is problematic in several respects. One well-known problem concerning this notion is that it is drastically unfruitful when it comes to compositionality, i.e. in predicting prototypes of complex concepts from the prototypes of their constituents (Partee and Kamp 1995; Hampton 1997). Consider negations: what would the prototype of non-bird be: a dog, a day, a number? Similarly for conjunctions: what would the male-nurse prototype be, given that a typical male-nurse may be both an atypical male and an atypical nurse (ibid.)?

Another problem has to do with predicates which lack a prototype. For example, there is no maximum tallness. But with no prototypes, the intuition that there are typical (and atypical) tall players, tall teenagers, tall women, etc., is not accounted for. The status prototypical, so it seems, ought to be given to an entity only within a context (a valuation): there are no context-independent entity-prototypes.

Finally, the Supermodel Theory assumes a complicated taxonomy of predicate types, with different mechanisms in their meaning (see Table 1 in 2.3.1): with or without a prototype; with a prototype that affects the denotation or that does not affect the denotation; with a vague or a non-vague meaning, etc. This is especially problematic when compositionality is addressed (Partee and Kamp 1995). For example, of what type are conjunctions of different predicate types, like tall bird, where tall is a vague predicate without a prototype, and bird is a non-vague predicate with a prototype?

3.7 Feature sets

The main idea in assuming entity prototypes is to avoid the notion of feature sets, which Partee and Kamp, following Osherson and Smith 1981 and Armstrong, Gleitman and Gleitman 1983, see as an ill-defined notion. Going back to Wittgenstein ([1953] 1968), feature-based models are the most widespread in the analysis of typicality. Whether feature sets are represented as frames (Smith et al 1988), networks (Murphy and Lassaline 1997), theories (Murphy and Medin 1985), vectors in conceptual spaces (Gärdenfors 2004) or otherwise, the main idea is that each feature is assigned a weight.
The typicality degree of, say, a robin in bird is indicated by the weighted mean of its degrees in the bird features: how well it scores in flies, sings, etc. The problem is that features alone do not form a sufficient account. Scholars still hardly agree about how the weight of a feature is determined. Worse still, we can hardly tell how entities' degrees in a feature are determined. We still need to know what a typicality degree is (Armstrong, Gleitman and Gleitman 1983).

Some scholars try to avoid the problematic notion of feature sets by assuming optimal-entity models. Whether in Prototype models (Partee and Kamp 1995; Osherson and Smith 1981) or in non-abstractionist Exemplar models (Brooks 1987; Shanks and St. John 1994), the main idea in these theories is that a typicality degree is indicated by degree of similarity to a representative entity. The problem in these theories is that similarity is, in many cases, measured by features. One can only categorize novel instances on the basis of their similarity to a known prototype or exemplar if there is some means of determining similarity, i.e. the connections that exist between the instances and the prototype or exemplar (Hampton 1997). It is for this reason, too, that, as we saw in 3.6, theories which stipulate prototypes or exemplars for each concept, without representing typicality features, fail to predict the connections that exist between the prototypes or exemplars of complex concepts and the prototypes or exemplars of their constituents. Finally, in eliminating the features from the analysis, the Supermodel Theory is silent with regard to the type of properties that speakers regard as typical of each predicate in a given context.

3.8 Conclusions of Part 3

The proposed measure functions fail to capture the fact that there exists a range of intermediate typicality degrees in denotation members. Hence, they fail to predict typicality in sharp predicates. This is a severe limitation, given that the most prominent examples of the prototype theory are indeed sharp predicates. In addition, the theory fails to correctly represent the conjunction and sub-type effects, despite the use of two separate mechanisms, namely, the measure function and its modified version. Ideally, we would like to represent these effects correctly, and if possible, we would like one mechanism to derive both the conjunction and sub-type effects, i.e. typicality in basic and complex predicates.

We need an improved analysis which, in addition to capturing the typicality effects in sharp and complex predicates, will capture the inherent context dependency of the typicality judgments and the gaps in these judgments. The analysis should leave context-independent prototypes out: the status prototypical ought to be given to an entity only within a context (valuation). Finally, the analysis ought to say exactly how the weight of a feature is determined and how degrees in a feature are determined, i.e. what a typicality degree is. Ideally, the basic primitive in the analysis will be the typicality ordering (the denotation of more / less typical than); numerical degrees will be derived from this ordering by some general strategy. In the next part, I propose a new model which, it is argued, improves upon the previous analysis regarding precisely these points.

4. My Proposal: Learning Models

So what does a typicality ordering stand for? I believe this ordering is no more than a side effect of the order in which we learn that entities fall under a predicate, say, bird. We encode this learning order in memory, either during acquisition, or even as adults, within a particular context, when we need to determine which birds a speaker is actually referring to (the contextually relevant or appropriate set of birds).

4.1 Learning Models

Learning models represent information growth. More precisely, they represent the order in which entities are categorized under, say, bird and non-bird. We start with a zero context, c0, where denotations are empty, and from there on, each context is followed by contexts in which more entities are added to the denotations. In a total context t, every entity is either in the negative or in the positive denotation of each predicate.
Figure 2: The contexts' structure in a Learning Model

For example, birdhood is normally determined first for robins and pigeons, later on for chickens and geese, and last for ostriches and penguins. Similarly, non-birdhood is determined earlier for cows than for bats or butterflies:
[bird]c0 ⊆ … ⊆ [bird]cj ⊆ … ⊆ [bird]cn ⊆ … ⊆ [bird]ts
Figure 3: An example of a branch in a Learning Model
Formally, I use the information structure called "Data Semantics" (Veltman 1984; Landman 1991). A learning model M* for a set of predicates A and a domain D is a tuple <C,≤> such that:

[1] C is a set of partial contexts: in each c in C, a predicate P is associated with partial positive and negative denotations: <[P]+c,[P]-c>.

[2] ≤ is a partial order on C:
1. c0 is the minimal element in C under ≤: ∀P∈A: [P]+c0 = [P]-c0 = ∅ (Denotations are empty in c0).
2. T is the set of maximal elements under ≤: ∀t∈T, ∀P∈A: [P]+t ∪ [P]-t = D (Denotations are maximal in T).
3. Monotonicity: ∀c1,c2∈C s.t. c1 ≤ c2, ∀P∈A: [P]+c1 ⊆ [P]+c2; [P]-c1 ⊆ [P]-c2.
4. Totality: ∀c∈C, ∃t∈T: c ≤ t (Every c has some maximal extension t).

I also assume that in c, we consider as P, in addition to directly given Ps (i.e. members in [P]+c), also indirectly given Ps, i.e. entities whose P-hood can be inferred on the basis of the information in c (see 4.4.2 and 5.2). Formally, the P-hood of an entity d can be inferred in c iff d belongs in [P]+t in any t above c. I call this extended denotation the super-denotation of P:

5. "Super-denotations": [P]c = ∩{[P]+t | t∈T, c≤t}; [¬P]c = ∩{[P]-t | t∈T, c≤t}
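A minimal sketch of this structure in Python, following the bird example of Figure 3 (the entities, the learning order, and the two total extensions are illustrative, and only the denotations of bird are tracked):

# A context is a pair ([bird]+c, [bird]-c); denotations grow monotonically
# from the empty zero context c0 up to the total contexts.
c0 = (set(), set())
c1 = ({"robin", "pigeon"}, {"cow"})           # birdhood learned first for robins
c2 = ({"robin", "pigeon", "chicken", "goose"}, {"cow", "butterfly"})

# Two total extensions of c2 over an 8-entity domain, disagreeing on the penguin:
t1 = ({"robin", "pigeon", "chicken", "goose", "ostrich", "penguin"},
      {"cow", "butterfly"})
t2 = ({"robin", "pigeon", "chicken", "goose", "ostrich"},
      {"cow", "butterfly", "penguin"})

# Monotonicity (clause [2].3): along each branch, denotations only grow.
for earlier, later in [(c0, c1), (c1, c2), (c2, t1), (c2, t2)]:
    assert earlier[0] <= later[0] and earlier[1] <= later[1]

# Super-denotation [bird]c2 (clause 5): entities that are birds in every total
# context above c2. The ostrich's birdhood is inferable in c2; the penguin's is not.
super_pos = t1[0] & t2[0]
print(sorted(super_pos))   # ['chicken', 'goose', 'ostrich', 'pigeon', 'robin']

On a branch through t1, the typicality ordering defined in 4.2 below then mirrors this learning order: the robin and the pigeon, learned first, come out as more typical birds than the chicken and the goose, which in turn are more typical than the ostrich and the penguin.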

4.2 The typicality ordering

Given this basic ontology, I propose that we consider d1 more typical of P than d2 in a context t if and only if: either the P-hood of d1 is established before the P-hood of d2 (i.e. in a context that precedes the context in which d2 is added to the positive denotation), or the non-P-hood of d2 is established before the non-P-hood of d1 (i.e. in a context that precedes the context in which d1 is added to the negative denotation). Formally, P's typicality ordering in t is the order in which entities are learnt to be P or ¬P in contexts under t:

[3] ∀t∈T: (<d1,d2> ∈ [≤P]+t) if and only if ∀c≤t: (d1∈[P]c → d2∈[P]c) & (d2∈[¬P]c → d1∈[¬P]c).
In any total t, d1 is equally or less (typical of) P than d2 iff in any context c under t, if d1 is P, d2 is P, and if d2 is ¬P, d1 is ¬P.

Entity pairs might be added to ≤P in c either on the basis of direct pointing at them as standing in the relation more typical of P, or on the basis of indirect inferences from the rest of our knowledge in c. That is, the extended typicality relation that holds between two entities in a partial context c can be formally defined using the supervaluation technique, as is usually done for propositions (van Fraassen 1969):

∀c∈C: (<d1,d2> ∈ [≤P]c) iff ∀t≥c: (<d1,d2> ∈ [≤P]+t).
In any partial c, d1 is equally or less (typical of) P than d2 iff in any total t above c, d1 is equally or less (typical of) P than d2.

Different ways to refer to ≤P differ in truth conditions. For instance, d1 may be more of a kibbutznik but less typical of a kibbutznik than d2 (if, say, d2 has left the kibbutz but still looks and behaves like a kibbutznik). Yet, I believe that we need not pose different definitional constraints on more P, more typical P and more relevant P. The difference between these three comparative phrases is pragmatic in nature. It is generally assumed that the comparative more P makes use of a semantic ordering dimension in the meaning of P (Kamp 1995; Bartsch 1984, 1986). Conversely, more typical (of a) P makes use of different, or additional, ordering properties, namely, criteria from world knowledge, not just semantic criteria. Finally, relevant P makes use of completely ad hoc properties, not just world knowledge or semantic criteria. The effect of the ordering criteria on the ordering relation (and of the ordering relation on the ordering criteria) will be further discussed in 4.9-4.10. At this point, note only that, as desired, a possibly different ordering relation may be associated with a predicate in each context. This much context dependency is required in order to capture the typicality effects correctly (for further discussion of this point, see 4.8). In the rest of part 4 we will see that a number of long-standing puzzles are now solved.

4.3 Deriving degrees

Numerical degrees are not directly given. The primitive notion is that of an ordering, which is more intuitive (cf. 3.5). However, numerical degrees can easily be derived, when needed, so that their ordering conforms to the typicality ordering. For instance, assuming the facts in context ts in Figure 3 above, and a small domain which consists of the six birds in the picture (a robin, a pigeon, a goose, a chicken, an ostrich and a penguin) and two non-birds (a butterfly and a cow), the robin would have degree 1 because everything, i.e. all 8 entities, is equally or less typical than it. The goose would have degree 6/8 because only 6 of the 8 entities are equally or less typical than it, and so on. Vagueness with regard to degrees (cf. 3.5) would be derived from gaps in the typicality ordering (see 4.8 below).

4.4 Intermediate typicality degrees for denotation members

4.4.1 Intermediate degrees

Recall that degrees of denotation members in Partee and Kamp's model were always maximal, i.e. 1. This is not the case in the current model. Rather, the earlier we learn that an entity is, e.g., a bird, the more typical we consider this entity to be. Therefore, we can now account for the typicality effects in sharp predicates, which were problematic for Partee and Kamp. The typicality ordering, or graded membership effect, results from the fact that, in
