
Discontinuous Seam-Carving for Video Retargeting


Matthias Grundmann¹,²  Vivek Kwatra²  Mei Han²  Irfan Essa¹
¹Georgia Institute of Technology, Atlanta, GA, USA   ²Google Research, Mountain View, CA, USA

Abstract

We introduce a new algorithm for video retargeting that uses discontinuous seam-carving in both space and time for resizing videos. Our algorithm relies on a novel appearance-based temporal coherence formulation that allows for frame-by-frame processing and results in temporally discontinuous seams, as opposed to geometrically smooth and continuous seams. This formulation optimizes the difference in appearance of the resultant retargeted frame to the optimal temporally coherent one, and allows for carving around fast moving salient regions. Additionally, we generalize the idea of appearance-based coherence to the spatial domain by introducing piece-wise spatial seams. Our spatial coherence measure minimizes the change in gradients during retargeting, which preserves spatial detail better than minimization of color difference alone. We also show that per-frame saliency (gradient-based or feature-based) does not always produce desirable retargeting results and propose a novel automatically computed measure of spatio-temporal saliency. As needed, a user may also augment the saliency by interactive region-brushing. Our retargeting algorithm processes the video sequentially, making it conducive for streaming applications.

1. Introduction

Video retargeting has gained significant importance with the growth of diverse devices (ranging from mobile phones, mobile gaming and video devices, TV receivers, internet video players, etc.) that support video playback with varying formats, resolutions, sizes, and aspect ratios. Video retargeting resizes the video to a new target resolution or aspect ratio, while preserving its salient content.

Recent approaches to video retargeting aim to preserve salient content and avoid direct scaling or cropping by removing "unwanted" or redundant pixels and regions [1, 13]. Such a removal (or carving) of redundant regions results in complex non-Euclidean transformations or deformations of image content, which can lead to artifacts in both space and time. These artifacts are alleviated by enforcing spatial and temporal consistency of salient content in the target video. In this paper, we propose an algorithm for video retargeting that is motivated by seam carving techniques [1, 13] and augments those approaches with several novel ideas.

Our treatment of video is significantly different than the surface carving approach of [13]. We observe that geometric smoothness of seams across the video volume, while sufficient, may not be necessary to obtain temporally coherent videos. Instead, we optimize for an appearance-based temporal coherence measure for seams. We also extend a similar idea to spatial seams, which allows them to vary by several pixels between adjacent rows (for vertical seams). Such a formulation affords greater flexibility than continuous seam removal. In particular, the seams can circumvent large salient regions by making long lateral moves and also jump location over frames if the region is moving across the frame (see Fig. 6a).

To improve the quality of spatial detail over seams as pixels are carved, we propose to use a spatial coherence measure for the visual error that gives greater importance to the variation in gradients as opposed to the gradients themselves. This improves upon the forward energy measure of [13]. We demonstrate the effectiveness of this formulation on image resizing applications as well.

Saliency contributes significantly to the outcome of any video retargeting algorithm. Avidan et al. [1] noted that no "single energy function performs well across all images". While we mostly rely on a simple gradient-based saliency in our examples, we also show results that use an alternative, fully automatic definition of saliency. This novel definition of saliency is based on the image-based approach of [11]. To achieve temporal coherence between frames, we segment the video into spatio-temporal regions and average the frame-based saliency over each spatio-temporal region. We also provide examples generated by user-supplied weighting of spatio-temporal regions. We employ the segmentation algorithm of [5], extended to video volumes [7], for computing spatio-temporal regions, but could have also used segmentations from [8, 12, 16]. In principle, our method is not limited to a single definition of saliency or a specific video segmentation algorithm. While the use of spatio-temporal saliency improves our results considerably, we will show that even with per-frame, gradient-based saliency our algorithm outperforms existing approaches.

An additional advantage of our resizing technique is that it processes the video sequentially, i.e. on a frame-by-frame basis, and therefore is scalable to arbitrarily long or streaming videos. This allows us to improve the computation time by a factor of at least four compared to the fastest reported numbers to date, and to achieve a performance of about two frames per second.

Figure 1: Six frames from the result of our retargeting algorithm applied to a sub-clip of "Apologize", © 2006 One Republic. Original frames on left, retargeted results on right. We use shot boundary detection to separate the individual shots before processing.

2. Related Work

The use of seam carving for image resizing was introduced by Avidan and Shamir [1] and later extended for video retargeting by Rubinstein et al. [13]. Seams are vertical or horizontal chains of pixels that are successively removed from or added to an image to change its width or height, respectively. To preserve content, seams are chosen as least-energy paths through the image. In video, seams are generalized to surfaces that carve through the spatio-temporal volume. Space-time surface carving is also used by Chen and Sen [2] for video summarization. An issue with space-time carving is the memory required for processing video volumes, which is usually addressed by approximation techniques: [2] carve the video in small chunks, while [13] take a banded multi-resolution approach; both use a graph cut algorithm to solve for the surface.

Seam carving is very effective but needs external saliency maps in cases where salient objects lack texture. Wolf et al. [19] present a video retargeting technique that combines automatic saliency detection with non-uniform scaling using global optimization. They compute a saliency map for each frame using image gradients as well as face and motion detection. In contrast, we treat the detection of saliency itself as an orthogonal problem. Primarily, we use per-frame gradient-based saliency similar to [1], but we also generate a temporally coherent saliency based on space-time regions derived from the image-based approach of [11]. We examine the difference between both saliency definitions in Fig. 10 and our video.

Other methods that use optimization for generating visual summaries include [15, 17, 18]. Optimization methods use constraints based on the desired target size; therefore, they need to be re-run for each desired size. In contrast, seam or surface carving approaches, such as our proposed algorithm and [1, 13], allow retargeting to the chosen size in real-time. Preventing aliasing artifacts in retargeting was recently addressed by [9] using a warping technique known as EWA splatting. While producing good results, the approach is mainly constrained to static cameras (e.g. line constraints are not tracked).

Gal et al. [6] present a feature-aware texture mapping technique that avoids distorting important features, supplied as user-specified regions, by applying non-uniform warping to the texture image. This is similar to our approach of using regions for saliency. However, our automatic segmentation-aided region selection method scales to video. For video segmentation, we build upon Felzenszwalb and Huttenlocher's graph-based image segmentation [5, 7]. However, other video segmentation techniques such as [12] could also be used.

Automatic pan-and-scan and smart cropping have been proposed by [3, 10, 4]. Recently, [14] introduced a method to find an optimal combination of cropping, non-isotropic scaling and seam carving for image retargeting w.r.t. a cost measure similar to [15]. The approach is extended to video by applying the method to key-frames and interpolating the operations between them. We demonstrate equivalent results using our approach and compare to [14].

3. Video Retargeting by Seam Removal

Our video retargeting algorithm resizes a video by sequentially removing seams from it. Seams are 8-connected paths of pixels with the property that each row (vertical seams) or each column (horizontal seams) is incident to exactly one pixel of the seam. Hence, removing or duplicating a vertical seam changes the width of a frame by exactly one column. Alternating N times between seam computation and removal for a w×h frame yields N disjoint seams, effectively computing a content-aware resize for 2N target sizes (w+N)×h, ..., (w+1)×h, w×h, ..., (w-N)×h. This is in contrast to optimization methods that solve for each target size independently. The pre-computed seams enable real-time content-aware resizing, as removal or duplication of seams only involves fast memory moves.

Figure 2: Traced x-t slice (at knee height) of a person running from left to right (from the Weizmann Action Recognition dataset), obtained using background subtraction. Panels: (a) x-t slice, (b) Frame 7, (c) Frame 36, (d) Frame 37. Every vertical surface is a seam in the x-t plane (red) and would intersect with the space-time shape of the person. In contrast, our temporally discontinuous solution (green) stays in front of the person (b) and jumps between adjacent frames (c)→(d) to overcome spatial distortion.
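To make the bookkeeping concrete, here is a minimal sketch (Python/NumPy; the function names, the H×W×3 frame layout, and the 50/50 blend are our own assumptions, not the paper's implementation) of removing or duplicating one pre-computed vertical seam, given one column index per row. Both operations are simple per-row memory moves, which is what makes switching between the pre-computed target sizes fast.

```python
import numpy as np

def remove_vertical_seam(frame, seam):
    """Remove one pixel per row from an H x W x 3 frame; seam[y] is the
    column index removed in row y. Returns an H x (W-1) x 3 frame."""
    h, w = frame.shape[:2]
    keep = np.ones((h, w), dtype=bool)
    keep[np.arange(h), seam] = False
    return frame[keep].reshape(h, w - 1, frame.shape[2])

def duplicate_vertical_seam(frame, seam):
    """Widen an H x W x 3 frame by one column, blending the duplicated
    seam pixel with its right neighbor to avoid a visible stripe."""
    h, w = frame.shape[:2]
    out = np.empty((h, w + 1, frame.shape[2]), dtype=frame.dtype)
    for y in range(h):
        x = seam[y]
        out[y, :x + 1] = frame[y, :x + 1]
        right = frame[y, min(x + 1, w - 1)]
        out[y, x + 1] = (0.5 * frame[y, x].astype(np.float64) +
                         0.5 * right.astype(np.float64)).astype(frame.dtype)
        out[y, x + 2:] = frame[y, x + 1:]
    return out
```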

Rubinstein et al. [13] presented an approach generalizing the seam in an image to a surface in the video volume by extending the image seam carving approach of [1]. The proposed solution for altering the width of the video is a vertical surface. The cross-sections of this surface form a vertical seam in every frame and a temporal seam in the x-t plane for any fixed y-location¹. Therefore, a fundamental property of the surface is that it can only move by at most one pixel-location between adjacent frames.

Consider the case of an object of interest moving from left to right over the video sequence, as shown in Fig. 2. Any vertical surface has to start to the right of the object and end to the left of it. In other words, the seam surface is bound to intersect with the object of interest and thereby distort it. This behavior is not limited to this particular case but occurs in general when there is considerable motion in the video perpendicular to the surface: the surface simply cannot keep up with the motion in the video.

In the context of seam carving, temporal coherence is established if adjacent resized frames are aligned like in the original video. If we optimize for temporal coherence alone, an obvious solution is to pick the same seam for every frame: all pixels that are neighbors along the temporal dimension in the original video will stay neighbors in the resized video. This is akin to non-uniform scaling, where selective columns may be removed (with blending) to shrink the video. However, this by itself will introduce spatial artifacts because, in contrast to non-uniform scaling, seams group in non-salient regions instead of being distributed evenly over the columns of a video.

We experimented with propagating seams based on tracking non-salient objects in the video. However, this does not necessarily lead to good results. In the case of vertical seams, if the tracked object does not cover the whole height of the video, the propagated seam will intersect with the background at a multitude of different positions, resulting in seams that get pulled apart in different directions over time (too fragmented).

¹Conversely, a horizontal surface forms a horizontal seam in every frame and a temporal seam in the y-t plane for any fixed x-location.

Surface carving relaxes the optimal temporal coherence criterion, i.e. replicating the same seam in all frames, by allowing the seam to vary smoothly over time. In other words, it imposes a geometric smoothness constraint upon the seam solution. While this may be a sufficient condition for achieving temporal coherence, it is not necessary. Instead, we show that it is sufficient (and less restrictive) to compute a seam in the current frame such that the appearance of the resulting resized frame is similar to the appearance obtained by applying the optimal temporally coherent seam. Optimizing against this criterion ensures temporally coherent appearance, but relieves the seams from being geometrically connected to each other across frames, leading to temporally discontinuous seams.

Our algorithm processes frames sequentially as follows. For each pixel in the current frame, we first determine the spatial and temporal coherence costs (S_C and T_C) as well as the saliency cost (S) of removing that pixel. The three cost measures are linearly combined into one measure M, with a weight ratio S_C : T_C : S of 5:1:2 for most sequences. In case of highly dynamic video content, we use a ratio of 5:0.2:2. Video clip classification based on optical flow magnitude could automate this choice. We then compute the minimum cost seams w.r.t. M for that frame using dynamic programming, similar to [1]. By removing or duplicating and blending N seams from each frame we can change the width of the video by N columns. Changing the height is achieved by transposing each frame, computing and removing seams, and transposing the resulting frames.
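To illustrate the per-frame step, the following sketch (Python/NumPy; a simplification of ours, not the paper's code) combines pre-computed cost maps with the stated 5:1:2 weights and finds a minimum-cost 8-connected vertical seam by dynamic programming. For brevity all costs are folded into a single per-pixel map; the full method additionally evaluates the pairwise transition cost S_v between a pixel and its chosen neighbor in the row above.

```python
import numpy as np

def combined_cost(S_c, T_c, S, w_spatial=5.0, w_temporal=1.0, w_saliency=2.0):
    """Linear combination M with the 5:1:2 ratio used for most sequences
    (5:0.2:2 for highly dynamic content)."""
    return w_spatial * S_c + w_temporal * T_c + w_saliency * S

def min_cost_vertical_seam(M):
    """Dynamic-programming search for a minimum-cost 8-connected vertical
    seam through a per-pixel cost map M (H x W). Returns one column index
    per row. All costs are folded into M here; the full method adds the
    pairwise transition cost S_v between adjacent rows at this point."""
    h, w = M.shape
    acc = M.astype(np.float64).copy()
    back = np.zeros((h, w), dtype=np.int32)
    for y in range(1, h):
        for x in range(w):
            lo, hi = max(0, x - 1), min(w, x + 2)
            best = lo + int(np.argmin(acc[y - 1, lo:hi]))
            back[y, x] = best
            acc[y, x] += acc[y - 1, best]
    seam = np.empty(h, dtype=np.int32)
    seam[-1] = int(np.argmin(acc[-1]))
    for y in range(h - 2, -1, -1):
        seam[y] = back[y + 1, seam[y + 1]]
    return seam
```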

3.1. Measuring Temporal Coherence

Assume we successively compute a seam S_i in every m×n frame F_i, i ∈ 1, ..., T. Our objective is to remove a seam from the current frame so that the resulting (m-1)×n frame R_i would be visually close to the most temporally coherent one, R_c, where R_c is obtained by reusing the previous seam S_{i-1} and applying it to the current frame F_i.

We use R_c to inform the process of selecting S_i through a look-ahead strategy. For every pixel (x, y), we determine how much the resulting resized frame R_i would differ from R_c if that pixel were removed. We use the sum-of-squared-differences (SSD) of the two involved rows as the measure of temporal coherence, T_c(x, y):

T_c(x, y) = \sum_{k=0}^{x-1} \left\| F^i_{k,y} - R^c_{k,y} \right\|^2 + \sum_{k=x+1}^{m-1} \left\| F^i_{k,y} - R^c_{k-1,y} \right\|^2.    (1)

The temporal coherence cost at a pixel reduces to a per-row difference accumulation that can be determined for every pixel before any seams are computed (see Fig. 3). This allows us to apply the original seam carving algorithm to a linear combination of saliency and temporal coherence. It turns out that temporal coherence integrates the gradient along the pixels across which the seam jumps between frames. This is desirable because it means that seams can move more freely in homogeneous regions. Eq. 1 can be efficiently computed using two m×n integral images: the left sum in Eq. 1 is represented recursively by I^l_{0,y} = 0, I^l_{x+1,y} = I^l_{x,y} + \|F^i_{x,y} - R^c_{x,y}\|^2, and the right sum by I^r_{m-1,y} = 0, I^r_{x-1,y} = I^r_{x,y} + \|F^i_{x,y} - R^c_{x-1,y}\|^2, resulting in T_c = I^l + I^r.

Figure 3: The previous seam S_{i-1} (red) is applied to the current frame F_i. Removing pixel B results in the row ACDEF. The optimal temporally coherent seam removes pixel F, so that R_c would contain ABCDE. The temporal coherence cost for pixel B is |C-B| + |D-C| + |E-D| + |F-E|, which is the SSD between the two rows as well as the sum of gradients from B to F. Original frame from The Duchess, © 2008 Paramount Pictures.

3.2. Measuring Spatial Coherence

Our look-ahead strategy for measuring temporal coherence may also be applied to the spatial domain. Here, the question is how much spatial error will be introduced after removing a seam. The basis of this idea is similar to Rubinstein et al.'s [13] proposed forward energy. However, our formulation leads to a more general model, i.e. piecewise seams, and is based not on the introduced intensity variation but on the variation in the gradient of the intensity.

We motivate our spatial coherence measure by examining several different cases in Fig. 4. In (a), there is a step between A and B as represented by the color difference. Removing B yields AC, which exhibits the same step as before, hence no detail is lost². On the other hand, in (b), high frequency detail will be lost on removing B. Removing B in (c) compacts the linear ramp, which is the desired behavior as it compresses the local neighborhood without significantly changing its appearance. In each of these cases, the cost of removing B is well represented by the change in gradient, which is what we use as our measure of spatial coherence, instead of the change in intensity.

²Rubinstein et al.'s forward energy is expressed as a difference in intensity and would be large in this case.

Figure 4: (See in color.) Spatial error if pixel B is removed: (a) no detail lost, (b) detail lost, (c) linearity preserved.

Figure 5: Spatial coherence costs. (a) Removing an interior pixel, E w.r.t. A. The bottom row DEF becomes DF; therefore the intensity difference before removing E was |D-E| + |E-F| and is |D-F| afterwards. Between the two rows, the intensity difference was |A-D| and |B-E| and is |B-D| afterwards. (b) Removing a border pixel, here D w.r.t. B. In the bottom row, |D-E| becomes |E-F|. (c) Summed spatial transition cost for piecewise seams. Consider the transition A → H. We accumulate the change in LHS gradient magnitudes before (dotted blue) and after (dashed red) removal (order: left to right). We also consider the symmetric case by accumulating the change in RHS gradient magnitudes before (solid orange) and after (dashed red) removal.

Our spatial coherence measure S_c = S_h + S_v consists of two terms, which quantify the error introduced in the horizontal and vertical (including diagonal) directions, respectively, by the removal of a specific pixel. Specifically, S_h and S_v are designed to measure the change in gradients caused by the removal of the pixel. S_h only depends on the pixel in question and in some sense adds to its saliency, while S_v depends upon the pixel and its potential best seam neighbor in the row above. Therefore, S_v defines a spatial transition cost between two pixels in adjacent rows. S_h is defined such that it is zero for cases (a) and (c) in Fig. 4 and large for case (b). The equations for interior pixels (E in Fig. 5a) and border pixels (D in Fig. 5b) are slightly different, but both measure changes in horizontal gradient magnitude:

S_h(E) = |D - E| + |E - F| - |D - F|   (Fig. 5a), and
S_h(D) = | |D - E| - |E - F| |   (Fig. 5b).

We define S_v to measure the change in vertical gradient magnitudes when transitioning between a pair of pixels in adjacent rows. We treat the involved pixels in a symmetric manner to avoid giving undue preference to diagonal neighbors. Hence, S_v depends on whether the top neighbor of the pixel in question (say E in Fig. 5a) is its left (A), center (B), or right (C) neighbor. Fig. 5a corresponds to S_v(E, A), where:

S_v(E, B) = 0
S_v(E, A) = | |A - D| - |B - D| | + | |B - E| - |B - D| |
S_v(E, C) = | |C - F| - |B - F| | + | |B - E| - |B - F| |.
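A sketch of these per-pixel terms for a grayscale frame (Python/NumPy; the array layout, border handling and names are our own assumptions):

```python
import numpy as np

def spatial_coherence_terms(F):
    """Gradient-variation spatial costs for a grayscale frame F (H x W),
    indexed [y, x]. Returns per-pixel S_h and the transition costs S_v for
    a left, center or right top neighbor. Border columns are left at 0/inf
    for brevity; the paper uses the border formula | |D-E| - |E-F| |."""
    H, W = F.shape
    F = F.astype(np.float64)

    # S_h for an interior pixel E with left neighbor D and right neighbor Fr.
    S_h = np.zeros((H, W))
    D, E, Fr = F[:, :-2], F[:, 1:-1], F[:, 2:]
    S_h[:, 1:-1] = np.abs(D - E) + np.abs(E - Fr) - np.abs(D - Fr)

    # S_v(E, top neighbor): change in vertical gradient magnitude.
    S_v_center = np.zeros((H, W))          # S_v(E, B) = 0
    S_v_left = np.full((H, W), np.inf)     # transition from top-left A
    S_v_right = np.full((H, W), np.inf)    # transition from top-right C
    A, B, C = F[:-1, :-2], F[:-1, 1:-1], F[:-1, 2:]    # row above E
    Dd, Ee, Ff = F[1:, :-2], F[1:, 1:-1], F[1:, 2:]    # row containing E
    S_v_left[1:, 1:-1] = (np.abs(np.abs(A - Dd) - np.abs(B - Dd)) +
                          np.abs(np.abs(B - Ee) - np.abs(B - Dd)))
    S_v_right[1:, 1:-1] = (np.abs(np.abs(C - Ff) - np.abs(B - Ff)) +
                           np.abs(np.abs(B - Ee) - np.abs(B - Ff)))
    return S_h, (S_v_left, S_v_center, S_v_right)
```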

Piecewise Spatial Seams: We have shown that in order to achieve temporal coherence, a temporally smooth solution is not necessary; the appearance-based measure T_c is sufficient. A natural generalization of this approach is to apply a similar idea to the spatial domain, which would lead to discontinuous spatial seams. For this purpose, we generalize our spatial coherence cost, particularly the transition cost S_v, to an accumulated spatial transition cost that allows a pixel to consider not just its three neighbors in the row above but all pixels in that row. An example is shown in Fig. 5c. For a pixel (x_b, y) in the bottom row, the summed spatial transition cost to pixel (x_a, y-1) in the top row (for the case x_a < x_b) is

S_v(x_b, x_a, y) = \sum_{k=x_a}^{x_b-1} \left| G^v_{k,y} - G^d_{k,y} \right| + \sum_{k=x_a+1}^{x_b} \left| G^v_{k,y} - G^d_{k-1,y} \right|,

where G^v_{k,y} = |F_{k,y} - F_{k,y-1}| is the vertical gradient magnitude between pixel (k, y) and its top neighbor, while G^d_{k,y} = |F_{k,y} - F_{k+1,y-1}| is its diagonal gradient magnitude with the top-right neighbor. The diagonal terms appear because previously diagonal gradients become vertical gradients after seam removal. For the example in Fig. 5c, the first term in the equation above will be |AE - BE| + |BF - CF| + |CG - DG|, where AE is shorthand for |A - E|. The cost for the case x_a > x_b may be defined similarly, while S_v(x, x, y) = 0. In practice, the optimal neighbor x_a typically lies in a window of ~15 pixels around x_b, allowing us to reduce the computational cost from O(m) to O(1). Another effect of limiting the search window is that we implicitly enforce seams with a limited number of piecewise jumps, in contrast to a set of totally disconnected pixels.
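A sketch of the windowed accumulated transition cost (Python/NumPy; only the x_a < x_b branch is written out, and the window size and names are our own choices):

```python
import numpy as np

def piecewise_transition_cost(F, y, x_b, window=15):
    """Summed spatial transition cost S_v(x_b, x_a, y) between pixel
    (x_b, y) and candidate top-row pixels (x_a, y-1) within a +/- window,
    for the x_a < x_b branch (the x_a > x_b case is symmetric).
    F is a grayscale frame indexed [y, x]; requires y >= 1."""
    W = F.shape[1]
    F = F.astype(np.float64)
    # Vertical and diagonal (top-right) gradient magnitudes along row y.
    G_v = np.abs(F[y, :] - F[y - 1, :])          # G_v[k] = |F(k,y) - F(k,y-1)|
    G_d = np.abs(F[y, :W - 1] - F[y - 1, 1:])    # G_d[k] = |F(k,y) - F(k+1,y-1)|

    costs = {x_b: 0.0}                            # S_v(x, x, y) = 0
    for x_a in range(max(0, x_b - window), x_b):
        left = np.sum(np.abs(G_v[x_a:x_b] - G_d[x_a:x_b]))
        right = np.sum(np.abs(G_v[x_a + 1:x_b + 1] - G_d[x_a:x_b]))
        costs[x_a] = left + right
    return costs
```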

Fig. 6 shows examples of both temporally discontinuous and piecewise spatial seams. Fig. 7 demonstrates the effectiveness of our spatial coherence cost in preserving detail. Fig. 9 shows comparisons with image resizing results of [13] (examples from their web page), which use their forward energy measure. Fig. 8 shows a similar comparison for a video example (also from their paper).

Figure 6: (a) Temporally discontinuous seams. The camera pans to the right. The new seam (green) jumps to the new redundant content on the right and avoids introducing artifacts resulting from having to move smoothly through the whole frame. From Sweeney Todd, © 2007 Paramount Pictures. (b) Piecewise spatial seams (here with a neighborhood of 11 pixels) have the freedom to carve around details and therefore prevent artifacts. From The Dark Knight, © 2008 Warner Bros. Pictures.

Figure 7: Effect of the spatial coherence measure S_c. (a) Our algorithm with S_c (without piecewise seams). (b) Our algorithm without S_c (but with [13]'s forward energy); one plane is clearly distorted. (c) Our implementation of [19]. (d) [13]'s result. Original frame from Valkyrie, © 2007 MGM.

Figure 8: Video retargeting comparison for gradient-based saliency. Shown is a single frame from a highway video (top). Our result (bottom-right) is able to preserve the shape of the cars and poles better than [13]'s result (bottom-left). Even the plate on the truck saying "Yellow" is still readable. See the accompanying video for the complete result.


Figure 9: Image retargeting results. The top row shows the original images. In the bottom row, images labeled A are [13]'s results, while images labeled B are our results using the novel gradient-variation based spatial coherence cost. In the pliers example (left), our result better respects the curvature in the handle's shape. For Ratatouille, © 2007 Walt Disney Pictures (middle), and the snow scene (right), the straight edges are better preserved in our result (shown zoomed in).

4. Automatic spatio-temporal saliency

There are cases where per-frame gradient-based saliency³ is not sufficient. We can employ higher-level techniques such as face detection, motion cues or learned saliency [11], but a major challenge remains in the temporal coherence required for video retargeting. In face detection, for example, the bounding boxes around faces might change considerably between frames, or faces might even be missed in several frames.

We are interested in designing an automatic saliency measure that is temporally coherent as well as aligned with the outlines in the video. The latter requirement is motivated by the fact that local saliency measures do not capture higher-level context and are inherently sensitive to noise. Therefore, we propose to average external per-frame saliency maps over spatio-temporal regions to address both issues. We obtain spatio-temporal regions for video by extending [5]'s graph-based image segmentation to video [7], but any other video segmentation method could be used as well. We build a 3D graph using a 26-neighborhood in space-time with edge weights based on color difference. We then apply the graph-segmentation algorithm to obtain spatio-temporal regions. The effect of applying our method to frame-based saliency maps is shown in Fig. 10.
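The averaging step itself is straightforward; a sketch assuming precomputed per-frame saliency maps and per-pixel space-time segment labels from some video segmentation (Python/NumPy, our own interface):

```python
import numpy as np

def region_averaged_saliency(saliency, labels):
    """Average per-frame saliency over spatio-temporal segments.
    `saliency` and `labels` have shape (T, H, W); labels[t, y, x] is the
    integer id of the space-time region containing the pixel (e.g. from a
    hierarchical graph-based video segmentation). Every pixel receives the
    mean saliency of its region, yielding temporally coherent saliency."""
    flat_labels = labels.ravel()
    flat_sal = saliency.ravel().astype(np.float64)
    n = int(flat_labels.max()) + 1
    sums = np.bincount(flat_labels, weights=flat_sal, minlength=n)
    counts = np.bincount(flat_labels, minlength=n)
    means = sums / np.maximum(counts, 1)
    return means[flat_labels].reshape(saliency.shape)
```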

If the underlying frame-based saliency method fails to detect salient content in a majority of frames, the spatio-temporal smoothing fails as well. In this case we offer a user interface that allows highlighting salient and non-salient regions by simple brush strokes, which are then automatically tracked over multiple frames through the spatio-temporal regions (see Fig. 11).

³We use the sum of absolute values of the pixel's gradients in our work.

Figure 10: Effect of our spatio-temporal saliency. Left column: Saliency maps computed based on [11] for adjacent frames (top/bottom) independently (white = salient content). Notice the abrupt changes in the face, coat and right brick wall. Middle column: Saliency averaged over spatio-temporal regions results in smooth variations across frames. Right column: Effect on video retargeting. Top uses spatio-temporal saliency, bottom uses gradient-based saliency. Original frame: 88 Minutes, © 2007 TriStar Pictures.

Figure 11: The user selects regions in a single frame (a) by roughly brushing over objects of interest (indicated by the dashed line). These regions are automatically extrapolated to other frames (b) of the video. See the accompanying video. Original frame from 88 Minutes, © 2007 TriStar Pictures.

5. Results

We demonstrate our results for video retargeting based on gradient-based saliency and spatio-temporal saliency (automatic as well as user-selected) in the accompanying video. Fig. 12 shows comparisons to other techniques for a highly dynamic video. Fig. 1 and Fig. 13 (top three rows) show frames from example videos that were retargeted using gradient-based saliency. Fig. 13 (bottom row) and Fig. 14 were retargeted using user-selected regions as shown. In both cases it took less than 10 seconds to select the regions.

Figure 12: Comparison to [9] and [14]. The content is highly dynamic (an athlete performing a 720° turn and a fast moving camera). In [9], the background gets squished on the left, the waterfront at the bottom gets distorted, and the result is less sharp overall compared to our result. The approach of [14] distorts the head and essentially crops the frame, while our algorithm compresses the background.

Our approach provides the user control over determining what regions to carve in case automatic approaches fail. Fig. 14 demonstrates the usefulness of user-selected regions for non-salient content. Fig. 15 shows that we can achieve results comparable to, and with sharper detail than, [14]; we only used per-frame gradient-based saliency in this case.

6. Conclusion and Limitations

We have presented a novel video retargeting algorithm based on carving discontinuous seams in space and time that exhibits improved visual quality, affords greater flexibility, and is scalable for large videos. We achieve 2 fps on 400x300 video compared to 0.3-0.4 fps for [13] and 0.5 fps for our implementation of [19]. We have presented the novel idea of using spatio-temporal regions for automatic or user-guided saliency. We have also demonstrated the benefits of our novel gradient-variation based spatial coherence measure in preserving detail.

We can handle videos with long shots as well as streaming videos using frame-by-frame processing. However, if spatio-temporal saliency is also employed, then the video length is limited by the underlying video segmentation algorithm, which in our case is ~30-40 seconds.

Fast-paced actions or highly-structured scenes might have little non-salient content. In these cases, just like other approaches, our video retargeting might produce unsatisfactory results, as shown in our accompanying video.

The sequential nature of our video retargeting algorithm can occasionally cause the seam in the initial frames to be sub-optimal w.r.t. the later frames. This can sometimes cause several seams to jump their location across time in the same frame, which leads to a visible temporal discontinuity. However, this problem can be alleviated by looking up the saliency information forward in time (around 5 frames) and averaging it with the current saliency.

References

[1] S. Avidan and A. Shamir. Seam carving for content-aware image resizing. ACM SIGGRAPH, 2007.
[2] B. Chen and P. Sen. Video carving. In Eurographics 2008, Short Papers, 2008.
[3] T. Deselaers, P. Dreuw, and H. Ney. Pan, zoom, scan: time-coherent, trained automatic video cropping. In IEEE CVPR, 2008.
[4] X. Fan, X. Xie, H.-Q. Zhou, and W.-Y. Ma. Looking into video frames on small displays. In ACM MULTIMEDIA, 2003.
[5] P. F. Felzenszwalb and D. P. Huttenlocher. Efficient graph-based image segmentation. Int. J. Comput. Vision, 59(2), 2004.
[6] R. Gal, O. Sorkine, and D. Cohen-Or. Feature-aware texturing. In Eurographics, 2006.
[7] M. Grundmann, V. Kwatra, M. Han, and I. Essa. Efficient hierarchical graph-based video segmentation. In IEEE CVPR, 2010.
[8] S. Khan and M. Shah. Object based segmentation of video using color, motion and spatial information. In IEEE CVPR, 2001.
[9] P. Krähenbühl, M. Lang, A. Hornung, and M. Gross. A system for retargeting of streaming video. In ACM SIGGRAPH ASIA, 2009.
[10] F. Liu and M. Gleicher. Video retargeting: automating pan and scan. In ACM MULTIMEDIA, 2006.
[11] T. Liu, J. Sun, N.-N. Zheng, X. Tang, and H.-Y. Shum. Learning to detect a salient object. In IEEE CVPR, 2007.
[12] S. Paris. Edge-preserving smoothing and mean-shift segmentation of video streams. In ECCV, 2008.
[13] M. Rubinstein, A. Shamir, and S. Avidan. Improved seam carving for video retargeting. In ACM SIGGRAPH, 2008.
[14] M. Rubinstein, A. Shamir, and S. Avidan. Multi-operator media retargeting. In ACM SIGGRAPH, volume 28, 2009.
[15] D. Simakov, Y. Caspi, E. Shechtman, and M. Irani. Summarizing visual data using bidirectional similarity. In IEEE CVPR, 2008.
[16] J. Wang, B. Thiesson, Y. Xu, and M. Cohen. Image and video segmentation by anisotropic kernel mean shift. In ECCV, 2004.
[17] Y.-S. Wang, C.-L. Tai, O. Sorkine, and T.-Y. Lee. Optimized scale-and-stretch for image resizing. ACM SIGGRAPH ASIA, 2008.
[18] L. Y. Wei, J. Han, K. Zhou, H. Bao, B. Guo, and H. Y. Shum. Inverse texture synthesis. In ACM SIGGRAPH, 2008.
[19] L. Wolf, M. Guttmann, and D. Cohen-Or. Non-homogeneous content-driven video-retargeting. In IEEE ICCV, 2007.

Figure 13: Video retargeting results. Original frame on left, retargeted result(s) on right. The top three rows show results obtained by our discontinuous seam carving computed on gradient-based saliency. The bottom row shows video retargeted using user-selected regions (marked in green; the complexity is caused by segmentation errors due to blocking artifacts). Original frames from: The Duchess, © 2008 Paramount Pictures (1st row), Sweeney Todd, © 2007 Paramount Pictures (3rd row), The Dark Knight, © 2008 Warner Bros. Pictures (4th row).

Figure 14: Sometimes it is vital to preserve non-salient objects because their removal introduces unpleasant motion. Result A (b) removes the white pillar because it is marked non-salient by the saliency map (a). If we constrain the solution by user-selected regions (c), the pillar is preserved and the outcome is temporally coherent (Result B, d). Please see the accompanying video. Compared to [13] (e), our result does not squish the actor or introduce a bump in the pillar. Original frame from No Country for Old Men, © 2007 Miramax Films.

Figure 15: Comparison to [14]. The original image A is resized by the method of [14] using a combination of seam carving, cropping and non-isotropic scaling (B). We achieve similar results (C) using our seam carving alone applied to simple gradient-based saliency. Because we avoid scaling and cropping, our results have sharper details (see zoomed-in portion).
