Performance Capture from Sparse Multi-view Video

Edilson de Aguiar*   Carsten Stoll*   Christian Theobalt†   Naveed Ahmed*   Hans-Peter Seidel*   Sebastian Thrun†

*MPI Informatik, Saarbruecken, Germany    †Stanford University, Stanford, USA

Figure 1: A sequence of poses captured from eight video recordings of a capoeira turn kick. Our algorithm delivers spatio-temporally coherent geometry of the moving performer that faithfully captures both the time-varying surface detail and the details of his motion.

Abstract

This paper proposes a new marker-less approach to capturing human performances from multi-view video. Our algorithm can jointly reconstruct spatio-temporally coherent geometry, motion and textural surface appearance of actors that perform complex and rapid moves. Furthermore, since our algorithm is purely mesh-based and makes as few prior assumptions as possible about the type of subject being tracked, it can even capture performances of people wearing wide apparel, such as a dancer wearing a skirt. To serve this purpose, our method efficiently and effectively combines the power of surface- and volume-based shape deformation techniques with a new mesh-based analysis-through-synthesis framework. This framework extracts motion constraints from video and makes the laser scan of the tracked subject mimic the recorded performance. Small-scale time-varying shape detail is also recovered by applying model-guided multi-view stereo to refine the model surface. Our method delivers captured performance data at a high level of detail, is highly versatile, and is applicable to many complex types of scenes that could not be handled by alternative marker-based or marker-free recording techniques.

CR Categories: I.3.7 [Computer Graphics]: Three-Dimensional Graphics and Realism; I.4.8 [Image Processing and Computer Vision]: Scene Analysis

Keywords: performance capture, marker-less scene reconstruction, multi-view video analysis

1 Introduction

The recently released photo-realistic CGI movie Beowulf [Paramount 2007] provides an impressive foretaste of the way many movies will be produced and displayed in the future. In contrast to previous animation movies, the goal was not the creation of a cartoon-style appearance but a photo-realistic display of the virtual sets and actors. Today, it still takes a tremendous effort to create authentic virtual doubles of real-world actors. It remains one of the biggest challenges to capture human performances, i.e. motion and possibly dynamic geometry of actors in the real world, in order to map them onto virtual doubles. To measure body and facial motion, the studios resort to marker-based optical motion capture technology. Although this delivers data of high accuracy, it is still a stopgap. Marker-based motion capture requires a significant setup time, expects subjects to wear unnatural skin-tight clothing with optical beacons, and often necessitates many hours of manual data cleanup. It therefore does not allow for what both actors and directors would actually prefer: to capture human performances densely in space and time, i.e. to jointly capture accurate dynamic shape, motion and textural appearance of actors in arbitrary everyday apparel.

In this paper, we therefore propose a new marker-less dense performance capture technique. From only eight multi-view video recordings of a performer moving in his normal, and even loose or wavy, clothing, our algorithm is able to reconstruct his motion and his spatio-temporally coherent time-varying geometry (i.e. geometry with constant connectivity) that captures even subtle deformation detail. Abandoning any form of optical marking also makes simultaneous shape and texture acquisition straightforward. Our method achieves a high level of flexibility and versatility by explicitly discarding any traditional skeletal shape or motion parametrization and by posing performance capture as deformation capture. For scene representation we employ a detailed static laser scan of the subject to be recorded. Performances are captured in a multi-resolution way: first, the global model pose is inferred using a lower-detail model, Sect. 5, and thereafter smaller-scale shape and motion detail is estimated based on a high-quality model, Sect. 6.

Global pose capture employs a new analysis-through-synthesis procedure that robustly extracts from the input footage a set of position constraints. These are fed into an efficient physically plausible shape deformation approach, Sect. 4, in order to make the scan mimic the motion of its real-world equivalent. After global pose recovery in each frame, a model-guided multi-view stereo and contour alignment method reconstructs finer surface detail at each time step. Our results show that our approach can reliably reconstruct very complex motion exhibiting speed and dynamics that would even challenge the limits of traditional skeleton-based optical capturing approaches, Sect. 7.

To summarize, this paper presents a new video-based performance capture method

• that passively reconstructs spatio-temporally coherent shape, motion and texture of actors at high quality;

• that draws its strength from an effective combination of new skeleton-less shape deformation methods, a new analysis-through-synthesis framework for pose recovery, and a new model-guided multi-view stereo approach for shape refinement;

• and that exceeds the capabilities of many previous capture techniques by allowing the user to record people wearing loose apparel and people performing fast and complex motion.

2 Related Work

Previous related work has largely focused on capturing sub-elements of the sophisticated scene representation that we are able to reconstruct.

Marker-based optical motion capture systems are the workhorses in many game and movie production companies for measuring motion of real performers [Menache 1999]. Despite their high accuracy, their very restrictive capturing conditions, which often require the subjects to wear skin-tight body suits and reflective markings, make it infeasible to capture shape and texture. Park et al. [2006] try to overcome this limitation by using several hundred markers to extract a model of human skin deformation. While their animation results are very convincing, manual mark-up and data cleanup times can be tremendous in such a setting, and generalization to normally dressed subjects is difficult. In contrast, our marker-free algorithm requires far less setup time and enables simultaneous capture of shape, motion and texture of people wearing everyday apparel.

Marker-less motion capture approaches are designed to overcome some restrictions of marker-based techniques and enable performance recording without optical scene modification [Moeslund et al. 2006; Poppe 2007]. Although they are more flexible than intrusive methods, it remains difficult for them to achieve the same level of accuracy and the same application range. Furthermore, since most approaches employ kinematic body models, it is hard for them to capture motion, let alone detailed shape, of people in loose everyday apparel. Some methods, such as [Sand et al. 2003] and [Balan et al. 2007], try to capture more detailed body deformations in addition to skeletal joint parameters by adapting the models closer to the observed silhouettes, or by using captured range scan data [Allen et al. 2002]. But both algorithms require the subjects to wear tight clothes. Only a few approaches, such as the work by [Rosenhahn et al. 2006], aim at capturing humans wearing more general attire, e.g. by jointly relying on kinematic body and cloth models. Unfortunately, these methods typically require hand-crafting of shape and dynamics for each individual piece of apparel, and they focus on joint parameter estimation under occlusion rather than accurate geometry capture.

Other related work explicitly reconstructs highly accurate geometry of moving cloth from video [Scholz et al. 2005; White et al. 2007]. However, these methods require visual interference with the scene in the form of specially tailored color patterns on each piece of garment, which renders simultaneous shape and texture acquisition infeasible.

A slightly more focused but related concept of performance capture is put forward by 3D video methods, which aim at rendering the appearance of reconstructed real-world scenes from new synthetic camera views never seen by any real camera. Early shape-from-silhouette methods reconstruct rather coarse approximate 3D video geometry by intersecting multi-view silhouette cones [Matusik et al. 2000; Gross et al. 2003]. Despite their computational efficiency, the moderate quality of the textured coarse scene reconstructions often falls short of production standards in the movie and game industry. To boost 3D video quality, researchers experimented with image-based methods [Vedula et al. 2005], multi-view stereo [Zitnick et al. 2004], multi-view stereo with active illumination [Waschbüsch et al. 2005], or model-based free-viewpoint video capture [Carranza et al. 2003]. In contrast to our approach, the first three methods do not deliver spatio-temporally coherent geometry or 360-degree shape models, which are both essential prerequisites for animation post-processing. At the same time, previous kinematic model-based 3D video methods were unable to capture performers in general clothing. [Starck and Hilton 2007] propose a combination of stereo and shape-from-silhouette to reconstruct performances from video. They also propose a spherical reparameterization to establish spatio-temporal coherence during post-processing. However, since their method is based on shape-from-silhouette models, which often change topology due to incorrect reconstruction, establishing spatio-temporal coherence may be error-prone. In contrast, our prior with known connectivity handles such situations more gracefully.

Data-driven 3D video methods synthesize novel perspectives by a pixel-wise blending of densely sampled input viewpoints [Wilburn et al. 2005]. While even renderings under new lighting can be produced at high fidelity [Einarsson et al. 2006], the complex acquisition apparatus, requiring hundreds of densely spaced cameras, often makes practical applications difficult. Furthermore, the lack of geometry makes subsequent editing a major challenge.

Recently, new animation design [Botsch and Sorkine 2008], animation editing [Xu et al. 2007], deformation transfer [Sumner and Popović 2004] and animation capture methods [Bickel et al. 2007] have been proposed that are no longer based on skeletal shape and motion parametrizations but rely on surface models and general shape deformation approaches. The explicit abandonment of kinematic parametrizations makes performance capture a much harder problem, but bears the striking advantage that it enables capturing of both rigidly and non-rigidly deforming surfaces with the same underlying technology.

Along this line of thinking, the approaches by [de Aguiar et al. 2007a] and [de Aguiar et al. 2007b] enable mesh-based motion capture from video. At first glance, both methods also employ laser-scanned models and a more basic shape deformation framework. But our algorithm greatly exceeds their methods' capabilities in many ways. First, our new analysis-through-synthesis tracking framework enables capturing of motion that shows a level of complexity and speed which would have been impossible to recover with previous flow-based or flow- and feature-based methods. Secondly, we propose a volumetric deformation technique that greatly increases the robustness of pose recovery. Finally, in contrast to previous methods, our algorithm explicitly recovers small-scale dynamic surface detail by applying model-guided multi-view stereo.

Also related to our approach are recent animation reconstruction methods that jointly perform model generation and deformation capture from scanner data [Wand et al. 2007]. However, their problem setting is different and computationally very challenging, which makes it hard for them to generate the visual quality that we achieve by employing a prior model. The approaches proposed in [Stoll et al. 2006] and [Shinya 2004] are able to deform mesh models into active scanner data or visual hulls, respectively. Unfortunately, neither of these methods has been shown to match our method's robustness, or the quality and detail of shape and motion data which our approach produces from video only.

3 Video-based Performance Capture

Prior to video-recording human performances, we take a full-body laser scan of the subject in its current apparel by means of a Vitus Smart™ laser scanner. After scanning, the subject immediately moves to the adjacent multi-view recording area. Our multi-view capturing apparatus features K = 8 synchronized, geometrically and photometrically calibrated video cameras running at 24 fps and providing 1004×1004 pixels frame resolution. The cameras are placed in an approximately circular arrangement around the center of the scene (see video for a visualization of the input). As part of pre-processing, color-based background subtraction is applied to all video footage to yield silhouette images of the captured performers.

Once all of the data has been captured, our automatic performance reconstruction pipeline commences, which requires only a minimum of manual interaction during pre-processing. To obtain our computational model of shape and motion, we first transform the raw scan into a high-quality surface mesh T_tri = (V_tri, T_tri) with n_s vertices V_tri = {v_1, ..., v_{n_s}} and m_s triangles T_tri = {t_1, ..., t_{m_s}} by employing the method of [Kazhdan et al. 2006] (see Fig. 2(l)). Additionally, we create a coarser tetrahedral version of the surface scan T_tet = (V_tet, T_tet) (comprising n_t vertices V_tet and m_t tetrahedra T_tet) by applying quadric error decimation and a subsequent constrained Delaunay tetrahedralization (see Fig. 2(r)). Typically, T_tri contains between 30,000 and 40,000 triangles, and the corresponding tet-version between 5,000 and 6,000 tetrahedra. Both models are automatically registered to the first pose of the actor in the input footage by means of a procedure based on iterative closest points (ICP). Since we asked the actor to strike, in the first frame of video, a pose similar to the one that she/he was scanned in, pose initialization is greatly simplified, as the model is already close to the target pose.

Our capture method explicitly abandons a skeletal motion parametrization and resorts to a deformable model as scene representation. Thereby, we face a much harder tracking problem, but gain an intriguing advantage: we are now able to track non-rigidly deforming surfaces (like wide clothing) in the same way as rigidly deforming models, and do not require prior assumptions about material distributions or a segmentation of the model.

The first core algorithmic ingredient of mesh-based performance capture is a fast and reliable shape deformation framework that expresses the deformation of the whole model based on a few point handles, Sect. 4. We capture performances in a multi-resolution way to increase reliability. First, an analysis-through-synthesis method based on image and silhouette cues estimates the global pose of an actor at each frame on the basis of the lower-detail tetrahedral input model, Sect. 5. The sequence of processing steps is designed to enable reliable convergence to plausible poses despite the highly multi-modal solution space of optimization-based mesh deformation. Once global poses are found, the high-frequency aspect of performances is captured; for instance, the motion of folds in a skirt is recovered in this step. To this end, the global poses are transferred to the high-detail surface scan, and the surface shape is refined by enforcing contour alignment and performing model-guided stereo, Sect. 6.

Figure 2: A surface scan T_tri of an actress (l) and the corresponding tetrahedral mesh T_tet in an exploded view (r).

The output of our method is a dense representation of the performance in both space and time. It comprises accurately deformed, spatio-temporally coherent geometry that nicely captures the liveliness, motion and shape detail of the original input.

4 A Deformation Toolbox

Our performance capture technique uses two variants of Laplacian shape editing. For low-frequency tracking, we use an iterative volumetric Laplacian deformation algorithm based on our tetrahedral mesh T_tet, Sect. 4.1. This method enables us to infer rotations from positional constraints and also implicitly encodes prior knowledge about shape properties that we want to preserve, such as local cross-sectional areas. For the recovery of high-frequency surface details, we transfer the captured pose of T_tet to the high-resolution surface scan, Sect. 4.2. As the scan is then already roughly in the correct pose, we can resort to a simpler non-iterative variant of surface-based Laplacian deformation to infer shape detail from silhouette and stereo constraints, Sect. 4.3.

4.1 Volumetric Deformation

It is our goal to deform the tetrahedral mesh T_tet as naturally as possible under the influence of a set of position constraints v_j ≈ q_j, j ∈ {1, ..., n_c}. To this end, we iterate a linear Laplacian deformation step and a subsequent update step, which compensates for the (mainly rotational) errors introduced by the nature of the linear deformation. This procedure minimizes the amount of non-rigid deformation each tetrahedron undergoes, and thus exhibits qualities of an elastic deformation. Our algorithm is related to the approach by [Sorkine and Alexa 2007]. However, we use a tetrahedral construction rather than their triangle mesh construction, as this allows us to implicitly preserve certain shape properties, such as cross-sectional areas, after deformation. The latter greatly increases tracking robustness, since non-plausible model poses (e.g. due to local flattening) are far less likely.

Our deformation technique is based on solving the tetrahedral Laplacian system Lv = δ with

L = G^T D G ,   (1)

and

δ = G^T D g ,   (2)

where G is the discrete gradient operator matrix for the mesh, D is a 4m_t × 4m_t diagonal matrix containing the tetrahedra's volumes, and g is the set of tetrahedron gradients, each being calculated as g_j = G_j p_j (see [Botsch and Sorkine 2008] for more detail). Here, p_j is a matrix containing the vertex coordinates of tetrahedron t_j. The constraints q_j can be factorized into the matrix L by eliminating the corresponding rows and columns in the matrix and incorporating the values into the right-hand side δ.

We now iterate the following steps:

• Linear Laplacian deformation: By solving the above system, we obtain a set of new vertex positions V'_tet = {v'_1, ..., v'_{n_t}}. Due to the linear formulation, this deformed model exhibits artifacts common to all simple Laplacian techniques, i.e. the local elements do not rotate under constraints but rather simply scale and shear to adjust to the desired pose.

• Rotation extraction: We now extract a transformation matrix T_i for each tetrahedron which brings t_i into configuration t'_i. These transformations can be further split into a rigid part R_i and a non-rigid part S_i using polar decomposition. Keeping only the rotational component removes the non-rigid influences of the linear deformation step from the local elements.

• Differential update: We finally update the right-hand side δ using Eq. (2) by applying the rotations R_i to the gradients of the tetrahedra.

Iterating this procedure minimizes the amount of non-rigid deformation S_i remaining in each tetrahedron. Henceforth we will refer to this deformation energy as E_D. While our subsequent tracking steps would work with any physically plausible deformation or simulation method such as [Botsch et al. 2007; Müller et al. 2002], our technique has the advantages of being extremely fast, very easy to implement, and of producing plausible results even if material properties are unknown.
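For concreteness, the following Python/SciPy sketch prototypes the above iteration: it assembles L = G^T D G from per-tetrahedron gradient operators, solves the soft-constrained least-squares system, extracts per-tet rotations by polar decomposition, and performs the differential update of δ. It is a minimal illustration under assumed data layouts (vertex and index arrays), an assumed soft-constraint weight and iteration count, not the authors' implementation.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def edge_matrix(V, tet):
    """3x3 matrix whose columns are the edge vectors of one tetrahedron."""
    p0, p1, p2, p3 = V[tet]
    return np.column_stack([p1 - p0, p2 - p0, p3 - p0])

def local_gradient(W):
    """Per-tet operator Gl (3x4) with F^T = Gl @ P for the 4x3 vertex block P;
    W is the inverse rest-pose edge matrix of the tetrahedron."""
    Gl = np.zeros((3, 4))
    Gl[:, 1:] = W.T
    Gl[:, 0] = -W.T.sum(axis=1)
    return Gl

def deform(V0, tets, handles, targets, n_iters=10, w=1e3):
    """V0: (n,3) rest vertices; tets: (m,4) indices; handles: constrained
    vertex indices; targets: (len(handles),3) target positions q_j."""
    n = len(V0)
    Winv = [np.linalg.inv(edge_matrix(V0, t)) for t in tets]
    vols = np.array([abs(np.linalg.det(edge_matrix(V0, t))) / 6 for t in tets])
    rows, cols, vals = [], [], []          # assemble L = G^T D G blockwise
    for t, tet in enumerate(tets):
        Gl = local_gradient(Winv[t])
        K = vols[t] * (Gl.T @ Gl)          # 4x4 volume-weighted block
        for a in range(4):
            for b in range(4):
                rows.append(tet[a]); cols.append(tet[b]); vals.append(K[a, b])
    L = sp.csr_matrix((vals, (rows, cols)), shape=(n, n))
    # soft positional constraints v_j ~ q_j appended as weighted rows
    C = sp.csr_matrix((np.full(len(handles), w),
                       (np.arange(len(handles)), handles)),
                      shape=(len(handles), n))
    A = sp.vstack([L, C])
    solve = spla.factorized((A.T @ A).tocsc())   # prefactor normal equations
    V = V0.copy()
    for _ in range(n_iters):
        delta = np.zeros((n, 3))                 # delta = G^T D g
        for t, tet in enumerate(tets):
            F = edge_matrix(V, tet) @ Winv[t]    # current deformation gradient
            U, _, Vt = np.linalg.svd(F)          # polar decomposition:
            if np.linalg.det(U @ Vt) < 0:        # keep only the rigid part R_i
                U[:, -1] *= -1
            R = U @ Vt
            block = vols[t] * (local_gradient(Winv[t]).T @ R.T)
            for a in range(4):
                delta[tet[a]] += block[a]        # differential update
        rhs = A.T @ np.vstack([delta, w * targets])
        V = np.column_stack([solve(rhs[:, c]) for c in range(3)])
    return V
```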

4.2 Deformation Transfer

To transfer a pose from T_tet to T_tri, we express the position of each vertex v_i in T_tri as a linear combination of vertices in T_tet. These coefficients c_i are calculated for the rest pose and can be used afterwards to update the pose of the triangle mesh.

We generate the linear coefficients c_i by finding the subset T_r(v_i) of all tetrahedra from T_tet that lie within a local spherical neighborhood of radius r (in all our cases, r was set to 5% of the mesh's bounding box diagonal) and contain a boundary face with a face normal similar to that of v_i. Subsequently, we calculate the (not necessarily positive) barycentric coordinate coefficients c_i(j) of the vertex with respect to all t_j ∈ T_r(v_i) and combine them into one larger coefficient vector c_i as

c_i = ( Σ_{t_j ∈ T_r(v_i)} c_i(j) φ(v_i, t_j) ) / ( Σ_{t_j ∈ T_r(v_i)} φ(v_i, t_j) ) .

φ(v_i, t_j) is a compactly supported radial basis function with respect to the distance of v_i to the barycenter of tetrahedron t_j. This weighted averaging ensures that each point is represented by several tetrahedra, and thus the deformation transfer from tetrahedral mesh to triangle mesh will be smooth. The coefficients for all vertices of T_tri are combined into a matrix B. Thanks to the smooth partition-of-unity definition and the local support of our parametrization, we can quickly compute the mesh in its transferred pose V_tri by multiplying the current vertex positions of the tetrahedral mesh V_tet with B.
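A possible realization of this transfer as a sparse matrix B is sketched below. Since the paper does not prescribe a specific φ, a Wendland-type RBF is used as one illustrative choice; the neighbor pre-selection (radius and normal test) is assumed to be done beforehand.

```python
import numpy as np
import scipy.sparse as sp

def wendland(d, r):
    """A compactly supported RBF (Wendland C2) as one possible phi."""
    x = np.clip(d / r, 0.0, 1.0)
    return (1.0 - x) ** 4 * (4.0 * x + 1.0)

def transfer_matrix(V_tri, V_tet, tets, neighbors, r):
    """neighbors[i]: indices of the tetrahedra T_r(v_i) pre-selected for
    surface vertex i (within radius r, compatible boundary-face normal)."""
    rows, cols, vals = [], [], []
    for i, v in enumerate(V_tri):
        entries, weights = [], []
        for t in neighbors[i]:
            tet = tets[t]
            # (possibly negative) barycentric coordinates of v w.r.t. tet t
            T = np.column_stack([V_tet[tet[j]] - V_tet[tet[3]] for j in range(3)])
            b3 = np.linalg.solve(T, v - V_tet[tet[3]])
            bary = np.append(b3, 1.0 - b3.sum())
            center = V_tet[tet].mean(axis=0)
            weights.append(wendland(np.linalg.norm(v - center), r))
            entries.append((tet, bary))
        wsum = sum(weights) + 1e-12               # partition of unity
        for w, (tet, bary) in zip(weights, entries):
            for j in range(4):
                rows.append(i); cols.append(tet[j]); vals.append(w / wsum * bary[j])
    return sp.csr_matrix((vals, (rows, cols)), shape=(len(V_tri), len(V_tet)))

# Pose transfer is then a single sparse multiply: V_tri_new = B @ V_tet_new
```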

4.3 Surface-based Deformation

Our surface-based deformation relies on a simple least-squares Laplacian system as it has been widely used in recent years [Botsch and Sorkine 2008]. Given our triangle mesh T_tri, we apply a discrete least-squares Laplacian using cotangent weights to deform the surface under the influence of a set of position constraints v_j ≈ q_j, j ∈ {1, ..., n_c}. This can be achieved by minimizing the energy

argmin_v ( ||Lv − δ||² + ||Cv − q||² ) .   (3)

Here, L is the cotangent Laplacian matrix, δ are the differential coordinates, and C is a diagonal matrix with non-zero entries C_{j,j} = w_j only for constrained vertices v_j (where w_j is the weight of the additional entry). This formulation uses the Laplacian as a regularization term for the deformation defined by our constraints.
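In matrix form, Eq. (3) is a sparse linear least-squares problem. The sketch below solves it via the normal equations, expressing the diagonal constraint matrix C equivalently as weighted selection rows; the cotangent Laplacian L and the differential coordinates δ are assumed to be precomputed (e.g. with a geometry processing library), and all names are illustrative.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def laplacian_edit(L, delta, constr_idx, constr_pos, constr_w):
    """L: (n,n) cotangent Laplacian; delta: (n,3) differential coordinates;
    constr_idx, constr_pos, constr_w: numpy arrays of constrained vertex
    indices, target positions q_j and weights w_j."""
    n = L.shape[0]
    C = sp.csr_matrix((constr_w, (np.arange(len(constr_idx)), constr_idx)),
                      shape=(len(constr_idx), n))
    A = sp.vstack([L, C])
    b = np.vstack([delta, constr_w[:, None] * constr_pos])
    solve = spla.factorized((A.T @ A).tocsc())   # normal equations, SPD
    rhs = A.T @ b
    return np.column_stack([solve(rhs[:, c]) for c in range(3)])
```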

5 Capturing the Global Model Pose

Our first step aims at recovering, for each time step of video, a global pose of the tetrahedral input model that matches the pose of the real actor. In a nutshell, our global pose extraction method computes deformation constraints from each pair of subsequent multi-view input video frames at times t and t+1. It then applies the volumetric shape deformation procedure to modify the pose of T_tet at time t (that was found previously) until it aligns with the input data at time t+1. In order to converge to a plausible pose under this highly multi-modal goodness-of-fit criterion, it is essential that we extract the right types of features from the images in the right sequence and apply the resulting deformation constraints in the correct order.

To serve this purpose, our pose recovery process begins with the extraction of 3D vertex displacements from reliable image features, which brings our model close to its final pose even if scene motion is rapid, Sect. 5.1. The distribution of 3D features on the model surface depends on scene structure, e.g. texture, and can, in general, be non-uniform or sparse. Therefore, the resulting pose may not be entirely correct. Furthermore, potential outliers in the correspondences make additional pose update steps unavoidable. We therefore subsequently resort to two additional steps that exploit silhouette data to fully recover the global pose. The first step refines the shape of the outer model contours until they match the multi-view input silhouette boundaries, Sect. 5.2. The second step optimizes 3D displacements of key vertex handles until optimal multi-view silhouette overlap is reached, Sect. 5.3. Conveniently, the multi-view silhouette overlap can be quickly computed as an XOR operation on the GPU.
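As a sketch, the XOR overlap error reduces to a few lines once binary silhouette masks are available; the paper evaluates it on the GPU, while this CPU version with NumPy merely conveys the idea.

```python
import numpy as np

def silhouette_xor_error(model_masks, input_masks):
    """Sum of XOR pixels over all K views; masks are HxW boolean arrays."""
    return sum(int(np.logical_xor(m, s).sum())
               for m, s in zip(model_masks, input_masks))
```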

We gain further tracking robustness by subdividing the surface of the volume model into a set R of approximately 100-200 regions of similar size during pre-processing [Yamauchi et al. 2005]. Rather than inferring displacements for each vertex, we determine representative displacements for each region, as explained in the following sections.

5.1 Pose Initialization from Image Features

Given two sets of multi-view video frames I_1(t), ..., I_K(t) and I_1(t+1), ..., I_K(t+1) from subsequent time steps, our first processing step extracts SIFT features in each frame [Lowe 1999] (see Fig. 3). For each camera view k and either time step, this yields a list LD_{k,t} of 2D feature locations u^k_{l,t}, l = 1, ..., L_k, along with their SIFT feature descriptors d^k_{l,t}. SIFT features are our descriptors of choice, as they are largely invariant under illumination and out-of-plane rotation and enable reliable correspondence finding even if the scene motion is fast.

Figure 3: 3D correspondences are extracted from corresponding SIFT features in respective input camera views at t and t+1. These 3D correspondences, two of them illustrated by lines, are used to deform the model into a first pose estimate for t+1.

Let T_tet(t) be the pose of T_tet at time t. To transform feature data into deformation constraints for vertices of T_tet(t), we first need to pair image features from time t with vertices in the model. We therefore associate each v_i of T_tet(t) with the descriptor d^i_{k,t} from each I_k(t) that is located closest to the projected location of v_i in this respective camera. We perform this computation for all camera views and discard a feature association if v_i is not visible from camera k or if the distance between the projected position of v_i and the image position of d^i_{k,t} is too large. This way, we obtain a set of associations A(v_i, t) = {d^{j_1}_{1,t}, ..., d^{j_K}_{K,t}} for a subset of vertices that contains at most one feature from each camera. Lastly, we check the consistency of each A(v_i, t) by comparing the pseudo-intersection point p^INT_i of the reprojected rays passing through u^{j_1}_{1,t}, ..., u^{j_K}_{K,t} to the 3D position of v_i in model pose T_tet(t). If the distance ||v_i − p^INT_i|| is greater than a threshold E_DIST, the original feature association is considered implausible and v_i is removed from the candidate list for deformation handles.

The next step is to establish temporal correspondence, i.e. to find, for each vertex v_i with feature association A(v_i, t), the corresponding association A(v_i, t+1) with features from the next time step. To this end, we preliminarily find, for each d^j_{k,t} ∈ A(v_i, t), a descriptor d^f_{k,t+1} ∈ LD_{k,t+1} by means of nearest-neighbor matching in the descriptor values, and add d^f_{k,t+1} to A(v_i, t+1). In practice, this initial assignment is likely to contain outliers; therefore, we compute the final set of temporal correspondences by means of robust spectral matching [Leordeanu and Hebert 2005]. This method efficiently bypasses the combinatorial complexity of the correspondence problem by formulating it in closed form as a spectral analysis problem on a graph adjacency matrix. Incorrect matches are eliminated by searching for an assignment in which both the feature descriptor values across time are consistent and pairwise feature distances across time are preserved. Fig. 3 illustrates a subset of associations found for two camera views. From the final set of associations A(v_i, t+1), we compute the predicted 3D target position p^EST_i of vertex v_i, again as the virtual intersection point of reprojected image rays through the 2D feature positions.
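A compact sketch of spectral matching in the spirit of [Leordeanu and Hebert 2005] is given below. Building the pairwise-consistency affinity matrix (from descriptor similarity and preserved pairwise distances) is left abstract, and the greedy discretization is one common variant; all names are illustrative.

```python
import numpy as np

def spectral_match(candidates, M):
    """candidates: list of putative matches (feature at t, feature at t+1);
    M: (n,n) symmetric non-negative affinity, M[a,b] high iff matches a and b
    are mutually consistent (descriptors similar, pairwise distances kept)."""
    _, vecs = np.linalg.eigh(M)
    x = np.abs(vecs[:, -1])             # principal eigenvector of M
    used_s, used_t, accepted = set(), set(), []
    for a in np.argsort(-x):            # greedy discretization
        if x[a] < 1e-9:
            break
        i, j = candidates[a]
        if i in used_s or j in used_t:
            continue                    # enforce a one-to-one assignment
        accepted.append((i, j))
        used_s.add(i); used_t.add(j)
    return accepted
```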

Figure 4: (a) Color-coded distance field from the image silhouette contour shown for one camera view. (b) Rim vertices with respect to one camera view marked in red on the 3D model.

Each vertex v_i for which a new estimated position was found is a candidate for a deformation handle. However, we do not straightforwardly apply all handles to move directly to the new target pose. We rather propose the following step-wise procedure which, in practice, is less likely to converge to implausible model configurations: we resort to the set of regions R on the surface of the tet-mesh (as described above) and find, for each region r_i ∈ R, one best handle from all candidate handles that lie in r_i. The best handle vertex v_i is the one whose local normal is most collinear with the difference vector p^EST_i − v_i. If no handle is found for a region, we constrain the center of that region to its original 3D position in T_tet(t). This prevents unconstrained surface areas from drifting arbitrarily. For each region handle, we define a new intermediate target position as

q_i = v_i + (p^EST_i − v_i) / ||p^EST_i − v_i|| .

Typically, we obtain position constraints q_i for around 70% to 90% of the surface regions R, which are then used to change the pose of the model. This step-wise deformation is repeated until the multi-view silhouette overlap error SIL(T_tet, t+1) cannot be improved further. The overlap error is computed as the XOR between input and model silhouettes in all camera views.

We would like to remark that we do not require tracking of features across the entire sequence, which greatly contributes to the reliability of our method. The output of this step is a feature-based pose estimate T^F_tet(t+1).

5.2 Refining the Pose using Silhouette Rims

In image regions with sparse or low-frequency textures, only few SIFT features may have been found. In consequence, the pose of T^F_tet(t+1) may not be correct in all parts. We therefore resort to another constraint that is independent of image texture and has the potential to correct such misalignments. To this end, we derive additional deformation constraints for a subset of vertices on T^F_tet(t+1) that we call rim vertices V_RIM(t+1), see Fig. 4(b). In order to find the elements of V_RIM(t+1), we first calculate contour images C_{k,t+1} using the rendered volumetric model silhouettes. A vertex v_i is considered a rim vertex if it projects into close vicinity of the silhouette contour in (at least) one of the C_{k,t+1}, and if the normal of v_i is perpendicular to the viewing direction of camera k.

For each element v_i ∈ V_RIM(t+1), a 3D displacement is computed by analyzing the projected location u_{k,t+1} of the vertex in the camera k that originally defined its rim status. The value of the distance field from the contour at the projected location defines the total displacement length in vertex normal direction, Fig. 4(a). This way, we obtain deformation constraints for rim vertices, which we apply in the same step-wise deformation procedure that was already used in Sect. 5.1. The result is a new model configuration T^R_tet(t+1) in which the projections of the outer model contours more closely match the input silhouette boundaries.
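A sketch of the rim-vertex test is given below; the camera interface, the thresholds and the per-view distance field to the rendered contour are hypothetical stand-ins for the quantities described above.

```python
import numpy as np

def find_rim_vertices(V, N, cameras, contour_dist, d_max=2.0, cos_max=0.15):
    """cameras: objects with project(p) -> (u, v) pixel coordinates and
    view_dir(p) -> unit viewing direction (hypothetical interface);
    contour_dist[k]: 2D distance field to the rendered contour of view k."""
    rims = []
    for i, v in enumerate(V):
        for k, cam in enumerate(cameras):
            u, w = cam.project(v)
            near = contour_dist[k][int(round(w)), int(round(u))] < d_max
            grazing = abs(np.dot(N[i], cam.view_dir(v))) < cos_max
            if near and grazing:
                rims.append((i, k))   # v_i is a rim vertex w.r.t. view k
                break
    return rims
```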

5.3 Optimizing Key Handle Positions

In the majority of cases, the pose of the model in T^R_tet(t+1) is already close to a good match. However, in particular if the scene motion was fast or the initial pose estimate from SIFT was not entirely correct, residual pose errors remain. We therefore perform an additional optimization step that corrects such residual errors by globally optimizing the positions of a subset of deformation handles until good silhouette overlap is reached.

Instead of optimizing the positions of all 1000-2000 vertices of the volumetric model, we only optimize the positions of typically 15-25 key vertices V_k ⊂ V_tet until the tetrahedral deformation produces optimal silhouette overlap. Tracking robustness is increased by designing our energy function such that surface distances between key handles are preserved and pose configurations with low distortion energy E_D are preferred. We ask the user to specify key vertices manually, a procedure that has to be done only once for every model. Typically, key vertices are marked close to anatomical joints, and in the case of model parts representing loose clothing, a simple uniform handle distribution produces good results.

Given all key vertex positions v_i ∈ V_k in the current model pose T^R_tet(t+1), we optimize for their new positions p_i by minimizing the following energy functional:

E(V_k) = w_S · SIL(T_tet(V_k), t+1) + w_D · E_D + w_C · E_C .   (4)

Here, SIL(T_tet(V_k), t+1) denotes the multi-view silhouette overlap error of the tet-mesh in its current deformed pose T_tet(V_k), which is defined by the new positions of the V_k. E_D is the deformation energy as defined in Sect. 4.1; implicitly, we reason that low-energy configurations are more plausible. E_C penalizes changes in distance between neighboring key vertices. All three terms are normalized, and the weights w_S, w_D and w_C are chosen such that SIL(T_tet(V_k), t+1) is the dominant term. We use a quasi-Newton L-BFGS-B method to minimize Eq. (4) [Byrd et al. 1995].

Fig. 5 illustrates the improvements in the new output pose T^O_tet(t+1) that are achieved through key handle optimization.
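Since [Byrd et al. 1995] is the algorithm behind SciPy's L-BFGS-B implementation, the optimization can be prototyped as below; the three energy callables stand in for SIL, E_D and E_C, and the weights are placeholders rather than the paper's values.

```python
import numpy as np
from scipy.optimize import minimize

def optimize_key_handles(p0, sil, e_d, e_c, w=(1.0, 0.1, 0.1)):
    """p0: (k,3) initial key-handle positions; sil, e_d, e_c: callables
    mapping a (k,3) handle configuration to the SIL, E_D and E_C terms."""
    w_s, w_d, w_c = w                        # silhouette term kept dominant

    def objective(x):
        p = x.reshape(-1, 3)
        return w_s * sil(p) + w_d * e_d(p) + w_c * e_c(p)

    res = minimize(objective, p0.ravel(), method="L-BFGS-B")
    return res.x.reshape(-1, 3)
```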

5.4 Practical Considerations

The above sequence of steps is performed for each pair of subsequent time instants. Surface detail capture, Sect. 6, commences after the global poses for all frames were found.

Typically, the rim step described in Sect. 5.2 is performed once more after the last silhouette optimization step, which, in some cases, leads to a better model alignment. We also perform a consistency check on the output of low-frequency pose capture to correct potential self-intersections. To this end, for every vertex lying inside another tetrahedron, we use the volumetric deformation method to displace this vertex in outward direction along its normal until the intersection is resolved.

Figure 5: Model (a) and silhouette overlap (b) after the rim step; slight pose inaccuracies in the legs and arms appear black in the silhouette overlap image. (c), (d) After key vertex optimization, these pose inaccuracies are removed and the model strikes a correct pose.

6 Capturing Surface Detail

Once the global pose has been recovered for each frame, the pose sequence of T_tet is mapped to T_tri, Sect. 4.2. In the following, the process of shape detail capture at a single time step is explained.

6.1 Adaptation along Silhouette Contours

In a first step, we adapt the silhouette rims of our fine mesh to better match the input silhouette contours. As we are now working on a surface mesh which is already very close to the correct configuration, we can allow a much broader and less smooth range of deformations than in the volumetric case, and thereby bring the model into much closer alignment with the input data. At the same time, we have to be more careful in selecting our constraints, since noise in the data now has a more deteriorating influence.

Similar to Sect. 5.2, we calculate rim vertices, now however on the high-resolution surface mesh, Fig. 6(a). For each rim vertex, the closest 2D point on the silhouette boundary is found in the camera view that defines its rim status. Now we check if the image gradient at the input silhouette point has a similar orientation to the image gradient in the reprojected model contour image. If this is the case, the back-projected input contour point defines the target position for the rim vertex. If the distance between back-projection and original position is smaller than a threshold E_RIM, we add it as a constraint to Eq. (3). Here, we use a low weight (between 0.25 and 0.5, depending on the quality of the segmentation) for the rim constraint points. This has a regularizing and damping effect on the deformation that minimizes implausible shape adaptation in the presence of noise. After processing all vertices, we solve for the new surface. This rim projection and deformation step is iterated up to 20 times, or until the silhouette overlap cannot be improved further.
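The acceptance test for one such rim constraint can be sketched as follows; the gradient lookups, the back-projection helper and both thresholds are hypothetical stand-ins for the quantities described above.

```python
import numpy as np

def rim_constraint(v, sil_pt, grad_input, grad_model, back_project,
                   e_rim=0.01, cos_min=0.8):
    """v: (3,) rim vertex; sil_pt: (2,) closest input silhouette point in the
    view defining the rim status; grad_*: callables returning 2D image
    gradients; back_project: callable lifting sil_pt to a 3D point near v."""
    gi, gm = grad_input(sil_pt), grad_model(sil_pt)
    cos = np.dot(gi, gm) / (np.linalg.norm(gi) * np.linalg.norm(gm) + 1e-12)
    if cos < cos_min:
        return None                      # gradient orientations disagree
    target = back_project(sil_pt, v)     # 3D target for the rim vertex
    if np.linalg.norm(target - v) > e_rim:
        return None                      # farther than threshold E_RIM
    return target                        # used in Eq. (3) with weight 0.25-0.5
```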

6.2 Model-guided Multi-view Stereo

Although the silhouette rims only provide reliable constraints on outer boundaries, they are usually evenly distributed on the surface. Hence, the deformation method in general nicely adapts the shape of the whole model, also in areas which do not project onto image contours. Unless the surface of the actor has a complicated shape with many concavities, the result of rim adaptation is already a realistic representation of the correct shape.

However, in order to recover shape detail in model regions that do not project to silhouette boundaries, such as folds and concavities in a skirt, we resort to photo-consistency information. To serve this purpose, we derive additional deformation constraints by applying the multi-view stereo method proposed by [Goesele et al. 2006]. Since our model is already close to the correct surface, we can initialize the stereo optimization from the current surface estimate and constrain the correlation search to 3D points that are at most ±2 cm away from T_tri.

As we have far fewer viewpoints of our subject than Goesele et al., and our actors can wear apparel with little texture, the resulting depth maps (one for each input view) are often sparse and noisy. Nonetheless, they provide important additional cues about the object's shape. We merge the depth maps produced by stereo into a single point cloud P, Fig. 6(b), and thereafter project points from V_tri onto P using a method similar to [Stoll et al. 2006]. These projected points provide additional position constraints that we can use in conjunction with the rim vertices in the surface-based deformation framework, Eq. (3). Given the uncertainty in the data, we solve the Laplace system with lower weights for the stereo constraints.

Figure 6: Capturing small-scale surface detail: (a) First, deformation constraints from silhouette contours, shown as red arrows, are estimated. (b) Additional deformation handles are extracted from a 3D point cloud that was computed via model-guided multi-view stereo. (c) Together, both sets of constraints deform the surface scan to a highly accurate pose. Evaluation: (d) per-frame silhouette overlap in percent after global pose estimation (blue) and after surface detail reconstruction (green). (e) Blended overlay between an input image and the reconstructed model, showing the almost perfect alignment of our result.
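Deriving the stereo constraints then amounts to snapping mesh vertices to nearby cloud points. The sketch below uses a KD-tree nearest-neighbor query as a simple stand-in for the projection of [Stoll et al. 2006]; the 2 cm acceptance radius mirrors the search band mentioned above and is an assumption here.

```python
import numpy as np
from scipy.spatial import cKDTree

def stereo_constraints(V_tri, P, max_dist=0.02):
    """V_tri: (n,3) surface vertices; P: (m,3) merged stereo point cloud;
    max_dist: acceptance radius in scene units (meters assumed here)."""
    dist, idx = cKDTree(P).query(V_tri)
    return {i: P[idx[i]] for i in range(len(V_tri)) if dist[i] < max_dist}
```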

7 Results and Applications

Our test data were recorded in the acquisition setup described in Sect. 3 and comprise 12 sequences that show four different actors and feature between 200 and 600 frames each. To show the large application range of our algorithm, the captured performers wore a wide range of different apparel, from tight to loose, and made of fabrics with prominent texture as well as of plain colors only. Also, the recovered set of motions ranges from simple walks, over different dance styles, to fast capoeira sequences. As the images in Figs. 1, 7 and 8, as well as the results in the accompanying video demonstrate, our algorithm faithfully reconstructs this wide spectrum of scenes. We would also like to note that, although we focused on human performers, our algorithm would work equally well for animals, provided that a laser scan can be acquired.

Fig. 1 shows several captured poses of a very rapid capoeira sequence in which the actor performs a series of turn kicks. Despite the fact that in our 24 fps recordings the actor rotates by more than 25 degrees between some subsequent frames, both shape and motion are reconstructed at high fidelity. The resulting animation even shows deformation details such as the waving of the trouser legs (see video). Furthermore, even with the plain white clothing that the actor wears in the input, which exhibits only few traceable SIFT features, our method performs reliably, as it can capitalize on rims and silhouettes as additional sources of information. Comparing a single moment from the kick to an input frame confirms the high quality of our reconstruction, Fig. 7(b) (note that input and virtual camera views differ slightly).

The video also shows the captured capoeira sequence with a static checkerboard texture. This result demonstrates that temporal aliasing, such as tangential surface drift of vertex positions, is almost unnoticeable, and that the overall quality of the meshes remains highly stable.

In Fig. 7(a) we show one pose from a captured jazz dance performance. As the comparison to the input in image and video shows, we are able to capture this fast and fluent motion. In addition, we can also reconstruct the many poses with complicated self-occlusions, such as the inter-twisted arm motion in front of the torso in Fig. 7(a).

Fig. 8 shows one of the main strengths of our method, namely its ability to capture the full time-varying shape of a dancing girl wearing a skirt. Even though the skirt is of largely uniform color, our results capture the natural waving and lifelike dynamics of the fabric (see also the video). In all frames, the overall body posture, and also the folds of the skirt, were recovered nicely without the user specifying a segmentation of the model beforehand. We would also like to note that in these skirt sequences (one more is shown in the video) the benefits of the stereo step in recovering concavities are most apparent. In the other test scenes, the effects are less pronounced, and we therefore deactivated the stereo step (Sect. 6.2) there to reduce computation time. The jitter in the hands that is slightly visible in some of the skirt sequences is due to the fact that the person moves with an opened hand while the scan was taken with the hands forming a fist. In general, we also smooth the final sequence of vertex positions to remove any remaining temporal noise.

Apart from the scenes shown in the result images, the video contains three more capoeira sequences, two more dance sequences, two more walking sequences and one additional skirt sequence.

7.1 Validation and Discussion

Table 1 gives detailed average timings for each individual step of our algorithm. These timings were obtained with highly unoptimized single-threaded code running on an Intel Core Duo T2500 laptop at 2.0 GHz. We see plenty of room for implementation improvement and anticipate that parallelization can lead to a significant run time reduction.

So far, we have visually shown the high capture quality, as well as the large application range and versatility of our approach. To formally validate the accuracy of our method, we have compared the silhouette overlap of our tracked output models with the segmented input frames. We use this criterion since, to our knowledge, there is no gold-standard alternative capturing approach that would provide us with accurate time-varying 3D data. The re-projections of our final results typically overlap with over 85% of the input silhouette pixels already after global pose capture alone (blue curve in Fig. 6(d)). Surface detail capture further improves this overlap to more than 90%, as shown by the green curve. Please note that this measure is slightly negatively biased by errors in foreground segmentation in some frames, which appear as erroneous silhouette pixels. Visual inspection of the silhouette overlap therefore confirms the almost perfect alignment of model and actual person silhouette. Fig. 6(e) shows a blended overlay between the rendered model and an input frame which supports this point.

Our algorithm robustly handles even noisy input, e.g. due to typically observed segmentation errors in our color-based segmentation (see video). All 12 input sequences were reconstructed fully automatically after only minimal initial user input. As part of pre-processing, the user marks the head and foot regions of each model to exclude them from surface detail capture. Even the slightest silhouette errors in these regions (in particular due to shadows on the floor and black hair color) would otherwise cause unnatural deformations. Furthermore, for each model the user once marks at most 25 deformation handles needed for the key handle optimization step, Sect. 5.3.

Figure 7: (a) Jazz dance posture with reliably captured inter-twisted arm motion. (b) One moment from a very fast capoeira turn kick (input and virtual viewpoints differ minimally).

In individual frames of two out of three capoeira turn kick sequences (11 out of around 1000 frames), as well as in one frame of each of the skirt sequences (2 out of 850 frames), the output of global pose recovery showed slight misalignments in one of the limbs. Please note that, despite these isolated pose errors, the method always recovers immediately and tracks the whole sequence without drifting; this means the algorithm can run without supervision and the results can be checked afterwards. All observed pose misalignments were exclusively due to oversized silhouette areas caused by either motion blur or strong shadows on the floor. Both could have been prevented by better adjustment of lighting and shutter speed, and by more advanced segmentation schemes. In each case of global pose misalignment, at most two deformation handle positions had to be slightly adjusted by the user. For none of the over 3500 input frames we processed in total was it necessary to manually correct the output of surface detail capture (Sect. 6).

Step                                        Time
SIFT step (Sect. 5.1)                       ~34 s
Global rim step (Sect. 5.2)                 ~145 s
Key handle optimization (Sect. 5.3)         ~270 s
Contour-based refinement (Sect. 6.1)        ~27 s
Stereo, 340×340 depth maps (Sect. 6.2)      ~132 s

Table 1: Average run times per frame for individual steps.

For comparison, we implemented two related approaches from the literature. The method by [de Aguiar et al. 2007a] uses surface-based deformation and optical flow to track a deformable mesh from multi-view video. As admitted by the authors, optical flow fails for fast motions like our capoeira kicks, which makes tracking with their approach infeasible. In contrast, our volumetric deformation framework, in combination with the multi-cue analysis-through-synthesis approach, captures this footage reliably. The method proposed in [de Aguiar et al. 2007b] solves the slightly different problem of capturing continuous 3D feature trajectories from multi-view video without 3D scene geometry. However, as shown in their paper, the trajectories can be employed to deform a surface scan to move like the actor in the video. In our experiments, we found that it is hard for their method to maintain uninterrupted trajectories if the person moves quickly at times, turns a lot, or strikes poses with complex self-intersections. In contrast, our method handles these situations robustly. Furthermore, as opposed to both of these methods, we perform a stereo-based refinement step that improves contour alignment and estimates true time-varying surface detail and concavities, which greatly contribute to the naturalness of the final result.

Despite our method's large application range, there are a few limitations to be considered. Our current silhouette rim matching may produce erroneous deformations if the topological structure of the input silhouette differs too strongly from the reprojected model silhouette. However, in none of our test scenes did this turn out to be an issue. In the future, we plan to investigate more sophisticated image registration approaches to solve this problem entirely. Currently, we record in a controlled studio environment to obtain good segmentations, but are confident that more advanced background segmentation will enable us to handle outdoor scenes.

Moreover, there is a resolution limit to our deformation capture. Some of the high-frequency detail in our final result, such as fine wrinkles in clothing or details of the face, was part of the laser scan in the first place. The deformation on this level of detail is not actually captured; rather, it is "baked in" to the deforming surface. Consequently, in some isolated frames, small local differences in shape detail between ground-truth video footage and our deformed mesh may be observed, in particular if the deformed mesh pose deviates very strongly from the scanned pose. To illustrate the level of detail that we are actually able to reconstruct, we generated a result with a coarse scan that lacks fine surface detail. Fig. 9 shows an input frame (l), as well as the reconstructions using the detailed scan (m) and the coarse model (r). While, as noted before, the finest detail in Fig. 9(m) is due to the high-resolution laser scan, even with a coarse scan our method still captures the important lifelike motion and deformation details, Fig. 9(r). To further support this point, the accompanying video shows a side-by-side comparison between the final result with a coarse template and the final result with the original detailed scan.

Also, in our system the topology of the input scanned model is preserved over the whole sequence. For this reason, we are not able to track surfaces that arbitrarily change apparent topology over time (e.g. the movement of hair or deep folds with self-collisions). Furthermore, although we prevent self-intersections during global pose capture, we currently do not correct them in the output of surface detail capture. However, their occurrence is rather seldom. Manual or automatic correction by collision detection would also be feasible.

Our volume-based deformation technique essentially mimics elastic deformation, thus the geometry generated by the low-frequency tracking may in some cases have a rubbery look. For instance, an arm may not only bend at the elbow, but rather bend along its entire length. Surface detail capture eliminates such artifacts in general, and a more sophisticated yet slower finite element deformation could reduce this problem already at the global pose capture stage. Despite these limitations, we have presented a new non-intrusive approach to spatio-temporally dense performance capture from video. It deliberately abandons traditional motion skeletons to reconstruct a large range of real-world scenes in a spatio-temporally coherent way and at a high level of detail.

Figure 8: Side-by-side comparison of input and reconstruction of a dancing girl wearing a skirt (input and virtual viewpoints differ minimally). Body pose and detailed geometry of the waving skirt, including lifelike folds and wrinkles visible in the input, have been recovered.

7.2 Applications

In the following, we briefly exemplify the strengths and usability of our algorithm in two practical applications that are important in media production.

3D Video. Since our approach works without optical markings, we can use the captured video footage and texture the moving geometry from the input camera views, for instance by using the blending scheme from [Carranza et al. 2003]. The result is a 3D video representation that can be rendered from arbitrary synthetic views (see video and Fig. 10(l),(m)). Due to the highly detailed underlying scene geometry, the visual results are much better than with previous model-based or shape-from-silhouette-based 3D video methods.

Figure 9: Input frame (l) and reconstructions using a detailed (m) and a coarse model (r). Although the fine details on the skirt are due to the input laser scan (m), even with a coarse template, our method captures the folds and the overall lifelike motion of the cloth (r).

Figure 10: (l),(m) High-quality 3D video renderings of the dancer wearing a skirt. (r) Fully-rigged character automatically estimated from a capoeira turn kick output.

Reconstruction of a fully-rigged character. Since our method produces spatio-temporally coherent scene geometry with practically no tangential distortion over time, we can reconstruct a fully-rigged character, i.e. a character featuring an animation skeleton, a surface mesh and associated skinning weights, Fig. 10(r), in case this is a suitable parametrization for a scene. To this end, we feed our result sequences into the automatic rigging method proposed in [de Aguiar et al. 2008], which fully automatically learns the skeleton and the blending weights from mesh sequences. Although not the focus of this paper, this experiment shows that the data captured by our system can optionally be converted into a format immediately suitable for modification with traditional animation tools.

8 Conclusion

We have presented a new approach to video-based performance capture that produces a novel dense and feature-rich output format comprising spatio-temporally coherent high-quality geometry, lifelike motion data, and optionally surface texture of recorded actors. The fusion of efficient volume- and surface-based deformation schemes, a multi-view analysis-through-synthesis procedure, and a multi-view stereo approach enables our method to capture performances of people wearing a wide variety of everyday apparel and performing extremely fast and energetic motion. The proposed method supplements and exceeds the capabilities of marker-based optical capturing systems that are widely used in industry, and will provide animators and CG artists with a new level of flexibility in acquiring and modifying real-world content.

Acknowledgements

Special thanks to our performers Maria Jacob, Yvonne Flory and Samir Hammann, as well as to Derek D. Chan for helping us with the video. This work has been developed within the Max-Planck-Center for Visual Computing and Communication (MPC-VCC) collaboration.

References

A LLEN ,B.,C URLESS ,B.,AND P OPOVI ′C

,Z.2002.Articulated body deformation from range scan data.ACM Trans.Graph.21,3,612–619.B ALAN ,A.O.,S IGAL ,L.,B LACK ,M.J.,D AVIS ,J.E.,AND H AUSSECKER ,H.W.2007.Detailed human shape and pose from images.In Proc.CVPR .B ICKEL ,B.,B OTSCH ,M.,A NGST ,R.,M ATUSIK ,W.,O TADUY ,M.,P FISTER ,H.,AND G ROSS ,M.2007.Multi-scale capture of facial geometry and motion.In Proc.of SIGGRAPH ,33.

B OTSCH,M.,AND S ORKINE,O.2008.On linear variational sur-

face deformation methods.IEEE TVCG14,1,213–230.

B OTSCH,M.,P AULY,M.,W ICKE,M.,AND G ROSS,M.2007.

Adaptive space deformations based on rigid https://www.sodocs.net/doc/2d1867591.html,puter Graphics Forum26,3,339–347.

B YRD,R.,L U,P.,N OCEDAL,J.,AND Z HU,C.1995.A limited

memory algorithm for bound constrained optimization.SIAM J.

https://www.sodocs.net/doc/2d1867591.html,p.16,5,1190–1208.

C ARRANZA,J.,T HEOBALT,C.,M AGNOR,M.,AN

D S EIDEL,H.-

P.2003.Free-viewpoint video of human actors.In Proc.SIG-GRAPH,569–577.

DE A GUIAR,E.,T HEOBALT,C.,S TOLL,C.,AND S EIDEL,H.-P.

2007.Marker-less deformable mesh tracking for human shape and motion capture.In Proc.CVPR,IEEE,1–8.

DE A GUIAR,E.,T HEOBALT,C.,S TOLL,C.,AND S EIDEL,H.

2007.Marker-less3d feature tracking for mesh-based human motion capture.In Proc.ICCV HUMO07,1–15.

DE A GUIAR,E.,T HEOBALT,C.,T HRUN,S.,AND S EIDEL,H.-P.

2008.Automatic conversion of mesh animations into skeleton-based https://www.sodocs.net/doc/2d1867591.html,puter Graphics Forum(Proc.Eurograph-ics EG’08)27,2(4),389–397.

E INARSSON,P.,C HABERT,C.-F.,J ONES,A.,M A,W.-C.,L A-

MOND,B.,IM H AWKINS,B OLAS,M.,S YLWAN,S.,AND D E-BEVEC,P.2006.Relighting human locomotion with?owed re?ectance?elds.In Proc.EGSR,183–194.

G OESELE,M.,C URLESS,B.,AND S EITZ,S.M.2006.Multi-

view stereo revisited.In Proc.CVPR,2402–2409.

G ROSS,M.,W¨URMLIN,S.,N¨AF,M.,L AMBORAY,E.,S PAGNO,

C.,K UNZ,A.,K OLLER-M EIER,E.,S VOBODA,T.,G OOL,

L.V.,L ANG,S.,S TREHLKE,K.,M OERE, A.V.,AND S TAADT,O.2003.blue-c:a spatially immersive display and 3d video portal for telepresence.ACM TOG22,3,819–827.

K ANADE,T.,R ANDER,P.,AND N ARAYANAN,P.J.1997.Vir-tualized reality:Constructing virtual worlds from real scenes.

IEEE MultiMedia4,1,34–47.

K AZHDAN,M.,B OLITHO,M.,AND H OPPE,H.2006.Poisson surface reconstruction.In Proc.SGP,61–70.

L EORDEANU,M.,AND H EBERT,M.2005.A spectral technique for correspondence problems using pairwise constraints.In Proc.

ICCV.

L OWE,D.G.1999.Object recognition from local scale-invariant features.In Proc.ICCV,vol.2,1150ff.

M ATUSIK,W.,B UEHLER,C.,R ASKAR,R.,G ORTLER,S.,AND M C M ILLAN,L.2000.Image-based visual hulls.In Proc.SIG-GRAPH,369–374.

M ENACHE,A.,AND M ANACHE,A.1999.Understanding Motion Capture for Computer Animation and Video Games.Morgan Kaufmann.

M ITRA,N.J.,F LORY,S.,O VSJANIKOV,M.,G ELFAND,N.,AS, L.G.,AND P OTTMANN,H.2007.Dynamic geometry registra-tion.In Proc.SGP,173–182.

M OESLUND,T.B.,H ILTON,A.,AND K R¨UGER,V.2006.A survey of advances in vision-based human motion capture and https://www.sodocs.net/doc/2d1867591.html,put.Vis.Image Underst.104,2,90–126.M¨ULLER,M.,D ORSEY,J.,M C M ILLAN,L.,J AGNOW,R.,AND

C UTLER,B.2002.Stable real-time deformations.In Proc.of

SCA,ACM,49–54.

P ARAMOUNT,2007.Beowulf movie page.

https://www.sodocs.net/doc/2d1867591.html,/.

PARK, S. I., AND HODGINS, J. K. 2006. Capturing and animating skin deformation in human motion. ACM TOG (SIGGRAPH 2006) 25, 3 (Aug.).
POPPE, R. 2007. Vision-based human motion analysis: An overview. CVIU 108, 1.
ROSENHAHN, B., KERSTING, U., POWELL, K., AND SEIDEL, H.-P. 2006. Cloth X-ray: Mocap of people wearing textiles. In LNCS 4174: Proc. DAGM, 495–504.
SAND, P., MCMILLAN, L., AND POPOVIĆ, J. 2003. Continuous capture of skin deformation. ACM TOG 22, 3.
SCHOLZ, V., STICH, T., KECKEISEN, M., WACKER, M., AND MAGNOR, M. 2005. Garment motion capture using color-coded patterns. Computer Graphics Forum (Proc. Eurographics EG'05) 24, 3 (Aug.), 439–448.
SHINYA, M. 2004. Unifying measured point sequences of deforming objects. In Proc. of 3DPVT, 904–911.
SORKINE, O., AND ALEXA, M. 2007. As-rigid-as-possible surface modeling. In Proc. SGP, 109–116.
STARCK, J., AND HILTON, A. 2007. Surface capture for performance-based animation. IEEE CGAA 27, 3, 21–31.
STOLL, C., KARNI, Z., RÖSSL, C., YAMAUCHI, H., AND SEIDEL, H.-P. 2006. Template deformation for point cloud fitting. In Proc. SGP, 27–35.
SUMNER, R. W., AND POPOVIĆ, J. 2004. Deformation transfer for triangle meshes. In SIGGRAPH '04, 399–405.
VEDULA, S., BAKER, S., AND KANADE, T. 2005. Image-based spatio-temporal modeling and view interpolation of dynamic events. ACM Trans. Graph. 24, 2, 240–261.
WAND, M., JENKE, P., HUANG, Q., BOKELOH, M., GUIBAS, L., AND SCHILLING, A. 2007. Reconstruction of deforming geometry from time-varying point clouds. In Proc. SGP, 49–58.
WASCHBÜSCH, M., WÜRMLIN, S., COTTING, D., SADLO, F., AND GROSS, M. 2005. Scalable 3D video of dynamic scenes. In Proc. Pacific Graphics, 629–638.
WHITE, R., CRANE, K., AND FORSYTH, D. 2007. Capturing and animating occluded cloth. In ACM TOG (Proc. SIGGRAPH).
WILBURN, B., JOSHI, N., VAISH, V., TALVALA, E., ANTUNEZ, E., BARTH, A., ADAMS, A., HOROWITZ, M., AND LEVOY, M. 2005. High performance imaging using large camera arrays. ACM TOG 24, 3, 765–776.
XU, W., ZHOU, K., YU, Y., TAN, Q., PENG, Q., AND GUO, B. 2007. Gradient domain editing of deforming mesh sequences. In Proc. SIGGRAPH, ACM, 84ff.
YAMAUCHI, H., GUMHOLD, S., ZAYER, R., AND SEIDEL, H.-P. 2005. Mesh segmentation driven by Gaussian curvature. Visual Computer 21, 8-10, 649–658.
ZITNICK, C. L., KANG, S. B., UYTTENDAELE, M., WINDER, S., AND SZELISKI, R. 2004. High-quality video view interpolation using a layered representation. ACM TOG 23, 3, 600–608.