The SP- and SI-Frames Design for H.264/AVC

Marta Karczewicz and Ragip Kurceren, Member, IEEE

Abstract—This paper discusses two new frame types, SP-frames and SI-frames, defined in the emerging video coding standard known as ITU-T Rec. H.264 or ISO/IEC MPEG-4 Part 10 AVC. The main feature of SP-frames is that identical SP-frames can be reconstructed even when different reference frames are used for their prediction. This property allows them to replace I-frames in applications such as splicing, random access, and error recovery/resilience. We also include a description of SI-frames, which are used in conjunction with SP-frames. Finally, simulation results illustrating the coding efficiency of SP-frames are provided. It is shown that SP-frames have significantly better coding efficiency than I-frames while providing similar functionalities.

Index Terms—AVC, bitstream switching, error recovery, error resiliency, H.264, JVT, MPEG-4, random access, SI-frames, SP-frames, splicing.

I. INTRODUCTION

TO INCREASE compression efficiency and network friendliness, the emerging ITU-T Recommendation H.264, ISO/IEC MPEG-4 Part 10 AVC [1] video compression standard introduces an extensive set of new features. In this paper, we describe in detail two of these features, specifically, new frame types referred to as SP-frames [1]–[3] and SI-frames [1], [4].

SP-frames make use of motion-compensated predictive coding to exploit temporal redundancy in the sequence, similar to P-frames. The difference between SP- and P-frames is that SP-frames allow identical frames to be reconstructed even when they are predicted using different reference frames. Due to this property, SP-frames can be used instead of I-frames in such applications as bitstream switching, splicing, random access, fast forward, fast backward, and error resilience/recovery. At the same time, since SP-frames, unlike I-frames, utilize motion-compensated predictive coding, they require significantly fewer bits than I-frames to achieve similar quality. In some of the mentioned applications, SI-frames are used in conjunction with SP-frames. An SI-frame uses only spatial prediction, as an I-frame does, and still reconstructs the corresponding SP-frame identically even though that SP-frame uses motion-compensated prediction.

The remainder of the paper is organized as follows. In Section II, a review of frame types used in the existing standards is given, and we discuss how the features of SP-frames can be exploited in several example applications. Section III provides a description of SP- and SI-frame decoding and encoding processes. Section IV includes details of experiments and the relative performance improvement of SP-frames. Finally, Section V offers a summary and conclusions.

Manuscript received December 13, 2001; revised May 9, 2003.

The authors are with Nokia Research Center, Nokia Inc., Irving, TX 75039 USA.

Digital Object Identifier 10.1109/TCSVT.2003.814969

Fig. 1. Generic block diagram of decoding process.

II. MOTIVATION

In the existing video coding standards, such as MPEG-2, H.263, and MPEG-4, three main types of frames are defined. Each frame type exploits a different type of redundancy existing in video sequences and consequently results in a different amount of compression efficiency and different functionality that it can provide.

An intra-frame (or I-frame) is a frame that is coded exploiting only the spatial correlation of the pixels within the frame, without using any information from other frames. I-frames are utilized as a basis for decoding/decompression of other frames and provide access points to the coded sequence where decoding can begin.

A predictive-frame (or P-frame) is coded/compressed using motion prediction from a so-called reference frame, i.e., a past I- or P-frame available in an encoder and decoder buffer. Fig. 1 illustrates a generic decoding process for P- and I-frames. Finally, a bidirectional-frame (or B-frame) is coded/compressed using a prediction derived from an I-frame (P-frame) in its past or an I-frame (P-frame) in its future, or a combination of both. B-frames are not used as a reference for prediction of other frames.

Since, in a typical video sequence, adjacent frames are highly correlated, higher compression efficiency is achieved when using B- or P-frames instead of I-frames. On the other hand, temporal predictive coding employed in P- and B-frames introduces temporal correlation within the coded bitstream, i.e., B- or P-frames cannot be decoded without correctly decoding their reference frames in the future and/or past. In cases when a reference frame used in an encoder and a reference frame used in a decoder are not identical, either due to errors during transport or due to some intentional action on the server side, the reconstructed values of the subsequent frames predicted from such a reference frame are different in the encoder than in the decoder. This mismatch would not only be confined to a single frame but would further propagate in time due to the motion-compensated coding.

In H.264/AVC, I-, P-, and B-frames have been extended with new coding features, which lead to a significant increase in coding efficiency. For example, H.264/AVC allows using more than one prior coded frame as a reference for P- and B-frames. Furthermore, in H.264/AVC, P-frames and B-frames can use prediction from subsequent frames [5]. These new features are described in detail elsewhere in this Special Issue. Additionally, two new types of frames have been defined, namely, SP-frames [2], [3] and SI-frames [4]. The method of coding as defined for SP- and SI-frames allows obtaining frames having identical reconstructed values even when different reference frames are used for their prediction.

In the following, we describe how these features of SP-frames and SI-frames can be exploited in specific applications [6].

A. Bitstream Switching

Video streaming has emerged as one of the essential applications over the fixed internet and, in the near future, over 3G wireless networks. The best-effort nature of today's networks causes variations of the effective bandwidth available to a user due to the changing network conditions. The server should then scale the bit rate of the compressed video transmitted to the receiver to accommodate these variations. In the case of conversational services that are characterized by real-time encoding and point-to-point delivery, this can be achieved by adjusting, on the fly, source encoding parameters, such as a quantization parameter or a frame rate, based on the network feedback. In typical streaming scenarios, when an already encoded video bitstream is to be sent to a client, the above solution cannot be applied. The simplest way of achieving bandwidth scalability in the case of pre-encoded sequences is by representing each sequence using multiple and independent streams of different bandwidth and quality. The server then dynamically switches between the streams to accommodate the variations of the bandwidth available to the client.

Assume that we have multiple bitstreams generated independently with different encoding parameters, corresponding to the same video sequence. Let {P1,n} and {P2,n} denote the sequences of decoded frames from bitstreams 1 and 2, respectively. Since the encoding parameters are different for each bitstream, the reconstructed frames from different bitstreams at the same time instant, for example, frames P1,n and P2,n, will not be identical. Now let us assume that the server initially sends bitstream 1 up to time n, after which it starts sending bitstream 2, i.e., the decoder would have received P1,1, ..., P1,n−1, P2,n, P2,n+1, .... In this case, the frame P2,n−1 used to obtain the prediction of P2,n is not received; the frame P1,n−1 is used as a reference instead. Since P1,n−1 is not identical to P2,n−1, a mismatch is introduced in the reconstruction of P2,n, which will further propagate in time due to motion-compensated coding.
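The mismatch described above can be reproduced with a toy predictive coder. The following Python sketch is purely illustrative (each "frame" is a single number, the prediction is simply the previous reconstruction, and the quantizer step sizes are arbitrary; real codecs use block transforms and motion compensation). It encodes one source twice with different quantizer steps and then decodes a stream that switches from bitstream 1 to bitstream 2:

```python
def encode(frames, step):
    """Toy DPCM encoder: quantize each frame's prediction residual."""
    levels, prev = [], 0.0
    for f in frames:
        level = round((f - prev) / step)   # quantize residual vs. previous reconstruction
        levels.append(level)
        prev += level * step               # track the decoder's reconstruction
    return levels

def decode(levels, step, prev=0.0):
    recs = []
    for level in levels:
        prev += level * step               # prediction + dequantized residual
        recs.append(prev)
    return recs

frames = [10.0, 12.0, 15.0, 15.5, 18.0, 20.0]
b1 = encode(frames, 2.0)                   # "bitstream 1": coarse quantizer
b2 = encode(frames, 0.5)                   # "bitstream 2": fine quantizer

n = 3                                      # switch after n frames of bitstream 1
r1 = decode(b1[:n], 2.0)
r_switched = decode(b2[n:], 0.5, prev=r1[-1])   # bitstream 2 residuals, wrong reference
r_target = decode(b2, 0.5)[n:]                  # what bitstream 2's encoder assumed
drift = [abs(a - b) for a, b in zip(r_switched, r_target)]
# drift is nonzero for every decoded frame after the switch
```

Because the residuals of bitstream 2 are applied to a reference produced by bitstream 1, every subsequent reconstructed frame carries the mismatch, mirroring the propagation just described.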

In the prior video encoding standards, perfect (mismatch-free) switching between bitstreams is possible only at frames which do not use any information prior to their location, i.e., at I-frames. Furthermore, by placing I-frames at fixed (e.g., 1-s) intervals, VCR functionalities, such as random access or "Fast Forward" and "Fast Backward" (increased playback rate) when streaming a video content, are achieved. The user may skip a portion of a video sequence and restart playing at any I-frame location. Similarly, the increased playback rate can be achieved by transmitting only I-frames. The drawback of using I-frames in these applications is that, since I-frames do not exploit any temporal redundancy, they require a much larger number of bits than P-frames at the same quality.

From the properties of SP-frames, note that identical SP-frames can be obtained even when they are predicted using different reference frames. This feature can be exploited in bitstream switching as follows. Fig. 2 depicts an example of how to utilize SP-frames to switch between different bitstreams. Again assume that there are two bitstreams corresponding to the same sequence encoded at different bit rates and/or at different temporal resolutions. Within each encoded bitstream, SP-frames are placed at the locations at which switching from one bitstream to another will be allowed (the frames marked in Fig. 2). These SP-frames shall be referred to as primary SP-frames in what follows. Furthermore, for each primary SP-frame, a corresponding secondary SP-frame is generated, which has identical reconstructed values to the primary SP-frame. Such a secondary SP-frame is sent only during bitstream switching. In Fig. 2, the secondary SP-frame is predicted from the frames of bitstream 1 while reconstructing identically to the corresponding primary SP-frame of bitstream 2, and it will be transmitted only when switching from bitstream 1 to bitstream 2.

KARCZEWICZ AND KURCEREN: SP- AND SI-FRAMES DESIGN FOR H.264/AVC

Fig. 3. Splicing, random access using SI-frames.

If one of the bitstreams has a lower temporal resolution, e.g., 1 fps, then this bitstream can be used to achieve fast-forward functionality. Specifically, decoding from the bitstream with the lower temporal resolution and then switching to the bitstream with the normal frame rate would provide such functionality.

B. Splicing and Random Access

The bitstream-switching example discussed earlier considers bitstreams representing the same sequence of images. However, this is not necessarily the case for other applications where bitstream switching is needed. Examples include:

- switching between bitstreams arriving from different cameras capturing the same event but from different perspectives, or cameras placed around a building for surveillance;
- switching to local/national programming or insertion of commercials in a TV broadcast, video bridging, etc.

Splicing refers to the process of concatenating encoded bitstreams and includes the examples discussed earlier.

When switching occurs between bitstreams representing different sequences, temporal prediction of the secondary frame from the previously sent bitstream becomes ineffective, and this affects the encoding of secondary frames. In this case, the secondary representation is encoded as an SI-frame which, as described later, uses only spatial prediction while having identical reconstructed values as the corresponding primary SP-frame (Fig. 4). The server can then send the SI-frame representation instead of the secondary SP-frame.

IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, VOL. 13, NO. 7, JULY 2003

with some predefined ratio of SP-slices. Then, during transport, instead of some of the SP-slices, their secondary representation, that is, SI-slices, is sent. The number of SI-slices that should be sent can be calculated similarly to the real-time encoding/delivery approach.

E. Video Redundancy Coding

SP-frames have other uses in applications in which they do not act as replacements of I-frames. Video redundancy coding (VRC) can be given as an example. "The principle of the VRC method is to divide the sequence of pictures into two or more threads in such a way that all camera pictures are assigned to one of the threads in a round-robin fashion. Each thread is coded independently. In regular intervals, all threads converge into a so-called sync frame. From this sync frame, a new thread series is started. If one of these threads is damaged because of a packet loss, the remaining threads stay intact and can be used to predict the next sync frame. It is possible to continue the decoding of the damaged thread, which leads to slight picture degradation, or to stop its decoding, which leads to a drop of the frame rate. Sync frames are always predicted out of one of the undamaged threads. This means that the number of transmitted I-frames can be kept small, because there is no need for complete re-synchronization." For the sync frame, more than one representation (P-frame) is sent, each one using a reference frame from a different thread. Due to the usage of P-frames, these representations are not identical. Therefore, a mismatch is introduced when some of the representations cannot be decoded and their counterparts are used when decoding the following threads. The use of SP-frames as sync frames eliminates this problem.

III. DECODING AND ENCODING PROCESSES FOR SP- AND SI-FRAMES

In this section, we provide a detailed description of decoding and encoding processes for nonintra blocks in SP- and SI-frames. For intra blocks in SP- and SI-frames, a process identical to that of I-frames is applied [1]. As noted earlier, SP-frames can be further classified as primary SP-frames, placed within a bitstream, and secondary SP-frames, sent during switching.

A. Decoding Process for Secondary SP- and SI-Frames

Fig. 5 illustrates the decoding process for secondary SP- and SI-frames. A predicted block is first formed, using motion-compensated prediction for a secondary SP-frame or spatial prediction for an SI-frame. The transform coefficients of the predicted block are computed and quantized, and the resulting quantized prediction coefficients are added to the received quantized prediction error coefficients to calculate the quantized reconstruction coefficients. The image is reconstructed by dequantization and inverse transform of these coefficients.

Since during SP-frame (SI-frame) decoding, unlike during P-frame (I-frame) decoding (compare Figs. 1 and 5), the transform is applied to predicted blocks and the resulting coefficients are quantized, the coding efficiency of SP-frames (SI-frames) is expected to be worse than that of P-frames (I-frames), as will be illustrated in Section IV. The quantization applied to the predicted block coefficients and the prediction error coefficients in the secondary SP- and SI-frame decoding scheme described above has to be the same; more specifically, both should use the same quantization parameter. To improve coding efficiency, the following improved decoding structure [3] is defined, which gives the flexibility to use a different quantization parameter for the predicted block coefficients than for the prediction error coefficients.
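The quantized-domain addition just described can be sketched as follows. This is an illustrative scalar model we introduce here, not part of the standard: the transform and its inverse are omitted, and `step` stands for the step size implied by the shared quantization parameter. Because the prediction is quantized before the addition, the reconstruction depends only on the sum of integer levels, which the encoder can fix exactly no matter which reference produced the prediction:

```python
import numpy as np

def quantize(c, step):
    # uniform scalar quantizer: coefficients -> integer levels
    return np.rint(c / step).astype(int)

def dequantize(levels, step):
    return levels * step

def decode_secondary(pred, err_levels, step):
    """Secondary SP/SI decoding: quantize the (transformed) prediction and
    add the received error levels in the quantized domain."""
    l_pred = quantize(pred, step)
    l_rec = l_pred + err_levels          # integer-level addition
    return dequantize(l_rec, step)       # the inverse transform would follow here

step = 2.0
target = np.array([9, -2, 0, 4])            # quantized reconstruction levels to hit
pred_a = np.array([15.2, -6.1, 2.4, 3.8])   # prediction from reference A
pred_b = np.array([11.1, -0.9, 1.2, 7.3])   # prediction from reference B

# Encoder: pick error levels so that l_pred + err_levels == target.
err_a = target - quantize(pred_a, step)
err_b = target - quantize(pred_b, step)

rec_a = decode_secondary(pred_a, err_a, step)
rec_b = decode_secondary(pred_b, err_b, step)
# rec_a and rec_b are identical although the references differ
```

If the prediction and error coefficients were quantized with different parameters, their integer levels could not be meaningfully added, which is why this scheme requires a single shared quantization parameter.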

B. Decoding Process for Primary SP-Frames

Fig. 6. Generic block diagram of decoding process for primary SP-frames.

Fig. 6 illustrates a block diagram of the improved SP-frame decoder used for primary SP-frames. Similar to the earlier case, a predicted block is first formed and its transform coefficients are computed. Then, the quantized prediction error coefficients that are received from the encoder are dequantized using a quantization parameter PQP and added to the predicted block transform coefficients. Note that in the earlier case the quantized coefficients were added, whereas in this case the summation is performed on the transform coefficients themselves. The resulting reconstruction coefficients are quantized and dequantized using a quantization parameter SPQP, and the inverse transform is applied to the resulting coefficients. The quantization parameter SPQP used for the reconstruction coefficients is not necessarily the same as the quantization parameter PQP used for the prediction error coefficients. Therefore, in this case, a finer quantization parameter, introducing smaller distortion, can be used for the predicted block coefficients than for the prediction error coefficients, which will result in smaller reconstruction error.
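A corresponding sketch of the improved primary SP-frame decoding, in the same illustrative scalar model (transform omitted; `pqp_step` and `spqp_step` stand for the step sizes implied by PQP and SPQP, both names ours), shows the order of operations: dequantize the error with PQP, sum in the coefficient domain, then quantize the reconstruction with SPQP:

```python
import numpy as np

def decode_primary(pred, err_levels, pqp_step, spqp_step):
    """Primary SP-frame decoding: dequantize the error with PQP, sum in the
    coefficient domain, then quantize/dequantize the result with SPQP."""
    c_pred = pred                                    # transform of the prediction (identity here)
    c_rec = c_pred + err_levels * pqp_step           # sum of transform coefficients
    l_rec = np.rint(c_rec / spqp_step).astype(int)   # quantize reconstruction with SPQP
    return l_rec * spqp_step                         # the inverse transform would follow here

pred = np.array([15.2, -6.1, 2.4, 3.8])
err = np.array([1, 0, -2, 3])

coarse = decode_primary(pred, err, pqp_step=4.0, spqp_step=4.0)
fine = decode_primary(pred, err, pqp_step=4.0, spqp_step=1.0)
# a finer SPQP step keeps the reconstruction closer to the coefficient sum
```

With `spqp_step` smaller than `pqp_step`, the prediction contributes at its own, finer precision, which is the gain of this structure over the scheme of the previous subsection.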

The SP- and SI-frame decoders discussed earlier in this section are general decoders and can easily be incorporated into other coding standards. The specific details as to how they are implemented in H.264/AVC can be found in [1].

C. SP-Frame and SI-Frame Encoder

In this section, we first present the encoding process for primary SP-frames and later for secondary SP- and SI-frames. The following applies to the encoding of nonintra blocks in SP- and SI-frames. For intra blocks in SP- and SI-frames, the identical process to that used for I-frames is applied [1].

Fig. 7 illustrates a general schematic block diagram of an example encoder corresponding to the primary SP-frames. First, a predicted block is formed and its transform coefficients are computed; these are then subtracted from the transform coefficients of the original image. The results of the subtraction represent the prediction error coefficients. The prediction error coefficients are quantized using PQP, and the resulting coefficients are sent to the multiplexer together with motion vector information. The decoding process follows the identical steps as described earlier.

Fig. 7. Generic block diagram of encoding process for nonintra blocks in SP-frames.

In the following, we illustrate with an example how SP-frames provide the functionality mentioned earlier, i.e., identical frames are reconstructed even when different reference frames are used for their prediction. Consider a primary SP-frame, whose quantized reconstruction coefficients are obtained with the quantization parameter SPQP and whose image is obtained by inverse transform of the dequantized reconstruction coefficients (see Fig. 6). Now assume that we would like to generate a secondary representation of this primary SP-frame having identical reconstructed values but using a different predicted frame. The problem becomes finding new prediction error coefficients that would identically reconstruct the frame. On the encoder side, the transform coefficients of the new predicted frame are computed and quantized with SPQP, and the resulting levels are subtracted from the quantized reconstruction coefficients of the primary SP-frame; the differences are the prediction error coefficients of the secondary frame. On the decoder side, according to the decoding process for secondary SP-frames, as illustrated in Fig. 5, the decoder forms the reconstruction by first applying the transform to the predicted block and quantizing the resulting coefficients with SPQP. These levels, identical to the ones on the encoder side, are added to the received prediction error coefficients. The resulting sum is equal to the quantized reconstruction coefficients of the primary SP-frame, and hence the reconstructed values are identical. When the predicted block is formed by intra-prediction, this case becomes converting a primary SP-frame with motion-compensated prediction into a secondary SI-frame with only spatial prediction. As shown earlier, this property has major implications in random access and error recovery/resilience. To achieve identical reconstruction, the quantization parameter used for a secondary SP-frame should be equal to the quantization parameter SPQP used for the predicted frame in a primary SP-frame. That means that using a finer quantization parameter value for SPQP, although it improves the coding efficiency of the primary SP-frames placed within the bitstream, might result in larger frame sizes for the secondary SP-frames. Since the secondary representations are sent only during switching

or random access, the choice of the SPQP value is application dependent. For example, when SP-frames are used to facilitate random access, one can expect that the SP-frames placed within a single bitstream will have the major influence on compression efficiency, and therefore the SPQP value should be small. On the other hand, when SP-frames are used for streaming rate control, the SPQP value should be kept close to PQP, since the secondary SP-frames sent during switching from one bitstream to another will have a large share of the overall bandwidth.

Fig. 8. Illustration of coding efficiency of SP-frames using different SPQP values; I- and P-frame performance is also included.
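The tradeoff can be made concrete in the same illustrative scalar model (transform omitted; step sizes, variable names, and coefficient values are arbitrary choices of ours): a finer SPQP step lets the primary reconstruction track its prediction more closely, but the integer error levels that a secondary SP-frame must transmit during switching grow:

```python
import numpy as np

def q(c, step):
    # uniform scalar quantizer: coefficients -> integer levels
    return np.rint(c / step).astype(int)

pred_own = np.array([15.2, -6.1, 2.4, 3.8])    # prediction available inside the bitstream
pred_other = np.array([11.1, -0.9, 1.2, 7.3])  # prediction from the other bitstream

def secondary_cost(spqp_step):
    """Sum of |error levels| a secondary SP-frame must code for this block."""
    l_rec = q(pred_own, spqp_step)             # primary reconstruction levels (error term omitted)
    err_levels = l_rec - q(pred_other, spqp_step)
    return int(np.abs(err_levels).sum())

coarse_cost = secondary_cost(2.0)              # coarse SPQP step
fine_cost = secondary_cost(0.5)                # fine SPQP step: larger levels to code
```

In this sketch, halving the step size several times over multiplies the magnitude of the secondary error levels, which is the size penalty for secondary SP-frames discussed above.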

IV. RESULTS

In this section, we provide simulation results to illustrate the coding efficiency of SP-frames. First, we compare the coding efficiency of SP-frames with I- and P-frames. The results are obtained using TML 8.7 software [13] with five reference frames in UVLC mode with the rate-distortion optimization option turned on. Later, a comparison of SP-frames with S-frames [11] is provided; these results are repeated from [12].

The results reported here are for some of the standard sequences used in JVT contributions with QCIF resolution and encoded at 10 fps. Similar results are observed for other sequences, and further results can be found in [3].

1) Coding Efficiency of SP-Frames: Fig. 8 gives the comparison of coding efficiency of I-, P-, and SP-frames, in terms of their PSNR as a function of bit rate. These results are generated by encoding each frame of a sequence as either an I-, P-, or SP-frame, with the exception of the first frame, which is always an I-frame. We also include in Fig. 8 the results when SP-frames are coded using different values of SPQP: in the first case, SPQP is the same as PQP; then SPQP is equal to 3; and in the last case, SPQP is equal to PQP-6. It can be observed in Fig. 8 that SP-frames have lower coding efficiency than P-frames and significantly higher coding efficiency than I-frames. Note, however, that SP-frames provide functionalities that are usually achieved only with I-frames. As expected, the SP-frame performance improves with decreasing SPQP versus PQP.

2) Comparison With S-Frames: Next, we compare SP-frames with S-frames [11]. When switching occurs at a given frame, an S-frame is generated by encoding that frame of the target bitstream as a P-frame with a fine quantization parameter, using as reference the previously decoded frames of the bitstream being switched from. Two bitstreams were encoded with QP = 19 and QP = 13, and two additional bitstreams were created by switching between them. In each case, we switch from one of these bitstreams to the other, but at different frames, namely at frames number 10 and 20.


Fig. 10. Illustration of switching between bitstreams encoded with QP = 19 and 13 using S-frames; QP for the S-frame is equal to 3.

Fig. 11. Multiple switching between bitstreams encoded with QP = 19 and 13 when using S-frames; QP for the S-frame is equal to 3.

As can be seen from Fig. 10, the PSNR values of the reconstructed frames after the switch diverge from the values of the frames from the "target" bitstream, i.e., the bitstream that we are switching to. Moreover, the drift becomes more pronounced when multiple switches occur, as illustrated in Fig. 11. In Fig. 11, the drift becomes larger than 1.5 dB after the last switch between bitstreams. Using an even smaller quantization parameter for S-frames could reduce the drift; however, that would increase S-frame sizes, which, as will be shown below, are already substantial.

In this section, we present a brief comparison of SP- and S-frames. Tables I and II list the average frame sizes of S- and SP-frames when switching to each of the two bitstreams. Here, SP-frames refer to the secondary SP-frames that would be used during switching.


TABLE IV
PSNR AND TOTAL BITS OVER 100 FRAMES WHEN THERE IS NO SWITCHING, FOR THE I-, S-, AND SP-FRAME APPROACHES

Similarly, the I-frame method requires 1.3 times more bits than the SP-frame one.

In Table IV, we further illustrate the performance of each scheme when there is no switching between bitstreams. It can be seen that in this case the performance of the I-frame approach is significantly lower than that of the S- and SP-frame methods. The S-frame approach has the best coding efficiency, which is equal to the P-frame performance. The difference between the S-frame and SP-frame coding efficiency is, however, quite small. Furthermore, the SP-frame efficiency can be improved by using smaller values of SPQP, as discussed earlier, but only at the expense of increasing the secondary SP-frame sizes, which would approach the S-frame sizes. Nevertheless, even in this case, SP-frames will provide drift-free switching.

V. SUMMARY

We have described two new frame types defined in H.264/AVC. These frame types, called SP- and SI-frames, can be used to provide functionalities such as bitstream switching, splicing, random access, error recovery, and error resiliency. We have presented the decoding process for primary SP-frames (frames placed within a single bitstream) and secondary SP- and SI-frames (frames used when switching from one bitstream to another). Usage of different quantization parameter values for the predicted block coefficients (SPQP) and for the prediction error coefficients (PQP) introduces a tradeoff between the coding efficiency of primary and secondary SP-frames. The lower the value of SPQP with respect to PQP, the higher the coding efficiency of a primary SP-frame, while, on the other hand, the larger the number of bits required when switching to this frame.

Later, we showed how secondary SP- and SI-frames can be encoded such that identical frames can be reconstructed even when different reference frames are used for their prediction. We have compared the coding efficiency of SP-frames against I- and P-frames. SP-frames have significantly better coding efficiency than I-frames while providing similar functionalities. Finally, we have shown the resulting performances of different schemes that are being used for switching between bitstreams. The S-frame approach introduces drift, which becomes significant when there are multiple switches between bitstreams. SP-frames, on the other hand, provide drift-free switching between bitstreams, and their sizes are considerably smaller than those of S-frames. We have also included results of the periodic intra-coding approach. It is again noted that SP-frames provide better PSNR versus bit-rate performance.

REFERENCES

[1] T. Wiegand and G. Sullivan, "Study of final committee draft of joint video specification (ITU-T Rec. H.264 / ISO/IEC 14496-10 AVC)," in 6th Meeting, Awaji Island, JP, Dec. 5–13, 2002, Doc. JVT-G050d2, Joint Video Team (JVT) of ISO/IEC MPEG & ITU-T VCEG (ISO/IEC JTC1/SC29/WG11 and ITU-T SG16 Q.6).
[2] M. Karczewicz and R. Kurceren, "A proposal for SP-frames," in ITU-T Video Coding Experts Group Meeting, Eibsee, Germany, Jan. 9–12, 2001, Doc. VCEG-L-27.
[3] M. Karczewicz and R. Kurceren, "Improved SP-frame encoding," in ITU-T Video Coding Experts Group Meeting, Austin, TX, Apr. 2–4, 2001, Doc. VCEG-M-73.
[4] R. Kurceren and M. Karczewicz, "New macroblock modes for SP-frames," in ITU-T Video Coding Experts Group Meeting, Pattaya, Thailand, Dec. 4–6, 2001, Doc. VCEG-O-47.
[5] M. Hannuksela, "Prediction from temporally subsequent pictures," in ITU-T Video Coding Experts Group Meeting, Portland, OR, Aug. 22–25, 2000, Doc. VCEG-K-38.
[6] R. Kurceren and M. Karczewicz, "SP-frame demonstrations," in ITU-T Video Coding Experts Group Meeting, Santa Barbara, CA, Sept. 24–27, 2001, Doc. VCEG-N-42.
[7] S. Wenger and G. Côté, "Intra-macroblock refresh in packet (picture) lossy scenarios," Whistler, BC, Canada, July 21–24, 1998, Doc. Q15-E-15.
[8] G. Côté, S. Wenger, and M. Gallant, "Intra-macroblock refresh in packet (picture) lossy scenarios," Whistler, BC, Canada, July 21–24, 1998, Doc. Q15-E-37.
[9] S. Wenger, "H.26L error resilience experiments: First results," Osaka, Japan, May 16–18, 2000, Doc. Q15-J-53.
[10] T. Stockhammer, G. Liebl, T. Oelbaum, T. Wiegand, and D. Marpe, "H.26L simulation results for common test conditions for RTP/IP over 3GPP/3GPP2," in ITU-T Video Coding Experts Group Meeting, Santa Barbara, CA, Sept. 24–27, 2001, Doc. VCEG-N-38.
[11] N. Farber and B. Girod, "Robust H.263 compatible video transmission for mobile access to video servers," in Proc. Int. Conf. Image Processing (ICIP'97), Santa Barbara, CA, Oct. 1997.
[12] R. Kurceren and M. Karczewicz, "Further results for SP-frames," in ITU-T Video Coding Experts Group Meeting, Austin, TX, Apr. 2–4, 2001, Doc. VCEG-M-38.
[13] TML 8.7 Software. [Online]. Available: ftp://standard.pictel.com/video-site/h26l/
[14] S. Wenger, "Simulation results for H.263+ error resilience modes K, R, N on the Internet," ITU-T SG16, Question 15, Doc. Q15-D-17, Apr. 7, 1998.

Marta Karczewicz received the M.S. degree in electrical engineering in 1994 and the Dr. Technol. degree in 1997 from Tampere University of Technology (TUT), Tampere, Finland.

During 1994–1996, she was a Researcher in the Signal Processing Laboratory of TUT. Since 1996, she has been with the Visual Communication Laboratory, Nokia Research Center, Irving, TX, where she is currently a Senior Research Manager. Her research interests include image compression, communication, and computer graphics.

Ragip Kurceren (S'98–M'01) received the M.S. and Ph.D. degrees in electrical, computer, and systems engineering from Rensselaer Polytechnic Institute, Troy, NY, in 1996 and 2001, respectively.

From 1995 to 2000, he was a Research Assistant with the Center for Image Processing Research, Rensselaer Polytechnic Institute. Since July 2000, he has been with Nokia Research Center, Irving, TX. His research interests include digital image and video processing, including compression and transmission, and multimedia adaptation.

Dr. Kurceren is a co-recipient of the Best Paper Award from the IEEE Packet Video Workshop 2000.
