An Adaptive Large Neighborhood Search Heuristic for the Pickup and Delivery Problem with Time Windows

Stefan Ropke, David Pisinger

8th August 2005

Abstract

The pickup and delivery problem with time windows is the problem of serving a number of transportation requests using a limited amount of vehicles. Each request involves moving a number of goods from a pickup location to a delivery location. Our task is to construct routes that visit all locations such that corresponding pickups and deliveries are placed on the same route and such that a pickup is performed before the corresponding delivery. The routes must also satisfy time window and capacity constraints.

This paper presents a heuristic for the problem based on an extension of the Large Neighborhood Search heuristic previously suggested for solving the vehicle routing problem with time windows. The proposed heuristic is composed of a number of competing sub-heuristics which are used with a frequency corresponding to their historic performance. This general framework is denoted Adaptive Large Neighborhood Search.

The heuristic is tested on more than 350 benchmark instances with up to 500 requests. It is able to improve the best known solutions from the literature for more than 50% of the problems.

The computational experiments indicate that it is advantageous to use several competing sub-heuristics instead of just one. We believe that the proposed heuristic is very robust and is able to adapt to various instance characteristics.

Keywords: Pickup and Delivery Problem with Time Windows, Large Neighborhood Search, Simulated Annealing, Metaheuristics

Introduction

In the considered variant of the pickup and delivery problem with time windows (PDPTW), we are given a number of requests and vehicles. A request consists of picking up goods at one location and delivering these goods to another location. Two time windows are assigned to each request: a pickup time window that specifies when the goods can be picked up and a delivery time window that tells when the goods can be dropped off. Furthermore, service times are associated with each pickup and delivery. The service times indicate how long it will take for the pickup or delivery to be performed. A vehicle is allowed to arrive at a location before the start of the time window of the location, but the vehicle must then wait until the start of the time window before initiating the operation. A vehicle may never arrive at a location after the end of the time window of the location.

Each request is assigned a set of feasible vehicles. This can for example be used to model situations where some vehicles cannot enter a certain location because of the dimensions of the vehicle.

Each vehicle has a limited capacity, and it starts and ends its duty at given locations called start and end terminals. The start and end locations do not need to be the same, and two vehicles can have different start and end terminals. Furthermore, each vehicle is assigned a start and end time. The start time indicates when the vehicle must leave its start location, and the end time denotes the latest allowable arrival at its end location. Note that the vehicle leaves its depot at the specified start time even though this may introduce a waiting time at the first location visited.

Our task is to construct valid routes for the vehicles. A route is valid if time windows and capacity constraints are obeyed along the route, each pickup is served before the corresponding delivery, corresponding pickups and deliveries are served on the same route, and the vehicle only serves requests it is allowed to serve. The routes should be constructed such that they minimize the cost function to be described below.

As the number of vehicles is limited, we might encounter situations where some requests cannot be assigned to a vehicle. These requests are placed in a virtual request bank. In a real world situation it is up to a human operator to decide what to do with such requests. The operator might for example decide to rent extra vehicles in order to serve the remaining requests.

The objective of the problem is to minimize a weighted sum consisting of the following three components: 1) the sum of the distances traveled by the vehicles; 2) the sum of the time spent by each vehicle, where the time spent by a vehicle is defined as its arrival time at the end terminal minus its start time (which is given a priori); 3) the number of requests in the request bank. The three terms are weighted by the coefficients α, β and γ, respectively. Normally a high value is assigned to γ in order to serve as many requests as possible. A mathematical model is presented in Section 1 to define the problem precisely.

The problem was inspired by a real life vehicle routing problem related to the transportation of raw materials and goods between production facilities of a major Danish food manufacturer. For confidentiality reasons, we are not able to present any data about the real life problem that motivated this research.

The problem is NP-hard as it contains the traveling salesman problem as a special case. The objective of this paper is to develop a method for finding good but not necessarily optimal solutions to the problem described above. The developed method should preferably be reasonably fast, robust and able to handle large problems. Thus it seems fair to turn to heuristic methods.

The next paragraphs survey recent work on the PDPTW. Although none of the references mentioned below consider exactly the same problem as ours, they all face the same core problem.

Nanry and Barnes [15] are among the first to present a metaheuristic for the PDPTW. Their approach is based on a Reactive Tabu Search algorithm that combines several standard neighborhoods. In order to test the heuristic, Nanry and Barnes create PDPTW instances from a set of standard VRPTW problems proposed by Solomon [26]. The heuristic is tested on instances with up to 50 requests. Li and Lim [11] use a hybrid metaheuristic to solve the problem. The heuristic combines Simulated Annealing and Tabu Search. Their method is tested on the 9 largest instances from Nanry and Barnes [15], and they consider 56 new instances based on Solomon's VRPTW problems [26]. Lim, Lim and Rodrigues [12] apply "squeaky wheel" optimization and local search to the PDPTW. Their heuristic is tested on the set of problems proposed by Li and Lim [11]. Lau and Liang [10] also apply Tabu Search to the PDPTW, and they describe several construction heuristics for the problem. Special attention is given to how test problems can be constructed from VRPTW instances.

Recently, Bent and Van Hentenryck [2] proposed a heuristic for the PDPTW based on Large Neighborhood Search. The heuristic was tested on the problems proposed by Li and Lim [11]. The heuristic by Bent and Van Hentenryck is probably the most promising metaheuristic for the PDPTW proposed so far.

Gendreau et al. [9] consider a dynamic version of the problem. An ejection chain neighborhood is proposed, and steepest descent and Tabu Search heuristics based on the ejection chain neighborhood are tested. The Tabu Search is parallelized, and the sequential and parallelized versions are compared.

Several column generation methods for the PDPTW have been proposed. These include both exact and heuristic methods. Dumas et al. [8] were the first to use column generation for solving the PDPTW. They propose a branch and bound method that is able to handle problems with up to 55 requests.

Xu et al. [29] consider a PDPTW with several extra real-life constraints, including multiple time windows, compatibility constraints and maximum driving time restrictions. The problem is solved using a column generation heuristic. The paper considers problem instances with up to 500 requests.

Sigurd et al. [24] solve a PDPTW problem related to the transportation of livestock. This introduces some extra constraints, such as precedence relations among the requests, meaning that some requests must be served before others in order to avoid the spread of diseases. The problem is solved to optimality using column generation. The largest problems solved contain more than 200 requests.

A recent survey of pickup and delivery problem literature was made by Desaulniers et al. [7].

The work presented in this paper is based on the Master's Thesis of Ropke [19]. In the papers by Pisinger and Ropke [16], [20] it is shown how the heuristic presented in this paper can be extended to solve a variety of vehicle routing problems, for example the VRPTW, the Multi Depot Vehicle Routing Problem and the Vehicle Routing Problem with Backhauls.

The rest of this paper is organized as follows: Section 1 defines the PDPTW formally; Section 2 describes the basic solution method in a general context; Section 3 describes how the solution method has been applied to the PDPTW and presents extensions to the method; Section 4 contains the results of the computational tests. The computational tests focus on comparing the heuristic to existing metaheuristics and on evaluating whether the refinements presented in Section 3 improve the heuristic; Section 5 concludes the paper.

1 Mathematical model

This section presents a mathematical model of the problem; it is based on the model proposed by Desaulniers et al. [7]. The mathematical model serves as a formal description of the problem. As we solve the problem heuristically, we do not attempt to write the model in integer-linear form.

A problem instance of the pickup and delivery problem contains n requests and m vehicles. The problem is defined on a graph: P = {1,···,n} is the set of pickup nodes and D = {n+1,···,2n} is the set of delivery nodes. Request i is represented by nodes i and i+n. K is the set of all vehicles, |K| = m. One vehicle might not be able to serve all requests; as an example, a request might require that the vehicle has a freezing compartment. K_i is the set of vehicles that are able to serve request i, and P_k ⊆ P and D_k ⊆ D are the sets of pickups and deliveries, respectively, that can be served by vehicle k; thus for all i and k: k ∈ K_i ⇔ i ∈ P_k ∧ n+i ∈ D_k. Requests where K_i ≠ K are called special requests. Define N = P ∪ D and N_k = P_k ∪ D_k. Let τ_k = 2n+k, k ∈ K, and τ'_k = 2n+m+k, k ∈ K, be the nodes that represent the start and end terminal, respectively, of vehicle k. The graph G = (V,A) consists of the nodes V = N ∪ {τ_1,···,τ_m} ∪ {τ'_1,···,τ'_m} and the arcs A = V × V. For each vehicle we have a subgraph G_k = (V_k, A_k), where V_k = N_k ∪ {τ_k} ∪ {τ'_k} and A_k = V_k × V_k. For each arc (i,j) ∈ A we assign a distance d_ij ≥ 0 and a travel time t_ij ≥ 0, and it is assumed that the travel times satisfy the triangle inequality: t_ij ≤ t_il + t_lj for all i,j,l ∈ V. For the sake of modeling we also assume that t_{i,n+i} + s_i > 0; this makes elimination of subtours and the pickup-before-delivery constraint easy to model.

Each node i ∈ V has a service time s_i and a time window [a_i, b_i]. The service time represents the time needed for loading and unloading, and the time window indicates when the visit at the particular location must start; a visit to node i can only take place between time a_i and b_i. A vehicle is allowed to arrive at a location before the start of the time window, but it has to wait until the start of the time window before the visit can be performed. For each node i ∈ N, l_i is the amount of goods that must be loaded onto the vehicle at the particular node: l_i ≥ 0 for i ∈ P and l_i = −l_{i−n} for i ∈ D. The capacity of vehicle k ∈ K is denoted C_k.

Four types of decision variables are used in the mathematical model. x_ijk, i,j ∈ V, k ∈ K, is a binary variable which is one if the arc between node i and node j is used by vehicle k and zero otherwise. S_ik, i ∈ V, k ∈ K, is a nonnegative integer that indicates when vehicle k starts the service at location i. L_ik, i ∈ V, k ∈ K, is a nonnegative integer that is an upper bound on the amount of goods on vehicle k after servicing node i. S_ik and L_ik are only well-defined when vehicle k actually visits node i. Finally, z_i, i ∈ P, is a binary variable that indicates whether request i is placed in the request bank. The variable is one if the request is placed in the request bank and zero otherwise.

A mathematical model is:

min α Σ_{k∈K} Σ_{(i,j)∈A} d_ij x_ijk + β Σ_{k∈K} (S_{τ'_k,k} − a_{τ_k}) + γ Σ_{i∈P} z_i    (1)

Subject to:

Σ_{k∈K_i} Σ_{j∈N_k} x_ijk + z_i = 1    ∀i ∈ P    (2)

Σ_{j∈V_k} x_ijk − Σ_{j∈V_k} x_{j,n+i,k} = 0    ∀k ∈ K, ∀i ∈ P_k    (3)

Σ_{j∈P_k∪{τ'_k}} x_{τ_k,j,k} = 1    ∀k ∈ K    (4)

Σ_{i∈D_k∪{τ_k}} x_{i,τ'_k,k} = 1    ∀k ∈ K    (5)

Σ_{i∈V_k} x_ijk − Σ_{i∈V_k} x_jik = 0    ∀k ∈ K, ∀j ∈ N_k    (6)

x_ijk = 1 ⇒ S_ik + s_i + t_ij ≤ S_jk    ∀k ∈ K, ∀(i,j) ∈ A_k    (7)

a_i ≤ S_ik ≤ b_i    ∀k ∈ K, ∀i ∈ V_k    (8)

S_ik ≤ S_{n+i,k}    ∀k ∈ K, ∀i ∈ P_k    (9)

x_ijk = 1 ⇒ L_ik + l_j ≤ L_jk    ∀k ∈ K, ∀(i,j) ∈ A_k    (10)

L_ik ≤ C_k    ∀k ∈ K, ∀i ∈ V_k    (11)

L_{τ_k,k} = L_{τ'_k,k} = 0    ∀k ∈ K    (12)

x_ijk ∈ {0,1}    ∀k ∈ K, ∀(i,j) ∈ A_k    (13)

z_i ∈ {0,1}    ∀i ∈ P    (14)

S_ik ≥ 0    ∀k ∈ K, ∀i ∈ V_k    (15)

L_ik ≥ 0    ∀k ∈ K, ∀i ∈ V_k    (16)

The objective function minimizes the weighted sum of the distance traveled, the sum of the time spent by each vehicle, and the number of requests not scheduled.

Constraint (2) ensures that each pickup location is visited or that the corresponding request is placed in the request bank. Constraint (3) ensures that the delivery location is visited if the pickup location is visited, and that the visit is performed by the same vehicle. Constraints (4) and (5) ensure that a vehicle leaves every start terminal and that a vehicle enters every end terminal. Together with constraint (6), this ensures that consecutive paths between τ_k and τ'_k are formed for each k ∈ K.

Constraints (7) and (8) ensure that S_ik is set correctly along the paths and that the time windows are obeyed. These constraints also make subtours impossible. Constraint (9) ensures that each pickup occurs before the corresponding delivery. Constraints (10), (11) and (12) ensure that the load variable is set correctly along the paths and that the capacity constraints of the vehicles are respected.

2 Solution method

Local search heuristics are often built on neighborhood moves that make small changes to the current solution, such as moving a request from one route to another or exchanging two requests, as in Nanry and Barnes [15] and Li and Lim [11]. These kinds of local search heuristics are able to investigate a huge number of solutions in a short time, but a solution is only changed very little in each iteration. It is our belief that such heuristics can have difficulties in moving from one promising area of the solution space to another when faced with tightly constrained problems, even when embedded in metaheuristics.

One way of tackling this problem is to allow the search to visit infeasible solutions by relaxing some constraints; see e.g. Cordeau et al. [5]. We take another approach: instead of using small "standard moves", we use very large moves that can potentially rearrange up to 30-40% of all requests in a single iteration. The price of doing this is that the computation time needed for performing and evaluating the moves becomes much larger compared to the smaller moves. The number of solutions evaluated by the proposed heuristic per time unit is only a fraction of the number that could be evaluated by a standard heuristic. Nevertheless, very good performance is observed in the computational tests, as demonstrated in Section 4.

The proposed heuristic is based on Large Neighborhood Search (LNS) introduced by Shaw [21]. The LNS heuristic has been applied to the VRPTW with good results (see Shaw [21], [22] and Bent and Van Hentenryck). The LNS heuristic is shown in pseudo code in Algorithm 1.

Algorithm 1 LNS

1 Function LNS(s ∈ {solutions}, q ∈ N)
2   solution s_best = s;
3   repeat
4     s' = s;
5     remove q requests from s';
6     reinsert removed requests into s';
7     if (f(s') < f(s_best)) then
8       s_best = s';
9     if accept(s', s) then
10      s = s';
11  until stop-criterion met
12  return s_best;
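The LNS loop shown in the pseudo code above can be sketched in a few lines of Python. This is a minimal illustration, not the authors' implementation; the remove, reinsert, cost and accept callbacks are hypothetical placeholders for problem-specific code:

```python
import random

def lns(initial, remove, reinsert, cost, accept, iterations=1000, q=10):
    """Minimal LNS loop. Hypothetical callbacks:
      remove(s, q)      -> (partial solution, list of removed requests)
      reinsert(s, reqs) -> solution with the removed requests reinserted
      cost(s)           -> objective value f(s)
      accept(new, cur)  -> acceptance test (e.g. simulated annealing)
    """
    best = current = initial
    for _ in range(iterations):
        partial, removed = remove(current, q)
        candidate = reinsert(partial, removed)
        if cost(candidate) < cost(best):   # track the incumbent (lines 7-8)
            best = candidate
        if accept(candidate, current):     # move to the candidate or stay (lines 9-10)
            current = candidate
    return best
```

The returned incumbent can never be worse than the initial solution, since it is only replaced by strictly better candidates.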

We use the term Adaptive Large Neighborhood Search (ALNS) to describe an algorithm using several large neighborhoods in an adaptive way. A more general presentation of the ALNS framework can be found in the subsequent paper [16].

3 LNS applied to PDPTW

This section describes how the LNS heuristic has been applied to the PDPTW. Compared to the LNS heuristics developed for the VRPTW and PDPTW by Shaw [21], [22] and Bent and Van Hentenryck [2], [4], the heuristic in this paper differs in several ways:

1. We use several removal and insertion heuristics during the same search, while the earlier LNS heuristics only used one method for removal and one method for insertion. The removal heuristics are described in Section 3.1 and the insertion heuristics are described in Section 3.2. The method for selecting which sub-heuristic to use is described in Section 3.3. The selection mechanism is guided by statistics gathered during the search, as described in Section 3.4. We use the term Adaptive Large Neighborhood Search (ALNS) heuristic for an LNS heuristic that uses several competing removal and insertion heuristics and chooses between them using statistics gathered during the search.

2. Simple and fast heuristics are used for the insertion of requests, as opposed to the more complicated branch and bound methods proposed by Shaw [21], [22] and Bent and Van Hentenryck [2], [4].

3. The search is embedded in a simulated annealing metaheuristic, where the earlier LNS heuristics used a simple descent approach. This is described in Section 3.5.

The present section also describes how the LNS heuristic can be used in a simple algorithm designed for minimizing the number of vehicles used to serve all requests. The vehicle minimization algorithm only works for homogeneous fleets without an upper bound on the number of vehicles available.

3.1 Request removal

This section describes three removal heuristics. All three heuristics take a solution and an integer q as input. The output of the heuristic is a solution where q requests have been removed. The heuristics Shaw removal and worst removal furthermore have a parameter p that determines the degree of randomization in the heuristic.

3.1.1 Shaw removal heuristic

This removal heuristic was proposed by Shaw in [21, 22]. In this section it is slightly modified to suit the PDPTW. The general idea is to remove requests that are somewhat similar, as we expect it to be reasonably easy to shuffle similar requests around and thereby create new, perhaps better solutions. If we choose to remove requests that are very different from each other, then we might not gain anything when reinserting them, as we might only be able to insert the requests at their original positions or in some bad positions. We define the similarity of two requests i and j using a relatedness measure R(i,j). The lower R(i,j) is, the more related the two requests are.

The relatedness measure used in this paper consists of four terms: a distance term, a time term, a capacity term and a term that considers the vehicles that can be used to serve the two requests. These terms are weighted by the weights φ, χ, ψ and ω, respectively. The relatedness measure is given by:

R(i,j) = φ(d_{A(i),A(j)} + d_{B(i),B(j)}) + χ(|T_{A(i)} − T_{A(j)}| + |T_{B(i)} − T_{B(j)}|) + ψ|l_i − l_j| + ω(1 − |K_i ∩ K_j| / min{|K_i|, |K_j|})    (17)

where A(i) and B(i) denote the pickup and delivery locations of request i, and T_v is the time at which service starts at location v in the current solution.

Algorithm 2 Shaw Removal

1 Function ShawRemoval(s ∈ {solutions}, q ∈ N, p ∈ R+)
2   request: r = a randomly selected request from s;
3   set of requests: D = {r};
4   while |D| < q do
5     r = a randomly selected request from D;
6     Array: L = an array containing all requests from s not in D;
7     sort L such that i < j ⇒ R(r, L[i]) < R(r, L[j]);
8     choose a random number y from the interval [0,1);
9     D = D ∪ {L[y^p·|L|]};
10  end while
11  remove the requests in D from s;

Algorithm 3 Worst Removal

1 Function WorstRemoval(s ∈ {solutions}, q ∈ N, p ∈ R+)
2   while q requests have not yet been removed do
3     Array: L = all planned requests i, sorted by descending cost(i, s);
4     choose a random number y from the interval [0,1);
5     request: r = L[y^p·|L|];
6     remove r from solution s;
7   end while

The term weighted by φ measures distance relatedness, the term weighted by χ measures temporal connectedness, the term weighted by ψ compares the capacity demand of the requests, and the term weighted by ω ensures that two requests get a high relatedness measure if only a few or no vehicles are able to serve both requests. It is assumed that d_ij, T_x and l_i are normalized such that 0 ≤ R(i,j) ≤ 2(φ+χ)+ψ+ω. This is done by scaling d_ij, T_x and l_i such that they only take on values from [0,1]. Notice that we cannot calculate R(i,j) if request i or j is placed in the request bank.
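As an illustration, the relatedness measure (17) could be computed as follows in Python. This is a sketch under the assumptions that the pickup node of request i is numbered i and its delivery node i+n, that distances, visit times and loads are already normalized to [0,1], and that K[i] is the set of feasible vehicles for request i; the weight values are made up for the example:

```python
def relatedness(i, j, n, d, T, load, K,
                phi=9.0, chi=3.0, psi=2.0, omega=5.0):
    """Sketch of R(i,j) from Eq. (17); indices and weights are illustrative."""
    dist = d[i][j] + d[i + n][j + n]                        # distance term
    time = abs(T[i] - T[j]) + abs(T[i + n] - T[j + n])      # temporal term
    cap = abs(load[i] - load[j])                            # capacity term
    veh = 1 - len(K[i] & K[j]) / min(len(K[i]), len(K[j]))  # shared-vehicle term
    return phi * dist + chi * time + psi * cap + omega * veh
```

Two requests served at similar times, at nearby locations, with similar loads and overlapping feasible vehicle sets receive a low R(i,j), i.e. they are highly related.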

The relatedness measure is used to remove requests in the same way as described by Shaw [21]. The procedure for removing requests is shown in pseudo code in Algorithm 2. The procedure initially chooses a random request to remove, and in the subsequent iterations it chooses requests that are similar to the already removed requests. A determinism parameter p ≥ 1 introduces some randomness in the selection of the requests (a low value of p corresponds to much randomness).

Notice that the sorting in line 7 can be avoided in an actual implementation of the algorithm, as it suffices to use a linear-time selection algorithm [6] in line 9.
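The rank-biased random choice in line 9, L[y^p·|L|], can be illustrated in Python as follows (a hypothetical helper, not code from the paper):

```python
import random

def biased_pick(ranked, p, rng=random):
    """Pick ranked[int(y**p * len(ranked))] for y uniform in [0,1).
    With p = 1 the choice is uniform; larger p biases the pick toward
    the front of a best-first sorted list (line 9 of the pseudo code)."""
    y = rng.random()
    return ranked[int(y ** p * len(ranked))]
```

Since y < 1 implies y**p < 1, the computed index is always valid; raising p concentrates the picks near index 0, i.e. on the most related requests.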

3.1.2 Random removal

The random removal algorithm simply selects q requests at random and removes them from the solution. The random removal heuristic can be seen as a special case of the Shaw removal heuristic with p = 1. We have implemented a separate random removal heuristic though, as it obviously can be implemented to run faster than the Shaw removal heuristic.

3.1.3 Worst removal

Given a request i served by some vehicle in a solution s, we define the cost of the request as cost(i,s) = f(s) − f_{−i}(s), where f_{−i}(s) is the cost of the solution without request i (the request is not moved to the request bank, but removed completely). It seems reasonable to try to remove requests with high cost and insert them at other places in the solution to obtain a better solution value; therefore we propose a removal heuristic that removes requests with high cost(i,s).

The worst removal heuristic is shown in pseudo code in Algorithm 3. It reuses some of the ideas from Section 3.1.1.

Notice that the removal is randomized, with the degree of randomization controlled by the parameter p as in Section 3.1.1. This is done to avoid situations where the same requests are removed over and over again.
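A possible Python sketch of this worst removal procedure; the cost_of callback and the reduction of a "solution" to a plain request list are hypothetical simplifications:

```python
import random

def worst_removal(requests, q, cost_of, p=3, rng=random):
    """Repeatedly rank the planned requests by descending cost(i, s) and
    remove one chosen with the randomized rule, so the same requests are
    not picked deterministically every time. cost_of(i, s) is a
    hypothetical problem-specific callback."""
    remaining = list(requests)
    removed = []
    for _ in range(q):
        ranked = sorted(remaining, key=lambda i: cost_of(i, remaining),
                        reverse=True)
        y = rng.random()
        victim = ranked[int(y ** p * len(ranked))]  # biased toward high cost
        remaining.remove(victim)
        removed.append(victim)
    return remaining, removed
```

In a real implementation cost(i, s) would be recomputed against the current partial solution after each removal, since removing one request changes the costs of its route neighbors.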

One can say that the Shaw removal heuristic and the worst removal heuristic belong to two different classes of removal heuristics. The Shaw heuristic is biased towards selecting requests that can "easily" be exchanged, while the worst removal selects the requests that appear to be placed in the wrong position in the solution.

3.2 Inserting requests

Insertion heuristics for vehicle routing problems are typically divided into two categories: sequential and parallel insertion heuristics. The difference between the two classes is that sequential heuristics build one route at a time while parallel heuristics construct several routes at the same time. Parallel and sequential insertion heuristics are discussed in further detail in [17]. The heuristics presented in this paper are all parallel. The reader should observe that the insertion heuristics proposed here will be used in a setting where they are given a number of partial routes and a number of requests to insert; they seldom build the solution from scratch.

3.2.1 Basic greedy heuristic

The basic greedy heuristic is a simple construction heuristic. It performs at most n iterations as it inserts one request in each iteration. Let Δf_{i,k} denote the change in objective value incurred by inserting request i into route k at the position that increases the objective value the least. If we cannot insert request i in route k, then we set Δf_{i,k} = ∞. We then define c_i as c_i = min_{k∈K} {Δf_{i,k}}. In other words, c_i is the "cost" of inserting request i at its best position overall. We denote this position the minimum cost position. Finally we choose the request i that minimizes

min_{i∈U} c_i    (18)

and insert it at its minimum cost position. U is the set of unplanned requests. This process continues until all requests have been inserted or no more requests can be inserted.

Observe that in each iteration we only change one route (the one we inserted into), so we do not have to recalculate insertion costs for all the other routes. This property is used in the concrete implementation to speed up the insertion heuristics.
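The greedy insertion loop might be sketched as follows. The delta_f callback is a hypothetical stand-in for the best-feasible-insertion computation, and appending to routes[k] stands in for "insert at the minimum cost position"; a real implementation would also cache the per-route costs as noted above:

```python
import math

def greedy_insertion(routes, unplanned, delta_f):
    """Repeatedly pick the request with the cheapest best insertion (Eq. 18).
    delta_f(i, k) returns the cost increase of the best feasible insertion of
    request i into route k, or math.inf if the insertion is infeasible."""
    bank = set(unplanned)
    while bank:
        best = min((delta_f(i, k), i, k)
                   for i in bank for k in range(len(routes)))
        if best[0] == math.inf:
            break                      # remaining requests stay in the bank
        _, i, k = best
        routes[k].append(i)            # stand-in for the minimum cost position
        bank.remove(i)
    return routes, bank
```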

An obvious problem with this heuristic is that it often postpones the placement of "hard" requests (requests which are expensive to insert, that is, requests with large c_i) to the last iterations, where we do not have many opportunities for inserting them as many of the routes are "full". The heuristic presented in the next section tries to circumvent this problem.

3.2.2 Regret heuristics

The regret heuristic tries to improve upon the basic greedy heuristic by incorporating a kind of look-ahead information when selecting the request to insert. Let x_{ik} ∈ {1,...,m} be a variable that indicates the route for which request i has the k'th lowest insertion cost, that is, Δf_{i,x_{ik}} ≤ Δf_{i,x_{ik'}} for k ≤ k'. Using this notation we can express c_i from Section 3.2.1 as c_i = Δf_{i,x_{i1}}. In the regret heuristic we define a regret value c*_i as c*_i = Δf_{i,x_{i2}} − Δf_{i,x_{i1}}. In other words, the regret value is the difference between the cost of inserting the request in its best route and in its second best route. In each iteration the regret heuristic chooses to insert the request i that maximizes

max_{i∈U} c*_i

The request is inserted at its minimum cost position. Ties are broken by selecting the insertion with lowest cost. Informally speaking, we choose the insertion that we will regret most if it is not done now.

The heuristic can be extended in a natural way to define a class of regret heuristics: the regret-k heuristic is the construction heuristic that in each construction step chooses to insert the request i that maximizes

max_{i∈U} Σ_{j=1}^{k} (Δf_{i,x_{ij}} − Δf_{i,x_{i1}})    (19)

If some requests cannot be inserted in at least m−k+1 routes, then the request that can be inserted in the fewest number of routes (but still in at least one route) is inserted. Ties are broken by selecting the request with the best insertion cost. The request is inserted at its minimum cost position. The regret heuristic presented at the start of this section is a regret-2 heuristic, and the basic insertion heuristic from Section 3.2.1 is a regret-1 heuristic because of the tie-breaking rules. Informally speaking, heuristics with k > 2 investigate the cost of inserting a request in the k best routes and insert the request whose cost difference between inserting it into the best route and into the k−1 next-best routes is largest. Compared to a regret-2 heuristic, regret heuristics with large values of k discover earlier that the possibilities for inserting a request become limited.
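The regret-k selection rule (19) can be sketched as follows. This is a simplified illustration: the special priority rule for requests insertable in fewer than m−k+1 routes is reduced here to summing over the feasible routes only, and delta_f is a hypothetical best-insertion-cost callback:

```python
import math

def regret_k_choice(unplanned, num_routes, delta_f, k=2):
    """Choose the request maximizing the regret-k value of Eq. (19), with ties
    broken by the cheapest insertion. delta_f(i, r) returns the best insertion
    cost of request i in route r, or math.inf if infeasible."""
    best_key, best_req = None, None
    for i in unplanned:
        costs = sorted(c for r in range(num_routes)
                       if (c := delta_f(i, r)) < math.inf)
        if not costs:
            continue                    # request would go to the request bank
        regret = sum(c - costs[0] for c in costs[:k])
        key = (regret, -costs[0])       # larger regret, then cheaper insertion
        if best_key is None or key > best_key:
            best_key, best_req = key, i
    return best_req
```

With k=2 and two candidate requests, the one whose second-best route is much worse than its best route wins, which is exactly the "insert it now or regret it" intuition.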

Regret heuristics have been used by Potvin and Rousseau [17] for the VRPTW. The heuristic in their paper can be categorized as a regret-k heuristic with k = m, as all routes are considered in an expression similar to (19). The authors do not use the change in the objective value for evaluating the cost of an insertion, but use a special cost function. Regret heuristics can also be used for combinatorial optimization problems outside the vehicle routing domain; an example of an application to the Generalized Assignment Problem was described by Martello and Toth [13].

As in the previous section we use the fact that we only change one route in each iteration to speed up the regret heuristic.

3.3 Choosing a removal and an insertion heuristic

In Section 3.1 we defined three removal heuristics (Shaw, random and worst removal), and in Section 3.2 we defined a class of insertion heuristics (basic insertion, regret-2, regret-3, etc.). One could select one removal and one insertion heuristic and use these throughout the search, but in this paper we propose to use all heuristics. The reason for doing this is that, for example, the regret-2 heuristic may be well suited for one type of instance while the regret-4 heuristic may be the best suited heuristic for another type of instance. We believe that alternating between the different removal and insertion heuristics gives us a more robust heuristic overall.

In order to select the heuristic to use, we assign weights to the different heuristics and use a roulette wheel selection principle. If we have k heuristics with weights w_i, i ∈ {1,2,···,k}, we select heuristic j with probability

w_j / Σ_{i=1}^{k} w_i    (20)
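One straightforward way to implement roulette wheel selection in Python (an illustrative sketch, not the authors' code):

```python
import random

def roulette_select(weights, rng=random):
    """Select index j with probability w_j / sum of all weights, cf. (20)."""
    total = sum(weights)
    r = rng.uniform(0, total)
    acc = 0.0
    for j, w in enumerate(weights):
        acc += w
        if r <= acc:
            return j
    return len(weights) - 1  # guard against floating-point rounding
```

A heuristic with twice the weight of another is selected roughly twice as often, so the weights directly control how frequently each sub-heuristic is applied.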

3.4 Adaptive weight adjustment

The weights are adjusted dynamically using statistics gathered during the search: the search is divided into a number of segments, and each heuristic collects a score that is increased by σ1, σ2 or σ3 in the situations described below.

Parameter / Description:

σ1: The last remove-insert operation resulted in a new global best solution.

σ2: The last remove-insert operation resulted in a solution that has not been accepted before. The cost of the new solution is better than the cost of the current solution.

σ3: The last remove-insert operation resulted in a solution that has not been accepted before. The cost of the new solution is worse than the cost of the current solution, but the solution was accepted.

The case for σ1 is clear: if a heuristic is able to find a new overall best solution, then it has done well. Similarly, if a heuristic has been able to find a solution that has not been visited before and it is accepted by the acceptance criterion in the ALNS search, then the heuristic has been successful, as it has brought the search forward. It seems sensible to distinguish between the two situations corresponding to parameters σ2 and σ3, because we prefer heuristics that can improve the solution, but we are also interested in heuristics that can diversify the search, and these are rewarded by σ3. It is important to note that we only reward unvisited solutions. This is to encourage heuristics that are able to explore new parts of the solution space. We keep track of visited solutions by assigning a hash key to each solution and storing the key in a hash table.

In each iteration we apply two heuristics: a removal heuristic and an insertion heuristic. The scores of both heuristics are updated by the same amount, as we cannot tell whether it was the removal or the insertion that was the reason for the "success".

At the end of each segment we calculate new weights using the recorded scores. Let w_{ij} be the weight of heuristic i used in segment j, as used in formula (20). In the first segment we weight all heuristics equally. After we have finished segment j, we calculate the weight of each heuristic i to be used in segment j+1 as follows:

w_{i,j+1} = w_{ij}(1 − r) + r·(π_i/θ_i)

where π_i is the score of heuristic i obtained during the last segment, θ_i is the number of times heuristic i was used during the last segment, and the reaction factor r controls how quickly the weights react to changes in the performance of the heuristics.
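A sketch of this segment-end update in Python; the scores π_i and usage counts θ_i are passed in per heuristic, and keeping the old weight for a heuristic that was not used in the segment is an assumption of this sketch:

```python
def update_weights(weights, scores, uses, r=0.1):
    """w_{i,j+1} = w_ij * (1 - r) + r * (pi_i / theta_i); the reaction
    factor r controls how quickly weights respond to recent performance."""
    new_weights = []
    for w, pi, theta in zip(weights, scores, uses):
        if theta == 0:
            new_weights.append(w)  # heuristic unused this segment: keep weight
        else:
            new_weights.append(w * (1 - r) + r * pi / theta)
    return new_weights
```

With r close to 0 the weights change slowly and the search keeps its long-term preferences; with r close to 1 the weights track only the most recent segment.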

3.5 Simulated annealing

The solution s' produced in an iteration is evaluated by a simulated annealing acceptance criterion: an improving solution is always accepted, while a deteriorating solution is accepted with probability e^{−(f(s')−f(s))/T}, where T > 0 is the temperature.

The temperature starts out at T_start and is decreased every iteration using the expression T = T·c, where 0 < c < 1 is the cooling rate.
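The acceptance test and the geometric cooling schedule can be sketched as follows (a standard simulated annealing fragment, consistent with the description above; the default cooling rate is illustrative):

```python
import math
import random

def sa_accept(f_new, f_cur, temperature, rng=random):
    """Accept an improving solution always; accept a worsening one with
    probability exp(-(f_new - f_cur) / T)."""
    if f_new <= f_cur:
        return True
    return rng.random() < math.exp(-(f_new - f_cur) / temperature)

def cool(temperature, c=0.9997):
    """Geometric cooling T = T * c with 0 < c < 1 (the value of c is a
    made-up example, not a recommended setting)."""
    return temperature * c
```

At a high temperature nearly every candidate is accepted and the search diversifies; as T shrinks, only small deteriorations survive the test, and the search converges.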

The algorithm stops when a specified number of LNS iterations have passed.

3.6 Applying noise to the objective function

As the proposed insertion heuristics are quite myopic, we believe that it is worthwhile to randomize them such that they do not always make the move that seems best locally. This is achieved by adding a noise term to the objective function. Every time we calculate the cost C of inserting a request into a route, we also calculate a random number noise in the interval [−maxN, maxN] and the modified insertion cost C' = max{0, C + noise}. At each iteration we decide whether we should use C or C' to determine the insertions


Figure 1: An example of how the weights of the three removal heuristics progressed during one application of the heuristic. The iteration number is shown along the x-axis and the weight along the y-axis. The graph illustrates that, for the particular problem, the random removal and the Shaw removal heuristics perform virtually equally well, while the worst removal heuristic performs worst. Consequently, the worst removal heuristic is not used as often as the two other heuristics.

to perform. This decision is taken by the adaptive mechanism described earlier, by keeping track of how often the noise-applied insertions and the "clean" insertions are successful.

In order to relate the amount of noise to the properties of the problem instance, we calculate maxN = η·max_{i,j∈V} d_ij, where η is a parameter that controls the amount of noise. We have chosen to let maxN depend on the distances d_ij, as the distances are an important part of the objective in all of the problems considered in this paper.
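The noise mechanism might look like the following sketch in Python (the default η value is illustrative only, not the paper's setting):

```python
import random

def noisy_cost(C, max_dist, eta=0.025, rng=random):
    """maxN = eta * max_{i,j} d_ij; the perturbed insertion cost is
    C' = max(0, C + noise) with noise uniform in [-maxN, maxN]."""
    max_n = eta * max_dist
    noise = rng.uniform(-max_n, max_n)
    return max(0.0, C + noise)
```

The clamp at zero mirrors the definition C' = max{0, C + noise}, so a strongly negative noise draw can never produce a negative insertion cost.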

It might seem superfluous to add noise to the insertion heuristics, as they are used in a simulated annealing framework that already contains randomization. However, we believe that the noise applications are important, as our neighborhood is searched by means of the insertion heuristics and not randomly sampled. Without the noise applications we would not get the full benefit of the simulated annealing metaheuristic. This conjecture is supported by the computational experiments reported in Table 3.

3.7 Minimizing the number of vehicles used

Minimization of the number of vehicles used to serve all requests is often considered the first priority in the vehicle routing literature. The heuristic proposed so far is not able to cope with such an objective, but we can handle such problems with a simple two-stage algorithm that minimizes the number of vehicles in the first stage and then minimizes a secondary objective (typically traveled distance) in the second stage. The vehicle minimization algorithm only works for problems with a homogeneous fleet. We also assume that the number of vehicles available is unlimited, so that an initial feasible solution can always be constructed.

A two-stage method was also used by Bent and Van Hentenryck [4], [2], but while they used two different neighborhoods and metaheuristics for the two stages, we use the same heuristic in both stages.

The vehicle minimization stage works as follows: first an initial feasible solution is created using a sequential insertion method that constructs one route at a time until all requests have been planned. The number of vehicles used in this solution is the initial estimate of the number of vehicles necessary. The next step is to remove one route from our feasible solution. The requests on the removed route are placed in the request bank. The resulting problem is solved by our LNS heuristic. When the heuristic is run, a high value is assigned to γ so that requests are moved out of the request bank if possible. If the heuristic is able to find a solution that serves all requests, a new candidate for the minimum number of vehicles has been found. When such a solution has been found, the LNS heuristic is immediately stopped, one more route is removed from the solution, and the process is reiterated. If the LNS heuristic terminates without finding a solution in which all requests are served, the algorithm steps back to the last solution encountered in which all requests were served. This solution is used as the starting solution in the second stage of the algorithm, which simply consists of applying the normal LNS heuristic.

In order to keep the running time of the vehicle minimization stage down, this stage is only allowed to spend Φ LNS iterations all together, so that if the first application of the LNS heuristic for example spends a iterations to find a solution in which all requests are planned, then the vehicle minimization stage is only allowed to perform Φ − a LNS iterations to minimize the number of vehicles further. Another way to keep the running time limited is to stop the LNS heuristic when it seems unlikely that a solution exists in which all requests are planned. In practice this is implemented by stopping the LNS heuristic if 5 or more requests are unplanned and no improvement in the number of unplanned requests has been found in the last τ LNS iterations. In the computational experiments Φ was set to 25000 and τ was set to 2000.
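The two-stage control loop described above can be sketched as follows. This is a hedged outline, not the authors' implementation: `run_lns` is a hypothetical stand-in for the LNS heuristic, which in the real algorithm would try to re-insert the banked requests into the remaining routes and report how many of the Φ iterations it consumed.

```python
PHI = 25000  # total LNS iteration budget for the vehicle minimization stage

def minimize_vehicles(routes, run_lns, budget=PHI):
    """Repeatedly remove one route, put its requests in the request bank,
    and ask the LNS heuristic to find a solution serving all requests.
    Steps back to the last fully served solution when the LNS fails or
    the shared iteration budget is exhausted."""
    best = [list(r) for r in routes]          # last solution serving everything
    while budget > 0 and len(best) > 1:
        candidate = [list(r) for r in best[:-1]]
        request_bank = list(best[-1])         # requests of the removed route
        served_all, iters_used = run_lns(candidate, request_bank, budget)
        budget -= iters_used
        if not served_all:
            break                             # keep the last feasible solution
        best = candidate
    return best
```

The second stage would then run the normal LNS heuristic starting from the returned solution.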

3.8 Discussion

Using several removal and insertion heuristics during the search may be seen as using local search with several neighborhoods. To the best of our knowledge this idea has not been used in the LNS literature before. The related Variable Neighborhood Search (VNS) was proposed by Mladenović and Hansen [14]. VNS is a metaheuristic framework using a parameterized family of neighborhoods. The metaheuristic has received quite a lot of attention in recent years and has provided impressive results for many problems. Where ALNS makes use of several unrelated neighborhoods, VNS is typically based on a single neighborhood which is searched with variable depth.

Several metaheuristics can be used at the top level of ALNS to help the heuristic escape a local minimum. We have chosen to use simulated annealing as the ALNS heuristic already contains the random sampling element. For a further discussion of metaheuristic frameworks used in connection with ALNS, see the subsequent paper [16].

The request bank is an entity that makes sense for many real-life applications. In the problems considered in Section 4 we do not accept solutions with unscheduled requests, but the request bank allows us to visit infeasible solutions in a transition stage, improving the overall search. The request bank is particularly important when minimizing the number of vehicles.

4 Computational experiments

In this section we describe our computational experiments. We first introduce a set of tuning instances in Section 4.1. In Section 4.2 we evaluate the performance of the proposed construction heuristics on the tuning instances. In Section 4.3 we describe how the parameters of the ALNS heuristic were tuned, and in Section 4.4 we present the results obtained by the ALNS heuristic and a simpler LNS heuristic.

4.1 Tuning instances

First a set of representative tuning instances is identified. The tuning instances must have a fairly limited size, as we want to perform numerous experiments on the tuning problems, and they should somehow be related to the problems our heuristic is targeted at. In the case at hand we want to solve some standard benchmark instances and a new set of randomly generated instances.

Our tuning set consists of 16 instances. The first four instances are LR1_2_1, LR202, LRC1_2_3, and LRC204 from Li and Lim's benchmark problems [11], containing between 50 and 100 requests. The number of available vehicles was set to one more than that reported by Li and Lim, to make it easier for the heuristic to find solutions with no requests in the request bank. The last 12 instances are randomly generated. These instances contain both single-depot and multi-depot problems, as well as problems with requests that can only be served by a subset of the vehicle fleet. All randomly generated problems contain 50 requests.

4.2 Evaluation of construction heuristics

First we examine how the simple construction heuristics from Section 3.2 perform on the tuning problems, to see how well they work without the LNS framework. The construction heuristics regret-1, regret-2, regret-3, regret-4 and regret-m have been implemented. Table 1 shows the results of the test. As the construction heuristics are deterministic, the results were produced by applying the heuristics to each of the 16 test problems once.

[Table 1: Average gap (%) and running time (s) of the construction heuristics on the tuning problems; the numeric entries were garbled in extraction.]

The tuning proceeds by allowing one parameter to take a number of values, while the rest of the parameters are kept fixed. For each parameter setting we apply the heuristic to our set of test problems five times, and the setting that shows the best average behavior (in terms of average deviation from the best known solutions) is chosen. We then move on to the next parameter, using the values found so far and the values from the initial tuning for the parameters that have not been considered yet. This process continues until all parameters have been tuned. Although it would be possible to process the parameters once again, using the new set of parameters as a starting point to optimize them further, we stopped after one pass.
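The one-pass, one-parameter-at-a-time procedure can be sketched as below. The function names are ours, and `evaluate` stands in for running the heuristic five times on the tuning set and returning the average deviation from the best known solutions.

```python
def tune_sequentially(initial, candidates, evaluate):
    """Coordinate-wise tuning: for each parameter in turn, try its
    candidate values with all other parameters held fixed, keep the
    value with the lowest score (average gap), and move on.
    Only one pass over the parameters is made."""
    settings = dict(initial)
    for name, values in candidates.items():
        best_value, best_score = settings[name], evaluate(settings)
        for value in values:
            trial = dict(settings, **{name: value})
            score = evaluate(trial)
            if score < best_score:
                best_value, best_score = value, score
        settings[name] = best_value
    return settings
```

Note that the result of such a single pass depends on the order in which the parameters are processed, which is why a second pass could in principle improve the settings further.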

One of the experiments performed during the parameter tuning sought to determine the value of the parameter ξ that controls how many requests we remove and insert in each iteration. This parameter should intuitively have a significant impact on the results our heuristic is able to produce. We tested the heuristic with ξ ranging from 0.05 to 0.5 with a step size of 0.05. Table 2 shows the influence of ξ. When ξ is too low the heuristic is not able to move very far in each iteration, and it has a higher chance of being trapped in one suboptimal area of the search space. On the other hand, if ξ is large then we can easily move around in the search space, but we are stretching the capabilities of our insertion heuristics. The insertion heuristics work fairly well when they must insert a limited number of requests into a partial solution, but they cannot build a good solution from scratch, as seen in Section 4.2. The results in Table 2 show that ξ = 0.4 is a good choice. One must note that the heuristic gets slower as ξ increases, because the removals and insertions take longer when more requests are involved; thus the comparison in Table 2 is not completely fair.

ξ               0.1    0.2    0.3    0.4    0.5
Avg. gap (%)    1.75   1.21   0.81   0.81   0.57

Table 2: The influence of ξ.

[Table 3 data: the check marks indicating which removal (Shaw, random, worst) and insertion heuristics each configuration used were lost in extraction; the recoverable average gaps are configuration 2: 2.6%, 3: 5.4%, 6: 1.6%, 7: 2.2%, 10: 1.3%, 11: 2.0%, 14: 1.7%, and 15 (ALNS): 1.3%.]

Table 3: Simple LNS heuristics compared to the full adaptive LNS with dynamic weight adjustment. The first column shows whether the configuration must be considered an LNS or an ALNS heuristic. The second column is the configuration number, columns three to five indicate which removal heuristics were used, and columns six to ten indicate which insertion heuristics were used. Column eleven states whether noise was added to the objective function during insertion of requests (noise was added in 50% of the insertions for the simple configurations 1-14, while in configuration 15 the number of noise insertions was controlled by the adaptive method). Column twelve shows the average performance of the different heuristics. As an example, in configuration four we used random removal together with the regret-2 insertion heuristic and applied noise to the objective value. This resulted in a set of solutions whose objective values on average were 3.2% above the best solutions found during the whole experiment.

The basic insertion heuristic nearly performs as well as the regret heuristics when used in an LNS framework. This is surprising in the light of Table 1, where the basic insertion heuristic performed particularly badly. This observation may indicate that the LNS method is relatively robust with respect to the insertion method used.

The last row of the table shows the performance of ALNS. As one can see, it is on par with the two best simple approaches, but not better, which at first may seem disappointing. The results show, though, that the adaptive mechanism is able to find a sensible set of weights, and it is our hypothesis that the ALNS heuristic is more robust than the simpler LNS heuristics. That is, the simple configurations may fail to produce good solutions on other types of problems, while the ALNS heuristic continues to perform well. One of the purposes of the experiments in Section 4.4 is to confirm or disprove this hypothesis.
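The adaptive weight mechanism referred to above can be realized with roulette-wheel selection of sub-heuristics plus a periodic weight update. The sketch below follows the common ALNS scheme; the reaction factor and the score bookkeeping are generic illustrations, not the exact parameters of the paper.

```python
import random

def choose_heuristic(weights):
    """Roulette-wheel selection: sub-heuristic j is chosen with
    probability w_j / sum(w)."""
    r = random.uniform(0.0, sum(weights))
    acc = 0.0
    for j, w in enumerate(weights):
        acc += w
        if r <= acc:
            return j
    return len(weights) - 1

def update_weights(weights, scores, uses, reaction=0.1):
    """At the end of a segment, blend observed performance into the
    weights: w_j <- (1 - r) * w_j + r * (score_j / uses_j) for
    heuristics that were used; unused heuristics keep their weight."""
    return [
        (1 - reaction) * w + reaction * (s / u) if u > 0 else w
        for w, s, u in zip(weights, scores, uses)
    ]
```

With this scheme, heuristics that recently produced successful insertions or removals accumulate higher scores and are therefore selected more often in subsequent segments, which matches the weight trajectories illustrated in Figure 1.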

4.4 Results

This section provides computational experiments conducted to test the performance of the heuristic. There are three major objectives for this section:

1. To compare the ALNS heuristic to a simple LNS heuristic that only contains one removal and one insertion heuristic.

2. To determine whether certain problem properties influence the (A)LNS heuristics' ability to find good solutions.

3.To compare the ALNS heuristic with state-of-the-art PDPTW heuristics from the literature.

In order to clarify whether the ALNS heuristic is worthwhile compared to a simpler LNS heuristic, we show results for both the ALNS heuristic and the best simple LNS heuristic from Table 3. Configuration 12 was chosen as representative of the simple LNS heuristics, as it performed slightly better than configuration 10. In the following sections we refer to the full and simple LNS heuristics as ALNS and LNS respectively.

All experiments were performed on a 1.5 GHz Pentium IV PC with 256 MB internal memory, running Linux. The implemented algorithm measures travel times and distances using double precision floating point numbers. The parameter setting found in Section 4.3.2 was used in all experiments unless otherwise stated.

4.4.1 Data sets

As the model considered in this paper is quite complicated, it is hard to find any benchmark instances that consider exactly the same model and objective function. The benchmark instances that come closest to the model considered in this paper are the instances constructed by Nanry and Barnes [15] and the instances constructed by Li and Lim [11]. Both data sets are single-depot pickup and delivery problems with time windows, constructed from VRPTW problems. We only report results on the data set proposed by Li and Lim, as the Nanry and Barnes instances are easy to solve due to their size.

The problems considered by Li and Lim were simpler than the one considered in this paper as: 1) they did not contain multiple depots; 2) all requests had to be served; 3) all vehicles were assumed to be able to serve all requests. When solving the Li and Lim instances using the ALNS heuristic, we set α to one and β to zero in our objective function. In Section 4.5 we minimize the number of vehicles as the first priority, while in Section 4.4.2 we only minimize the distance driven.

In order to test all aspects of the model proposed in this paper, we also introduce some new, randomly generated instances. These instances are described in Section 4.4.3.

4.4.2 Comparing ALNS and LNS using the Li & Lim instances

This section compares the ALNS and LNS heuristics using the benchmark instances proposed by Li and Lim [11]. The data set contains 354 instances with between 100 and 1000 locations. The data set can be downloaded from [25].

In this section we use the distance driven as our objective, even though vehicle minimization is the standard primary objective for these instances. The reason for this decision is that distance minimization makes comparison of the heuristics easier, and distance minimization is the original objective of the proposed heuristic. The number of vehicles available for serving the requests is set to the minimum values reported by Li and Lim in [11] and on their web page, which unfortunately is no longer online.

The heuristics were applied 10 times to each instance with 400 or fewer locations and 5 times to each instance with more than 400 locations. The experiments are summarized in Table 4.

[Table 4: Summary of results on the Li & Lim instances, grouped by instance size (200 to 1000 locations): the number of problems, the number of best known solutions found by ALNS and LNS, their average gaps (%), and average running times (s); the numeric entries were garbled in extraction.]

4.4.3 New instances

This section provides results on randomly generated PDPTW instances that contain features of the model that were not used in the Li and Lim benchmark problems considered in Section 4.4.2. These features are: multiple depots, routes with different start and end terminals, and special requests that can only be served by a certain subset of the vehicles. When solving these instances we set α = β = 1 in the objective function, so that distance and time are weighted equally. We do not perform vehicle minimization, as the vehicles are inhomogeneous.

Three types of geographical distributions of requests are considered: problems with locations distributed uniformly in the plane, problems with locations distributed in 10 clusters, and problems with 50% of the locations put in 10 clusters and 50% of the locations distributed uniformly. These three types of problems were inspired by Solomon's VRPTW benchmark problems [26]; they are similar to the R, C and RC Solomon problems respectively. We consider problems with 50, 100, 250 and 500 requests; all problems are multi-depot problems. For each problem size we generated 12 problems by trying every combination of the three problem features shown below:

• Route type: 1) a route starts and ends at the same location; 2) a route starts and ends at different locations.

• Request type: 1) all requests are normal requests; 2) 50% of the requests are special requests. The special requests can only be served by a subset of the vehicles; in the test problems each special request could only be served by between 30% and 60% of the vehicles.

• Geographical distribution: 1) uniform; 2) clustered; 3) semi-clustered.

The instances can be downloaded from www.diku.dk/~sropke. The heuristics were tested by applying them to each of the 48 problems 10 times. Table 5 shows a summary of the results found. In the table we list for how many problems the two heuristics find the best known solution. The best known solution is simply the best solution found throughout this experiment.

We observe the same tendencies as in Table 4; ALNS is still superior to LNS, but one notices that the gap in solution quality between the two methods is smaller for this set of instances, while the difference in running time is larger compared to the results on the Li and Lim instances. One also notices that it seems harder to solve small instances of this problem class compared to the Li and Lim instances.

[Table 5 data: for each problem size (12 problems each) the table lists the number of best known solutions found and the average running time (s) for ALNS and LNS; the numeric entries were garbled in extraction.]

Table 5: Summary of results obtained on the new instances. The captions of the table should be interpreted as in Table 4. The last row sums each column. Notice that the size of the problems in this table is given as the number of requests and not the number of locations.

Table 6 summarizes how the problem features influence the average solution quality. These results show that the clustered problems are the hardest to solve, while the uniformly distributed instances are the easiest. The results also indicate that special requests make the problem slightly harder to solve. The route type experiments compare the situation where routes start and end at the same location (the typical situation considered in the literature) to the situation where each route starts and ends at different locations. Here we expect the last case to be the easier to solve, as, by having different start and end positions for our routes, we gain information about the area the route most likely should cover. The results in Table 6 confirm these expectations.

In addition to investigating how the model features influence the average solution quality obtained by the heuristics, we also want to know whether the presence of some features could make LNS behave better than ALNS. For the considered features the answer is negative.

[Table 6 data: average gaps for ALNS and LNS per feature (distribution: uniform, clustered, semi-clustered; normal vs. special requests; start of route equal to or different from end of route); the pairing of the recoverable percentages (1.89%, 2.09%, 1.54%, 2.02%, 1.59%, 2.04%) with the features was lost in extraction.]

Table 6: The gap reported for a feature is computed as (1/|F|) Σ_{i∈F} avg_j (c(i,j,h) − c*(i)) / c*(i), where F is the set of instances with a specific feature, c*(i) is the cost of the best known solution to instance i, and c(i,j,h) is the cost obtained in the j-th experiment on instance i using heuristic h.
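The averaging used in the table caption can be sketched as follows; the per-instance averaging over the repeated runs is our reading of the partially garbled formula.

```python
def feature_gap(run_costs, best_known):
    """Average relative gap over an instance set F.

    run_costs  -- {instance: [cost of each repeated run]}
    best_known -- {instance: cost c*(i) of the best known solution}
    For each instance the gaps of its runs are averaged, and those
    per-instance averages are then averaged over the |F| instances.
    """
    per_instance = [
        sum((c - best_known[i]) / best_known[i] for c in runs) / len(runs)
        for i, runs in run_costs.items()
    ]
    return sum(per_instance) / len(per_instance)
```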

4.5 Comparison to existing heuristics

This section compares the ALNS heuristic to existing heuristics for the PDPTW. The comparison is performed using the benchmark instances proposed by Li and Lim [11] that were also used in Section 4.4.2. When PDPTW problems have been solved in the literature, the primary objective has been to minimize the number of vehicles used, while the secondary objective has been to minimize the traveled distance. For this purpose we use the vehicle minimization algorithm described in Section 3.7. The ALNS heuristic was applied 10 times to each instance with 200 or fewer locations and 5 times to each instance with more than 200 locations. The experiments are summarized in Tables 7, 8 and 9. It should be noted that it was necessary to decrease the w parameter and increase the c parameter when the instances with 1000 locations were solved, in order to get reasonable solution quality. Apart from that, the same parameter setting was used for all instances.

In the literature, four heuristics have been applied to the benchmark problems: the heuristic by Li and Lim [11], the heuristic by Bent and Van Hentenryck [2], and two commercial heuristics, one developed by SINTEF and one developed by TetraSoft A/S. Detailed results for the last two heuristics are not available, but some results obtained using them can be found on a web page maintained by SINTEF [25]. The heuristic that has obtained the best overall solution quality so far is probably the one by Bent and Van Hentenryck [2] (shortened to the BH heuristic in the following); therefore the ALNS heuristic is compared to this heuristic in Table 7. The complete results from the BH heuristic can be found in [3]. The results given for the BH heuristic are the best obtained among 10 experiments (though for the 100-location instances only 5 experiments were performed). The Avg. TTB column shows the average time needed for the BH heuristic to obtain its best solution. For the ALNS heuristic we only list the total time used, as this heuristic, because of its simulated annealing component, usually finds its best solution towards the end of the search. The BH heuristic was tested on a 1.2 GHz Athlon processor, and the running times of the two heuristics should therefore be comparable (we believe that the Athlon processor is at most 20% slower than our computer). The results show that the ALNS heuristic overall dominates the BH heuristic, especially as the problem sizes increase. It is also clear that the ALNS heuristic is able to improve considerably on the previously best known solutions and that the vehicle minimization algorithm works very well despite its simplicity. The last two columns in Table 7 summarize the best results obtained using several experiments with different parameter settings, which show that the results obtained by ALNS can actually be improved even further.

Table 8 compares the results obtained by ALNS with the best known solutions from the literature. It can be seen that ALNS improves more than half of the solutions and achieves a solution that is at least as good as the previously best known solution for 80% of the problems.

The two aforementioned tables only deal with the best solutions found by the ALNS heuristic. Table 9 shows the average solution quality obtained by the heuristic. These numbers can be compared to those in Table 7. It is worth noticing that the average solution sometimes has a lower distance than the "best of 10 or 5" solution in Table 7; this is the case in the last row. This is possible because the heuristic finds solutions that use more than the minimum number of vehicles, and this usually makes solutions with shorter distances possible.

[Table 7: Comparison of the best solutions obtained by the BH heuristic and the ALNS heuristic, grouped by problem size (number of locations): number of vehicles, distance, average time to best (BH) and average total time (ALNS); the numeric entries were garbled in extraction.]

[Table 8 data: the numeric entries were garbled in extraction.]

Table 8: Comparison of the ALNS heuristic to the previously best known solutions. The table is grouped by problem size. The first column shows the problem size, and the next column shows the number of problems of that size. The following columns give additional information about the experiment in which the ALNS heuristic was applied 5 or 10 times to each instance.

[Table 9 data: the numeric entries were garbled in extraction.]

Table 9: The ALNS heuristic was applied 10 times to each problem with 200 or fewer locations and 5 times to each problem with more than 200 locations. The best solutions reported in Tables 7 and 8 were of course not obtained in all experiments. This table shows the average number of vehicles and average distance traveled. These numbers can be compared to the figures in Table 7.

[Table 10/11 data: best results per instance (number of vehicles and total distance) for the R1, R2, C1, C2, RC1 and RC2 instance classes; the numeric entries were garbled in extraction.]

Table 11: Best results, 200 locations.

随机过程作业

第三章 随机过程 A 简答题: 3-1 写出一维随机变量函数的均值、二维随机变量函数的联合概率密度(雅克比行列式)的定义式。 3-2 写出广义平稳(即宽平稳)随机过程的判断条件,写出各态历经随机过程的判断条件。 3-3 平稳随机过程的自相关函数有哪些性质功率谱密度有哪些性质自相关函数与功率谱密度之间有什么关系 3-4 高斯过程主要有哪些性质 3-5 随机过程通过线性系统时,输出与输入功率谱密度之间的关系如何 3-6 写出窄带随机过程的两种表达式。 3-7 窄带高斯过程的同相分量和正交分量的统计特性如何 3-8 窄带高斯过程的包络、正弦波加窄带高斯噪声的合成包络分别服从什么分布 3-9 写出高斯白噪声的功率谱密度和自相关函数的表达式,并分别解释“高斯”及“白”的含义。 3-10 写出带限高斯白噪声功率的计算式。 B 计算题: 一、补充习题 3-1 设()()cos(2)c y t x t f t πθ=?+,其中()x t 与θ统计独立,()x t 为0均值的平稳随机过程,自相关函数与功率谱密度分别为:(),()x x R P τω。 ①若θ在(0,2π)均匀分布,求y()t 的均值,自相关函数和功率谱密度。 ②若θ为常数,求y()t 的均值,自相关函数和功率谱密度。 3-2 已知()n t 是均值为0的白噪声,其双边功率谱密度为:0 ()2 N P ω= 双,通过下图()a 所示的相干解调器。图中窄带滤波器(中心频率为c ω)和低通滤波器的传递函数1()H ω及2()H ω示于图()b ,图()c 。

试求:①图中()i n t (窄带噪声)、()p n t 及0()n t 的噪声功率谱。 ②给出0()n t 的噪声自相关函数及其噪声功率值。 3-3 设()i n t 为窄带高斯平稳随机过程,其均值为0,方差为2 n σ,信号[cos ()]c i A t n t ω+经过下图所示电路后输出为()y t ,()()()y t u t v t =+,其中()u t 是与cos c A t ω对应的函数,()v t 是与()i n t 对应的输出。假设()c n t 及()s n t 的带宽等于低通滤波器的通频带。 求()u t 和()v t 的平均功率之比。

衣服尺码尺寸对应表

说明:裤子上的尺码,如160/68A,160是指身高,68表示腰围,A代表体型;体型分类:A正常体B偏胖体C肥胖体Y偏瘦体 说明:34号到38号是属于超大尺寸的超大号牛仔裤 尺寸、裤长测量方法: 1、腰围 裤子腰围:两边腰围接缝处围量一周;净腰围:在单裤外沿腰间最细处围量一周,按需要加放尺寸; 2、臀围 裤子臀围:由腰口往下,裤子最宽处横向围量一周;净臀围:沿臀部最丰满处平衡围量一周,按需要加放松度;

3、裤长 由腰口往下到裤子最底边的距离;休闲裤、牛仔裤裤长不含脚口贴边,脚口贴边另预留3-4CM长供自行缭边使用; 4、净裤长 由腰口到您裤子的实际缭边处的距离;男士净裤长标准测量长度在:皮鞋鞋帮 身高裤长对照表 身高(CM) 裤长(市尺) 裤长(CM) 160~165 2尺9寸97 165~170 3尺100 170~175 3尺1寸103 175~180 3尺2寸107 180~185 3尺3寸110 男式衬衫尺码对照表单位(厘米) 身高/胸围尺码身高腰围肩宽胸围衣长袖长165/84Y 37165 94 44 104 78 58 165/88Y 38165 98 45 108 78 59.5

170/92Y 39170 102 46 112 79 59.5 175/96Y 40175 106 47 115 79 60.5 175/100Y 41175 110 48 118 80 60.5 180/104Y 42180 113 49 121 81 61.5 180/108Y 43180 116 50 124 81 61.5 185/112Y 44185 119 51 126 82 62.5 185/116Y 45185 122 51 128 82 62.5 185/120Y 46185 124 52 130 83 64 注:(身高/胸围)为净尺寸。一般实际紧腰围和成衣相差12~22厘米。 女式衬衫尺码对照表单位(厘米) 规格尺码肩宽胸围腰围下摆围后衣长短袖长短袖口长袖长长袖口155/80 3537 86 71 89 56 19.5 30 54 21 155/83 3638 89 74 92 57 19.5 31 55 22 160/86 3739 92 77 95 58 20 32 56 22 160/89 3840 95 80 98 59 20 33 56 23 165/92 3941 98 83 101 60 20.5 34 57 23 165/95 4042 101 86 104 61 20.5 35 57 24 170/98 4143 104 89 107 62 21 36 58 24 170/101 4244 107 92 110 63 21 37 58 25 173/104 4345 110 95 113 64 21.5 38 59 25 注:尺寸表中的规格表示为(身高/胸围净尺寸)的参考尺寸。 男士西服尺码对照表单位(厘米)

毕业设计外文翻译附原文

外文翻译 专业机械设计制造及其自动化学生姓名刘链柱 班级机制111 学号1110101102 指导教师葛友华

外文资料名称: Design and performance evaluation of vacuum cleaners using cyclone technology 外文资料出处:Korean J. Chem. Eng., 23(6), (用外文写) 925-930 (2006) 附件: 1.外文资料翻译译文 2.外文原文

应用旋风技术真空吸尘器的设计和性能介绍 吉尔泰金,洪城铱昌,宰瑾李, 刘链柱译 摘要:旋风型分离器技术用于真空吸尘器 - 轴向进流旋风和切向进气道流旋风有效地收集粉尘和降低压力降已被实验研究。优化设计等因素作为集尘效率,压降,并切成尺寸被粒度对应于分级收集的50%的效率进行了研究。颗粒切成大小降低入口面积,体直径,减小涡取景器直径的旋风。切向入口的双流量气旋具有良好的性能考虑的350毫米汞柱的低压降和为1.5μm的质量中位直径在1米3的流量的截止尺寸。一使用切向入口的双流量旋风吸尘器示出了势是一种有效的方法,用于收集在家庭中产生的粉尘。 摘要及关键词:吸尘器; 粉尘; 旋风分离器 引言 我们这个时代的很大一部分都花在了房子,工作场所,或其他建筑,因此,室内空间应该是既舒适情绪和卫生。但室内空气中含有超过室外空气因气密性的二次污染物,毒物,食品气味。这是通过使用产生在建筑中的新材料和设备。真空吸尘器为代表的家电去除有害物质从地板到地毯所用的商用真空吸尘器房子由纸过滤,预过滤器和排气过滤器通过洁净的空气排放到大气中。虽然真空吸尘器是方便在使用中,吸入压力下降说唱空转成比例地清洗的时间,以及纸过滤器也应定期更换,由于压力下降,气味和细菌通过纸过滤器内的残留粉尘。 图1示出了大气气溶胶的粒度分布通常是双峰形,在粗颗粒(>2.0微米)模式为主要的外部来源,如风吹尘,海盐喷雾,火山,从工厂直接排放和车辆废气排放,以及那些在细颗粒模式包括燃烧或光化学反应。表1显示模式,典型的大气航空的直径和质量浓度溶胶被许多研究者测量。精细模式在0.18?0.36 在5.7到25微米尺寸范围微米尺寸范围。质量浓度为2?205微克,可直接在大气气溶胶和 3.85至36.3μg/m3柴油气溶胶。

男装、女装衣服尺码对照表

男装、女装衣服尺码对照表
1、男装尺码对照表
身高 (cm)
衬衣尺码 (领围 cm)
西服尺码夹克尺码西裤尺码
(肩宽 (胸围 (腰围
cm)
cm)
cm)
西(腰裤围尺寸码) T
恤尺码
毛衣尺码 内裤尺码 统计比例
160 37(S) 44(S) 80(S) 72
28
S
S
S
0
165 38(M) 46(M) 84(M) 74,76 29
M
M
M
1
170 39(L) 48(L) 88(S) 78
30
L
L
L
2
175 40(XL) 50(XL) 92(M) 80
31
XL
XL
XL
3
180 41(2XL) 52(2XL) 96(L) 82
32
2XL
2XL
2XL
3
185 42(3XL) 54(3XL) 100(XL) 84,86 33
3XL
3XL
3XL
2
190 43(4XL) 56(4XL) 104(4XL) 88
34
4XL
4XL
4XL
1
195 44(5XL)
90
35
5XL
5XL
5XL
0
2、衬衫尺寸(除个别款尺寸,买前询问)
平铺尺寸 M
XL
胸围 97cm 99cm 101cm
肩宽 43cm 44cm 45cm
衣长 67cm 68cm 69cm
袖长 62cm 64cm 65cm
3、裤装尺码为: 26 代表腰围为:“尺” 28 代表腰围为:“尺” 30 代表腰围为:“尺” 32 代表腰围为:“尺” 34 代表腰围为:“尺” 38 代表腰围为:“尺” 42 代表腰围为:“尺” 50 代表腰围为:“尺” 54 代表腰围为:“尺”
27 代表腰围为:“尺” 29 代表腰围为:“尺” 31 代表腰围为:“尺” 33 代表腰围为:“尺” 36 代表腰围为:“尺” 40 代表腰围为:“尺” 44 代表腰围为:“尺” 52 代表腰围为:“尺”
4.女装尺码对照表
上装尺码
“女上装”尺码对照表(cm)
S
M
L
155/80A
160/84A
165/88A
XL 170/92A

衣服尺码尺寸对应表

裤子尺寸对照表1 裤子尺寸对照表2 说明:裤子上的尺码,如160/68A,160是指身高,68表示腰围,A代表体型;体型分类:A正常体B偏胖体C肥胖体Y偏瘦体 牛仔裤尺码对照表:(以下测量误差在+-2cm) 说明:34号到38号是属于超大尺寸的超大号牛仔裤

尺寸、裤长测量方法: 1、腰围 裤子腰围:两边腰围接缝处围量一周;净腰围:在单裤外沿腰间最细处围量一周,按需要加放尺寸; 2、臀围 裤子臀围:由腰口往下,裤子最宽处横向围量一周;净臀围:沿臀部最丰满处平衡围量一周,按需要加放松度; 3、裤长 由腰口往下到裤子最底边的距离;休闲裤、牛仔裤裤长不含脚口贴边,脚口贴边另预留3-4CM长供自行缭边使用; 4、净裤长 由腰口到您裤子的实际缭边处的距离;男士净裤长标准测量长度在:皮鞋鞋帮和鞋底交接处;

男式衬衫尺码对照表 单位(厘米) 身高/胸围 尺码 身高 腰围 肩宽 胸围 衣长 袖长 165/84Y 37 165 94 44 104 78 58 165/88Y 38 165 98 45 108 78 59.5 170/92Y 39 170 102 46 112 79 59.5 175/96Y 40 175 106 47 115 79 60.5 175/100Y 41 175 110 48 118 80 60.5 180/104Y 42 180 113 49 121 81 61.5 180/108Y 43 180 116 50 124 81 61.5 185/112Y 44 185 119 51 126 82 62.5 185/116Y 45 185 122 51 128 82 62.5 185/120Y 46 185 124 52 130 83 64 注:(身高/胸围)为净尺寸。一般实际紧腰围和成衣相差12~22厘米。

毕业设计(论文)外文资料翻译〔含原文〕

南京理工大学 毕业设计(论文)外文资料翻译 教学点:南京信息职业技术学院 专业:电子信息工程 姓名:陈洁 学号: 014910253034 外文出处:《 Pci System Architecture 》 (用外文写) 附件: 1.外文资料翻译译文;2.外文原文。 指导教师评语: 该生外文翻译没有基本的语法错误,用词准确,没 有重要误译,忠实原文;译文通顺,条理清楚,数量与 质量上达到了本科水平。 签名: 年月日 注:请将该封面与附件装订成册。

附件1:外文资料翻译译文 64位PCI扩展 1.64位数据传送和64位寻址:独立的能力 PCI规范给出了允许64位总线主设备与64位目标实现64位数据传送的机理。在传送的开始,如果回应目标是一个64位或32位设备,64位总线设备会自动识别。如果它是64位设备,达到8个字节(一个4字)可以在每个数据段中传送。假定是一串0等待状态数据段。在33MHz总线速率上可以每秒264兆字节获取(8字节/传送*33百万传送字/秒),在66MHz总线上可以528M字节/秒获取。如果回应目标是32位设备,总线主设备会自动识别并且在下部4位数据通道上(AD[31::00])引导,所以数据指向或来自目标。 规范也定义了64位存储器寻址功能。此功能只用于寻址驻留在4GB地址边界以上的存储器目标。32位和64位总线主设备都可以实现64位寻址。此外,对64位寻址反映的存储器目标(驻留在4GB地址边界上)可以看作32位或64位目标来实现。 注意64位寻址和64位数据传送功能是两种特性,各自独立并且严格区分开来是非常重要的。一个设备可以支持一种、另一种、都支持或都不支持。 2.64位扩展信号 为了支持64位数据传送功能,PCI总线另有39个引脚。 ●REQ64#被64位总线主设备有效表明它想执行64位数据传送操作。REQ64#与FRAME#信号具有相同的时序和间隔。REQ64#信号必须由系统主板上的上拉电阻来支持。当32位总线主设备进行传送时,REQ64#不能又漂移。 ●ACK64#被目标有效以回应被主设备有效的REQ64#(如果目标支持64位数据传送),ACK64#与DEVSEL#具有相同的时序和间隔(但是直到REQ64#被主设备有效,ACK64#才可被有效)。像REQ64#一样,ACK64#信号线也必须由系统主板上的上拉电阻来支持。当32位设备是传送目标时,ACK64#不能漂移。 ●AD[64::32]包含上部4位地址/数据通道。 ●C/BE#[7::4]包含高4位命令/字节使能信号。 ●PAR64是为上部4个AD通道和上部4位C/BE信号线提供偶校验的奇偶校验位。 以下是几小结详细讨论64位数据传送和寻址功能。 3.在32位插入式连接器上的64位卡

男装女装衣服尺码对照表

男装、女装衣服尺码对照表1、男装尺码对照表 2、衬衫尺寸(除个别款尺寸,买前询问)

3、裤装尺码为: 26代表腰围为:“尺” 27代表腰围为:“尺” 28代表腰围为:“尺” 29代表腰围为:“尺” 30代表腰围为:“尺” 31代表腰围为:“尺” 32代表腰围为:“尺” 33代表腰围为:“尺” 34代表腰围为:“尺” 36代表腰围为:“尺” 38代表腰围为:“尺” 40代表腰围为:“尺” 42代表腰围为:“尺” 44代表腰围为:“尺” 50代表腰围为:“尺” 52代表腰围为:“尺” 54代表腰围为:“尺” 4.女装尺码对照表 “女上装”尺码对照表(cm)

“女下装”尺码详细对照表(cm) 其他算法

裤子尺码对照表 26号------1尺9寸臀围2尺632号------2尺6寸臀围3尺2 27号------2尺0寸臀围2尺734号------2尺7寸臀围3尺4 28号------2尺1寸臀围2尺836号------2尺8寸臀围3尺5-6 29号------2尺2寸臀围2尺938号------2尺9寸臀围3尺7-8 30号------2尺3寸臀围3尺040号------3尺0寸臀围3尺9-4尺 31号------2尺4寸臀围3尺142号------3尺1-2寸臀围4尺1-2 牛仔裤尺码对照表 5.尺码换算参照表 女装(外衣、裙装、恤衫、上装、套装) 标准尺码明细 中国 (cm) 160-165 / 84-86 165-170 / 88-90 167-172 / 92-96 168-173 / 98-102 170-176 / 106-110 国际 XS S M L XL

衣服尺寸对照表

衣服尺寸对照表 女款上装 女裤 尺码XL 3XL

女式内裤 女式泳装 女鞋 2尺4 2尺6 2尺7 2尺8 2尺9 3尺 3尺1 (市尺) 尺码 S M L XL 3XL 光脚长度 女裙对应臀围 尺码 XS/32 S/34 M/36 L/38 XL/40 腰围 63-70 70-76 80-86 86-93 93-100

文胸 80,83,85,88,9 85,88,90,93,95,9 90,93,95,98100,1 95,98,100,103,105,1 103,105,108,110,1 胸 女袜胸 68-72(cm) 73-77(cm) 78-82(cm) 83-87(cm) 88-92(cm) 03 08 13 A,B,C,D,DD A,B,C,D,DD,E 型 A,B,C,D,DD,E A,B,C,D,DD,E B,C,D,DD,E 尺 70A,70B,70C 75A,75B,75C, 80A,80B,80C 85A,85B,85C 90B,90C,90D 码 70D,70DD 75D,75DD,75E 80D,80DD,80E 85D,85DD,85E 90DD,90E 英 32A,32B,32C 34A,34B,34C 36A,36B,36C 38A,38B,38C 40B,40C,40D

女式衬衫 数目为12.5公分者为B 罩杯。以此类推计算,即下胸围尺寸为 75公分者,可容许+/-2.5公分的误差,凡 介于72.5?77.5公分者皆可穿75。假设您的下胸围尺寸为 72.5公分,则建议选购75的胸罩而非70,因 为在 穿着时会比较舒适,二来也比较耐穿。 臀围 型号 80-88cm (约 34 85-93cm (约 36 90-98cm (约 38 100-108cm (约 罩杯 【罩杯的尺寸】上胸围尺寸减去下胸围尺寸之数目为 10.0公分者为A 罩杯。上胸围尺寸减去下胸围尺寸之 式 32D,32DD 34D,34DD,34E 36D,36DD,36E 38D,38DD,38E 40DD,40E XL 范围
