Computación y Sistemas ISSN: Instituto Politécnico Nacional México


Computación y Sistemas, computacion-y-sistemas@cic.ipn.mx, Instituto Politécnico Nacional, México. Lezama Barquet, Anuar; Tchernykh, Andrei; Yahyapour, Ramin. Performance Evaluation of Infrastructure as Service Clouds with SLA Constraints. Computación y Sistemas, vol. 17, núm. 3, julio-septiembre 2013. Instituto Politécnico Nacional, Distrito Federal, México.

Performance Evaluation of Infrastructure as Service Clouds with SLA Constraints

Anuar Lezama Barquet 1, Andrei Tchernykh 1, and Ramin Yahyapour 2

1 Computer Science Department, CICESE Research Center, Ensenada, BC, Mexico
2 GWDG, University of Göttingen, Göttingen, Germany

{alezama, chernykh}@cicese.edu.mx, ramin.yahyapour@gwdg.de

Abstract. In this paper, we present an experimental study of job scheduling algorithms for the infrastructure as a service (IaaS) cloud model. We analyze different service levels, which are distinguished by the amount of computing power a customer is guaranteed to receive within a time frame and by the price per unit of processing time. We analyze different scenarios for this model, combining a single service level with single and parallel machines. We apply our algorithms in the context of executing real workload traces available to the HPC community. In order to compare performance, we make a joint analysis of several metrics. A case study is given.

Keywords. Cloud computing, infrastructure as a service, quality of service, scheduling.
1 Introduction

The infrastructure as a service (IaaS) cloud model allows users to take advantage of computational power on demand. In this kind of cloud, users create virtual machines (VMs) that execute their jobs on the cloud resources. However, several issues prevent the widespread adoption of this new paradigm. The main concern is the need to provide Quality of Service (QoS) guarantees [1].

The use of Service Level Agreements (SLAs) is a fundamentally new approach to job scheduling, in which schedulers are based on satisfying QoS constraints. The main idea is to provide different levels of service, each addressing a different set of customers for the same services within the same SLA, and to establish bilateral agreements between a service provider and a service consumer that guarantee job delivery time depending on the selected level of service. Basically, SLAs contain information such as the latest finish time of the job, the time reserved for job execution, the number of CPUs required, and the price per time unit. The shifting emphasis of Grids and Clouds towards a service-oriented paradigm has made the SLA a very important concept, but it has also created the problem of satisfying stringent SLAs.

There has been a significant amount of research on various topics related to SLAs: admission control techniques [2]; incorporation of the SLA into the Grid/Cloud architecture [3]; specifications of SLAs [4, 5]; usage of SLAs for resource management; SLA-based scheduling [6]; SLA profits [7]; automatic negotiation protocols [8]; economic aspects associated with the usage of SLAs for service provision [9], etc. Little is known about the worst-case efficiency of SLA scheduling solutions. There are only very few theoretical results on SLA scheduling, and most of them address real-time scheduling with given deadlines.

Baruah and Haritsa [10] discuss the online scheduling of sequential independent jobs on real-time systems. They presented the algorithm ROBUST (Resistance to Overload By Using Slack Time), which guarantees a minimum slack factor for every task. The slack factor f of a task is defined as the ratio of its relative deadline to its execution time requirement; it is a quantitative indicator of the tightness of the task deadline. The algorithm provides an effective processor utilization (EPU) of (f - 1)/f during the overload interval. They show that, given enough processors, online scheduling algorithms can be designed with performance guarantees arbitrarily close to that of optimal uniprocessor scheduling algorithms.

A more complete study is presented in [11] by Schwiegelshohn et al. The authors theoretically analyze the single machine (SM) and parallel machine (PM) models subject to jobs with single (SSL) and multiple service levels (MSL). Their analysis is based on the competitive factor, measured as the ratio of the income the infrastructure provider obtains via the scheduling algorithm to the optimal income. They provide worst-case performance bounds for four greedy acceptance algorithms, named SSL-SM, SSL-PM, MSL-SM, and MSL-PM, and for two restricted acceptance algorithms, MSL-SM-R and MSL-PM-R. All of them are based on an adaptation of the preemptive EDD (Earliest Due Date) algorithm for scheduling jobs with deadlines.
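The slack factor and the EPU guarantee of ROBUST described above are simple ratios; a minimal sketch (the function names are ours, not from [10]):

```python
def slack_factor(relative_deadline: float, exec_time: float) -> float:
    # f: ratio of a task's relative deadline to its execution time requirement,
    # a quantitative indicator of how tight the deadline is
    return relative_deadline / exec_time

def robust_epu(f: float) -> float:
    # Effective processor utilization guaranteed by ROBUST
    # during an overload interval: (f - 1) / f
    return (f - 1.0) / f

# A task allowed to finish up to 3x its execution time after release has
# f = 3, so ROBUST guarantees an EPU of 2/3 under overload.
f = slack_factor(relative_deadline=30.0, exec_time=10.0)
print(f, robust_epu(f))
```

Note that as f grows (looser deadlines), the guaranteed utilization (f - 1)/f approaches 1, matching the intuition that looser deadlines make overload easier to absorb.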
In this paper, we make use of the IaaS cloud model proposed in [11]. To show the practicability and competitiveness of the algorithms, we conduct a comprehensive simulation study of their performance and derivatives. We take into account an issue that is critical for the practical adoption of the scheduling algorithms: we use workloads based on real production traces of heterogeneous HPC systems.

We study two greedy algorithms: SSL-SM and SSL-PM. SSL-SM accepts a new job on a single machine if this job and all previously accepted jobs can be completed in time. SSL-PM accepts jobs considering all available processors of parallel machines.

Key properties of SLAs should be observed to provide benefits for real installations. Since SLAs are often considered successors of the service-oriented real-time paradigm with deadlines, we start with a simple model with a single service level on a single computer and extend it to a single SLA on multiple computers. One of the most basic models of SLA defines the relative deadline as a function of the job execution time with a constant service level parameter. This model does not match every real SLA, but the assumptions are nonetheless reasonable; it is still a valid basic abstraction of SLAs that can be formalized and treated automatically.

We address an online scheduling problem. Jobs arrive one by one, and after the arrival of a new job the decision maker must decide whether to reject the incoming job or schedule it on one of the machines. The problem is online because the decision maker has to decide without information about the following jobs. For this problem, we measure the performance of the algorithms by a set of metrics that includes the competitive factor and the number of accepted jobs.

2 Scheduling Model

2.1 Formal Definition

In this work, we consider the following model. A user submits jobs to a service provider, which has to guarantee some service level (SL). Let S = {S_1, S_2, ..., S_i, ..., S_k} be the set of service levels offered by the provider. For a given service level S_i, the user is charged a cost u_i per unit of

execution time depending on the urgency of the submitted job. u_max = max_i {u_i} denotes the maximum cost. The urgency of a job is given by the slack factor f_i >= 1 of its service level. The total number of jobs submitted to the system is n_r. Each job J_j from the released job set J_r = [J_1, J_2, ..., J_{n_r}] is described by a tuple (r_j, p_j, S_i, d_j): its release date r_j >= 0, its execution time p_j, and its service level S_i. The deadline of each job d_j is calculated at the release of the job as d_j = r_j + f_i * p_j. The maximum deadline is denoted by d_max = max_j {d_j}. The processing time p_j of the job becomes known at time r_j.

Once a job is released, the provider has to decide, before any other job arrives, whether the job is accepted or not. In order to accept job J_j, the provider should ensure that some machine in the system is capable of completing J_j before its deadline. In the case of acceptance, further jobs must not cause job J_j to miss its deadline. Once a job is accepted, the scheduler uses some heuristic to schedule it. Finally, the set of accepted jobs J = [J_1, J_2, ..., J_n] is a subset of J_r, where n is the number of jobs successfully accepted and executed.

2.2 Metrics

We used several metrics to evaluate the performance of our scheduling algorithms and SLAs. In contrast to traditional scheduling problems, classic metrics such as C_max become irrelevant in evaluating the performance of systems scheduled through SLAs. One of the objective functions represents the goal of the infrastructure provider, who wants to maximize his total income. Job J_j with service level S_i generates income u_i * p_j in the case of acceptance and zero otherwise. The competitive factor

c = ( sum_{j=1..n} u_i * p_j ) / V_A*

is defined as the ratio of the total income generated by an algorithm to the optimal income V_A*. Due to the maximization of income, a larger competitive factor is better than a smaller one.
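The deadline rule d_j = r_j + f_i * p_j and the income-based competitive factor can be sketched concretely as follows (a minimal sketch; the class and function names are ours, not from the paper):

```python
from dataclasses import dataclass

@dataclass
class Job:
    release: float    # r_j >= 0, release date
    proc_time: float  # p_j, execution time, known at r_j
    price: float      # u_i, cost per unit of execution time of its SL
    slack: float      # f_i >= 1, slack factor of its SL

    @property
    def deadline(self) -> float:
        # d_j = r_j + f_i * p_j, fixed when the job is released
        return self.release + self.slack * self.proc_time

def income(accepted: list) -> float:
    # An accepted job J_j with service level S_i yields income u_i * p_j
    return sum(j.price * j.proc_time for j in accepted)

def competitive_factor(accepted: list, optimal_income: float) -> float:
    # c = (sum of u_i * p_j over accepted jobs) / V_A*
    return income(accepted) / optimal_income

jobs = [Job(0.0, 4.0, 2.0, 5.0), Job(1.0, 3.0, 2.0, 5.0)]
print(jobs[0].deadline)                # 0 + 5*4 = 20.0
print(competitive_factor(jobs, 20.0))  # (2*4 + 2*3) / 20 = 0.7
```

In practice V_A* is unknown, so (as the next paragraph explains) an upper bound on the optimal income is used in its place, which makes the reported competitive factor a lower bound on the true one.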
Note that in the evaluation of our experiments, we use an upper bound V̂_A* of the optimal income instead of the optimal income itself, as we are, in general, not able to determine the optimal income:

V_A* <= V̂_A* = min( u_max * sum_{j=1..n_r} p_j , u_max * d_max * m ).

The first bound is the sum of the processing times of all released jobs multiplied by the maximum price per unit of execution over all available SLAs. The second bound is the maximum deadline of all released jobs multiplied by the maximum price per unit of execution and by the number of machines m in the system. Due to our admission control policy, the system does not execute jobs whose deadlines cannot be met; therefore, this second bound is also an upper bound on the maximum processing time during which the system can execute work. In our experiments we analyze the SSL-SM and SSL-PM algorithms; since only one SL is used, we do not take u_max into account when calculating the competitive factor.

We also calculate the number of rejected jobs and use it as a measure of the capacity of the system to respond to the incoming flow of jobs. Finally, we calculate the mean waiting time of the jobs within the system as MWT = (1/n) * sum_{j=1..n} (c_j - p_j), where c_j is the completion time of job J_j.

3 Experimental Setup

3.1 Algorithms

In our experiments, we use the SSL-SM and SSL-PM algorithms based on the EDD (Earliest Due Date) algorithm, which gives priority to jobs according to their deadlines. Jobs that have been admitted but not yet completed are kept in a queue, ordered by non-decreasing deadlines. For their execution,

jobs are taken from the head of the queue. When a new job is released, it is placed in the queue according to its deadline. EDD is an optimal algorithm for minimizing lateness on a single machine; in our case, it corresponds to minimizing the number of rejected jobs.

Gupta and Palis [12] showed that no algorithm can have a competitive ratio greater than 1 - (1/f) + ε, with m >= 1 machines and ε > 0 arbitrarily small, for the problem of allocating jobs in a hard real-time scheduling model in which a job must be completed if it was admitted for execution. They proposed an algorithm that achieves a competitive ratio of at least 1 - (1/f) and demonstrated that it is an optimal scheduler for hard real-time scheduling with m machines. The admittance test they propose consists in verifying that all already accepted jobs whose deadlines are greater than that of the incoming job will still be completed before their deadlines.

3.2 Workload

In order to evaluate the performance of SLA scheduling, we performed a series of experiments using traces of HPC jobs obtained from the Parallel Workloads Archive (PWA) [13] and the Grid Workloads Archive (GWA) [14]. These traces are logs from real parallel computer systems, and they give us good insight into how our proposed schemes will perform with real users. The predominance of jobs with low parallelism in real logs is well known. Even though some jobs in the traces require multiple processors, we consider that in our model the machines have enough capacity to process them, so we can abstract from their parallelism. Since we assume that IaaS clouds are a promising alternative to computational centers, we can expect that workloads submitted to clouds will have characteristics similar to those submitted to actual parallel and grid systems. We considered nine traces: DAS2 (University of Amsterdam), DAS2 (Delft University), DAS2 (Utrecht University), DAS2 (Leiden University), KTH, DAS2 (Vrije University), HPC2N, CTC, and LANL.
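Returning to the admittance test of Section 3.1: with preemptive EDD on a single machine, a newly released job can be accepted if and only if every prefix of the deadline-ordered pending jobs still completes on time. A minimal sketch of this check (our own simplification, which assumes all pending jobs are already available at the current time):

```python
def edd_admit(now: float, pending: list, new_job: tuple) -> bool:
    """SSL-SM-style admission test on a single machine.

    pending: (remaining_time, deadline) pairs of accepted, unfinished jobs
    new_job: (remaining_time, deadline) of the arriving job

    Under preemptive EDD with all jobs available at `now`, the schedule is
    feasible iff each job, taken in deadline order, finishes by its deadline.
    """
    t = now
    for remaining, deadline in sorted(pending + [new_job], key=lambda j: j[1]):
        t += remaining
        if t > deadline:
            return False
    return True

print(edd_admit(0.0, [(5.0, 6.0)], (3.0, 10.0)))  # True: 5 <= 6 and 5+3 <= 10
print(edd_admit(0.0, [(5.0, 6.0)], (3.0, 7.0)))   # False: job due at 7 ends at 8
```

One plausible extension to the parallel case (in the spirit of SSL-PM, though the paper does not spell out this exact rule) is to run the same test per machine and accept the job if any of the m machines admits it.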
Details of the log characteristics can be found in the PWA [13] and GWA [14]. To obtain valid statistical values, 30 experiments within a one-week period were simulated for each SLA. We calculated job deadlines based on the real processing times of the jobs.

4 Experimental Results

4.1 Single Machine Model

For the first set of experiments, with a single machine system, we used twelve values of the slack factor: 1, 2, 5, 10, 15, 20, 25, 50, 100, 200, 500, and one further large value. Although we do not expect a real SLA to provide slack factors greater than 50, large values are important to study the expected system performance as the slack factor tends to infinity.

Figures 1-5 show the simulation results of the SSL-SM algorithm. They present the percentage of rejected jobs, the total processing time of accepted jobs, the mean waiting time, the mean number of interruptions per job, and the mean competitive factor.

Fig. 1. Percentage of rejected jobs for the SSL-SM algorithm

Fig. 2. Total processing time for the SSL-SM algorithm

Fig. 3. Mean waiting time of jobs for the SSL-SM algorithm

Figure 1 shows the percentage of rejected jobs for the SSL-SM algorithm. We see that the number of rejected jobs decreases as the slack factor increases. Large values of the slack factor increase the flexibility to accept new jobs by delaying the execution of already accepted ones. In the case when the slack factor is equal to 1, the system cannot accept new jobs until the job in execution is completed. We observe that the percentage of rejected jobs with a slack factor of 1 is a bit lower than with slack factors from 2 to 25. However, this does not mean that this slack factor allows the system to execute more computational work, as we see in Figure 2.

Figure 2 shows the total processing time of accepted jobs for the given slack factors. We see that the processing time increases as the slack factor increases, meaning that the scheduler is able to exploit the increased flexibility of the jobs.

Figure 3 shows the mean waiting time versus the slack factor. It demonstrates that an increase in total processing time causes an increase in waiting time.

We also evaluate the mean number of interruptions per job; these results are shown in Figure 4. We see that for small slack factors the number of interruptions is greater than for larger slack factors. Mean values are below 1 interruption per job. Moreover, for slack factors greater than 10, the number of interruptions per job is stable and varies between 0.2 and 0.3. This fact is important: keeping the number of interruptions low prevents system overhead.

Figure 5 shows the mean competitive factor. It represents the infrastructure provider's objective to maximize his total income. Note that a larger competitive factor is better than a smaller one.
When the slack factor is equal to 1, the competitive factor is at its lowest. Increasing the slack factor improves the competitive factor, and the mean competitive factor reaches its maximum at a slack factor of 5. Past this point, the competitive factor decreases up to a slack factor of 200; we consider that at this point the deadlines of the jobs are much larger than their processing times. Between slack factors of 200 and 500, the competitive factor increases again, because the maximum deadline gets close to the sum of the processing times. When the deadlines of all jobs tend to infinity, the competitive factor is optimal, as expected.

Fig. 4. Mean number of interruptions per job for the SSL-SM algorithm

Fig. 5. Mean competitive factor of the SSL-SM algorithm

Fig. 6. Percentage of rejected jobs for the SSL-PM algorithm

In a real cloud scenario, the slack factor can be dynamically adjusted in response to changes in the configuration and/or the workload. To this end, the historical workload within a given time interval can be analyzed to determine an appropriate slack factor. The time interval for this adjustment should be set according to the dynamic characteristics of the workload and the IaaS configuration.

4.2 Multiple Machine Model

In this section, we present the results of SSL-PM algorithm simulations on two and three machines. We also plot the SSL-SM results to analyze how the system performance changes as the number of machines varies. Figures 6-11 show the percentage of rejected jobs, the total processing time of accepted jobs, the mean waiting time, the mean number of interruptions per job, the efficiency, and the mean competitive factor.

Figure 6 presents the percentage of rejected jobs. It can be seen that an increase in the number of machines has a limited effect on the acceptance of jobs when the slack factor is small. However, larger values of the slack factor have a greater impact on the number of accepted jobs.

Figure 7 shows the total processing time of accepted jobs. The processing time increases

Fig. 7. Total processing time for the SSL-PM algorithm

Fig. 8. Mean waiting time for the SSL-PM algorithm

Fig. 9. Mean number of interruptions for the SSL-PM algorithm

as more machines are added to the system. However, doubling and tripling the processing capacity do not cause the same increase in processing time. This effect can be clearly seen when the slack factor is large. We conclude that an increase in processing capacity is more effective with smaller slack factors.

Figure 8 shows the mean waiting time as the slack factor varies. We see that an increase in total processing time, as a result of larger slack factors, also causes an increase in waiting time. Additionally, adding more machines to the system makes the increase in mean waiting time less significant.

Figure 9 shows the mean number of interruptions per job. We see that an increase in the number of machines increases the number of interruptions. This increase is not considerable and stabilizes as the slack factor increases. The number of interruptions is maximal with a slack factor of 2 for all three models.

Figure 10 shows the execution efficiency. This metric indicates the relative amount of useful work which the system executes during the interval between the release time of the first job and the completion of the last job. We see that the decrease in efficiency, at least for moderate slack factors, depends mainly on the number of machines.

Figure 11 presents the competitive factor as the slack factor varies. We see that for the two and three machine configurations the maximum competitive factor is obtained with a slack factor of 2. As we already mentioned, in the case of a single machine configuration the best competitive

Fig. 10. Execution efficiency for the SSL-PM algorithm

Fig. 11. Competitive factor for the SSL-PM algorithm

Fig. 12. Execution cost per hour

factors are obtained with slack factors of 2 and 5. We can also observe that as the slack factor increases, the competitive factor decreases. This happens until the slack factor becomes large enough to create a significant difference between job deadlines and their processing times. This is clearly seen when the slack factor is 200 for the single machine configuration, and 100 for two and three machines. In the two and three machine configurations, for slack factors greater than 500, the competitive factor almost reaches the optimal value.

4.3 Execution Costs

In the IaaS scenario, cloud providers offer computer resources to customers on a pay-as-you-go basis. The price per time unit depends on the services selected by the customer. This charge depends not only on the price the user is willing to accept, but also on the cost of the infrastructure maintenance. In order to estimate this charge, we propose a tariff function that depends on the slack factor. We first take into account that the provider needs to recover the maintenance cost from the execution of jobs. We assume that the provider pays a flat rate for the use and maintenance of the resources. The total maintenance cost of job processing co_t can be calculated as

co_t = u_u * m * sum_{j=1..n_r} p_j. The cost per time unit co_u can be calculated as co_u = co_t / sum_{j=1..n} p_j, where sum_{j=1..n_r} p_j is the sum of the processing times of all released jobs, u_u is the price per unit of maintenance, m is the number of machines, and sum_{j=1..n} p_j is the sum of the processing times achieved by the algorithm. We consider that u_u is equal to 8.5 cents per hour, which is the price that Amazon EC2 charges for a small processing unit [15].

Figure 12 shows the execution cost per hour as the slack factor varies. As can be seen, the cost of processing jobs with a small slack factor is larger than that of executing jobs with a looser slack factor. Moreover, the costs are larger if fewer machines are used. The reason is that a system with fewer machines and a small slack factor rejects most of the jobs within a given interval, so the execution is costly. Therefore, configurations that execute more jobs have lower costs per unit of execution time. A profit is generated if the price charged per time unit exceeds this cost.

5 Conclusions and Future Work

The use of Service Level Agreements (SLAs) is a fundamentally new approach to job scheduling, in which scheduling is based on the satisfaction of QoS constraints. The main idea is to provide different levels of service, each addressing a different set of customers. While a large number of service levels leads to high flexibility for customers, it also produces a significant management overhead. Hence, a suitable tradeoff must be found and, if necessary, adjusted dynamically. While theoretical worst-case IaaS scheduling models are beginning to emerge, fast statistical techniques applied to real data have been shown empirically to be effective.

In this paper, we presented an experimental study of two greedy acceptance algorithms, namely SSL-SM and SSL-PM, with known worst-case performance bounds. They are based on an adaptation of the preemptive EDD algorithm for job scheduling with different service levels on different numbers of machines. Our study results in several contributions.
First, we identified several service levels to make scheduling decisions with respect to job acceptance; second, we considered and analyzed two test cases, on a single machine and on parallel machines; third, we estimated the cost function for different service levels; finally, we showed that the slack factor can be dynamically adjusted in response to changes in the configuration and/or the workload. To this end, the past workload within a given time interval can be analyzed to determine an appropriate slack factor. The time interval for this adaptation depends on the dynamics of the workload characteristics and the IaaS configuration.

Though our model of IaaS is simplified, it is still a valid basic abstraction of SLAs that can be formalized and treated automatically. In this paper, we explored only a few scenarios of using SLAs. IaaS clouds are usually large scale and vary significantly; it is not possible to satisfy all QoS constraints from the service provider perspective if a single service level is used. Hence, a balance between the number of service levels and the number of resources needs to be found and adjusted dynamically. A system can have several specific service levels (e.g., Bronze, Silver, Gold) and algorithms to keep the system at the QoS specified in the SLA. However, further study of algorithms for multiple service classes and of resource allocation algorithms is required to assess their actual efficiency and effectiveness. This will be the subject of future work to achieve a better understanding of service levels in IaaS clouds. Moreover, other scenarios of the problem, with different types of SLAs and workloads combining jobs with and without SLAs, still need to be addressed. Also, as future work, we will consider the elasticity of slack factors in order to increase profit while providing better QoS to users.
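The tariff computation of Section 4.3 can be sketched numerically. Note that the exact form of the cost expressions is reconstructed from the surrounding definitions, so this sketch rests on that assumption:

```python
def maintenance_cost(u_u: float, m: int, released: list) -> float:
    # co_t = u_u * m * (sum of processing times of all released jobs)
    return u_u * m * sum(released)

def cost_per_time_unit(u_u: float, m: int,
                       released: list, accepted: list) -> float:
    # co_u = co_t / (sum of processing times achieved by the algorithm)
    return maintenance_cost(u_u, m, released) / sum(accepted)

# u_u = 8.5 cents/hour, the Amazon EC2 small-instance price used in the paper.
# A configuration that rejects more work (smaller accepted sum) pays more
# maintenance per executed hour:
print(cost_per_time_unit(8.5, 1, [2.0, 3.0, 5.0], [2.0, 3.0]))       # 85/5 = 17.0
print(cost_per_time_unit(8.5, 1, [2.0, 3.0, 5.0], [2.0, 3.0, 5.0]))  # 85/10 = 8.5
```

This reproduces the qualitative observation in Section 4.3: configurations that execute more of the released work have a lower cost per unit of execution time.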

References

1. Garg, S.K., Gopalaiyengar, S.K., & Buyya, R. (2011). SLA-based Resource Provisioning for Heterogeneous Workloads in a Virtualized Cloud Datacenter. 11th International Conference on Algorithms and Architectures for Parallel Processing (ICA3PP'11), Melbourne, Australia.
2. Wu, L., Garg, S.K., & Buyya, R. (2011). SLA-based admission control for a Software-as-a-Service provider in Cloud computing environments. 11th IEEE/ACM International Symposium on Cluster, Cloud and Grid Computing (CCGrid 2011), Newport Beach, CA, USA.
3. Patel, P., Ranabahu, A., & Sheth, A. (2009). Service Level Agreement in Cloud Computing (Technical Report). Ohio Center of Excellence in Knowledge-enabled Computing.
4. Andrieux, A., Czajkowski, K., Dan, A., Keahey, K., Ludwig, H., Nakata, T., Pruyne, J., Rofrano, J., Tuecke, S., & Xu, M. (2004). Web Services Agreement Specification (WS-Agreement), GFD-R-P.107. Global Grid Forum.
5. Review and summary of cloud service level agreements. (n.d.). Retrieved from -rev2sla.html.
6. Wu, L., Garg, S.K., & Buyya, R. (2011). SLA-Based Resource Allocation for Software as a Service Provider (SaaS) in Cloud Computing Environments. 11th IEEE/ACM International Symposium on Cluster, Cloud and Grid Computing (CCGrid 2011), Newport Beach, CA, USA.
7. Freitas, A.L., Parlavantzas, N., & Pazat, J.L. (2011). Cost Reduction Through SLA-driven Self-Management. Ninth IEEE European Conference on Web Services (ECOWS), Lugano, Switzerland.
8. Silaghi, G.C., Şerban, L.D., & Litan, C.M. (2010). A Framework for Building Intelligent SLA Negotiation Strategies under Time Constraints. Economics of Grids, Clouds, Systems, and Services, Lecture Notes in Computer Science, 6296.
9. Macías, M., Smith, G., Rana, O., Guitart, J., & Torres, J. (2010). Enforcing Service Level Agreements Using an Economically Enhanced Resource Manager. Economic Models and Algorithms for Distributed Systems, Autonomic Systems.
10. Baruah, S.K. & Haritsa, J.R. (1997). Scheduling for overload in real-time systems.
IEEE Transactions on Computers, 46(9).
11. Schwiegelshohn, U. & Tchernykh, A. (2012). Online Scheduling for Cloud Computing and Different Service Levels. IEEE 26th International Parallel and Distributed Processing Symposium Workshops, Shanghai, China.
12. Gupta, B.D. & Palis, M.A. (2001). Online real-time preemptive scheduling of jobs with deadlines on multiple machines. Journal of Scheduling, 4(6).
13. Feitelson, D. (2008). Parallel Workloads Archive.
14. Iosup, A., Li, H., Jan, M., Anoep, S., Dumitrescu, C., Wolters, L., & Epema, D.H. (2008). The Grid Workloads Archive. Future Generation Computer Systems, 24(7).
15. Amazon Services. (2013). Amazon EC2 Pricing.

Anuar Lezama Barquet obtained a degree in Electric and Electronic Engineering from the National Autonomous University of Mexico (UNAM). He received his M.S. in Computer Science from the CICESE Research Center. His interests include parallel computing, scheduling, and cloud computing.

Andrei Tchernykh is a researcher at the Computer Science Department, CICESE Research Center, Ensenada, Baja California, Mexico. From 1975 to 1990 he was with the Institute of Precise Mechanics and Computer Technology of the Russian Academy of Sciences (Moscow, Russia). He received his Ph.D. in Computer Science. At CICESE, he is the coordinator of the Parallel Computing Laboratory. He is a member of the National System of Researchers of Mexico

(SNI), Level II. He leads a number of national and international research projects. He has served as a program committee member of several professional conferences and as a general co-chair of international conferences on parallel computing systems. His main research interests include scheduling, load balancing, adaptive resource allocation, scalable energy-aware algorithms, green grid and cloud computing, eco-friendly P2P scheduling, multi-objective optimization, scheduling in real-time systems, computational intelligence, heuristics, metaheuristics, and incomplete information processing.

Ramin Yahyapour is executive director of the GWDG, University of Göttingen. He has done research in Clouds, Grids, and service-oriented infrastructures for several years. His research interests are in resource management. He is a steering group member and on the Board of Directors of the Open Grid Forum. He has participated in several national and European research projects. He is also a scientific coordinator of the FP7 IP SLA@SOI project and was a steering group member of the CoreGRID Network of Excellence.

Article received on 22/02/2013; accepted on 01/08/2013.