Application Level Resource Scheduling with Optimal Schedule Interval Filling (RS-OSIF) for Distributed Cloud Computing Environments


International Journal of Applied Engineering Research, Volume 12, Number 24 (2017). Research India Publications.

Application Level Resource Scheduling with Optimal Schedule Interval Filling (RS-OSIF) for Distributed Cloud Computing Environments

V. Murali Mohan, Research Scholar, Department of Computer Science and Engineering, Koneru Lakshmaiah Education Foundation, Vaddeswaram, Guntur, Andhra Pradesh, India.

K.V.V. Satyanarayana, Professor, Department of Computer Science and Engineering, Koneru Lakshmaiah Education Foundation, Vaddeswaram, Guntur, Andhra Pradesh, India.

Abstract

Application level resource scheduling in distributed cloud computing is a significant research objective that has attracted the attention of many researchers in recent literature. Minimal resource scheduling failures, robust task completion and fair resource usage are the critical factors of resource scheduling strategies. Hence, this manuscript proposes a scalable resource-scheduling model for distributed cloud computing environments that aims to achieve these scheduling metrics. The proposed model, called "Application Level Resource Scheduling with Optimal Schedule Interval Filling (RS-OSIF)", schedules a resource to each task such that optimal utilization of resource idle time is achieved. The proposed model performs scheduling in hierarchical order: first, optimal idle resource allocation; if no individual resource is found to allocate, it allocates an optimal set of multiple idle resources with considerable schedule-interval filling. This manuscript explores (i) the requirement for resource scheduling strategies, (ii) recent scheduling models found in contemporary literature, (iii) the methods and materials used and the approach of the proposed resource scheduling strategy, and (iv) its advantage over other benchmark models. The performance analysis of the proposal is done through cross validation of metrics such as load (window of tasks) versus loss (number of tasks failed to accomplish), task completion optimality and process overhead in resource scheduling.
The experimental results evince that the proposed model is scalable and robust under the adopted metrics.

Keywords: Cloud Computing, Distributed Systems, Resource Scheduling, Request window, Schedule interval filling

INTRODUCTION

Cloud computing has become more crucial [1], [2] due to the phenomenal growth of internet usage and of big data, with data in high volume and maximal access speed. The cloud computing environment facilitates users with Infrastructure as a Service (IaaS), Platform as a Service (PaaS) and Software as a Service (SaaS), which is a new dimension of executing computer-aided applications [3]. Users can access these services through the internet under custom-defined business models such as pay-per-use and subscribe-and-use [4], [5]. The cloud infrastructure consists of a huge number of resources in alliance; hence a few of the factors found in cloud computing are often similar to those of other distributed computing environments such as grid computing. The process of resource management using virtualization, which is specific to cloud computing [6], enables scheduling the resources as a utility [7]. Hence, pay-per-use is possible only in cloud computing [2]. The overwhelming demand for cloud computing evinces the need for optimal scheduling of resources. This can be a significant challenge, since the load of tasks triggered by users fluctuates dynamically, and if resource scheduling is not optimal, users find it difficult to focus on their own business interests [8]. Optimal resource scheduling is found to be the most significant factor in Service Level Agreements (SLAs) executed between providers and users in cloud computing environments [9]. SLA execution failures, often evinced due to insignificant resource scheduling, cost providers revenue [10]. Technically, cloud resource scheduling is a challenging problem for power-efficiency requirements in the provider's context and for QoS in the context of users [11]. Significant contributions to cloud resource scheduling strategies are noticed in contemporary literature [2], [3], [12].
According to these contributions, resource scheduling optimization is confirmed to be NP-Hard [13], [14]. Hence, current researchers aim to optimize resource scheduling such that the complexity is linear. In this context, this manuscript proposes a novel cloud resource scheduling strategy whose process overhead is evinced as linear.

RELATED WORK

Contemporary models related to resource scheduling in cloud computing environments are briefed in this section. A detailed review of scheduling strategies related to cloud computing was explored in [15]. The scheduling strategies specific to resource scheduling are considered in this section. The community-aware resource scheduling [16]

aimed to optimize the buffering time of the resource and the elapsed schedule time. This model reschedules a job in the queue if it is not addressed by the resources assigned earlier. Guan et al. [17] proposed a resource scheduling technique that follows queuing theory and tends to escalate the mean idle time of independent and dynamic jobs that are bound to deadlines. For executing suitable jobs and dequeuing, three workload-scheduling policies are proposed. The method depicted by Altino et al. [18] aimed to schedule resources to fulfill the needs of independent workloads. The criticality of this model is the scheduling of resources under SLA with power-failure awareness and optimal power utilization. Fault tolerance through decision making about system failures and shared node control is another phenomenal proactive methodology that uses Virtual Machines. Such an approach is depicted in [19], where decisions are made through feedback to schedule resources with minimal contention. This model relies on preempting resources from scheduled jobs and rescheduling them to achieve contention-free resource scheduling. Estimating the arrival rate and completion time of jobs are two specific factors considered by the resource scheduling model depicted in [20]. For this, the depicted model uses a Hadoop cluster that represents a set of interconnected resources. The objective of the model is to comprise minimal allocation of resources to all active users. Other contemporary models that schedule resources based on job priority are proposed in [21], [22], [23], and [24]. The model depicted in [21], referred to as the dynamic priority scheduling algorithm (DPSA), tends to resolve the issue of resource scheduling against service requests instead of tasks.
User requests related to services are analyzed and grouped into units of tasks based on their properties, such as the required resources and their priority. A similar objective is considered by the model depicted in [22], which monitors user requirements through hypervisor-level request monitoring and balances the resources by allocating them dynamically. The model depicted in [23] opted for dynamically virtualizing the resources to improve server usage. The metric skewness is adopted to estimate the disproportionate usage of resources, which helps in load balancing and hotspot mitigation. The resource scheduling strategy depicted in [25] aimed to generate the maximum possible revenue by allocating resources to the workloads having the maximum resource requirements and limited buffering time. Yiming et al. [26] proposed a hierarchical distributed loop self-scheduling policy to ensure load balancing by using a weighted self-scheduling approach in varied cloud environments. The attributes that entail job priority and resource requirements were used in [24] to sort the jobs for resource scheduling. The common objective of all these priority-based scheduling models is minimal completion time. Another contemporary model, which schedules resources based on the SLA between provider and user, is depicted in [27]. The considered metrics of the SLA are latency, throughput and cost. This model relies on the Minimized Geometric Buchberger Algorithm, which is a stochastic integer programming method. A multi-objective scheduling strategy is depicted in [28], which aims to schedule resources against scientific workloads. The critical objectives of that model are optimal scheduling with minimal search time. Ordering web-based applications in cloud-based data center environments was explored in MUSE [29] and the application placement controller [30]. MUSE replicates the deployed web applications and uses a dispatch algorithm in the frontend to serve each request rationally, reducing the number of infrequently used servers.
The model called the application placement controller balances the load; for this, the model uses network flow algorithms. Chen G et al. [31] proposed a model that integrates server allocation and load dispatching for connection-oriented services such as Windows Messenger. These contributions [29], [30], [31] are not dependent on virtual machines; hence they use a multi-tier architecture to organize the applications and also use a dispatch algorithm in the frontend to balance the load. Compared to these models, our approach rationally virtualizes the application infrastructure and further schedules these virtual machines as resources. Another context of cloud computing is application-oriented optimal data tracking, which is also often referred to as data locality. The cloud service called MapReduce [32] is one that is often used in this context of optimal data tracking. Isard M et al. [33] introduced a scheduling strategy called Quincy that schedules tasks under metrics like data-locality maximization and fair allocation of resources. A similar contribution is found in [34], which aimed to balance the execution time taken for the data tracking process. The scheduling algorithm devised in [35] schedules jobs under dynamic priorities to achieve fair allocation of resources. Our work is similar to pure-software low-cost solutions [29], [31], [34], [35], [36], [37], [38], [39], being an application level resource scheduling strategy. Unlike the contemporary models observed in the literature, the resource scheduling algorithm proposed in this manuscript is generalized and not dependent on any hardware resources or cloud services, an approach that was primarily attempted and successfully materialized in [40]. The other critical factor is that the model proposed here composes a set of tasks with similar resource requirements as a request window. To the best of our knowledge gained from the review of contemporary literature, this manuscript is the first contribution that studies scheduling of resources for similar tasks as one unit.
The concept of accumulating similar tasks as one window to optimize resource utilization is adopted in this manuscript.

RESOURCE SCHEDULING WITH SCHEDULE INTERVAL FILLING FOR CLOUD COMPUTING

The Resource Scheduling with Optimal Schedule Interval Filling (RS-OSIF) proposed in this manuscript functions as a frontend to the Resource Allocation Controller. Initially, the set of similar tasks triggered is pooled as a window. Schedule interval filling can be defined as the usage of the interval time between a pair of resource-scheduled times in sequence. The scheduling strategy performs the search for an optimal resource for a given task window in hierarchical order, as follows. A control frame for each triggered task window (hereafter referred to as a window) carries the requirements, such as the expected resource, the time to engage that resource, the size of the window, the window arrival time and its completion time. The arrival time of a request window is the aggregate of the arrival time of its control frame, the time required to process the control frame, and the time required for the window to reach the resource allocation controller (Eq. 1):

a(w) = t_cf(w) + p_cf(w) + t_w    (Eq. 1)

// a(w) is the arrival time of window w; t_cf(w) is the arrival time of the control frame cf; p_cf(w) is the time required to process cf; t_w is the time required for the window to reach the resource allocation controller; an elapsed threshold ε is also defined.

Using the requirements and priorities obtained from the control frame, the proposed RS-OSIF schedules the resources, as explored in the following sections.
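The control-frame fields and Eq. 1 can be sketched as a small data structure. This is a minimal Python sketch (the paper reports a Java implementation); all field names are illustrative, not taken from the paper:

```python
from dataclasses import dataclass

@dataclass
class ControlFrame:
    """Requirements carried for one task window (field names are illustrative)."""
    expected_resource: str   # resource the window needs
    engage_time: float       # time to engage that resource
    window_size: int         # number of similar tasks pooled in the window
    t_cf: float              # arrival time of the control frame
    p_cf: float              # time required to process the control frame
    t_w: float               # time for the window to reach the controller
    completion_time: float   # expected completion time c(w) of the window

    def arrival_time(self) -> float:
        # Eq. 1: a(w) = t_cf(w) + p_cf(w) + t_w
        return self.t_cf + self.p_cf + self.t_w

cf = ControlFrame("vm-small", 5.0, 12, 2.0, 0.5, 1.5, 40.0)
print(cf.arrival_time())  # 2.0 + 0.5 + 1.5 = 4.0
```

The arrival time is what the scheduler later compares against the begin time of a candidate idle interval, so it is computed once per control frame.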
RS-OSIF Scheduling Strategy

The Resource Allocation Controller executes RS-OSIF to perform resource allocation for the window represented by the arrived control frame, as follows. Adaptability to the requirements, and an idle time of the resource that suits completing the task window, are the two standards followed by the proposed resource scheduling strategy. RS-OSIF, upon failure to identify an individual resource that meets the scheduling criteria, pools a minimal set of resources to meet the criteria; if that fails, it selects one or more resources with maximal scheduling intervals (the idle time between a pair of schedule times in sequence) and schedules them to fulfill the requirements of the arriving window. If neither of these cases succeeds, it segments the window into a minimum number of windows such that resource scheduling succeeds under the specified factors. The resource allocation to the target window at scheduling intervals, which is the third level of the proposed scheduling hierarchy, proceeds in the following steps:

Schedule a resource to the target window w if it is available with a scheduling interval between two windows w_k and w_l expected to arrive at different times, such that:

o b(w_k, w_l) < a(w) // the begin time b(w_k, w_l) of the scheduling interval between w_k and w_l is less than the arrival time a(w) of the window w

o e(w_k, w_l) > c(w) + δ // the end time e(w_k, w_l) of the interval is greater than the completion time c(w) of the tasks in window w, where δ is the defined completion-time offset

If the above criteria are not met, then select a minimal set of resources that are already scheduled and have scheduling intervals such that:

o the scheduling-interval begin times of all the selected compatible resources are identical and less than the arrival time of the window, and the sum of the scheduling intervals is greater than the completion time of the tasks in the given window. If such resources are found, pool all of them and schedule them to the target window.

If these criteria also fail, then segment the target window into two and execute RS-OSIF on each window.
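The interval conditions above reduce to a single feasibility predicate: an idle interval must begin early enough (within the elapsed threshold ε of the window's arrival) and be long enough to cover the window's completion time plus the offset δ. A minimal Python sketch, with parameter names (`eps`, `delta`) assumed from the reconstruction above:

```python
def interval_fits(b, e, arrival, completion, eps=0.0, delta=0.0):
    """Return True if the idle interval [b, e] can host the task window.

    b, e       : begin/end times of the resource's idle (schedule) interval
    arrival    : arrival time a(w) of the task window
    completion : expected completion time c(w) of the window's tasks
    eps        : elapsed threshold applied to the interval's begin time
    delta      : completion elapsed offset applied to c(w)
    """
    starts_in_time = (b + eps) <= arrival      # interval begins before the window arrives
    long_enough = (e - b) >= (completion + delta)  # interval covers the work
    return starts_in_time and long_enough

print(interval_fits(0, 50, arrival=10, completion=30))   # True
print(interval_fits(20, 50, arrival=10, completion=30))  # False: begins after arrival
```

The same predicate is reused at each level of the hierarchy; only the candidate set (single resource, pooled idle times, pooled schedule intervals) changes.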
The strategy of RS-OSIF is explored with mathematical notations and algorithm flow in the following sections.

Pseudo representation of the scheduling algorithm RS(w, cf)
Begin
1. Let cf be the control frame of the respective window w
2. r := ∅ // vector of optimal resources, initially empty
3. r := RS-OSIF(cf, R) // invoke the method that tracks an optimal resource under the three levels of RS-OSIF, meeting the criteria of the requirements found in cf for the window w; here R is the set of available resources
4. If (r = ∅) Begin
   a. Partition w into two windows w_k, w_l and apply RS-OSIF on each, such that the control frame cf represents both windows
   b. RS(w_k, cf) // invoke the main method for the first part of the window
   c. RS(w_l, cf) // invoke the main method for the second part of the window
5. End // of line 4
6. Else Begin
   a. If the size of r is one, then schedule that resource to the window w represented by cf
   b. Else pool all the resources as one unit and schedule them to the window
   c. Exit
7. End // of line 6
8. End // of the function
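The recursive split-on-failure structure of RS(w, cf) can be sketched as follows. This is a minimal Python sketch (the paper reports a Java implementation); `select_resources` stands in for the three-level RS-OSIF search, and the `min_size` guard is an assumption added so the recursion terminates on unschedulable windows:

```python
def rs(window, select_resources, schedule, min_size=1):
    """RS(w): find resources for `window`; on failure, split it in two and recur.

    window           : list of similar tasks (the request window)
    select_resources : callable(window) -> list of resources; empty list on failure
    schedule         : callable(window, resources) -> None; records the allocation
    min_size         : smallest window that may still be split (assumed guard)
    Returns the list of sub-windows that could not be scheduled.
    """
    chosen = select_resources(window)
    if chosen:
        # A single resource is scheduled directly; several are pooled as one unit.
        schedule(window, chosen)
        return []
    if len(window) <= min_size:
        return [window]              # cannot split further; report the loss
    mid = len(window) // 2           # partition w into w_k and w_l
    return (rs(window[:mid], select_resources, schedule, min_size) +
            rs(window[mid:], select_resources, schedule, min_size))
```

Because a window is only segmented after the full three-level search fails, the splitting happens on demand, which is why the paper argues the segmentation overhead stays minimal.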

Pseudo representation of the optimal resource selection algorithm RS-OSIF(cf, R)
Begin
1. er := ∅ // an empty vector that holds the eligible resources identified during the process
2. r := ∅ // an empty vector that holds the optimal resources to schedule found in the process
3. For each {r_i ∈ R} Begin
4.   If (b(int_{r_i}) + ε ≤ a(w)) and ((e(int_{r_i}) − b(int_{r_i})) ≥ c(w) + δ) Begin // the begin b(int_{r_i}) of the next idle time frame, summed with the defined elapsed threshold ε, is less than the arrival time a(w); in addition, the total idle time of the resource r_i (the absolute difference between the end and begin of the idle time) is greater than the expected completion time c(w) of the given task window summed with the defined completion elapsed offset δ
5.     r := r ∪ {r_i}
6.     Break the loop // in line 3
7.   End // of the condition in line 4
8. End // of the loop in line 3
9. If r is not empty, return r // completion of the method at the first level of the hierarchy
10. For each {r_i ∈ R} Begin
11.   If (b(int_{r_i}) + ε ≤ a(w)) Begin
      a. er := er ∪ {r_i}
12.   End // of line 11
13. End // of line 10
14. If er is not empty Begin
    a. Sort er in descending order of idle time
    b. sint := 0 // aggregate of the idle times observed for the selected resources in r
15. For each {r_i ∈ er} Begin
    a. r := r ∪ {r_i}
    b. sint := sint + (e(int_{r_i}) − b(int_{r_i}))
    c. If sint ≥ c(w) + δ Begin
    d.   Return r // completion of the method at the second level of the hierarchy
16. End // of line 15
17. End // of line 14
18. er := ∅ // empty the vector er
19. For each {r_i ∈ R} Begin
20.   If (b(s_{r_i}) + ε ≤ a(w)) Begin // if the begin of the schedule interval of the resource r_i is less than the arrival time of the window w
      a. er := er ∪ {r_i}
21.   End // of line 20
22. End // of line 19
23. If er is not empty Begin
    a. Sort er in descending order of schedule intervals
24. For each {r_i ∈ er} Begin
    a. r := r ∪ {r_i}
    b. sint := sint + (e(s_{r_i}) − b(s_{r_i}))
    c. If sint ≥ c(w) + δ Begin
    d.   Return r // completion of the method at the third level of the hierarchy
25. End // of line 24
26. End // of line 23
27. Return r
28. End // of the method
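The hierarchical search above can be sketched compactly. This is a minimal Python sketch under assumptions: resources are modeled as a name-to-interval map, and the second and third levels of the pseudocode (which differ only in whether idle times or schedule intervals are pooled) are treated uniformly as one pooling pass:

```python
def rs_osif(arrival, completion, resources, eps=0.0, delta=0.0):
    """Sketch of the RS-OSIF search (names and data model are illustrative).

    arrival, completion : a(w) and c(w) for the task window
    resources           : dict of name -> (b, e) idle/schedule interval
    Returns a list of resource names to pool, or [] on failure
    (the caller then segments the window and retries).
    """
    need = completion + delta
    # Level 1: a single resource whose idle interval covers the whole window.
    for name, (b, e) in resources.items():
        if b + eps <= arrival and (e - b) >= need:
            return [name]
    # Levels 2/3: pool resources whose intervals begin before the window
    # arrives, largest interval first, until the summed time covers the need.
    eligible = [(e - b, name) for name, (b, e) in resources.items()
                if b + eps <= arrival]
    eligible.sort(reverse=True)  # descending by interval length
    pooled, total = [], 0.0
    for length, name in eligible:
        pooled.append(name)
        total += length
        if total >= need:
            return pooled
    return []  # failure at every level

R = {"r1": (0, 10), "r2": (0, 25), "r3": (5, 20)}
print(rs_osif(arrival=8, completion=20, resources=R))  # ['r2'] (level 1)
print(rs_osif(arrival=8, completion=40, resources=R))  # ['r2', 'r3'] (pooled)
```

Each level is only evaluated when the previous one fails, which matches the paper's argument that the conditional hierarchy keeps the scheduling overhead linear in practice.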
To perform resource scheduling, RS-OSIF first attempts to track a possible optimal resource; if that fails, it segments the window into two windows and performs resource scheduling for each partition. Since a window is segmented only on demand, the segmentation process claims minimal overhead.

EXPERIMENTAL SETUP AND EMPIRICAL ANALYSIS

The performance of RS-OSIF is assessed through a simulation study performed on PlanetLab [41], which is used to simulate the distributed cloud computing environment with a stream of tasks and rationally virtualized multiple resources. The performance of RS-OSIF is assessed by the metrics task load versus resource allocation failure, task load versus task completion optimality, and scheduling process overhead. The values observed for these metrics in the simulation study of RS-OSIF are compared to the results observed for the benchmark models A Framework for Resource Allocation Strategies in Cloud Computing Environment (FRAS) [38] and the analytic hierarchy process for task scheduling and resource allocation in cloud computing environment (AHP) [39]. The parameters used in the simulation environment are as follows (see Table 1).

Table 1: Parameters used in simulation

Number of users: 125
Number of resources and their virtualizations: 155
Range of tasks pooled to form a request window: 11 to 25 similar tasks
Range of million instructions per request window: 0.1 to 1
Range of task priorities: 5 to 15
Elapsed threshold values used: 0.05% of actual

The similar tasks were pooled as windows in the range of 11 to 25 tasks per window. The proposed RS-OSIF is implemented in Java and deployed as the frontend of the simulation. The resources scheduled and the task completion status were logged along with the RS-OSIF execution flow. The execution-flow logs were used to estimate the process overhead, and the logs of scheduled resources and task completion state were used to assess task load versus resource allocation failures and task completion optimality. The results evinced for these metrics at divergent task loads were compared with the results obtained from the contemporary models FRAS and AHP. The comparison of load versus resource scheduling failures observed for RS-OSIF, FRAS and AHP is analyzed and represented in Figure 1 as a line chart, which concludes that the proposed model reduced scheduling failures by 39% and 28% compared to FRAS and AHP respectively. The task completion optimality observed for RS-OSIF and the other two models is analyzed and compared in Figure 2. The comparison of task completion optimality observed for all three models evinces that RS-OSIF maximizes task completion optimality by 31% and 28% with respect to FRAS and AHP. The process overhead observed against the request-window load (see Figure 3) is evinced as linear in the case of RS-OSIF, whereas in the other two cases the process overhead is nonlinear.

Table 3: Task completion ratio against window load as a set of instructions in millions: FRAS | AHP | RS-OSIF

Figure 1: Request window load versus resource allocation failures.
Table 2: Resource scheduling failure ratio against window load as a set of instructions in millions: FRAS | AHP | RS-OSIF

Figure 2: Request window load versus task completion optimality.

The ratio of request-window loss against the request-window load is evinced in Figure 1. The request-window load is normalized to a value between 0 and 1, and is actually the number of pools of tasks (windows) per unit of time. The experimental study indicates that RS-OSIF significantly defused window loss compared to the other two models (see Figure 1 and Table 2). Hence, high task accomplishment is observed for RS-OSIF (see Figure 2 and Table 3). The conditional execution of the levels of the hierarchical order followed by RS-OSIF, the allocation of resources to the pool of tasks, and the pooling of more than one resource to fulfill the need of a task window evince the process overhead as linear (see Figure 3).

Table 4: Process overhead ratio observed against window load as instructions per window in millions: FRAS | AHP | RS-OSIF

Figure 3: Process overhead versus request window load.

Figure 4: Resource utilization ratio in million instructions per second.

The resource utilization ratio is also assessed (see Table 5 and Figure 4) for the proposed model and the other two models considered in the experiments. The utilization ratio is measured in million instructions per second (MIPS), which is the benchmark standard proposed by the Standard Performance Evaluation Corporation [42].

Table 5: Resource utilization ratio (million instructions per second): FRAS | AHP | RS-OSIF

CONCLUSION

This manuscript proposed a resource scheduling algorithm for distributed cloud computing environments. The proposed model, Resource Scheduling with Optimal Schedule Interval Filling (RS-OSIF), considers similar tasks as one unit of window and schedules resources according to the priorities of each task in the task window and the availability of resource idle time. The proposed model schedules the resources in hierarchical order.
In the first level of the hierarchy, it tracks an idle resource that fulfills the priorities of the task window; if that fails, it tracks a set of idle resources and pools them to fulfill the need; if that fails, it tracks one or more resources with compatible scheduling intervals, pools them, and schedules them to the respective task window. If none of the above criteria of the hierarchy is met, it reforms the task window such that the available resources in the current context can fulfill its requirements. The experimental study evinces that the proposed model is robust in resource scheduling, with optimal task completion time and minimal resource allocation failures. Since the allocation strategy is performed in hierarchical order and the execution of each level in the hierarchy is conditional, the computational overhead is found to be linear. Maximal resource utilization with minimal virtual machines and less computational overhead is observed, since the resources are allocated to the pool of tasks instead of to individual tasks. Future research can consider composing resources for the pool of divergent jobs (those needing to execute in sequence or in parallel) involved in each task to achieve optimal resource scheduling.

REFERENCES

[1] Loshin, David. Big data analytics: from strategic planning to enterprise integration with tools, techniques, NoSQL, and graph. Elsevier.
[2] Toosi, Adel Nadjaran, Rodrigo N. Calheiros, and Rajkumar Buyya. "Interconnected cloud computing environments: Challenges, taxonomy, and survey." ACM Computing Surveys (CSUR) 47.1 (2014): 7.
[3] Zhang, Qi, Lu Cheng, and Raouf Boutaba. "Cloud computing: state-of-the-art and research challenges." Journal of Internet Services and Applications 1.1 (2010).
[4] Foster, Ian, et al. "Cloud computing and grid computing 360-degree compared." Grid Computing Environments Workshop, GCE'08. IEEE.
[5] Xu, Fei, et al. "Managing performance overhead of virtual machines in cloud computing: A survey, state of the art, and future directions." Proceedings of the IEEE (2014).
[6] Grandison, Tyrone, et al. "Towards a formal definition of a computing cloud." World Congress on Services (SERVICES-1). IEEE.
[7] Buyya, Rajkumar, et al. "Cloud computing and emerging IT platforms: Vision, hype, and reality for delivering computing as the 5th utility." Future Generation Computer Systems 25.6 (2009).
[8] Armbrust, Michael, et al. "A view of cloud computing." Communications of the ACM 53.4 (2010).
[9] Morshedlou, Hossein, and Mohammad Reza Meybodi. "Decreasing impact of SLA violations: a proactive resource allocation approach for cloud computing environments." IEEE Transactions on Cloud Computing 2.2 (2014).
[10] Dikaiakos, Marios D., et al. "Cloud computing: Distributed internet computing for IT and scientific research." IEEE Internet Computing 13.5 (2009).
[11] Baliga, Jayant, et al. "Green cloud computing: Balancing energy in processing, storage, and transport." Proceedings of the IEEE 99.1 (2011).
[12] Heilig, Leonard, and Stefan Voß. "A scientometric analysis of cloud computing literature." IEEE Transactions on Cloud Computing 2.3 (2014).
[13] Genez, Thiago A. L., Luiz F. Bittencourt, and Edmundo R. M. Madeira.
"Workflow scheduling for SaaS/PaaS cloud providers considering two SLA levels." Network Operations and Management Symposium (NOMS), 2012 IEEE. IEEE, 2012.
[14] Xu, Jielong, et al. "Enhancing survivability in virtualized data centers: a service-aware approach." IEEE Journal on Selected Areas in Communications (2013).
[15] Mohan, V. Murali, and K. V. V. Satyanarayana. "The Contemporary Affirmation of Taxonomy and Recent Literature on Workflow Scheduling and Management in Cloud Computing." Global Journal of Computer Science and Technology 16.1 (2016).
[16] Huang, Ye, et al. "Exploring decentralized dynamic scheduling for grids and clouds using the community-aware scheduling algorithm." Future Generation Computer Systems 29.1 (2013).
[17] Le, Guan, Ke Xu, and Junde Song. "Dynamic resource provisioning and scheduling with deadline constraint in elastic cloud." Service Sciences (ICSS), 2013 International Conference on. IEEE, 2013.
[18] Sampaio, Altino M., and Jorge G. Barbosa. "Dynamic power- and failure-aware cloud resources allocation for sets of independent tasks." Cloud Engineering (IC2E), 2013 IEEE International Conference on. IEEE, 2013.
[19] Li, Jiayin, et al. "Adaptive resource allocation for preemptable jobs in cloud systems." Intelligent Systems Design and Applications (ISDA), International Conference on. IEEE.
[20] Rasooli, Aysan, and Douglas G. Down. "An adaptive scheduling algorithm for dynamic heterogeneous Hadoop systems." Proceedings of the 2011 Conference of the Center for Advanced Studies on Collaborative Research. IBM Corp., 2011.
[21] Lee, Zhongyuan, Ying Wang, and Wen Zhou. "A dynamic priority scheduling algorithm on service request scheduling in cloud computing." Electronic and Mechanical Engineering and Information Technology (EMEIT), 2011 International Conference on. Vol. 9. IEEE, 2011.
[22] Hwang, Jinho, and Timothy Wood. "Adaptive dynamic priority scheduling for virtual desktop infrastructures." Proceedings of the 2012 IEEE 20th International Workshop on Quality of Service. IEEE Press, 2012.
[23] Xiao, Zhen, Weijia Song, and Qi Chen.
"Dynamic resource allocation using virtual machines for cloud computing environment." IEEE Transactions on Parallel and Distributed Systems 24.6 (2013).
[24] Wu, Xiaonian, et al. "A task scheduling algorithm based on QoS-driven in cloud computing." Procedia Computer Science 17 (2013).
[25] Jain, Navendu, et al. "Near-optimal scheduling mechanisms for deadline-sensitive jobs in large computing clusters." ACM Transactions on Parallel Computing 2.1 (2015): 3.
[26] Han, Yiming, and Anthony T. Chronopoulos. "A hierarchical distributed loop self-scheduling scheme for cloud systems." Network Computing and Applications (NCA), IEEE International Symposium on. IEEE.
[27] Li, Qiang. "Applying Stochastic Integer Programming to Optimization of Resource Scheduling in Cloud

Computing." JNW 7.7 (2012).
[28] Zhang, Fan, et al. "Multi-objective scheduling of many tasks in cloud platforms." Future Generation Computer Systems 37 (2014).
[29] Chase, Jeffrey S., et al. "Managing energy and server resources in hosting centers." ACM SIGOPS Operating Systems Review 35.5 (2001).
[30] Tang, Chunqiang, et al. "A scalable application placement controller for enterprise data centers." Proceedings of the 16th International Conference on World Wide Web. ACM, 2007.
[31] Chen, Gong, et al. "Energy-Aware Server Provisioning and Load Dispatching for Connection-Intensive Internet Services." NSDI.
[32] Guo, Zhenhua, and Geoffrey Fox. "Improving MapReduce performance in heterogeneous network environments and resource utilization." Cluster, Cloud and Grid Computing (CCGrid), IEEE/ACM International Symposium on. IEEE.
[33] Isard, Michael, et al. "Quincy: fair scheduling for distributed computing clusters." Proceedings of the ACM SIGOPS 22nd Symposium on Operating Systems Principles. ACM.
[34] Bobroff, Norman, Andrzej Kochut, and Kirk Beaty. "Dynamic placement of virtual machines for managing SLA violations." Integrated Network Management (IM'07), 10th IFIP/IEEE International Symposium on. IEEE.
[35] Das, Tathagata, et al. "LiteGreen: Saving Energy in Networked Desktops Using Virtualization." USENIX Annual Technical Conference.
[36] Agarwal, Yuvraj, Stefan Savage, and Rajesh Gupta. "SleepServer: A software-only approach for reducing the energy consumption of PCs within enterprise environments." (2010).
[37] Setty, S., B. Braun, V. Vu, A. J. Blumberg, B. Parno, and M. Walfish. Proceedings of the 8th ACM European Conference on Computer Systems, EuroSys.
[38] Arfeen, M. Asad, Krzysztof Pawlikowski, and Andreas Willig. "A framework for resource allocation strategies in cloud computing environment."
Computer Software and Applications Conference Workshops (COMPSACW), 2011 IEEE 35th Annual. IEEE, 2011.
[39] Ergu, Daji, et al. "The analytic hierarchy process: task scheduling and resource allocation in cloud computing environment." The Journal of Supercomputing (2013).
[40] Mohan, V. Murali, and K. V. V. Satyanarayana. "Efficient task scheduling strategy towards QoS aware optimal resource utilization in cloud computing." Journal of Theoretical and Applied Information Technology 80.1 (2015): 152.
[41] Chun, Brent, et al. "PlanetLab: an overlay testbed for broad-coverage services." ACM SIGCOMM Computer Communication Review 33.3 (2003).
[42] Standard Performance Evaluation Corporation. SPEC Benchmarks (2000).