Optimal Scheduling of Virtualised Workloads using Learning Algorithms

Delivery time: available within 14 days

69,90 

ISBN: 3330000570
ISBN-13: 9783330000575
Author: Khalid, Omer
Publisher: LAP LAMBERT Academic Publishing
Extent: 224 pp.
Publication date: 10.12.2016
Edition: 1/2016
Format: 1.5 x 22 x 15
Weight: 352 g
Product form: Paperback
Binding: Paperback
Article number: 782492

Description

The introduction of virtualization technology to grid and cloud computing infrastructures has enabled applications to be decoupled from the underlying hardware, providing the benefits of portability, better control over the execution environment, and isolation. The virtualization layer also incurs a performance penalty, however, which can be significant for High Performance Computing (HPC) applications with high work volumes. Virtualization thus brings new requirements for dynamic adaptation of scheduling in order to realize the potential flexibility of faster re-tasking and reconfiguration of workloads. Scheduling approaches are often based on some well-defined system-wide performance metric within the given system's scope of operation. However, such a metric is not optimized for the structure and behavior of specific applications that mix task types, each with its own task precedences and resource requirements. This body of work is concerned with combining virtualization and adaptive scheduling techniques to achieve an optimal balance between task placement flexibility and processing performance on large-scale scientific Grid infrastructure, while offsetting the virtualization overhead.
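To illustrate the trade-off the description refers to, the following is a minimal, hypothetical sketch (not taken from the book) of overhead-aware task placement: a scheduler weighs the estimated virtualization slowdown of a task against the flexibility and isolation benefits of running it in a VM. The overhead factor, the threshold, and the task fields are all illustrative assumptions.

```python
# Illustrative sketch, not the author's algorithm: greedy placement of a
# task either inside a VM or on bare metal, trading the estimated
# virtualization overhead against placement flexibility and isolation.

from dataclasses import dataclass

@dataclass
class Task:
    name: str
    cpu_hours: float       # estimated work volume of the task
    needs_isolation: bool  # e.g. an untrusted or conflicting environment

VM_OVERHEAD = 1.10  # assumed 10% slowdown inside a VM (hypothetical figure)

def place(task: Task) -> str:
    """Return 'vm' or 'bare' for a single task.

    Tasks that need isolation always go to a VM; otherwise, long
    compute-heavy tasks go to bare metal, where the absolute overhead
    penalty (cpu_hours * (VM_OVERHEAD - 1)) outweighs the flexibility
    gained from virtualized placement.
    """
    if task.needs_isolation:
        return "vm"
    penalty_hours = task.cpu_hours * (VM_OVERHEAD - 1.0)
    # The 0.5-hour cutoff is an arbitrary illustrative threshold.
    return "bare" if penalty_hours > 0.5 else "vm"

tasks = [
    Task("mc-simulation", cpu_hours=12.0, needs_isolation=False),
    Task("user-analysis", cpu_hours=0.5, needs_isolation=True),
]
placements = {t.name: place(t) for t in tasks}
```

In this toy model the 12-hour simulation lands on bare metal (its 1.2-hour VM penalty exceeds the threshold), while the short isolated job goes to a VM; the book's contribution concerns making such decisions adaptively at Grid scale rather than with fixed constants.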

About the Author

Dr Omer Khalid studied for a BSc (Hons) in Computer Science at the University of Greenwich, UK, and completed his doctoral and post-doctoral research at CERN, Switzerland, while working on the ATLAS experiment at the Large Hadron Collider (LHC). He is currently a Senior Program Manager at Google UK Ltd.

Manufacturer identification:


OmniScriptum SRL
Str. Armeneasca 28/1, office 1
2012 Chisinau
MD

E-Mail: info@omniscriptum.com
