Abstract
Green cloud computing attracts significant attention from both academia and industry. One of the major challenges involved is to provide a high level of Quality of Service (QoS) in a cost-effective way for the end users and in an energy-efficient manner for the cloud providers. To this end, this paper presents an energy-efficient, QoS-aware and cost-effective scheduling strategy for real-time workflow applications in cloud computing systems. The proposed approach utilizes per-core Dynamic Voltage and Frequency Scaling (DVFS) on the underlying heterogeneous multi-core processors, as well as approximate computations, in order to fill in schedule gaps. At the same time, it takes into account the effects of input error on the processing time of the component tasks. Our goal is to provide timeliness and energy efficiency by trading off result precision, while keeping the result quality of the completed jobs at an acceptable standard and the monetary cost required for the execution of the jobs at a reasonable level. The proposed scheduling heuristic is compared to two other baseline policies, under the impact of various QoS requirements. The simulation experiments reveal that our approach outperforms the other examined policies, providing promising results.
Introduction
As cloud computing continues to gain momentum, providing a high level of Quality of Service (QoS) in a cost-effective way for the end users and in an energy-efficient manner for the cloud providers is an ongoing challenge that gathers significant attention from both academia and industry. A Service Level Agreement (SLA) is commonly required between the end users and a cloud provider. It is a contract between the two parties that defines the type of the provided service, the QoS requirements, as well as the adopted pricing scheme [1].

Compared to other data center infrastructure components, processors typically consume the greatest amount of energy [2–4]. A widely used power management method is the Dynamic Voltage and Frequency Scaling (DVFS) technique. DVFS allows the dynamic adjustment of the supply voltage and operating frequency (i.e., speed) of a processor, based on the workload conditions [5,6]. Modern multi-core processor architectures incorporate voltage regulators for each integrated core, allowing per-core DVFS, so that each core can operate at a different voltage and frequency level from the other cores of the same processor [7]. While this provides flexibility and better energy efficiency, it involves higher control complexity, especially in the case of clouds where the heterogeneous physical resources are usually virtualized and managed by the hypervisor (Virtual Machine Monitor — VMM) [8].

With the rapid growth of cloud computing, there is a dramatic increase in the number and variety of applications processed on such platforms. They encompass a wide spectrum of sectors and activities, ranging from social media and big data analytics, to healthcare monitoring and financial applications [9,10]. The workload generated by such applications usually comprises jobs with multiple component tasks that have precedence constraints among them.
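To make the energy benefit of DVFS concrete, the following is a minimal sketch of the dynamic power model commonly used in DVFS studies (P_dyn = C · V² · f, where C is the effective switched capacitance), applied per core. All numeric values and function names here are illustrative assumptions, not taken from the paper.

```python
# Hedged sketch of the CMOS dynamic power model used in DVFS analyses.
# P_dyn = C * V^2 * f; the capacitance and (V, f) operating points below
# are assumed values for illustration only.

def dynamic_power(capacitance: float, voltage: float, frequency: float) -> float:
    """Dynamic power (watts) of one core at a given voltage/frequency point."""
    return capacitance * voltage ** 2 * frequency

def task_energy(capacitance: float, voltage: float, frequency: float,
                cycles: float) -> float:
    """Energy (joules) to execute a task of `cycles` clock cycles at this point."""
    exec_time = cycles / frequency  # lowering f stretches the execution time
    return dynamic_power(capacitance, voltage, frequency) * exec_time

# Per-core DVFS: each core selects its own (V, f) point independently.
C = 1e-9  # effective switched capacitance in farads (assumed)

p_full = dynamic_power(C, 1.2, 2.0e9)            # core at full speed
p_half = dynamic_power(C, 0.6, 1.0e9)            # core scaled to half speed

e_full = task_energy(C, 1.2, 2.0e9, cycles=2.0e9)
e_half = task_energy(C, 0.6, 1.0e9, cycles=2.0e9)
```

Under this model, halving the frequency with a proportional voltage reduction cuts power by a factor of eight, while the task's energy drops by a factor of four (the task runs twice as long), which is why exploiting schedule gaps to slow cores down is attractive for energy-aware scheduling.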