An energy-efficient, cost-effective and QoS-aware scheduling approach

Persian article title: An energy-efficient, cost-effective and QoS-aware scheduling approach for real-time workflow applications in cloud computing systems using DVFS and approximate computations
English article title: An energy-efficient, QoS-aware and cost-effective scheduling approach for real-time workflow applications in cloud computing systems utilizing DVFS and approximate computations
Journal/Conference: Future Generation Computer Systems
Related fields of study: Computer Engineering
Related specializations: Cloud Computing, Algorithms and Computation
Persian keywords: Quality of service, energy efficiency, real-time workflows, per-core DVFS, approximate computations, scheduling
English keywords: Quality of Service, Energy efficiency, Real-time workflows, Per-core DVFS, Approximate computations, Scheduling
Article type: Research Article
DOI: https://doi.org/10.1016/j.future.2019.02.019
University/affiliation: Department of Informatics, Aristotle University of Thessaloniki, 54124 Thessaloniki, Greece
English article pages: 11
Publisher: Elsevier
Publication type: Journal
Article category: ISI
Publication year: 2019
Impact Factor: 7.007 (2018)
H-index: 93 (2019)
SJR: 0.835 (2018)
ISSN: 0167-739X
Quartile: Q1 (2018)
English article format: PDF
Translation status: Not translated
English article price: Free
Is this a base article: No
Product code: E12070
Table of contents (English)

Abstract

1. Introduction

2. Background and related work

3. Problem formulation

4. Scheduling strategy

5. Performance evaluation

6. Conclusions and future work

References

Part of the article (English)

Abstract

Green cloud computing attracts significant attention from both academia and industry. One of the major challenges involved is to provide a high level of Quality of Service (QoS) in a cost-effective way for the end users and in an energy-efficient manner for the cloud providers. Towards this direction, this paper presents an energy-efficient, QoS-aware and cost-effective scheduling strategy for real-time workflow applications in cloud computing systems. The proposed approach utilizes per-core Dynamic Voltage and Frequency Scaling (DVFS) on the underlying heterogeneous multi-core processors, as well as approximate computations, in order to fill in schedule gaps. At the same time, it takes into account the effects of input error on the processing time of the component tasks. Our goal is to provide timeliness and energy efficiency by trading off result precision, while keeping the result quality of the completed jobs at an acceptable standard and the monetary cost required for the execution of the jobs at a reasonable level. The proposed scheduling heuristic is compared to two other baseline policies, under the impact of various QoS requirements. The simulation experiments reveal that our approach outperforms the other examined policies, providing promising results.
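
As background for the "approximate computations" and "input error" terminology in the abstract, the following LaTeX sketch shows the standard imprecise-computation model commonly used in this line of work; the symbols (M_i, O_i, o_i, E_i) are illustrative assumptions and the paper's exact error model may differ.

% Standard imprecise-computation model (illustrative; not necessarily the
% paper's exact formulation). Task T_i has a mandatory part of M_i cycles
% and an optional part of up to O_i cycles; executing only o_i of the
% optional cycles yields an input error
\[
  \varepsilon_i = E_i(O_i - o_i), \qquad 0 \le o_i \le O_i, \quad E_i(0) = 0,
\]
% with E_i non-decreasing. Skipping optional cycles shortens execution time
% and saves energy, at the cost of reduced result precision.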

Introduction

As cloud computing continues to gain momentum, providing a high level of Quality of Service (QoS) in a cost-effective way for the end users and in an energy-efficient manner for the cloud providers is an ongoing challenge that gathers significant attention from both academia and industry. A Service Level Agreement (SLA) is commonly required between the end users and a cloud provider. It is a contract between the two parties that defines the type of the provided service, the QoS requirements, as well as the adopted pricing scheme [1].

Compared to other data center infrastructure components, processors typically consume the greatest amount of energy [2–4]. A widely used power management method is the Dynamic Voltage and Frequency Scaling (DVFS) technique. DVFS allows the dynamic adjustment of the supply voltage and operating frequency (i.e., speed) of a processor, based on the workload conditions [5,6]. Modern multi-core processor architectures incorporate voltage regulators for each integrated core, allowing per-core DVFS, so that each core can operate at a different voltage and frequency level from the other cores of the same processor [7]. While this provides flexibility and better energy efficiency, it involves higher control complexity, especially in the case of clouds where the heterogeneous physical resources are usually virtualized and managed by the hypervisor (Virtual Machine Monitor, VMM) [8].

With the rapid growth of cloud computing, there is a dramatic increase in the number and variety of applications processed on such platforms. They encompass a wide spectrum of sectors and activities, ranging from social media and big data analytics to healthcare monitoring and financial applications [9,10]. The workload generated by such applications usually comprises jobs with multiple component tasks that have precedence constraints among them.
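
As context for the DVFS discussion above, the following is a minimal sketch of the conventional CMOS dynamic power model that motivates voltage and frequency scaling; it is a standard textbook relation and is not quoted from the paper.

% Conventional CMOS dynamic power model (illustrative assumption, not the
% paper's own formulation). C_eff: effective switched capacitance,
% V: supply voltage, f: operating frequency.
\[
  P_{\mathrm{dyn}} = C_{\mathrm{eff}}\, V^{2} f, \qquad
  E = \int P_{\mathrm{dyn}}\,\mathrm{d}t .
\]
% Since V typically scales roughly linearly with f, dynamic power grows
% roughly cubically in f, while execution time grows only linearly as f is
% lowered; this is why slowing a core during schedule gaps can save energy.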