Fog computing is considered a formidable next-generation complement to cloud computing. Nowadays, in light of the dramatic rise in the number of IoT devices, several problems have arisen in cloud architectures. By introducing fog computing as an intermediate layer between user devices and the cloud, one can extend the cloud's processing and storage capability. Offloading can be utilized as a mechanism that transfers computations, data, and energy consumption from resource-limited user devices to the resource-rich fog/cloud layers, improving application quality of experience and overall system performance. This paper provides a systematic and comprehensive study of current and recent work on fog offloading mechanisms. The pros and cons of each selected paper are explored and analyzed to identify the potentialities and open issues of offloading mechanisms in a fog environment. We classify offloading mechanisms in a fog system into four groups: computation-based, energy-based, storage-based, and hybrid approaches. Furthermore, this paper explores the offloading metrics, applied algorithms, and evaluation methods of the chosen offloading mechanisms in fog systems. Additionally, the open challenges and future trends derived from the reviewed studies are discussed.
Fog computing, also known as fog or fog networking, is introduced as an emergent and novel paradigm for extending cloud services. Although cloud computing is presented as a model that provides on-demand and ubiquitous access to a shared pool of computing and storage resources, cloud resources are far from users, and as a result, the cloud alone cannot support low-latency services. Fog extends these computing and storage resources by incorporating a transitional layer between the IoT devices and the cloud, leading to a three-layer hierarchy: the user devices layer, the fog layer, and the cloud layer. The middle fog layer consists of a set of base stations, routers, and gateways that are geographically distributed and placed as near as possible to the IoT devices. It is widely said that fog brings the cloud closer to the ground (IoT devices) [72, 88].
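The latency trade-off behind this three-layer hierarchy can be illustrated with a minimal sketch of a latency-driven offloading decision: each layer offers more compute but a longer round-trip delay, and a task is placed wherever its estimated completion time is lowest. All CPU speeds and link delays below are illustrative assumptions, not figures from any specific fog platform or reviewed paper.

```python
# Minimal sketch of a latency-driven offloading decision in the
# device -> fog -> cloud hierarchy. Parameter values are assumed
# for illustration only.

def estimated_latency(task_cycles, cpu_hz, rtt_ms):
    """Estimated completion time in ms: compute time plus network round trip."""
    return task_cycles / cpu_hz * 1000 + rtt_ms

def choose_layer(task_cycles,
                 device_hz=1e9,     # local device CPU (assumed 1 GHz)
                 fog_hz=8e9,        # nearby fog node (assumed 8 GHz)
                 cloud_hz=32e9,     # remote cloud VM (assumed 32 GHz)
                 fog_rtt_ms=5,      # short hop to the fog layer
                 cloud_rtt_ms=80):  # long hop to the distant cloud
    """Pick the layer minimizing estimated latency for a task."""
    options = {
        "device": estimated_latency(task_cycles, device_hz, 0),
        "fog":    estimated_latency(task_cycles, fog_hz, fog_rtt_ms),
        "cloud":  estimated_latency(task_cycles, cloud_hz, cloud_rtt_ms),
    }
    return min(options, key=options.get)

# Under these assumed parameters, small tasks stay on the device,
# medium tasks go to the fog, and very large tasks reach the cloud.
print(choose_layer(1e6))   # -> device
print(choose_layer(1e8))   # -> fog
print(choose_layer(1e11))  # -> cloud
```

The sketch captures why the fog layer exists at all: for a middle range of task sizes, the fog's proximity outweighs the cloud's extra compute power, while trivially small tasks are still cheapest to run locally.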
Conclusion and limitations
In conclusion, this paper presented a systematic study of current research on offloading mechanisms in fog computing, including its architecture, technologies, and applications. In this study, applying our search query yielded 131 publications in the initial selection; in the final selection, we chose 37 papers with reference to the research questions and classified them based on their contents. According to RQ2, the applied mechanisms in fog offloading were classified into four groups: computation-based mechanisms account for the highest share of studies with 38%, followed by hybrid mechanisms with 35%, energy-based mechanisms with 16%, and storage-based mechanisms with 11%. They were compared and analyzed according to their significance and crucial evaluation metrics, and the key differences, advantages, disadvantages, and important factors of each selected work were addressed in the context of offloading in fog. Based on RQ3, the most important metrics across the proposed approaches were energy and response time (24%) and cost (20%). According to RQ4, simulation (67% of the papers) was the dominant evaluation method in most categories, followed by design (22%) and real testbeds (8%). In addition, with respect to RQ5, the most common algorithms were non-heuristic (74%) versus heuristic (26%). Furthermore, based on RQ6, existing fog offloading mechanisms face several open issues and future trends, such as trustworthiness and security, multi-objective mechanisms, big data analytics, new-generation mobile networks, network management, and carbon-aware offloading for geo-distributed systems. Clearly, the most pressing challenges are scalability and real-testbed implementation.