Abstract
We study a multi-node Internet of Things system supporting low-latency, high-reliability communication to a destination node. The remaining nodes are potential relays, from which the best single relay (BSR) is selected to assist the transmission to the destination. The system operates with finite blocklength (FBL) codes to satisfy the low-latency requirement. The goal of this work is to derive and improve the FBL performance of the considered BSR system. On the one hand, we extend Polyanskiy's FBL model of a single-hop scenario to the considered relaying system and derive the corresponding achievable reliability. On the other hand, by employing a practical FBL coding scheme, namely polar codes (PCs), we present an FBL performance bound attainable by a low-complexity coding scheme. In particular, we provide a reliability bound for a dynamic-length PC scheme. We investigate two viable relay-selection strategies in the FBL regime, namely a source-driven BSR strategy and a relay-driven BSR strategy, while the corresponding performance under an infinite blocklength (IBL) assumption serves as a reference. We prove that the two BSR strategies have the same performance in the IBL regime, whereas the relay-driven strategy is significantly more reliable than the source-driven one in the FBL regime. Furthermore, based on the derived FBL performance model, we provide an optimal design that minimizes the overall error probability via blocklength allocation. Through simulation and numerical investigations, we demonstrate the appropriateness of the proposed analytical model. Moreover, we evaluate both the achievable performance with FBLs and the performance of PCs in the considered scenarios while comparing the source-driven and relay-driven strategies.
INTRODUCTION
FUTURE wireless networks are expected to support high-speed, low-latency, and high-reliability transmissions while connecting a massive number of smart devices, e.g., enabling the Internet of Things (IoT) [2], [3]. Many envisioned IoT applications, such as industrial control, autonomous driving, cyber-physical systems, E-health, haptic feedback in virtual and augmented reality, smart grid, and remote surgery, have stringent transmission latency and reliability requirements [4], [5] that cannot be met by existing wireless networks. The common features of these IoT applications are as follows: (i) transmission reliability is usually a concern, (ii) due to low-latency constraints, the coding blocklengths for wireless transmissions are quite short, and (iii) multiple IoT nodes are usually densely deployed. It is known that cooperative relaying significantly improves transmission performance and greatly capitalizes on dense node deployments [6]–[8]. Consequently, the performance of a network with multiple IoT nodes may be enhanced by relaying [9], [10], as each node may potentially act as a relay, e.g., via BSR selection, assisting transmissions for peer nodes [11]–[13].

However, the above studies of relaying and its application in multi-node scenarios are conducted under the ideal assumption of communicating arbitrarily reliably at rates close to Shannon's channel capacity. They thus implicitly assume an IBL regime, which does not allow for an accurate assessment of the performance in latency-critical IoT scenarios that operate with short blocklengths to satisfy the low-latency requirement. In the FBL regime, the error probability of a transmission is not negligible due to the impact of the short blocklength. As early as 1962, Strassen presented a normal approximation of the coding rate [14]. More recently, an achievable bound on the coding rate was identified in [15] for a single-hop transmission system, taking the error probability into account.
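For context, the normal approximation underlying these FBL results takes the following well-known form (stated here only as an illustrative sketch; the symbols $n$, $\epsilon$, $C$, and $V$ denote the blocklength, the error probability, the channel capacity, and the channel dispersion, respectively, and are not necessarily the notation adopted in the remainder of this paper):
$$
R^{*}(n,\epsilon) \approx C - \sqrt{\frac{V}{n}}\, Q^{-1}(\epsilon),
$$
where $Q^{-1}(\cdot)$ denotes the inverse of the Gaussian $Q$-function. Equivalently, for a fixed coding rate $r$, the error probability behaves approximately as $\epsilon \approx Q\!\left(\sqrt{n/V}\,(C-r)\right)$, which makes explicit the reliability penalty incurred when the blocklength $n$ is short.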