Abstract
Real-time systems are witnessing a significant increase in critical software's size, complexity, and performance needs, which can only be satisfied with high-performance hardware features. Cache memories, pervasively used to improve average performance, complicate Worst-Case Execution Time analysis: cache placement (i.e., how software objects are mapped to cache) during the testing phase not only critically affects the observed performance, but also proves arduous to control and preserve up to operation. The probabilistic variant of Measurement-Based Timing Analysis (MBPTA) responds to this challenge by deploying time-randomized caches that naturally explore a different random cache placement in each run, relieving the user from producing tests that intercept relevant Cache Conflict Placements (CCP). Yet, to attain adequate probabilistic CCP coverage, the user is required to collect a minimum number of measurements. We present two mechanisms, CCP-RM and CCP-HRP, to identify CCPs with a relevant probability of occurrence and a large impact on execution time, for the random modulo (RM) and hash-based random placement (HRP) policies, respectively. CCP-RM and CCP-HRP enable a reliable application of MBPTA by computing the number of runs R′ necessary to meet the desired CCP coverage. We exhaustively evaluate CCP-RM and CCP-HRP, showing their effectiveness on well-known benchmarks and a railway case study, on top of an accurate simulator and a concrete RTL implementation.
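To make the coverage requirement concrete, the sketch below (a back-of-the-envelope illustration, not the CCP-RM or CCP-HRP computation itself) shows how a minimum number of runs can be derived for a target probability of observing a given conflict placement at least once, under the simplifying assumption that each run draws an independent random placement that hits the placement of interest with a known probability p_ccp.

    import math

    def min_runs(p_ccp, coverage):
        # Illustrative bound only, assuming independent random placements per run:
        # the probability of observing the placement at least once in R runs is
        # 1 - (1 - p_ccp)**R; return the smallest R reaching the coverage target.
        return math.ceil(math.log(1.0 - coverage) / math.log(1.0 - p_ccp))

    # e.g., a conflict placement occurring with probability 0.01 per run,
    # targeted with 99.9% coverage, needs at least 688 runs under this model.
    print(min_runs(0.01, 0.999))

The actual mechanisms presented in this paper account for the placement probabilities specific to RM and HRP rather than a single user-supplied p_ccp.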
INTRODUCTION
In critical embedded real-time systems [34], software continues to be in charge of providing the most innovative services, making it instrumental in increasing products’ competitive edge in the market [23]. Software is also increasingly driving the decision-making process over huge amounts of data of diverse types, which not only increases its complexity but also complicates timing validation and verification (V&V). The latter focuses on providing evidence that system functions perform timely: to that end, timing analysis methods are used to estimate the worst-case execution time (WCET) of tasks. WCET estimates must be reliable, according to the level of confidence defined in the relevant safety standards, and as tight as possible, to minimize the provisioning of hardware resources. Timing V&V is further challenged by the use of performance-accelerating hardware (e.g., caches) to meet the unprecedented rise in the performance needs of critical software, expected to be as high as 100x in the coming years in the automotive domain [1]. Increased hardware and software complexity reduces the confidence that can be placed on WCET estimates derived by measurement-based timing analysis (MBTA), the most widely used timing analysis technique in critical real-time embedded systems [44]. In particular, increased effort is required from the user to concoct stressing execution scenarios during the (analysis-time) test campaign as a means to capture bad scenarios that can arise during system operation [18].
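As a minimal illustration of what MBTA does (a simplified sketch, not the method evaluated in this paper), the snippet below derives a WCET estimate as the high-water mark of observed execution times inflated by a purely illustrative 20% engineering margin; the difficulty discussed above is precisely that such measurements may miss the cache placements that produce the worst timing behaviour.

    import random

    def mbta_wcet_estimate(measurements, margin=1.2):
        # Naive MBTA-style estimate: high-water mark of the observed execution
        # times scaled by an engineering margin. The 20% margin is purely
        # illustrative, not a value prescribed by any standard or by this paper.
        return max(measurements) * margin

    # Hypothetical execution-time measurements (in cycles) from an analysis-time
    # test campaign; whether they intercept the worst cache placements is exactly
    # the coverage problem addressed by CCP-RM and CCP-HRP.
    observed = [random.randint(9_000, 12_000) for _ in range(1000)]
    print(mbta_wcet_estimate(observed))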