Abstract
Human activity recognition systems are developed as part of a framework to enable continuous monitoring of human behaviours in areas such as ambient assisted living, sports injury detection, elderly care, rehabilitation, entertainment, and surveillance in smart home environments. The extraction of relevant features is the most challenging part of the mobile and wearable sensor-based human activity recognition pipeline. Feature extraction influences algorithm performance and reduces computation time and complexity. However, most current human activity recognition systems rely on handcrafted features that are incapable of handling complex activities, especially given the influx of multimodal and high-dimensional sensor data. With the emergence of deep learning and increased computational power, deep learning and artificial intelligence methods are being adopted for automatic feature learning in diverse areas such as health and image classification and, more recently, for feature extraction and classification of simple and complex human activities from mobile and wearable sensor data. Furthermore, fusing mobile or wearable sensors with deep learning methods for feature learning provides diversity, offers higher generalisation, and tackles challenging issues in human activity recognition. The focus of this review is to provide in-depth summaries of deep learning methods for mobile and wearable sensor-based human activity recognition. The review presents the methods together with their uniqueness, advantages, and limitations. We not only categorise the studies into generative, discriminative, and hybrid methods but also highlight their important advantages. Furthermore, the review presents classification and evaluation procedures and discusses publicly available datasets for mobile sensor-based human activity recognition. Finally, we outline and explain open research challenges that require further research and improvement.
Introduction
Human activity recognition is an important area of research in ubiquitous computing, human behaviour analysis, and human-computer interaction. Research in these areas employs different machine learning algorithms to recognise simple and complex activities such as walking, running, and cooking. In particular, recognition of daily activities is essential for maintaining a healthy lifestyle, supporting patient rehabilitation, and detecting activity shifts among elderly citizens that can help diagnose serious illnesses. Therefore, a human activity recognition framework provides mechanisms to detect users' postural and ambulatory activities, body movements, and actions from multimodal data generated by a variety of sensors (Cao, Wang, Zhang, Jin, & Vasilakos, 2017; Ordonez & Roggen, 2016). Previous studies in human activity recognition can be broadly categorised by the devices, sensor modalities, and data utilised to detect activity details. These include video-based sensors, wearable and mobile phone sensors, social network sensors, and wireless signals. Video-based sensors capture images, video, or surveillance camera footage to recognise daily activities (Cichy, Khosla, Pantazis, Torralba, & Oliva, 2016; Onofri, Soda, Pechenizkiy, & Iannello, 2016). With the introduction of mobile phones and other wearable devices, inertial sensor data (S. Bhattacharya & Lane, 2016; Bulling, Blanke, & Schiele, 2014b) are collected using embedded mobile or wearable sensors placed at different body positions in order to infer activity details and transportation modes. Alternatively, social network methods (Y. Jia et al., 2016) that exploit relevant user information from multiple social network sources to understand user behaviour and interests have also been proposed recently. In addition, wireless signal-based human activity recognition (Savazzi, Rampa, Vicentini, & Giussani, 2016) takes advantage of signals propagated by wireless devices to categorise human activities. However, sensor data generated by smartphones and other wearable devices have dominated the research landscape in human motion analysis, activity monitoring, and detection because of their obvious advantages over other sensor modalities (Cornacchia, Ozcan, Zheng, & Velipasalar, 2017).
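To make the sensor-based recognition pipeline concrete, the following sketch illustrates how raw tri-axial accelerometer windows can be fed to a small one-dimensional convolutional network that learns features automatically rather than relying on handcrafted statistics. This is a minimal, illustrative example only: the sampling rate, window length, channel count, number of activity classes, and the network itself are assumptions for exposition and are not drawn from any specific study reviewed here.

```python
# Minimal sketch (PyTorch): automatic feature learning from raw inertial windows.
# Assumptions (illustrative only): tri-axial accelerometer sampled at 50 Hz,
# segmented into 2.56 s windows (128 samples), six activity classes.
import torch
import torch.nn as nn

class Simple1DCNN(nn.Module):
    def __init__(self, in_channels=3, num_classes=6):
        super().__init__()
        # Convolutional layers learn the representation that handcrafted
        # features (mean, variance, FFT bins, ...) would otherwise provide.
        self.features = nn.Sequential(
            nn.Conv1d(in_channels, 32, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),  # global pooling -> fixed-length representation
        )
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, x):             # x: (batch, channels, window_length)
        z = self.features(x).squeeze(-1)
        return self.classifier(z)

# One forward pass on a dummy batch of 16 windows of 128 samples each.
model = Simple1DCNN()
dummy = torch.randn(16, 3, 128)
logits = model(dummy)                 # shape: (16, 6) class scores
```

The same windowing-plus-network pattern extends to additional modalities (e.g., gyroscope or magnetometer channels) simply by increasing the number of input channels, which is one reason deep models adapt more readily to multimodal, high-dimensional sensor data than fixed feature sets.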