Human Activity Recognition (HAR) is an important component of assistive technologies; however, HAR has not seen wide adoption in our homes. Two main hurdles are the expensive infrastructure requirements and the reliance on supervised learning. Much HAR research assumes an environment embedded with sensors, and the majority of HAR systems use supervised approaches, where labeled data are available to train the recognizer. In reality, our natural living environments are not embedded with sensors, and labeled data are not available in them. We are developing a framework for autonomous HAR suited to our natural living environments, i.e. sensor-less homes. The framework uses an unsupervised learning approach to enable a robot, acting as a mobile sensor hub, to autonomously collect data and learn the different human activities without requiring manual (human) labeling of the data.
To develop a system for autonomous human activity recognition, we have proposed a pipeline of distinct processing stages within the broader perspective of human activity analysis.
The different stages apply different machine learning approaches, both supervised and unsupervised. The learning and recognition stages have been studied extensively in HAR research and have mainly applied supervised learning techniques. The discovery stage, on the other hand, is a much less studied problem. It attempts to differentiate or group different actions or activities. This resembles a child's ability to recognize that one action is different from or similar to another, despite not knowing what the actions are, i.e. without labels or without being told. If the discovery stage successfully groups different activities into their respective groups, these groups can be fed to the subsequent stage to learn a model for each group, i.e. each activity.
Our initial work focuses on solving the different problems in the discovery stage.
We believe the ability to self-learn is essential for any intelligent system. The concepts and techniques developed for the discovery stage will also be applicable in other domains such as object recognition.
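To make the discovery stage concrete, the toy sketch below groups unlabeled "activity" feature vectors with a minimal k-means clustering loop. The feature values, the choice of k-means, and the two-cluster setup are illustrative assumptions for this sketch only; they are not the incremental clustering method developed in our publications.

```python
def kmeans(points, k, iters=10):
    """Toy k-means: group unlabeled feature vectors into k clusters.

    Initialization is deterministic (points spread evenly over the
    input list) so the sketch is reproducible.
    """
    step = max(1, len(points) // k)
    centroids = [points[i * step] for i in range(k)]
    for _ in range(iters):
        # Assignment step: each point joins its nearest centroid.
        clusters = [[] for _ in range(k)]
        for p in points:
            dists = [sum((a - b) ** 2 for a, b in zip(p, c)) for c in centroids]
            clusters[dists.index(min(dists))].append(p)
        # Update step: move each centroid to the mean of its cluster.
        for i, cl in enumerate(clusters):
            if cl:
                centroids[i] = tuple(sum(x) / len(cl) for x in zip(*cl))
    return centroids, clusters

# Hypothetical 2-D "activity features" (e.g. joint-angle statistics):
# two activities whose feature distributions are well separated.
activity_a = [(0.1, 0.2), (0.2, 0.1), (0.15, 0.25)]
activity_b = [(5.0, 5.1), (5.2, 4.9), (4.8, 5.0)]
centroids, clusters = kmeans(activity_a + activity_b, k=2)
```

Without being told which samples belong to which activity, the clustering recovers the two groups; in the full pipeline, each recovered group would then be handed to the learning stage to train one model per activity.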
- W. Ong, L. Palafox, and T. Koseki, “Investigation of Feature Extraction for Unsupervised Learning in Human Activity Detection,” Bulletin of Networking, Computing, Systems, and Software, North America, 2, Jan. 2013 (presented at The Second International Workshop on Networking, Computing, Systems, and Software, Okinawa, Japan in Dec 2012) (pdf)
- W. Ong and T. Koseki, “Unsupervised Activity Detection Based On Human Range of Motion Features,” Seoul National University-University of Tokyo (SNU-UT) Joint Seminar, Mar. 2013 (pdf)
- W. Ong, T. Koseki, L. Palafox, “Unsupervised Human Activity Detection with Skeleton Data from RGB-D Sensor,” in Proceedings of The Fifth International Conference on Computational Intelligence, Communication Systems and Networks (CICSyN), pp.30-35, 5-7 June 2013 (pdf)
- W. Ong, T. Koseki, L. Palafox, “Investigation of Cluster Validity Indices for Unsupervised Human Activity Discovery,” in Proceedings of The 2013 International Conference on Artificial Intelligence (ICAI’13), Vol.1, pp.315-321, 22-25 July 2013 (pdf)
- W. Ong, T. Koseki, L. Palafox, “An Unsupervised Approach for Human Activity Detection and Recognition”, International Journal of Simulation: Systems, Science and Technology (IJSSST), Vol.14, No.5, 2013 (pdf)
- W. Ong, L. Palafox, T. Koseki, “An Incremental Approach of Clustering for Human Activity Discovery”, IEEJ Transactions on Electronics, Information and Systems, Vol.134, No.11, 2014 (pdf)
- W. Ong, L. Palafox, T. Koseki, “Autonomous Learning and Recognition of Human Action based on An Incremental Approach of Clustering”, IEEJ Transactions on Electronics, Information and Systems, Vol.135, No.9, 2015 (pdf)