Projects in personal robots are where we learn about the technologies associated with robotics, including microcontrollers, communication, sensors, actuators, perception, localisation, mapping, navigation and machine intelligence.
Shop Assistant Robot
This is Anwari's BSc final year project. The project is an initial concept prototype of a shop assistant robot. Equipped with a cart, the robot would spare the shopper from having to carry a basket.
People
- Ak Muhammad Anwari Fikri Bin Pg Ali Sham
- Ong Wee Hong
- Owais Ahmed Malik
Data/Code
Publications
Videos
Mapping and autonomous navigation in the “shop” …
Shop assistant …
AR-based Navigator in UBD
This is the outcome of two BSc final year projects. Nur Afifah worked on the outdoor navigation, and Huey Theeng worked on the indoor navigation and the integration of the two.
Augmented Reality (AR) is a user-friendly means of presenting information about the real world on a screen. It capitalises on the abundance of digital data and presents it to the user at the most relevant place and time. Our current work is not developed on a mobile robot (yet). Instead, in this project, we are developing an AR-based indoor navigator mobile app to guide a user from any location to a desired destination inside a building. The ultimate goal is to apply the indoor positioning technique to mobile robots. The project started with an AR-based outdoor navigator application for the UBD campus. We are currently extending the application with indoor navigation capability.
“Global” positioning in indoor environments is challenging because there is no standard, reliable indoor equivalent of GPS (Global Positioning System): satellite GPS is not usable indoors. We are studying existing work on indoor positioning systems with the aim of developing one that we can use for autonomous mobile robot navigation. As a start, we are looking at fiducial markers.
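To illustrate the idea, the sketch below detects an ArUco fiducial marker and recovers the camera pose relative to it with OpenCV. This is a minimal sketch, assuming the opencv-contrib aruco module (pre-4.7 API); the calibration values, marker size and file name are placeholders, not values from our setup.

```python
# Minimal sketch: detecting an ArUco fiducial marker and estimating the
# camera pose relative to it. With markers placed at known locations in a
# building, each detection yields an indoor "global" position fix.
import cv2
import numpy as np

# Placeholder intrinsics -- replace with values from a real camera calibration.
camera_matrix = np.array([[600.0, 0.0, 320.0],
                          [0.0, 600.0, 240.0],
                          [0.0,   0.0,   1.0]])
dist_coeffs = np.zeros(5)   # assume negligible lens distortion
MARKER_LENGTH = 0.15        # marker side length in metres (assumed)

dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
frame = cv2.imread("frame.png")                 # placeholder input frame
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
corners, ids, _ = cv2.aruco.detectMarkers(gray, dictionary)

if ids is not None:
    # Pose of each detected marker in the camera frame
    rvecs, tvecs, _ = cv2.aruco.estimatePoseSingleMarkers(
        corners, MARKER_LENGTH, camera_matrix, dist_coeffs)
    for marker_id, tvec in zip(ids.flatten(), tvecs):
        print(f"marker {marker_id}: {tvec.ravel()} m in the camera frame")
```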
People
- Lim Huey Theeng
- Nur Afifah Ilyana binti Ilham
- Ong Wee Hong
- Owais Ahmed Malik
Data/Code
Publications
Videos
Application of Deep Learning in Visual SLAM
This is Nazrul's MSc project. Nazrul is exploring the potential of the latest deep learning networks to improve the performance of Visual SLAM. His current focus is on improving feature detection and matching.
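For context, the sketch below shows the classical detect-and-match step that learned front ends aim to improve, using OpenCV's ORB features and brute-force matching; the frame file names are placeholders.

```python
# Minimal sketch: classical ORB feature detection and matching between two
# frames, the baseline step that learned detectors/matchers aim to improve.
import cv2

img1 = cv2.imread("frame_t0.png", cv2.IMREAD_GRAYSCALE)  # placeholder frames
img2 = cv2.imread("frame_t1.png", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(nfeatures=1000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# Hamming distance suits ORB's binary descriptors; crossCheck keeps only
# mutually best matches, a cheap stand-in for a learned matcher's filtering.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
print(f"{len(matches)} putative matches across the two frames")
```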
People
- Muhammad Nazrul Fitri Bin Hj Ismail
- Owais Ahmed Malik (Main Supervisor)
- Ong Wee Hong
Data/Code
Publications
Videos
Application of Reinforcement Learning in Robot Navigation
This is Hafiq's MSc project. In this research, we explore the use of Reinforcement Learning (RL), especially Deep Reinforcement Learning (DRL), in robot navigation in crowded environments. The ultimate goal is to improve the local and global planning of the navigation system through RL, targeting performance in crowded or complex environments where existing local and global planning techniques are not efficient.
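The one-step value updates behind two of the algorithms we compare (Q-Learning and SARSA) can be written in a few lines. This is a minimal tabular sketch; the state/action sizes and hyperparameters are illustrative assumptions, not our experimental settings.

```python
# Minimal sketch: the one-step value updates behind Q-Learning (off-policy)
# and SARSA (on-policy), the tabular baselines compared against DQN.
import numpy as np

ALPHA, GAMMA = 0.1, 0.99      # learning rate and discount (assumed values)
n_states, n_actions = 100, 4  # e.g. a discretised scan x 4 motion commands
Q = np.zeros((n_states, n_actions))

def q_learning_update(s, a, r, s_next):
    # Bootstraps from the greedy action in s_next, whatever is actually taken.
    Q[s, a] += ALPHA * (r + GAMMA * Q[s_next].max() - Q[s, a])

def sarsa_update(s, a, r, s_next, a_next):
    # Bootstraps from the action the behaviour policy actually takes next.
    Q[s, a] += ALPHA * (r + GAMMA * Q[s_next, a_next] - Q[s, a])
```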
People
- Hafiq Anas
- Ong Wee Hong
- Owais Ahmed Malik
Data/Code
- Deep Reinforcement Learning-Based Mapless Crowd Navigation with Perceived Risk of the Moving Crowd for Mobile Robots: https://github.com/ailabspace/drl-based-mapless-crowd-navigation-with-perceived-risk
- Comparison of DQN with Q-Learning and SARSA for Robot Local Navigation: https://github.com/ailabspace/comparison-of-dqn-with-q-learning-and-sarsa-for-robot-local-navigation
Publications
- Hafiq Anas, Wee Hong Ong, Owais Ahmed Malik, “Deep Reinforcement Learning-Based Mapless Crowd Navigation with Perceived Risk of the Moving Crowd for Mobile Robots”, arXiv preprint (2023) https://doi.org/10.48550/arXiv.2304.03593
- Hafiq Anas, Wee Hong Ong, Owais Ahmed Malik, “Comparison of Deep Q-Learning, Q-Learning and SARSA Reinforced Learning for Robot Local Navigation”, the 9th International Conference on Robot Intelligence Technology and Applications (RITA 2021), 16-17 December 2021. https://doi.org/10.1007/978-3-030-97672-9_40 (pdf)
Videos
Video demo for the updated version of our paper: Deep Reinforcement Learning-Based Mapless Crowd Navigation with Perceived Risk of the Moving Crowd for Mobile Robots. In this work we define the risk perception as the k most dangerous obstacles (see the sketch below the video demos). We have also improved the learning performance by using waypoints to increase the reward density.
Video demo for paper: Deep Reinforcement Learning-Based Mapless Crowd Navigation with Perceived Risk of the Moving Crowd for Mobile Robots. This is an initial result with risk perception from the (one) most dangerous obstacle.
Video demo for paper: Comparison of Deep Q-Learning, Q-Learning and SARSA Reinforced Learning for Robot Local Navigation
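The risk-perception idea above (picking the k most dangerous obstacles from a scan) can be sketched as follows; scoring danger purely by inverse range is an illustrative assumption of this sketch, not necessarily the paper's exact risk model.

```python
# Minimal sketch: selecting the k "most dangerous" obstacles from a laser
# scan, scoring danger by inverse range (nearest = most dangerous).
import numpy as np

def k_most_dangerous(ranges, angles, k=3):
    """Return (range, bearing) pairs for the k nearest valid scan returns."""
    ranges = np.asarray(ranges, dtype=float)
    angles = np.asarray(angles, dtype=float)
    valid = np.isfinite(ranges)              # drop inf/NaN beams (no return)
    r, a = ranges[valid], angles[valid]
    nearest = np.argsort(r)[:k]              # smallest range = highest risk
    return list(zip(r[nearest], a[nearest]))

# Toy 8-beam scan around the robot
angles = np.linspace(-np.pi, np.pi, 8, endpoint=False)
ranges = [2.0, 0.4, np.inf, 1.1, 0.9, 3.0, 0.7, 5.0]
print(k_most_dangerous(ranges, angles, k=3))  # the three nearest obstacles
```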
Robot Navigation with Eddie Robot and ROS
This is a BSc final year project. In this project, we implemented the ROS navigation stack on the Eddie robot platform, replacing its controller with a Raspberry Pi and a laptop. We learned to set up the Robot Operating System (ROS), implement the navigation stack, and configure and write the transformation, odometry and motor control nodes specific to the Eddie robot.
The Eddie robot will serve as the mobile platform for our other high-level applications and research work on personal robots. The platform can autonomously build a map of its surroundings, localise itself within the map, and navigate around safely without bumping into obstacles.
We have added a tall body to the Eddie robot to mount a Kinect sensor on it. The Kinect provides the robot's visual perception and also serves as the laser-scan input to the navigation stack. The Raspberry Pi and the laptop that replaced the original controller communicate over a direct LAN connection: most of the ROS nodes run on the laptop, while the Raspberry Pi runs the nodes dealing with the drive train (motors and odometry).
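A minimal sketch of the kind of odometry node the Raspberry Pi runs is shown below, assuming ROS 1 (rospy); the node and topic names are placeholders, and the velocities would really come from the Eddie drive train rather than constants.

```python
#!/usr/bin/env python
# Minimal sketch of a ROS 1 odometry node: integrate wheel velocities into a
# pose and publish /odom plus the odom -> base_link transform that the
# navigation stack expects.
import math
import rospy
import tf
from nav_msgs.msg import Odometry
from geometry_msgs.msg import Quaternion

rospy.init_node("eddie_odometry")          # node name is a placeholder
odom_pub = rospy.Publisher("odom", Odometry, queue_size=10)
tf_bc = tf.TransformBroadcaster()

x = y = th = 0.0
v, w = 0.1, 0.0        # linear/angular velocity; really from wheel encoders
rate = rospy.Rate(20)
last = rospy.Time.now()

while not rospy.is_shutdown():
    now = rospy.Time.now()
    dt = (now - last).to_sec()
    last = now

    # Dead-reckon the pose from the measured velocities.
    x += v * math.cos(th) * dt
    y += v * math.sin(th) * dt
    th += w * dt

    q = tf.transformations.quaternion_from_euler(0, 0, th)
    tf_bc.sendTransform((x, y, 0.0), q, now, "base_link", "odom")

    odom = Odometry()
    odom.header.stamp = now
    odom.header.frame_id = "odom"
    odom.child_frame_id = "base_link"
    odom.pose.pose.position.x = x
    odom.pose.pose.position.y = y
    odom.pose.pose.orientation = Quaternion(*q)
    odom.twist.twist.linear.x = v
    odom.twist.twist.angular.z = w
    odom_pub.publish(odom)
    rate.sleep()
```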
People
- Hafiq Anas
- Ong Wee Hong
Data/Code
- ROS Eddie Navigation Robot: https://github.com/ailabspace/ros-eddie-navigation-robot
Publications
- Hafiq Anas, Wee Hong Ong, “An implementation of ROS Autonomous Navigation on Parallax Eddie platform”, arXiv preprint (2021) https://arxiv.org/abs/2108.12571 (pdf)
Videos
Note: The video has been sped up by about 2.5x. The scene is outside the Laboratory of Robotics and Intelligent System (Robolab V3) at Universiti Brunei Darussalam (UBD).
Robot Navigation with iRobot Create Robot
This is a BSc final year project. We implemented Simultaneous Localization and Mapping (SLAM) on an iRobot Create robot, the Wanderer-v1. The implementation is in Python. In this project, we learned the basic concepts of SLAM and implemented the various algorithms involved.
The SLAM was implemented from scratch, including the sensor model, the motion model and localization using a particle filter, following the FastSLAM approach. In addition, A* search was implemented for path planning. The robot is able to navigate to a given location while performing SLAM and avoiding obstacles.
The robot is equipped with a Kinect sensor to provide the laser scan. The odometry comes from the iRobot Create. No other sensors are used at the moment.
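The A* planner mentioned above can be sketched on a small occupancy grid as follows; the grid representation and 4-connected motion are simplifying assumptions of this sketch, not necessarily the project's exact implementation.

```python
# Minimal sketch: A* search on a 2D occupancy grid with unit step costs and
# a Manhattan-distance heuristic (admissible for 4-connected motion).
import heapq

def astar(grid, start, goal):
    """grid: 2D list, 0 = free, 1 = obstacle; start/goal: (row, col)."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    frontier = [(h(start), 0, start, None)]   # (f, g, node, parent)
    came_from, cost = {}, {start: 0}
    while frontier:
        _, g, cur, parent = heapq.heappop(frontier)
        if cur in came_from:                  # already expanded
            continue
        came_from[cur] = parent
        if cur == goal:                       # walk parents back to start
            path = []
            while cur:
                path.append(cur)
                cur = came_from[cur]
            return path[::-1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cur[0] + dr, cur[1] + dc)
            if (0 <= nxt[0] < rows and 0 <= nxt[1] < cols
                    and grid[nxt[0]][nxt[1]] == 0
                    and (nxt not in cost or g + 1 < cost[nxt])):
                cost[nxt] = g + 1
                heapq.heappush(frontier, (g + 1 + h(nxt), g + 1, nxt, cur))
    return None                               # goal unreachable

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
print(astar(grid, (0, 0), (2, 0)))  # path around the wall in the middle row
```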
People
- Muhammad Amirul Akmal Haji Menjeni
- Ong Wee Hong
- Pg Dr Haji NorJaidi bin Pg Hj Tuah
Data/Code
Publications
Videos
Note: The video has been sped up by about 3x. The scene is outside the Laboratory of Robotics and Intelligent System (Robolab V3) at Universiti Brunei Darussalam (UBD).
Robot Control Over Internet
This is a BSc final year project. We learned to remotely control a LEGO Mindstorms robot over the internet to provide remote surveillance with a live camera feed.
People
- Kenneth Chiang-Yu Chin
- Ong Wee Hong
- Dr Seyed Mohamed Buhari
Publications
- Kenneth Chiang-Yu Chin, Seyed Mohamed Buhari and Wee-Hong Ong, “Impact of LEGO Sensors in Remote Controlled Robot”, IEEE International Conference on Robotics and Biomimetics 2008, pp 1777-1778 (pdf)