Jianhao Jiao
Senior Research Fellow at University College London (UCL)
Mobile Robot, Navigation, Embodied Intelligence
Hong Kong, 2022, captured by Ren Xin
I am currently a Senior Research Fellow at University College London (UCL) in the Department of Computer Science. I work in the Robot Perception and Learning Lab, which is led by Prof. Dimitrios Kanoulas. My research spans SLAM, sensor fusion, and autonomous navigation, with notable contributions including M-LOAM, FusionPortable, and OpenNavMap, a scalable, structure-free visual navigation system.
My long-term research aims to develop lifelong, cognitive spatial memory mechanisms for autonomous systems. This work centers on sustainable autonomy and dynamic map maintenance that can scale to the growing spatio-temporal demands of highly dynamic and unstructured environments. Such a foundational capability is critical for applications including logistics, infrastructure inspection, and high-stakes rescue missions in domains such as mines and forests.
I received my Ph.D. in Robotics in 2021 from The Hong Kong University of Science and Technology (HKUST). I was fortunate to collaborate with many excellent researchers, including Prof. Rui Fan, Dr. Lei Tai, Dr. Haoyang Ye, Dr. Peng Yun, and Prof. Jin Wu. I was a research associate in the Intelligent and Autonomous Driving Center (IADC) from 2022 to 2023.
More details on my previous and ongoing projects can be found on Research Projects. Please feel free to contact me (jiaojh1994 at gmail dot com) if you have questions about our projects or are interested in collaboration.
News
| Oct 1, 2025 | OpenNavMap is accepted to IROS 2025 Workshop: Open World Navigation in Human-centric Environments. |
|---|---|
| Sep 24, 2025 | Invited by Prof. Maurice Fallon at the University of Oxford to present a talk on our recent research advancements. It was a great opportunity to share our work and engage with his group. |
| Sep 20, 2025 | One paper is accepted to ACM SenSys. The paper presents a novel robotic platform and datasets for testing Starlink satellite communication under movement. Congratulations to Mr. Boyi Liu (HKUST Ph.D.)! |
| Sep 20, 2025 | One paper is accepted to NeurIPS 2025. The paper addresses depth completion with event cameras and uses our FusionPortable-V2 as the benchmark. Congratulations to Dr. Zhiqiang Yan (NUS Research Fellow)! |
| Sep 20, 2025 | Serving as the Associate Editor of ICRA 2026. |
| Jun 15, 2025 | One paper is accepted to IROS 2025. The paper addresses visual localization using satellite imagery. Congratulations to Yilong (HKUST Ph.D.)! See you in Hangzhou, China! |
| May 15, 2025 | Our workshop on Event-Based Vision, in collaboration with Prof. Yizhou, Dr. Yifu Wang, Prof. Boxin Shi, Prof. Liyuan Pan, Prof. Laurent Kneip, and Prof. Richard Hartley, has been accepted to IROS 2025. Stay tuned for upcoming announcements regarding the challenges and agenda. |
| Feb 1, 2025 | Serving again as the Associate Editor of IROS 2025. |
| Jan 29, 2025 | Four papers are accepted to ICRA 2025. Topics cover image-goal navigation, visual localization, Gaussian splatting, and traversability estimation. Check this page for the preprints. Congratulations to Changkun (HKUST Ph.D.), Yuzhou (ICL Ph.D., supervised by Prof. Andrew Davison), and Sebastian (RA at the University of Toronto)! See you in Atlanta! |
| Jan 19, 2025 | General Place Recognition Survey: Towards Real-World Autonomy is accepted to IEEE Transactions on Robotics as a Survey paper. |
Featured Publications
- **LiteVLoc: Map-Lite Visual Localization for Image Goal Navigation**
  This paper introduces LiteVLoc, a hierarchical visual localization framework that uses lightweight topometric maps for efficient and precise camera pose estimation, validated through experiments in both simulated and real-world scenarios.
  International Conference on Robotics and Automation (ICRA), 2025
- **FusionPortableV2: A Unified Multi-Sensor Dataset for Generalized SLAM Across Diverse Platforms and Scalable Environments**
  We propose a multi-sensor dataset that addresses the generalization challenge of SLAM algorithms by providing diverse sensor data, motion patterns, and environmental scenarios across 27 sequences from four platforms, totaling 38.7 km. The dataset, which includes ground-truth trajectories and RGB point cloud maps, is used to evaluate state-of-the-art SLAM algorithms and to explore its potential in other perception tasks, demonstrating its broad applicability in advancing robotic research.
  International Journal of Robotics Research (IJRR), 2024
- **Real-Time Metric-Semantic Mapping for Autonomous Navigation in Outdoor Environments**
  We proposed an online, large-scale semantic mapping system that uses LiDAR-visual-inertial sensing to create a real-time global mesh map of outdoor environments, achieving high-speed map updates and integrating the map into a real-world vehicle navigation system.
  IEEE Transactions on Automation Science and Engineering (T-ASE), 2024
- **LCE-Calib: Automatic LiDAR-Frame/Event Camera Extrinsic Calibration With a Globally Optimal Solution**
  We proposed an automatic checkerboard-based approach for calibrating the extrinsics between a LiDAR and a frame/event camera by introducing a unified, globally optimal solution for calibration optimization.
  IEEE/ASME Transactions on Mechatronics (T-MECH), 2023
- **MLOD: Awareness of Extrinsic Perturbation in Multi-LiDAR 3D Object Detection for Autonomous Driving**
  We proposed a two-stage, uncertainty-aware multi-LiDAR 3D object detection system that fuses multi-LiDAR data and explicitly accounts for extrinsic perturbation.
  IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2020
- **Automatic Calibration of Multiple 3D LiDARs in Outdoor Environment**
  We proposed an automatic multi-LiDAR calibration system that requires no calibration target or manual initialization, achieving high reliability and accuracy with minimal rotation and translation errors for mobile platforms.
  IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2019