Jianhao Jiao
Senior Research Fellow at University College London (UCL)
Mobile Robot, Navigation, Embodied Intelligence
Hong Kong, 2022, captured by Ren Xin
I am currently a Senior Research Fellow in the Department of Computer Science at University College London (UCL). I work in the Robot Perception and Learning Lab, which is led by Prof. Dimitrios Kanoulas. My long-term research aims to develop mobile robotic systems with human-level proficiency in localization, navigation, and decision-making, ultimately facilitating applications such as logistics, inspection, and rescue.
I received my Ph.D. in Robotics in 2021 from The Hong Kong University of Science and Technology (HKUST), where I was fortunate to collaborate with excellent researchers, including Prof. Rui Fan, Dr. Lei Tai, Dr. Haoyang Ye, Dr. Peng Yun, and Mr. Jin Wu. From 2022 to 2023, I was a research associate in the Intelligent and Autonomous Driving Center (IADC).
More details about my previous and ongoing projects can be found on the Research Projects page. Please feel free to contact me (jiaojh1994 at gmail dot com) if you have questions about our projects or are interested in collaboration.
News
Featured Publications
LiteVLoc: Map-Lite Visual Localization for Image Goal Navigation
This paper introduces LiteVLoc, a hierarchical visual localization framework that uses lightweight topometric maps for efficient and precise camera pose estimation, validated through experiments in both simulated and real-world scenarios.
Under Review, 2024
General Place Recognition Survey: Towards Real-World Autonomy
We provide a comprehensive review of current state-of-the-art (SOTA) advancements in place recognition, discuss the remaining challenges, and underscore its broad applications in robotics.
Conditionally Accepted by IEEE Transactions on Robotics, 2024
FusionPortableV2: A Unified Multi-Sensor Dataset for Generalized SLAM Across Diverse Platforms and Scalable Environments
We propose a multi-sensor dataset that addresses the generalization challenge of SLAM algorithms by providing diverse sensor data, motion patterns, and environmental scenarios across 27 sequences from four platforms, totaling 38.7 km. The dataset, which includes ground-truth trajectories and RGB point cloud maps, is used to evaluate SOTA SLAM algorithms and to explore other perception tasks, demonstrating its broad applicability in advancing robotics research.
International Journal of Robotics Research (IJRR), 2024
Real-Time Metric-Semantic Mapping for Autonomous Navigation in Outdoor Environments
We propose an online, large-scale semantic mapping system that uses LiDAR-visual-inertial sensing to build a real-time global mesh map of outdoor environments, achieving high-speed map updates and integrating the map into a real-world vehicle navigation system.
IEEE Transactions on Automation Science and Engineering, 2024
LCE-Calib: Automatic LiDAR-Frame/Event Camera Extrinsic Calibration With a Globally Optimal Solution
We propose an automatic checkerboard-based approach to calibrating the extrinsics between a LiDAR and a frame/event camera, introducing a unified, globally optimal solution to the calibration optimization.
IEEE/ASME Transactions on Mechatronics, 2023
MLOD: Awareness of Extrinsic Perturbation in Multi-LiDAR 3D Object Detection for Autonomous Driving
We propose a two-stage, uncertainty-aware multi-LiDAR 3D object detection system that fuses data from multiple LiDARs and explicitly accounts for perturbations in the extrinsics.
IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2020
Automatic Calibration of Multiple 3D LiDARs in Outdoor Environment
We propose an automatic multi-LiDAR calibration system that requires no calibration target or manual initialization, achieving high reliability and accuracy with minimal rotation and translation errors on mobile platforms.
IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2019