Mobile robot 3D trajectory estimation on a multilevel surface with multimodal fusion of 2D camera features and a 3D light detection and ranging point cloud

Vinicio Rosas-Cervantes, Quoc Dong Hoang, Sooho Woo, Soon Geul Lee

Research output: Contribution to journal › Article › peer-review

4 Citations (Scopus)

Abstract

Nowadays, multi-sensor fusion is a popular tool for feature recognition and object detection. Integrating several sensors allows a robot to obtain reliable information about its environment. This article proposes 3D robot trajectory estimation based on a multimodal fusion of 2D features extracted from color images and 3D features extracted from 3D point clouds. First, a set of images was collected with a monocular camera and used to train a Faster Region Convolutional Neural Network (Faster R-CNN). The robot then detects 2D features in the camera input with the trained network and extracts 3D features from the normal distribution of points in the 3D point cloud. By matching the 2D image features to the 3D point cloud, the robot estimates its position. To validate the results, we compared the trained network with similar convolutional neural networks and evaluated their performance for mobile robot trajectory estimation.
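The abstract mentions extracting 3D features from the normal distribution of points in the point cloud. The paper's actual feature pipeline is not detailed here, but a common way to realize this idea is to estimate a per-point surface normal from the eigen-decomposition of a local neighbourhood's covariance. A minimal numpy-only sketch, assuming a brute-force nearest-neighbour search and a toy planar cloud (all names here are illustrative, not from the paper):

```python
import numpy as np

def estimate_normals(points, k=8):
    """Estimate one surface normal per point from the covariance
    of its k nearest neighbours (local normal distribution)."""
    normals = np.zeros_like(points)
    for i, p in enumerate(points):
        # brute-force k nearest neighbours by Euclidean distance
        d = np.linalg.norm(points - p, axis=1)
        nbrs = points[np.argsort(d)[:k]]
        # covariance of the local neighbourhood (3x3)
        cov = np.cov(nbrs.T)
        # the eigenvector of the smallest eigenvalue is the
        # direction of least spread, i.e. the surface normal
        w, v = np.linalg.eigh(cov)  # eigenvalues in ascending order
        normals[i] = v[:, 0]
    return normals

# toy cloud: 100 samples of the plane z = 0
rng = np.random.default_rng(0)
pts = np.column_stack([rng.uniform(-1, 1, (100, 2)), np.zeros(100)])
n = estimate_normals(pts)
print(np.abs(n[:, 2]).mean())  # close to 1.0: normals align with z
```

In a real pipeline, the brute-force neighbour search would be replaced by a KD-tree, and the resulting normals (or normal distributions per voxel, as in NDT-style methods) would serve as the 3D features matched against the 2D detections.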

Original language: English
Journal: International Journal of Advanced Robotic Systems
Volume: 19
Issue number: 2
DOIs
Publication status: Published - 31 Mar 2022

Bibliographical note

Publisher Copyright:
© The Author(s) 2022.

Keywords

  • 3D localization
  • mobile robot
  • feature recognition
  • odometry and mapping
