[ICRA 2024] Benchmarking Classical and Learning-Based Multibeam Point Cloud Registration

1 Division of Robotics, Perception and Learning (RPL), KTH Royal Institute of Technology
2 Institute of Computer Graphics and Vision (ICGV), TU Graz
3 Ocean Infinity
4 Department of Marine Sciences, University of Gothenburg

Abstract

Deep learning has shown promising results for multiple 3D point cloud registration datasets. However, in the underwater domain, most registration of multibeam echo-sounder (MBES) point cloud data is still performed using classical methods from the iterative closest point (ICP) family. In this work, we curate and release the DotsonEast Dataset, a semi-synthetic MBES registration dataset constructed from data collected by an autonomous underwater vehicle (AUV) in West Antarctica. Using this dataset, we systematically benchmark the performance of 2 classical and 4 learning-based methods. The experimental results show that the learning-based methods work well for coarse alignment and are better at consistently recovering rough transforms at high overlap (20-50%). In comparison, GICP (a variant of ICP) performs well for fine alignment and is better across all metrics at extremely low overlap (10%). To the best of our knowledge, this is the first work to benchmark both learning-based and classical registration methods on an AUV-based MBES dataset.

Example registration result
Example MBES submap pair from the proposed DotsonEast Dataset. Each row shows the predicted transformation (left), the consistency error of the map (middle), and the t-SNE embedding of the feature descriptors (right). For methods without feature descriptors, the right column is left blank. The ground truth and null transforms are provided for comparison. The right column of the ground truth row shows the point cloud pair colored by depth.

Benchmark Dataset and Metrics

Dataset

The data presented in this paper was collected using RAN, the University of Gothenburg's Kongsberg Hugin AUV, equipped with a Kongsberg EM2040 multibeam echo sounder, during the 2022 ITGC cruise. The survey site was close to the eastern side of the Dotson Ice Shelf in West Antarctica. As a result, the data exhibits unusually large elevation changes.

Survey trajectory bathymetry
The bathymetry collected during the survey, overlaid with the AUV's trajectory.
Survey details and sensor settings
Survey details and sensor settings.

To construct a dataset with ground truth transformations, we perform the steps illustrated in the pipeline below:


Dataset construction
Dataset construction pipeline. Since we synthesize the transformations artificially, the ground truth transformation is known exactly, allowing for more varied evaluations.
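To make the construction concrete, the sketch below shows the key step in Python: perturbing one submap of an overlapping pair with a random rigid transform and recording that transform as ground truth. The yaw-only rotation and the sampling ranges are illustrative assumptions, not the exact values used to build the dataset.

    import numpy as np
    from scipy.spatial.transform import Rotation

    def make_registration_pair(points, rng, max_yaw_deg=30.0, max_trans_m=20.0):
        """Perturb a submap with a random rigid transform and return the moved
        points together with the 4x4 ground truth transform that undoes it.
        The yaw-only rotation and sampling ranges are illustrative assumptions."""
        yaw = rng.uniform(-max_yaw_deg, max_yaw_deg)
        R = Rotation.from_euler("z", yaw, degrees=True).as_matrix()
        t = rng.uniform(-max_trans_m, max_trans_m, size=3)

        # Build the 4x4 perturbation and apply it to create the source cloud.
        T = np.eye(4)
        T[:3, :3], T[:3, 3] = R, t
        moved = points @ R.T + t

        # The ground truth registration maps the moved cloud back onto `points`.
        return moved, np.linalg.inv(T)

    rng = np.random.default_rng(0)
    submap = rng.normal(size=(1000, 3))  # stand-in for a real MBES submap
    moved, T_gt = make_registration_pair(submap, rng)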

Metrics

We evaluate the tested methods on three sets of metrics:

  1. Map consistency metrics: How good is the quality of the reconstructed mesh?
    • Success rate: % of pairs where the tested method successfully returns an estimated transformation, regardless of its correctness.
    • Consistency error: the estimated thickness of the bathymetric surface. Wrongly registered regions generally have a high consistency error.
    • Predicted overlap: % of mesh grid points hit by both submaps under the predicted transformation.

  2. Registration errors: How accurate are the predicted transformations? (See the sketch after this list.)
    • Recall: % of pairs with relative rotation error < 5° and translation error < 10 m.
    • Relative translation error (RTE) for all recalled pairs.
    • Relative rotation error (RRE) for all recalled pairs.

  3. Feature correspondence metrics: How good are the suggested feature matches? (Only computed for feature-based methods.)
    • Feature match ratio (FMR): % of pairs with > 5% inlier matches under the ground truth transformation. Here, the inlier threshold is set to 2 m, twice the resolution of the downsampled point cloud.
    • Inlier ratio (IR): % of inlier matches under the ground truth transformation.
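As a reference for how these metrics can be computed, the sketch below implements the RRE, RTE, recall check, and inlier ratio for 4x4 homogeneous transforms and matched keypoint arrays. The RTE convention (norm of the translation difference) is one common choice and may differ from the exact implementation used in the paper.

    import numpy as np

    def registration_errors(T_pred, T_gt):
        """Relative rotation error (degrees) and translation error (meters)."""
        dR = T_pred[:3, :3].T @ T_gt[:3, :3]
        # Clip guards against numerical drift outside [-1, 1].
        rre = np.degrees(np.arccos(np.clip((np.trace(dR) - 1) / 2, -1.0, 1.0)))
        rte = np.linalg.norm(T_pred[:3, 3] - T_gt[:3, 3])
        return rre, rte

    def is_recalled(T_pred, T_gt, max_rre_deg=5.0, max_rte_m=10.0):
        """A pair counts toward recall if both errors are below threshold."""
        rre, rte = registration_errors(T_pred, T_gt)
        return rre < max_rre_deg and rte < max_rte_m

    def inlier_ratio(src_kpts, tgt_kpts, T_gt, inlier_thresh_m=2.0):
        """Fraction of putative matches within the threshold under the ground
        truth transform; 2 m is twice the downsampled cloud resolution."""
        src_in_tgt = src_kpts @ T_gt[:3, :3].T + T_gt[:3, 3]
        return float(np.mean(np.linalg.norm(src_in_tgt - tgt_kpts, axis=1)
                             < inlier_thresh_m))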

Evaluated Methods

Using the above metrics, we evaluate the following methods. The code and pretrained models for the DotsonEast dataset will be released shortly.
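For context, the sketch below shows a typical classical coarse-to-fine baseline of the kind benchmarked here: FPFH features with RANSAC for coarse alignment, followed by GICP refinement. It assumes a recent Open3D build that ships registration_generalized_icp; the voxel size and other parameters are illustrative, not the exact settings used in the paper.

    import open3d as o3d

    VOXEL = 1.0  # meters; illustrative, not the paper's setting

    def preprocess(pcd):
        """Downsample, estimate normals, and compute FPFH descriptors."""
        down = pcd.voxel_down_sample(VOXEL)
        down.estimate_normals(
            o3d.geometry.KDTreeSearchParamHybrid(radius=2 * VOXEL, max_nn=30))
        fpfh = o3d.pipelines.registration.compute_fpfh_feature(
            down,
            o3d.geometry.KDTreeSearchParamHybrid(radius=5 * VOXEL, max_nn=100))
        return down, fpfh

    def coarse_to_fine(src, tgt):
        src_d, src_f = preprocess(src)
        tgt_d, tgt_f = preprocess(tgt)

        # Coarse alignment: FPFH correspondences + RANSAC.
        coarse = o3d.pipelines.registration.registration_ransac_based_on_feature_matching(
            src_d, tgt_d, src_f, tgt_f, True, 2 * VOXEL,
            o3d.pipelines.registration.TransformationEstimationPointToPoint(False),
            3, [],
            o3d.pipelines.registration.RANSACConvergenceCriteria(100000, 0.999))

        # Fine alignment: GICP initialized from the coarse estimate.
        fine = o3d.pipelines.registration.registration_generalized_icp(
            src_d, tgt_d, 2 * VOXEL, coarse.transformation,
            o3d.pipelines.registration.TransformationEstimationForGeneralizedICP())
        return fine.transformation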

Results

We benchmark the evaluated methods using submap pairs with 10-50% overlap. Among other things, we observe the following:

  • Learning-based methods are good at recovering coarse alignments at high overlap ratios, with high success rates, high recall, and low transformation errors.
  • At low overlap, GICP is the most robust of the tested methods, showing the highest recall and the lowest transformation errors.
  • The consistency error, commonly used to estimate the quality of reconstructed bathymetric meshes, can signal coarse alignment errors, but small transformation errors go undetected (a simple proxy for this metric is sketched below).
  • There are fundamental differences between LiDAR and MBES point clouds, though models trained on LiDAR can be fine-tuned for MBES data.
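Since the consistency error underpins several of these observations, the sketch below gives one simple grid-based proxy for it: bin both submaps into an XY grid under the predicted transform and average the per-cell depth spread. This is an illustrative stand-in, not the exact formulation used in the paper.

    import numpy as np

    def grid_consistency_error(points_a, points_b, T_pred, cell_m=1.0):
        """Average per-cell depth spread ("surface thickness") after moving
        points_a by the predicted transform; an illustrative proxy only."""
        moved = points_a @ T_pred[:3, :3].T + T_pred[:3, 3]
        pts = np.vstack([moved, points_b])

        # Quantize XY coordinates into grid cell indices.
        cells = np.floor(pts[:, :2] / cell_m).astype(np.int64)
        _, inverse = np.unique(cells, axis=0, return_inverse=True)

        # Depth spread (max z - min z) per occupied cell, then averaged.
        n_cells = inverse.max() + 1
        zmin = np.full(n_cells, np.inf)
        zmax = np.full(n_cells, -np.inf)
        np.minimum.at(zmin, inverse, pts[:, 2])
        np.maximum.at(zmax, inverse, pts[:, 2])
        return float(np.mean(zmax - zmin))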

Consistency metrics
Map consistency metrics for decreasing overlap ratios. Note that the consistency error and predicted overlap (%) are only computed for successful registrations. Specifically, the success rate (%) reported here is the rate at which the method returns a transform, regardless of its correctness. This is consistent with real-life AUV missions, where the ground truth transformation is unknown. Transformation accuracy is evaluated by the subsequent metrics.
Registration errors
Registration error metrics for decreasing overlap ratios. Benefiting from the synthetic ground truth transformations, the recall (%) here counts the cases where the predicted transformation has an RRE < 5° and an RTE < 10 m. The RRE and RTE are only computed for the correctly recalled pairs.
Feature correspondence
The feature match ratio (FMR) for decreasing overlap ratios. Note the poor performance of FPFH. The trained Predator clearly outperforms the other methods on this metric.

BibTeX

        
    @inproceedings{ling2024benchmarking,
      title={Benchmarking Classical and Learning-Based Multibeam Point Cloud Registration},
      author={Ling, Li and Zhang, Jun and Bore, Nils and Folkesson, John and Wåhlin, Anna},
      booktitle={2024 IEEE International Conference on Robotics and Automation (ICRA)},
      year={2024},
      organization={IEEE}
    }