Advances in Measurement Methods and Techniques for Positioning of Anthropomorphic Robots: A Review
Carni D. L.; Lamonaca F.
2025-01-01
Abstract
Precise positioning is essential for anthropomorphic robots in industrial automation, medical robotics, and human-robot interaction. However, sensor drift, mechanical tolerances, and environmental disturbances introduce errors, necessitating robust compensation strategies. This review categorises measurement techniques into sensor-based, vision-based, kinematic, dynamic, and hybrid approaches. Sensor-based measurement methods, including Light Detection and Ranging (LiDAR), Inertial Measurement Units (IMUs), and ultrasonic sensors, provide real-time spatial data but require drift correction. Vision-based measurement methods, such as monocular and stereo cameras and Simultaneous Localization and Mapping (SLAM), enhance environmental perception but demand effective calibration. Kinematic and dynamic models support motion estimation but require frequent recalibration. Hybrid sensor fusion, integrating multiple modalities with artificial intelligence (AI)-driven metrology, significantly improves accuracy but increases computational complexity. This review compares measurement trade-offs in accuracy, efficiency, and real-time applicability, demonstrating that AI-enhanced sensor fusion outperforms standalone methods. Emerging technologies, including quantum sensors, next-generation LiDAR, and deep learning-based uncertainty correction, offer promising advancements.
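The drift correction and multi-sensor fusion the abstract refers to can be illustrated with a minimal complementary filter, a common baseline for IMU drift compensation. This sketch is not from the review itself; all signal names, the constant pose, the bias value, and the blending constant `alpha` are illustrative assumptions.

```python
# Minimal complementary filter: fuses gyroscope rate (accurate short-term,
# but integration drifts) with an accelerometer tilt estimate (noisy, but
# drift-free) to track a single joint pitch angle.

def complementary_filter(gyro_rates, accel_angles, dt=0.01, alpha=0.98):
    """Blend integrated gyro rate with the accelerometer angle at each step."""
    angle = accel_angles[0]  # initialise from the drift-free sensor
    estimates = []
    for rate, acc_angle in zip(gyro_rates, accel_angles):
        # High-pass the gyro path, low-pass the accelerometer path:
        # alpha weights short-term gyro dynamics against long-term stability.
        angle = alpha * (angle + rate * dt) + (1.0 - alpha) * acc_angle
        estimates.append(angle)
    return estimates

# Synthetic check: a biased gyro alone would drift by 2 rad over 1000 steps;
# the fused estimate stays bounded near the true pose.
true_angle = 0.5               # radians, constant pose (illustrative)
gyro = [0.2] * 1000            # pure bias: integrating this gives 2 rad of drift
accel = [true_angle] * 1000    # noiseless accelerometer, for illustration only
fused = complementary_filter(gyro, accel)
print(round(fused[-1], 3))
```

With these numbers the fused estimate settles at a bounded steady-state offset of roughly `alpha * bias * dt / (1 - alpha)` ≈ 0.098 rad, instead of accumulating 2 rad of open-loop drift; in practice the same role is played by Kalman-filter-based fusion, which the hybrid approaches surveyed in the review generalise across LiDAR, vision, and inertial modalities.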


