A Patch-Based Real-Time 6D Object Pose Refinement Method for Robotic Manipulation

1. Video Demo

  The video above shows the results of our pose refinement method in both the test environment and the open environment; the initial, refined, and ground-truth 3D bounding boxes are shown in orange, blue, and green, respectively.


2. Image Demo

Markdown Image

Fig. 1. Pose refinement visualization on test data.

  Fig. 1 shows the pose refinement visualization of our method, where the refined and ground-truth 3D bounding boxes are shown in blue and green, respectively.
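Overlays like those in Fig. 1 are typically produced by projecting the eight corners of the object's 3D bounding box into the image with a given pose. Below is a minimal sketch of that projection, assuming a pinhole camera model; the intrinsics, cube size, and example pose are illustrative values, not parameters from our setup.

```python
import numpy as np

def project_bbox_corners(corners_obj, R, t, K):
    """Project 3D bounding-box corners (8x3, object frame) into the image.

    R (3x3) and t (3,) give the object-to-camera pose; K (3x3) is the
    pinhole intrinsic matrix. Returns an 8x2 array of pixel coordinates.
    """
    pts_cam = corners_obj @ R.T + t          # transform to camera frame
    pts_img = pts_cam @ K.T                  # apply intrinsics
    return pts_img[:, :2] / pts_img[:, 2:3]  # perspective divide

# Hypothetical intrinsics and an axis-aligned 0.1 m cube for illustration
K = np.array([[600.0,   0.0, 320.0],
              [  0.0, 600.0, 240.0],
              [  0.0,   0.0,   1.0]])
half = 0.05
corners = np.array([[sx * half, sy * half, sz * half]
                    for sx in (-1, 1) for sy in (-1, 1) for sz in (-1, 1)])
R = np.eye(3)                  # identity rotation for illustration
t = np.array([0.0, 0.0, 0.5])  # object 0.5 m in front of the camera
uv = project_bbox_corners(corners, R, t, K)
```

Drawing the twelve box edges between the projected corners (e.g. with OpenCV's `cv2.line`) then yields the colored boxes shown in the figures.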

Fig. 2. Eye-in-hand robotic manipulation platform.

Markdown Image

Fig. 3. Visualization of the eye-in-hand pose estimation results.

  As shown in Fig. 3, we fix the relative pose between the calibration board and the object, so the board provides a reference object pose for evaluating pose refinement accuracy under various camera viewpoints.
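With the board-to-object transform fixed, a reference object pose follows from chaining the detected camera-to-board pose with that fixed transform, and the refined pose can then be scored against it. The sketch below illustrates this evaluation with common rotation/translation error metrics; the transform values and the simulated 10° error are made up for the example and are not measurements from our platform.

```python
import numpy as np

def pose_error(T_est, T_ref):
    """Rotation error (degrees) and translation error (meters)
    between two 4x4 object-in-camera poses."""
    dR = T_est[:3, :3] @ T_ref[:3, :3].T
    cos_angle = np.clip((np.trace(dR) - 1.0) / 2.0, -1.0, 1.0)
    rot_err = np.degrees(np.arccos(cos_angle))
    trans_err = np.linalg.norm(T_est[:3, 3] - T_ref[:3, 3])
    return rot_err, trans_err

# Reference pose from the calibration chain (hypothetical transforms):
# the board-to-object transform T_board_obj is fixed on the platform.
T_cam_board = np.eye(4); T_cam_board[:3, 3] = [0.0, 0.0, 0.6]
T_board_obj = np.eye(4); T_board_obj[:3, 3] = [0.1, 0.0, 0.0]
T_cam_obj_ref = T_cam_board @ T_board_obj

# Simulate an estimated pose with a 10-degree rotation error about z
theta = np.radians(10.0)
T_est = T_cam_obj_ref.copy()
T_est[:3, :3] = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                          [np.sin(theta),  np.cos(theta), 0.0],
                          [0.0,            0.0,           1.0]])
rot_err, trans_err = pose_error(T_est, T_cam_obj_ref)
```

Because the board is re-detected at every camera viewpoint, this check can be repeated as the arm moves, which is what makes the eye-in-hand evaluation in Fig. 3 possible.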


3. Key Insight


4. Method

Markdown Image


5. Compared Methods

YOLO6D (CVPR 2018): B. Tekin, S. N. Sinha, and P. Fua, "Real-time seamless single shot 6D object pose prediction," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018, pp. 292–301.
CullNet (ICCVW 2019): K. Gupta, L. Petersson, and R. Hartley, "CullNet: Calibrated and pose aware confidence scores for object pose estimation," in Proceedings of the IEEE/CVF International Conference on Computer Vision Workshops, 2019.
PVNet (CVPR 2019): S. Peng, Y. Liu, Q. Huang, X. Zhou, and H. Bao, "PVNet: Pixel-wise voting network for 6DoF pose estimation," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2019, pp. 4561–4570.

6. Paper