Let's first look at how the parameters are described in the C++ API.
The first input is the set of points: an (N, 3) array of 3D coordinates, xyz.
The second is the rotation vector.
The third is the translation vector.
The fourth is the camera intrinsic matrix.
The fifth is the camera distortion coefficients: with 4 values it is [k1, k2, p1, p2], with 5 values it is [k1, k2, p1, p2, k3], and it can also be longer, [k1, k2, p1, p2, k3, k4, k5, k6].
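Before the LiDAR example, here is a minimal, self-contained sketch of how those five arguments fit together; the intrinsics, distortion values, and cube corners below are made up purely for illustration:

import cv2
import numpy as np

# Eight corners of a unit cube placed 5 m in front of the camera, shape (N, 3)
object_points = np.array([
    [x, y, z + 5.0]
    for x in (-0.5, 0.5) for y in (-0.5, 0.5) for z in (-0.5, 0.5)
], dtype=np.float64)

rvec = np.zeros((3, 1))   # rotation vector (Rodrigues form), here no rotation
tvec = np.zeros((3, 1))   # translation vector, here no translation
camera_K = np.array([[800.0,   0.0, 320.0],
                     [  0.0, 800.0, 240.0],
                     [  0.0,   0.0,   1.0]])          # made-up intrinsics
dist = np.array([0.1, -0.05, 0.001, 0.001, 0.0])      # [k1, k2, p1, p2, k3]

# Returns the projected pixel coordinates and the Jacobian
image_points, jacobian = cv2.projectPoints(object_points, rvec, tvec, camera_K, dist)
print(image_points.reshape(-1, 2))   # (N, 2) pixel coordinates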
In practice, in Python, this is how I use it when projecting LiDAR points onto the image (a 3D box from the LiDAR, i.e. its 8 corner points):
import cv2
import numpy as np

# Rotation matrix and translation vector taken from the 4x4 lidar-to-camera pose
rotation = lidar2camera_pose[:3, :3]
translation = lidar2camera_pose[:3, 3]
dist = np.array(camera_disinfo)  # distortion coefficients, e.g. [k1, k2, p1, p2, k3]
# Project the 8 box corners into the image; the second return value is the Jacobian
imagePoints, _ = cv2.projectPoints(lidar_points, rotation, translation, camera_K, dist)
imagePoints = np.reshape(imagePoints, (8, 2))
# Axis-aligned bounding rectangle of the projected box
maxrect = cv2.boundingRect(imagePoints.astype(int))
However, this does not exclude points behind the camera, so it can be modified like this:
# Transform the box corners into the camera frame first, then drop any corner behind the camera
lidar_points = np.dot(lidar2camera_pose[:3, :3], lidar_points.T).T + lidar2camera_pose[:3, 3].reshape(1, 3)
lidar_points = lidar_points[lidar_points[:, 2] > 0]
if len(lidar_points) < 8:
    # At least one corner is behind the camera, so skip this box
    return None
# The points are already in the camera frame, so pass an identity rotation and zero translation
rotation = np.eye(3)
translation = np.zeros((3, 1))
dist = np.array(camera_disinfo)
imagePoints, _ = cv2.projectPoints(lidar_points, rotation, translation, camera_K, dist)
imagePoints = np.reshape(imagePoints, (8, 2))
maxrect = cv2.boundingRect(imagePoints.astype(int))