CN118052883A - Binocular vision-based multiple circular target space positioning method - Google Patents

Binocular vision-based multiple circular target space positioning method

Info

Publication number
CN118052883A
Authority
CN
China
Prior art keywords
edge
pixel
target
circular
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202410082494.6A
Other languages
Chinese (zh)
Inventor
张德育
郭向坤
张月欣
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenyang Ligong University
Original Assignee
Shenyang Ligong University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenyang Ligong University filed Critical Shenyang Ligong University
Priority to CN202410082494.6A
Publication of CN118052883A
Legal status: Pending

Classifications

    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/80Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G06T7/85Stereo camera calibration
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/13Edge detection
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/30Determination of transform parameters for the alignment of images, i.e. image registration
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10004Still image; Photographic image
    • G06T2207/10012Stereo images
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20024Filtering details
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20228Disparity calculation for image-based rendering

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The present invention provides a binocular vision-based method for the spatial positioning of multiple circular targets, and relates to the technical field of visual sensor positioning. A binocular camera is stereoscopically calibrated to obtain its intrinsic and extrinsic parameters, after which stereo rectification is performed, and circular markers are identified and extracted with image processing techniques. On this basis, the Canny operator is used for coarse edge localization, sub-pixel edges are then extracted from the coarse edges by combining the Zernike moment method with the Otsu method, the edges are refined by grayscale gradient optimization, and the center coordinates of the refined edges are located with least squares ellipse fitting. Next, a tracking method based on spatiotemporal characteristics is used to track the targets through the image sequence at consecutive moments, stereo matching is finally performed on target pairs carrying the same number, and the spatial coordinates are located from the triangulation principle of binocular vision. Experiments show that the present invention achieves fast, high-precision and stable spatial positioning of multiple circular targets.

Description

A spatial positioning method for multiple circular targets based on binocular vision

Technical Field

The present invention relates to the field of visual sensor positioning technology, and in particular to a binocular vision-based spatial positioning method for multiple circular targets.

Background Art

Circular target spatial positioning, i.e. target recognition and spatial positioning technology, is a foundation of computer vision and is widely applied in robotics, image retrieval, UAV flight environment perception and other fields. Circular target recognition means detecting the target objects of interest in a still image or a dynamic video. In the field of visual sensor positioning, and especially in machine vision and target positioning, increasing attention is being paid to developing more accurate and efficient positioning methods. Vision-based measurement is a close-range photogrammetry technique that uses digital cameras and computer technology. It is mainly applied where detailed information about an object surface is needed, for three-dimensional modeling, and for measuring shape and dimensions. Visual measurement is based on image pairs of the target obtained by photography. At present, image pairs are acquired mainly by monocular vision or multi-camera vision. Monocular vision must be solved with camera motion and external fixed marker points, whereas among multi-camera approaches binocular vision is widely adopted for its simple operation and high precision. Binocular vision obtains the relative solution parameters of the target by directly solving the extrinsic parameter matrix between the two cameras. In deformation monitoring, where fixed external marker points are usually hard to find, binocular vision measurement has therefore been widely used.

The basic principle of binocular vision measurement is to imitate the way human eyes acquire stereoscopic information: two cameras photograph the same scene, and the differences between their images are analyzed to recover the three-dimensional spatial information of the target. Many researchers have studied vision measurement technology and applied it in numerous fields, and binocular vision systems are widely used in machine vision because they simulate the stereoscopic perception of the human eye. However, current binocular vision methods still face challenges, especially in locating multiple circular targets. Existing techniques may be limited by target shape, size and stereo matching in multi-target positioning, so the achievable accuracy is difficult to bring up to the level demanded by practical applications.

Summary of the Invention

In view of the shortcomings of the prior art, the present invention provides a binocular vision-based method for the spatial positioning of multiple circular targets, aiming to overcome the limitations of the prior art, improve positioning accuracy and robustness, and better meet application requirements in various complex environments. The present invention combines binocular vision measurement technology with image processing techniques to recognize and spatially locate circular targets of a specific color, and can improve the accuracy of marker-based positioning monitoring.

The technical solution adopted by the present invention is as follows:

A binocular vision-based method for spatially locating multiple circular targets, comprising the following steps:

Step 1: Photograph the calibration plate from different angles with the left and right cameras of the binocular camera, and complete the stereo calibration and rectification of the binocular camera according to Zhang Zhengyou's calibration method using the Matlab calibration toolbox.

Step 2: Start the calibrated binocular camera to acquire circular target images, filter and denoise the acquired images, apply binarization segmentation to the denoised images to obtain segmented target images, apply morphological filtering, and finally identify the circular target regions with the circularity criterion.

Step 2-1: Filter and denoise the acquired circular target images.

Guided filtering is used to denoise the original target image; the target image is guided by a guidance signal so as to adjust the weights in the filtering process.

Step 2-2: Binarization segmentation.

The HSI color space is used to identify the markers: the denoised target image is converted from RGB space to HSI space, and the red markers are extracted according to the range thresholds of the HSI components, as shown in formula (1).

In the formula, H is the frequency defining the color, i.e. the hue; S is the depth of the color, i.e. the saturation; I is the intensity. When all HSI components of a pixel simultaneously satisfy the red threshold conditions of formula (1), the pixel at that position is set to foreground 1, otherwise it is set to black background 0. A morphological filtering step combining opening and closing operations is then applied for further noise removal.

Step 2-3: Identify the circular target regions with the circularity criterion; specifically, the circular target regions are identified by computing the circularity of each target region.

Circularity expresses how similar the target is to an ideal circle and is determined from the perimeter and area of the target region; the relation between the perimeter and area of the target region is given by formula (2).

In the formula, C is the circularity of the target region, S_c is the area of the target region, L_c is the contour perimeter of the target region, and β is a proportional coefficient, where:

S_c = πr², L_c = 2πr  (3)

where r is the radius of the target region. From the circularity formula, the connected region is regarded as a circle when C = 1, and the circularity C of a target region must satisfy:

C_min ≤ C ≤ C_max  (4)

In the formula, C_min and C_max are the circularities of the marker point when the angle between the line from the circular target to the binocular camera and the optical axis of the binocular camera takes its maximum and minimum values respectively; their values are expressed as follows:

where θ is the deflection angle of the binocular camera.

Step 3: Use the Canny edge detection algorithm to obtain coarse edges of the identified circular target regions, then perform sub-pixel edge detection on the coarse edges with Zernike moments while introducing an edge threshold criterion based on the Otsu method to decide the sub-pixel edges, refine the sub-pixel edges with a method based on grayscale gradient optimization, and finally locate the centers of the sub-pixel edges by least squares ellipse fitting so as to obtain the center pixel coordinates of the circular targets.

Step 3-1: Use the Canny edge detection algorithm to obtain coarse edges of the identified circular target regions.

First the binarized circular target image is smoothed with a Gaussian filter. Then, on the smoothed image, a gradient operator is used to compute the image gradient, giving the gradient magnitude and direction at every pixel and producing a gradient map. Non-maximum suppression is applied to the gradient map: the gradient magnitude of a pixel is compared along the gradient direction with the magnitudes of its two neighbors, and only pixels with a local maximum gradient value are kept, which suppresses responses in non-edge regions. Two thresholds, a high one and a low one, are then applied to the gradient map, and pixels are classified by their gradient magnitude into strong edges, weak edges and non-edges. Strong edges are true edges, weak edges are parts of true edges, and non-edges are the remaining regions. Weak edges are linked to the surrounding strong edge pixels to form continuous coarse edges.

Step 3-2: Perform sub-pixel edge detection on the coarse edges with Zernike moments.

Let A_nm denote the Zernike moment of the image f(x, y); through the conversion formula, A_nm is expressed as shown in formula (6),

where (n+1)/π is the normalization factor, V_nm is the Zernike moment polynomial, ρ is the vector distance from the origin to the pixel (x, y), and θ is the angle between ρ and the x-axis. If the image is rotated by the angle θ, the Zernike moment A′_nm of the rotated image f(x, y) satisfies formula (7).

According to the above formulas, the Zernike moments A_00, A_11 and A_20 of the coarse-edge target image are computed, and the Zernike moments A′_00, A′_11 and A′_20 of the rotated image satisfy the relations in formula (8).

The four parameters h, k, l and the edge angle of the Zernike-moment sub-pixel edge are computed as shown in formula (9).

In formula (9), h is the background gray value, k is the gray-level step height, l is the perpendicular distance from the template center to the edge, and the fourth parameter is the angle between the edge and the x-axis. An N×N Zernike moment template is adopted, and with the parameters computed from the above formulas the i-th sub-pixel edge point P_i(x′, y′) is obtained from formula (10).

Step 3-3: Introduce an edge threshold criterion based on the Otsu method to decide the sub-pixel edges.

For deciding the sub-pixel edges, the threshold condition for a sub-pixel edge point in the Zernike moment algorithm is:

l ≤ l_t ∩ k ≥ k_t  (11)

In formula (11), k_t is the gray-level step threshold and l_t is the distance threshold, whose value is given by formula (12).

The decision criterion of the Otsu method is introduced. It is based on the between-class variance of the edge region and the background region, and the optimal threshold is chosen by maximizing the between-class variance. Specifically, the Otsu method traverses all candidate thresholds, computes the between-class variance for each, and finally selects the threshold that maximizes the between-class variance. In a marker image containing n pixels, let the number of pixels with gradient magnitude i be n_i; the probability of gradient magnitude i is then p_i = n_i / n. Using a threshold t, the image is divided into two classes, the foreground edge region and the background, whose probabilities are given by formula (13).

Their means are given by formula (14).

With the average gradient magnitude μ_t, the between-class variance of the two classes is

σ² = ω₀(μ₀ − μ_t)² + ω₁(μ₁ − μ_t)²  (15)

When the variance σ² is maximal, the difference between the foreground edges and the background is greatest and the segmentation accuracy is highest; the gray-level step threshold k_t at this point is the optimal value. Using the optimal gray-level step threshold k_t and the distance threshold l_t, the Zernike-moment sub-pixel edge computation finally yields the sub-pixel edge points p_i′(x′, y′), which are stored in the edge point set {P_i}.

Step 3-4: Refine the sub-pixel edges with a method based on grayscale gradient optimization.

In the grayscale-gradient optimization method, the center value of the distribution is the position where the gradient direction changes the most, i.e. the position of the edge. First, sub-pixel edge information is extracted with the improved sub-pixel Zernike moment algorithm of step 3-3, and a least squares ellipse fit is applied to all sub-pixel edge points to obtain the coarse center of the circular marker. From the coarse center and the corresponding coarse edge, the radial line equation of each edge pixel is obtained:

y = kx + b  (16)

where the coefficients are given by formula (17),

in which (x₀, y₀) is the coarse center coordinate and (x_i, y_i) is the coordinate of each edge point. For a given edge point, the first derivative of the gray level along the gradient direction is computed along its radial direction, and the central gray value is obtained with a Gaussian function, whose expression is given by formula (18).

In formula (18), a is the Gaussian scale coefficient, b is the center value of the Gaussian distribution of the first derivative of the gradient gray level, c² is the variance of that Gaussian distribution, and p_i′(x_b, y_b) is the position of the refined edge point; all edge points are refined in turn.

Step 3-5: Apply the least squares ellipse fitting method to locate the center of the sub-pixel edges and obtain the center pixel coordinates of the circular target.

The least squares ellipse fitting method is applied to the sub-pixel edge point set {P_i} to locate the center of the circular target. The general equation of an ellipse is:

F(x) = ax² + bxy + cy² + dx + ey + f = 0  (19)

Fitting the edge points by least squares ellipse fitting yields the parameters A, B, C, D, E and F, and the coordinates of the ellipse center are then computed as shown in formula (20).

The coordinates of the center point of the circular target are finally obtained as (x_r, y_r).

Step 4: Assign initial numbers to the multiple circular targets photographed by the binocular camera, use a target tracking method based on spatiotemporal features to continuously identify the numbered circular targets and locate their center pixel coordinates, then stereo-match the two circular target images taken by the binocular camera at the same moment according to the pixel coordinates carrying the same number, and perform spatial positioning according to the binocular vision measurement principle.

Step 4-1: Perform initial localization of all circular targets and obtain their center pixel coordinates. Sort the center coordinates in ascending order of pixel x value and store them in a set C. Divide C into m subsets P_1, P_2, ..., P_k, ..., P_m, each containing n elements. Then sort the elements of each subset P_k (k = 1, 2, ..., m) in ascending order of y value, giving Q_ki (k = 1, 2, ..., m; i = 1, 2, ..., n), where Q_ki is the pixel coordinate of the circular target in the k-th row and i-th column. The elements of the set C represent the pixel points of the target image ordered from top to bottom and left to right, and the number of each center pixel is the number of the corresponding marker.

Step 4-2: Process the image sequence with the marker recognition method to obtain multi-marker extraction result images, so as to separate the marker foreground from the background. For each circular target, compute a spatial feature based on its minimum bounding rectangle and a temporal feature based on the time index of the image in which it appears: the circular marker of the monitoring point identified at the current moment, together with its minimum bounding rectangle, serves as the spatial feature operator of the target, and the current moment is recorded as the temporal description operator. Finally, the spatial feature operator and the temporal description operator are combined to construct the target feature vector A:

A = {O_cen, M_matrix, T_fps}  (21)

In the formula, A is structured as a cell array, O_cen is the number of the center coordinate of the circular target pixel, M_matrix is the minimum bounding rectangle of the circular marker, and T_fps is the time index of the binarized circular marker at the current moment.

A time-series data group B is built from n feature vectors A:

B = {A_1, A_2, ..., A_n}  (22)

A_n is the feature vector of the circular target with number n in the current image, and n is the number of foreground targets.

Step 4-3: Associate the same target across consecutive image frames by tracking the position of the circular target in the image; using the target feature vectors together with the overlap degree OLD of the target bounding rectangles, circular targets at different moments are matched and associated. The computation model of OLD is:

OLD = K / (P_T + K + Q_{T+1})  (23)

In the formula, OLD evaluates the similarity of two target regions: the higher its value, the more likely the two targets are the same object. K is the shared area of the two target regions, and P_T and Q_{T+1} are the region areas of the two targets to be examined at consecutive moments. The matching condition used to judge whether two targets are successfully associated is:

OLD ≥ α  (24)

where α is the discrimination threshold. If the targets to be examined satisfy the above similarity condition, the two targets are regarded as the same marker; otherwise they are regarded as different targets. If two targets at different moments are judged to be the same target, the target feature information from the latest image frame replaces the target features associated at the previous moment in the time-series cell group B.

Step 4-4: Track the initially numbered circular targets over consecutive moments, obtain the circular targets with corresponding numbers in the left and right camera images of the same moment, perform stereo matching through the corresponding numbers, and finally solve their spatial coordinates using the binocular vision measurement principle.

The binocular camera computes a disparity map from two images taken from different angles and then obtains the three-dimensional information of a pixel from the disparity map. Let b be the baseline distance between the two cameras, f the focal length of the camera lens and L the imaging width; X_L and X_R are the pixel distances of the two image points P_L and P_R along the X axis of their respective coordinate systems, the difference between the corresponding pixel coordinates is the disparity d, and Z is the depth distance from an object point in space to the baseline of the two cameras. The disparity d is:

d = |X_L − X_R|  (25)

The distance between the two image points P_L and P_R is then obtained, and from the similar-triangle relations the distance Z from the point P to the plane of the projection centers follows.

When a point P in three-dimensional space moves, its image points P_L and P_R on the left and right cameras change accordingly, and so does the disparity between the left and right cameras. The depth information of the point is obtained from the disparity principle, and the three-dimensional coordinates of the point in space are then obtained from the similar-triangle principle.

The beneficial effects produced by the above technical solution are as follows:

The present invention provides a binocular vision-based method for the spatial positioning of multiple circular targets, combining binocular vision measurement technology with image processing techniques to recognize and spatially locate circular targets of a specific color. First, a center-location method based on improved sub-pixel edges of circular markers is proposed: the circular markers are identified and extracted with image processing techniques, coarse localization is then performed with the Canny operator, sub-pixel edges are extracted from the coarse edges by combining the Zernike moment method with the Otsu method, the edges are refined with grayscale gradient optimization, and the center coordinates of the refined edges are located with least squares ellipse fitting. Second, a numbering-based stereo matching method is proposed: combined with tracking based on spatiotemporal characteristics, targets are tracked through the image sequence at consecutive moments, stereo matching is finally performed on target pairs carrying the same number, and the spatial coordinates are thereby located. Experiments show that the proposed method achieves fast, high-precision and stable spatial positioning.

Brief Description of the Drawings

FIG. 1 is a flow chart of the binocular vision-based method for spatially locating multiple circular targets provided by a specific embodiment of the present invention;

FIG. 2 is a photograph of the HBVCAM-4M2142HD-2 binocular camera used in a specific embodiment of the present invention;

FIG. 3 shows the checkerboard grid calibration plate used in a specific embodiment of the present invention;

FIG. 4 is a photograph of the binocular camera acquisition scene in a specific embodiment of the present invention;

FIG. 5 shows a pair of binocular camera calibration images from a specific embodiment of the present invention, in which 5(a) is the calibration image taken by the left camera and 5(b) is the calibration image taken by the right camera;

FIG. 6 is a schematic diagram of the measurement principle of the binocular vision system in a specific embodiment of the present invention;

FIG. 7 is a schematic diagram of camera imaging in the binocular vision system in a specific embodiment of the present invention;

FIG. 8 shows recognition images of multiple circular markers from a specific embodiment of the present invention, in which 8(a) is the original photograph of the markers taken by the left camera, 8(b) is the original photograph taken by the right camera, 8(c) is the recognition image from the left camera, and 8(d) is the recognition image from the right camera.

Detailed Description of the Embodiments

The specific embodiments of the present invention are described in further detail below with reference to the accompanying drawings and examples. The following examples are used to illustrate the present invention, but are not intended to limit its scope.

A binocular vision-based method for spatially locating multiple circular targets, as shown in FIG. 1, comprises the following steps:

Step 1: Photograph the calibration plate from different angles with the left and right cameras of the binocular camera, and complete the stereo calibration and rectification of the binocular camera according to Zhang Zhengyou's calibration method using the Matlab calibration toolbox.

The ultimate goal of the binocular camera is to obtain the center pixel coordinates of the circular targets and, after obtaining the disparity map through stereo matching, to solve the spatial coordinates of the circular target centers through the similar-triangle principle. This, however, holds only for an ideal binocular camera, and real binocular cameras usually exhibit distortion; the binocular camera therefore needs to be stereoscopically calibrated and rectified before use so that it can work in the ideal state.

According to Zhang Zhengyou's calibration method, MATLAB is used for offline calibration of the binocular camera, an HBVCAM-4M2142HD-2-V22 with a CMOS sensor and a fixed focal length of 3 mm, as shown in FIG. 2. The calibration plate is a self-made 7×7 checkerboard grid with a corner spacing of 40 mm, as shown in FIG. 3. The image acquisition program is written in Python with OpenCV.
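
As an illustrative sketch only (not part of the original disclosure), a Python + OpenCV acquisition loop of the kind mentioned above could look as follows. The device index, the side-by-side frame layout of the binocular module and the 640×480 per-eye resolution are assumptions for illustration.

    import cv2

    # Minimal sketch of a binocular acquisition loop (assumed side-by-side sensor output).
    cap = cv2.VideoCapture(0)                     # device index is an assumption
    cap.set(cv2.CAP_PROP_FRAME_WIDTH, 1280)       # 2 x 640 assumed for a side-by-side stream
    cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 480)

    idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        h, w = frame.shape[:2]
        left, right = frame[:, :w // 2], frame[:, w // 2:]   # split into left/right views
        cv2.imshow("left", left)
        cv2.imshow("right", right)
        key = cv2.waitKey(1) & 0xFF
        if key == ord('s'):                       # save a calibration pair on demand
            cv2.imwrite(f"left_{idx:02d}.png", left)
            cv2.imwrite(f"right_{idx:02d}.png", right)
            idx += 1
        elif key == ord('q'):
            break
    cap.release()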

The binocular camera calibration is carried out on an open, flat outdoor site with good natural lighting. The calibration plate is placed at the center of both the left and right camera images, and the binocular acquisition device captures the calibration plate at different positions. Because the binocular camera and the calibration plate are not exactly perpendicular to each other, the tilt of the calibration plate should not be changed too much during calibration image acquisition, as shown in FIG. 4. Images are therefore acquired at random within the following parameter ranges: object-to-camera distance 800 mm to 1000 mm, acquisition window resolution 640×480 pixels, and an angle of 0 to 45 degrees between the target plane and the image plane. Twenty-five groups of calibration images are collected; FIG. 5 shows one group of calibration images captured with the binocular acquisition device.

Open the "Stereo Camera Calibrator" toolbox on the MATLAB simulation platform, set "Coefficients", "Skew" and "Tangential Distortion", and import the 25 groups of left and right calibration plate images collected above. After the import is complete, click the Calibrate button of the toolbox, and the toolbox automatically finds and imports every corner point in the images. Images with large deviations are removed during calibration to improve the calibration result. After calibration, the intrinsic and extrinsic parameters of the binocular camera are obtained: the intrinsic parameters of the left and right cameras and the rotation and translation matrices between the cameras. The parameters of the binocular camera are listed in Table 1 below.

Table 1. Stereo camera calibration results

After the intrinsic parameters and the rotation and translation matrices of the binocular camera are obtained by calibration, the acquired left and right calibration plate images are rectified according to the rectification principle; in the rectified left and right images, every pair of corresponding pixels lies on the same horizontal line. The stereo rectification of the binocular camera provides more accurate data for the subsequent stereo matching and binocular ranging.
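
As a sketch only: once the intrinsics, distortion coefficients and the rotation/translation between the two cameras are known (here taken from the MATLAB calibration), the rectification described above can be reproduced with OpenCV. All matrices below are placeholders to be replaced by the calibrated values.

    import cv2
    import numpy as np

    # Placeholder calibration results (substitute the values obtained from the MATLAB toolbox).
    K1 = np.eye(3); D1 = np.zeros(5)      # left intrinsics and distortion
    K2 = np.eye(3); D2 = np.zeros(5)      # right intrinsics and distortion
    R  = np.eye(3); T  = np.array([[-60.0], [0.0], [0.0]])   # assumed baseline, in mm
    size = (640, 480)                     # acquisition resolution stated in the embodiment

    # Compute rectification transforms so that corresponding pixels lie on the same row.
    R1, R2, P1, P2, Q, roi1, roi2 = cv2.stereoRectify(K1, D1, K2, D2, size, R, T)
    map1x, map1y = cv2.initUndistortRectifyMap(K1, D1, R1, P1, size, cv2.CV_32FC1)
    map2x, map2y = cv2.initUndistortRectifyMap(K2, D2, R2, P2, size, cv2.CV_32FC1)

    def rectify_pair(img_left, img_right):
        """Warp a raw image pair into the rectified (row-aligned) geometry."""
        rl = cv2.remap(img_left,  map1x, map1y, cv2.INTER_LINEAR)
        rr = cv2.remap(img_right, map2x, map2y, cv2.INTER_LINEAR)
        return rl, rr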

Step 2: Start the calibrated binocular camera to acquire circular target images, filter and denoise the acquired images, apply binarization segmentation to the denoised images to obtain segmented target images, apply morphological filtering, and finally identify the circular target regions with the circularity criterion.

Step 2-1: Filter and denoise the acquired circular target images.

Guided filtering is used to denoise the original target image; the target image is guided by a guidance signal so as to adjust the weights in the filtering process and better preserve the edges and details of the image.
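
A minimal sketch of this denoising step, assuming the opencv-contrib module cv2.ximgproc is available; the radius and regularization value are illustrative, not taken from the patent.

    import cv2

    def denoise_guided(img, radius=8, eps=1e-2):
        """Edge-preserving smoothing: the image is used as its own guidance signal."""
        # cv2.ximgproc.guidedFilter(guide, src, radius, eps) is provided by opencv-contrib-python.
        return cv2.ximgproc.guidedFilter(img, img, radius, eps)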

Step 2-2: Binarization segmentation.

The HSI color space is used to identify the markers: the denoised target image is converted from RGB space to HSI space, and the red markers are extracted according to the range thresholds of the HSI components, as shown in formula (1).

In the formula, H is the frequency defining the color, i.e. the hue; S is the depth of the color, i.e. the saturation; I is the intensity. When all HSI components of a pixel simultaneously satisfy the red threshold conditions of formula (1), the pixel at that position is set to foreground 1, otherwise it is set to black background 0. A morphological filtering step combining opening and closing operations is then applied for further noise removal.
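
The exact H, S, I thresholds of formula (1) are not reproduced in this text, so the ranges in the sketch below are placeholders; HSV is used here only as a stand-in for the HSI space described above. The opening-plus-closing morphological filtering follows the description.

    import cv2
    import numpy as np

    def segment_red_markers(bgr):
        """Binarize red circular markers; threshold ranges are illustrative placeholders."""
        hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
        # Red wraps around the hue axis, so two assumed ranges are combined.
        m1 = cv2.inRange(hsv, (0,   80, 50), (10,  255, 255))
        m2 = cv2.inRange(hsv, (170, 80, 50), (180, 255, 255))
        mask = cv2.bitwise_or(m1, m2)          # foreground = 255, background = 0
        kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
        mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN,  kernel)   # remove small speckles
        mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)   # fill small holes
        return mask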

Step 2-3: Identify the circular target regions with the circularity criterion; specifically, the circular target regions are identified by computing the circularity of each target region, so that the marker regions are extracted and possible interfering targets are excluded.

Circularity expresses how similar the target is to an ideal circle and is determined from the perimeter and area of the target region; the relation between the perimeter and area of the target region is given by formula (2).

In the formula, C is the circularity of the target region, S_c is the area of the target region, L_c is the contour perimeter of the target region, and β is a proportional coefficient, where:

S_c = πr², L_c = 2πr  (3)

where r is the radius of the target region. From the circularity formula, the connected region is regarded as a circle when C = 1. Considering that differences in imaging distance and angle may cause the target in the binarized image to appear stretched or distorted, the circularity C of a target region must satisfy:

C_min ≤ C ≤ C_max  (4)

In the formula, C_min and C_max are the circularities of the marker point when the angle between the line from the circular target to the binocular camera and the optical axis of the binocular camera takes its maximum and minimum values respectively; their values are expressed as follows:

where θ is the deflection angle of the binocular camera, with β = 1 in the ideal case. Through the above operations, the circular targets are separated from the background.
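
Since formula (2) is not reproduced in this text, the sketch below uses the common normalization C = 4πS_c/L_c² (which equals 1 for an ideal circle, consistent with S_c = πr² and L_c = 2πr); the bounds c_min and c_max and the minimum-area filter are placeholders.

    import cv2
    import numpy as np

    def find_circular_regions(mask, c_min=0.75, c_max=1.2):
        """Keep connected regions whose circularity lies in [c_min, c_max] (placeholder bounds)."""
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
        kept = []
        for cnt in contours:
            area = cv2.contourArea(cnt)
            perim = cv2.arcLength(cnt, True)
            if area < 20 or perim == 0:          # discard tiny noise regions (assumed size)
                continue
            circularity = 4.0 * np.pi * area / (perim * perim)   # equals 1 for an ideal circle
            if c_min <= circularity <= c_max:
                kept.append(cnt)
        return kept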

Step 3: Use the Canny edge detection algorithm to obtain coarse edges of the identified circular target regions, then perform sub-pixel edge detection on the coarse edges with Zernike moments while introducing an edge threshold criterion based on the Otsu method to decide the sub-pixel edges, refine the sub-pixel edges with a method based on grayscale gradient optimization, and finally locate the centers of the sub-pixel edges by least squares ellipse fitting so as to obtain the center pixel coordinates of the circular targets.

Step 3-1: Use the Canny edge detection algorithm to obtain coarse edges of the identified circular target regions.

First the binarized circular target image is smoothed with a Gaussian filter to reduce the influence of noise while preserving as much detail as possible. Then, on the smoothed image, a gradient operator such as the Sobel operator is used to compute the image gradient, giving the gradient magnitude and direction at every pixel and producing a gradient map; the gradient magnitude represents the rate of change of the gray level at that point, and the gradient direction is the direction of fastest change. Non-maximum suppression is applied to the gradient map: the gradient magnitude of a pixel is compared along the gradient direction with the magnitudes of its two neighbors, and only pixels with a local maximum gradient value are kept, which suppresses responses in non-edge regions. Two thresholds, a high one and a low one, are then applied to the gradient map, and pixels are classified by their gradient magnitude into strong edges, weak edges and non-edges. Strong edges are true edges, weak edges are parts of true edges, and non-edges are the remaining regions. Weak edges are linked to the surrounding strong edge pixels to form continuous coarse edges, usually with a gradient-direction edge tracking algorithm such as 4-neighborhood or 8-neighborhood linking.
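
A minimal sketch of this coarse edge localization; the Gaussian kernel size and the low/high hysteresis thresholds are illustrative values, not taken from the patent.

    import cv2

    def coarse_edges(region_img, low=50, high=150):
        """Gaussian smoothing followed by Canny hysteresis edge detection (coarse edges)."""
        blurred = cv2.GaussianBlur(region_img, (5, 5), 1.0)
        # cv2.Canny internally computes Sobel gradients, applies non-maximum suppression
        # and links weak edges to strong edges (hysteresis thresholding).
        return cv2.Canny(blurred, low, high)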

Step 3-2: Perform sub-pixel edge detection on the coarse edges with Zernike moments.

Let A_nm denote the Zernike moment of the image f(x, y); through the conversion formula, A_nm is expressed as shown in formula (6),

where (n+1)/π is the normalization factor, V_nm is the Zernike moment polynomial, ρ is the vector distance from the origin to the pixel (x, y), and θ is the angle between ρ and the x-axis. If the image is rotated by the angle θ, the Zernike moment A′_nm of the rotated image f(x, y) satisfies formula (7).

According to the above formulas, the Zernike moments A_00, A_11 and A_20 of the coarse-edge target image are computed, and the Zernike moments A′_00, A′_11 and A′_20 of the rotated image satisfy the relations in formula (8).

The four parameters h, k, l and the edge angle of the Zernike-moment sub-pixel edge are computed as shown in formula (9).

In formula (9), h is the background gray value, k is the gray-level step height, l is the perpendicular distance from the template center to the edge, and the fourth parameter is the angle between the edge and the x-axis. An N×N Zernike moment template with N = 7 is adopted, and with the parameters computed from the above formulas the i-th sub-pixel edge point P_i(x′, y′) is obtained from formula (10).
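
Formulas (6) to (10) are not reproduced in this text. The sketch below therefore follows the commonly published Ghosal-Mehrotra relations for Zernike-moment sub-pixel edge location with a 7×7 template; the complex moment masks Z00, Z11, Z20 are assumed to be precomputed, and these relations should be checked against the patent's own equations.

    import numpy as np

    def zernike_subpixel(patch, Z00, Z11, Z20, N=7):
        """Estimate a sub-pixel edge point inside an N x N patch from its Zernike moments.

        patch        : N x N grayscale window centred on a coarse Canny edge pixel
        Z00, Z11, Z20: precomputed complex moment masks (assumed available)
        Returns (dx, dy, k, l): sub-pixel offset from the patch centre, step height k
        and normalized edge distance l, following the usual step-edge model.
        """
        A00 = np.sum(patch * Z00)
        A11 = np.sum(patch * Z11)
        A20 = np.sum(patch * Z20)

        phi = np.arctan2(A11.imag, A11.real)           # edge orientation angle
        A11r = (A11 * np.exp(-1j * phi)).real          # rotate so the edge is vertical
        l = A20.real / A11r                            # normalized distance to the edge
        k = 3.0 * A11r / (2.0 * (1.0 - l * l) ** 1.5)  # gray-level step height

        dx = (N / 2.0) * l * np.cos(phi)               # offset in pixels from the patch centre
        dy = (N / 2.0) * l * np.sin(phi)
        return dx, dy, k, l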

Step 3-3: Introduce an edge threshold criterion based on the Otsu method to decide the sub-pixel edges.

For deciding the sub-pixel edges, the threshold condition for a sub-pixel edge point in the Zernike moment algorithm is:

l ≤ l_t ∩ k ≥ k_t  (11)

In formula (11), k_t is the gray-level step threshold and l_t is the distance threshold, whose value is given by formula (12).

In formula (12), N is the size of the template and is usually taken as 7. The gray-level step threshold k_t is crucial to the extraction of edge information, so the decision criterion of the Otsu method is introduced. It is based on the between-class variance of the edge region and the background region, and the optimal threshold is chosen by maximizing the between-class variance. Specifically, the Otsu method traverses all candidate thresholds, computes the between-class variance for each, and finally selects the threshold that maximizes the between-class variance. In a marker image containing n pixels, let the number of pixels with gradient magnitude i be n_i; the probability of gradient magnitude i is then p_i = n_i / n. Using a threshold t, the image is divided into two classes, the foreground edge region and the background, whose probabilities are given by formula (13).

Their means are given by formula (14).

With the average gradient magnitude μ_t, the between-class variance of the two classes is

σ² = ω₀(μ₀ − μ_t)² + ω₁(μ₁ − μ_t)²  (15)

When the variance σ² is maximal, the difference between the foreground edges and the background is greatest and the segmentation accuracy is highest; the gray-level step threshold k_t at this point is the optimal value. Using the optimal gray-level step threshold k_t and the distance threshold l_t, the Zernike-moment sub-pixel edge computation finally yields the sub-pixel edge points p_i′(x′, y′), which are stored in the edge point set {P_i}.
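
A small sketch of the Otsu search described above, applied to a histogram of gradient magnitudes rather than gray levels; the quantization into 256 bins is an assumption.

    import numpy as np

    def otsu_threshold(grad_mag, bins=256):
        """Return the threshold that maximizes the between-class variance of |gradient|."""
        hist, edges = np.histogram(grad_mag.ravel(), bins=bins)
        p = hist.astype(np.float64) / hist.sum()          # p_i = n_i / n
        centers = 0.5 * (edges[:-1] + edges[1:])
        mu_total = np.sum(p * centers)                    # average gradient magnitude

        best_t, best_var = centers[0], -1.0
        w0 = 0.0
        m0 = 0.0                                          # running sum of p_i * value for class 0
        for i in range(bins - 1):
            w0 += p[i]
            m0 += p[i] * centers[i]
            w1 = 1.0 - w0
            if w0 == 0.0 or w1 == 0.0:
                continue
            mu0, mu1 = m0 / w0, (mu_total - m0) / w1
            var = w0 * (mu0 - mu_total) ** 2 + w1 * (mu1 - mu_total) ** 2   # formula (15)
            if var > best_var:
                best_var, best_t = var, centers[i]
        return best_t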

Step 3-4: Refine the sub-pixel edges with a method based on grayscale gradient optimization.

In the grayscale-gradient optimization method, the center value of the distribution is the position where the gradient direction changes the most, i.e. the position of the edge. First, sub-pixel edge information is extracted with the improved sub-pixel Zernike moment algorithm of step 3-3, and a least squares ellipse fit is applied to all sub-pixel edge points to obtain the coarse center of the circular marker. From the coarse center and the corresponding coarse edge, the radial line equation of each edge pixel is obtained:

y = kx + b  (16)

where the coefficients are given by formula (17),

in which (x₀, y₀) is the coarse center coordinate and (x_i, y_i) is the coordinate of each edge point. For a given edge point, the first derivative of the gray level along the gradient direction is computed along its radial direction, and the central gray value is obtained with a Gaussian function, whose expression is given by formula (18).

In formula (18), a is the Gaussian scale coefficient, b is the center value of the Gaussian distribution of the first derivative of the gradient gray level, c² is the variance of that Gaussian distribution, and p_i′(x_b, y_b) is the position of the refined edge point; all edge points are refined in turn.
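
A sketch of this refinement step: gray values are sampled along the radial line through a coarse edge point, the first derivative is computed, and a Gaussian a·exp(−(t−b)²/(2c²)) is fitted so that its center b gives the refined edge position. scipy is used here only for the curve fit; the sampling step and window length are illustrative assumptions.

    import numpy as np
    from scipy.optimize import curve_fit
    from scipy.ndimage import map_coordinates

    def refine_edge_point(gray, center, edge_pt, half_len=4.0, step=0.25):
        """Refine one edge point along the ray from the coarse center through the point."""
        (x0, y0), (xi, yi) = center, edge_pt
        d = np.hypot(xi - x0, yi - y0)
        ux, uy = (xi - x0) / d, (yi - y0) / d            # unit radial direction
        t = np.arange(-half_len, half_len + step, step)  # signed distance along the ray
        xs, ys = xi + t * ux, yi + t * uy
        # Bilinear sampling of gray values along the radial line (row, col order).
        prof = map_coordinates(gray.astype(np.float64), [ys, xs], order=1)
        dprof = np.gradient(prof, step)                  # first derivative along the ray

        gauss = lambda s, a, b, c: a * np.exp(-(s - b) ** 2 / (2.0 * c ** 2))
        a0 = dprof[np.argmax(np.abs(dprof))]
        p, _ = curve_fit(gauss, t, dprof, p0=[a0, 0.0, 1.0], maxfev=2000)
        b = p[1]                                         # center of the derivative peak
        return xi + b * ux, yi + b * uy                  # refined sub-pixel position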

Step 3-5: Apply the least squares ellipse fitting method to locate the center of the sub-pixel edges and obtain the center pixel coordinates of the circular target.

The least squares ellipse fitting method is applied to the sub-pixel edge point set {P_i} to locate the center of the circular target. The general equation of an ellipse is:

F(x) = ax² + bxy + cy² + dx + ey + f = 0  (19)

Fitting the edge points by least squares ellipse fitting yields the parameters A, B, C, D, E and F, and the coordinates of the ellipse center are then computed as shown in formula (20).

The coordinates of the center point of the circular target are finally obtained as (x_r, y_r).
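
A minimal sketch of this center-location step: OpenCV's least-squares ellipse fit returns the center directly, standing in for solving formula (20) from the fitted conic coefficients; at least five edge points are required.

    import cv2
    import numpy as np

    def circle_center(edge_points):
        """Fit an ellipse to the sub-pixel edge point set {P_i} and return its center (x_r, y_r)."""
        pts = np.asarray(edge_points, dtype=np.float32).reshape(-1, 1, 2)
        (xr, yr), (major, minor), angle = cv2.fitEllipse(pts)   # least-squares ellipse fit
        return xr, yr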

Step 4: Assign initial numbers to the multiple circular targets photographed by the binocular camera, use a target tracking method based on spatiotemporal features to continuously identify the numbered circular targets and locate their center pixel coordinates, then stereo-match the two circular target images taken by the binocular camera at the same moment according to the pixel coordinates carrying the same number, and perform spatial positioning according to the binocular vision measurement principle.

Step 4-1: Perform initial localization of all circular targets and obtain their center pixel coordinates. Sort the center coordinates in ascending order of pixel x value and store them in a set C. Divide C into m subsets P_1, P_2, ..., P_k, ..., P_m, each containing n elements. Then sort the elements of each subset P_k (k = 1, 2, ..., m) in ascending order of y value, giving Q_ki (k = 1, 2, ..., m; i = 1, 2, ..., n), where Q_ki is the pixel coordinate of the circular target in the k-th row and i-th column. The elements of the set C represent the pixel points of the target image ordered from top to bottom and left to right, and the number of each center pixel is the number of the corresponding marker.
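
A sketch of the initial numbering described above, following the stated procedure literally (sort by x, split into m groups of n, sort each group by y); the expected grid layout m × n is assumed to be known.

    def number_targets(centers, m, n):
        """Assign indices to circular-target centers as described in step 4-1.

        centers : list of (x, y) center coordinates
        m, n    : expected layout (m groups of n markers)
        Returns a list Q of m lists; Q[k][i] is the center carrying number (k, i).
        """
        C = sorted(centers, key=lambda p: p[0])          # ascending pixel x
        assert len(C) == m * n, "unexpected number of detected markers"
        Q = []
        for k in range(m):
            Pk = C[k * n:(k + 1) * n]                    # k-th subset of n elements
            Q.append(sorted(Pk, key=lambda p: p[1]))     # ascending y within the subset
        return Q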

Step 4-2: Process the image sequence with the marker recognition method to obtain multi-marker extraction result images, so as to separate the marker foreground from the background. For each circular target, compute a spatial feature based on its minimum bounding rectangle and a temporal feature based on the time index of the image in which it appears: the circular marker of the monitoring point identified at the current moment, together with its minimum bounding rectangle, serves as the spatial feature operator of the target, and the current moment is recorded as the temporal description operator. Finally, the spatial feature operator and the temporal description operator are combined to construct the target feature vector A:

A = {O_cen, M_matrix, T_fps}  (21)

In the formula, A is structured as a cell array, O_cen is the number of the center coordinate of the circular target pixel, M_matrix is the minimum bounding rectangle of the circular marker, which provides the decision information needed for the subsequent target association, and T_fps is the time index of the binarized circular marker at the current moment.

A time-series data group B is built, which summarizes, moment by moment, the state changes of all marker targets during the whole tracking process and consists of n feature vectors A:

B = {A_1, A_2, ..., A_n}  (22)

A_n is the feature vector of the circular target with number n in the current image, and n is the number of foreground targets.

步骤4-3:通过追踪图像中圆形目标的位置来关联连续图像帧中的同一目标,利用目标特征向量,结合目标外接矩形重叠度OLD,实现圆形目标在不同时刻的匹配关联;其中OLD的计算模型如下所示:Step 4-3: By tracking the position of the circular target in the image, the same target in consecutive image frames is associated. The target feature vector is used in combination with the target circumscribed rectangle overlap OLD to achieve matching association of the circular target at different times. The calculation model of OLD is as follows:

OLD=K/(PT+K+QT+1) (23)OLD=K/( PT +K+QT +1 ) (23)

式中,OLD用于评估两个目标区域的相似性,其值越高,两个目标为同一对象的可能性越大;K代表两个目标区域的相似面积,分别指代连续时刻中两个待检测目标的区域面积。判断两个目标是否成功关联的匹配条件如下:In the formula, OLD is used to evaluate the similarity of two target regions. The higher its value, the greater the possibility that the two targets are the same object; K represents the similar area of the two target regions, which refers to the area of the two targets to be detected in consecutive moments. The matching conditions for determining whether two targets are successfully associated are as follows:

OLD ≥ α    (24)

where α is the discrimination threshold used to verify the reliability of target tracking. When the targets to be matched satisfy the above similarity condition, the two targets are regarded as the same marker; otherwise they are regarded as different targets. If two targets at different moments are judged to be the same target, the target feature information in the latest image frame replaces the target feature associated with the previous moment in the time-series cell group B;
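
A minimal sketch of the overlap test of formulas (23) and (24) on axis-aligned enclosing rectangles is given below. The threshold value α = 0.2 is an illustrative assumption, since the text does not fix it; note that formula (23) as written stays below roughly 1/3 even for identical rectangles, so α must be chosen accordingly.

```python
def rect_overlap_area(r1, r2):
    """Intersection area of two axis-aligned rectangles given as (x, y, w, h)."""
    x1, y1 = max(r1[0], r2[0]), max(r1[1], r2[1])
    x2 = min(r1[0] + r1[2], r2[0] + r2[2])
    y2 = min(r1[1] + r1[3], r2[1] + r2[3])
    return max(0.0, x2 - x1) * max(0.0, y2 - y1)

def old_score(rect_prev, rect_curr):
    """Overlap degree OLD of formula (23): K / (P_T + K + Q_T + 1)."""
    k = rect_overlap_area(rect_prev, rect_curr)
    p_t = rect_prev[2] * rect_prev[3]
    q_t = rect_curr[2] * rect_curr[3]
    return k / (p_t + k + q_t + 1)

def associate(prev_rects, curr_rects, alpha=0.2):
    """Propagate marker numbers to the current frame when OLD >= alpha (formula 24).

    prev_rects : dict mapping marker number -> rectangle at the previous moment
    curr_rects : list of candidate rectangles detected at the current moment
    Returns a dict mapping marker number -> matched current rectangle, whose
    features would then replace the previous entries in the time-series group B.
    """
    matched = {}
    for num, r_prev in prev_rects.items():
        best = max(curr_rects, key=lambda r: old_score(r_prev, r), default=None)
        if best is not None and old_score(r_prev, best) >= alpha:
            matched[num] = best
    return matched
```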

Step 4-4: Track the initially numbered circular targets over consecutive moments to obtain the circular targets with corresponding numbers in the left- and right-camera images of the same moment; stereo matching is achieved through the corresponding numbers, and finally the spatial coordinates are solved using the binocular vision measurement principle;

The binocular camera uses two images captured from different angles to compute a disparity map, from which the three-dimensional information of a pixel is obtained; the binocular ranging principle is shown in Figure 6. Here b is the baseline distance between the two cameras, f is the focal length of the camera lens, L is the imaging width, XL and XR are the pixel distances of the two imaging points PL and PR along the X axis in their respective coordinate systems, the corresponding difference in pixel coordinates is called the disparity d, and Z is the depth distance from the object point in space to the baseline of the two cameras. The disparity d is expressed as:

d = |XL − XR|    (25)

Then the distance between the two imaging points PL and PR is:

b − (XL − XR)    (26)

According to the theory of similar triangles:

(b − (XL − XR))/b = (Z − f)/Z    (27)

Then the distance Z from point P to the plane of the projection centers is obtained as:

Z = f·b/(XL − XR) = f·b/d    (28)

When a point P in three-dimensional space moves, its imaging points PL and PR on the left and right cameras change accordingly, and so does the disparity between the two cameras; the depth information of the point is obtained from the disparity principle, and the imaging of point P in the cameras is shown in Figure 7. As can be seen from Figure 8, according to the principle of similar triangles, the three-dimensional coordinates of the point in space are obtained as:

X = b·XL/d, Y = b·YL/d, Z = b·f/d    (29)

where YL is the pixel coordinate of the imaging point PL along the Y axis of the left image.
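
Assuming an ideally rectified stereo pair with the principal point at the image origin, the depth and coordinate recovery of step 4-4 can be sketched as follows; the variable names are illustrative and the coordinates are expressed in the left-camera frame, as in the experiments below.

```python
def triangulate(xl, yl, xr, f, b):
    """Recover (X, Y, Z) of a matched marker from rectified stereo (step 4-4 sketch).

    xl, yl : pixel coordinates of the marker center in the left image
    xr     : x pixel coordinate of the same-numbered marker in the right image
    f      : focal length in pixels, b : baseline distance between the cameras
    """
    d = abs(xl - xr)                     # disparity, formula (25)
    if d == 0:
        raise ValueError("zero disparity: point at infinity or mismatched markers")
    z = f * b / d                        # depth from similar triangles
    x = xl * z / f                       # back-project the left-image coordinates
    y = yl * z / f
    return x, y, z
```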

To verify the effectiveness of the binocular vision-based method for spatially locating multiple circular targets proposed in the present invention, a spatial positioning experiment on circular targets on a slope surface was carried out, as shown in Figure 8. To ensure the stability of the initial point coordinates, the initial position of each circular target was measured ten times in this embodiment, and the mean of the ten measurements was taken as the initial coordinate of the marker point. The pixel coordinates and three-dimensional coordinates of the multiple circular targets (with the left camera coordinate system as the world coordinate system) are shown in Table 1.

Table 1 Pixel coordinates and three-dimensional coordinates of the monitoring-point markers

Since the error of the three-dimensional coordinates cannot be verified directly, the present invention adopts an alternative approach: the accuracy of the binocular vision measurement system is evaluated by measuring and analyzing the relative distance between pairs of markers. To obtain accurate distance data, a laser rangefinder with a measurement accuracy of 0.1 mm was used for the distance measurements in this embodiment. Table 2 shows the measured and computed relative distances between the different monitoring-point markers.
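
A minimal sketch of the accuracy check described here: the distance between two triangulated markers is compared with the laser-rangefinder reference; both inputs are assumed to be in millimetres.

```python
import math

def relative_distance_error(p1, p2, laser_mm):
    """Absolute error between the vision-derived marker distance and the laser reference.

    p1, p2   : (X, Y, Z) marker coordinates in the left-camera frame, in millimetres
    laser_mm : distance between the same two markers measured with the rangefinder
    """
    vision_mm = math.dist(p1, p2)        # Euclidean distance between the two markers
    return abs(vision_mm - laser_mm)
```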

Table 2 Relative distance measurements of the monitoring-point markers (unit: mm)

As shown in Table 2, the absolute error of the relative distances measured by the proposed binocular vision-based circular target spatial positioning method ranges from 1.8 mm to 3.4 mm, with a mean absolute error of 2.6 mm. This shows that the spatial positioning accuracy of the proposed method reaches the millimetre level, which is sufficient for the demanding requirements of slope displacement monitoring and provides a reliable basis for high-precision monitoring of slope displacement.

The above description is merely a preferred embodiment of the present disclosure and an explanation of the technical principles employed. Those skilled in the art should understand that the scope of the invention involved in the embodiments of the present disclosure is not limited to technical solutions formed by the specific combination of the above technical features, and should also cover other technical solutions formed by any combination of the above technical features or their equivalents without departing from the above inventive concept, for example, technical solutions formed by substituting the above features with technical features having similar functions disclosed in (but not limited to) the embodiments of the present disclosure.

Claims (4)

1. A binocular vision-based method for spatially locating multiple circular targets, characterized by comprising the following steps:

Step 1: capture calibration-board images at different angles with the left and right cameras of a binocular camera, and complete the stereo calibration and rectification of the binocular camera according to Zhang Zhengyou's calibration method using the Matlab calibration toolbox;

Step 2: start the calibrated binocular camera to capture circular target images, filter and denoise the captured circular target images, perform binary segmentation on the denoised images to obtain segmented target images, apply morphological filtering, and finally identify the circular target regions using a circularity criterion;

Step 3: use the Canny edge detection algorithm to coarsely locate the edges of the identified circular target regions, perform sub-pixel edge detection on the coarse edges using Zernike moments while introducing an edge threshold criterion based on Otsu's method to determine the sub-pixel edges, refine the sub-pixel edges with a method based on grayscale-gradient optimization, and finally locate the center of the sub-pixel edges by the least-squares ellipse fitting method to obtain the central pixel coordinates of the circular targets;

Step 4: assign initial numbers to the multiple circular targets captured by the binocular camera, continuously identify the numbered circular targets with a target tracking method based on spatio-temporal features and locate the central pixel coordinates of the circular targets, then stereo-match the two circular target images captured by the binocular camera at the same moment according to the pixel coordinates with the same number, and perform spatial positioning according to the binocular vision measurement principle.

2. The binocular vision-based method for spatially locating multiple circular targets according to claim 1, characterized in that step 2 specifically comprises the following steps:

Step 2-1: filter and denoise the captured circular target images; guided filtering is used to filter and denoise the original target image, and the target image is guided by a guidance signal to adjust the weights in the filtering process;

Step 2-2: binary segmentation; the HSI color space is used to identify the markers, and the denoised target image is converted from the RGB space to the HSI space; the red markers are extracted according to the range thresholds of the HSI components, as shown in formula (1), where H is the hue, which defines the color, S is the saturation, which describes the depth of the color, and I is the intensity; when the HSI components of an image pixel simultaneously satisfy the red threshold conditions of formula (1), the pixel at the corresponding position is set to the foreground value 1, otherwise it is set to the black background value 0; a morphological filtering method combining opening and closing operations is then used for further noise removal;

Step 2-3: identify the circular target regions using a circularity criterion, specifically by calculating the circularity of each target region; the circularity describes how similar the target is to an ideal circle and is determined from the perimeter and area of the target region; the ratio of the perimeter and area of the target region is calculated as shown in formula (2), where C is the circularity of the target region, Sc is the area of the target region, Lc is the contour perimeter of the target region, and β is a proportional coefficient, with

Sc = πr², Lc = 2πr    (3)

where r is the radius of the target region; from the circularity formula, the connected region is regarded as a circle when C = 1, and the circularity C of a target region must satisfy

Cmin ≤ C ≤ Cmax    (4)

where Cmin and Cmax are the circularities of the marker when the angle between the line connecting the circular target to the binocular camera and the optical axis of the binocular camera takes its maximum and minimum values, respectively; their values are expressed by formula (5), in which θ is the deflection angle of the binocular camera.

3. The binocular vision-based method for spatially locating multiple circular targets according to claim 1, characterized in that step 3 specifically comprises the following steps:

Step 3-1: use the Canny edge detection algorithm to coarsely locate the edges of the identified circular target regions; first, the binarized circular target image is smoothed with a Gaussian filter; next, a gradient operator is applied to the smoothed image to compute the gradient strength and direction of every pixel, giving a gradient map; non-maximum suppression is applied to the gradient map by comparing the gradient strength of each pixel with that of its two neighbours along the gradient direction and keeping only pixels with a locally maximal gradient value, thereby suppressing responses in non-edge regions; the gradient map is then thresholded with a high threshold and a low threshold, and the pixels are classified by gradient strength into strong edges, weak edges and non-edges, where strong edges are true edges, weak edges are parts of true edges, and non-edges are the remaining regions; weak edges are linked to the surrounding strong-edge pixels to form continuous coarse edges;

Step 3-2: perform sub-pixel edge detection on the coarse edges using Zernike moments; let Anm be the Zernike moment of the image f(x, y); through the conversion formula, Anm is expressed by formula (6), where (n+1)/π is the normalization factor, the associated kernel is the Zernike moment polynomial, ρ is the vector distance from the origin to the pixel (x, y), and θ is the angle between ρ and the x axis; if the image is rotated by the angle θ, the Zernike moment A′nm of the rotated image f(x, y) satisfies formula (7); according to these formulas, the Zernike moments A00, A11 and A20 of the coarse-edge target image are computed, and the Zernike moments A′00, A′11 and A′20 of the rotated image satisfy the relations of formula (8); the four parameters h, k, l and φ of the Zernike-moment sub-pixel edge are calculated by formula (9), where h is the background gray value, k is the gray-level step value, l is the perpendicular distance from the template center to the edge, and φ is the angle between the edge and the x axis; using an N×N Zernike moment template and the parameters calculated above, the i-th sub-pixel edge point Pi(x′, y′) is obtained by formula (10);

Step 3-3: introduce an edge threshold criterion based on Otsu's method to determine the sub-pixel edges; the threshold condition for a sub-pixel edge point of the Zernike-moment algorithm is

l ≤ lt ∩ k ≥ kt    (11)

where kt is the gray-step threshold and lt is the distance threshold; the Otsu criterion is introduced, which is based on the between-class variance of the edge region and the background region and selects the optimal threshold by maximizing the between-class variance; specifically, Otsu's method traverses all candidate thresholds, computes the between-class variance for each, and selects the threshold that maximizes it; in a marker image containing n pixels, let ni be the number of pixels with gradient magnitude i, so that the probability of gradient magnitude i is pi = ni/n; with a threshold t, the image is divided into two classes, the foreground edge region and the background, with corresponding class probabilities ω0 and ω1 and class means μ0 and μ1; with the average gradient magnitude μt, the between-class variance of the two classes is

σ² = ω0(μ0 − μt)² + ω1(μ1 − μt)²    (15)

when the variance σ² is maximal, the difference between the foreground edge and the background is largest and the segmentation accuracy is highest, and the gray-step threshold kt at this point is the optimal value; using the optimal gray-step threshold kt and the distance threshold lt in the Zernike-moment sub-pixel edge computation, the sub-pixel edge points p′i(x′, y′) are finally obtained and stored in the edge point set {Pi};

Step 3-4: refine the sub-pixel edges with a method based on grayscale-gradient optimization, in which the center value of the distribution is the position where the gradient direction changes most, i.e. the position of the edge; first, the sub-pixel edge information is extracted by the improved sub-pixel Zernike-moment algorithm of step 3-3, and the least-squares ellipse fitting method is applied to all sub-pixel edge points to obtain the coarse center of the circular marker; based on the coarse center and the corresponding coarse edge, the radial straight-line equation for each edge pixel is

y = kx + b    (16)

with the slope and intercept given by formula (17), where (x0, y0) is the coarse center coordinate and (xi, yi) is the coordinate of each edge point; for a given edge point, the first derivative of the gray level along the gradient direction is computed along its radial direction, and the central gray value is obtained with a Gaussian function whose expression is formula (18), in which a is the Gaussian scale coefficient, b is the center value of the Gaussian distribution of the first derivative of the gradient gray level, c² is the variance of that Gaussian distribution, and p′i(xb, yb) is the position of the refined edge point; all edge points are refined in turn;

Step 3-5: apply the least-squares ellipse fitting method to locate the center of the sub-pixel edges and obtain the central pixel coordinates of the circular target; the sub-pixel edge point set {Pi} is processed with the least-squares ellipse fitting method to locate the center of the circular target; the general equation of an ellipse is

F(x) = ax² + bxy + cy² + dx + ey + f = 0    (19)

the edge points are fitted by the least-squares ellipse fitting method to obtain the parameters A, B, C, D, E and F, the coordinates of the ellipse center are then computed by formula (20), and the center coordinates of the circular target are finally obtained as (xr, yr).

4. The binocular vision-based method for spatially locating multiple circular targets according to claim 1, characterized in that step 4 specifically comprises the following steps:

Step 4-1: perform initial localization of all circular targets and obtain their center pixel coordinates; sort the center coordinates in ascending order of the pixel x value and store them in a set C; divide C into m subsets P1, P2, ..., Pk, ..., Pm, each containing n elements; further, sort the elements of each subset Pk (k = 1, 2, ..., m) in ascending order of the y value to obtain Qki (k = 1, 2, ..., m; i = 1, 2, ..., n), where Qki is the pixel coordinate of the circular target in the k-th column and i-th row; the elements of set C represent the target pixels ordered from top to bottom and from left to right in the image, and the index of each center pixel is the number of the corresponding marker;

Step 4-2: process the image sequence with the marker recognition method to obtain multi-marker extraction result images, so that the marker foreground is separated from the background; for each circular target, compute a spatial feature based on its minimum enclosing rectangle and a temporal feature based on the time index of the image in which the target appears; the circular marker of the monitoring point identified at the current moment, together with its minimum enclosing rectangle, is taken as the spatial feature operator of that target, and the current moment is recorded as the time description operator; finally, the spatial feature operator and the time description operator are combined to construct the target feature vector A:

A = {Ocen, Mmatrix, Tfps}    (21)

where A is structured as a cell array, Ocen is the numbered center pixel coordinate of the circular target, Mmatrix is the minimum enclosing rectangle of the circular marker, and Tfps is the time index of the binarized circular marker at the current moment;

a time-series data group B consisting of n feature vectors A is established:

B = {A1, A2, …, An}    (22)

where An is the feature vector of the circular target numbered n in the current image and n is the number of foreground targets;

Step 4-3: associate the same target across consecutive image frames by tracking the position of the circular target in the image; the target feature vector is used together with the overlap degree OLD of the targets' enclosing rectangles to match and associate circular targets at different moments, where the calculation model of OLD is

OLD = K/(PT + K + QT + 1)    (23)

in which OLD evaluates the similarity of two target regions, and the higher its value, the more likely the two targets are the same object; K is the overlapping (similar) area of the two target regions, and PT and QT denote the areas of the two targets to be matched at consecutive moments; the matching condition for deciding whether two targets are successfully associated is

OLD ≥ α    (24)

where α is the discrimination threshold; when the targets to be matched satisfy the above similarity condition, the two targets are regarded as the same marker, otherwise they are regarded as different targets; if two targets at different moments are judged to be the same target, the target feature information in the latest image frame replaces the target feature associated with the previous moment in the time-series cell group B;

Step 4-4: track the initially numbered circular targets over consecutive moments to obtain the circular targets with corresponding numbers in the left- and right-camera images of the same moment, achieve stereo matching through the corresponding numbers, and finally solve the spatial coordinates using the binocular vision measurement principle; the binocular camera uses two images captured from different angles to compute a disparity map, from which the three-dimensional information of a pixel is obtained; b is the baseline distance between the two cameras, f is the focal length of the camera lens, L is the imaging width, XL and XR are the pixel distances of the two imaging points PL and PR along the X axis in their respective coordinate systems, the corresponding difference in pixel coordinates is the disparity d, and Z is the depth distance from the object point in space to the baseline of the two cameras; the disparity d is expressed as

d = |XL − XR|    (25)

the distance between the two imaging points PL and PR then follows, and according to the theory of similar triangles the distance Z from point P to the plane of the projection centers is obtained; when a point P in three-dimensional space moves, its imaging points PL and PR on the left and right cameras change accordingly and so does the disparity between the two cameras; the depth information of the point is obtained from the disparity principle, and the three-dimensional coordinates of the point in space are then obtained according to the principle of similar triangles.
CN202410082494.6A 2024-01-19 2024-01-19 Binocular vision-based multiple circular target space positioning method Pending CN118052883A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410082494.6A CN118052883A (en) 2024-01-19 2024-01-19 Binocular vision-based multiple circular target space positioning method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202410082494.6A CN118052883A (en) 2024-01-19 2024-01-19 Binocular vision-based multiple circular target space positioning method

Publications (1)

Publication Number Publication Date
CN118052883A true CN118052883A (en) 2024-05-17

Family

ID=91046077

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410082494.6A Pending CN118052883A (en) 2024-01-19 2024-01-19 Binocular vision-based multiple circular target space positioning method

Country Status (1)

Country Link
CN (1) CN118052883A (en)


Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN118918176A (en) * 2024-07-05 2024-11-08 北京理工大学 Space ring detection and pose estimation method based on monocular vision
CN119090906A (en) * 2024-08-27 2024-12-06 南京航空航天大学 A high-precision positioning method for LDI targets based on deep learning
CN119090906B (en) * 2024-08-27 2025-04-15 南京航空航天大学 A high-precision positioning method for LDI targets based on deep learning
CN119494879A (en) * 2024-11-04 2025-02-21 长沙丰达智能科技有限公司 Visual positioning method and device for seed slots of degradable pallets used in automated production lines
CN119494879B (en) * 2024-11-04 2025-04-18 长沙丰达智能科技有限公司 Degradable tray seed groove visual positioning method and device for automatic production line
CN119380042A (en) * 2024-12-31 2025-01-28 天目山实验室 An event feature extraction method based on spatial gradient variance maximization
CN120031983A (en) * 2025-01-23 2025-05-23 南京木木西里科技有限公司 A method, device, storage medium and program product for array positioning based on minimum error

Similar Documents

Publication Publication Date Title
CN118052883A (en) Binocular vision-based multiple circular target space positioning method
CN106709950B (en) Binocular vision-based inspection robot obstacle crossing wire positioning method
CN115273062B (en) 3D target detection method integrating three-dimensional laser radar and monocular camera
CN103411553B (en) The quick calibrating method of multi-linear structured light vision sensors
CN109211207B (en) Screw identification and positioning device based on machine vision
CN103868460B (en) Binocular stereo vision method for automatic measurement based on parallax optimized algorithm
CN119648679A (en) Circuit board welding fault identification method and system based on machine vision
CN108416791A (en) A Binocular Vision-Based Pose Monitoring and Tracking Method for Parallel Mechanism Maneuvering Platform
CN107392929B (en) An intelligent target detection and size measurement method based on human visual model
CN105913013A (en) Binocular vision face recognition algorithm
CN114331879A (en) Visible light and infrared image registration method for equalized second-order gradient histogram descriptor
Farag A comprehensive real-time road-lanes tracking technique for autonomous driving
CN113313116A (en) Vision-based accurate detection and positioning method for underwater artificial target
CN106218409A (en) A kind of can the bore hole 3D automobile instrument display packing of tracing of human eye and device
CN116596987B (en) Workpiece three-dimensional size high-precision measurement method based on binocular vision
Li et al. Road markings extraction based on threshold segmentation
CN108171753A (en) Stereoscopic vision localization method based on centroid feature point Yu neighborhood gray scale cross correlation
CN113409334A (en) Centroid-based structured light angle point detection method
CN115880371A (en) A center positioning method of reflective target under infrared viewing angle
CN108388854A (en) A kind of localization method based on improvement FAST-SURF algorithms
CN110415363A (en) A kind of object recognition positioning method at random based on trinocular vision
CN113723432B (en) Intelligent identification and positioning tracking method and system based on deep learning
CN117372498A (en) Multi-pose bolt size measurement method based on three-dimensional point cloud
JPH05215547A (en) Method for determining corresponding points between stereo images
CN114943775A (en) Image global automatic calibration method, device and related equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination