CN114964217B - A method and device for estimating state information - Google Patents

A method and device for estimating state information

Info

Publication number
CN114964217B
CN114964217B (application CN202110217283.5A)
Authority
CN
China
Prior art keywords
current
information
image
previous
target object
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110217283.5A
Other languages
Chinese (zh)
Other versions
CN114964217A (en)
Inventor
蔡之奡
李天威
王笑非
穆北鹏
刘一龙
童哲航
王宇桐
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Momenta Suzhou Technology Co Ltd
Original Assignee
Momenta Suzhou Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Momenta Suzhou Technology Co Ltd
Priority to CN202110217283.5A
Priority to PCT/CN2021/109535 (WO2022179047A1)
Publication of CN114964217A
Application granted
Publication of CN114964217B
Legal status: Active

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/10 Navigation by using measurements of speed or acceleration
    • G01C21/12 Navigation by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning
    • G01C21/16 Dead reckoning by integrating acceleration or speed, i.e. inertial navigation
    • G01C21/165 Inertial navigation combined with non-inertial navigation instruments
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C11/00 Photogrammetry or videogrammetry, e.g. stereogrammetry; Photographic surveying
    • G01C11/04 Interpretation of pictures

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract


An embodiment of the present invention discloses a state information estimation method and device, which includes: obtaining a current image and current sensor data of a target object at a current moment; determining the initial state information of the target object using IMU data and previous state information corresponding to a previous moment; determining a first matching point pair of each current image using feature points of each current image; determining a second matching point pair of each current image and the previous image using feature points in each current image and feature points in the previous image; determining three-dimensional position information corresponding to each feature point to be used based on the first matching point pair and the second matching point pair of the current image and the first matching point pair and the second matching point pair of images at previous N moments; determining the current state information of the target object based on the three-dimensional position information, the image position information of each feature point to be used, the initial state information and the current sensor data, so as to realize the estimation of the state information of an object including any number of image acquisition devices.

Description

State information estimation method and device
Technical Field
The invention relates to the technical field of automation, in particular to a state information estimation method and device.
Background
In mapping and positioning schemes for autonomous vehicles and mobile robots, estimating accurate and reliable motion state information of the vehicle or robot from the various sensors mounted on it is a very important problem. A system that estimates the motion state information of such a target, i.e., a vehicle or a robot, may be referred to as a state estimator.
In the related art, methods for estimating the motion state information of a target generally fall into two categories: filtering and optimization. The filtering scheme offers good real-time performance and high precision, and is easier to deploy in a lightweight manner in autonomous driving and mobile robot solutions.
At present, to ensure safe driving, an autonomous vehicle is often equipped with multiple cameras facing different directions to capture the surrounding environment and acquire environment information, which is then combined with the data collected by other sensors for positioning, so as to ensure accurate localization and hence safe driving of the vehicle.
Current filtering schemes, however, cannot estimate vehicle state information for a multi-sensor system containing an arbitrary number of cameras. How to provide a method of estimating vehicle state information for such a multi-sensor system is a problem to be solved.
Disclosure of Invention
The invention provides a state information estimation method and a state information estimation device, which are used for estimating state information of an object of a multi-sensor system comprising any number of image acquisition equipment. The specific technical scheme is as follows:
in a first aspect, an embodiment of the present invention provides a method for estimating status information, where the method includes:
Acquiring a current image acquired by a multi-image acquisition device arranged on a target object at a current moment and current sensor data acquired by other sensors, wherein the other sensors comprise an IMU;
Determining initial state information of the target object at the current moment by using IMU data corresponding to the moment before the current moment and the previous state information of the target object at the moment before the current moment;
Determining a matching point pair between each current image by utilizing the characteristic points detected by each current image and the relative pose relation between the image acquisition devices corresponding to each current image, and taking the matching point pair as a first matching point pair corresponding to the current image;
Determining a matching point pair between each current image and a previous image by using the feature points detected by each current image and the feature points detected by the previous image, and taking the matching point pair as a second matching point pair corresponding to the current image;
Determining three-dimensional position information corresponding to each feature point to be utilized based on a first matching point pair and a second matching point pair corresponding to a current image and a first matching point pair and a second matching point pair corresponding to an image at the previous N time of the current time;
and determining the current state information of the target object at the current moment based on the three-dimensional position information, the image position information of each feature point to be utilized, the initial state information and the current sensor data.
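The two matching steps above both reduce to feature-point matching between images. The following is a minimal illustrative sketch, not the patent's implementation: the brute-force Hamming matcher and the binary descriptor format (e.g. ORB-style 256-bit descriptors) are assumptions, used only to show how matching point pairs could be formed.

```python
import numpy as np

def match_descriptors(desc_a, desc_b, max_dist=64):
    """Brute-force nearest-neighbour matching of binary descriptors.

    desc_a, desc_b : (N, 32) uint8 arrays (256-bit binary descriptors)
    Returns a list of index pairs (i, j), i.e. the 'matching point pairs'.
    """
    # Hamming distance between every pair of descriptors
    a = np.unpackbits(desc_a, axis=1).astype(np.int32)
    b = np.unpackbits(desc_b, axis=1).astype(np.int32)
    dist = (a[:, None, :] != b[None, :, :]).sum(axis=2)
    pairs = []
    for i in range(len(desc_a)):
        j = int(np.argmin(dist[i]))
        # Cross-check: i must also be the best match for j
        if int(np.argmin(dist[:, j])) == i and dist[i, j] <= max_dist:
            pairs.append((i, j))
    return pairs
```

In practice the first matching point pairs would additionally be filtered using the known relative pose between the image acquisition devices (e.g. an epipolar consistency check), and the second matching point pairs would come from temporal tracking between consecutive frames.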
Optionally, the initial state information comprises initial speed information and initial pose information, wherein the initial pose information comprises initial attitude information and initial position information;
the step of determining initial state information of the target object at the current time by using IMU data corresponding to the previous time of the current time and previous state information of the target object at the previous time includes:
determining angular velocity information and acceleration information of the target object at a previous time by using IMU data corresponding to the previous time at the current time;
constructing a first state transition equation by utilizing the angular velocity information of the previous moment and the previous posture information in the previous state information;
Constructing a second state transition equation by utilizing the previous posture information, the acceleration information at the previous moment and the previous speed information in the previous state information;
constructing a third state transition equation by using the initial speed information, the previous speed information and previous position information in the previous state information; and determining initial position information of the target object at the current moment by using the third state transition equation.
Optionally, the step of determining the three-dimensional position information corresponding to each feature point to be utilized based on the first matching point pair and the second matching point pair corresponding to the current image and the first matching point pair and the second matching point pair corresponding to the previous N-time image at the current time includes:
And determining three-dimensional position information corresponding to each feature point to be utilized based on the first matching point pair and the second matching point pair corresponding to the current image, the first matching point pair and the second matching point pair corresponding to the previous N-time image at the current time, the equipment pose information of the image acquisition equipment corresponding to each current image, the equipment pose information of the image acquisition equipment corresponding to each previous N-time image and the pose information of the target object corresponding to the current image and each previous N-time image according to a triangulation algorithm.
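A minimal sketch of the triangulation step is given below. It uses linear (DLT) triangulation, one common triangulation algorithm; the function is illustrative and assumes each camera's 3x4 projection matrix has already been composed from the target object's pose and the device pose of the corresponding image acquisition device.

```python
import numpy as np

def triangulate(projections, points_2d):
    """Linear (DLT) triangulation of one feature from >= 2 observations.

    projections : list of 3x4 camera projection matrices K [R | t]
    points_2d   : list of matching pixel coordinates (u, v)
    Returns the 3-D point in the common (world) frame.
    """
    rows = []
    for P, (u, v) in zip(projections, points_2d):
        # Each observation contributes two linear constraints on X
        rows.append(u * P[2] - P[0])
        rows.append(v * P[2] - P[1])
    A = np.asarray(rows)
    # Homogeneous least squares: the right singular vector for the
    # smallest singular value minimizes |A X| subject to |X| = 1
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]
```

Each first or second matching point pair contributes one observation; stacking observations of the same feature across cameras and across the previous N moments over-determines the 3-D point, which is the multi-state constraint the method exploits.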
Optionally, the step of determining the current state information of the target object at the current moment based on the three-dimensional position information, the image position information of each feature point to be utilized, the initial state information and the current sensor data includes:
Based on the current sensor data and the initial state information, determining intermediate state information of the target object at the current moment;
determining projection position information of the projection points of the spatial points corresponding to the feature points to be utilized in the images where they are located, by using the three-dimensional position information, the intermediate pose information in the intermediate state information, the object pose information in the state information of the target object corresponding to each image of the previous N moments, the device pose information of each image acquisition device and the intrinsic parameter matrix;
Constructing a re-projection error equation based on projection position information corresponding to each feature point to be utilized and image position information of each feature point to be utilized in the current image;
And determining the current state information of the target object at the current moment based on the reprojection error equation.
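The projection and reprojection error construction above can be sketched as follows. This is illustrative only; the pose-composition convention and the matrix names are assumptions rather than the patent's notation.

```python
import numpy as np

def reprojection_error(X_w, T_world_body, T_body_cam, K, uv_observed):
    """Reprojection error of one world point in one camera.

    X_w          : triangulated 3-D point in the world frame
    T_world_body : 4x4 object pose (body -> world), from the state
    T_body_cam   : 4x4 device pose of the camera on the rigid body
    K            : 3x3 intrinsic parameter matrix
    uv_observed  : measured feature position in the image
    """
    T_world_cam = T_world_body @ T_body_cam
    # Transform the point into the camera frame
    X_c = np.linalg.inv(T_world_cam) @ np.append(X_w, 1.0)
    # Perspective projection with the intrinsic matrix
    uvw = K @ X_c[:3]
    uv_proj = uvw[:2] / uvw[2]
    # Error between projected and observed feature positions
    return uv_proj - np.asarray(uv_observed)
```

Stacking this error over all feature points to be utilized, in all images, yields the reprojection error equation used in the next step.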
Optionally, the step of determining the current state information of the target object at the current moment based on the reprojection error equation includes:
Constructing a target measurement equation based on the re-projection error equation;
and determining the current state information of the target object at the current moment by using the target measurement equation and the filtering update equation.
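The final correction step, combining the target measurement equation with a filtering update equation, can be sketched as a standard Kalman update. This is illustrative: in the patent's error-state formulation the correction would be applied to an error state rather than directly to the state vector, and the Jacobian H would come from linearizing the reprojection error equation.

```python
import numpy as np

def ekf_update(x, P, r, H, R):
    """One filtering update: correct the intermediate state with the
    stacked reprojection residual r (measurement model r ~ H dx + noise).

    x : intermediate state vector       P : its covariance
    r : stacked reprojection errors     H : measurement Jacobian
    R : measurement noise covariance
    """
    S = H @ P @ H.T + R             # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)  # Kalman gain
    x_new = x + K @ r               # corrected (current) state
    P_new = (np.eye(len(x)) - K @ H) @ P
    return x_new, P_new
```

The corrected state `x_new` corresponds to the current state information of the target object at the current moment.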
In a second aspect, an embodiment of the present invention provides a state information estimation apparatus, including:
the first acquisition module is configured to acquire a current image acquired by the multi-image acquisition device arranged on the target object at the current moment and current sensor data acquired by other sensors, wherein the other sensors comprise an IMU;
The first determining module is configured to determine initial state information of the target object at the current moment by using IMU data corresponding to the moment before the current moment and the previous state information of the target object at the moment before the current moment;
The second determining module is configured to determine a matching point pair between the current images by using the characteristic points detected by the current images and the relative pose relationship between the image acquisition devices corresponding to the current images, and the matching point pair is used as a first matching point pair corresponding to the current images;
A third determining module configured to determine, as a second matching point pair corresponding to the current image, a matching point pair between each current image and a previous image thereof by using the feature points detected by each current image and the feature points detected by the previous image thereof;
The fourth determining module is configured to determine three-dimensional position information corresponding to each feature point to be utilized based on a first matching point pair and a second matching point pair corresponding to the current image and a first matching point pair and a second matching point pair corresponding to the previous N-time image at the current time;
And a fifth determining module configured to determine current state information of the target object at a current time based on the three-dimensional position information, the image position information of each feature point to be utilized, the initial state information, and the current sensor data.
Optionally, the initial state information comprises initial speed information and initial pose information, wherein the initial pose information comprises initial attitude information and initial position information;
The first determining module is specifically configured to determine angular velocity information and acceleration information of the target object at a previous time by using IMU data corresponding to the previous time at the current time;
constructing a first state transition equation by utilizing the angular velocity information of the previous moment and the previous posture information in the previous state information;
Constructing a second state transition equation by utilizing the previous posture information, the acceleration information at the previous moment and the previous speed information in the previous state information;
constructing a third state transition equation by using the initial speed information, the previous speed information and previous position information in the previous state information; and determining initial position information of the target object at the current moment by using the third state transition equation.
Optionally, the fourth determining module is specifically configured to determine, according to a triangulation algorithm, three-dimensional position information corresponding to each feature point to be utilized based on the first matching point pair and the second matching point pair corresponding to the current image, the first matching point pair and the second matching point pair corresponding to the previous N time images at the current time, device pose information of the image acquisition device corresponding to each current image, device pose information of the image acquisition device corresponding to each previous N time image, and pose information of the target object corresponding to the current image and each previous N time image.
Optionally, the fifth determining module includes:
A first determining unit configured to determine intermediate state information of the target object at a current time based on the current sensor data and the initial state information;
The second determining unit is configured to determine projection position information of the projection points of the spatial points corresponding to the feature points to be utilized in the images where they are located, by using the three-dimensional position information, the intermediate pose information in the intermediate state information, the object pose information in the state information of the target object corresponding to each image of the previous N moments, the device pose information of each image acquisition device and the intrinsic parameter matrix;
the construction unit is configured to construct a re-projection error equation based on projection position information corresponding to each feature point to be utilized and image position information of each feature point to be utilized in the current image;
And a third determining unit configured to determine current state information of the target object at a current time based on the reprojection error equation.
Optionally, the third determining unit is specifically configured to construct a target measurement equation based on the re-projection error equation;
and determining the current state information of the target object at the current moment by using the target measurement equation and the filtering update equation.
From the foregoing, it can be seen that the method and apparatus for estimating state information provided by the embodiments of the present invention: obtain the current images acquired at the current moment by the multi-image acquisition devices arranged on the target object and the current sensor data acquired by other sensors, where the other sensors include an IMU; determine the initial state information of the target object at the current moment by using the IMU data corresponding to the previous moment and the previous state information of the target object at the previous moment; determine the matching point pairs between the current images, as first matching point pairs, by using the feature points detected in each current image and the relative pose relationship between the corresponding image acquisition devices; determine the matching point pairs between each current image and its previous image, as second matching point pairs, by using the feature points detected in each current image and in its previous image; determine the three-dimensional position information corresponding to each feature point to be utilized based on the first and second matching point pairs corresponding to the current image and to the images at the previous N moments; and determine the current state information of the target object at the current moment based on the three-dimensional position information, the image position information of each feature point to be utilized, the initial state information and the current sensor data.
By applying the embodiment of the invention, multi-state constraints can be constructed through state expansion: the tracking result of each feature point across the multi-image acquisition devices (the second matching point pairs corresponding to the current image and the images at the previous N moments) and the feature point associations between the devices (the first matching point pairs corresponding to the current image and the images at the previous N moments) are used to determine the three-dimensional position information corresponding to each feature point to be utilized. The current sensor data from the other sensors are then fused to determine the current state information of the target object at the current moment, yielding a state estimation result with higher precision and higher robustness, and realizing the estimation of the state information of an object carrying a multi-sensor system with any number of image acquisition devices. Of course, it is not necessary for any one product or method of practicing the invention to achieve all of the advantages set forth above at the same time.
The innovation points of the embodiment of the invention include:
1. Multi-state constraints can be constructed through state expansion: the tracking result of each feature point across the multi-image acquisition devices (the second matching point pairs corresponding to the current image and the images at the previous N moments) and the feature point associations between the devices (the first matching point pairs corresponding to the current image and the images at the previous N moments) are used to determine the three-dimensional position information corresponding to each feature point to be utilized; the current sensor data from the other sensors are fused to determine the current state information of the target object at the current moment, yielding a state estimation result with higher precision and higher robustness, and realizing the estimation of the state information of an object of a multi-sensor system comprising any number of image acquisition devices.
2. Based on error-state Kalman filtering, the IMU data of the moment before the current moment and the previous state information of the target object at that moment are used to obtain, by constructing state transition equations, the initial state information of the target object at the current moment, providing a basis for the subsequent determination of current state information with higher precision and higher robustness.
3. Considering that other sensor data arrive earlier than the images, intermediate state information of the target object at the current moment is first determined based on the earlier-arriving current sensor data and the initial state information. Then, with reference to error-state Kalman filtering theory, a reprojection error equation is constructed from the projection position information of the spatial points corresponding to the feature points to be utilized and the image position information of those feature points, and a target measurement equation is further constructed to optimize the intermediate state information and obtain current state information with high precision and robustness.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below. It is apparent that the drawings in the following description are only some embodiments of the invention. Other figures may be derived from these figures without inventive effort for a person of ordinary skill in the art.
Fig. 1 is a schematic flow chart of a state information estimation method according to an embodiment of the present invention;
Fig. 2 is a schematic structural diagram of a state information estimation device according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention. It will be apparent that the described embodiments are only some, but not all, embodiments of the invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without any inventive effort, are intended to be within the scope of the invention.
It should be noted that the terms "comprising" and "having" and any variations thereof in the embodiments of the present invention and the accompanying drawings are intended to cover non-exclusive inclusions. A process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those listed but may alternatively include other steps or elements not listed or inherent to such process, method, article, or apparatus.
The invention provides a state information estimation method and a state information estimation device, which are used for estimating state information of an object of a multi-sensor system comprising any number of image acquisition equipment. The following describes embodiments of the present invention in detail.
Fig. 1 is a schematic flow chart of a state information estimation method according to an embodiment of the present invention. The method may comprise the steps of:
S101, obtaining current images acquired at the current moment by the multi-image acquisition devices arranged on the target object, and current sensor data acquired by other sensors.
Wherein the other sensors include IMUs.
The state information estimation method provided by the embodiment of the invention can be applied to any electronic device with computing capability, and the electronic device can be a terminal or a server. In one implementation, the functional software implementing the method may exist in the form of separate client software, or as a plug-in to currently relevant client software, for example, as a functional module of an autopilot system, as the case may be. The electronic device may be a device provided on the target object, or a device separate from it.
In one case, the electronic device may be a multi-sensor state estimator that takes as input the data of various sensors, such as the multi-image acquisition devices, an IMU (Inertial Measurement Unit), a GNSS (Global Navigation Satellite System) receiver, a wheel speed meter (i.e., a wheel speed sensor), and the like, and outputs the state information of the entire multi-sensor system, that is, of the target object on which the multi-sensor system is mounted, mainly including position information, attitude information, speed information, and the like, on the assumption that all sensors are rigidly mounted (i.e., related by approximately rigid-body transformations).
The target object may be an autonomous vehicle or an intelligent robot. The multiple image acquisition devices of the target object form a multi-camera system mounted on the same rigid body, with any number of devices at any positions and in any orientations.
During the running of the target object, the multi-image acquisition devices and the other sensors acquire data in real time and send them to the electronic device, so that the electronic device can take the images acquired at the current moment by the multi-image acquisition devices arranged on the target object as the current images, and take the sensor data acquired by the other sensors from the previous moment up to the current moment as the current sensor data.
Other sensors may include, but are not limited to, an IMU (Inertial Measurement Unit), wheel speed sensors, and a GNSS (Global Navigation Satellite System) receiver, and may also include GPS (Global Positioning System), radar, etc.
In one case, the relative poses between the multiple image acquisition devices and the target object, and between the other sensors and the target object, may be considered fixed. Correspondingly, once the pose information of any one of the image acquisition devices, the target object, or the other sensors is determined, the pose information of the others is determined as well.
S102, determining initial state information of the target object at the current moment by utilizing IMU data corresponding to the moment before the current moment and the previous state information of the target object at the moment before.
In this step, the electronic device may obtain the state information of the target object at the previous moment as the previous state information, and obtain the IMU data corresponding to the previous moment. Based on error-state Kalman filtering (ESKF), the initial state information of the target object at the current moment is predicted using the IMU data corresponding to the previous moment and the previous state information. The state information may include, but is not limited to, pose information and velocity information of the target object, where the pose information includes position information and attitude information. The IMU data corresponding to the previous moment refers to the data acquired by the IMU provided on the target object from the previous moment up to the current moment.
The prediction of the state information of the target object can be achieved by the following state transition equation, formula (1):

$$\hat{x}_k^- = f\left(\hat{x}_{k-1},\, u_k\right) \tag{1}$$

where $t_{k-1}$ denotes the time immediately before the current time, $t_k$ denotes the current time, $\hat{x}_{k-1}$ represents the previous state information, $u_k$ represents the IMU data corresponding to the time before the current time, and $\hat{x}_k^-$ represents the initial state information at the current time.
In another embodiment of the present invention, the initial state information $\hat{x}_k^-$ includes initial speed information $\hat{v}_k^-$ and initial pose information, wherein the initial pose information comprises initial attitude information $\hat{q}_k^-$ and initial position information $\hat{p}_k^-$.
The step S102 may include the following steps 011-014:
011, determining the angular velocity information and the acceleration information of the target object at the previous time using the IMU data corresponding to the time before the current time.

012, constructing a first state transition equation using the angular velocity information at the previous time and the previous attitude information in the previous state information, and determining the initial attitude information of the target object at the current time using the first state transition equation.

013, constructing a second state transition equation using the previous attitude information, the acceleration information at the previous time, and the previous velocity information in the previous state information, and determining the initial velocity information of the target object at the current time using the second state transition equation.

014, constructing a third state transition equation using the initial velocity information, the previous velocity information, and the previous position information in the previous state information, and determining the initial position information of the target object at the current time using the third state transition equation.
In this implementation, the electronic device removes the bias and the gravitational acceleration from the IMU data corresponding to the time before the current time, obtaining the angular velocity information and the acceleration information of the target object at the previous time, denoted $\omega_{k-1}$ and $a_{k-1}$ respectively.
Further, a first state transition equation is constructed using the angular velocity information at the previous time and the previous attitude information in the previous state information, and the initial attitude information of the target object at the current time is determined from it. Specifically, the first state transition equation may be expressed as formula (2):

$\hat{q}_k = \hat{q}_{k-1} \otimes q\{\omega_{k-1}\,\Delta t\} \quad (2)$

where $\hat{q}_{k-1}$ represents the previous attitude information in the previous state information, $q\{\omega_{k-1}\,\Delta t\}$ is the quaternion corresponding to the rotation $\omega_{k-1}\,\Delta t$ over the sampling interval $\Delta t = t_k - t_{k-1}$, and $\otimes$ represents quaternion multiplication.
The electronic device constructs a second state transition equation using the previous attitude information, the acceleration information at the previous time, and the previous velocity information in the previous state information, and determines the initial velocity information of the target object at the current time from it. Specifically, the second state transition equation can be expressed as formula (3):

$\hat{v}_k = \hat{v}_{k-1} + R(\hat{q}_{k-1})\,a_{k-1}\,\Delta t \quad (3)$

where $\hat{v}_{k-1}$ represents the previous velocity information in the previous state information and $R(\hat{q}_{k-1})$ is the rotation matrix corresponding to $\hat{q}_{k-1}$.
The electronic device further constructs a third state transition equation using the initial velocity information, the previous velocity information, and the previous position information in the previous state information, and determines the initial position information of the target object at the current time from it. The third state transition equation can be expressed as formula (4):

$\hat{p}_k = \hat{p}_{k-1} + \tfrac{1}{2}(\hat{v}_{k-1} + \hat{v}_k)\,\Delta t \quad (4)$

where $\hat{p}_{k-1}$ represents the previous position information in the previous state information.
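The propagation in formulas (2)-(4) can be sketched as follows. This is a minimal illustrative implementation, not the patent's exact discretization: all function names are hypothetical, quaternions use the (w, x, y, z) convention, and velocity/position use simple Euler and trapezoidal integration.

```python
import math

def quat_mult(a, b):
    # Hamilton product of quaternions given as (w, x, y, z) tuples
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (aw*bw - ax*bx - ay*by - az*bz,
            aw*bx + ax*bw + ay*bz - az*by,
            aw*by - ax*bz + ay*bw + az*bx,
            aw*bz + ax*by - ay*bx + az*bw)

def quat_from_omega(omega, dt):
    # Quaternion for the small rotation omega * dt (axis-angle form)
    angle = math.sqrt(sum(w * w for w in omega)) * dt
    if angle < 1e-12:
        return (1.0, 0.0, 0.0, 0.0)
    axis = [w * dt / angle for w in omega]
    s = math.sin(angle / 2.0)
    return (math.cos(angle / 2.0), axis[0]*s, axis[1]*s, axis[2]*s)

def rotate(q, v):
    # Rotate vector v into the world frame: q ⊗ (0, v) ⊗ q*
    qv = quat_mult(quat_mult(q, (0.0,) + tuple(v)),
                   (q[0], -q[1], -q[2], -q[3]))
    return [qv[1], qv[2], qv[3]]

def propagate(q, v, p, omega, acc, dt):
    # (2) attitude: q_k = q_{k-1} ⊗ q{omega_{k-1} dt}
    q_new = quat_mult(q, quat_from_omega(omega, dt))
    # (3) velocity: v_k = v_{k-1} + R(q_{k-1}) a_{k-1} dt
    a_world = rotate(q, acc)
    v_new = [v[i] + a_world[i] * dt for i in range(3)]
    # (4) position: trapezoidal integration of the velocity
    p_new = [p[i] + 0.5 * (v[i] + v_new[i]) * dt for i in range(3)]
    return q_new, v_new, p_new
```

With an identity attitude, zero angular rate, and a 1 m/s² bias-free forward acceleration over one second, the sketch yields a 1 m/s velocity and 0.5 m displacement, as expected from uniform acceleration.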
S103, determining matching point pairs between the current images, using the feature points detected in each current image and the relative pose relationships between the image acquisition devices corresponding to the current images, as first matching point pairs corresponding to the current images.
The relative position relationship between the image acquisition devices arranged on the target object is fixed, and correspondingly, the electronic device can pre-store the relative pose relationship between the image acquisition devices.
After the electronic device obtains each current image, it may first perform feature point detection on each current image using a preset feature point detection algorithm, obtaining the feature points in each current image. The preset feature point detection algorithm may be the FAST feature point detection algorithm or any other algorithm capable of detecting feature points in an image.
The electronic device determines a region of interest in each current image according to the relative pose relationships between the image acquisition devices corresponding to the current images, where the region of interest in a current image is the region that overlaps with other current images. For each current image, a preset feature descriptor extraction algorithm is used to extract feature descriptors for the feature points in its region of interest. The feature points in the regions of interest of the current images are then matched based on the Fast Library for Approximate Nearest Neighbors (FLANN) and the extracted feature descriptors, yielding matched feature point pairs (matching point pairs) between the current images, which serve as the first matching point pairs corresponding to the current images. A first matching point pair corresponding to the current image, i.e., the k-th frame image, may be expressed in terms of the camera indices $c_i$ and $c_j$, which denote the i-th and j-th image acquisition devices respectively; the values of $c_i$ and $c_j$ are integers in $[1, n]$, where n is the number of image acquisition devices mounted on the target vehicle.
The preset feature descriptor extraction algorithm may be the BRIEF feature descriptor extraction algorithm.
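As an illustration of this kind of descriptor matching, the following sketch performs mutual-nearest-neighbour matching on binary (BRIEF-style) descriptors under the Hamming distance. It is a simplified stand-in for the FLANN-based matcher described above; the function names and the `max_dist` threshold are hypothetical.

```python
def hamming(d1, d2):
    # Hamming distance between two binary descriptors stored as integers
    return bin(d1 ^ d2).count("1")

def match_descriptors(desc_a, desc_b, max_dist=40):
    # Mutual nearest-neighbour (cross-checked) matching: (i, j) is kept
    # only if j is the best match for i AND i is the best match for j,
    # and their distance is below max_dist.
    matches = []
    for i, da in enumerate(desc_a):
        j = min(range(len(desc_b)), key=lambda j2: hamming(da, desc_b[j2]))
        i_back = min(range(len(desc_a)),
                     key=lambda i2: hamming(desc_a[i2], desc_b[j]))
        if i_back == i and hamming(da, desc_b[j]) <= max_dist:
            matches.append((i, j))
    return matches
```

The cross-check mirrors the requirement that a first matching point pair associate the same physical feature seen by two different image acquisition devices.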
S104, determining matching point pairs between each current image and its previous image, using the feature points detected in each current image and those detected in its previous image, as second matching point pairs corresponding to the current image.
In this step, for each current image, the electronic device tracks the feature points detected in the current image against those detected in its previous image using the sparse optical flow (KLT) algorithm, obtaining matched feature point pairs between the current image and its previous image as the second matching point pairs corresponding to the current image. A second matching point pair corresponding to the current image may be expressed in terms of the camera index $c_i$, which denotes the i-th image acquisition device; the value of $c_i$ is an integer in $[1, n]$, where n is the number of image acquisition devices mounted on the target vehicle.
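The idea behind KLT tracking can be illustrated with a toy one-dimensional Lucas-Kanade step: estimate the displacement that best aligns a small window of the previous signal with the current one from the spatial gradient and the temporal difference. This is only a conceptual sketch (the real algorithm iterates over 2-D image pyramids), and all names are hypothetical.

```python
def klt_step_1d(prev, cur, x0, win=3):
    # One Lucas-Kanade iteration estimating the 1-D displacement d such
    # that a feature at position x in `prev` appears at x + d in `cur`.
    # Uses cur(x) ≈ prev(x) - d * prev'(x) inside a window around x0.
    num = den = 0.0
    for x in range(x0 - win, x0 + win + 1):
        gx = (prev[x + 1] - prev[x - 1]) / 2.0  # spatial gradient
        gt = cur[x] - prev[x]                   # temporal difference
        num += gx * gt
        den += gx * gx
    return -num / den if den else 0.0
```

On a linear intensity ramp shifted right by one sample, a single step recovers the displacement exactly; real images require iterating and coarse-to-fine refinement.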
The embodiment of the present invention does not limit the execution sequence of S104 and S103, and the electronic device may execute S103 first and then S104, may execute S104 first and then S103, or may execute S103 and S104 in parallel.
S105, determining the three-dimensional position information corresponding to each feature point to be utilized, based on the first and second matching point pairs corresponding to the current image and the first and second matching point pairs corresponding to the previous N-time images of the current time.
The previous N-time images of the current time refer to the images acquired by the multiple image acquisition devices of the target object at each of the N times preceding the current time.
After the electronic device obtains the first and second matching point pairs corresponding to the current image, it refers to Multi-State Constraint Kalman Filtering (MSCKF): in order to construct multi-constraint conditions, state augmentation as in MSCKF is performed while estimating the state information of the target object equipped with multiple image acquisition devices. Specifically, a corresponding sliding window of length N+1 is maintained, containing the pose information in the state information of the target object, as follows:

$x_k = [\pi_k, \pi_{k-1}, \ldots, \pi_e, \ldots, \pi_{k-N}]$, where $e$ is an integer in $[k-N, k]$;

where $x_k$ represents the state information in the sliding window corresponding to the current image, and $\pi_k$ represents the pose information in the state information of the target object corresponding to the current image. In one case, the state information of the target object may be represented directly by the state information of the IMU; accordingly, the sliding window contains the pose information of the IMU mounted on the target object. In another case, the state information of the target object may be determined from the state information of the IMU together with a pre-stored pose conversion relationship between the IMU and the target object; either is possible. $\pi_e$ represents the pose information, in the world coordinate system, of the target object corresponding to the image at the e-th time among the current image and its previous N-time images.
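The sliding-window bookkeeping described above can be sketched as follows, assuming the newest pose $\pi_k$ is stored first and the oldest pose $\pi_{k-N}$ is dropped automatically once the window holds N+1 entries (function names are hypothetical):

```python
from collections import deque

def make_window(N):
    # Sliding window of N+1 poses: [pi_k, pi_{k-1}, ..., pi_{k-N}]
    return deque(maxlen=N + 1)

def augment(window, pose):
    # State augmentation: push the newest pose at the front; when the
    # deque is full, the oldest pose pi_{k-N} is discarded from the back.
    window.appendleft(pose)
    return list(window)
```

In a full MSCKF, dropping the oldest pose corresponds to marginalizing it out of the state and covariance rather than simply discarding it; the deque only illustrates the window's contents.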
Correspondingly, the first and second matching point pairs corresponding to each image in the previous N-time images of the current time are obtained, and the three-dimensional position information corresponding to each feature point to be utilized is determined based on: the first and second matching point pairs corresponding to the current image; the first and second matching point pairs corresponding to the previous N-time images; the device pose information of the image acquisition devices corresponding to the current image; and the device pose information of the image acquisition devices corresponding to each of the previous N-time images. The feature points to be utilized are those feature points, among the feature points detected in each current image and the previous N-time images, for which three-dimensional position information can be determined.
The determination flow of the first matching point pairs for each of the previous N-time images is the same as that of the first matching point pairs for the current image, and the determination flow of the second matching point pairs for each of the previous N-time images is the same as that of the second matching point pairs for the current image; details are not repeated here. N is a positive integer.
The device pose information of the image acquisition device corresponding to the image may refer to device pose information when the image acquisition device acquires the image.
In one implementation of the present invention, the step S105 may include the steps of:
Determining, according to a triangulation algorithm, the three-dimensional position information corresponding to each feature point to be utilized, based on the first and second matching point pairs corresponding to the current image, the first and second matching point pairs corresponding to the previous N-time images of the current time, the device pose information of the image acquisition devices corresponding to each current image, the device pose information of the image acquisition devices corresponding to each previous N-time image, and the pose information of the target object corresponding to the current image and each previous N-time image.
In one implementation, the relative pose relationship between the target object and the image acquisition devices arranged on it is fixed, and the electronic device can pre-store the relative pose relationship between the target object and each image acquisition device. Once the pose information of the target object is determined, the pose information of each image acquisition device is determined as well.
According to the triangulation algorithm, the electronic device uses the image position information of each feature point of the first matching point pairs in its own image and the image position information of each feature point of the second matching point pairs in its own image, for both the current image and the previous N-time images, together with the device pose information of the image acquisition devices corresponding to each current image, the device pose information of the image acquisition devices corresponding to each previous N-time image, and the pose information of the target object corresponding to the current image and each previous N-time image (that is, each pose in the sliding window corresponding to the current image and the matching results of all feature points of the current image and the previous N-time images), and performs triangulation to determine the three-dimensional position information corresponding to each feature point to be utilized. The specific calculation process of the triangulation algorithm may follow that of triangulation algorithms in the related art and is not described here.
In one case, a reprojection error to be minimized can be constructed using the three-dimensional position information corresponding to each feature point to be utilized, the image position information of each feature point to be utilized, and the device pose information and intrinsic matrix of the image acquisition device corresponding to the image in which each feature point lies; iteratively optimizing this error with the Levenberg-Marquardt method yields more accurate three-dimensional position information for each feature point to be utilized. The set of three-dimensional positions corresponding to the feature points to be utilized may be expressed as $\{P_q^w\}$, where $P_q^w$ represents the three-dimensional position, in world coordinates, of the space point corresponding to the q-th feature point to be utilized.
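As a minimal illustration of the triangulation step, the following sketch recovers a space point from two viewing rays by the midpoint method, i.e., the point closest to both rays. It is a simplified stand-in for the multi-view triangulation and Levenberg-Marquardt refinement described above; names are hypothetical and degenerate (parallel) rays are not handled.

```python
def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

def triangulate_midpoint(o1, d1, o2, d2):
    # Midpoint triangulation: find t1, t2 minimising the distance between
    # the rays o1 + t1*d1 and o2 + t2*d2, then return the midpoint of the
    # two closest points (assumes the rays are not parallel).
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    w = [o1[i] - o2[i] for i in range(3)]
    d, e = dot(d1, w), dot(d2, w)
    denom = a * c - b * b
    t1 = (b * e - c * d) / denom
    t2 = (a * e - b * d) / denom
    p1 = [o1[i] + t1 * d1[i] for i in range(3)]
    p2 = [o2[i] + t2 * d2[i] for i in range(3)]
    return [(p1[i] + p2[i]) / 2.0 for i in range(3)]
```

In practice each ray would come from back-projecting a matched feature point through the corresponding camera's intrinsics and pose, and a point seen in more than two views would be refined jointly, e.g. with Levenberg-Marquardt.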
The image position information of each feature point to be utilized is its position in the image in which it lies. The images in which the feature points to be utilized lie include the current image and the previous N-time images of the current time.
S106, determining the current state information of the target object at the current time based on the three-dimensional position information, the image position information of each feature point to be utilized in the current image, the initial state information, and the current sensor data.
In this step, considering that the frequency at which the image acquisition devices acquire images is lower than the frequency at which the other sensors acquire data, the electronic device may first, following the error-state Kalman filtering theory, update the state information of the target object based on the current sensor data obtained first and the initial state information corresponding to the current time, obtaining intermediate state information of the target object. Then, again following the error-state Kalman filtering theory, a corresponding measurement equation is constructed using the three-dimensional position information, the image position information of each feature point to be utilized in the current image, and the intermediate state information, and the current state information of the target object at the current time is determined based on this measurement equation. The current state information may be state information in a world coordinate system.
By applying this embodiment of the invention, multi-state constraints can be constructed through state augmentation: the three-dimensional position information corresponding to each feature point to be utilized is built from the per-device feature tracking results (the second matching point pairs corresponding to the current image and its previous N-time images) and the feature point associations between the image acquisition devices (the first matching point pairs corresponding to the current image and its previous N-time images). Fusing the current sensor data from the other sensors, the current state information of the target object at the current time is determined, yielding a state estimation result with higher accuracy and robustness and enabling estimation of the state information of an object in a multi-sensor system containing any number of image acquisition devices.
In another embodiment of the present invention, the step S106 may include the following steps 021-024:
021, based on the current sensor data and the initial state information, determining the intermediate state information of the target object at the current moment.
022, determining the projection position information of the projection points, in the respective images, of the space points corresponding to the feature points to be utilized, using the three-dimensional position information, the intermediate pose information in the intermediate state information, the object pose information in the state information of the target object corresponding to each of the previous N-time images, and the device pose information and intrinsic matrix of each image acquisition device.

023, constructing a reprojection error equation based on the projection position information corresponding to each feature point to be utilized and the image position information of each feature point to be utilized in the current image.

024, determining the current state information of the target object at the current time based on the reprojection error equation.
In one case, the current sensor data may include, but is not limited to, the current IMU data acquired by the IMU since the time before the current time, the current GNSS data acquired by the GNSS since the time before the current time, and the current wheel speed data acquired by the wheel speed sensor since the time before the current time. The electronic device iteratively updates the initial state information based on the obtained current IMU data, current GNSS data, and current wheel speed data, obtaining the intermediate state information of the target object.
Following the error-state Kalman filtering theory, a measurement equation is constructed for the state update process that applies the current sensor data to the target object; it can be uniformly represented as formula (5):

$z_k = h(x_k) + n_k, \quad n_k \sim \mathcal{N}(0, R_k) \quad (5)$

where $z_k$ represents the measurement, i.e., the current IMU data, current GNSS data, or current wheel speed data, and $R_k$ represents the covariance of the measurement noise $n_k$. $h(\cdot)$ represents the function mapping the system state to the measured value, where the system state is the state information of the target object output by the system; when the state of the target object is updated with current sensor data, the initial state information is the initial value of the system state for the update.
Specifically, for GNSS measurements, $z_k$ represents the current GNSS data; accordingly, formula (5) may be expressed as formula (5.1):

$z_k^{GNSS} = h_{GNSS}(x_k) + n_{GNSS}, \quad n_{GNSS} \sim \mathcal{N}(0, R_{GNSS}) \quad (5.1)$

where $z_k^{GNSS}$ represents one frame of GNSS data among the current GNSS data, $x_k$ is the system state before this frame of GNSS data is substituted into the state update of the target object's state information, and $R_{GNSS}$ represents the measurement noise covariance when this frame of GNSS data is used in the update.
For wheel speed measurements, $z_k$ represents the current wheel speed data; accordingly, formula (5) may be expressed as formula (5.2):

$z_k^{odo} = h_{odo}(x_k) + n_{odo}, \quad n_{odo} \sim \mathcal{N}(0, R_{odo}) \quad (5.2)$

where $z_k^{odo}$ represents one frame of wheel speed data among the current wheel speed data, $x_k$ is the system state before this frame of wheel speed data is substituted into the state update, and $R_{odo}$ represents the measurement noise covariance when this frame of wheel speed data is used in the update.
For IMU measurements, if the target object is stationary, i.e., the IMU data measured by the target object's IMU over a preset past period (e.g., 1 second) indicates that the changes in the angular velocity and acceleration of the target vehicle are less than a preset threshold, formula (5) may be expressed as formula (5.3):

$0 = h_{static}(x_k) + n_{static}, \quad n_{static} \sim \mathcal{N}(0, R_{static}) \quad (5.3)$

where the zero measurement corresponds to one frame of IMU data among the current IMU data, $x_k$ is the system state before this frame of IMU data is substituted into the state update, and $R_{static}$ represents the measurement noise covariance when this frame of IMU data is used in the update.
In this case, if the target object is not stationary, the state information of the target object may not be updated with the current IMU data.
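The stationarity check described for formula (5.3) can be sketched as a simple threshold test over a past window of IMU samples; the function name and the threshold values below are hypothetical, and per-sample scalar magnitudes are assumed for brevity.

```python
def is_stationary(gyro_window, accel_window,
                  gyro_thresh=0.02, accel_thresh=0.1):
    # Treat the target as stationary when the spread (max - min) of the
    # angular-rate magnitudes and of the gravity-removed acceleration
    # magnitudes over the past window both stay below preset thresholds.
    def spread(samples):
        return max(samples) - min(samples)
    return (spread(gyro_window) < gyro_thresh
            and spread(accel_window) < accel_thresh)
```

When this test passes, a zero-velocity pseudo-measurement like formula (5.3) can be applied; when it fails, the IMU-based update is skipped, as the text describes.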
In this implementation, the order in which the different measurements are applied when updating the state information of the target object with the current sensor data is not limited: whichever type of current sensor data the state update system obtains first, the state information of the target object is updated with that type of current sensor data.
Subsequently, the filter update amount of the state update system can be calculated from each measurement equation. The update equations corresponding to the filter update amount are represented as formula (6):

$K_k = P_{k|k-1} H_k^T \left( H_k P_{k|k-1} H_k^T + R_k \right)^{-1}$
$\hat{x}_{k|k} = \hat{x}_{k|k-1} \boxplus K_k \left( z_k - h(\hat{x}_{k|k-1}) \right)$
$P_{k|k} = (I - K_k H_k) P_{k|k-1} \quad (6)$

where $P_{k|k-1}$ represents the predicted state covariance, whose initial value is the initial state covariance; it can be represented as formula (7):

$P_{k|k-1} = F_{k-1} P_{k-1|k-1} F_{k-1}^T + Q_k \quad (7)$

where $F_{k-1} = \partial f / \partial x$, i.e., the derivative of the state transition equation $f(\cdot)$ with respect to the state quantity. $Q_k$ is the state transition error, typically the noise parameter of the IMU, which is constant. $K_k$ is the Kalman gain and represents the magnitude by which the current system state is to be adjusted; $\hat{x}_{k|k}$ represents the current state information of the target object at the current time, and $P_{k|k}$ represents the current state covariance at the current time. $H_k = \partial h / \partial x$, i.e., the derivative of $h(\cdot)$ with respect to the state quantity. $\boxplus$ represents generalized addition, including the addition of vectors and the addition of rotation vectors.
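The update equations of formula (6) can be illustrated in the scalar case, where the state, covariance, and gain are single numbers and generalized addition reduces to ordinary addition. This is a didactic sketch with hypothetical names, not the full vector-valued error-state update.

```python
def kalman_update(x, P, z, h, H, R):
    # Scalar Kalman measurement update, mirroring formula (6):
    #   K = P H / (H P H + R)
    #   x <- x + K (z - h(x))      (h is passed in already evaluated)
    #   P <- (1 - K H) P
    K = P * H / (H * P * H + R)
    x_new = x + K * (z - h)
    P_new = (1.0 - K * H) * P
    return x_new, P_new, K
```

With equal prior covariance and measurement noise (P = R = 1, H = 1), the gain is 0.5, so the updated state moves halfway toward the measurement and the covariance halves, matching the usual Kalman intuition.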
After the electronic device determines the intermediate state information of the target object at the current time using the current sensor data and the initial state information, it constructs a reprojection error equation using: the pose information in the intermediate state information of the target object; the object pose information in the state information of the target object corresponding to each of the previous N-time images; the three-dimensional position information corresponding to each feature point to be utilized; the image position information of each feature point to be utilized; and the device pose information and intrinsic matrix of the image acquisition device corresponding to the image in which each feature point lies. The current state information of the target object at the current time is then determined based on this reprojection error equation.
In one case, the process of constructing the re-projection error equation may be:
For each feature point to be utilized, according to the three-dimensional position information of its corresponding space point, the object pose information in the state information of the target object corresponding to the image in which the feature point lies, and the device pose information of the image acquisition device corresponding to that image, the space point is first transformed from the world coordinate system into the coordinate system of the target object, and then into the device coordinate system of the corresponding image acquisition device. Combining the intrinsic matrix of that image acquisition device, the space point is then projected from the device coordinate system into the image coordinate system of the image in which the feature point lies, yielding the projection position information of the projection point of the space point in that image. Further, for each feature point to be utilized, the reprojection error equation is constructed from the image position information of the feature point and the projection position information of the projection point of its corresponding space point.
Specifically, in theory, the projection position of the space point corresponding to a feature point to be utilized, in the image in which the feature point lies, coincides with the image position of that feature point in the same image; the corresponding reprojection error equation can be represented as formula (8):

$z_{q,e}^{c_i} = \frac{1}{Z_q^{c_i,e}}\, K_e^{c_i}\, \big(T_e^{c_i}\big)^{-1} \big(\pi_e\big)^{-1} P_q^w \quad (8)$

where $z_{q,e}^{c_i}$ represents the image position information (i.e., the visual measurement) of the q-th feature point to be utilized in the image acquired by the $c_i$-th image acquisition device at the e-th time among the current image and its previous N-time images; $P_q^w$ represents the three-dimensional position information corresponding to the q-th feature point to be utilized, where q ranges over the integers from 1 to S and S is the total number of feature points to be utilized; $T_e^{c_i}$ represents the device pose information (i.e., the extrinsics) of the $c_i$-th image acquisition device for the e-th-time image; $\pi_e$ represents the pose information of the target object corresponding to the e-th-time image among the current image and its previous N-time images; $K_e^{c_i}$ represents the intrinsic matrix of the $c_i$-th image acquisition device for the e-th-time image; and $Z_q^{c_i,e}$ represents the vertical-axis (depth) coordinate value of the space point corresponding to the q-th feature point to be utilized in the camera coordinate system.
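The reprojection residual underlying formula (8) can be sketched as follows for a single pinhole camera. For brevity the pose is given directly as a world-to-camera rotation and translation, so the intermediate body-frame step described in the text is folded into one transform; the function names and the intrinsics layout (fx, fy, cx, cy) are hypothetical.

```python
def project(K, R_wc, t_wc, pw):
    # Project world point pw into the image: transform into the camera
    # frame with R_wc (3x3, row-major) and t_wc, then apply the pinhole
    # intrinsics K = (fx, fy, cx, cy) with division by the depth Z.
    pc = [sum(R_wc[i][j] * pw[j] for j in range(3)) + t_wc[i]
          for i in range(3)]
    fx, fy, cx, cy = K
    return (fx * pc[0] / pc[2] + cx, fy * pc[1] / pc[2] + cy)

def reprojection_error(obs, K, R_wc, t_wc, pw):
    # Residual between the measured feature position `obs` and the
    # projection of its triangulated space point, as in formula (8).
    u, v = project(K, R_wc, t_wc, pw)
    return (obs[0] - u, obs[1] - v)
```

Stacking such residuals over all feature points, cameras, and window poses produces the multi-state constraint that the filter update then drives toward zero.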
Based on this reprojection error equation, the value of the state quantity is then adjusted so that the reprojection error equation holds, thereby determining the current state information of the target object at the current time.
In another embodiment of the present invention, step 024 may include the following steps 0241-0242:

0241, constructing a target measurement equation based on the reprojection error equation.

0242, determining the current state information of the target object at the current time using the target measurement equation and the filter update equation.
Following the error-state Kalman filtering theory, in order to update the current state information of the target object at the current time, a measurement equation $z_k = h(x_k) + n_k$ (with measurement noise $n_k$ of covariance $R_k$) must be constructed that can be processed in the same manner as the sensor data acquired by the other sensors. In this implementation, the target measurement equation is constructed based on the reprojection error equation.
The left-hand side of formula (8), $z_{q,e}^{c_i}$, i.e., the visual measurement, corresponds to $z_k$, and the right-hand side of formula (8) corresponds to $h(x_k)$. Accordingly, the visual measurement equation can be expressed in the same form as formula (5).

From this form it can be observed that the state quantity in the visual measurement equation contains the three-dimensional positions of the space points corresponding to the feature points to be utilized. Since the feature point positions are not used as state quantities in the target measurement equation, following the theory of Multi-State Constraint Kalman Filtering (MSCKF), a first-order approximation is taken and both sides of the equation are left-multiplied by the left null space of the Jacobian with respect to the feature positions, eliminating the residual influence of the feature positions and finally yielding the visual target measurement equation.
Solving the target measurement equation together with the filter update equation, i.e., the update equations corresponding to the filter update amount in formula (6), yields the pose information in the sliding window corresponding to the current image and the system state corresponding to the current time, i.e., the current state information of the target object at the current time.
Corresponding to the above method embodiment, the embodiment of the present invention provides a state information estimation device, as shown in fig. 2, where the device may include:
a first obtaining module 210, configured to obtain the current images collected at the current time by the multiple image acquisition devices mounted on a target object and the current sensor data collected by other sensors, where the other sensors include an IMU;

a first determining module 220, configured to determine initial state information of the target object at the current time using the IMU data corresponding to the time before the current time and the previous state information of the target object at the previous time;

a second determining module 230, configured to determine matching point pairs between the current images, as first matching point pairs corresponding to the current images, using the feature points detected in each current image and the relative pose relationships between the image acquisition devices corresponding to the current images;

a third determining module 240, configured to determine matching point pairs between each current image and its previous image, as second matching point pairs corresponding to the current image, using the feature points detected in each current image and those detected in its previous image;

a fourth determining module 250, configured to determine the three-dimensional position information corresponding to each feature point to be utilized based on the first and second matching point pairs corresponding to the current image and the first and second matching point pairs corresponding to the previous N-time images of the current time;

a fifth determining module 260, configured to determine the current state information of the target object at the current time based on the three-dimensional position information, the image position information of each feature point to be utilized, the initial state information, and the current sensor data.
By applying this embodiment of the invention, the state is expanded so that the three-dimensional position information corresponding to each feature point to be utilized can be constructed from two sources: the tracking result of each feature point over time across the multiple image acquisition devices (the second matching point pairs corresponding to the current images and to the images at the previous N moments), and the feature-point associations between the image acquisition devices (the first matching point pairs corresponding to the current images and to the images at the previous N moments). Multi-state constraints are then built from this information and fused with the current data from the other sensors to determine the current state information of the target object at the current moment. This yields a state estimate with higher accuracy and robustness, and enables state estimation for a multi-sensor system containing any number of image acquisition devices.
In another embodiment of the present invention, the initial state information includes initial velocity information and initial pose information, where the initial pose information includes initial attitude information and initial position information;
the first determining module 220 is specifically configured to determine the angular velocity information and acceleration information of the target object at the previous moment by using the IMU data corresponding to the moment before the current moment;
construct a first state transition equation by using the angular velocity information of the previous moment and the previous attitude information in the previous state information, and determine the initial attitude information of the target object at the current moment by using the first state transition equation;
construct a second state transition equation by using the previous attitude information, the acceleration information of the previous moment, and the previous velocity information in the previous state information, and determine the initial velocity information of the target object at the current moment by using the second state transition equation;
construct a third state transition equation by using the initial velocity information, the previous velocity information, and the previous position information in the previous state information, and determine the initial position information of the target object at the current moment by using the third state transition equation.
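The three state transition equations above (attitude from angular velocity, velocity from rotated acceleration, position from velocity) can be sketched numerically as follows. This is a minimal illustration assuming a standard strapped-down IMU model with first-order integration and a known gravity vector; the function names, the rotation parameterization, and the trapezoidal position update are illustrative choices, not taken from the patent:

```python
import numpy as np

def so3_exp(phi):
    """Rodrigues' formula: map a rotation vector to a rotation matrix."""
    theta = np.linalg.norm(phi)
    if theta < 1e-12:
        return np.eye(3)
    a = phi / theta
    K = np.array([[0.0, -a[2], a[1]],
                  [a[2], 0.0, -a[0]],
                  [-a[1], a[0], 0.0]])
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

def propagate(R_prev, v_prev, p_prev, omega, accel, dt,
              g=np.array([0.0, 0.0, -9.81])):
    """One IMU prediction step from the previous state.

    R_prev, v_prev, p_prev : previous attitude (3x3), velocity, position.
    omega, accel           : body-frame angular rate and acceleration
                             from the previous IMU sample.
    """
    R_cur = R_prev @ so3_exp(omega * dt)          # first state transition equation
    v_cur = v_prev + (R_prev @ accel + g) * dt    # second state transition equation
    p_cur = p_prev + 0.5 * (v_prev + v_cur) * dt  # third state transition equation
    return R_cur, v_cur, p_cur
```

Note that the third equation uses both the initial (new) velocity and the previous velocity, matching the claim that the third state transition equation is constructed from the initial velocity information and the previous velocity information.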
In another embodiment of the present invention, the fourth determining module 250 is specifically configured to determine, according to a triangulation algorithm, the three-dimensional position information corresponding to each feature point to be utilized, based on the first and second matching point pairs corresponding to the current images, the first and second matching point pairs corresponding to the images at the N moments before the current moment, the device pose information of the image acquisition device corresponding to each current image and to each of those earlier images, and the pose information of the target object corresponding to the current images and each of the images at the previous N moments.
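A common triangulation algorithm of the kind referenced here is linear (DLT) triangulation: each observation of a feature point, together with the projection matrix of the camera that observed it (intrinsics composed with the device and object poses), contributes two linear constraints on the 3D point. The sketch below is one standard formulation, not necessarily the exact variant used in the patent:

```python
import numpy as np

def triangulate_dlt(projections):
    """Linear (DLT) triangulation of one feature from >= 2 observations.

    projections : list of (P, uv) pairs, where P is a 3x4 projection
    matrix (intrinsics times world-to-camera pose) and uv is the
    observed image location of the feature in that view.
    Returns the Euclidean 3D position of the feature.
    """
    rows = []
    for P, (u, v) in projections:
        # Each pixel observation gives two homogeneous linear constraints.
        rows.append(u * P[2] - P[0])
        rows.append(v * P[2] - P[1])
    A = np.asarray(rows)
    # The 3D point is the null-space direction of A (last right singular vector).
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]  # homogeneous -> Euclidean
```

With first matching point pairs the two projection matrices come from two cameras at the same moment; with second matching point pairs they come from the same camera at two moments, which is why both pair types feed the same triangulation.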
In another embodiment of the present invention, the fifth determining module 260 includes:
a first determining unit (not shown in the figure), configured to determine intermediate state information of the target object at the current moment based on the current sensor data and the initial state information;
a second determining unit (not shown in the figure), configured to determine the projection position information of the projection point, in the image where it is observed, of the spatial point corresponding to each feature point to be utilized, by using the three-dimensional position information, the intermediate pose information in the intermediate state information, the object pose information in the state information of the target object corresponding to each of the images at the previous N moments, and the device pose information and intrinsic parameter matrix of each image acquisition device;
a construction unit (not shown in the figure), configured to construct a reprojection error equation based on the projection position information corresponding to each feature point to be utilized and the image position information of each feature point to be utilized in the current image;
a third determining unit (not shown in the figure), configured to determine the current state information of the target object at the current moment based on the reprojection error equation.
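The quantity the construction unit works with is the standard reprojection residual: the spatial point is mapped through the pose and intrinsic parameter matrix into the image, and the result is compared with where the feature was actually detected. A minimal sketch (variable names are illustrative):

```python
import numpy as np

def reprojection_residual(X_w, R_wc, t_wc, K, uv_observed):
    """Reprojection residual of one feature in one image.

    X_w         : 3D position of the spatial point in the world frame.
    R_wc, t_wc  : world-to-camera rotation and translation (composed from
                  the object pose and the device pose of the camera).
    K           : 3x3 intrinsic parameter matrix.
    uv_observed : measured image position of the feature.
    """
    X_c = R_wc @ X_w + t_wc        # world frame -> camera frame
    uvw = K @ X_c                  # camera frame -> homogeneous pixel
    uv_projected = uvw[:2] / uvw[2]
    return uv_projected - uv_observed
```

Stacking this residual over every feature point to be utilized, in every image that observes it, yields the reprojection error equation on which the current state estimate is based.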
In another embodiment of the present invention, the third determining unit is specifically configured to construct a target measurement equation based on the reprojection error equation, and to determine the current state information of the target object at the current moment by using the target measurement equation and a filtering update equation.
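The patent does not name the filter, but a filtering update equation of this form is typically the measurement update of a Kalman-type filter (e.g. an extended Kalman filter), with the stacked reprojection residual as the innovation. A generic sketch under that assumption:

```python
import numpy as np

def ekf_update(x, P, z, h, H, R):
    """Kalman-style measurement update folding the measurement into
    the predicted (intermediate) state.

    x, P : predicted state vector and covariance.
    z    : measurement vector (e.g. observed feature positions).
    h    : predicted measurement h(x) (e.g. projected positions).
    H    : measurement Jacobian evaluated at x.
    R    : measurement noise covariance.
    """
    y = z - h                         # innovation (the reprojection residual)
    S = H @ P @ H.T + R               # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)    # Kalman gain
    x_new = x + K @ y                 # corrected state
    P_new = (np.eye(len(x)) - K @ H) @ P
    return x_new, P_new
```

In a multi-state-constraint design, x would additionally stack the object poses of the previous N moments, so one feature track constrains several past states at once.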
The device embodiments correspond to the method embodiments and have the same technical effects; for details, refer to the description of the method embodiments, which is not repeated here. Those of ordinary skill in the art will appreciate that the drawing is merely a schematic illustration of one embodiment, and that the modules or flows in the drawing are not necessarily required to practice the invention.
It will be appreciated by those of ordinary skill in the art that the modules in an apparatus of an embodiment may be distributed in the apparatus as described in that embodiment, or may, with corresponding changes, be located in one or more apparatuses different from that embodiment. The modules of the above embodiments may be combined into one module, or further split into a plurality of sub-modules.
It should be noted that the above embodiments are merely illustrative of the technical solution of the present invention and do not limit it. Although the present invention has been described in detail with reference to these embodiments, those skilled in the art will understand that the technical solutions described therein may be modified, or some of their technical features equivalently replaced, without departing in essence from the spirit and scope of the technical solutions of the embodiments of the present invention.

Claims (8)

1. A state information estimation method, characterized in that the method comprises:
obtaining current images collected at the current moment by multiple image acquisition devices mounted on a target object, and current sensor data collected by other sensors, wherein the other sensors include an IMU, and the multiple image acquisition devices form a system of any number of image acquisition devices installed on the same rigid body at any positions and orientations;
determining initial state information of the target object at the current moment by using the IMU data corresponding to the moment before the current moment and the previous state information of the target object at that previous moment;
determining matching point pairs between the current images, as first matching point pairs corresponding to the current images, by using the feature points detected in each current image and the relative pose relationships between the image acquisition devices corresponding to the current images;
determining matching point pairs between each current image and its previous image, as second matching point pairs corresponding to the current image, by using the feature points detected in each current image and the feature points detected in its previous image;
determining the three-dimensional position information corresponding to each feature point to be utilized, based on the first and second matching point pairs corresponding to the current images and the first and second matching point pairs corresponding to the images at the N moments before the current moment; and
determining the current state information of the target object at the current moment based on the three-dimensional position information, the image position information of each feature point to be utilized, the initial state information, and the current sensor data, including: determining intermediate state information of the target object at the current moment based on the current sensor data and the initial state information; determining the projection position information of the projection point, in the image where it is observed, of the spatial point corresponding to each feature point to be utilized, by using the three-dimensional position information, the intermediate pose information in the intermediate state information, the object pose information in the state information of the target object corresponding to each of the images at the previous N moments, and the device pose information and intrinsic parameter matrix of each image acquisition device; constructing a reprojection error equation based on the projection position information corresponding to each feature point to be utilized and the image position information of each feature point to be utilized in the current image; and determining the current state information of the target object at the current moment based on the reprojection error equation.

2. The method according to claim 1, wherein the initial state information comprises initial velocity information and initial pose information, and the initial pose information comprises initial attitude information and initial position information;
the step of determining the initial state information of the target object at the current moment by using the IMU data corresponding to the moment before the current moment and the previous state information of the target object at that previous moment comprises:
determining the angular velocity information and acceleration information of the target object at the previous moment by using the IMU data corresponding to the moment before the current moment;
constructing a first state transition equation by using the angular velocity information of the previous moment and the previous attitude information in the previous state information, and determining the initial attitude information of the target object at the current moment by using the first state transition equation;
constructing a second state transition equation by using the previous attitude information, the acceleration information of the previous moment, and the previous velocity information in the previous state information, and determining the initial velocity information of the target object at the current moment by using the second state transition equation; and
constructing a third state transition equation by using the initial velocity information, the previous velocity information, and the previous position information in the previous state information, and determining the initial position information of the target object at the current moment by using the third state transition equation.

3. The method according to claim 1, wherein the step of determining the three-dimensional position information corresponding to each feature point to be utilized comprises:
determining, according to a triangulation algorithm, the three-dimensional position information corresponding to each feature point to be utilized, based on the first and second matching point pairs corresponding to the current images, the first and second matching point pairs corresponding to the images at the N moments before the current moment, the device pose information of the image acquisition device corresponding to each current image, the device pose information of the image acquisition device corresponding to each of the images at the previous N moments, and the pose information of the target object corresponding to the current images and each of the images at the previous N moments.

4. The method according to claim 1, wherein the step of determining the current state information of the target object at the current moment based on the reprojection error equation comprises:
constructing a target measurement equation based on the reprojection error equation; and
determining the current state information of the target object at the current moment by using the target measurement equation and a filtering update equation.

5. A state information estimation device, characterized in that the device comprises:
a first obtaining module, configured to obtain current images collected at the current moment by multiple image acquisition devices mounted on a target object, and current sensor data collected by other sensors, wherein the other sensors include an IMU, and the multiple image acquisition devices form a system of any number of image acquisition devices installed on the same rigid body at any positions and orientations;
a first determining module, configured to determine initial state information of the target object at the current moment by using the IMU data corresponding to the moment before the current moment and the previous state information of the target object at that previous moment;
a second determining module, configured to determine matching point pairs between the current images, as first matching point pairs corresponding to the current images, by using the feature points detected in each current image and the relative pose relationships between the image acquisition devices corresponding to the current images;
a third determining module, configured to determine matching point pairs between each current image and its previous image, as second matching point pairs corresponding to the current image, by using the feature points detected in each current image and the feature points detected in its previous image;
a fourth determining module, configured to determine the three-dimensional position information corresponding to each feature point to be utilized, based on the first and second matching point pairs corresponding to the current images and the first and second matching point pairs corresponding to the images at the N moments before the current moment; and
a fifth determining module, configured to determine the current state information of the target object at the current moment based on the three-dimensional position information, the image position information of each feature point to be utilized, the initial state information, and the current sensor data;
wherein the fifth determining module comprises:
a first determining unit, configured to determine intermediate state information of the target object at the current moment based on the current sensor data and the initial state information;
a second determining unit, configured to determine the projection position information of the projection point, in the image where it is observed, of the spatial point corresponding to each feature point to be utilized, by using the three-dimensional position information, the intermediate pose information in the intermediate state information, the object pose information in the state information of the target object corresponding to each of the images at the previous N moments, and the device pose information and intrinsic parameter matrix of each image acquisition device;
a construction unit, configured to construct a reprojection error equation based on the projection position information corresponding to each feature point to be utilized and the image position information of each feature point to be utilized in the current image; and
a third determining unit, configured to determine the current state information of the target object at the current moment based on the reprojection error equation.

6. The device according to claim 5, wherein the initial state information comprises initial velocity information and initial pose information, and the initial pose information comprises initial attitude information and initial position information;
the first determining module is specifically configured to: determine the angular velocity information and acceleration information of the target object at the previous moment by using the IMU data corresponding to the moment before the current moment;
construct a first state transition equation by using the angular velocity information of the previous moment and the previous attitude information in the previous state information, and determine the initial attitude information of the target object at the current moment by using the first state transition equation;
construct a second state transition equation by using the previous attitude information, the acceleration information of the previous moment, and the previous velocity information in the previous state information, and determine the initial velocity information of the target object at the current moment by using the second state transition equation; and
construct a third state transition equation by using the initial velocity information, the previous velocity information, and the previous position information in the previous state information, and determine the initial position information of the target object at the current moment by using the third state transition equation.

7. The device according to claim 5, wherein the fourth determining module is specifically configured to determine, according to a triangulation algorithm, the three-dimensional position information corresponding to each feature point to be utilized, based on the first and second matching point pairs corresponding to the current images, the first and second matching point pairs corresponding to the images at the N moments before the current moment, the device pose information of the image acquisition device corresponding to each current image, the device pose information of the image acquisition device corresponding to each of the images at the previous N moments, and the pose information of the target object corresponding to the current images and each of the images at the previous N moments.

8. The device according to claim 5, wherein the third determining unit is specifically configured to construct a target measurement equation based on the reprojection error equation, and to determine the current state information of the target object at the current moment by using the target measurement equation and a filtering update equation.
CN202110217283.5A 2021-02-26 2021-02-26 A method and device for estimating state information Active CN114964217B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202110217283.5A CN114964217B (en) 2021-02-26 2021-02-26 A method and device for estimating state information
PCT/CN2021/109535 WO2022179047A1 (en) 2021-02-26 2021-07-30 State information estimation method and apparatus

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110217283.5A CN114964217B (en) 2021-02-26 2021-02-26 A method and device for estimating state information

Publications (2)

Publication Number Publication Date
CN114964217A CN114964217A (en) 2022-08-30
CN114964217B true CN114964217B (en) 2025-09-26

Family

ID=82973589

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110217283.5A Active CN114964217B (en) 2021-02-26 2021-02-26 A method and device for estimating state information

Country Status (2)

Country Link
CN (1) CN114964217B (en)
WO (1) WO2022179047A1 (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117804442B (en) * 2023-12-28 2025-10-28 广东汇天航空航天科技有限公司 Method, device, electronic device and storage medium for determining position and posture of aircraft
CN118864602B (en) * 2024-09-14 2024-12-03 中科南京智能技术研究院 A ground robot repositioning method and device based on multi-sensor fusion
CN121113053A (en) * 2025-09-30 2025-12-12 河北中军智能科技有限公司 An anti-vibration inertial navigation device for spacecraft and its implementation method

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8174568B2 (en) * 2006-12-01 2012-05-08 Sri International Unified framework for precise vision-aided navigation
CN109506642B (en) * 2018-10-09 2021-05-28 浙江大学 A robot multi-camera visual inertial real-time positioning method and device
CN111862146B (en) * 2019-04-30 2023-08-29 北京魔门塔科技有限公司 Target object positioning method and device
CN112016568B (en) * 2019-05-31 2024-07-05 北京初速度科技有限公司 Tracking method and device for image feature points of target object
CN112050806B (en) * 2019-06-06 2022-08-30 北京魔门塔科技有限公司 Positioning method and device for moving vehicle
CN112115980A (en) * 2020-08-25 2020-12-22 西北工业大学 Binocular vision odometer design method based on optical flow tracking and point line feature matching
CN112304307B (en) * 2020-09-15 2024-09-06 浙江大华技术股份有限公司 Positioning method and device based on multi-sensor fusion and storage medium
CN112269851B (en) * 2020-11-16 2024-05-17 Oppo广东移动通信有限公司 Map data updating method and device, storage medium and electronic equipment
CN112270710B (en) * 2020-11-16 2023-12-19 Oppo广东移动通信有限公司 Pose determining method, pose determining device, storage medium and electronic equipment

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Research on IMU-Aided Binocular Visual Odometry; Tu Jinge; Information Science and Technology Series; 2021-02-15; Vol. 2020 (No. 06); full text *

Also Published As

Publication number Publication date
WO2022179047A1 (en) 2022-09-01
CN114964217A (en) 2022-08-30

Similar Documents

Publication Publication Date Title
CN109991636B (en) Map construction method and system based on GPS, IMU and binocular vision
CN112197770B (en) Robot positioning method and positioning device thereof
Panahandeh et al. Vision-aided inertial navigation based on ground plane feature detection
CN112230242A (en) Pose estimation system and method
CN109885080B (en) Autonomous control system and autonomous control method
CN112050806B (en) Positioning method and device for moving vehicle
CN110296702A (en) Visual sensor and the tightly coupled position and orientation estimation method of inertial navigation and device
CN108731670A (en) Inertia/visual odometry combined navigation locating method based on measurement model optimization
CN113503872B (en) A low-speed unmanned vehicle positioning method based on the fusion of camera and consumer-grade IMU
US20180075614A1 (en) Method of Depth Estimation Using a Camera and Inertial Sensor
CN114964217B (en) A method and device for estimating state information
Zhang et al. Vision-aided localization for ground robots
CN117685953A (en) UWB and vision fusion positioning method and system for multi-UAV collaborative positioning
CN111738047A (en) Self-position estimation method
KR101737950B1 (en) Vision-based navigation solution estimation system and method in terrain referenced navigation
CN114910069A (en) Fusion positioning initialization system and method for unmanned aerial vehicle
CN106709222B (en) IMU drift compensation method based on monocular vision
Li et al. Exploring the potential of the deep-learning-aided Kalman filter for GNSS/INS integration: A study on 2-D simulation datasets
CN118089728A (en) A quadruped robot trajectory generation method, device, equipment and storage medium
CN111862146A (en) Target object positioning method and device
CN118746293A (en) High-precision positioning method based on multi-sensor fusion SLAM
CN109341685B (en) A vision-assisted landing navigation method for fixed-wing aircraft based on homography transformation
CN118031951A (en) Multi-source fusion positioning method, device, electronic device and storage medium
CN115601431B (en) Control method for stability of visual inertial odometer and related equipment
CN106441282B (en) A kind of star sensor star tracking method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant