Disclosure of Invention
In view of the above technical problems and shortcomings, the invention provides a risk, dynamics and intention collaborative trajectory prediction method oriented to high-risk scenes. The method comprehensively utilizes outward risk cues (relative speed, relative distance and time-to-collision), vehicle dynamics features (acceleration, lateral acceleration, longitudinal acceleration and yaw rate) and driving intention information (historical trajectory features). It realizes early perception of potential risk by constructing a prior-enhanced risk attention mechanism and a three-body interaction micro-graph module, captures the dynamic evolution of a vehicle from short-term disturbances to long-term behavior trends with a multi-scale dynamics temporal encoder, and applies semantic constraint and optimization to the multi-modal trajectories by introducing an explicit intention-recognition auxiliary task.
In order to achieve the above purpose, the invention adopts the following technical scheme:
A risk, dynamics and intention collaborative trajectory prediction method for a high risk scene, the method comprising the steps of:
Step 1, collecting data, extracting multidimensional motion features of the vehicles, and performing feature recombination to obtain track features, outward risk cues and dynamics features;
Step 2, encoding the track features and the map features to obtain historical track features and encoded map features;
Step 3, performing temporal modeling on the outward risk cues to obtain a hidden-state sequence and potential risk scores; calculating the risk score at time t from the potential risk score at time t and the relative parameters between the autonomous vehicle and the target vehicle; normalizing the risk scores to obtain risk-aware attention weights; then performing a weighted sum of the hidden-state sequence with the risk-aware attention weights to obtain a risk-aware temporal feature;
Constructing a fully connected small graph from the outward risk feature vectors and updating it through a graph attention mechanism; a sink node obtains the node embedding of the target vehicle and the overall interaction situation, which are spliced and fused with the risk-aware temporal feature of the target vehicle to obtain a final risk representation vector; this vector is fused with the historical track features to obtain a fused feature;
Step 4, obtaining short-term, mid-term and long-term features from the dynamics features, processing the three to obtain a multi-scale dynamics feature, and fusing it with the fused feature of step 3 to obtain a further fused feature;
Step 5, splicing the fused feature of step 4 with the map features and encoding the result to obtain agent features and updated map features;
Step 6, obtaining the probability of a lane-change cut-in of the target vehicle based on the agent features and the map features, thereby obtaining the final feature of the target vehicle; replacing the target-vehicle feature in the agent features with this final feature to obtain the final agent features;
Step 7, splicing the final feature of the target vehicle, the final agent features and the map features, then decoding and outputting the prediction result.
In the method, the data collected in step 1 are first standardized, and then the multidimensional motion features of the vehicles are extracted, including the position, speed, acceleration, lateral acceleration and longitudinal acceleration of each vehicle, the relative position, relative speed and time-to-collision between vehicles, the yaw rate, and the map features. During feature recombination, the position, speed and acceleration of a vehicle are spliced to obtain the track features; the relative positions, relative speeds and times-to-collision of the vehicle pairs are spliced to obtain the outward risk cues; and the acceleration, lateral acceleration, longitudinal acceleration and yaw rate are spliced to obtain the dynamics features.
As a preferred embodiment of the present invention, in step 3 a double gated recurrent unit (GRU) is used to perform temporal modeling on the outward risk cues to obtain the hidden-state sequence, and a multi-layer perceptron (MLP) performs a nonlinear mapping to obtain the potential risk scores.
Preferably, the risk score r_t at time t in step 3 is expressed as:
r_t = s_t + λ1·(1/TTC_t) + λ2·f_inv(d_t) + λ3·f_shrink(d_t);
where s_t is the potential risk score at time t, λ1, λ2 and λ3 are learnable scalar coefficients used to balance the contributions of the different risk factors, f_inv is the distance reciprocal function, f_shrink is the distance contraction function, TTC_t is the time-to-collision between the autonomous vehicle and the target vehicle, and d_t is their relative distance;
The distance reciprocal function is:
f_inv(d_t) = 1/(d_t + ε);
where ε is a stability constant;
The distance contraction function is:
f_shrink(d_t) = max(0, d_{t-1} − d_t);
where d_{t-1} is the relative distance between the autonomous vehicle and the target vehicle at time t−1;
Thereafter a softmax function normalizes the risk scores r_t to obtain the risk-aware attention weight at each time t:
α_t = exp(r_t) / Σ_τ exp(r_τ).
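The prior risk scoring and its softmax normalization described above can be sketched in a few lines; the coefficient values, the ε constant and the toy trajectory below are illustrative assumptions, not parameters of the trained model:

```python
import math

EPS = 1e-3  # stability constant epsilon (illustrative value)

def distance_reciprocal(d):
    # f_inv(d) = 1/(d + eps): bounded for large gaps, grows sharply as the gap closes
    return 1.0 / (d + EPS)

def distance_contraction(d_prev, d_curr):
    # f_shrink: positive only while the gap is shrinking; zero if it holds or widens
    return max(0.0, d_prev - d_curr)

def risk_scores(potential, ttc, dist, l1=1.0, l2=1.0, l3=1.0):
    # combine the learned potential score with the three physical priors;
    # l1/l2/l3 stand in for the learnable scalar coefficients
    scores = []
    for t in range(len(potential)):
        d_prev = dist[t - 1] if t > 0 else dist[t]
        scores.append(potential[t]
                      + l1 / (ttc[t] + EPS)
                      + l2 * distance_reciprocal(dist[t])
                      + l3 * distance_contraction(d_prev, dist[t]))
    return scores

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

# toy sequence: the gap and TTC shrink over time, so the calm first step
# should receive the smallest risk-aware attention weight
pot = [0.1, 0.1, 0.1, 0.1]
ttc = [8.0, 4.0, 2.0, 1.0]
dist = [40.0, 30.0, 15.0, 5.0]
weights = softmax(risk_scores(pot, ttc, dist))
```

A weighted sum of the GRU hidden states with `weights` would then give the risk-aware temporal feature.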
In the method, when constructing the fully connected small graph in step 3, the relative positions, relative speeds and times-to-collision of the vehicles are first average-pooled along the time dimension; outward risk feature vectors are organized and generated according to the target vehicle, the autonomous vehicle and the nearest preceding vehicle in the ego lane; the outward risk feature vectors are then mapped to a high dimension by a fully connected linear layer, and the mapped high-dimensional feature vectors serve as nodes of a graph neural network to form the fully connected small graph.
As a preference of the invention, each layer of the graph attention mechanism in step 3 is updated in the standard GAT form. After the graph-attention updates, the sink node obtains the node embedding h_TV of the target vehicle and the overall interaction situation g, where h_TV is the updated node embedding of the target vehicle and g aggregates the updated node embeddings of the target vehicle, the autonomous vehicle and the nearest preceding vehicle in the ego lane.
As a preferred embodiment of the present invention, the short-term feature in step 4 is obtained by cutting the dynamics features into windows along the historical time steps, with the stride set to 1 and the first window size set to w1; each extracted window is encoded with a CNN encoder, and the window encodings are spliced to obtain the short-term feature;
The mid-term feature is obtained by performing a second window cutting of the dynamics features along the historical time steps, with the stride set to 5 and the second window size set to w2; each extracted window is encoded with a bidirectional gated recurrent unit (BiGRU), and the hidden states at the last moment of each window are spliced to obtain the mid-term feature.
As a preferred aspect of the invention, the long-term feature in step 4 is obtained by encoding the dynamics features with a bidirectional LSTM, strengthening the temporal dependencies with a multi-head self-attention mechanism, and finally applying an average-pooling operation. The short-term, mid-term and long-term features are then spliced, interact through a further multi-head self-attention layer, and are average-pooled to obtain the multi-scale dynamics feature.
As a preferred aspect of the present invention, step 6 specifically includes the steps of:
Step 6.1, average-pooling the agent features and the map features to obtain two pooled context features, and at the same time extracting the feature of the target vehicle from the agent features;
Step 6.2, splicing the two pooled features of step 6.1 with the feature of the target vehicle, and applying a nonlinear mapping to obtain the feature vector of the target vehicle for a lane-change cut-in;
Step 6.3, outputting the probability p of a lane-change cut-in of the target vehicle from the cut-in feature vector f_cut, expressed as:
p = Sigmoid(W_c·f_cut + b_c);
where Sigmoid is the Sigmoid activation function and W_c, b_c are the parameters of the output layer;
Step 6.4, fusing the probability p of a lane-change cut-in with the feature of the target vehicle to obtain the final feature of the target vehicle;
Step 6.5, replacing the target-vehicle feature in the agent features with the final feature of the target vehicle to obtain the final agent features.
As a preference of the invention, step 7 splices the final feature of the target vehicle, the final agent features and the map features, maps the result with two MLPs, and outputs the multi-modal trajectories of the target vehicle and their trajectory probabilities.
The invention also provides a computer-readable medium on which a computer program is stored; when executed by a processor, the program implements the above risk, dynamics and intention collaborative trajectory prediction method for high-risk scenes.
The invention has the advantages and beneficial effects that:
(1) Aiming at sudden high-risk scenes such as abrupt cut-ins, hard braking and sudden occlusion, the invention establishes a collaborative prediction framework fusing risk cues, dynamics information and driving intention. Through the prior-enhanced risk attention mechanism and three-body interaction micro-graph modeling, highly sensitive risk perception is achieved at the early stage of rapid risk accumulation, effectively improving the timeliness and accuracy of trajectory prediction, realizing intelligent prediction and safety enhancement in high-risk scenes, and offering strong engineering application value.
(2) The invention proposes multi-scale dynamics temporal encoding, which jointly exploits short-term disturbance features, mid-term stable trends and long-term behavior patterns to achieve layered modeling of the vehicle's non-stationary motion, so that the model understands dynamic evolution at different time scales and the robustness and generalization of trajectory prediction in complex traffic flow are improved.
(3) The invention introduces an explicit intention-recognition auxiliary task into the traditional trajectory prediction framework and applies semantic constraint and optimization in combination with driving intentions (such as lane-change cut-ins and sharp turns), so that the predicted trajectories better conform to actual driving intention and traffic rules, and potential conflicts and collision risks can be recognized in advance.
(4) The method can be widely applied to various complex scenes such as urban roads, highways, intersections and the like, and particularly shows stronger stability and response speed in burst high-risk scenes. Through the end-to-end collaborative optimization structure, the invention can provide reliable risk priori and behavior prediction support for an automatic driving system, and the overall driving safety is obviously improved.
(5) Compared with traditional models based on historical-trajectory regression or social-compliance assumptions, the invention no longer assumes cooperative traffic behavior: by introducing the risk attention and structured interaction modeling mechanisms, it can actively identify adversarial, dangerous situations and achieve earlier and more accurate risk warning and dynamic response, improving the robustness and safety of decisions in extreme scenes.
(6) The invention can be directly deployed on the existing automatic driving hardware platform, mainly relies on environment and motion data acquired by vehicle-mounted sensors (laser radar, camera, millimeter wave radar, GPS and the like), does not need to add expensive hardware modules or additional sensing equipment, and has good portability and engineering application value.
(7) On the key evaluation indexes mAP6, minADE6, minFDE6 and MR6, the proposed method significantly exceeds existing mainstream trajectory algorithms such as MTR, HIVT and SIMPL, fully demonstrating its feasibility.
Detailed Description
The following detailed description of the application, taken in conjunction with the accompanying drawings, is provided so that those skilled in the art may better understand the technical solutions of the application and their advantages; it is not intended to limit the scope of the application.
As shown in fig. 1 to 3, the present embodiment provides a risk, dynamics and intention collaborative trajectory prediction method for a high risk scenario, which includes the following steps:
Step 1, data acquisition and feature extraction:
step 1.1. High risk trajectory data screening:
The method comprises the steps of collecting running data of an automatic driving vehicle in an actual road traffic environment, obtaining information of surrounding vehicles through vehicle-mounted sensors (such as a laser radar, a millimeter wave radar, a camera, a GPS sensor and the like), preprocessing the original data, and screening track fragments containing obvious abnormal driving behaviors or collision risks.
In this embodiment, the information of surrounding vehicles comprises their trajectory information, vehicle type and road type, wherein the trajectory information comprises the coordinates of each vehicle's historical trajectory together with its speed, acceleration and steering angle, and the vehicle types include bicycles, electric vehicles, cars and trucks.
And 1.2, carrying out standardization processing on the screened high-risk emergency track fragments (track fragments containing obvious abnormal driving behaviors or collision risks), wherein the standardization processing comprises time synchronization and coordinate system synchronization, and carrying out time synchronization on a plurality of vehicle-mounted sensors by using pulse signals provided by a GPS (global positioning system) during the time synchronization.
Step 1.3, extracting from the standardized trajectory data the multidimensional motion features of all vehicles in the scene, including the position information, speed information, acceleration information, lateral acceleration information, longitudinal acceleration information, relative positions, relative speeds and times-to-collision (TTC) between vehicles, yaw rate information, and map features;
where p_i = {p_i^1, ..., p_i^T} denotes the position of the i-th vehicle, T is the total number of historical time steps, and p_i^t gives the x and y coordinates of the i-th vehicle at time step t; v_i = {v_i^t} denotes the speed of the i-th vehicle, a_i = {a_i^t} its acceleration, a_i^lat = {a_i^lat,t} its lateral acceleration, a_i^lon = {a_i^lon,t} its longitudinal acceleration, and ω_i = {ω_i^t} its yaw rate;
the relative positions d = [d_TV,EV, d_TV,CIPV, d_Ego,TV, d_EV,CIPV, d_Ego,EV, d_Ego,CIPV], where TV denotes the target vehicle, EV a surrounding vehicle, Ego the autonomous vehicle, and CIPV the preceding vehicle closest to the current lane of the autonomous vehicle. d_TV,EV denotes the relative position (relative distance) between the target vehicle and a surrounding vehicle, d_TV,CIPV between the target vehicle and the CIPV, d_Ego,TV between the autonomous vehicle and the target vehicle, d_EV,CIPV between a surrounding vehicle and the CIPV, d_Ego,EV between the autonomous vehicle and a surrounding vehicle, and d_Ego,CIPV between the autonomous vehicle and the CIPV; each entry records the relative distance between the two vehicles at every one of the past T historical time steps;
the relative speeds Δv = [Δv_TV,EV, Δv_TV,CIPV, Δv_Ego,TV, Δv_EV,CIPV, Δv_Ego,EV, Δv_Ego,CIPV], defined between the same vehicle pairs as the relative positions, with each entry recording the relative speed at every one of the past T historical time steps;
the times-to-collision TTC = [TTC_TV,EV, TTC_TV,CIPV, TTC_Ego,TV, TTC_EV,CIPV, TTC_Ego,EV, TTC_Ego,CIPV], defined between the same vehicle pairs as the relative positions, with each entry recording the time-to-collision at every one of the past T historical time steps.
Step 1.4, feature recombination: the position, speed and acceleration of each vehicle are spliced to obtain the track features; the relative positions, relative speeds and times-to-collision (TTC) are spliced to obtain the outward risk cues; and the acceleration, lateral acceleration, longitudinal acceleration and yaw rate are spliced to obtain the dynamics features.
Specifically, in this embodiment, splicing the position, speed and acceleration information of the i-th vehicle gives the track features
X_i = Concat(p_i, v_i, a_i),
where Concat(·) denotes the vector concatenation operation.
For the outward risk cues, the relative position d_t, relative speed Δv_t and time-to-collision TTC_t at each time step t are first spliced to obtain the outward risk cue at time t:
R_t = Concat(d_t, Δv_t, TTC_t),
so the outward risk cues over the past T historical time steps are
R = {R_1, R_2, ..., R_T}.
Splicing the acceleration, lateral acceleration, longitudinal acceleration and yaw rate gives the dynamics features
D_i = Concat(a_i, a_i^lat, a_i^lon, ω_i).
Step 2, track feature and map feature encoding:
Step 2.1, the track features X and the map features M are first encoded with a PointNet-based polyline encoder to obtain the historical track features H_traj and the encoded map features M_enc:
H_traj = PolyEnc(X), M_enc = PolyEnc(M);
where PolyEnc denotes the PointNet-based polyline encoder.
Step 3, risk modeling of the outward risk cues:
Step 3.1, a double gated recurrent unit (GRU, hidden dimension 256) performs temporal modeling on the outward risk cues R to obtain the hidden-state sequence, and a simple multi-layer perceptron (MLP) performs a nonlinear mapping to obtain the potential risk scores:
H = {h_1, ..., h_T} = GRU(R), s_t = MLP(h_t);
where H is the encoded hidden-state sequence, h_t is the hidden state at historical time t, and s_t is the potential risk score at historical time t.
Step 3.2, a prior risk scoring function combines the potential risk score s_t at time t with the time-to-collision TTC_t, relative distance d_t and distance change between the autonomous vehicle (Ego) and the target vehicle (TV), yielding the risk score at time t:
r_t = s_t + λ1·(1/TTC_t) + λ2·f_inv(d_t) + λ3·f_shrink(d_t);
where λ1, λ2 and λ3 are learnable scalar coefficients used to balance the contributions of the different risk factors, f_inv is the distance reciprocal function, and f_shrink is the distance contraction function. The physical prior terms are defined as follows:
TTC constraint: the shorter the time-to-collision between the autonomous vehicle and the target vehicle at the current time t, the higher the risk score;
Distance reciprocal function:
f_inv(d_t) = 1/(d_t + ε);
where ε is a stability constant: the risk contribution remains bounded when the inter-vehicle distance is large, whereas when the distance shrinks to very small values f_inv grows sharply, making the model particularly sensitive to imminent-collision scenes.
Distance contraction function:
f_shrink(d_t) = max(0, d_{t-1} − d_t);
this term is positive when the distance between the two vehicles is shrinking, and the faster the shrinkage the larger its value, representing a rapid accumulation of risk; when the distance stays constant or increases, there is no newly added closing risk.
Step 3.3, the softmax function normalizes the risk scores r_t to obtain the risk-aware attention weight at each time t:
α_t = exp(r_t) / Σ_{τ=1..T} exp(r_τ).
Step 3.4, the hidden-state sequence obtained in step 3.1 is weighted and summed with the risk-aware attention weights obtained in step 3.3 to obtain the risk-aware temporal feature:
F_time = Σ_{t=1..T} α_t · h_t.
Step 3.5, the relative positions, relative speeds and times-to-collision of the vehicles are average-pooled along the time dimension, and the outward risk feature vector is organized by the target vehicle (TV), the autonomous vehicle (Ego) and the nearest preceding vehicle in the ego lane (CIPV).
Specifically, the average pooling over the time dimension is
d_avg = (1/T)·Σ_t d_t, Δv_avg = (1/T)·Σ_t Δv_t, TTC_avg = (1/T)·Σ_t TTC_t;
where d_avg, Δv_avg and TTC_avg denote the average-pooled relative position, relative speed and time-to-collision respectively.
The outward risk feature vector organized by TV, Ego and CIPV is
v_risk = Concat(d_avg, Δv_avg, TTC_avg);
in this embodiment, the subscript avg denotes average pooling, and a superscript sign indicates the sign convention of the quantity: for example, the average-pooled relative position between the target vehicle and the CIPV takes a positive value, while the average-pooled relative position between the target vehicle and the autonomous vehicle takes a negative value.
Step 3.6, a simple fully connected linear layer maps the outward risk feature vector to a high dimension:
V = γ(v_risk);
where γ is the fully connected linear layer.
Step 3.7, the mapped high-dimensional feature vectors V serve as the nodes of a graph neural network, forming a fully connected small graph,
and L layers of graph attention are stacked, each layer being updated in the standard GAT form:
e_ij^l = LeakyReLU(a^T · [W^l h_i^l ‖ W^l h_j^l]),
α_ij^l = exp(e_ij^l) / Σ_{k∈N(i)} exp(e_ik^l),
h_i^{l+1} = σ(Σ_{j∈N(i)} α_ij^l · W^l h_j^l);
where a is a trainable weight vector, N(i) is the set of neighbor nodes of node i, W^l h_i^l is the mapped feature of node i at layer l, e_ij^l represents the edge relation between node i and neighbor node j, α_ij^l is the update weight, h_i^l is the feature of node i at layer l, and h_i^{l+1} is its updated feature at layer l+1.
Step 3.8, after the graph-attention updates, the sink node obtains the node embedding h_TV of the target vehicle and the overall interaction situation g, where h_TV is the updated node embedding of the target vehicle and g aggregates the updated node embeddings of the target vehicle, the autonomous vehicle and the nearest preceding vehicle in the ego lane;
Step 3.9, the risk-aware temporal feature F_time of step 3.4 is spliced with the node embedding h_TV and the overall interaction situation g of step 3.8, and a simple multi-layer perceptron (MLP) performs a nonlinear mapping to obtain the final risk representation vector:
F_risk = MLP(Concat(F_time, h_TV, g));
where the MLP consists of a fully connected layer and a ReLU activation function.
Step 3.10, the historical track features obtained in step 2.1 and the final risk representation vector F_risk obtained in step 3.9 are fused with a simple MLP to obtain the feature F1.
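A minimal sketch of the graph-attention update over the three-node small graph (target vehicle, autonomous vehicle, CIPV) might look like the following; the feature dimension, random weights, single attention head and mean-pooled readout for the overall interaction situation g are all illustrative assumptions, not the trained model:

```python
import math, random

def leaky_relu(x, slope=0.2):
    return x if x > 0 else slope * x

def matvec(W, h):
    return [sum(w * x for w, x in zip(row, h)) for row in W]

def gat_layer(H, W, a):
    # one GAT-style layer: e_ij = LeakyReLU(a . [W h_i || W h_j]),
    # alpha_ij = softmax over neighbors j, h_i' = sum_j alpha_ij * W h_j
    Wh = [matvec(W, h) for h in H]
    out = []
    for i in range(len(H)):
        e = []
        for j in range(len(H)):  # fully connected: every node is a neighbor
            z = Wh[i] + Wh[j]    # list concatenation models [W h_i || W h_j]
            e.append(leaky_relu(sum(ak * zk for ak, zk in zip(a, z))))
        m = max(e)
        ex = [math.exp(v - m) for v in e]
        s = sum(ex)
        alpha = [v / s for v in ex]
        out.append([sum(alpha[j] * Wh[j][k] for j in range(len(H)))
                    for k in range(len(Wh[0]))])
    return out

random.seed(0)
dim = 4
# node order: target vehicle (TV), autonomous vehicle (Ego), nearest preceding vehicle (CIPV)
H = [[random.uniform(-1, 1) for _ in range(dim)] for _ in range(3)]
W = [[random.uniform(-0.5, 0.5) for _ in range(dim)] for _ in range(dim)]
a = [random.uniform(-0.5, 0.5) for _ in range(2 * dim)]
H1 = gat_layer(H, W, a)
h_tv = H1[0]  # node embedding of the target vehicle read off by the sink
g = [sum(H1[i][k] for i in range(3)) / 3 for k in range(dim)]  # pooled interaction context
```

Splicing `h_tv` and `g` with the risk-aware temporal feature and mapping through an MLP would then give the final risk representation vector.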
Step 4, multi-scale temporal encoding of the dynamics features:
Step 4.1, the dynamics features D are cut into windows along the historical time steps with stride 1 and first window size w1; each extracted window is encoded with a CNN encoder, and the window encodings are spliced to obtain the short-term feature:
F_short = Concat(CNN(W_1), ..., CNN(W_K));
where the CNN encoder consists of two one-dimensional convolution layers, batch normalization (BN), the nonlinear activation function ReLU and an average-pooling layer, and W_k denotes the k-th window.
Step 4.2, the dynamics features D undergo a second window cutting along the historical time steps with stride 5 and second window size w2; each window is encoded with a bidirectional gated recurrent unit (BiGRU), and the hidden states at the last moment of each window are spliced to obtain the mid-term feature:
F_mid = Concat(h_1^last, ..., h_K'^last), h_k^last = BiGRU(W_k);
where W_k denotes the k-th window and h_k^last is the hidden state at the last moment of the k-th window.
Step 4.3, the dynamics features D are encoded with a bidirectional LSTM (three stacked bidirectional LSTM layers); a multi-head self-attention mechanism then strengthens the temporal dependencies, and finally an average-pooling operation yields the long-term feature:
F_long = AvgPool(MHSA(BiLSTM(D)));
where MHSA denotes the multi-head self-attention mechanism and AvgPool the average-pooling operation.
Step 4.4, the short-term feature of step 4.1, the mid-term feature of step 4.2 and the long-term feature of step 4.3 are spliced, a multi-head self-attention mechanism performs the cross-scale information interaction, and average pooling yields the multi-scale dynamics feature:
F_dyn = AvgPool(MHSA(Concat(F_short, F_mid, F_long))).
Step 4.5, the feature obtained in step 3.10 and the multi-scale dynamics feature obtained in step 4.4 are fused with an MLP to obtain the feature F2.
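The three window scales of steps 4.1–4.4 can be illustrated on a toy one-dimensional dynamics signal; mean pooling stands in for the CNN/BiGRU/BiLSTM encoders and the attention-based fusion, so only the windowing logic itself is faithful to the method, and the window sizes are assumptions:

```python
def windows(seq, size, step):
    # cut a time series into overlapping windows of the given size and stride
    return [seq[s:s + size] for s in range(0, len(seq) - size + 1, step)]

def mean(xs):
    return sum(xs) / len(xs)

# toy 1-D dynamics signal over T = 20 historical steps
T = 20
signal = [0.1 * t for t in range(T)]

short = [mean(w) for w in windows(signal, size=3, step=1)]   # stride 1: fine-grained disturbances
mid   = [mean(w) for w in windows(signal, size=10, step=5)]  # stride 5: mid-term trend
long_ = mean(signal)                                         # whole-history summary

multi_scale = mean(short + mid + [long_])  # stand-in for attention fusion + pooling
```

With T = 20, the stride-1 cutting yields 18 windows and the stride-5 cutting yields 3, matching the window counts T − w + 1 and ⌊(T − w)/5⌋ + 1 implied by the two strides.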
Step 5, feature encoding:
Step 5.1, the feature obtained in step 4.5 is spliced with the map features and encoded with a Transformer encoder to obtain the agent features and updated map features;
where the Transformer encoder layer consists of a multi-head self-attention mechanism, a feed-forward fully connected network, residual connections and layer normalization.
Step 6, intention recognition auxiliary task:
Step 6.1, the agent features F_agent and the map features M_enc of step 5.1 are average-pooled to obtain the context features c_agent and c_map, and at the same time the feature f_TV of the target vehicle is extracted from the agent features:
c_agent = AvgPool(F_agent), c_map = AvgPool(M_enc).
Step 6.2, the features c_agent and c_map obtained in step 6.1 are spliced with the target-vehicle feature f_TV, and one MLP performs a nonlinear mapping to obtain the feature vector of the target vehicle for a lane-change cut-in (forced lane change):
f_cut = MLP(Concat(c_agent, c_map, f_TV));
where Concat(·) denotes the splicing of the features.
Step 6.3, the probability p of a lane-change cut-in of the target vehicle is output from the cut-in feature vector:
p = Sigmoid(W_c·f_cut + b_c);
where Sigmoid is the Sigmoid activation function and W_c, b_c are the parameters of the output layer.
Step 6.4, the probability p of a lane-change cut-in is fused with the target-vehicle feature f_TV to obtain the final feature of the target vehicle.
step 6.5 Using the final characteristics of the target vehicle Replacement of original agent featuresTarget vehicle characteristics in (a)Obtaining the final intelligent body characteristics。
Step 7, track decoding:
Step 7.1, the final feature of the target vehicle, the final agent features and the map features are spliced and mapped with two MLPs, which output the multi-modal trajectories of the target vehicle and their trajectory probabilities.
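A sketch of the two-headed decoder: one MLP regresses K candidate trajectories and a second scores them, with a softmax turning the scores into trajectory probabilities. The layer sizes, random weights, K = 6 modes and 5 future steps are illustrative assumptions:

```python
import math, random

def mlp(x, W1, W2):
    # two-layer perceptron: linear -> ReLU -> linear
    h = [max(0.0, sum(w * xi for w, xi in zip(row, x))) for row in W1]
    return [sum(w * hi for w, hi in zip(row, h)) for row in W2]

def rand_mat(r, c):
    return [[random.uniform(-0.5, 0.5) for _ in range(c)] for _ in range(r)]

random.seed(1)
d_in, d_hid, K, T_f = 6, 8, 6, 5  # K modes, T_f future steps (sizes assumed)
feat = [random.uniform(-1, 1) for _ in range(d_in)]  # spliced decoder input

# head 1: regress K candidate trajectories, each T_f (x, y) points
traj_flat = mlp(feat, rand_mat(d_hid, d_in), rand_mat(K * T_f * 2, d_hid))
trajs = [[(traj_flat[(k * T_f + t) * 2], traj_flat[(k * T_f + t) * 2 + 1])
          for t in range(T_f)] for k in range(K)]

# head 2: score each mode, softmax to trajectory probabilities
logits = mlp(feat, rand_mat(d_hid, d_in), rand_mat(K, d_hid))
m = max(logits)
ex = [math.exp(v - m) for v in logits]
probs = [v / sum(ex) for v in ex]
```

The K trajectories with their probabilities form the multi-modal prediction output of step 7.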
It should be noted that the high risk scenario described in this embodiment is a high risk emergency event, including but not limited to, a vehicle emergency braking event, a vehicle sharp turn or a sharp turn event, a vehicle rapid lane change event accompanied by a significant deceleration or acceleration, a vehicle distance rapid decrease, a vehicle sudden lane change overtaking, and a collision risk index (such as a collision time TTC) being smaller than a set threshold.
To verify the feasibility of the proposed trajectory prediction method, experiments were performed on the large-scale high-risk dataset ESP, comparing against existing mainstream advanced trajectory prediction algorithms on average precision (mAP6), minimum average displacement error (minADE6), minimum final displacement error (minFDE6) and miss rate (MR6); the comparison results are shown in Table 1.
Table 1 shows the experimental results of each predictive algorithm on the ESP high risk dataset
Method               | mAP6 | minADE6 | minFDE6 | MR6
MTR                  | 0.25 | 2.03    | 3.48    | 0.46
HIVT                 | 0.23 | 2.11    | 3.62    | 0.48
SIMPL                | 0.25 | 1.92    | 3.33    | 0.43
HPNet                | 0.24 | 2.11    | 3.32    | 0.43
TNT                  | 0.22 | 2.33    | 5.21    | 0.48
MTP+ESP              | 0.26 | 1.94    | 3.31    | 0.44
The invention (ours) | 0.36 | 1.21    | 2.01    | 0.28
As can be seen from Table 1, compared with the mainstream advanced trajectory algorithms, the proposed trajectory prediction method achieves substantial improvements in average precision (mAP6), minimum average displacement error (minADE6), minimum final displacement error (minFDE6) and miss rate (MR6), and can deliver accurate predictions in high-risk scenes.
The invention also provides electronic equipment, which comprises one or more processors and a memory, wherein the memory is used for storing one or more programs, and when the one or more programs are executed by the one or more processors, the one or more processors realize the track prediction method.
The present invention also provides a computer readable medium having stored thereon a computer program which when executed by a processor implements the trajectory prediction method described above.
Those skilled in the art will appreciate that all or part of the functions of the various methods/modules in the above embodiments may be implemented by hardware, or may be implemented by a computer program. When all or part of the functions in the above embodiments are implemented by means of a computer program, the program may be stored in a computer-readable storage medium, which may include a read-only memory, a random access memory, a magnetic disk, an optical disk, a hard disk, etc., and the program is executed by a computer to implement the functions. For example, the program is stored in the memory of the device, and when the program in the memory is executed by the processor, all or part of the functions described above can be realized.
In addition, when all or part of the functions in the above embodiments are implemented by means of a computer program, the program may be stored in a storage medium such as a server, another computer, a magnetic disk, an optical disk, a flash disk, or a removable hard disk, and may be downloaded or copied into the memory of a local device or used to update the local device's system version; when the program in that memory is executed by a processor, all or part of the functions in the above embodiments can likewise be realized.
The above description is only a specific embodiment of the present invention, but the scope of the present invention is not limited thereto. Any changes or substitutions that would be easily recognized by those skilled in the art within the technical scope of the present disclosure are intended to be covered by the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.