CN121043912B - A method for predicting the collaborative trajectory of risk, dynamics, and intent in high-risk scenarios - Google Patents

A method for predicting the collaborative trajectory of risk, dynamics, and intent in high-risk scenarios

Info

Publication number
CN121043912B
CN121043912B · Application CN202511608914.0A
Authority
CN
China
Prior art keywords
risk
feature
vehicle
target vehicle
dynamics
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202511608914.0A
Other languages
Chinese (zh)
Other versions
CN121043912A (en)
Inventor
郭洪艳
李朋龙
刘俊
孟庆瑜
刘嫣然
梁爽
Current Assignee
Jilin University
Original Assignee
Jilin University
Priority date
Filing date
Publication date
Application filed by Jilin University
Priority to CN202511608914.0A
Publication of CN121043912A
Application granted
Publication of CN121043912B
Status: Active
Anticipated expiration


Landscapes

  • Traffic Control Systems (AREA)

Abstract


This invention belongs to the field of traffic control and relates to a risk, dynamics, and intention collaborative trajectory prediction method for high-risk scenarios. The method comprehensively utilizes outward risk cues, vehicle dynamics features, and driving intention information (historical trajectory features). It achieves early perception of potential risks by constructing a prior-enhanced risk attention mechanism and a three-body interaction micro-graph module. It also employs a multi-scale dynamics temporal encoder to capture the vehicle's dynamic changes from short-term disturbances to long-term behavioral trends, and introduces an explicit intention recognition auxiliary task to semantically constrain and optimize the multi-modal trajectories. Through the collaborative modeling of these modules, the invention significantly improves the trajectory prediction accuracy and robustness of autonomous driving systems in complex high-risk situations such as sudden cut-ins and emergency braking, providing more reliable prior support for downstream decision-making and control modules, thereby effectively improving vehicle driving safety in complex traffic environments.

Description

Risk, dynamics and intention collaborative trajectory prediction method for high-risk scenarios
Technical Field
The invention belongs to the field of traffic control and relates to trajectory prediction for autonomous vehicles in high-risk scenarios, and in particular to a risk, dynamics and intention collaborative trajectory prediction method for autonomous driving in high-risk emergency scenarios.
Background
With the rapid development of autonomous driving technology, driving safety in complex and changing traffic environments has become an essential premise for high-level unmanned driving. Trajectory prediction is the key link connecting perception and decision planning in an autonomous driving system: its aim is to infer the future spatio-temporal states of surrounding traffic participants from historical motion trajectories and environmental information, so that potential risks can be recognized and avoided in advance. Existing trajectory prediction methods achieve good accuracy and stability in conventional steady driving scenarios (such as going straight, car following, and ordinary turning). In high-risk emergency scenarios, however, such as sudden high-speed cut-ins, abrupt braking and deceleration, or vehicles emerging from occlusion, vehicle motion signals exhibit strong nonlinearity and markedly non-stationary characteristics, and conventional models often suffer from prediction delay, error accumulation, and untimely response, making it difficult to meet the safety requirements of complex traffic environments. A trajectory prediction method with risk perception and dynamics understanding capabilities is therefore highly desirable: one that fuses multi-source temporal information, identifies and accurately predicts potential risks early in emergency high-risk scenarios, and provides more robust and reliable prior support for the decision and control modules of an autonomous driving system, thereby improving overall driving safety and system stability.
Disclosure of Invention
In view of the above technical problems and shortcomings, the invention provides a high-risk-scenario-oriented risk, dynamics and intention collaborative trajectory prediction method. The method comprehensively utilizes outward risk cues (including relative speed, relative distance, and time-to-collision), vehicle dynamics features (including acceleration, lateral acceleration, longitudinal acceleration, and yaw rate), and driving intention information (historical trajectory features). It achieves early perception of potential risks by constructing a prior-enhanced risk attention mechanism and a three-body interaction micro-graph module, captures the vehicle's dynamic changes from short-term disturbances to long-term behavioral trends with a multi-scale dynamics temporal encoder, and semantically constrains and optimizes the multi-modal trajectories by introducing an explicit intention recognition auxiliary task.
In order to achieve the above purpose, the invention adopts the following technical scheme:
A risk, dynamics and intention collaborative trajectory prediction method for high-risk scenarios, the method comprising the following steps:
Step 1. Collect data, extract the multidimensional motion features of the vehicles, and recombine the features to obtain trajectory features, outward risk cues, and dynamics features;
Step 2. Encode the trajectory features and the map features to obtain historical trajectory features and encoded map features;
Step 3. Perform temporal modeling on the outward risk cues to obtain a hidden-state sequence and potential risk scores; from the potential risk score at time t and the relative parameters between the autonomous vehicle and the target vehicle, compute the risk score at time t, normalize it to obtain risk-aware attention weights, and take the weighted sum of the hidden-state sequence with these weights to obtain the risk-aware temporal feature;
Construct a fully connected micro-graph from the outward risk feature vectors and update it with a graph attention mechanism; the sink node obtains the target vehicle's node embedding and the overall interaction situation, which are spliced and fused with the risk-aware temporal feature to obtain the final risk representation vector; this vector is fused with the historical trajectory features to obtain a fused feature;
Step 4. From the dynamics features, obtain short-term, mid-term, and long-term features, and process the three to obtain the multi-scale dynamics feature, which is fused with the feature from Step 3;
Step 5. Splice the fused feature with the map features and encode them to obtain agent features and map features;
Step 6. From the agent features and map features, obtain the probability that the target vehicle performs a lane-change cut-in, and from it the final target-vehicle feature; replace the target-vehicle feature within the agent features with this final feature to obtain the final agent features;
Step 7. Splice the final target-vehicle feature, the final agent features, and the map features, then decode and output the prediction result.
In this method, the data collected in Step 1 are first standardized, after which the multidimensional motion features of the vehicles are extracted. These include each vehicle's position information, speed information, acceleration information, lateral acceleration information, longitudinal acceleration information, the relative position, relative speed, and time-to-collision of each vehicle pair, yaw-rate information, and map features. During feature recombination, the vehicle's position, speed, and acceleration are spliced to obtain the trajectory feature; the relative positions, relative speeds, and times-to-collision of the vehicle pairs are spliced to obtain the outward risk cue; and the acceleration, lateral acceleration, longitudinal acceleration, and yaw rate are spliced to obtain the dynamics feature.
As a preferred embodiment of the invention, Step 3 uses a double gated recurrent unit to perform temporal modeling of the outward risk cues, obtaining the hidden-state sequence, and uses a multi-layer perceptron (MLP) for nonlinear mapping to obtain the potential risk scores.
Preferably, the risk score r_t at time t in Step 3 is expressed as:

r_t = s_t + α·(1 / TTC_t) + β·f_inv(d_t) + λ·f_shrink(d_t)

where s_t is the potential risk score at time t; α, β, and λ are learnable scalar coefficients used to balance the contributions of the different risk factors; f_inv is the distance-reciprocal function; f_shrink is the distance-contraction function; TTC_t is the time-to-collision between the autonomous vehicle and the target vehicle; d_t is their relative distance; and d_{t-1} − d_t is the change in distance;

The distance-reciprocal function is:

f_inv(d_t) = 1 / (d_t + ε)

where ε is a stabilizing constant;

The distance-contraction function is:

f_shrink(d_t) = max(0, d_{t-1} − d_t)

where d_{t-1} is the relative distance between the autonomous vehicle and the target vehicle at time t−1;

The risk scores r_t are then normalized with a softmax function to obtain the risk-aware attention weight at each time t.
In this method, when constructing the fully connected micro-graph in Step 3, the relative positions, relative speeds, and times-to-collision of the vehicles are first average-pooled along the time dimension, and outward risk feature vectors are organized around the target vehicle, the autonomous vehicle, and the closest preceding vehicle in the ego lane. A fully connected linear layer then maps the outward risk feature vectors to a higher dimension, and the mapped high-dimensional feature vectors serve as the nodes of a graph neural network, forming the fully connected micro-graph.
As a preference of the invention, each layer of the graph attention mechanism in Step 3 is updated in the standard GAT form. After the graph-attention updates, the sink node obtains the node embedding of the target vehicle, which reflects the target vehicle's state, together with the overall interaction situation, which synthesizes the three updated node embeddings of the target vehicle, the autonomous vehicle, and the closest preceding vehicle of the ego lane.
As a preferred embodiment of the present invention, the short-term features in Step 4 are obtained by cutting the dynamics features into windows along the historical time steps with a stride of 1 and a first window size, extracting the resulting windows, encoding each window with a CNN encoder, and splicing the per-window encodings into the short-term feature;
The mid-term features are obtained by cutting the dynamics features a second time along the historical time steps with a stride of 5 and a second window size, encoding each extracted window with a bidirectional gated recurrent unit, and splicing the hidden states at the last time step of each window into the mid-term feature;
As a preferred aspect of the invention, the long-term features in Step 4 are obtained by encoding the dynamics features with a bidirectional LSTM, strengthening the temporal dependencies of the encoding with a multi-head self-attention mechanism, and finally applying average pooling. The short-term, mid-term, and long-term features are then spliced, memory-aware information interaction is performed, and average pooling yields the multi-scale dynamics feature.
As a preferred aspect of the present invention, Step 6 specifically comprises the following steps:
Step 6.1. Average-pool the agent features and the map features to obtain two pooled features, and at the same time extract the target vehicle's feature from the agent features;
Step 6.2. Splice the two average-pooled features from Step 6.1 with the target vehicle's feature, and apply a nonlinear mapping to obtain the feature vector for the target vehicle performing a lane-change cut-in;
Step 6.3. From this feature vector, output the probability that the target vehicle performs a lane-change cut-in by applying the Sigmoid activation function to the mapped scalar logit;
Step 6.4. Fuse the lane-change cut-in probability with the target vehicle's feature to obtain the final target-vehicle feature;
Step 6.5. Replace the target-vehicle feature in the agent features with the final target-vehicle feature to obtain the final agent features.
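Steps 6.3 and 6.4 can be sketched as follows. Gating the target-vehicle feature by multiplying it with the cut-in probability is one plausible fusion, since the patent does not fix the fusion operator; the logit and feature values below are illustrative placeholders:

```python
import math

def cutin_probability(logit):
    """Sigmoid over the scalar logit produced by the cut-in MLP head."""
    return 1.0 / (1.0 + math.exp(-logit))

p = cutin_probability(1.2)             # probability of a lane-change cut-in
f_tv = [0.5, -0.3, 0.8]                # target-vehicle feature (illustrative values)
f_tv_final = [p * v for v in f_tv]     # one plausible fusion: probability-gated feature
```

Multiplicative gating keeps the feature magnitude proportional to how confident the auxiliary head is that a cut-in is underway; concatenating the probability as an extra channel would be an equally plausible alternative.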
As a preference of the invention, Step 7 splices the final target-vehicle feature, the final agent features, and the map features, maps the result with two MLPs, and outputs the target vehicle's multi-modal trajectories and their trajectory probabilities.
The invention also provides a computer-readable medium on which a computer program is stored; when executed by a processor, the program implements the above risk, dynamics and intention collaborative trajectory prediction method for high-risk scenarios.
The advantages and beneficial effects of the invention are as follows:
(1) For emergency high-risk scenarios such as sudden cut-ins, abrupt braking and deceleration, and vehicles emerging from occlusion, the invention establishes a collaborative prediction framework that fuses risk cues, dynamics information, and driving intention. Through the prior-enhanced risk attention mechanism and three-body interaction micro-graph modeling, highly sensitive risk perception is achieved at the early stage of rapid risk accumulation, effectively improving the timeliness and accuracy of trajectory prediction and realizing intelligent prediction and safety enhancement for high-risk scenarios, with considerable engineering application value and significance for wider adoption.
(2) The invention proposes multi-scale dynamics temporal encoding, which hierarchically models the vehicle's non-stationary motion process by jointly exploiting short-term disturbance features, mid-term stable trends, and long-term behavioral patterns, so that the model understands the vehicle's dynamic evolution on different time scales and the robustness and generalization of trajectory prediction under complex traffic flow are improved.
(3) According to the invention, an explicit intention recognition auxiliary task is introduced into a traditional track prediction framework, and semantic constraint and optimization are carried out by combining driving intention (such as lane change cut-in, sharp turn and the like), so that the predicted track is more in line with actual driving intention and traffic rules, and potential conflict and collision risk can be recognized in advance.
(4) The method can be widely applied to complex scenarios such as urban roads, highways, and intersections, and shows particularly strong stability and response speed in sudden high-risk scenarios. Through its end-to-end collaborative optimization structure, the invention provides reliable risk priors and behavior prediction support for an autonomous driving system, markedly improving overall driving safety.
(5) Compared with traditional models based on historical-trajectory regression or social-compatibility assumptions, the invention's risk attention and structured interaction modeling mechanism frees the model from assuming cooperative traffic behavior: it can actively identify dangerous game situations, achieve earlier and more accurate risk warning and dynamic response, and improve the system's robustness and the quality of safety decisions in extreme scenarios.
(6) The invention can be directly deployed on the existing automatic driving hardware platform, mainly relies on environment and motion data acquired by vehicle-mounted sensors (laser radar, camera, millimeter wave radar, GPS and the like), does not need to add expensive hardware modules or additional sensing equipment, and has good portability and engineering application value.
(7) On the key evaluation metrics, the proposed method clearly outperforms existing mainstream trajectory algorithms such as MTR, HiVT, and SIMPL, fully demonstrating its feasibility.
Drawings
Other objects and attainments together with a more complete understanding of the invention will become apparent and appreciated by referring to the following description taken in conjunction with the accompanying drawings. In the drawings:
Fig. 1 is a diagram showing the positional relationship among the autonomous vehicle (Ego), the target vehicle (TV), the closest preceding vehicle in the ego lane (CIPV), and the surrounding vehicles (EV) provided by the present invention;
FIG. 2 is a flowchart of the risk, dynamics and intention collaborative trajectory prediction method for high-risk scenarios provided by the present invention;
FIG. 3 is a flowchart of the data processing of the prior-enhanced risk attention mechanism and the three-body interaction micro-graph portion of Step 3 of the present invention.
Detailed Description
The application is described in detail below with reference to the accompanying drawings so that those skilled in the art may better understand its technical solutions and advantages; the description is not intended to limit the scope of the application.
As shown in fig. 1 to 3, the present embodiment provides a risk, dynamics and intention collaborative trajectory prediction method for a high risk scenario, which includes the following steps:
Step 1. Data acquisition and feature extraction:
Step 1.1. High-risk trajectory data screening:
Driving data of the autonomous vehicle are collected in a real road traffic environment, and information about surrounding vehicles is obtained through on-board sensors (such as lidar, millimeter-wave radar, cameras, and GPS sensors); the raw data are preprocessed, and trajectory segments containing obviously abnormal driving behaviors or collision risks are screened out.
In this embodiment, the information about surrounding vehicles includes their trajectory information and the road type, where the trajectory information comprises the coordinates of each vehicle's historical trajectory together with its speed, acceleration, and steering angle, and the surrounding vehicle types include bicycles, electric vehicles, cars, and trucks.
Step 1.2. Standardize the screened high-risk emergency trajectory segments (segments containing obviously abnormal driving behaviors or collision risks), including time synchronization and coordinate-system synchronization; during time synchronization, the multiple on-board sensors are synchronized using pulse signals provided by GPS.
Step 1.3. Extract the multidimensional motion features of all vehicles in the scene from the standardized trajectory data, including each vehicle's position information, speed information, acceleration information, lateral acceleration information, longitudinal acceleration information, the relative position, relative speed, and time-to-collision (TTC) of each vehicle pair, yaw-rate information, and map features;
where the position of the i-th vehicle collects its x and y coordinates at each of the total historical time steps, and likewise the speed, acceleration, lateral acceleration, longitudinal acceleration, and yaw rate of the i-th vehicle collect the corresponding quantity at each historical time step;
The relative positions cover the vehicle pairs (TV, EV), (TV, CIPV), (Ego, TV), (EV, CIPV), (Ego, EV), and (Ego, CIPV), where TV denotes the target vehicle, EV a surrounding vehicle, Ego the autonomous vehicle, and CIPV the closest preceding vehicle in the ego vehicle's current lane; each relative position (relative distance) collects the distance between the two vehicles at every one of the total historical time steps, and the same holds for the other relative-position parameters;
The relative speeds cover the same vehicle pairs, each collecting the relative speed between the two vehicles at every historical time step, and the same holds for the other relative-speed parameters;
The times-to-collision cover the same vehicle pairs, each collecting the time-to-collision between the two vehicles at every historical time step, and the same holds for the other time-to-collision parameters.
Step 1.4. Feature recombination: splice the vehicle's position, speed, and acceleration to obtain the trajectory feature; splice the relative positions, relative speeds, and times-to-collision (TTC) of the vehicle pairs to obtain the outward risk cue; and splice the acceleration, lateral acceleration, longitudinal acceleration, and yaw rate to obtain the dynamics feature;
Specifically, in this embodiment, the position, speed, and acceleration information of the vehicle are concatenated per time step into the trajectory feature, with [ · ; · ] denoting the vector concatenation operation.
For the outward risk cue, the relative position, relative speed, and time-to-collision at each time step t are first concatenated into the outward risk cue at time t, and the cues over all of the total historical time steps are stacked into the outward risk cue sequence.
The acceleration, lateral acceleration, longitudinal acceleration, and yaw rate are concatenated per time step into the dynamics feature.
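The feature recombination in Step 1.4 amounts to per-time-step concatenation. A minimal numpy sketch with assumed dimensions (T = 20 historical steps, six vehicle pairs for the relative quantities; all array contents are random placeholders for sensed data):

```python
import numpy as np

T = 20                                    # assumed number of historical time steps
rng = np.random.default_rng(0)
pos, vel, acc = (rng.standard_normal((T, 2)) for _ in range(3))
lat_acc, lon_acc, yaw_rate = (rng.standard_normal((T, 1)) for _ in range(3))
rel_pos, rel_vel, ttc = (rng.standard_normal((T, 6)) for _ in range(3))  # 6 vehicle pairs

# Trajectory feature: position ++ speed ++ acceleration per time step
f_traj = np.concatenate([pos, vel, acc], axis=-1)                   # (T, 6)
# Outward risk cue: relative position ++ relative speed ++ TTC per time step
f_risk = np.concatenate([rel_pos, rel_vel, ttc], axis=-1)           # (T, 18)
# Dynamics feature: acceleration ++ lateral/longitudinal acceleration ++ yaw rate
f_dyn = np.concatenate([acc, lat_acc, lon_acc, yaw_rate], axis=-1)  # (T, 5)
```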
Step 2. Trajectory feature and map feature encoding:
Step 2.1. First, encode the trajectory features and map features with a PointNet-based polyline encoder to obtain the historical trajectory features and the encoded map features.
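A PointNet-style polyline encoder applies a shared per-point MLP and then pools over the points of each polyline. The sketch below uses a two-layer projection with max-pooling; the sizes and random weights are illustrative, since the patent only names the encoder family:

```python
import numpy as np

# Shared point-wise MLP + max-pool over each polyline's points.
rng = np.random.default_rng(0)
n_poly, n_pts, d_in, d_h = 4, 10, 6, 16
polylines = rng.standard_normal((n_poly, n_pts, d_in))   # trajectory or map polylines

W1 = rng.standard_normal((d_in, d_h)) * 0.3              # untrained illustrative weights
W2 = rng.standard_normal((d_h, d_h)) * 0.3
hidden = np.maximum(polylines @ W1, 0.0) @ W2            # per-point MLP with ReLU
poly_feat = hidden.max(axis=1)                           # max-pool over points, (n_poly, d_h)
```

Max-pooling makes the polyline feature invariant to the number and ordering of its points, which is the property PointNet-style encoders are chosen for.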
Step 3. Risk modeling of the outward risk cues:
Step 3.1. Use a double gated recurrent unit (GRU, hidden dimension 256) to perform temporal modeling of the outward risk cues, obtaining the hidden-state sequence, and use a simple multi-layer perceptron (MLP) for nonlinear mapping to obtain the potential risk scores,
where the hidden-state sequence collects the encoded hidden state at each historical time t, and the potential risk scores collect the risk potential score at each historical time t.
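Step 3.1 can be sketched with a minimal, untrained single-layer GRU in numpy. The patent specifies a double GRU with hidden dimension 256; the single layer, hidden size 8, random parameters, and the linear head standing in for the MLP are simplifications for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
T, d_in, d_h = 20, 18, 8
x = rng.standard_normal((T, d_in))        # outward risk cue sequence (placeholder)

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

# Randomly initialised GRU parameters (learned jointly in the real model)
Wz, Uz = rng.standard_normal((d_h, d_in)) * 0.1, rng.standard_normal((d_h, d_h)) * 0.1
Wr, Ur = rng.standard_normal((d_h, d_in)) * 0.1, rng.standard_normal((d_h, d_h)) * 0.1
Wh, Uh = rng.standard_normal((d_h, d_in)) * 0.1, rng.standard_normal((d_h, d_h)) * 0.1

h, H = np.zeros(d_h), []
for t in range(T):
    z = sigmoid(Wz @ x[t] + Uz @ h)                 # update gate
    r = sigmoid(Wr @ x[t] + Ur @ h)                 # reset gate
    h_cand = np.tanh(Wh @ x[t] + Uh @ (r * h))      # candidate state
    h = (1.0 - z) * h + z * h_cand                  # gated state update
    H.append(h)
H = np.stack(H)                                     # hidden-state sequence, (T, d_h)
s = H @ rng.standard_normal(d_h)                    # potential risk score per step, (T,)
```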
Step 3.2. Using a prior risk scoring function, combine the potential risk score s_t at time t with the time-to-collision TTC_t, relative distance d_t, and distance change d_{t-1} − d_t between the autonomous vehicle (Ego) and the target vehicle (TV) to obtain the risk score r_t at time t:

r_t = s_t + α·(1 / TTC_t) + β·f_inv(d_t) + λ·f_shrink(d_t)

where α, β, and λ are learnable scalar coefficients used to balance the contributions of the different risk factors, f_inv is the distance-reciprocal function, and f_shrink is the distance-contraction function. The physical prior terms are defined as follows:
TTC constraint: the shorter the time-to-collision between the ego vehicle (autonomous vehicle) and the target vehicle (TV), the higher the risk score at the current time t;
Distance-reciprocal function:

f_inv(d_t) = 1 / (d_t + ε)

where ε is a stabilizing constant; when the inter-vehicle distance is large the risk contribution is limited, whereas when the distance gradually shortens to a very small value, f_inv amplifies sharply, making the model particularly sensitive to imminent-collision scenes.
Distance-contraction function:

f_shrink(d_t) = max(0, d_{t-1} − d_t)

This function is positive when the distance between the two vehicles is shrinking, and the faster the shrinkage the larger its value, representing a rapid accumulation of risk; when the distance remains unchanged or increases, there is no newly added closing risk.
Step 3.3. Normalize the risk scores r_t with a softmax function to obtain the risk-aware attention weight a_t at each time t:

a_t = exp(r_t) / Σ_τ exp(r_τ)

Step 3.4. Take the weighted sum of the hidden-state sequence h_t obtained in Step 3.1 with the risk-aware attention weights a_t obtained in Step 3.3 to obtain the risk-aware temporal feature:

c_risk = Σ_t a_t · h_t
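Steps 3.2 to 3.4 can be sketched as follows under one plausible reading of the prior risk scoring function (a sum of the potential score with TTC, reciprocal-distance, and distance-contraction terms; the patent's exact combination may differ). The coefficients alpha/beta/lam, eps, and all input values are illustrative placeholders for the learned and sensed quantities:

```python
import numpy as np

def risk_score(s, ttc, d, d_prev, alpha=1.0, beta=1.0, lam=1.0, eps=1e-3):
    """Prior-enhanced risk score at one time step (coefficients learnable in the model)."""
    inv_dist = 1.0 / (d + eps)            # amplifies as the gap closes
    shrink = max(0.0, d_prev - d)         # positive only while the gap is shrinking
    return s + alpha / (ttc + eps) + beta * inv_dist + lam * shrink

T = 5
s_pot = np.array([0.1, 0.2, 0.4, 0.9, 1.5])        # potential risk scores (placeholder)
ttc = np.array([8.0, 6.0, 4.0, 2.0, 1.0])          # time-to-collision (s)
dist = np.array([30.0, 26.0, 20.0, 12.0, 2.0])     # relative distance (m)
r = np.array([risk_score(s_pot[t], ttc[t], dist[t], dist[t - 1] if t else dist[0])
              for t in range(T)])

a = np.exp(r - r.max()); a /= a.sum()              # softmax -> attention weights
H = np.ones((T, 8))                                # stand-in hidden-state sequence
c_risk = a @ H                                     # risk-aware temporal feature
```

With the closing-gap inputs above, the latest (closest, fastest-shrinking) step receives the largest weight, which is exactly the early-sensitivity the prior terms are meant to inject.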
Step 3.5. Average-pool the relative positions, relative speeds, and times-to-collision (TTC) of the vehicle pairs along the time dimension, and organize the outward risk feature vectors around the target vehicle (TV), the autonomous vehicle (Ego), and the closest preceding vehicle in the ego lane (CIPV);
Specifically, the relative positions, relative speeds, and times-to-collision are averaged along the time dimension to obtain the pooled relative positions, relative speeds, and times-to-collision;
The outward risk feature vectors are then organized by TV, Ego, and CIPV: each vehicle's vector collects its pooled relative quantities with respect to the other two;
Specifically, in this embodiment, the subscript avg denotes average pooling, and the sign of a pooled quantity indicates its direction: for example, the pooled relative position between the target vehicle and the closest preceding vehicle of the ego lane takes a positive value, while the pooled relative position between the target vehicle and the autonomous vehicle takes a negative value.
Step 3.6. Map the outward risk feature vectors to a high dimension with a simple fully connected linear layer γ,
where γ is the fully connected linear layer.
Step 3.7. The mapped high-dimensional feature vectors serve as the nodes of a graph neural network and form a fully connected micro-graph, over which several graph-attention layers are stacked; each layer's update follows the standard GAT form:

e_ij = LeakyReLU(a^T [W h_i^(l) ; W h_j^(l)])
α_ij = exp(e_ij) / Σ_{k∈N(i)} exp(e_ik)
h_i^(l+1) = σ( Σ_{j∈N(i)} α_ij · W h_j^(l) )

where a is a trainable weight vector, N(i) is the set of neighbor nodes of node i, h_i^(l) is the mapped feature of node i at layer l, e_ij expresses the relationship of the edge between node i and neighbor node j, α_ij is the update weight, and h_i^(l+1) is the updated feature of node i at layer l+1.
Step 3.8. After the graph-attention updates, the sink node obtains the node embedding of the target vehicle together with the overall interaction situation,
where the node embedding reflects the target vehicle's state and the overall interaction situation synthesizes the updated node embeddings of the target vehicle, the autonomous vehicle, and the closest preceding vehicle of the ego lane;
step 3.9. Time characterization of risk perception in step 3.4 Node embedding with the target vehicle in step 3.8Overall interaction situationPerforming splicing fusion, and performing nonlinear mapping by using a simple multi-layer perceptron (MLP) to obtain a final risk representation vectorCan be expressed as:
In the formula, In order to splice the features after fusion,For the final risk-representative vector,Representing a simple multi-layer perceptron consisting of a fully connected layer and a Relu activation function.
Step 3.10. The historical track obtained in step 2.1 is characterizedAnd the final risk characterization vector obtained in step 3.9Fusion using a simple MLP to obtain features
Step 4, multi-scale temporal encoding of dynamics features:
Step 4.1. The dynamics feature is cut into windows along the historical time steps with a stride of 1 and a first window size of w s (w s = 5 in this embodiment), extracting t h − w s + 1 windows; each window is encoded by a 1D-CNN encoder and the results are concatenated to obtain the short-term feature, which can be expressed as:
In the formula, the 1D-CNN encoder consists of two one-dimensional convolution layers, batch normalization (BN), the nonlinear activation function ReLU, and an average pooling layer; the remaining terms denote the short-term feature of the k-th window and the k-th window itself.
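The stride-1 window cutting of step 4.1 can be sketched as follows; the per-window 1D-CNN encoder is replaced by a simple mean as a stand-in, and t_h = 20 is an illustrative value (the claims fix w_s = 5).

```python
import numpy as np

t_h, w_s = 20, 5          # history steps and first window size (w_s = 5)
D = np.arange(t_h * 2, dtype=float).reshape(t_h, 2)  # toy dynamics sequence

# Stride-1 window cutting yields t_h - w_s + 1 windows of length w_s;
# each window would then pass through the 1D-CNN encoder in the method.
windows = np.stack([D[k:k + w_s] for k in range(t_h - w_s + 1)])

# Stand-in for the per-window encoder: a mean over the window (illustrative
# only -- the patent uses two 1D-conv layers + BN + ReLU + average pooling).
F_w = windows.mean(axis=1)            # one short-term feature per window
```

The mid-term cutting of step 4.2 works the same way with a stride of 5 and window size w_m = 15.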
Step 4.2. The dynamics feature is cut into windows a second time along the historical time steps, with a stride of 5 and a second window size of w m (w m = 15 in this embodiment); each extracted window is encoded by a bidirectional gated recurrent unit (BiGRU), and the hidden states at the last time step of each window are concatenated to obtain the mid-term feature, which can be expressed as:
In the formula, the terms denote the k-th window, the hidden state at the last time step of the k-th window, the bidirectional gated recurrent unit, and the last time step of the window.
Step 4.3. The dynamics feature is encoded with a bidirectional LSTM (BiLSTM); a multi-head self-attention mechanism then strengthens the dependencies in the encoded sequence, and an average pooling operation finally yields the long-term feature, which can be expressed as:
In the formula, the encoder comprises three stacked bidirectional LSTM layers, followed by the multi-head self-attention mechanism and the average pooling operation.
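The attention-then-pool part of step 4.3 can be sketched with a single attention head (the method uses multi-head attention; one head keeps the sketch short). The BiLSTM output is replaced by random features, and all dimensions are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

T, d = 20, 16
H_L = rng.normal(size=(T, d))        # stand-in for the BiLSTM outputs

# Single-head scaled dot-product self-attention over the sequence.
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
Q, K, V = H_L @ Wq, H_L @ Wk, H_L @ Wv
A = softmax(Q @ K.T / np.sqrt(d))    # (T, T) attention map
H_att = A @ V                        # dependence-strengthened features

F_L = H_att.mean(axis=0)             # average pooling -> long-term feature
```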
Step 4.4. The short-term feature from step 4.1, the mid-term feature from step 4.2, and the long-term feature from step 4.3 are concatenated; a multi-head self-attention mechanism then performs memory-information interaction on the concatenated features, and average pooling yields the multi-scale dynamics feature, which can be expressed as:
Step 4.5. The feature obtained in step 3.10 and the multi-scale dynamics feature obtained in step 4.4 are fused using an MLP to obtain a fused feature, which can be expressed as:
Step 5, feature encoding:
Step 5.1. The feature obtained in step 4.5 and the map feature are concatenated and encoded with a Transformer encoder to obtain the agent feature and the updated map feature, which can be expressed as:
In the formula, the Transformer encoder layers consist of a multi-head self-attention mechanism, a feed-forward fully connected network, residual connections, and layer normalization.
Step 6, intention recognition auxiliary task:
Step 6.1. The agent feature and the map feature from step 5.1 are average-pooled to obtain pooled features; at the same time, the target vehicle's feature is extracted from the agent feature.
Specifically, average pooling the agent feature and the map feature yields the pooled features, which can be expressed as:
Step 6.2. The pooled features obtained in step 6.1 and the target vehicle's feature are concatenated, and one MLP performs a nonlinear mapping to obtain the feature vector for the target vehicle's lane-change cut-in (forced lane change / cutting in), which can be expressed as:
where the term denotes the concatenated features;
Step 6.3. Based on the cut-in feature vector, the probability that the target vehicle performs a lane-change cut-in is output, which can be expressed as:
where the activation function is the Sigmoid function.
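The cut-in probability of step 6.3 reduces to a Sigmoid applied to the cut-in feature. In the sketch below, an assumed scalar projection turns the feature vector into a logit first, since the claim writes the probability directly as the Sigmoid of the feature vector; the projection weights are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

d = 16
F_cut_in = rng.normal(size=(d,))   # cut-in feature vector (illustrative)
w = rng.normal(size=(d,))          # assumed scalar projection weights

# A scalar logit followed by the Sigmoid gives the cut-in probability,
# guaranteed to lie in (0, 1).
P_cut = sigmoid(w @ F_cut_in)
```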
Step 6.4. The target vehicle's cut-in probability is fused with the target vehicle's feature to obtain the target vehicle's final feature, which can be expressed as:
Step 6.5. The target vehicle's final feature replaces the target vehicle feature in the original agent feature, yielding the final agent feature.
Step 7, trajectory decoding:
Step 7.1. The target vehicle's final feature, the final agent feature, and the map feature are concatenated and mapped by two MLPs, outputting the target vehicle's multi-modal trajectories and their trajectory probabilities, which can be expressed as:
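The two-head decoding of step 7.1 can be sketched as follows; linear maps stand in for the two MLPs, and the mode count K = 6 (matching the subscript of the Table 1 metrics) and the future horizon are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

d, K, T_f = 32, 6, 30                 # feature dim, modes, future steps (illustrative)
feat = rng.normal(size=(d,))          # concatenated TV / agent / map feature

# Two linear heads stand in for the two MLPs: one regresses K candidate
# trajectories, the other scores a probability for each mode.
W_traj = rng.normal(size=(d, K * T_f * 2))
W_prob = rng.normal(size=(d, K))

trajs = (feat @ W_traj).reshape(K, T_f, 2)   # K multi-modal (x, y) tracks
probs = softmax(feat @ W_prob)               # per-mode probabilities, sum to 1
```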
It should be noted that the high-risk scenario described in this embodiment refers to a high-risk emergency event, including but not limited to: emergency braking; a sharp turn; a rapid lane change accompanied by significant deceleration or acceleration; a rapidly decreasing inter-vehicle distance; a sudden lane-change overtaking maneuver; or a collision risk index (such as time-to-collision, TTC) falling below a set threshold.
To verify the feasibility of the proposed trajectory prediction method, experiments were performed on the large-scale high-risk dataset ESP and compared against existing mainstream advanced trajectory prediction algorithms in terms of average precision (mAP), minimum average displacement error (minADE), minimum final displacement error (minFDE), and miss rate (MR). The comparison results are shown in Table 1.
Table 1 shows the experimental results of each predictive algorithm on the ESP high risk dataset
Method mAP6 minADE6 minFDE6 MR6
MTR 0.25 2.03 3.48 0.46
HIVT 0.23 2.11 3.62 0.48
SIMPL 0.25 1.92 3.33 0.43
HPNet 0.24 2.11 3.32 0.43
TNT 0.22 2.33 5.21 0.48
MTP+ESP 0.26 1.94 3.31 0.44
Ours (the invention) 0.36 1.21 2.01 0.28
As Table 1 shows, compared with mainstream advanced trajectory prediction algorithms, the proposed method achieves substantial improvements in average precision (mAP), minimum average displacement error (minADE), minimum final displacement error (minFDE), and miss rate (MR), and can realize accurate prediction in high-risk scenarios.
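For reference, the minADE and minFDE metrics reported in Table 1 take, over the K predicted modes, the minimum of the per-mode average and final displacement errors; a minimal sketch:

```python
import numpy as np

def min_ade_fde(pred, gt):
    """minADE/minFDE over K predicted modes.

    pred: (K, T, 2) candidate trajectories; gt: (T, 2) ground truth.
    """
    err = np.linalg.norm(pred - gt[None], axis=-1)  # (K, T) per-step distance
    min_ade = err.mean(axis=1).min()                # best average displacement
    min_fde = err[:, -1].min()                      # best final displacement
    return min_ade, min_fde

# Toy check: one of two modes matches the ground truth exactly,
# so both metrics are driven to zero by the min over modes.
gt = np.stack([np.arange(5.0), np.zeros(5)], axis=1)   # straight-line track
pred = np.stack([gt, gt + 1.0])                        # perfect mode + offset mode
made, mfde = min_ade_fde(pred, gt)
```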
The invention also provides an electronic device comprising one or more processors and a memory storing one or more programs; when the one or more programs are executed by the one or more processors, the processors implement the trajectory prediction method described above.
The present invention also provides a computer-readable medium having stored thereon a computer program which, when executed by a processor, implements the trajectory prediction method described above.
Those skilled in the art will appreciate that all or part of the functions of the methods/modules in the above embodiments may be implemented by hardware or by a computer program. When implemented by a computer program, the program may be stored in a computer-readable storage medium, which may include read-only memory, random access memory, a magnetic disk, an optical disk, or a hard disk, and the functions are realized when a computer executes the program. For example, the program may be stored in the memory of the device, and all or part of the functions described above are realized when the processor executes the program in the memory.
In addition, when all or part of the functions in the above embodiments are implemented by a computer program, the program may be stored in a storage medium such as a server, another computer, a magnetic disk, an optical disk, a flash drive, or a removable hard disk, and may be downloaded or copied into the memory of a local device, or used to update the local device's system version; the functions of the above embodiments are realized when the processor of the local device executes the program in its memory.
The above description is only a specific embodiment of the present invention, but the scope of protection is not limited thereto. Any changes or substitutions readily conceivable by those skilled in the art within the technical scope of this disclosure are covered by the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (9)

1. The high-risk scene-oriented risk, dynamics and intention collaborative trajectory prediction method is characterized by comprising the following steps of:
Step 1, collecting data, extracting multidimensional motion characteristics of a vehicle, and carrying out characteristic recombination to obtain a track characteristic F, an outward risk clue R and a dynamics characteristic D;
Step 2, encoding the track characteristic F and the map characteristic M to obtain a historical track characteristic F 1 and a map characteristic M 1;
Step 3, performing temporal modeling on the outward risk cue R to obtain a hidden state sequence H and a potential risk score S, calculating a risk score from the potential risk score at time t and the relative parameters between the autonomous vehicle and the target vehicle, and carrying out weighted summation of the hidden state sequence H with the risk-aware attention weight α t to obtain a risk-aware temporal feature z;
Constructing a fully connected micro-graph based on the outward risk feature vectors, updating it through a graph attention mechanism, obtaining the node embedding of the target vehicle and the overall interaction situation at the sink node, concatenating and fusing these with the risk-aware temporal feature z to obtain a final risk representation vector z *, and fusing the final risk representation vector z * with the historical trajectory feature F 1 to obtain a fused feature;
Step 4, obtaining short-term, mid-term, and long-term features from the dynamics feature D, processing the three features to obtain a multi-scale dynamics feature F D, and fusing F D with the feature obtained in step 3 to obtain a feature F 2;
Step 5, splicing the feature F 2 and the map feature M 1, and then encoding to obtain an agent feature F agent and a map feature M 2;
Step 6, obtaining the probability P cut of the target vehicle performing a lane change cut in based on the agent feature F agent and the map feature M 2, thereby obtaining the final feature of the target vehicle, and replacing the target vehicle feature F TV in the agent feature F agent with this final feature to obtain the final agent feature;
Step 7, concatenating the target vehicle's final feature, the final agent feature, and the map feature M 2, then decoding and outputting the prediction result;
The method comprises the steps of firstly carrying out standardized processing after data acquisition in the step 1, and then extracting multidimensional motion characteristics of a vehicle, wherein the multidimensional motion characteristics of the vehicle comprise position information, speed information, acceleration information, transverse acceleration information, longitudinal acceleration information of the vehicle, relative position, relative speed and collision time of each vehicle, yaw rate information and map characteristics;
the fully connected micro-graph takes, as the nodes of the graph neural network, the outward risk feature vectors organized and generated from the target vehicle, the autonomous vehicle, and the nearest vehicle ahead in the host vehicle's driving lane;
The short-term and mid-term features are obtained by cutting the dynamics feature into windows along the historical time steps and then encoding and concatenating, with the stride for the short-term features set to 1 and the stride for the mid-term features set to 5 with a window size of 15; the long-term feature is encoded using a bidirectional LSTM.
2. The method for predicting risk, dynamics and intention collaborative trajectories for high-risk scenes according to claim 1, wherein in step 3 a two-layer gated recurrent unit is used to perform temporal modeling on the outward risk cue R to obtain the hidden state sequence H, and a multi-layer perceptron MLP performs a nonlinear mapping to obtain the potential risk score S.
3. The high-risk-scenario-oriented risk, dynamics and intention collaborative trajectory prediction method according to claim 1, wherein the risk score at time t in step 3 is expressed as:
where α, β, γ are learnable scalar coefficients balancing the contributions of the different risk factors, applied to the reciprocal-of-distance function and the distance-contraction function; the remaining quantities are the time-to-collision between the autonomous vehicle and the target vehicle, their relative distance, and the change in that distance;
The reciprocal function of distance is:
wherein ε is a stability constant;
The distance contraction function is:
where the term denotes the relative distance between the autonomous vehicle and the target vehicle at time t−1;
the softmax function is then used to normalize the risk score at time t, yielding the risk-aware attention weight α t at each time t.
4. The method for predicting risk, dynamics and intention collaborative trajectories for high-risk scenes according to claim 1, wherein, when constructing the fully connected micro-graph in step 3, the relative positions, relative velocities, and collision times of the vehicles are average-pooled along the time dimension; the outward risk feature vectors are organized and generated from the target vehicle, the autonomous vehicle, and the nearest vehicle ahead in the host vehicle's driving lane; a fully connected linear layer then maps the outward risk feature vectors to a high-dimensional space, and the mapped high-dimensional feature vectors serve as the nodes of the graph neural network to form the fully connected micro-graph.
5. The method for predicting risk, dynamics and intention collaborative trajectories for high-risk scenes according to claim 1, wherein each layer of the graph attention mechanism in step 3 follows the standard GAT update form; after the update by the graph attention mechanism, the sink node obtains a node embedding g T of the target vehicle, and the overall interaction situation g all is expressed as:
In the formula, g T reflects the node embedding of the target vehicle, g all synthesizes the overall interaction situation of the three, and h T, h E, h C denote the updated node-embedding features of the target vehicle, the autonomous vehicle, and the nearest preceding vehicle in the host vehicle's lane.
6. The method for predicting risk, dynamics and intention collaborative trajectories for high-risk scenes according to claim 1, wherein the short-term feature F W in step 4 is obtained by cutting the dynamics feature D into windows along the historical time steps t h with a stride of 1 and a first window size of w s = 5, extracting t h − w s + 1 windows, encoding each window with a 1D-CNN encoder, and concatenating the results;
the mid-term feature F m is obtained by cutting the dynamics feature D into windows a second time along the historical time steps with a stride of 5 and a second window size of w m = 15; each extracted window is encoded with a bidirectional gated recurrent unit, and the hidden states at the last time step of each window are concatenated to obtain the mid-term feature F m.
7. The method for predicting risk, dynamics and intention collaborative trajectories for high-risk scenes according to claim 1, wherein the long-term feature F L in step 4 is obtained by encoding the dynamics feature D with a bidirectional LSTM to obtain H L, strengthening the dependencies in H L with a multi-head self-attention mechanism to obtain H' L, and finally performing an average pooling operation to obtain the long-term feature F L; the three features are then concatenated, passed through memory-information interaction, and average-pooled to obtain the multi-scale dynamics feature F D.
8. The high risk scenario-oriented risk, dynamics and intent co-trajectory prediction method according to claim 1, wherein step 6 specifically comprises the steps of:
Step 6.1. The agent feature F agent and the map feature M 2 are average-pooled to obtain pooled features, and at the same time the target vehicle feature F TV is extracted from the agent feature F agent;
Step 6.2. The two average-pooled features obtained in step 6.1 are concatenated with the target vehicle feature F TV, and the cut-in feature vector F cut_in of the target vehicle is obtained after a nonlinear mapping;
Step 6.3. Based on the feature vector F cut_in of the target vehicle occurrence lane change cut in, the probability P cut of the target vehicle occurrence lane change cut in is output, where the expression is:
Pcut=σ(Fcut_in);
wherein sigma is a Sigmoid activation function;
Step 6.4. Fusing the probability P cut of the occurrence of the lane change cut in of the target vehicle with the characteristic F TV of the target vehicle to obtain the final characteristic of the target vehicle
Step 6.5 Using the final characteristics of the target vehicleReplacing the target vehicle feature F TV in the agent feature F agent to obtain the final agent feature
9. A computer readable medium having stored thereon a computer program, wherein the computer program when executed by a processor implements the high risk scenario oriented risk, dynamics and intent co-trajectory prediction method of any one of claims 1 to 8.
CN202511608914.0A 2025-11-05 2025-11-05 A method for predicting the collaborative trajectory of risk, dynamics, and intent in high-risk scenarios Active CN121043912B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202511608914.0A CN121043912B (en) 2025-11-05 2025-11-05 A method for predicting the collaborative trajectory of risk, dynamics, and intent in high-risk scenarios

Publications (2)

Publication Number Publication Date
CN121043912A CN121043912A (en) 2025-12-02
CN121043912B true CN121043912B (en) 2026-02-10

Family

ID=97815036

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN121393151B (en) * 2025-12-22 2026-02-17 同济大学 Fine-grained multitasking driving risk prediction method integrating track prediction auxiliary tasks

Citations (2)

Publication number Priority date Publication date Assignee Title
CN120220434A (en) * 2025-03-25 2025-06-27 河北喜悦智慧科技有限公司 Edge computing traffic light emergency control method and device for sudden traffic events
CN120628104A (en) * 2025-06-06 2025-09-12 天津中德应用技术大学 A logistics robot path planning method based on multimodal perception

Family Cites Families (3)

Publication number Priority date Publication date Assignee Title
CN111247565B (en) * 2017-09-06 2022-06-03 瑞士再保险有限公司 Electronic logging and tracking detection system for mobile telematics devices and corresponding method thereof
CN120766504A (en) * 2025-06-19 2025-10-10 清华大学 Traffic risk analysis method, device, equipment and medium based on scene understanding
CN120628135A (en) * 2025-06-27 2025-09-12 重庆长安汽车股份有限公司 Trajectory planning method, device, vehicle, and computer-readable storage medium


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant