Self-supervised methods (with different thresholds):
causalTAD: self-supervised; anomalous data does not participate in training. Uses two encoders. Drawbacks: the road graph is not used, and the two encoders are not well integrated; the encoding stages are completely separate, so no features are shared. Moreover, the anomaly score is computed from only one of the encoders.
IRL: Sequential Anomaly Detection using Inverse Reinforcement Learning
Core Mechanism:
- Takes GPS trajectories and breaks them into state-action pairs
- State = [current_pos, initial_pos, time]
- Action = velocity vector to the next point (the action is given by the raw data, i.e., the trajectory's movement)
- Trains 10 neural network heads to learn normal behavior patterns
Reward & Anomaly Detection:
- Reward Scoring:
  - Calculates a reward for each state-action pair
  - Normalizes: score = (reward - mean) / std_dev
  - Aggregates over the full trajectory
- Uncertainty via Multiple Heads:
  - Uses the variance across the 10 heads' predictions
  - Low variance = the model is confident
- Dual-Check Detection:
  - Flags an anomaly if:
    - Normalized reward < -2 (unusual behavior)
    - Uncertainty < 1.5 (confident prediction)
The model essentially learns what's "normal" through rewards, and uses ensemble variance to ensure confident anomaly detection.
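The dual-check rule above can be sketched in a few lines. The thresholds (-2 and 1.5) come from the notes; the per-head reward values and the training mean/std are hypothetical stand-ins for what the 10-head ensemble would produce.

```python
import statistics

def normalize(reward, mean, std):
    # score = (reward - mean) / std_dev
    return (reward - mean) / std

def is_anomaly(head_rewards, train_mean, train_std,
               reward_thresh=-2.0, uncertainty_thresh=1.5):
    """Flag an anomaly only when the reward is low AND the ensemble is confident."""
    mean_reward = statistics.mean(head_rewards)
    uncertainty = statistics.variance(head_rewards)  # variance across the heads
    score = normalize(mean_reward, train_mean, train_std)
    return score < reward_thresh and uncertainty < uncertainty_thresh

# Normal behavior: reward near the training mean, low variance -> not flagged
print(is_anomaly([0.9, 1.0, 1.1, 1.0, 0.95, 1.05, 1.0, 0.98, 1.02, 1.0],
                 train_mean=1.0, train_std=0.2))  # False

# Unusual behavior the ensemble agrees on: low reward, low variance -> flagged
print(is_anomaly([0.4, 0.45, 0.5, 0.42, 0.48, 0.44, 0.46, 0.43, 0.47, 0.45],
                 train_mean=1.0, train_std=0.2))  # True
```

Note that a trajectory with a low reward but high head-to-head disagreement would pass the first check but fail the second, and so would not be flagged.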
(Limitation): the 10 heads are trained on the same data and are likely to learn very similar things, which weakens the ensemble's uncertainty estimate.
Gaussian-distribution-based methods:
ATROM Open Anomalous Trajectory Recognition via Probabilistic Metric Learning:
A special evaluation setting: two seen anomaly types are used to learn Gaussian distributions, and the one unseen type is held out for testing.
1. Training:
- Uses 2 types of anomalies + normal trajectories
- Learns Gaussian distributions for these known patterns
2. Testing:
- Tests on the 3rd type of anomaly (completely unseen in training)
- Model should identify this as "unknown" using the β threshold
They rotate which anomaly type is held out as unseen (navigation, detour, or route-switching) to thoroughly evaluate the model's ability to detect unknown patterns.
Model
ATROM's approach:
- Training: learns a Gaussian distribution for each known pattern (normal + known anomaly types)
- Detection: for a new trajectory:
  - Calculates KL divergence scores against the known distributions
  - If max score < threshold β → labels it as "unknown"
  - If max score ≥ threshold β → assigns it to the highest-scoring known pattern
- Main weakness: uses an arbitrary 90th-percentile threshold β, which makes unfounded assumptions about the anomaly distribution and doesn't adapt to real-world conditions.
GM-VSAE: Online Anomalous Trajectory Detection with Deep Generative Sequence Modeling
APPROACH:
- Training Phase:
  - An RNN encodes trajectories into embeddings
  - Learns multiple Gaussian components representing different normal route types
  - Each component captures a different normal pattern (e.g., highway vs. local streets)
- Detection Phase:
  - Embeds new trajectory points through the RNN
  - Calculates the log probability against each Gaussian component
  - Accumulates and normalizes probabilities for online scoring
  - Flags an anomaly when the score exceeds a threshold
WEAKNESS:
- Primary: threshold selection is empirical and dataset-dependent
- Secondary:
  - Assumes anomalies deviate from the learned normal patterns
  - Requires sufficient training data for each route type
This design enables efficient O(1) online detection but relies heavily on appropriate threshold setting.
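The online scoring loop in the detection phase can be sketched like this. Point embeddings are 1-D scalars here and the mixture parameters and threshold are illustrative; GM-VSAE obtains the embeddings from its RNN and the components from training.

```python
import math

def log_mixture_prob(x, components):
    # components: list of (weight, mu, sigma) for the learned normal patterns
    p = sum(w * math.exp(-(x - mu) ** 2 / (2 * s ** 2)) / (s * math.sqrt(2 * math.pi))
            for w, mu, s in components)
    return math.log(p + 1e-12)

def online_scores(embeddings, components):
    """Yield the running length-normalized negative log-likelihood per point."""
    total = 0.0
    for i, x in enumerate(embeddings, start=1):
        total += log_mixture_prob(x, components)
        yield -total / i   # higher = less likely under the normal patterns

components = [(0.5, 0.0, 1.0), (0.5, 5.0, 1.0)]  # two normal route types
threshold = 3.0

normal_traj = [0.1, -0.2, 5.1, 4.9]
scores = list(online_scores(normal_traj, components))
print(all(s < threshold for s in scores))  # True for this trajectory
```

Because the running total only needs the previous sum, each new point is scored in constant time, which is the O(1) online property mentioned above.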
MST-OATD: Multi-Scale Detection of Anomalous Spatio-Temporal Trajectories in Evolving Trajectory
Multi-scale design
No, it's not random sampling. The scales are created through systematic segmentation:
- Fixed-Size Sequential Segments:
  - Scale 1: original sequence (no grouping)
  - Scale 2: consecutive pairs of points
  - Scale 4: consecutive groups of 4 points
For example, with 8 points [A,B,C,D,E,F,G,H]:
- Scale 1: [A][B][C][D][E][F][G][H]
- Scale 2: [AB][CD][EF][GH]
- Scale 4: [ABCD][EFGH]
The model uses mean pooling within each segment to get the segment representation. If the trajectory length isn't divisible by the scale size, the last segment handles the remainder.
This systematic approach helps capture patterns at different granularities while maintaining the sequential nature of the trajectory.
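The segmentation and mean pooling above can be sketched directly. Points are plain numbers here for readability; in the model each point is an embedding vector and pooling averages component-wise.

```python
def segment(points, scale):
    """Group consecutive points; the last segment keeps any remainder."""
    return [points[i:i + scale] for i in range(0, len(points), scale)]

def mean_pool(points, scale):
    # One representation per segment: the mean of its points
    return [sum(seg) / len(seg) for seg in segment(points, scale)]

pts = [1, 2, 3, 4, 5, 6, 7, 8]        # stands in for [A..H]
print(segment(pts, 2))                # [[1, 2], [3, 4], [5, 6], [7, 8]]
print(mean_pool(pts, 4))              # [2.5, 6.5]
print(segment([1, 2, 3, 4, 5], 2))    # remainder case: [[1, 2], [3, 4], [5]]
```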
Gaussian distributions as classifiers
The Gaussian Mixture Model generates different distributions for each scale level, capturing patterns specific to the different granularities of trajectory segmentation.
MainTUL: Mutual Distillation Learning Network for Trajectory-User Linking
The task is to identify which user generated a given trajectory, not to detect anomalies. The model learns to map trajectory patterns to specific users through supervised learning.
MainTUL is a mutual distillation learning network for trajectory-user linking that works as follows:
- Multi-Semantic Check-in Embedding:
  - Embeds POI locations (Wp), categories (Wc), and temporal information (Wt)
  - Uses trajectory augmentation through neighbor or random sampling
- Dual Encoders:
  - RNN-based encoder (LSTM) for input trajectories
  - Temporal-aware Transformer for augmented historical trajectories
  - Both process POI and category sequences in parallel
- Learning Strategy:
  - Combines a cross-entropy supervision loss with the ground truth
  - Uses mutual distillation by swapping trajectories and computing KL divergence between the encoders' predictions
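The learning strategy can be sketched as a combined loss. Predictions are plain probability lists over users, the mixing weight `lam` is a hypothetical hyperparameter, and distillation temperatures are omitted for brevity.

```python
import math

def cross_entropy(pred, true_idx):
    # supervision loss against the ground-truth user
    return -math.log(pred[true_idx] + 1e-12)

def kl_divergence(p, q):
    return sum(pi * math.log((pi + 1e-12) / (qi + 1e-12)) for pi, qi in zip(p, q))

def mutual_loss(pred_rnn, pred_transformer, true_idx, lam=0.5):
    # each encoder is supervised AND pulled toward the other's prediction
    ce = cross_entropy(pred_rnn, true_idx) + cross_entropy(pred_transformer, true_idx)
    kl = kl_divergence(pred_rnn, pred_transformer) + kl_divergence(pred_transformer, pred_rnn)
    return ce + lam * kl

# Encoders that agree and are confident in the true user -> small loss
agree = mutual_loss([0.8, 0.1, 0.1], [0.8, 0.1, 0.1], true_idx=0)
# Encoders that disagree -> the KL term inflates the loss
disagree = mutual_loss([0.8, 0.1, 0.1], [0.1, 0.8, 0.1], true_idx=0)
print(agree < disagree)  # True
```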
ARDRNN: Anomalous Trajectory Detection using Recurrent Neural Network
It's a supervised learning method where:
- Input: GPS trajectories between source-destination pairs
- Labels: binary classification (normal vs. anomalous trajectories)
- Training objective: minimize cross-entropy loss
- Architecture: RNN (LSTM/GRU) + MLP + Softmax classifier
The labels are obtained through complete-linkage clustering of trajectories, rather than manual annotation, but the learning itself is fully supervised.
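One way the label-generation step could look is agglomerative complete-linkage clustering over pairwise trajectory distances, with small clusters treated as anomalous. The distance function, threshold, and cluster-to-label rule here are illustrative guesses, not the paper's exact choices.

```python
def traj_dist(a, b):
    # simple mean point-to-point gap between two 1-D trajectories
    return sum(abs(x - y) for x, y in zip(a, b)) / min(len(a), len(b))

def complete_linkage(items, threshold):
    """Merge the closest cluster pair until no pair is within threshold."""
    clusters = [[i] for i in range(len(items))]
    while True:
        best = None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                # complete linkage: cluster distance = max pairwise distance
                d = max(traj_dist(items[a], items[b])
                        for a in clusters[i] for b in clusters[j])
                if d < threshold and (best is None or d < best[0]):
                    best = (d, i, j)
        if best is None:
            return clusters
        _, i, j = best
        clusters[i] += clusters[j]
        del clusters[j]

trajs = [[0, 1, 2], [0, 1.1, 2.1], [0, 0.9, 2.0], [5, 9, 12]]
clusters = complete_linkage(trajs, threshold=1.0)
# the lone trajectory far from the others ends up in its own cluster
labels = ["anomalous" if len(c) == 1 else "normal" for c in clusters]
print(labels)
```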
Rule-based / ranking-based methods
iBOAT: Isolation-Based Online Anomalous Trajectory Detection
Mainly relies on support and frequency statistics (an adaptive-window detection scheme), i.e., comparing the current trajectory against historical trajectory data.
Let me explain the adaptive window detection with a clear example:
Consider this scenario:
Starting condition:
- Historical normal routes between S → D:
  - Route A: 100 taxis took this path
  - Route B: 200 taxis took this path
  - Route C: 150 taxis took this path
- Total trajectories: 450
How the adaptive window works:
- Initial points (normal segment):
GPS points: g1 → g2 → g3 → g4
Window grows: Contains all 4 points
Support level: High (300/450 taxis took similar path)
Status: Window keeps growing
- Anomalous point detected:
Next point: g5 (deviates from normal routes)
Window before: [g1,g2,g3,g4,g5]
Support drops: Only 20/450 taxis took this path
Action: Window resets to just [g5]
- Recovery to normal path:
Next point: g6 (back on normal route)
New window starts: [g6]
Support increases: 280/450 taxis took this path
Status: Window can grow again
The key is that the window adapts - growing when the path is normal (high support) and resetting when anomalies are detected (low support), allowing for real-time detection of suspicious segments.
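The grow-and-reset behavior can be sketched as follows. The support lookup is stubbed with a small table of hypothetical counts (real iBOAT computes support from an inverted index over historical trajectories), and unknown sub-paths default to full support just to keep the example short.

```python
SUPPORT = {           # hypothetical support counts out of 450 trajectories
    ("g1", "g2", "g3", "g4"): 300,
    ("g1", "g2", "g3", "g4", "g5"): 20,
    ("g6",): 280,
}

def support(window, total=450):
    return SUPPORT.get(tuple(window), total) / total

def adaptive_window(points, theta=0.1):
    window, anomalous = [], []
    for p in points:
        window.append(p)
        if support(window) < theta:   # sub-path rarely taken -> suspicious
            anomalous.append(p)
            window = [p]              # reset the window to the latest point
    return anomalous

print(adaptive_window(["g1", "g2", "g3", "g4", "g5", "g6"]))  # ['g5']
```

The window [g1..g4] has high support and keeps growing; appending g5 drops support to 20/450, so g5 is flagged and the window resets, and the trajectory recovers afterwards.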
DeepTEA: Effective and Efficient Online Time-dependent Trajectory Outlier Detection.
Incorporates the idea of adding and removing entries in a maintained trajectory database.
Structure:
- Training set U = {U1, U2, ..., UC}, where C is the number of route types
- Each Ui contains the trajectories belonging to that route type
Update Process:
- For a new trajectory T:
  - Calculate the probability for each route type
  - Use the learned ranking model to get T's rank within each type
  - Add T to the highest-probability type's subset (Umax)
  - Remove the oldest trajectory from the lowest-probability type's subset (Umin)
Key Point:
- This maintains a fixed dictionary size while keeping the most relevant/representative trajectories for each route type
- Uses ranking to balance the trajectory distribution across types
Think of it like a prioritized circular buffer for each route type, where ranking determines insertion and removal priorities.
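The buffer-style update can be sketched like this. Per-type probabilities are supplied directly as a dict (DeepTEA derives them from its learned model), and the trajectory IDs are placeholders.

```python
from collections import deque

def update(subsets, traj, probs):
    """subsets: {route_type: deque of trajectories}; probs: {route_type: P(traj | type)}."""
    u_max = max(probs, key=probs.get)
    u_min = min(probs, key=probs.get)
    subsets[u_max].append(traj)      # insert into the best-matching type
    if subsets[u_min]:
        subsets[u_min].popleft()     # evict the oldest from the worst match
    return subsets

subsets = {"highway": deque(["t1", "t2"]), "local": deque(["t3", "t4"])}
update(subsets, "t5", {"highway": 0.9, "local": 0.1})
print(dict(subsets))  # highway gains t5, local loses its oldest entry t3
```

The total number of stored trajectories stays constant (one insertion, one eviction per update), which is exactly the fixed-dictionary-size property noted above.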
RL4OASD: Here's a clearer summary:
1. RSRNet: uses an LSTM to learn trajectory representations from traffic context and normal route features
2. ASDNet: uses RL to detect anomalies (here, an "action" means assigning a label to each point of the trajectory), with:
   - Local reward: promotes sequential consistency in labeling
   - Global reward: compares predictions against transition frequencies from historical data, used as pseudo-labels
The global reward essentially treats common trajectory patterns as "normal," which is a reasonable but imperfect assumption, since anomalous behavior could also be frequent.
This makes the method weakly supervised rather than fully unsupervised, as it relies on frequency statistics as indirect supervision.
(Weakness): uses frequencies from historical data as labels.
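The two reward signals can be sketched for a per-point 0/1 labeling (0 = normal, 1 = anomalous). The transition-frequency table and the rarity cutoff are illustrative stand-ins for statistics mined from historical trajectories.

```python
FREQ = {("a", "b"): 0.8, ("b", "c"): 0.7, ("c", "x"): 0.05, ("x", "d"): 0.04}

def local_reward(labels):
    # reward consecutive points that share the same label (sequential consistency)
    return sum(1 for l1, l2 in zip(labels, labels[1:]) if l1 == l2)

def global_reward(points, labels, rare=0.1):
    # pseudo-label from frequency: a transition is "anomalous" if it is rare
    r = 0
    for (p1, p2), label in zip(zip(points, points[1:]), labels[1:]):
        pseudo = 1 if FREQ.get((p1, p2), 0.0) < rare else 0
        r += 1 if label == pseudo else -1
    return r

points = ["a", "b", "c", "x", "d"]
labels = [0, 0, 0, 1, 1]        # the agent marks the detour via x as anomalous
print(local_reward(labels), global_reward(points, labels))
```

This also makes the weakness concrete: if an anomalous shortcut became popular, its transitions would have high frequency and the global reward would push the agent to label it normal.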