Video Evaluation Dashboard

EvalForge v0.1.0 · 200 prompts · Evaluated 2026-03-15

Model Rankings

| # | Model | Provider | Grade | Overall | Subject Consistency | Background Consistency | Temporal Flickering | Motion Smoothness | Dynamic Degree | Aesthetic Quality | Imaging Quality | Overall Consistency | Text Alignment |
|---|-------|----------|-------|---------|---------------------|------------------------|---------------------|-------------------|----------------|-------------------|-----------------|---------------------|----------------|
| 1 | Veo 3.1 | google_ai | Excellent | 91.0 | 94.0 | 93.0 | 95.0 | 92.0 | 85.0 | 91.0 | 90.0 | 92.0 | 88.0 |
| 2 | Kling 2.6 Pro | fal_ai | Good | 87.0 | 91.0 | 89.0 | 92.0 | 88.0 | 82.0 | 87.0 | 86.0 | 88.0 | 83.0 |
| 3 | Seedance 1.5 | bytedance | Good | 84.0 | 89.0 | 87.0 | 90.0 | 86.0 | 80.0 | 84.0 | 83.0 | 85.0 | 81.0 |
| 4 | Wan 2.2 | alibaba | Good | 81.0 | 86.0 | 85.0 | 88.0 | 84.0 | 78.0 | 80.0 | 82.0 | 83.0 | 79.0 |
| 5 | LTX 2.3 | lightricks | Moderate | 78.0 | 83.0 | 82.0 | 86.0 | 81.0 | 84.0 | 72.0 | 74.0 | 78.0 | 76.0 |
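The letter grades appear to track the Overall column. A minimal sketch of one plausible bucketing; the thresholds below are assumptions inferred from this table, not a documented EvalForge rule:

```python
def grade(overall: float) -> str:
    """Map an overall score to a letter grade.

    Thresholds (Excellent >= 90, Good >= 80, Moderate below) are
    inferred from the rankings above, not EvalForge's documented rule.
    """
    if overall >= 90.0:
        return "Excellent"
    if overall >= 80.0:
        return "Good"
    return "Moderate"
```

Under these assumed cutoffs the function reproduces the table: 91.0 maps to Excellent, 87.0/84.0/81.0 to Good, and 78.0 to Moderate.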

Model Comparison Radar

Score Heatmap

| Model | Subject Consistency | Background Consistency | Temporal Flickering | Motion Smoothness | Dynamic Degree | Aesthetic Quality | Imaging Quality | Overall Consistency | Text Alignment |
|-------|---------------------|------------------------|---------------------|-------------------|----------------|-------------------|-----------------|---------------------|----------------|
| Veo 3.1 | 94.0 | 93.0 | 95.0 | 92.0 | 85.0 | 91.0 | 90.0 | 92.0 | 88.0 |
| Kling 2.6 Pro | 91.0 | 89.0 | 92.0 | 88.0 | 82.0 | 87.0 | 86.0 | 88.0 | 83.0 |
| Seedance 1.5 | 89.0 | 87.0 | 90.0 | 86.0 | 80.0 | 84.0 | 83.0 | 85.0 | 81.0 |
| Wan 2.2 | 86.0 | 85.0 | 88.0 | 84.0 | 78.0 | 80.0 | 82.0 | 83.0 | 79.0 |
| LTX 2.3 | 83.0 | 82.0 | 86.0 | 81.0 | 84.0 | 72.0 | 74.0 | 78.0 | 76.0 |
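The dashboard does not state how the Overall score is derived from the nine dimensions. One natural baseline is the unweighted mean, sketched below; the dimension ordering follows the heatmap columns, and the aggregation itself is an assumption, not EvalForge's documented formula:

```python
from statistics import mean

# Per-model dimension scores, in heatmap column order:
# subject consistency, background consistency, temporal flickering,
# motion smoothness, dynamic degree, aesthetic quality,
# imaging quality, overall consistency, text alignment.
scores = {
    "Veo 3.1":       [94.0, 93.0, 95.0, 92.0, 85.0, 91.0, 90.0, 92.0, 88.0],
    "Kling 2.6 Pro": [91.0, 89.0, 92.0, 88.0, 82.0, 87.0, 86.0, 88.0, 83.0],
    "Seedance 1.5":  [89.0, 87.0, 90.0, 86.0, 80.0, 84.0, 83.0, 85.0, 81.0],
    "Wan 2.2":       [86.0, 85.0, 88.0, 84.0, 78.0, 80.0, 82.0, 83.0, 79.0],
    "LTX 2.3":       [83.0, 82.0, 86.0, 81.0, 84.0, 72.0, 74.0, 78.0, 76.0],
}

for model, row in scores.items():
    print(f"{model}: unweighted mean {mean(row):.1f}")
```

The means (91.1, 87.3, 85.0, 82.8, 79.6) land close to, but not exactly on, the published Overall column (91.0, 87.0, 84.0, 81.0, 78.0), which suggests EvalForge weights the dimensions in a way not shown on this page.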

Category Breakdown

| Model | Score | Subject Consistency | Text Alignment | Overall Consistency |
|-------|-------|---------------------|----------------|---------------------|
| Veo 3.1 | 92.0 | 95 | 90 | 93 |
| Kling 2.6 Pro | 87.0 | 92 | 85 | 89 |
| Seedance 1.5 | 84.0 | 90 | 82 | 86 |
| Wan 2.2 | 81.0 | 87 | 80 | 84 |
| LTX 2.3 | 77.0 | 84 | 77 | 79 |

Model Details