Evaluation of identity-stable human video generation under controlled motion

Hi everyone,

I’m an independent AI video researcher working on identity-stable human video generation.

I recently completed a small evaluation set focused on:

- Identity consistency across frames

- Stability of a neutral facial expression

- Controlled motion (entry, pause, exit)

- Camera and lighting lock

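For the identity-consistency criterion, here is a minimal sketch of the kind of metric I mean: mean cosine similarity of each frame's face embedding to a reference frame. The embedding model itself is left abstract (any face-recognition embedder would do); the function names are illustrative, not a fixed part of my pipeline.

```python
import numpy as np

def identity_consistency(embeddings: np.ndarray) -> float:
    """Mean cosine similarity of each frame's face embedding to the
    first frame's embedding.

    embeddings: (num_frames, dim) array of per-frame face embeddings
    from some face-recognition model (left abstract here).
    """
    ref = embeddings[0]
    norms = np.linalg.norm(embeddings, axis=1) * np.linalg.norm(ref)
    sims = embeddings @ ref / norms
    # Exclude the reference frame's self-similarity from the average.
    return float(sims[1:].mean())
```

Identical embeddings across all frames give 1.0; identity drift pushes the score down, which is what the pass/fail threshold is checked against.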
Each test was evaluated using a structured pass/fail checklist covering identity, motion, environment, camera, and system integrity.
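To make the checklist idea concrete, a simplified sketch of how such a structure could be represented (category and field names here are illustrative, not my actual schema):

```python
from dataclasses import dataclass, field

# The five checklist categories from the evaluation.
CATEGORIES = ("identity", "motion", "environment", "camera", "system_integrity")

@dataclass
class ChecklistResult:
    # Each category maps to named checks with a pass/fail outcome.
    checks: dict[str, dict[str, bool]] = field(default_factory=dict)

    def passed(self, category: str) -> bool:
        """A category passes only if it has checks and all of them pass."""
        category_checks = self.checks.get(category)
        return bool(category_checks) and all(category_checks.values())

    def overall_pass(self) -> bool:
        """A test passes only if every category passes."""
        return all(self.passed(c) for c in CATEGORIES)
```

The strict all-categories-must-pass rule (including failing any category with no recorded checks) keeps a single missed check from slipping through as an overall pass.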

I’m sharing this here to:

- Learn how others evaluate identity stability in generative video

- Check whether these evaluation criteria align with current best practices

- Get feedback on benchmarking structure (without sharing proprietary details)

Thanks!