## 🔥 Reproduce Website Demos
**[Environment Setup]** Our environment setup is identical to that of CogVideoX; refer to their configuration to complete the setup.
```bash
conda create -n robomaster python=3.10
conda activate robomaster
```

### Robotic Manipulation on Diverse Out-of-Domain Objects
```bash
python inference_inthewild.py \
  --input_path demos/diverse_ood_objs \
  --output_path samples/infer_diverse_ood_objs \
  --transformer_path ckpts/RoboMaster \
  --model_path ckpts/CogVideoX-Fun-V1.5-5b-InP
```

### Robotic Manipulation with Diverse Skills
```bash
python inference_inthewild.py \
  --input_path demos/diverse_skills \
  --output_path samples/infer_diverse_skills \
  --transformer_path ckpts/RoboMaster \
  --model_path ckpts/CogVideoX-Fun-V1.5-5b-InP
```

### Long Video Generation in Auto-Regressive Manner
```bash
python inference_inthewild.py \
  --input_path demos/long_video \
  --output_path samples/long_video \
  --transformer_path ckpts/RoboMaster \
  --model_path ckpts/CogVideoX-Fun-V1.5-5b-InP
```
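As a conceptual aid, auto-regressive long-video generation chains clip-level inference: each new clip is conditioned on the last frame of the previous one, and the clips are concatenated. The sketch below illustrates only this chaining pattern; `generate_clip` and its arguments are hypothetical placeholders, not the actual API of `inference_inthewild.py`.

```python
from typing import Callable, List, Sequence

# Conceptual sketch of auto-regressive chaining for long videos.
# `generate_clip` is a hypothetical stand-in for one image-conditioned
# video-generation call; it is NOT the actual RoboMaster API.
def generate_long_video(
    generate_clip: Callable,   # (cond_image, trajectory) -> list of frames
    first_frame,
    trajectory_segments: Sequence,
) -> List:
    frames = [first_frame]
    for segment in trajectory_segments:
        # Condition each clip on the last frame generated so far.
        clip = generate_clip(cond_image=frames[-1], trajectory=segment)
        frames.extend(clip[1:])  # drop the repeated conditioning frame
    return frames
```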
## 📊 Benchmark Evaluation (Reproduce Paper Results)
Expected directory structure (the relative paths in the commands below follow it):

```
├── RoboMaster
└── eval_metrics
    ├── VBench
    ├── common_metrics_on_video_quality
    ├── eval_traj
    └── results
        ├── bridge_eval_gt
        ├── bridge_eval_ours
        └── bridge_eval_ours_tracking
```
### (1) Inference on Benchmark & Prepare Evaluation Files
- Generating `bridge_eval_ours`. (Note that the results may vary slightly across different computing machines, even with the same seed. We have prepared reference files under `eval_metrics/results`.)

  ```bash
  cd RoboMaster/
  python inference_eval.py
  ```

- Generating `bridge_eval_ours_tracking`: install CoTracker3, then estimate tracking points with grid size 30 on `bridge_eval_ours` (a minimal sketch follows this list).
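CoTracker3 can be driven through its `torch.hub` entry point. Below is a minimal sketch of grid-based tracking at grid size 30 for a single video; the file names, paths, and the saved output format are illustrative assumptions, not the exact preprocessing behind `bridge_eval_ours_tracking`.

```python
import torch
from torchvision.io import read_video

# Minimal sketch: track a 30x30 grid of points on one generated video
# with the offline CoTracker3 model from torch.hub.
device = "cuda" if torch.cuda.is_available() else "cpu"
cotracker = torch.hub.load(
    "facebookresearch/co-tracker", "cotracker3_offline"
).to(device)

# Illustrative path; adapt to your actual bridge_eval_ours layout.
video, _, _ = read_video(
    "results/bridge_eval_ours/sample_0000.mp4", output_format="TCHW"
)
video = video.float().unsqueeze(0).to(device)  # (B, T, C, H, W)

with torch.no_grad():
    # pred_tracks: (B, T, N, 2) point coordinates; pred_visibility: (B, T, N)
    pred_tracks, pred_visibility = cotracker(video, grid_size=30)

# Illustrative save format, not necessarily what eval_traj expects.
torch.save(
    {"tracks": pred_tracks.cpu(), "visibility": pred_visibility.cpu()},
    "results/bridge_eval_ours_tracking/sample_0000.pt",
)
```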
### (2) Evaluation on Visual Quality
Evaluation of VBench metrics:

```bash
cd eval_metrics/VBench
python evaluate.py \
  --dimension aesthetic_quality imaging_quality temporal_flickering motion_smoothness subject_consistency background_consistency \
  --videos_path ../results/bridge_eval_ours \
  --mode=custom_input \
  --output_path evaluation_results
```

Evaluation of FVD and FID metrics:

```bash
cd eval_metrics/common_metrics_on_video_quality
python calculate.py -v1_f ../results/bridge_eval_ours -v2_f ../results/bridge_eval_gt
python -m pytorch_fid eval_1 eval_2
```
### (3) Evaluation on Trajectory (Robotic Arm & Manipulated Object)
Estimation of TrajError metrics. (Note that we exclude the samples listed in `failed_track.txt`, for which CoTracker3 fails to estimate tracks.)

```bash
cd eval_metrics/eval_traj
python calculate_traj.py \
  --input_path_1 ../results/bridge_eval_ours \
  --input_path_2 ../results/bridge_eval_gt \
  --tracking_path ../results/bridge_eval_ours_tracking \
  --output_path evaluation_results
```

Check the visualization videos under `evaluation_results`. We blend the trajectories of the robotic arm and the manipulated object throughout the entire video for better illustration.
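For intuition, a trajectory-error metric of this kind is typically the mean L2 distance between corresponding tracked points in the generated and ground-truth videos. The sketch below shows that computation in simplified form; it is an illustration under that assumption, not the exact logic of `calculate_traj.py`.

```python
import numpy as np

def traj_error(tracks_gen: np.ndarray, tracks_gt: np.ndarray) -> float:
    """Mean L2 distance between corresponding tracked points.

    Simplified illustration of a TrajError-style metric. Both arrays
    have shape (T, N, 2): T frames, N tracked points, (x, y) pixels.
    """
    assert tracks_gen.shape == tracks_gt.shape
    # Per-point Euclidean distance, averaged over all frames and points.
    return float(np.linalg.norm(tracks_gen - tracks_gt, axis=-1).mean())
```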