---
license: mit
language:
  - en
tags:
  - video
  - video understanding
  - game
  - gameplay understanding
  - multi-agent
  - esport
  - counter-strike
  - opponent-modeling
  - ego-centric
  - cross ego-centric
task_categories:
  - video-classification
  - video-text-to-text
  - visual-question-answering
  - text-to-video
pretty_name: X-EGO
---

# Dataset Card for X-Ego-CS

Links:

- Cross-Ego Demo (Pistol Round)

Note: The demo concatenates the player videos into a grid for display only; the videos in the dataset itself are individual per-player POV recordings.

## Dataset Summary

X-Ego-CS is a multi-agent gameplay video dataset for cross-egocentric multi-agent video understanding in Counter-Strike 2. It contains 124 hours of synchronized first-person gameplay footage captured from 45 professional-level Counter-Strike 2 matches. Each match includes multi-player egocentric video streams (the POVs of all ten players) and corresponding state-action trajectories, enabling the study of team-level tactical reasoning and situational awareness from individual perspectives.

The dataset was introduced in the paper:

**X-Ego: Acquiring Team-Level Tactical Situational Awareness via Cross-Egocentric Contrastive Video Representation Learning**
Yunzhe Wang, Soham Hans, Volkan Ustun. University of Southern California, Institute for Creative Technologies (2025). arXiv:2510.19150.

X-Ego-CS supports research on multi-agent representation learning, egocentric video modeling, team tactic analysis, and AI-human collaboration in complex 3D environments.

## How to Download

To download the full dataset using the Hugging Face CLI:

```bash
# Install the Hugging Face Hub client
pip install --upgrade huggingface_hub

# (Optional) Log in if the dataset is private
huggingface-cli login

# Download the dataset repository
huggingface-cli download wangyz1999/X-EGO-CS \
  --repo-type dataset \
  --local-dir ./X-EGO-CS \
  --resume-download \
  --max-workers 8
```
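
The same download can also be scripted from Python; a minimal sketch using `huggingface_hub.snapshot_download`, mirroring the CLI call above:

```python
from huggingface_hub import snapshot_download

# Fetch the full dataset repository into ./X-EGO-CS (same target as the CLI example).
snapshot_download(
    repo_id="wangyz1999/X-EGO-CS",
    repo_type="dataset",
    local_dir="./X-EGO-CS",
    max_workers=8,  # parallel download workers
)
```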

## Dataset Structure

### Data Fields

#### Segment Info

- `idx`: Row index (unique for each segment)
- `partition`: Dataset split label (e.g., train/test/val)
- `seg_duration_sec`: Duration of the segment in seconds
- `start_tick`, `end_tick`, `prediction_tick`: Game tick indices for the start, end, and prediction points
- `start_seconds`, `end_seconds`, `prediction_seconds`: Corresponding timestamps in seconds
- `normalized_start_seconds`, `normalized_end_seconds`, `normalized_prediction_seconds`: Time values normalized to a [0, 1] scale for model input

#### Match Metadata

- `match_id`: Unique identifier of the match
- `round_num`: Match round number
- `map_name`: Name of the game map (e.g., `de_mirage`)

#### Player States (`player_0` through `player_9`)

- `player_{i}_id`: Unique identifier (e.g., Steam ID)
- `player_{i}_name`: In-game player name
- `player_{i}_side`: Team side (`t` for Terrorist, `ct` for Counter-Terrorist)
- `player_{i}_X`, `player_{i}_Y`, `player_{i}_Z`: Player's position coordinates (normalized or map-based)
- `player_{i}_place`: Named location or area on the map (e.g., `CTSpawn`, `SideAlley`)
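
To make these fields concrete, the sketch below assumes they appear as columns of the label CSVs under `data/labels/` (see the file structure in the next section); the path follows the CLI download above, and the column names are taken from the lists here:

```python
import pandas as pd

# Assumption: the fields listed above are columns of the label CSVs.
labels = pd.read_csv("X-EGO-CS/data/labels/enemy_location_nowcast_s1s_l5s.csv")

seg = labels.iloc[0]  # one segment row

# Segment and match info
print(seg["partition"], seg["match_id"], seg["round_num"], seg["map_name"])
print(seg["start_seconds"], seg["end_seconds"], seg["prediction_seconds"])

# State of all ten players at this segment
for i in range(10):
    print(
        seg[f"player_{i}_name"],
        seg[f"player_{i}_side"],
        seg[f"player_{i}_place"],
        (seg[f"player_{i}_X"], seg[f"player_{i}_Y"], seg[f"player_{i}_Z"]),
    )
```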

### File Structure

```
data/
├── demos/                       # Raw .dem files (by match)
│   └── <match_id>.dem
├── labels/                      # Global label datasets
│   ├── enemy_location_nowcast_s1s_l5s.csv
│   └── teammate_location_nowcast_s1s_l5s.csv
├── metadata/                    # Match / round metadata
│   ├── matches/
│   │   └── <match_id>.json
│   └── rounds/
│       └── <match_id>/
│           └── round_<nn>.json
├── trajectories/                # Player movement trajectories
│   └── <match_id>/
│       └── <player_id>/
│           ├── round_<nn>.csv
│           └── ...
└── videos/                      # Player POV recordings
    └── <match_id>/
        └── <player_id>/
            ├── round_<nn>.mp4
            └── ...
```
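
Because the per-round video and trajectory files share the same `<match_id>/<player_id>/round_<nn>` stem, the synchronized modalities can be paired by mirroring paths across `videos/` and `trajectories/`. A minimal sketch (directory names as in the download example above):

```python
from pathlib import Path

root = Path("X-EGO-CS/data")

# Pair each POV recording with the same player's trajectory CSV for that round.
for video in sorted(root.glob("videos/*/*/round_*.mp4")):
    match_id, player_id = video.parts[-3], video.parts[-2]
    trajectory = root / "trajectories" / match_id / player_id / f"{video.stem}.csv"
    if trajectory.exists():
        print(f"{match_id}/{player_id}/{video.stem}: video + trajectory")
```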

## Dataset Creation

### Curation Rationale

The dataset was designed to study cross-perspective alignment in team-based tactical games where each player's view provides only a partial observation of the environment. Synchronizing multiple first-person streams allows for modeling shared situational awareness and implicit coordination, key ingredients in human team intelligence.

### Source Data

- Game: Counter-Strike 2 (Valve Corporation), captured via in-game demo replay recording; demos downloaded from the top of the Faceit.com Elo leaderboard
- Recording setup: Screen capture of first-person gameplay, synchronized across all agents using timestamp alignment
- Annotations: Automatically generated state-action trajectories derived from server replay data

## Dataset Statistics

- Total hours: 124
- Total matches: 45
- Agents per match: 10 (5 per team)
- Frame rate: 30 fps
- Video resolution: 1080x720

## Citation

If you use this dataset, please cite the following paper:

```bibtex
@article{wang2025x,
  title={X-Ego: Acquiring Team-Level Tactical Situational Awareness via Cross-Egocentric Contrastive Video Representation Learning},
  author={Wang, Yunzhe and Hans, Soham and Ustun, Volkan},
  journal={arXiv preprint arXiv:2510.19150},
  year={2025}
}
```