---
license: mit
language:
- en
tags:
- video
- video understanding
- game
- gameplay understanding
- multi-agent
- esport
- counter-strike
- opponent-modeling
- ego-centric
- cross ego-centric
task_categories:
- video-classification
- video-text-to-text
- visual-question-answering
- text-to-video
pretty_name: X-EGO
---
# Dataset Card for X-Ego-CS
Links:
- Paper
- GitHub Codebase
- Homepage (coming soon)
### Cross-Ego Demo (Pistol Round)

Note: This demo concatenates the videos into a grid for display. In the dataset itself, each video is an individual per-player POV recording.
## Dataset Summary
X-Ego-CS is a gameplay video dataset for cross-egocentric multi-agent video understanding in Counter-Strike 2. It contains 124 hours of synchronized first-person gameplay footage captured from 45 professional-level Counter-Strike 2 matches. Each match includes multi-player egocentric video streams (POVs from all players) and corresponding state-action trajectories, enabling the study of team-level tactical reasoning and situational awareness from individual perspectives.
The dataset was introduced in the paper:

> **X-Ego: Acquiring Team-Level Tactical Situational Awareness via Cross-Egocentric Contrastive Video Representation Learning**
> Yunzhe Wang, Soham Hans, Volkan Ustun
> University of Southern California, Institute for Creative Technologies (2025)
> arXiv:2510.19150
X-Ego-CS supports research on multi-agent representation learning, egocentric video modeling, team tactic analysis, and AI-human collaboration in complex 3D environments.
## How to Download
To download the full dataset using the Hugging Face CLI:

```bash
# Install the Hugging Face Hub client
pip install --upgrade huggingface_hub

# (Optional) Log in if the dataset is private
huggingface-cli login

# Download the dataset repository
huggingface-cli download wangyz1999/X-EGO-CS \
  --repo-type dataset \
  --local-dir ./X-EGO-CS \
  --resume-download \
  --max-workers 8
```
## Dataset Structure

### Data Fields

#### Segment Info
- `idx`: Row index (unique for each segment)
- `partition`: Dataset split label (e.g., train/test/val)
- `seg_duration_sec`: Duration of the segment in seconds
- `start_tick`, `end_tick`, `prediction_tick`: Game tick indices for the start, end, and prediction points
- `start_seconds`, `end_seconds`, `prediction_seconds`: Corresponding timestamps in seconds
- `normalized_start_seconds`, `normalized_end_seconds`, `normalized_prediction_seconds`: Time values normalized to a [0, 1] scale for model input
#### Match Metadata

- `match_id`: Unique identifier of the match
- `round_num`: Match round number
- `map_name`: Name of the game map (e.g., `de_mirage`)
#### Player States (for `player_0` through `player_9`)

- `player_{i}_id`: Unique player identifier (e.g., Steam ID)
- `player_{i}_name`: In-game player name
- `player_{i}_side`: Team side (`t` for Terrorist, `ct` for Counter-Terrorist)
- `player_{i}_X`, `player_{i}_Y`, `player_{i}_Z`: Player's position coordinates (normalized or map-based)
- `player_{i}_place`: Named location or area on the map (e.g., `CTSpawn`, `SideAlley`)
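To make the per-player column naming concrete, here is a small sketch that gathers one player's state out of a label row. The column names follow the card; the row values and the `player_state` helper are invented for illustration.

```python
# Hypothetical label row using the column names documented above;
# the values are made up for illustration.
row = {
    "idx": 0,
    "partition": "train",
    "seg_duration_sec": 5.0,
    "match_id": "match_001",
    "round_num": 3,
    "map_name": "de_mirage",
    "player_0_id": "76561190000000000",
    "player_0_name": "playerA",
    "player_0_side": "t",
    "player_0_X": 0.42,
    "player_0_Y": 0.17,
    "player_0_Z": 0.05,
    "player_0_place": "TSpawn",
}

def player_state(row, i):
    """Collect the per-player columns for player_{i} into one dict."""
    keys = ["id", "name", "side", "X", "Y", "Z", "place"]
    return {k: row[f"player_{i}_{k}"] for k in keys}

state = player_state(row, 0)
print(state["side"], (state["X"], state["Y"], state["Z"]))
# prints: t (0.42, 0.17, 0.05)
```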
### File Structure

```
data/
├── demos/                 # Raw .dem files (by match)
│   └── <match_id>.dem
├── labels/                # Global label datasets
│   ├── enemy_location_nowcast_s1s_l5s.csv
│   └── teammate_location_nowcast_s1s_l5s.csv
├── metadata/              # Match / round metadata
│   ├── matches/
│   │   └── <match_id>.json
│   └── rounds/
│       └── <match_id>/
│           └── round_<nn>.json
├── trajectories/          # Player movement trajectories
│   └── <match_id>/
│       └── <player_id>/
│           ├── round_<nn>.csv
│           └── ...
└── videos/                # Player POV recordings
    └── <match_id>/
        └── <player_id>/
            ├── round_<nn>.mp4
            └── ...
```
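Given this layout, per-round files can be addressed with a small path helper. This is a sketch under the assumption that `<nn>` is a zero-padded round number; the helper names are ours, not part of any dataset tooling.

```python
from pathlib import Path

def round_video_path(root, match_id, player_id, round_num):
    """Path to one player's POV recording for one round,
    following videos/<match_id>/<player_id>/round_<nn>.mp4."""
    return Path(root) / "videos" / match_id / player_id / f"round_{round_num:02d}.mp4"

def round_trajectory_path(root, match_id, player_id, round_num):
    """Matching trajectories/<match_id>/<player_id>/round_<nn>.csv path."""
    return Path(root) / "trajectories" / match_id / player_id / f"round_{round_num:02d}.csv"

p = round_video_path("X-EGO-CS/data", "match_001", "player_123", 7)
print(p)  # on POSIX: X-EGO-CS/data/videos/match_001/player_123/round_07.mp4
```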
## Dataset Creation

### Curation Rationale
The dataset was designed to study cross-perspective alignment in team-based tactical games, where each player's view provides only a partial observation of the environment. Synchronizing multiple first-person streams allows for modeling shared situational awareness and implicit coordination, key ingredients of human team intelligence.
### Source Data

- **Game:** Counter-Strike 2 (Valve Corporation). In-game demo replays were downloaded from the top of the Elo leaderboard on Faceit.com.
- **Recording setup:** Screen capture of first-person gameplay, synchronized across all agents via timestamp alignment.
- **Annotations:** State-action trajectories automatically derived from server replay data.
### Dataset Statistics

- Total hours: 124
- Total matches: 45
- Agents per match: 10 (5 per team)
- Frame rate: 30 fps
- Video resolution: 1080×720
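For a rough sense of scale, the statistics above imply the following (simple arithmetic on the reported numbers, not additional metadata from the dataset):

```python
total_hours = 124   # total synchronized footage, per the card
matches = 45
fps = 30

hours_per_match = total_hours / matches        # average footage per match
total_frames = total_hours * 3600 * fps        # frames across all POV streams

print(f"~{hours_per_match:.2f} h of footage per match, "
      f"{total_frames:,} frames in total")
# prints: ~2.76 h of footage per match, 13,392,000 frames in total
```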
## Citation

If you use this dataset, please cite the following paper:

```bibtex
@article{wang2025x,
  title={X-Ego: Acquiring Team-Level Tactical Situational Awareness via Cross-Egocentric Contrastive Video Representation Learning},
  author={Wang, Yunzhe and Hans, Soham and Ustun, Volkan},
  journal={arXiv preprint arXiv:2510.19150},
  year={2025}
}
```