---
license: mit
language:
- en
tags:
- video
- video understanding
- game
- gameplay understanding
- multi-agent
- esport
- counter-strike
- opponent-modeling
- ego-centric
- cross ego-centric
task_categories:
- video-classification
- video-text-to-text
- visual-question-answering
- text-to-video
pretty_name: X-EGO
---
# Dataset Card for X-Ego-CS

Links:
- [Paper](https://arxiv.org/abs/2510.19150)
- [Github Codebase](https://github.com/HATS-ICT/x-ego)
- Homepage (coming soon)

## Cross-Ego Demo (Pistol Round)
<video controls>
  <source src="https://huggingface.co/datasets/wangyz1999/X-EGO-CS/resolve/main/multi-ego-sync-demo-pistol.mp4" type="video/mp4">
</video>

**Note:** This demo concatenates the individual videos into a grid for visualization. In the dataset itself, each video is a separate per-player POV recording.

## Dataset Summary
**X-Ego-CS** is a multi-agent gameplay video dataset for **cross-egocentric multi-agent video understanding** in Counter-Strike 2. It contains **124 hours** of synchronized first-person gameplay footage captured from **45 professional-level Counter-Strike 2 matches**. Each match includes **multi-player egocentric video streams** (POVs from all players) and corresponding **state-action trajectories**, enabling the study of **team-level tactical reasoning** and **situational awareness** from individual perspectives.

The dataset was introduced in the paper:

> **X-Ego: Acquiring Team-Level Tactical Situational Awareness via Cross-Egocentric Contrastive Video Representation Learning**  
> *Yunzhe Wang, Soham Hans, Volkan Ustun*  
> University of Southern California, Institute for Creative Technologies (2025)  
> [arXiv:2510.19150](https://arxiv.org/abs/2510.19150)

X-Ego-CS supports research on **multi-agent representation learning**, **egocentric video modeling**, **team tactic analysis**, and **AI-human collaboration** in complex 3D environments.

---

## How to Download 

To download the full dataset using the Hugging Face CLI:

```bash
# Install the Hugging Face Hub client
pip install --upgrade huggingface_hub

# (Optional) Log in if the dataset is private
huggingface-cli login

# Download the dataset repository
huggingface-cli download wangyz1999/X-EGO-CS \
  --repo-type dataset \
  --local-dir ./X-EGO-CS \
  --resume-download \
  --max-workers 8
```
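
Since the videos account for the bulk of the 124 hours of footage, you may want to pull only the lightweight label and metadata files first. Below is a minimal Python sketch using `huggingface_hub.snapshot_download`; the `data/labels/*` and `data/metadata/*` patterns are assumptions based on the File Structure section further down.

```python
# Minimal sketch: fetch only labels and metadata, skipping the large videos.
# The path patterns assume the repository follows the layout shown in the
# File Structure section below; adjust them if your needs differ.
from huggingface_hub import snapshot_download

local_path = snapshot_download(
    repo_id="wangyz1999/X-EGO-CS",
    repo_type="dataset",
    local_dir="./X-EGO-CS",
    allow_patterns=["data/labels/*", "data/metadata/*"],
)
print(f"Labels and metadata downloaded to {local_path}")
```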

## Dataset Structure

### Data Fields

**Segment Info**
- `idx` — Row index (unique for each segment)  
- `partition` — Dataset split label (e.g., train/test/val)  
- `seg_duration_sec` — Duration of the segment in seconds  
- `start_tick`, `end_tick`, `prediction_tick` — Game tick indices for start, end, and prediction points  
- `start_seconds`, `end_seconds`, `prediction_seconds` — Corresponding timestamps in seconds  
- `normalized_start_seconds`, `normalized_end_seconds`, `normalized_prediction_seconds` — Time values normalized to a [0–1] scale for model input

**Match Metadata**
- `match_id` — Unique identifier of the match  
- `round_num` — Match round number  
- `map_name` — Name of the game map (e.g., *de_mirage*)

**Player States** (for `player_0` through `player_9`; see the access sketch after this list)
- `player_{i}_id` — Unique identifier (e.g., Steam ID)  
- `player_{i}_name` — In-game player name  
- `player_{i}_side` — Team side (`t` for Terrorist, `ct` for Counter-Terrorist)  
- `player_{i}_X`, `player_{i}_Y`, `player_{i}_Z` — Player's position coordinates (normalized or map-based)  
- `player_{i}_place` — Named location or area on the map (e.g., *CTSpawn*, *SideAlley*)

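As a minimal sketch of how these fields can be accessed, the snippet below loads one of the label CSVs with pandas (the filename comes from the File Structure section below) and gathers the per-player position columns for a single segment. The local path assumes the download location used above.

```python
import pandas as pd

# One of the two label files listed under data/labels/ (see File Structure below).
labels = pd.read_csv("X-EGO-CS/data/labels/enemy_location_nowcast_s1s_l5s.csv")

# Keep only the training partition.
train = labels[labels["partition"] == "train"]

# Gather the (X, Y, Z) positions of all ten players for one segment.
row = train.iloc[0]
positions = {
    i: (row[f"player_{i}_X"], row[f"player_{i}_Y"], row[f"player_{i}_Z"])
    for i in range(10)
}
print(row["match_id"], row["round_num"], row["map_name"], positions)
```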

## File Structure
```
data/
├── demos/                       # Raw .dem files (by match)
│   └── <match_id>.dem
├── labels/                      # Global label datasets
│   ├── enemy_location_nowcast_s1s_l5s.csv
│   └── teammate_location_nowcast_s1s_l5s.csv
├── metadata/                    # Match / round metadata
│   ├── matches/
│   │   └── <match_id>.json
│   └── rounds/
│       └── <match_id>/
│           └── round_<nn>.json
├── trajectories/                # Player movement trajectories
│   └── <match_id>/
│       └── <player_id>/
│           ├── round_<nn>.csv
│           └── ...
└── videos/                      # Player POV recordings
    └── <match_id>/
        └── <player_id>/
            ├── round_<nn>.mp4
            └── ...
```
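
Given this layout, the sketch below resolves the POV video and trajectory CSV for one player in one round. Treating `player_{i}_id` from the label CSVs as the `<player_id>` folder name and zero-padding the round number to two digits are assumptions; verify them against your local copy.

```python
from pathlib import Path

DATA_ROOT = Path("X-EGO-CS/data")  # assumes the local_dir used in the download step

def round_files(match_id: str, player_id: str, round_num: int):
    """Return the POV video and trajectory CSV paths for one player and round.

    Assumes rounds are zero-padded to two digits (round_01, round_02, ...),
    matching the <nn> placeholder in the tree above.
    """
    name = f"round_{round_num:02d}"
    video = DATA_ROOT / "videos" / match_id / player_id / f"{name}.mp4"
    trajectory = DATA_ROOT / "trajectories" / match_id / player_id / f"{name}.csv"
    return video, trajectory

# Example usage with a row from one of the label CSVs:
# video, traj = round_files(row["match_id"], str(row["player_0_id"]), int(row["round_num"]))
```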
---

## Dataset Creation

### Curation Rationale
The dataset was designed to study **cross-perspective alignment** in team-based tactical games where each player's view provides only a partial observation of the environment.
Synchronizing multiple first-person streams allows for modeling **shared situational awareness** and **implicit coordination**—key ingredients in human team intelligence.

### Source Data
- **Game:** Counter-Strike 2 (Valve Corporation) in-game demo replay recordings, downloaded from the top of the Elo leaderboard on [Faceit.com](https://www.faceit.com/)  
- **Recording setup:** Screen capture of first-person gameplay, synchronized across all agents using timestamp alignment  
- **Annotations:** Automatically generated state-action trajectories derived from server replay data  

---

## Dataset Statistics
- **Total hours:** 124  
- **Total matches:** 45  
- **Agents per match:** 10 (5 per team)  
- **Frame rate:** 30 fps  
- **Video resolution:** 1080x720  

---

## Citation

If you use this dataset, please cite the following paper:

```bibtex
@article{wang2025x,
  title={X-Ego: Acquiring Team-Level Tactical Situational Awareness via Cross-Egocentric Contrastive Video Representation Learning},
  author={Wang, Yunzhe and Hans, Soham and Ustun, Volkan},
  journal={arXiv preprint arXiv:2510.19150},
  year={2025}
}
```