# Neural 3D Video Dataset - Processed
This directory contains preprocessed multi-view video data from the Neural 3D Video dataset, converted into a format suitable for 4D reconstruction and novel view synthesis tasks.
## Dataset Overview

- **Source Dataset**: Neural 3D Video
- **License**: CC-BY-4.0
- **Processed Scenes**: 5 dynamic cooking scenes captured from multiple camera angles
## Scenes

| Scene | Description | Cameras | Frames |
|---|---|---|---|
| `coffee_martini` | Making a coffee martini cocktail | 18 | 32 |
| `cook_spinach` | Cooking spinach in a pan | 18 | 32 |
| `cut_roasted_beef` | Cutting roasted beef | 18 | 32 |
| `flame_salmon_1` | Flambé salmon preparation | 18 | 32 |
| `sear_steak` | Searing steak in a pan | 18 | 32 |
## Directory Structure

```
Neural-3D-Video-Dataset/
├── README.md                          # this file
├── coffee_martini_processed/
│   ├── 256/                           # 256×256 resolution
│   │   ├── images/                    # 32 frame images
│   │   │   ├── sample_000_cam00.jpg
│   │   │   ├── sample_001_cam01.jpg
│   │   │   └── ...
│   │   ├── transforms.json            # Camera poses (JSON format)
│   │   ├── transforms.npz             # Camera poses (NumPy format)
│   │   └── camera_visualization.html  # Interactive 3D camera viewer
│   └── 512/                           # 512×512 resolution
│       ├── images/
│       ├── transforms.json
│       ├── transforms.npz
│       └── camera_visualization.html
├── cook_spinach_processed/
│   ├── 256/ ...
│   └── 512/ ...
├── cut_roasted_beef_processed/
│   ├── 256/ ...
│   └── 512/ ...
├── flame_salmon_1_processed/
│   ├── 256/ ...
│   └── 512/ ...
└── sear_steak_processed/
    ├── 256/ ...
    └── 512/ ...
```
## Data Format

### Camera Poses (`transforms.json`)

The camera poses are stored in a JSON file with the following structure:

```jsonc
{
  "frames": [
    {
      "front": {
        "timestamp": 0,
        "file_path": "./images/sample_000_cam00.jpg",
        "w": 256,
        "h": 256,
        "fx": 341.33,
        "fy": 341.33,
        "cx": 128.0,
        "cy": 128.0,
        "w2c": [[...], [...], [...], [...]],  // 4×4 world-to-camera matrix
        "c2w": [[...], [...], [...]],         // 3×4 camera-to-world matrix
        "blender_camera_location": [x, y, z]  // Camera position in world coordinates
      }
    },
    ...
  ]
}
```
**Intrinsics** (camera internal parameters):

- 256×256: `fx = fy = 341.33`, `cx = cy = 128.0`
- 512×512: `fx = fy = 682.67`, `cx = cy = 256.0`

**Extrinsics** (camera external parameters):

- `w2c`: 4×4 world-to-camera transformation matrix
- `c2w`: 3×4 camera-to-world transformation matrix (rotation + translation)
- `blender_camera_location`: 3D camera position `[x, y, z]` in world coordinates
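To make the roles of these fields concrete, here is a minimal sketch of how the intrinsics and a `w2c` matrix combine in a standard pinhole projection (the world point and the identity pose are placeholders, not dataset values):

```python
import numpy as np

# 3x3 intrinsic matrix for the 256x256 resolution (values from the list above)
K = np.array([[341.33,   0.0, 128.0],
              [  0.0, 341.33, 128.0],
              [  0.0,   0.0,   1.0]])

# Placeholder pose: identity w2c (a real one comes from transforms.json)
w2c = np.eye(4)

# Project a hypothetical world point: world -> camera -> pixel
X_world = np.array([0.1, -0.05, 1.0, 1.0])  # homogeneous world point
X_cam = (w2c @ X_world)[:3]                 # camera-space coordinates
uv = K @ X_cam
u, v = uv[0] / uv[2], uv[1] / uv[2]
print(f"pixel ({u:.1f}, {v:.1f}) at depth {X_cam[2]:.2f}")
```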
### NumPy Format (`transforms.npz`)

For convenience, camera parameters are also provided in NumPy format:

```python
import numpy as np

data = np.load('transforms.npz')
intrinsics = data['intrinsics']              # (32, 3, 3) - intrinsic matrices
extrinsics_w2c = data['extrinsics_w2c']      # (32, 4, 4) - world-to-camera
extrinsics_c2w = data['extrinsics_c2w']      # (32, 4, 4) - camera-to-world
camera_positions = data['camera_positions']  # (32, 3) - camera locations
```
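Since `extrinsics_w2c` and `extrinsics_c2w` describe inverse transforms, the arrays can be cross-checked after loading; a quick sanity-check sketch, assuming the shapes documented above:

```python
import numpy as np

data = np.load('coffee_martini_processed/256/transforms.npz')
w2c, c2w = data['extrinsics_w2c'], data['extrinsics_c2w']

# Inverse transforms should multiply to the identity (up to float precision)
assert all(np.allclose(w2c[i] @ c2w[i], np.eye(4), atol=1e-4)
           for i in range(len(w2c)))

# Camera positions should match the translation column of the c2w matrices
assert np.allclose(data['camera_positions'], c2w[:, :3, 3], atol=1e-4)
```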
### Frame Images

- **Format**: JPEG
- **Resolutions**: 256×256 and 512×512
- **Count**: 32 frames per scene
- **Naming**: `sample_{frame:03d}_cam{camera:02d}.jpg`
Each frame is extracted from a different camera view (a filename-parsing sketch follows the list):

- Frame 0 → cam00
- Frame 1 → cam01
- ...
- Frame 17 → cam20
- Frame 18 → cam00 (loops back)
- ...
- Frame 31 → cam14
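Because the camera IDs are not strictly consecutive (frame 17 maps to cam20), it is safest to recover this mapping by parsing the filenames rather than computing it; a sketch:

```python
import os
import re

FILENAME_RE = re.compile(r'sample_(\d{3})_cam(\d{2})\.jpg')

img_dir = 'coffee_martini_processed/256/images'
frame_to_cam = {}
for name in sorted(os.listdir(img_dir)):
    m = FILENAME_RE.fullmatch(name)
    if m:
        frame_to_cam[int(m.group(1))] = int(m.group(2))

print(frame_to_cam)  # e.g. {0: 0, 1: 1, ..., 17: 20, 18: 0, ...}
```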
## Data Processing

### Original Data

- **Source resolution**: 2704×2028 (4:3 aspect ratio)
- **Original format**: Multi-view MP4 videos
- **Camera model**: LLFF format with `poses_bounds.npy`
### Processing Pipeline

1. **Center Crop**: 2704×2028 → 2028×2028 (square)
2. **Resize**: 2028×2028 → 256×256 or 512×512
3. **Intrinsics Adjustment**: Focal length and principal point adjusted for the crop and resize (see the sketch below)
4. **Extrinsics Extraction**: Camera poses extracted from the LLFF format
5. **Format Conversion**: Converted to standard c2w/w2c matrices
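The intrinsics adjustment in step 3 reduces to shifting the principal point by the horizontal crop offset and scaling everything by the resize factor; a minimal sketch (the function name and the assumption of a horizontally centered crop are mine):

```python
def adjust_intrinsics(fx, fy, cx, cy, out_size=256, src_w=2704, src_h=2028):
    """Center-crop src_w x src_h to a square, then resize to out_size."""
    x_off = (src_w - src_h) / 2.0  # horizontal crop offset: 338 px
    scale = out_size / src_h       # resize factor: 256/2028 or 512/2028
    return fx * scale, fy * scale, (cx - x_off) * scale, cy * scale
```

Plugging the original principal point `cx = 2704/2` into this formula gives `cx = 128.0` at 256×256, consistent with the intrinsics listed above.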
### Frame Sampling Strategy

To capture the dynamic motion from multiple viewpoints, frames are sampled so that each successive frame shows the scene from the next camera angle. This creates a "synchronized" multi-view video in which:

- The temporal progression shows the dynamic action
- Each frame provides a different spatial viewpoint
- Camera angles loop after exhausting all 18 cameras (see the sketch below)
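In code, the schedule is just a cyclic walk over the rig; a sketch (`camera_ids` is a placeholder, since the real IDs skip some numbers, e.g. frame 17 maps to cam20):

```python
camera_ids = list(range(18))  # placeholder; actual rig IDs are not contiguous
num_frames = 32

schedule = [(frame, camera_ids[frame % len(camera_ids)])
            for frame in range(num_frames)]
# frame 0 -> first camera, frame 17 -> last camera, frame 18 loops back
```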
## Camera Visualization

Each processed scene includes an interactive 3D camera visualization (`camera_visualization.html`):

- **View camera positions and orientations** in 3D space
- **Interactive**: Rotate, pan, and zoom to explore the camera rig
- **Camera frustums**: Visualize the viewing direction and field of view
- **Trajectory path**: See the sequence of frames and camera transitions
- **Powered by Plotly**: High-quality interactive graphics
Open the HTML file in any web browser to explore the camera setup.
## Usage Examples

### Loading Camera Poses (Python)

```python
import json
import numpy as np

# Load from JSON
with open('coffee_martini_processed/256/transforms.json', 'r') as f:
    data = json.load(f)

# Access the first frame
frame0 = data['frames'][0]['front']
print(f"Camera intrinsics: fx={frame0['fx']}, fy={frame0['fy']}")
print(f"Camera position: {frame0['blender_camera_location']}")
print(f"Image path: {frame0['file_path']}")

# Load from NumPy
poses = np.load('coffee_martini_processed/256/transforms.npz')
intrinsics = poses['intrinsics']  # (32, 3, 3)
c2w = poses['extrinsics_c2w']     # (32, 4, 4)
```
### Loading Images

```python
import glob
import os

import cv2

scene_dir = 'coffee_martini_processed/256'
img_dir = os.path.join(scene_dir, 'images')

# Load all frames; the camera suffix varies per frame, so match it with a glob
frames = []
for i in range(32):
    pattern = os.path.join(img_dir, f'sample_{i:03d}_cam*.jpg')
    img_file = glob.glob(pattern)[0]
    img = cv2.imread(img_file)                  # OpenCV loads images as BGR
    img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)  # convert to RGB
    frames.append(img)

print(f"Loaded {len(frames)} frames, shape: {frames[0].shape}")
```
### PyTorch Dataset Example

```python
import json
import os

import numpy as np
import torch
from torch.utils.data import Dataset
from PIL import Image


class Neural3DVideoDataset(Dataset):
    def __init__(self, scene_dir):
        self.scene_dir = scene_dir
        # Load camera transforms
        with open(os.path.join(scene_dir, 'transforms.json'), 'r') as f:
            self.data = json.load(f)
        self.frames = self.data['frames']

    def __len__(self):
        return len(self.frames)

    def __getitem__(self, idx):
        frame_data = self.frames[idx]['front']

        # Load image as a float tensor in [0, 1]
        img_path = os.path.join(self.scene_dir, frame_data['file_path'])
        img = Image.open(img_path).convert('RGB')
        img = torch.from_numpy(np.array(img)).float() / 255.0

        # Build the 3x3 intrinsic matrix from the per-frame parameters
        intrinsics = torch.tensor([
            [frame_data['fx'], 0, frame_data['cx']],
            [0, frame_data['fy'], frame_data['cy']],
            [0, 0, 1],
        ], dtype=torch.float32)

        c2w = torch.tensor(frame_data['c2w'], dtype=torch.float32)

        return {
            'image': img,
            'intrinsics': intrinsics,
            'c2w': c2w,
            'timestamp': frame_data['timestamp'],
        }


# Usage
dataset = Neural3DVideoDataset('coffee_martini_processed/256')
sample = dataset[0]
print(f"Image shape: {sample['image'].shape}")
print(f"Camera position: {sample['c2w'][:, 3]}")
```
## Technical Details

### Camera Configuration

- **Number of cameras**: 18 per scene
- **Camera arrangement**: Surrounding the scene in a roughly circular pattern
- **Frame rate**: 30 FPS (original videos)
- **Camera model**: Pinhole camera with radial distortion (pre-undistorted)

### Coordinate System

- **World coordinates**: Right-handed coordinate system
- **Camera coordinates**:
  - X-axis: Right
  - Y-axis: Down
  - Z-axis: Forward (viewing direction)
- **c2w matrix**: Transforms from camera space to world space
- **w2c matrix**: Transforms from world space to camera space
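Given this convention, the camera's world-space position and viewing direction can be read directly off the columns of the c2w matrix; a sketch:

```python
import json
import numpy as np

with open('coffee_martini_processed/256/transforms.json') as f:
    frame0 = json.load(f)['frames'][0]['front']

c2w = np.asarray(frame0['c2w'])  # (3, 4): [right | down | forward | position]
position = c2w[:, 3]  # camera center in world coordinates
forward = c2w[:, 2]   # viewing direction (camera Z-axis)
```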
### Quality Settings

- **JPEG quality**: 95
- **Interpolation**: Bilinear (`cv2.INTER_LINEAR`)
- **Color space**: RGB (8-bit per channel)
## Citation

If you use this dataset in your research, please cite the original Neural 3D Video dataset:

```bibtex
@inproceedings{li2022neural,
  title={Neural 3D Video Synthesis from Multi-view Video},
  author={Li, Tianye and Slavcheva, Mira and Zollh{\"o}fer, Michael and Green, Simon and Lassner, Christoph and Kim, Changil and Schmidt, Tanner and Lovegrove, Steven and Goesele, Michael and Newcombe, Richard and Lv, Zhaoyang},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  year={2022}
}
```
## Processing Scripts

The data was processed using custom scripts available in the parent directory:

- `create_sync_video_with_poses.py` - single-scene processing
- `batch_process_scenes.py` - batch processing for all scenes
## License
This processed dataset inherits the CC-BY-4.0 license from the original Neural 3D Video dataset. Please respect the license terms when using this data.
## Contact
For questions or issues regarding this processed dataset, please contact the dataset maintainer.