---
license: apache-2.0
task_categories:
  - other
language:
  - en
tags:
  - dataset
  - pandas
  - parquet
size_categories:
  - 1M<n<10M
pretty_name: PlotQA V1
---

# PlotQA V1

## Dataset Description

This dataset contains the question-answer annotations of PlotQA V1, a benchmark for question answering over scientific plots. It was uploaded from a pandas DataFrame.

## Dataset Structure

### Overview

- Total Examples: 5,733,893
- Total Features: 9
- Dataset Size: ~2,805.4 MB
- Format: Parquet files
- Created: 2025-09-22 20:12:01 UTC
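
If you want this metadata programmatically, the snippet below is a minimal sketch that reads it from the Hub without downloading the parquet shards; which fields are populated depends on what the Hub has indexed for this repository.

```python
from datasets import load_dataset_builder

# Inspect dataset metadata without downloading the data files
builder = load_dataset_builder("Abd223653/PlotQA_V1")
print(builder.info.features)      # column names and types
print(builder.info.splits)        # split names and example counts, when available
print(builder.info.dataset_size)  # size in bytes, when available
```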

### Data Instances

The dataset contains 5,733,893 rows and 9 columns.

### Data Fields

- `image_index` (int64): 0 null values (0.0%), range [0, 157069], mean 78036.26
- `qid` (object): 0 null values (0.0%), 74 unique values
- `question_string` (object): 0 null values (0.0%), 1,502,530 unique values
- `answer_bbox` (object): 0 null values (0.0%), 798,805 unique values
- `template` (object): 0 null values (0.0%), 6 unique values
- `answer` (object): 0 null values (0.0%), 1,002,651 unique values
- `answer_id` (int64): 0 null values (0.0%), range [0, 1481788], mean 185454.21
- `type` (object): 0 null values (0.0%), 4 unique values
- `question_id` (int64): 0 null values (0.0%), range [0, 2170651], mean 441648.27
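
As a quick sanity check against the numbers above, the sketch below prints the inferred schema and the low-cardinality columns. Note that `unique()` scans the full split, so it downloads the whole dataset first.

```python
from datasets import load_dataset

# Load the train split and inspect its schema
ds = load_dataset("Abd223653/PlotQA_V1", split="train")
print(ds.features)

# Spot-check the low-cardinality columns described above
print(ds.unique("template"))  # expected: 6 unique templates
print(ds.unique("type"))      # expected: 4 unique answer types
```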

### Data Splits

| Split | Number of Examples |
|-------|--------------------|
| train | 5,733,893          |
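
Only a train split is published. If you need a held-out set, the sketch below carves one out locally with `train_test_split`; the 1% size and the seed are arbitrary choices, not part of the dataset.

```python
from datasets import load_dataset

dataset = load_dataset("Abd223653/PlotQA_V1")

# Create a local train/validation split (size and seed are arbitrary)
splits = dataset["train"].train_test_split(test_size=0.01, seed=42)
train_ds, eval_ds = splits["train"], splits["test"]
print(len(train_ds), len(eval_ds))
```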

## Dataset Creation

This dataset was created by uploading a pandas DataFrame to the Hugging Face Hub using the `datasets` library.
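
The exact upload code is not part of this card; the snippet below is a hypothetical reconstruction of the typical DataFrame-to-Hub workflow.

```python
import pandas as pd
from datasets import Dataset

# Hypothetical reconstruction of the upload step; the actual DataFrame
# and repository options used by the uploader are not documented here.
df = pd.DataFrame({"question_string": ["..."], "answer": ["..."]})
ds = Dataset.from_pandas(df, preserve_index=False)
ds.push_to_hub("Abd223653/PlotQA_V1")  # requires an authenticated write token
```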

### Source Data

The data was processed and uploaded as parquet files for efficient storage and loading.
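
To see which parquet files back the repository, you can list them via `huggingface_hub` rather than assuming their paths:

```python
from huggingface_hub import list_repo_files

# Discover the parquet shards that store the dataset
files = list_repo_files("Abd223653/PlotQA_V1", repo_type="dataset")
parquet_files = [f for f in files if f.endswith(".parquet")]
print(parquet_files)
```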

## Usage

### Loading the Dataset

```python
from datasets import load_dataset

# Load the dataset
dataset = load_dataset("Abd223653/PlotQA_V1")

# Convert to pandas DataFrame
df = dataset["train"].to_pandas()

print(f"Dataset shape: {df.shape}")
print(f"Columns: {list(df.columns)}")
```

### Streaming (Memory Efficient)

```python
from datasets import load_dataset

# Load the dataset in streaming mode (no full download up front)
dataset = load_dataset("Abd223653/PlotQA_V1", streaming=True)
train_stream = dataset["train"]

# Process in batches; each batch is a dict mapping column name -> list of values
for batch in train_stream.iter(batch_size=1000):
    # Process your batch here
    print(f"Processing batch with {len(batch['question_string'])} examples")
```

### Basic Data Analysis

```python
import pandas as pd
from datasets import load_dataset

# Load and explore the dataset
dataset = load_dataset("Abd223653/PlotQA_V1")
df = dataset["train"].to_pandas()

# Basic statistics
df.info()
print(df.describe())

# Check for missing values
print("Missing values per column:")
print(df.isnull().sum())
```
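
Beyond the generic checks above, the low-cardinality columns from the Data Fields section are a natural starting point for exploration; this sketch assumes the full split fits in memory.

```python
from datasets import load_dataset

df = load_dataset("Abd223653/PlotQA_V1", split="train").to_pandas()

# Distribution over question templates and answer types
print(df["template"].value_counts())
print(df["type"].value_counts())

# Length statistics for the question text
print(df["question_string"].str.len().describe())
```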

## Data Quality

### Missing Values

- Total missing values: 0
- Columns with missing values: 0
- Percentage of complete rows: 100.0%

### Data Types

- int64: 3 columns
- object: 6 columns
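
A quick way to confirm these counts after loading (again assuming the split fits in memory):

```python
from datasets import load_dataset

df = load_dataset("Abd223653/PlotQA_V1", split="train").to_pandas()
print(df.dtypes.value_counts())  # expected: object 6, int64 3
```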

## Limitations and Considerations

- This dataset is provided as-is, without warranty.
- Users should validate data quality for their specific use cases.
- Review the licensing terms (Apache-2.0) before using this dataset.
- At roughly 2.8 GB of parquet (more once loaded into pandas), the dataset may require streaming or chunked processing.