Abd223653 committed on
Commit 6d2e010 · verified · 1 Parent(s): 6008378

Add/update dataset card

Files changed (1)
  1. README.md +130 -31
README.md CHANGED
@@ -1,33 +1,132 @@
  ---
- dataset_info:
-   features:
-   - name: image_index
-     dtype: int64
-   - name: qid
-     dtype: string
-   - name: question_string
-     dtype: string
-   - name: answer_bbox
-     dtype: string
-   - name: template
-     dtype: string
-   - name: answer
-     dtype: string
-   - name: answer_id
-     dtype: int64
-   - name: type
-     dtype: string
-   - name: question_id
-     dtype: int64
-   splits:
-   - name: train
-     num_bytes: 1118292863
-     num_examples: 5733893
-   download_size: 224163587
-   dataset_size: 1118292863
- configs:
- - config_name: default
-   data_files:
-   - split: train
-     path: data/train-*
+ license: apache-2.0
+ task_categories:
+ - other
+ language:
+ - en
+ tags:
+ - dataset
+ - pandas
+ - parquet
+ size_categories:
+ - 1M<n<10M
+ pretty_name: PlotQA V1
  ---
+ 
+ # PlotQA V1
+ 
+ ## Dataset Description
+ 
+ This dataset contains the PlotQA V1 question-answer annotations (question text, answer, answer bounding box, and the index of the associated plot image). It was uploaded from a pandas DataFrame and is stored as Parquet files.
+ 
+ ## Dataset Structure
+ 
+ ### Overview
+ 
+ - **Total Examples**: 5,733,893
+ - **Total Features**: 9
+ - **Dataset Size**: ~2805.4 MB
+ - **Format**: Parquet files
+ - **Created**: 2025-09-22 20:12:01 UTC
+ 
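+ A quick way to check the numbers above without downloading the data is to read the repository's exposed metadata. This is a minimal sketch; for a repo that does not publish `dataset_info` metadata in its card, some of these fields may come back as `None`:
+ 
+ ```python
+ from datasets import load_dataset_builder
+ 
+ # Resolve the repo and read its metadata; no data files are downloaded here
+ builder = load_dataset_builder("Abd223653/PlotQA_V1")
+ info = builder.info
+ 
+ print(info.features)       # column schema, if published
+ if info.splits:
+     print(info.splits["train"].num_examples)  # expected: 5,733,893
+ print(info.download_size)  # compressed download size in bytes, if available
+ ```
+ 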
+ ### Data Instances
+ 
+ The dataset contains 5,733,893 rows and 9 columns.
+ 
+ ### Data Fields
+ 
+ - **image_index** (int64): 0 null values (0.0%), Range: [0.00, 157069.00], Mean: 78036.26
+ - **qid** (object): 0 null values (0.0%), 74 unique values
+ - **question_string** (object): 0 null values (0.0%), 1,502,530 unique values
+ - **answer_bbox** (object): 0 null values (0.0%), 798,805 unique values
+ - **template** (object): 0 null values (0.0%), 6 unique values
+ - **answer** (object): 0 null values (0.0%), 1,002,651 unique values
+ - **answer_id** (int64): 0 null values (0.0%), Range: [0.00, 1481788.00], Mean: 185454.21
+ - **type** (object): 0 null values (0.0%), 4 unique values
+ - **question_id** (int64): 0 null values (0.0%), Range: [0.00, 2170651.00], Mean: 441648.27
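+ 
+ The per-field profile above can be reproduced with a short pandas pass. A sketch, assuming the full train split fits in memory:
+ 
+ ```python
+ from datasets import load_dataset
+ 
+ # Load the train split and convert it to a pandas DataFrame
+ df = load_dataset("Abd223653/PlotQA_V1", split="train").to_pandas()
+ 
+ # Null counts and unique-value counts per column, as listed above
+ print(df.isnull().sum())
+ print(df.nunique())
+ 
+ # Low-cardinality fields such as `template` and `type` can be inspected directly
+ print(df["template"].value_counts())
+ print(df["type"].value_counts())
+ ```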
+ 
+ ### Data Splits
+ 
+ | Split | Number of Examples |
+ |-------|-------------------|
+ | train | 5,733,893 |
+ 
+ ## Dataset Creation
+ 
+ This dataset was created by uploading a pandas DataFrame to the Hugging Face Hub using the `datasets` library.
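+ 
+ The upload step described above generally follows the standard `datasets` pattern. The snippet below is a sketch of that pattern, not the exact script used for this repo; `df` stands for a pandas DataFrame with the nine columns listed earlier:
+ 
+ ```python
+ from datasets import Dataset
+ 
+ # `df` is assumed to be the pandas DataFrame holding the annotations
+ dataset = Dataset.from_pandas(df, preserve_index=False)
+ 
+ # Writes the data to the Hub as parquet shards (requires being logged in)
+ dataset.push_to_hub("Abd223653/PlotQA_V1")
+ ```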
+ 
+ ### Source Data
+ 
+ The data was processed and uploaded as Parquet files for efficient storage and loading.
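+ 
+ Because the split is stored as plain Parquet under `data/`, individual shards can also be read directly with pandas through the Hub filesystem instead of going through `load_dataset`. A sketch, assuming `huggingface_hub` is installed; the glob pattern follows the `data/train-*` layout used by this repo:
+ 
+ ```python
+ import pandas as pd
+ from huggingface_hub import HfFileSystem
+ 
+ # List the train parquet shards in the repository
+ fs = HfFileSystem()
+ shards = fs.glob("datasets/Abd223653/PlotQA_V1/data/train-*.parquet")
+ 
+ # Read just the first shard rather than the whole split
+ df_shard = pd.read_parquet(f"hf://{shards[0]}")
+ print(df_shard.shape)
+ ```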
+ 
+ ## Usage
+ 
+ ### Loading the Dataset
+ 
+ ```python
+ from datasets import load_dataset
+ 
+ # Load the dataset
+ dataset = load_dataset("Abd223653/PlotQA_V1")
+ 
+ # Convert to pandas DataFrame
+ df = dataset["train"].to_pandas()
+ 
+ print(f"Dataset shape: {df.shape}")
+ print(f"Columns: {list(df.columns)}")
+ ```
+ 
+ ### Streaming (Memory Efficient)
+ 
+ ```python
+ from datasets import load_dataset
+ 
+ # Load dataset in streaming mode
+ dataset = load_dataset("Abd223653/PlotQA_V1", streaming=True)
+ train_stream = dataset["train"]
+ 
+ # Process in batches of 1,000 examples
+ for batch in train_stream.iter(batch_size=1000):
+     # Each batch is a dict mapping column names to lists of values
+     print(f"Processing batch with {len(batch['question_string'])} examples")
+ ```
+ 
+ ### Basic Data Analysis
+ 
+ ```python
+ import pandas as pd
+ from datasets import load_dataset
+ 
+ # Load and explore the dataset
+ dataset = load_dataset("Abd223653/PlotQA_V1")
+ df = dataset["train"].to_pandas()
+ 
+ # Basic statistics (df.info() prints directly and returns None)
+ df.info()
+ print(df.describe())
+ 
+ # Check for missing values
+ print("Missing values per column:")
+ print(df.isnull().sum())
+ ```
+ 
+ ## Data Quality
+ 
+ ### Missing Values
+ 
+ - **Total missing values**: 0
+ - **Columns with missing values**: 0
+ - **Percentage of complete rows**: 100.0%
+ 
+ ### Data Types
+ 
+ - **int64**: 3 columns
+ - **object**: 6 columns
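+ 
+ As a quick check of the dtype breakdown above:
+ 
+ ```python
+ from datasets import load_dataset
+ 
+ df = load_dataset("Abd223653/PlotQA_V1", split="train").to_pandas()
+ 
+ # Columns per pandas dtype; the card reports 3 int64 and 6 object columns
+ print(df.dtypes.value_counts())
+ ```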
+ 
+ ## Limitations and Considerations
+ 
+ - This dataset is provided as-is without warranty
+ - Users should validate data quality for their specific use cases
+ - Consider the licensing terms when using this dataset
+ - Large datasets may require streaming or chunked processing
+ 