# Hugging Face Dataset Upload Instructions

## Files to Upload

### Core Dataset Files
1. **README.md** - Complete dataset card with metadata, description, and usage examples
2. **data.csv** - Clean CSV file with 516 scenarios and their misery scores
3. **load_dataset.py** - Python script for easy dataset loading and exploration
4. **requirements.txt** - Dependencies needed to use the dataset

### Supporting Files (Optional)
- **misery_index.py** - Advanced loading script for the `datasets` library
- **UPLOAD_INSTRUCTIONS.md** - This file (for reference)
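
For reference, a minimal requirements.txt for this dataset would likely pin just the loading dependencies; the exact list and versions are assumptions and depend on what load_dataset.py actually imports:

```text
datasets>=2.0.0
pandas>=1.3.0
huggingface_hub>=0.16.0
```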

## Upload Steps

### Method 1: Using the Hugging Face Hub (Recommended)

1. **Install the Hugging Face Hub library**:
   ```bash
   pip install huggingface_hub
   ```

2. **Log in to Hugging Face**:
   ```bash
   huggingface-cli login
   ```

3. **Create and upload the dataset**:
   ```python
   from huggingface_hub import HfApi, create_repo

   # Create repository
   repo_id = "your-username/misery-index"
   create_repo(repo_id, repo_type="dataset")

   # Upload files
   api = HfApi()
   for filename in ["README.md", "data.csv", "load_dataset.py", "requirements.txt"]:
       api.upload_file(
           path_or_fileobj=filename,
           path_in_repo=filename,
           repo_id=repo_id,
           repo_type="dataset",
       )
   ```

### Method 2: Using Git

1. **Clone the dataset repository**:
   ```bash
   git clone https://huggingface.co/datasets/your-username/misery-index
   cd misery-index
   ```

2. **Copy files**:
   ```bash
   cp /path/to/your/files/* .
   ```

3. **Push to Hugging Face**:
   ```bash
   git add .
   git commit -m "Add Misery Index Dataset"
   git push
   ```
87
+
88
+ ### Method 3: Web Interface
89
+
90
+ 1. Go to [Hugging Face Datasets](https://huggingface.co/new-dataset)
91
+ 2. Create a new dataset repository
92
+ 3. Upload files using the web interface
93
+ 4. Edit README.md directly in the browser if needed
94
+
95
+ ## Usage After Upload
96
+
97
+ Once uploaded, users can load the dataset in several ways:
98
+
99
+ ### Using Datasets Library
100
+ ```python
101
+ from datasets import load_dataset
102
+
103
+ # Load from Hugging Face Hub
104
+ dataset = load_dataset("your-username/misery-index")
105
+ print(dataset["train"][0])
106
+ ```
107
+
108
+ ### Using Pandas (Direct CSV)
109
+ ```python
110
+ import pandas as pd
111
+ from huggingface_hub import hf_hub_download
112
+
113
+ # Download and load CSV
114
+ file_path = hf_hub_download(
115
+ repo_id="your-username/misery-index",
116
+ filename="data.csv",
117
+ repo_type="dataset"
118
+ )
119
+ df = pd.read_csv(file_path)
120
+ ```
121
+
122
+ ### Using the Provided Script
123
+ ```python
124
+ # Download the load_dataset.py script and use it
125
+ from huggingface_hub import hf_hub_download
126
+ import importlib.util
127
+
128
+ # Download the script
129
+ script_path = hf_hub_download(
130
+ repo_id="your-username/misery-index",
131
+ filename="load_dataset.py",
132
+ repo_type="dataset"
133
+ )
134
+
135
+ # Load and use
136
+ spec = importlib.util.spec_from_file_location("load_dataset", script_path)
137
+ load_module = importlib.util.module_from_spec(spec)
138
+ spec.loader.exec_module(load_module)
139
+
140
+ df = load_module.load_misery_dataset("data.csv")
141
+ stats = load_module.get_dataset_statistics(df)
142
+ ```
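
For orientation, here is a hypothetical sketch of the two helpers that load_dataset.py is assumed to expose. The real script may differ; the column name `misery_score` and the returned statistics are assumptions based on the dataset description:

```python
import pandas as pd

def load_misery_dataset(csv_path: str) -> pd.DataFrame:
    """Load the misery-index CSV into a DataFrame."""
    return pd.read_csv(csv_path)

def get_dataset_statistics(df: pd.DataFrame) -> dict:
    """Summarize the misery_score column (hypothetical helper shape)."""
    return {
        "num_samples": len(df),
        "mean_score": float(df["misery_score"].mean()),
        "min_score": float(df["misery_score"].min()),
        "max_score": float(df["misery_score"].max()),
    }
```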
143
+
144
+ ## Dataset Configuration
145
+
146
+ The dataset uses these configurations:
147
+ - **License**: CC-BY-4.0 (Creative Commons Attribution)
148
+ - **Language**: English (en)
149
+ - **Task**: Text regression, sentiment analysis, emotion prediction
150
+ - **Size**: 516 samples (100<n<1K category)
151
+
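
These settings map onto the YAML metadata block at the top of README.md. A sketch of what that front matter might look like (the exact tag and category names are assumptions; verify them against the Hub's dataset card editor):

```yaml
license: cc-by-4.0
language:
  - en
task_categories:
  - text-classification
tags:
  - sentiment-analysis
  - emotion
size_categories:
  - n<1K
```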

## Validation Checklist

Before uploading, ensure that:
- [ ] README.md contains a comprehensive dataset card
- [ ] data.csv has all 516 rows with proper formatting
- [ ] There are no missing values in critical columns (scenario, misery_score)
- [ ] load_dataset.py runs without errors
- [ ] requirements.txt includes all necessary dependencies
- [ ] The license is properly specified (CC-BY-4.0)
- [ ] Dataset tags are appropriate for discoverability
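
The data-quality items above can be checked programmatically. A minimal sketch, assuming the column names `scenario` and `misery_score` from the checklist:

```python
import pandas as pd

def validate(csv_path: str, expected_rows: int = 516) -> list[str]:
    """Return a list of validation problems; an empty list means the file passes."""
    problems = []
    df = pd.read_csv(csv_path)
    # Row-count check
    if len(df) != expected_rows:
        problems.append(f"expected {expected_rows} rows, found {len(df)}")
    # Critical columns must exist and contain no missing values
    for col in ("scenario", "misery_score"):
        if col not in df.columns:
            problems.append(f"missing column: {col}")
        elif df[col].isna().any():
            problems.append(f"missing values in column: {col}")
    return problems
```

Running `validate("data.csv")` before upload returns an empty list when every check passes, and a human-readable list of problems otherwise.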

## Post-Upload Tasks

1. **Test the uploaded dataset**:
   ```python
   from datasets import load_dataset
   ds = load_dataset("your-username/misery-index")
   print(f"Dataset loaded successfully with {len(ds['train'])} samples")
   ```

2. **Update the dataset card** if needed using the web interface

3. **Share the dataset** with relevant research communities

4. **Consider training a model** on this dataset

## Support

If you encounter issues:
1. Check the [Hugging Face documentation](https://huggingface.co/docs/datasets/)
2. Visit the [Hugging Face Discord](https://discord.gg/huggingface)
3. Create an issue in the dataset repository