# Hugging Face Dataset Upload Instructions

## Files to Upload

### Core Dataset Files

1. **README.md** - Complete dataset card with metadata, description, and usage examples
2. **data.csv** - Clean CSV file with 516 scenarios and their misery scores
3. **load_dataset.py** - Python script for easy dataset loading and exploration
4. **requirements.txt** - Dependencies needed to use the dataset

### Supporting Files (Optional)

- **misery_index.py** - Advanced `datasets` library loading script
- **UPLOAD_INSTRUCTIONS.md** - This file (for reference)

## Upload Steps

### Method 1: Using Hugging Face Hub (Recommended)

1. **Install the Hugging Face Hub client**:

   ```bash
   pip install huggingface_hub
   ```

2. **Log in to Hugging Face**:

   ```bash
   huggingface-cli login
   ```

3. **Create and upload the dataset**:

   ```python
   from huggingface_hub import HfApi, create_repo

   # Create the dataset repository
   repo_id = "your-username/misery-index"
   create_repo(repo_id, repo_type="dataset")

   # Upload each file
   api = HfApi()
   for filename in ["README.md", "data.csv", "load_dataset.py", "requirements.txt"]:
       api.upload_file(
           path_or_fileobj=filename,
           path_in_repo=filename,
           repo_id=repo_id,
           repo_type="dataset",
       )
   ```

### Method 2: Using Git

1. **Clone the dataset repository**:

   ```bash
   git clone https://huggingface.co/datasets/your-username/misery-index
   cd misery-index
   ```

2. **Copy the files into the repository**:

   ```bash
   cp /path/to/your/files/* .
   ```

3. **Commit and push to Hugging Face**:

   ```bash
   git add .
   git commit -m "Add Misery Index dataset"
   git push
   ```

### Method 3: Web Interface

1. Go to [Hugging Face Datasets](https://huggingface.co/new-dataset)
2. Create a new dataset repository
3. Upload the files using the web interface
4. Edit README.md directly in the browser if needed

## Usage After Upload

Once uploaded, users can load the dataset in several ways.

### Using the Datasets Library

```python
from datasets import load_dataset

# Load from the Hugging Face Hub
dataset = load_dataset("your-username/misery-index")
print(dataset["train"][0])
```

### Using Pandas (Direct CSV)

```python
import pandas as pd
from huggingface_hub import hf_hub_download

# Download the CSV and load it into a DataFrame
file_path = hf_hub_download(
    repo_id="your-username/misery-index",
    filename="data.csv",
    repo_type="dataset",
)
df = pd.read_csv(file_path)
```

### Using the Provided Script

```python
import importlib.util
from huggingface_hub import hf_hub_download

# Download the load_dataset.py script
script_path = hf_hub_download(
    repo_id="your-username/misery-index",
    filename="load_dataset.py",
    repo_type="dataset",
)

# Import the downloaded script as a module and use its helpers
spec = importlib.util.spec_from_file_location("load_dataset", script_path)
load_module = importlib.util.module_from_spec(spec)
spec.loader.exec_module(load_module)

df = load_module.load_misery_dataset("data.csv")
stats = load_module.get_dataset_statistics(df)
```

## Dataset Configuration

The dataset uses these configurations:

- **License**: CC-BY-4.0 (Creative Commons Attribution)
- **Language**: English (en)
- **Task**: Text regression, sentiment analysis, emotion prediction
- **Size**: 516 samples (100
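After loading `data.csv` with pandas, a quick exploration helps confirm the data arrived intact. A minimal sketch, using a toy stand-in for the real 516-row CSV; the `scenario` and `misery_score` column names are assumptions for illustration, not confirmed by the dataset card:

```python
import pandas as pd

# Toy stand-in for data.csv; the real file has 516 rows.
# Column names here are hypothetical -- check the actual CSV header.
df = pd.DataFrame({
    "scenario": ["stubbed toe", "missed flight", "spilled coffee"],
    "misery_score": [20.0, 75.0, 35.0],
})

# Summary statistics for the score column
print(df["misery_score"].describe())

# The most miserable scenarios, highest score first
top = df.sort_values("misery_score", ascending=False).head(2)
print(top["scenario"].tolist())  # -> ['missed flight', 'spilled coffee']
```

The same `describe()` / `sort_values()` calls apply unchanged to the DataFrame returned by the pandas loading snippet above, whatever the real column names turn out to be.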