Upload README.md with huggingface_hub
README.md CHANGED
@@ -247,9 +247,37 @@ This dataset enables research across multiple domains:
 
---

+## Data Processing
+
+**Want to understand how this dataset was created?** The complete data processing pipeline is available in the repository:
+
+**Processing Script**: [`data_processing.py`](data_processing.py) - Complete pipeline for converting raw H5AD files to HuggingFace-ready parquet format
+
+**Key Processing Steps**:
+- **Expression Matrix**: Sparse-to-dense conversion with memory optimization
+- **Metadata Processing**: Cell and gene annotation standardization
+- **Dimensionality Reduction**: PCA, UMAP, t-SNE, scVI computation
+- **Quality Control**: Pandas index bug fixes and validation
+- **File Optimization**: Parquet compression and data type optimization
+
+**Technical Features**:
+- Memory-efficient chunked processing for large matrices (183K × 29K)
+- Automatic missing projection computation (PCA, t-SNE if needed)
+- Built-in quality validation and error handling
+- Comprehensive logging and progress tracking
+
+```bash
+# Run the processing pipeline
+python3 data_processing.py
+```
+
+*This transparency enables reproducibility and helps researchers understand data transformations applied to the original study data.*
+
+---
+
## Repository Contents

-This repository contains **12 files (1.6GB total)** organized into three categories:
+This repository contains **13 files (1.6GB total)** organized into four categories:

### **Core Dataset Files (1.6GB)**
*Essential files for machine learning and analysis*

@@ -273,6 +301,9 @@ This repository contains **12 files (1.6GB total)** organized into three categories:
- `README.md` - **This comprehensive dataset documentation**
- `LICENSE` - **MIT License for open research use**

+### **Processing Scripts**
+- `data_processing.py` - **Complete data processing pipeline** (H5AD → Parquet conversion)
+
---

## Detailed File Descriptions
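The added "Data Processing" section above describes a chunked, sparse-to-dense conversion of the expression matrix followed by Parquet output. A minimal sketch of that kind of pipeline is shown below; it assumes `anndata`, `scipy`, `pandas`, and `pyarrow` are installed, and the input/output file names, chunk size, and `zstd` compression are illustrative placeholders rather than the actual settings used by `data_processing.py`.

```python
# Hypothetical sketch of a chunked H5AD -> Parquet conversion; not the
# repository's actual data_processing.py. File names and chunk size are
# illustrative assumptions.
import anndata as ad
import numpy as np
import pandas as pd
import pyarrow as pa
import pyarrow.parquet as pq
from scipy import sparse

H5AD_PATH = "raw_data.h5ad"                 # assumed input file name
PARQUET_PATH = "expression_matrix.parquet"  # assumed output file name
CHUNK_SIZE = 2_000                          # cells per chunk; tune for available memory

adata = ad.read_h5ad(H5AD_PATH)

writer = None
for start in range(0, adata.n_obs, CHUNK_SIZE):
    end = min(start + CHUNK_SIZE, adata.n_obs)

    # Sparse-to-dense conversion, one block of cells at a time,
    # downcast to float32 to keep the output file small.
    block = adata.X[start:end]
    dense = block.toarray() if sparse.issparse(block) else np.asarray(block)
    df = pd.DataFrame(dense.astype(np.float32), columns=adata.var_names)

    # Store cell identifiers as an explicit column so the Parquet schema
    # does not depend on the pandas index.
    df.insert(0, "cell_id", adata.obs_names[start:end].to_numpy())

    table = pa.Table.from_pandas(df, preserve_index=False)
    if writer is None:
        writer = pq.ParquetWriter(PARQUET_PATH, table.schema, compression="zstd")
    writer.write_table(table)

if writer is not None:
    writer.close()
```

Streaming one table per chunk through a single `ParquetWriter` keeps peak memory proportional to the chunk size rather than to the full dense 183K × 29K matrix, which is the point of the "memory-efficient chunked processing" feature listed in the diff.

The "automatic missing projection computation" step could look like the following `scanpy`-based sketch, again an assumption rather than the repository's actual code:

```python
# Hypothetical sketch: compute PCA / t-SNE only if the source H5AD lacks them.
import anndata as ad
import scanpy as sc

adata = ad.read_h5ad("raw_data.h5ad")  # same assumed input file as above

if "X_pca" not in adata.obsm:
    sc.pp.pca(adata, n_comps=50)
if "X_tsne" not in adata.obsm:
    sc.tl.tsne(adata)
```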