NonMatchingSplitsSizeError
Hello, there is an error when loading the data. I tried re-downloading it, but the result is still the same:
>>> dataset = load_dataset("OPTML-Group/UnlearnCanvas")
Resolving data files: 100%|██████████████████████████| 331/331 [00:00<00:00, 768.19it/s]
Downloading data: 100%|██████████████████████████| 331/331 [00:00<00:00, 1601.78files/s]
Generating train split: 52745 examples [24:56, 35.25 examples/s]
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "site-packages/datasets/load.py", line 2609, in load_dataset
builder_instance.download_and_prepare(
File "site-packages/datasets/builder.py", line 1027, in download_and_prepare
self._download_and_prepare(
File "/site-packages/datasets/builder.py", line 1140, in _download_and_prepare
verify_splits(self.info.splits, split_dict)
File "site-packages/datasets/utils/info_utils.py", line 101, in verify_splits
raise NonMatchingSplitsSizesError(str(bad_splits))
datasets.utils.info_utils.NonMatchingSplitsSizesError: [{'expected': SplitInfo(name='train', num_bytes=76080381824.0, num_examples=24400, shard_lengths=None, dataset_name=None), 'recorded': SplitInfo(name='train', num_bytes=167271171931, num_examples=52745, shard_lengths=[160, 320, 320, 160, 160, 160, 320, 160, 160, 160, 320, 320, 320, 320, 160, 160, 160, 160, 160, 160, 160, 160, 160, 160, 160, 320, 320, 160, 160, 160, 160, 160, 160, 160, 160, 160, 160, 160, 160, 160, 160, 160, 160, 160, 320, 320, 320, 320, 320, 320, 320, 320, 320, 160, 160, 160, 160, 160, 160, 320, 320, 320, 160, 160, 160, 160, 160, 160, 160, 160, 160, 160, 160, 160, 160, 160, 160, 160, 160, 160, 320, 320, 320, 160, 160, 160, 160, 160, 160, 160, 160, 160, 318, 318, 318, 159, 159, 159, 159, 159, 159, 159, 159, 159, 318, 318, 318, 159, 159, 159, 159, 159, 159, 159, 159, 159, 159, 159, 159, 159, 159, 159, 159, 159, 159, 159, 159, 159, 159, 159, 159, 159, 159, 159, 159, 159, 159, 318, 318, 318, 159, 159, 159, 159, 159, 159, 159, 159, 159, 318, 318, 318, 318, 318, 159, 159, 159, 159, 159, 159, 159, 159, 159, 159, 159, 159, 159, 159, 159, 159, 159, 159, 159, 159, 159, 159, 159, 159, 159, 159, 159, 159, 159, 318, 318, 318, 318, 318, 318, 318, 318, 159, 159, 159, 159, 159, 318, 318, 318, 318, 318, 318, 318, 318, 159, 159, 318, 318, 318, 318, 318, 318, 318, 318, 318, 318, 318, 318, 318, 318, 318, 318, 159, 159, 159, 159, 159, 318, 159, 159, 159, 159, 159, 159, 159, 159, 318, 318, 318, 318, 318, 318, 318, 318, 318, 318, 159, 159, 159, 159], dataset_name='unlearn_canvas')}]
Does this mean the data on Hugging Face is incomplete?
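(For anyone who just needs the dataset loaded while this is investigated: recent versions of `datasets` let you skip the split-size check with `verification_mode="no_checks"`. This only suppresses the error; any duplicate examples in the repository will still end up in the split.)

```python
from datasets import load_dataset

# Skips verify_splits(), which is what raises NonMatchingSplitsSizesError.
# Duplicates are NOT removed; the resulting split may contain extra examples.
dataset = load_dataset("OPTML-Group/UnlearnCanvas", verification_mode="no_checks")
```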
Hi, thanks for your interest in our work. This is weird; I have tried to reproduce your error in several environments on my side, but I did not encounter this problem. Could you please provide more details?
Hello, I tried `load_dataset` again, but I still got the same error:
(new) [ic084yyx@cirrus-login1 small_diffusion_finetuning]$ python data.py
Downloading readme: 100%|█████████████████████████████████| 4.16k/4.16k [00:00<00:00, 2.61MB/s]
Resolving data files: 100%|████████████████████████████████| 331/331 [00:00<00:00, 1913.22it/s]
Downloading data: 100%|█████████████████████████████████| 331/331 [1:54:48<00:00, 20.81s/files]
Generating train split: 52745 examples [12:42, 69.15 examples/s]
Traceback (most recent call last):
File "/scratch/space1/ic084/unlearning/finetuning/small_diffusion_finetuning/data.py", line 3, in <module>
ds = load_dataset("OPTML-Group/UnlearnCanvas")
File "/mnt/lustre/e1000/home/ic084/ic084/shared/team2/new/lib/python3.10/site-packages/datasets/load.py", line 2616, in load_dataset
builder_instance.download_and_prepare(
File "/mnt/lustre/e1000/home/ic084/ic084/shared/team2/new/lib/python3.10/site-packages/datasets/builder.py", line 1029, in download_and_prepare
self._download_and_prepare(
File "/mnt/lustre/e1000/home/ic084/ic084/shared/team2/new/lib/python3.10/site-packages/datasets/builder.py", line 1142, in _download_and_prepare
verify_splits(self.info.splits, split_dict)
File "/mnt/lustre/e1000/home/ic084/ic084/shared/team2/new/lib/python3.10/site-packages/datasets/utils/info_utils.py", line 77, in verify_splits
raise NonMatchingSplitsSizesError(str(bad_splits))
datasets.exceptions.NonMatchingSplitsSizesError: [{'expected': SplitInfo(name='train', num_bytes=76080381824.0, num_examples=24400, shard_lengths=None, dataset_name=None), 'recorded': SplitInfo(name='train', num_bytes=167271171931, num_examples=52745, shard_lengths=[160, 320, 320, 160, 160, 160, 320, 160, 160, 160, 320, 320, 320, 320, 160, 160, 160, 160, 160, 160, 160, 160, 160, 160, 160, 320, 320, 160, 160, 160, 160, 160, 160, 160, 160, 160, 160, 160, 160, 160, 160, 160, 160, 160, 320, 320, 320, 320, 320, 320, 320, 320, 320, 160, 160, 160, 160, 160, 160, 320, 320, 320, 160, 160, 160, 160, 160, 160, 160, 160, 160, 160, 160, 160, 160, 160, 160, 160, 160, 160, 320, 320, 320, 160, 160, 160, 160, 160, 160, 160, 160, 160, 318, 318, 318, 159, 159, 159, 159, 159, 159, 159, 159, 159, 318, 318, 318, 159, 159, 159, 159, 159, 159, 159, 159, 159, 159, 159, 159, 159, 159, 159, 159, 159, 159, 159, 159, 159, 159, 159, 159, 159, 159, 159, 159, 159, 159, 318, 318, 318, 159, 159, 159, 159, 159, 159, 159, 159, 159, 318, 318, 318, 318, 318, 159, 159, 159, 159, 159, 159, 159, 159, 159, 159, 159, 159, 159, 159, 159, 159, 159, 159, 159, 159, 159, 159, 159, 159, 159, 159, 159, 159, 159, 318, 318, 318, 318, 318, 318, 318, 318, 159, 159, 159, 159, 159, 318, 318, 318, 318, 318, 318, 318, 318, 159, 159, 318, 318, 318, 318, 318, 318, 318, 318, 318, 318, 318, 318, 318, 318, 318, 318, 159, 159, 159, 159, 159, 318, 159, 159, 159, 159, 159, 159, 159, 159, 318, 318, 318, 318, 318, 318, 318, 318, 318, 318, 159, 159, 159, 159], dataset_name='unlearn_canvas')}]
Hello,
I have the same issue
README.md: 4.16kB [00:00, 7.72MB/s]
Resolving data files: 100%|███████| 331/331 [00:00<00:00, 20712.77it/s]
data/train-00000-of-00153-79d12fbacb5609(…): 100%|█| 509M/509M [00:06<0
data/train-00001-of-00153-660dbb31ce0bef(…): 100%|█| 497M/497M [00:06<0
......
data/train-00147-of-00153-c154cdbfa4fb3e(…): 100%|█| 488M/488M [00:05<00:00, 89
data/train-00148-of-00153-b5a1d848f16581(…): 100%|█| 389M/389M [00:05<00:00, 74
data/train-00148-of-00153-fac0f5956d5514(…): 100%|█| 389M/389M [00:04<00:00, 93
data/train-00149-of-00153-12740f67129224(…): 100%|█| 392M/392M [00:05<00:00, 67
data/train-00149-of-00153-26c3ca2a1f711b(…): 100%|█| 392M/392M [00:06<00:00, 59
data/train-00150-of-00153-89b88e87d97c09(…): 100%|█| 477M/477M [00:06<00:00, 74
data/train-00150-of-00153-9453f2cfec9971(…): 100%|█| 477M/477M [00:05<00:00, 90
data/train-00151-of-00153-b6721d875e3d9f(…): 100%|█| 528M/528M [00:06<00:00, 76
data/train-00151-of-00153-d28e3f7c214ab6(…): 100%|█| 528M/528M [00:05<00:00, 88
data/train-00152-of-00153-545d1ed5bc24a4(…): 100%|█| 537M/537M [00:06<00:00, 80
data/train-00152-of-00153-f5ae329560d540(…): 100%|█| 537M/537M [00:05<00:00, 98
Downloading data: 100%|███████████| 331/331 [32:50<00:00, 5.95s/files]
Generating train split:  52%|███       | 12800/24400 [07:49<06:25, 30.12 examples/s]
......
Generating train split: 45590 examples [30:49, 23.64 examples/s]
There are multiple duplicate shard files in the repository (note, for example, the two different files for `train-00148-of-00153` in the log above), so more than the expected 24,400 example images are downloaded and the split verification fails.
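The duplicates are visible directly in the download log, where the same shard index appears under two different hashes. As a quick sanity check, the shard index can be parsed out of each file name and any index that occurs more than once flagged. This is just a sketch over the log output; `find_duplicate_shards` is a hypothetical helper, not part of the `datasets` library:

```python
import re
from collections import defaultdict

def find_duplicate_shards(filenames):
    """Group file names like 'data/train-00148-of-00153-<hash>.parquet'
    by shard index and return the indices that appear more than once."""
    by_index = defaultdict(list)
    for name in filenames:
        m = re.search(r"train-(\d+)-of-\d+", name)
        if m:
            by_index[m.group(1)].append(name)
    return {idx: names for idx, names in by_index.items() if len(names) > 1}

# File names copied from the download log above:
log_files = [
    "data/train-00147-of-00153-c154cdbfa4fb3e.parquet",
    "data/train-00148-of-00153-b5a1d848f16581.parquet",
    "data/train-00148-of-00153-fac0f5956d5514.parquet",
]
print(find_duplicate_shards(log_files))  # shard index 00148 appears twice
```

The same check can be run against the full repository file list (e.g. obtained with `huggingface_hub`) to confirm which shards need to be deduplicated.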