Dataset Preview
The full dataset viewer is not available; only a preview of the rows is shown below.
The dataset generation failed.

Error code:   DatasetGenerationError
Exception:    ArrowNotImplementedError
Message:      Cannot write struct type 'torch_compile_config' with no child field to Parquet. Consider adding a dummy child field.
Traceback:    Traceback (most recent call last):
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1870, in _prepare_split_single
                  writer.write_table(table)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/arrow_writer.py", line 620, in write_table
                  self._build_writer(inferred_schema=pa_table.schema)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/arrow_writer.py", line 441, in _build_writer
                  self.pa_writer = self._WRITER_CLASS(self.stream, schema)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/pyarrow/parquet/core.py", line 1010, in __init__
                  self.writer = _parquet.ParquetWriter(
                File "pyarrow/_parquet.pyx", line 2157, in pyarrow._parquet.ParquetWriter.__cinit__
                File "pyarrow/error.pxi", line 154, in pyarrow.lib.pyarrow_internal_check_status
                File "pyarrow/error.pxi", line 91, in pyarrow.lib.check_status
              pyarrow.lib.ArrowNotImplementedError: Cannot write struct type 'torch_compile_config' with no child field to Parquet. Consider adding a dummy child field.
              
              During handling of the above exception, another exception occurred:
              
              Traceback (most recent call last):
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1886, in _prepare_split_single
                  num_examples, num_bytes = writer.finalize()
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/arrow_writer.py", line 639, in finalize
                  self._build_writer(self.schema)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/arrow_writer.py", line 441, in _build_writer
                  self.pa_writer = self._WRITER_CLASS(self.stream, schema)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/pyarrow/parquet/core.py", line 1010, in __init__
                  self.writer = _parquet.ParquetWriter(
                File "pyarrow/_parquet.pyx", line 2157, in pyarrow._parquet.ParquetWriter.__cinit__
                File "pyarrow/error.pxi", line 154, in pyarrow.lib.pyarrow_internal_check_status
                File "pyarrow/error.pxi", line 91, in pyarrow.lib.check_status
              pyarrow.lib.ArrowNotImplementedError: Cannot write struct type 'torch_compile_config' with no child field to Parquet. Consider adding a dummy child field.
              
              The above exception was the direct cause of the following exception:
              
              Traceback (most recent call last):
                File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 1417, in compute_config_parquet_and_info_response
                  parquet_operations = convert_to_parquet(builder)
                File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 1049, in convert_to_parquet
                  builder.download_and_prepare(
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 924, in download_and_prepare
                  self._download_and_prepare(
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1000, in _download_and_prepare
                  self._prepare_split(split_generator, **prepare_split_kwargs)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1741, in _prepare_split
                  for job_id, done, content in self._prepare_split_single(
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1897, in _prepare_split_single
                  raise DatasetGenerationError("An error occurred while generating the dataset") from e
              datasets.exceptions.DatasetGenerationError: An error occurred while generating the dataset
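The failure comes from the empty struct columns in the configuration records (e.g. `"torch_compile_config": {}` in the `backend` field of the preview row): Arrow cannot write a struct type with no child fields to Parquet. One way to follow the error's own suggestion is to patch the records before the dataset is built. A minimal sketch, assuming the fix is applied to the raw dicts prior to upload; the `add_dummy_child` helper and the `_dummy` field name are hypothetical, not part of `datasets` or `optimum-benchmark`:

```python
def add_dummy_child(obj):
    """Recursively replace empty dicts (which become empty Parquet structs)
    with a single dummy child field so the schema can be written."""
    if isinstance(obj, dict):
        if not obj:
            return {"_dummy": None}
        return {k: add_dummy_child(v) for k, v in obj.items()}
    if isinstance(obj, list):
        return [add_dummy_child(v) for v in obj]
    return obj

# Hypothetical record mirroring the failing 'backend' column:
record = {"torch_compile": False, "torch_compile_config": {}, "peft_config": {}}
patched = add_dummy_child(record)
print(patched["torch_compile_config"])  # {'_dummy': None}
```

An alternative that avoids struct columns entirely is to serialize the nested config dicts to JSON strings before creating the dataset, at the cost of losing the structured schema.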
Preview schema (36 columns):

| Column | Type |
|---|---|
| experiment_name | string |
| backend | dict |
| launcher | dict |
| benchmark | dict |
| environment | dict |
| timestamp | timestamp[us] |
| project_name | string |
| run_id | string |
| duration | float64 |
| emissions | float64 |
| emissions_rate | float64 |
| cpu_power | float64 |
| gpu_power | float64 |
| ram_power | float64 |
| cpu_energy | float64 |
| gpu_energy | float64 |
| ram_energy | float64 |
| energy_consumed | float64 |
| country_name | string |
| country_iso_code | string |
| region | string |
| cloud_provider | string |
| cloud_region | string |
| os | string |
| python_version | string |
| codecarbon_version | string |
| cpu_count | int64 |
| cpu_model | string |
| gpu_count | int64 |
| gpu_model | string |
| longitude | float64 |
| latitude | float64 |
| ram_total_size | float64 |
| tracking_mode | string |
| on_cloud | string |
| pue | float64 |
Row 1 (benchmark configuration; all other columns null):

experiment_name: text_generation

backend:
{
  "name": "pytorch",
  "version": "2.4.0",
  "_target_": "optimum_benchmark.backends.pytorch.backend.PyTorchBackend",
  "task": "text-generation",
  "model": "NousResearch/Hermes-3-Llama-3.1-8B",
  "processor": "NousResearch/Hermes-3-Llama-3.1-8B",
  "library": "transformers",
  "device": "cuda",
  "device_ids": "0",
  "seed": 42,
  "inter_op_num_threads": null,
  "intra_op_num_threads": null,
  "hub_kwargs": {
    "revision": "main",
    "force_download": false,
    "local_files_only": false,
    "trust_remote_code": true
  },
  "no_weights": true,
  "device_map": null,
  "torch_dtype": null,
  "amp_autocast": false,
  "amp_dtype": null,
  "eval_mode": true,
  "to_bettertransformer": false,
  "low_cpu_mem_usage": null,
  "attn_implementation": null,
  "cache_implementation": null,
  "torch_compile": false,
  "torch_compile_config": {},
  "quantization_scheme": null,
  "quantization_config": {},
  "deepspeed_inference": false,
  "deepspeed_inference_config": {},
  "peft_type": null,
  "peft_config": {}
}

launcher:
{
  "name": "process",
  "_target_": "optimum_benchmark.launchers.process.launcher.ProcessLauncher",
  "device_isolation": false,
  "device_isolation_action": "warn",
  "start_method": "spawn"
}

benchmark:
{
  "name": "energy_star",
  "_target_": "optimum_benchmark.benchmarks.energy_star.benchmark.EnergyStarBenchmark",
  "dataset_name": "EnergyStarAI/text_generation",
  "dataset_config": "",
  "dataset_split": "train",
  "num_samples": 1000,
  "input_shapes": {
    "batch_size": 1
  },
  "text_column_name": "text",
  "truncation": true,
  "max_length": -1,
  "dataset_prefix1": "",
  "dataset_prefix2": "",
  "t5_task": "",
  "image_column_name": "image",
  "resize": false,
  "question_column_name": "question",
  "context_column_name": "context",
  "sentence1_column_name": "sentence1",
  "sentence2_column_name": "sentence2",
  "audio_column_name": "audio",
  "iterations": 10,
  "warmup_runs": 10,
  "energy": true,
  "forward_kwargs": {},
  "generate_kwargs": {
    "max_new_tokens": 10,
    "min_new_tokens": 10
  },
  "call_kwargs": {}
}

environment:
{
  "cpu": " Intel(R) Xeon(R) Platinum 8275CL CPU @ 3.00GHz",
  "cpu_count": 96,
  "cpu_ram_mb": 1204529.905664,
  "system": "Linux",
  "machine": "x86_64",
  "platform": "Linux-5.10.223-212.873.amzn2.x86_64-x86_64-with-glibc2.35",
  "processor": "x86_64",
  "python_version": "3.9.21",
  "gpu": [
    "NVIDIA A100-SXM4-80GB"
  ],
  "gpu_count": 1,
  "gpu_vram_mb": 85899345920,
  "optimum_benchmark_version": "0.2.0",
  "optimum_benchmark_commit": null,
  "transformers_version": "4.44.0",
  "transformers_commit": null,
  "accelerate_version": "0.33.0",
  "accelerate_commit": null,
  "diffusers_version": "0.30.0",
  "diffusers_commit": null,
  "optimum_version": null,
  "optimum_commit": null,
  "timm_version": null,
  "timm_commit": null,
  "peft_version": null,
  "peft_commit": null
}

Row 2 (codecarbon measurement; configuration columns null):

timestamp: 2025-01-14T02:35:39
project_name: codecarbon
run_id: 1b9facce-07dc-48ee-a9da-47fe33191783
duration: -1725969928.739702
emissions: 0.000019
emissions_rate: 0.00002
cpu_power: 120
gpu_power: 74.432008
ram_power: 0.364154
cpu_energy: 0.000032
gpu_energy: 0.00002
ram_energy: 0
energy_consumed: 0.000052
country_name: United States
country_iso_code: USA
region: virginia
os: Linux-5.10.223-212.873.amzn2.x86_64-x86_64-with-glibc2.35
python_version: 3.9.21
codecarbon_version: 2.5.1
cpu_count: 96
cpu_model: Intel(R) Xeon(R) Platinum 8275CL CPU @ 3.00GHz
gpu_count: 1
gpu_model: 1 x NVIDIA A100-SXM4-80GB
longitude: -77.4903
latitude: 39.0469
ram_total_size: 1121.805893
tracking_mode: process
on_cloud: N
pue: 1
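As a sanity check on the codecarbon fields above, the reported `energy_consumed` should be the sum of the per-component energies (values copied from the preview row; codecarbon reports energy in kWh):

```python
# codecarbon energy fields from the preview row (kWh)
cpu_energy = 0.000032
gpu_energy = 0.00002
ram_energy = 0.0

# The row reports energy_consumed = 0.000052 kWh, which matches the sum
total = cpu_energy + gpu_energy + ram_energy
assert abs(total - 0.000052) < 1e-9
print(f"{total:.6f}")  # 0.000052
```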
No dataset card yet.

Downloads last month: 12
