Dataset schema (16 columns):
- instance_id: string (length 21-53)
- repo: string (188 distinct values)
- language: string (1 distinct value)
- pull_number: int64 (20-148k)
- title: string (length 6-144)
- body: string (length 0-83.4k)
- created_at: date string (2015-09-25 03:17:17 to 2025-07-10 16:50:35)
- problem_statement: string (length 188-240k)
- hints_text: string (length 0-145k)
- resolved_issues: list (length 1-6)
- base_commit: string (length 40)
- commit_to_review: dict
- reference_review_comments: list (length 1-62)
- merged_commit: string (length 40)
- merged_patch: string (length 297-9.87M)
- metadata: dict
voxel51__fiftyone-2353@02e9ba1
|
voxel51/fiftyone
|
Python
| 2,353 |
Provide custom task name for CVAT
|
## What changes are proposed in this pull request?
Closes #1753
1. A custom task name can be passed when labelling data in CVAT, e.g. `dataset.annotate("anno_key", backend="cvat", task_name="Custom task name", ...)` (see the sketch after this list)
2. The default task name for CVAT is changed to include the annotation key, e.g. `FiftyOne_{dataset_name}_{annotation_key}`
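A minimal usage sketch of the proposed `task_name` parameter; the zoo dataset and `label_field` below are illustrative (any dataset with a label field works), and a configured CVAT server is assumed:
```python
import fiftyone.zoo as foz

# Any dataset with a label field works here; quickstart is just an example
dataset = foz.load_zoo_dataset("quickstart")

dataset.annotate(
    "anno_key",
    backend="cvat",
    label_field="ground_truth",
    task_name="Custom task name",  # proposed argument from this PR
)
```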
## How is this patch tested? If it is not, please explain why.
Added a few lines to the existing CVAT unit tests.
## Release Notes
### Is this a user-facing change that should be mentioned in the release notes?
<!--
Please fill in relevant options below with an "x", or by clicking the checkboxes
after submitting this pull request. Example:
- [x] Selected option
-->
- [ ] No. You can skip the rest of this section.
- [x] Yes. Give a description of this change to be included in the release
notes for FiftyOne users.
A custom task name can be provided with the CVAT annotation backend. The default CVAT task name now includes the annotation key.
### What areas of FiftyOne does this PR affect?
- [ ] App: FiftyOne application changes
- [ ] Build: Build and test infrastructure changes
- [x] Core: Core `fiftyone` Python library changes
- [x] Documentation: FiftyOne documentation changes
- [ ] Other
|
2022-11-28T09:18:12Z
|
[FR] Allow task names to be provided when annotating with CVAT
Currently, when annotating with the CVAT integration, the names of the tasks that get generated are hardcoded as `FiftyOne_{dataset_name}`, which is not ideal when launching multiple annotation runs on the same dataset. There should be an argument allowing the user to provide one or more task names to use for the generated tasks.
Also, changing the default to `FiftyOne_{dataset_name}_{annotation_key}` would be helpful as well.
|
[
{
"body": "Currently, when annotating with the CVAT integration, the name for the tasks that get generated are hardcoded as `FiftyOne_{dataset_name}` which is not ideal when launching multiple annotation runs on the same dataset. There should be an argument allowing the user to provide one or more task names to use for the generated tasks.\r\n\r\nAlso, changing the default to `FiftyOne_{dataset_name}_{annotation_key}` would be helpful as well.",
"number": 1753,
"title": "[FR] Allow task names to be provided when annotating with CVAT"
}
] |
0d0f1b51326a7859dea7c655e06c528aa775e02c
|
{
"head_commit": "02e9ba17a750b4f3193c54bff01b1dde443af821",
"head_commit_message": "provide custom task name when uploading to cvat",
"patch_to_review": "diff --git a/fiftyone/utils/cvat.py b/fiftyone/utils/cvat.py\nindex 4c33cfc741a..a0986607094 100644\n--- a/fiftyone/utils/cvat.py\n+++ b/fiftyone/utils/cvat.py\n@@ -3060,6 +3060,7 @@ class CVATBackendConfig(foua.AnnotationBackendConfig):\n default, no project is used\n project_id (None): an optional ID of an existing CVAT project to which\n to upload the annotation tasks. By default, no project is used\n+ task_name (None): an optional task name to use for the created CVAT task\n occluded_attr (None): an optional attribute name containing existing\n occluded values and/or in which to store downloaded occluded values\n for all objects in the annotation run\n@@ -3091,6 +3092,7 @@ def __init__(\n job_reviewers=None,\n project_name=None,\n project_id=None,\n+ task_name=None,\n occluded_attr=None,\n group_id_attr=None,\n issue_tracker=None,\n@@ -3109,6 +3111,7 @@ def __init__(\n self.job_reviewers = job_reviewers\n self.project_name = project_name\n self.project_id = project_id\n+ self.task_name = task_name\n self.occluded_attr = occluded_attr\n self.group_id_attr = group_id_attr\n self.issue_tracker = issue_tracker\n@@ -4290,8 +4293,11 @@ def upload_samples(self, samples, backend):\n project_id = self.create_project(project_name, cvat_schema)\n project_ids.append(project_id)\n \n- _dataset_name = samples_batch._dataset.name.replace(\" \", \"_\")\n- task_name = \"FiftyOne_%s\" % _dataset_name\n+ if config.task_name is None:\n+ _dataset_name = samples_batch._dataset.name.replace(\" \", \"_\")\n+ task_name = \"FiftyOne_%s\" % _dataset_name\n+ else:\n+ task_name = config.task_name\n \n (\n task_id,\ndiff --git a/tests/intensive/cvat_tests.py b/tests/intensive/cvat_tests.py\nindex 07766210f81..a6738f56876 100644\n--- a/tests/intensive/cvat_tests.py\n+++ b/tests/intensive/cvat_tests.py\n@@ -417,6 +417,7 @@ def test_task_creation_arguments(self):\n \n anno_key = \"anno_key\"\n bug_tracker = \"test_tracker\"\n+ task_name = \"test_task\"\n results = dataset.annotate(\n anno_key,\n backend=\"cvat\",\n@@ -427,6 +428,7 @@ def test_task_creation_arguments(self):\n job_assignees=users,\n job_reviewers=users,\n issue_tracker=bug_tracker,\n+ task_name=task_name,\n )\n task_ids = results.task_ids\n with results:\n@@ -436,6 +438,7 @@ def test_task_creation_arguments(self):\n task_json = api.get(api.task_url(task_id)).json()\n self.assertEqual(task_json[\"bug_tracker\"], bug_tracker)\n self.assertEqual(task_json[\"segment_size\"], 1)\n+ self.assertEqual(task_json[\"name\"], task_name)\n if user is not None:\n self.assertEqual(task_json[\"assignee\"][\"username\"], user)\n for job in api.get(api.jobs_url(task_id)).json():\n"
}
|
[
{
"diff_hunk": "@@ -4290,8 +4293,18 @@ def upload_samples(self, samples, backend):\n project_id = self.create_project(project_name, cvat_schema)\n project_ids.append(project_id)\n \n- _dataset_name = samples_batch._dataset.name.replace(\" \", \"_\")\n- task_name = \"FiftyOne_%s\" % _dataset_name\n+ # use custom task name if provided, else use default name\n+ if config.task_name is None:\n+ _dataset_name = samples_batch._dataset.name.replace(\n+ \" \", \"_\"\n+ )\n+ latest_anno_key = _get_latest_anno_key(samples)",
"line": null,
"original_line": 4301,
"original_start_line": null,
"path": "fiftyone/utils/cvat.py",
"start_line": null,
"text": "@user1:\nThis is a clever workaround to get the `anno_key`, but now that `config.task_name` is supported, I think it would be best to just rely on the user putting their `anno_key` in the task name if that's what they want.\n\n@user1:\nIt is unfortunate that we didn't include `anno_key` in either the `upload_annotations()` method or in the `AnnotationBackendConfig`, because it's definitely reasonable to want to use it to generate default values like this 🤦 \n\n@author:\nI guess it's enough for me that I can provide a custom task name to differentiate between tasks for the same dataset. \n\nDo you think that will require a lot of/breaking changes to pass `anno_key` to `upload_annotations`?"
}
] |
12208fbe141ad664e23f2b51c48d2f6d3a4414f1
|
diff --git a/docs/source/integrations/cvat.rst b/docs/source/integrations/cvat.rst
index e7b1b38f7ea..b094454d2b3 100644
--- a/docs/source/integrations/cvat.rst
+++ b/docs/source/integrations/cvat.rst
@@ -492,6 +492,7 @@ provided:
otherwise a new project is created. By default, no project is used
- **project_id** (*None*): an optional ID of an existing CVAT project to
which to upload the annotation tasks. By default, no project is used
+- **task_name** (None): an optional task name to use for the created CVAT task
- **occluded_attr** (*None*): an optional attribute name containing existing
occluded values and/or in which to store downloaded occluded values for all
objects in the annotation run
diff --git a/fiftyone/utils/cvat.py b/fiftyone/utils/cvat.py
index 4c33cfc741a..5c6d780b6dc 100644
--- a/fiftyone/utils/cvat.py
+++ b/fiftyone/utils/cvat.py
@@ -6,6 +6,7 @@
| `voxel51.com <https://voxel51.com/>`_
|
"""
+import math
from collections import defaultdict
from copy import copy, deepcopy
from datetime import datetime
@@ -3060,6 +3061,7 @@ class CVATBackendConfig(foua.AnnotationBackendConfig):
default, no project is used
project_id (None): an optional ID of an existing CVAT project to which
to upload the annotation tasks. By default, no project is used
+ task_name (None): an optional task name to use for the created CVAT task
occluded_attr (None): an optional attribute name containing existing
occluded values and/or in which to store downloaded occluded values
for all objects in the annotation run
@@ -3091,6 +3093,7 @@ def __init__(
job_reviewers=None,
project_name=None,
project_id=None,
+ task_name=None,
occluded_attr=None,
group_id_attr=None,
issue_tracker=None,
@@ -3109,6 +3112,7 @@ def __init__(
self.job_reviewers = job_reviewers
self.project_name = project_name
self.project_id = project_id
+ self.task_name = task_name
self.occluded_attr = occluded_attr
self.group_id_attr = group_id_attr
self.issue_tracker = issue_tracker
@@ -4226,6 +4230,7 @@ def upload_samples(self, samples, backend):
num_samples = len(samples)
batch_size = self._get_batch_size(samples, task_size)
+ num_batches = math.ceil(num_samples / batch_size)
samples.compute_metadata()
@@ -4290,8 +4295,16 @@ def upload_samples(self, samples, backend):
project_id = self.create_project(project_name, cvat_schema)
project_ids.append(project_id)
- _dataset_name = samples_batch._dataset.name.replace(" ", "_")
- task_name = "FiftyOne_%s" % _dataset_name
+ if config.task_name is None:
+ _dataset_name = samples_batch._dataset.name.replace(
+ " ", "_"
+ )
+ task_name = f"FiftyOne_{_dataset_name}"
+ else:
+ task_name = config.task_name
+ # append task number when multiple tasks are created
+ if num_batches > 1:
+ task_name += f"_{idx + 1}"
(
task_id,
diff --git a/tests/intensive/cvat_tests.py b/tests/intensive/cvat_tests.py
index 07766210f81..7c82c47d207 100644
--- a/tests/intensive/cvat_tests.py
+++ b/tests/intensive/cvat_tests.py
@@ -417,6 +417,7 @@ def test_task_creation_arguments(self):
anno_key = "anno_key"
bug_tracker = "test_tracker"
+ task_name = "test_task"
results = dataset.annotate(
anno_key,
backend="cvat",
@@ -427,15 +428,17 @@ def test_task_creation_arguments(self):
job_assignees=users,
job_reviewers=users,
issue_tracker=bug_tracker,
+ task_name=task_name,
)
task_ids = results.task_ids
with results:
api = results.connect_to_api()
self.assertEqual(len(task_ids), 2)
- for task_id in task_ids:
+ for idx, task_id in enumerate(task_ids):
task_json = api.get(api.task_url(task_id)).json()
self.assertEqual(task_json["bug_tracker"], bug_tracker)
self.assertEqual(task_json["segment_size"], 1)
+ self.assertEqual(task_json["name"], f"{task_name}_{idx + 1}")
if user is not None:
self.assertEqual(task_json["assignee"]["username"], user)
for job in api.get(api.jobs_url(task_id)).json():
|
{
"difficulty": "medium",
"estimated_review_effort": 3,
"problem_domain": "New Feature Additions"
}
|
|
voxel51__fiftyone-1793@dbbc6d9
|
voxel51/fiftyone
|
Python
| 1,793 |
adding filename exception logging for failed xml parsing
|
Currently, when an invalid or malformed XML file is parsed, an `ExpatError` is raised. This change logs the filename that produced the error before re-raising it to the calling code.
|
2022-05-26T16:12:10Z
|
[BUG] Importing VOCDetectionDataset from disk fails due to bad XML - but what file?!
### System information
- **OS Platform and Distribution (e.g., Linux Ubuntu 16.04)**: MacOS 10.15.7
- **FiftyOne installed from (pip or source)**: pip
- **FiftyOne version (run `fiftyone --version`)**: FiftyOne v0.15.1, Voxel51, Inc.
- **Python version**: Python 3.6.8
### Commands to reproduce
As thoroughly as possible, please provide the Python and/or shell commands used
to encounter the issue. Application steps can be described in the next section.
From within a Jupyter notebook (so no source level debugging available). See code below.
### Describe the problem
One of my Pascal VOC format XML files evidently cannot be parsed for some reason. I'm not sure which one. FiftyOne fails with the following message. The exception provides no information about which file it failed on. So I cannot find and fix the problem in the Pascal VOC label file.
So the bug is that the exception needs to provide information about what file it failed on so that the user can fix or delete the offending file.
```
23% |███|-------------| 201/893 [644.9ms elapsed, 2.2s remaining, 323.8 samples/s]
---------------------------------------------------------------------------
ExpatError Traceback (most recent call last)
<ipython-input-23-18ab5bbecbe2> in <module>
----> 1 dataset = fo.Dataset.from_dir(dataset_type=fo.types.VOCDetectionDataset, data_path=dataset_path, labels_path=dataset_path, name=dataset_name)
~/envs/fiftyone/lib/python3.6/site-packages/fiftyone/core/dataset.py in from_dir(cls, dataset_dir, dataset_type, data_path, labels_path, name, label_field, tags, **kwargs)
3748 label_field=label_field,
3749 tags=tags,
-> 3750 **kwargs,
3751 )
3752 return dataset
~/envs/fiftyone/lib/python3.6/site-packages/fiftyone/core/dataset.py in add_dir(self, dataset_dir, dataset_type, data_path, labels_path, label_field, tags, expand_schema, add_info, **kwargs)
2536 tags=tags,
2537 expand_schema=expand_schema,
-> 2538 add_info=add_info,
2539 )
2540
~/envs/fiftyone/lib/python3.6/site-packages/fiftyone/core/dataset.py in add_importer(self, dataset_importer, label_field, tags, expand_schema, add_info)
3080 tags=tags,
3081 expand_schema=expand_schema,
-> 3082 add_info=add_info,
3083 )
3084
~/envs/fiftyone/lib/python3.6/site-packages/fiftyone/utils/data/importers.py in import_samples(dataset, dataset_importer, label_field, tags, expand_schema, add_info)
128 samples = map(parse_sample, iter(dataset_importer))
129 sample_ids = dataset.add_samples(
--> 130 samples, expand_schema=expand_schema, num_samples=num_samples
131 )
132
~/envs/fiftyone/lib/python3.6/site-packages/fiftyone/core/dataset.py in add_samples(self, samples, expand_schema, validate, num_samples)
1453 sample_ids = []
1454 with fou.ProgressBar(total=num_samples) as pb:
-> 1455 for batch in batcher:
1456 sample_ids.extend(
1457 self._add_samples_batch(batch, expand_schema, validate)
~/envs/fiftyone/lib/python3.6/site-packages/fiftyone/core/utils.py in __next__(self)
919 idx = 0
920 while idx < batch_size:
--> 921 batch.append(next(self._iter))
922 idx += 1
923
~/envs/fiftyone/lib/python3.6/site-packages/fiftyone/utils/voc.py in __next__(self)
134 if labels_path:
135 # Labeled image
--> 136 annotation = load_voc_detection_annotations(labels_path)
137
138 # Use image filename from annotation file if possible
~/envs/fiftyone/lib/python3.6/site-packages/fiftyone/utils/voc.py in load_voc_detection_annotations(xml_path)
742 a :class:`VOCAnnotation` instance
743 """
--> 744 return VOCAnnotation.from_xml(xml_path)
745
746
~/envs/fiftyone/lib/python3.6/site-packages/fiftyone/utils/voc.py in from_xml(cls, xml_path)
463 a :class:`VOCAnnotation`
464 """
--> 465 d = fou.load_xml_as_json_dict(xml_path)
466 return cls.from_dict(d)
467
~/envs/fiftyone/lib/python3.6/site-packages/fiftyone/core/utils.py in load_xml_as_json_dict(xml_path)
664 """
665 with open(xml_path, "rb") as f:
--> 666 return xmltodict.parse(f.read())
667
668
~/envs/fiftyone/lib/python3.6/site-packages/xmltodict.py in parse(xml_input, encoding, expat, process_namespaces, namespace_separator, disable_entities, process_comments, **kwargs)
376 parser.Parse(b'',True)
377 else:
--> 378 parser.Parse(xml_input, True)
379 return handler.item
380
ExpatError: not well-formed (invalid token): line 1, column 1
```
### Code to reproduce issue
Provide a reproducible test case that is the bare minimum necessary to generate
the problem.
```
import fiftyone as fo
dataset_path = '.../images/trailcam/scraped_jan_2022/scraped'
dataset_name = 'scraped_jan_2022'
dataset = fo.Dataset.from_dir(dataset_type=fo.types.VOCDetectionDataset, data_path=dataset_path, labels_path=dataset_path, name=dataset_name)
```
### Other info / logs
Include any logs or source code that would be helpful to diagnose the problem.
If including tracebacks, please include the full traceback. Large logs and
files should be attached. Please do not use screenshots for sharing text. Code
snippets should be used instead when providing tracebacks, logs, etc.
### What areas of FiftyOne does this bug affect?
- [ ] `App`: FiftyOne application issue
- [x] `Core`: Core `fiftyone` Python library issue
- [ ] `Server`: Fiftyone server issue
### Willingness to contribute
The FiftyOne Community encourages bug fix contributions. Would you or another
member of your organization be willing to contribute a fix for this bug to the
FiftyOne codebase?
- [ ] Yes. I can contribute a fix for this bug independently.
- [ ] Yes. I would be willing to contribute a fix for this bug with guidance
from the FiftyOne community.
- [ ] No. I cannot contribute a bug fix at this time.
|
Definitely agree that a more informative error that includes the offending filename is called for here 💪
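For reference, a hedged workaround sketch (not part of this PR) that scans a directory of VOC label files and reports which ones fail to parse, using the same `xmltodict`/`ExpatError` machinery shown in the traceback above; the directory path is illustrative:
```python
import glob
import os
from xml.parsers.expat import ExpatError

import xmltodict

labels_dir = "/path/to/voc/labels"  # illustrative path

# Try each label file individually so the offending one can be identified
for xml_path in sorted(glob.glob(os.path.join(labels_dir, "*.xml"))):
    try:
        with open(xml_path, "rb") as f:
            xmltodict.parse(f.read())
    except ExpatError as e:
        print(f"Malformed XML in {xml_path}: {e}")
```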
|
[
{
"body": "### System information\r\n\r\n- **OS Platform and Distribution (e.g., Linux Ubuntu 16.04)**: MacOS 10.15.7\r\n- **FiftyOne installed from (pip or source)**: pip\r\n- **FiftyOne version (run `fiftyone --version`)**: FiftyOne v0.15.1, Voxel51, Inc.\r\n- **Python version**: Python 3.6.8\r\n\r\n### Commands to reproduce\r\n\r\nAs thoroughly as possible, please provide the Python and/or shell commands used\r\nto encounter the issue. Application steps can be described in the next section.\r\n\r\nFrom within a Jupyter notebook (so no source level debugging available). See code below.\r\n\r\n### Describe the problem\r\n\r\nOne of my Pascal VOC format XML files evidently cannot be parsed for some reason. I'm not sure which one. FiftyOne fails with the following message. The exception provides no information about which file it failed on. So I cannot find and fix the problem in the Pascal VOC label file.\r\nSo the bug is that the exception needs to provide information about what file it failed on so that the user can fix or delete the offending file.\r\n\r\n```\r\n 23% |███|-------------| 201/893 [644.9ms elapsed, 2.2s remaining, 323.8 samples/s] \r\n---------------------------------------------------------------------------\r\nExpatError Traceback (most recent call last)\r\n<ipython-input-23-18ab5bbecbe2> in <module>\r\n----> 1 dataset = fo.Dataset.from_dir(dataset_type=fo.types.VOCDetectionDataset, data_path=dataset_path, labels_path=dataset_path, name=dataset_name)\r\n\r\n~/envs/fiftyone/lib/python3.6/site-packages/fiftyone/core/dataset.py in from_dir(cls, dataset_dir, dataset_type, data_path, labels_path, name, label_field, tags, **kwargs)\r\n 3748 label_field=label_field,\r\n 3749 tags=tags,\r\n-> 3750 **kwargs,\r\n 3751 )\r\n 3752 return dataset\r\n\r\n~/envs/fiftyone/lib/python3.6/site-packages/fiftyone/core/dataset.py in add_dir(self, dataset_dir, dataset_type, data_path, labels_path, label_field, tags, expand_schema, add_info, **kwargs)\r\n 2536 tags=tags,\r\n 2537 expand_schema=expand_schema,\r\n-> 2538 add_info=add_info,\r\n 2539 )\r\n 2540 \r\n\r\n~/envs/fiftyone/lib/python3.6/site-packages/fiftyone/core/dataset.py in add_importer(self, dataset_importer, label_field, tags, expand_schema, add_info)\r\n 3080 tags=tags,\r\n 3081 expand_schema=expand_schema,\r\n-> 3082 add_info=add_info,\r\n 3083 )\r\n 3084 \r\n\r\n~/envs/fiftyone/lib/python3.6/site-packages/fiftyone/utils/data/importers.py in import_samples(dataset, dataset_importer, label_field, tags, expand_schema, add_info)\r\n 128 samples = map(parse_sample, iter(dataset_importer))\r\n 129 sample_ids = dataset.add_samples(\r\n--> 130 samples, expand_schema=expand_schema, num_samples=num_samples\r\n 131 )\r\n 132 \r\n\r\n~/envs/fiftyone/lib/python3.6/site-packages/fiftyone/core/dataset.py in add_samples(self, samples, expand_schema, validate, num_samples)\r\n 1453 sample_ids = []\r\n 1454 with fou.ProgressBar(total=num_samples) as pb:\r\n-> 1455 for batch in batcher:\r\n 1456 sample_ids.extend(\r\n 1457 self._add_samples_batch(batch, expand_schema, validate)\r\n\r\n~/envs/fiftyone/lib/python3.6/site-packages/fiftyone/core/utils.py in __next__(self)\r\n 919 idx = 0\r\n 920 while idx < batch_size:\r\n--> 921 batch.append(next(self._iter))\r\n 922 idx += 1\r\n 923 \r\n\r\n~/envs/fiftyone/lib/python3.6/site-packages/fiftyone/utils/voc.py in __next__(self)\r\n 134 if labels_path:\r\n 135 # Labeled image\r\n--> 136 annotation = load_voc_detection_annotations(labels_path)\r\n 137 \r\n 138 # Use image filename from annotation file if 
possible\r\n\r\n~/envs/fiftyone/lib/python3.6/site-packages/fiftyone/utils/voc.py in load_voc_detection_annotations(xml_path)\r\n 742 a :class:`VOCAnnotation` instance\r\n 743 \"\"\"\r\n--> 744 return VOCAnnotation.from_xml(xml_path)\r\n 745 \r\n 746 \r\n\r\n~/envs/fiftyone/lib/python3.6/site-packages/fiftyone/utils/voc.py in from_xml(cls, xml_path)\r\n 463 a :class:`VOCAnnotation`\r\n 464 \"\"\"\r\n--> 465 d = fou.load_xml_as_json_dict(xml_path)\r\n 466 return cls.from_dict(d)\r\n 467 \r\n\r\n~/envs/fiftyone/lib/python3.6/site-packages/fiftyone/core/utils.py in load_xml_as_json_dict(xml_path)\r\n 664 \"\"\"\r\n 665 with open(xml_path, \"rb\") as f:\r\n--> 666 return xmltodict.parse(f.read())\r\n 667 \r\n 668 \r\n\r\n~/envs/fiftyone/lib/python3.6/site-packages/xmltodict.py in parse(xml_input, encoding, expat, process_namespaces, namespace_separator, disable_entities, process_comments, **kwargs)\r\n 376 parser.Parse(b'',True)\r\n 377 else:\r\n--> 378 parser.Parse(xml_input, True)\r\n 379 return handler.item\r\n 380 \r\n\r\nExpatError: not well-formed (invalid token): line 1, column 1\r\n```\r\n\r\n### Code to reproduce issue\r\n\r\nProvide a reproducible test case that is the bare minimum necessary to generate\r\nthe problem.\r\n\r\n```\r\nimport fiftyone as fo\r\ndataset_path = '.../images/trailcam/scraped_jan_2022/scraped'\r\ndataset_name = 'scraped_jan_2022'\r\ndataset = fo.Dataset.from_dir(dataset_type=fo.types.VOCDetectionDataset, data_path=dataset_path, labels_path=dataset_path, name=dataset_name)\r\n```\r\n\r\n### Other info / logs\r\n\r\nInclude any logs or source code that would be helpful to diagnose the problem.\r\nIf including tracebacks, please include the full traceback. Large logs and\r\nfiles should be attached. Please do not use screenshots for sharing text. Code\r\nsnippets should be used instead when providing tracebacks, logs, etc.\r\n\r\n### What areas of FiftyOne does this bug affect?\r\n\r\n- [ ] `App`: FiftyOne application issue\r\n- [x] `Core`: Core `fiftyone` Python library issue\r\n- [ ] `Server`: Fiftyone server issue\r\n\r\n### Willingness to contribute\r\n\r\nThe FiftyOne Community encourages bug fix contributions. Would you or another\r\nmember of your organization be willing to contribute a fix for this bug to the\r\nFiftyOne codebase?\r\n\r\n- [ ] Yes. I can contribute a fix for this bug independently.\r\n- [ ] Yes. I would be willing to contribute a fix for this bug with guidance\r\n from the FiftyOne community.\r\n- [ ] No. I cannot contribute a bug fix at this time.\r\n",
"number": 1777,
"title": "[BUG] Importing VOCDetectionDataset from disk fails due to bad XML - but what file?!"
}
] |
1a9c9bf0fe3f1a5a332629e2f711ab8d4358a7c4
|
{
"head_commit": "dbbc6d949de59509441e459ca034d28ce23e23e5",
"head_commit_message": "adding filename exception logging for failed xml parsing",
"patch_to_review": "diff --git a/fiftyone/core/utils.py b/fiftyone/core/utils.py\nindex cbc6462d98f..f30e145b95d 100644\n--- a/fiftyone/core/utils.py\n+++ b/fiftyone/core/utils.py\n@@ -6,11 +6,6 @@\n |\n \"\"\"\n import atexit\n-from base64 import b64encode, b64decode\n-from collections import defaultdict\n-from contextlib import contextmanager\n-from copy import deepcopy\n-from datetime import date, datetime\n import hashlib\n import importlib\n import inspect\n@@ -19,8 +14,8 @@\n import logging\n import ntpath\n import os\n-import posixpath\n import platform\n+import posixpath\n import signal\n import struct\n import subprocess\n@@ -28,6 +23,15 @@\n import types\n import zlib\n \n+from base64 import b64decode\n+from base64 import b64encode\n+from collections import defaultdict\n+from contextlib import contextmanager\n+from copy import deepcopy\n+from datetime import date\n+from datetime import datetime\n+from xml.parsers.expat import ExpatError\n+\n try:\n import pprintpp as _pprint\n \n@@ -662,8 +666,12 @@ def load_xml_as_json_dict(xml_path):\n Returns:\n a JSON dict\n \"\"\"\n- with open(xml_path, \"rb\") as f:\n- return xmltodict.parse(f.read())\n+ try:\n+ with open(xml_path, \"rb\") as f:\n+ return xmltodict.parse(f.read())\n+ except ExpatError as ex:\n+ logger.error(f\"Failed to read {xml_path}: {ex}\")\n+ raise\n \n \n def parse_serializable(obj, cls):\n"
}
|
[
{
"diff_hunk": "@@ -19,15 +14,24 @@\n import logging\n import ntpath\n import os\n-import posixpath\n import platform\n+import posixpath\n import signal\n import struct\n import subprocess\n import timeit\n import types\n import zlib\n \n+from base64 import b64decode",
"line": null,
"original_line": 26,
"original_start_line": null,
"path": "fiftyone/core/utils.py",
"start_line": null,
"text": "@user1:\nnit: per [style guide](https://github.com/voxel51/fiftyone/blob/develop/STYLE_GUIDE.md#imports), we currently ignore `import`/`from` for grouping purposes.\r\n\r\nIE, import blocks are defined solely based on whether it is builtin/third-party/project, not by import strategy used."
},
{
"diff_hunk": "@@ -662,8 +666,12 @@ def load_xml_as_json_dict(xml_path):\n Returns:\n a JSON dict\n \"\"\"\n- with open(xml_path, \"rb\") as f:\n- return xmltodict.parse(f.read())\n+ try:\n+ with open(xml_path, \"rb\") as f:\n+ return xmltodict.parse(f.read())\n+ except ExpatError as ex:\n+ logger.error(f\"Failed to read {xml_path}: {ex}\")",
"line": null,
"original_line": 673,
"original_start_line": null,
"path": "fiftyone/core/utils.py",
"start_line": null,
"text": "@user1:\nnit: per [style guide](https://github.com/voxel51/fiftyone/blob/develop/STYLE_GUIDE.md#logging) we currently prefer directly raising exceptions to `logger.error()`:\r\n\r\n```py\r\nraise ExpatError(f\"Failed to read {xml_path}: {e}\")\r\n```"
}
] |
cbd342bb38a25805450d635dcec6a213f2ceba47
|
diff --git a/fiftyone/core/utils.py b/fiftyone/core/utils.py
index cbc6462d98f..3bec6a035cf 100644
--- a/fiftyone/core/utils.py
+++ b/fiftyone/core/utils.py
@@ -26,6 +26,7 @@
import subprocess
import timeit
import types
+from xml.parsers.expat import ExpatError
import zlib
try:
@@ -662,8 +663,11 @@ def load_xml_as_json_dict(xml_path):
Returns:
a JSON dict
"""
- with open(xml_path, "rb") as f:
- return xmltodict.parse(f.read())
+ try:
+ with open(xml_path, "rb") as f:
+ return xmltodict.parse(f.read())
+ except ExpatError as ex:
+ raise ExpatError(f"Failed to read {xml_path}: {ex}")
def parse_serializable(obj, cls):
|
{
"difficulty": "low",
"estimated_review_effort": 2,
"problem_domain": "Bug Fixes"
}
|
voxel51__fiftyone-1878@33ca8a8
|
voxel51/fiftyone
|
Python
| 1,878 |
Maintain active dataset fields
|
Resolves #1852
|
2022-06-13T19:26:55Z
|
[BUG] Changing session.view resets field visibility choices
On `fiftyone==0.16.2`, updating `session.view` resets any field visibility toggles I may have set (eg, unselected all label fields), and forces the defaults (all label fields visible). I don't think this used to be the case though?
This came up when I was trying to work with an interactive plot. I just wanted to see images with no labels, but every time I made a selection in the linked plot, the labels kept re-appearing, which was annoying. I don't recall facing this issue before.
Of course persisting sidebar settings is a bit tricky because views can change the label schema.
|
[
{
"body": "On `fiftyone==0.16.2`, updating `session.view` resets any field visibility toggles I may have set (eg, unselected all label fields), and forces the defaults (all label fields visible). I don't think this used to be the case though?\r\n\r\nThis came up when I was trying to work with an interactive plot. I just wanted to see images with no labels, but every time I made a selection in the linked plot, the labels kept re-appearing, which was annoying. I don't recall facing this issue before.\r\n\r\nOf course persisting sidebar settings is a bit tricky because views can change the label schema.",
"number": 1852,
"title": "[BUG] Changing session.view resets field visibility choices"
}
] |
9fb2226edb77fd0eb34a7f70d8621da5c84cd7ce
|
{
"head_commit": "33ca8a8acdc169b7f202a9256ee5ff89ceb44dc3",
"head_commit_message": "reset active field only on dataset change",
"patch_to_review": "diff --git a/app/packages/app/src/Root/Datasets/Dataset.tsx b/app/packages/app/src/Root/Datasets/Dataset.tsx\nindex cce7f8843d3..a0f86b00407 100644\n--- a/app/packages/app/src/Root/Datasets/Dataset.tsx\n+++ b/app/packages/app/src/Root/Datasets/Dataset.tsx\n@@ -1,18 +1,19 @@\n import { Route, RouterContext } from \"@fiftyone/components\";\n+import { toCamelCase } from \"@fiftyone/utilities\";\n import React, { useContext, useEffect } from \"react\";\n import { graphql, usePreloadedQuery } from \"react-relay\";\n+import { useRecoilValue } from \"recoil\";\n \n import DatasetComponent from \"../../components/Dataset\";\n import { useStateUpdate } from \"../../utils/hooks\";\n import { DatasetQuery } from \"./__generated__/DatasetQuery.graphql\";\n import { datasetName } from \"../../recoil/selectors\";\n-import { useRecoilValue } from \"recoil\";\n import transformDataset from \"./transformDataset\";\n+import * as atoms from \"../../recoil/atoms\";\n import { filters } from \"../../recoil/filters\";\n import { _activeFields } from \"../../recoil/schema\";\n import { State } from \"../../recoil/types\";\n import { similarityParameters } from \"../../components/Actions/Similar\";\n-import { toCamelCase } from \"@fiftyone/utilities\";\n \n const Query = graphql`\n query DatasetQuery($name: String!, $view: JSONArray) {\n@@ -97,17 +98,24 @@ export const Dataset: Route<DatasetQuery> = ({ prepared }) => {\n const update = useStateUpdate();\n \n useEffect(() => {\n- update(({ reset }) => {\n+ update(({ reset, get }) => {\n reset(filters);\n- reset(_activeFields({ modal: false }));\n reset(similarityParameters);\n \n+ const newDataset = transformDataset(dataset);\n+\n+ const oldDataset = get(atoms.dataset);\n+ oldDataset && console.log(oldDataset.id, newDataset.id);\n+ if (!oldDataset || oldDataset.id !== newDataset.id) {\n+ reset(_activeFields({ modal: false }));\n+ }\n+\n return {\n colorscale: router.state.colorscale,\n config: router.state.config\n ? (toCamelCase(router.state.config) as State.Config)\n : undefined,\n- dataset: transformDataset(dataset),\n+ dataset: newDataset,\n state: router.state.state,\n };\n });\n"
}
|
[
{
"diff_hunk": "@@ -97,17 +98,24 @@ export const Dataset: Route<DatasetQuery> = ({ prepared }) => {\n const update = useStateUpdate();\n \n useEffect(() => {\n- update(({ reset }) => {\n+ update(({ reset, get }) => {\n reset(filters);\n- reset(_activeFields({ modal: false }));\n reset(similarityParameters);\n \n+ const newDataset = transformDataset(dataset);\n+\n+ const oldDataset = get(atoms.dataset);\n+ oldDataset && console.log(oldDataset.id, newDataset.id);\n+ if (!oldDataset || oldDataset.id !== newDataset.id) {\n+ reset(_activeFields({ modal: false }));",
"line": null,
"original_line": 110,
"original_start_line": null,
"path": "app/packages/app/src/Root/Datasets/Dataset.tsx",
"start_line": null,
"text": "@user1:\nIs there a reason this isn't in `useStateUpdate()`? Looks like there are similar things there.\n\n@author:\nGood point, fixed"
},
{
"diff_hunk": "@@ -97,17 +98,24 @@ export const Dataset: Route<DatasetQuery> = ({ prepared }) => {\n const update = useStateUpdate();\n \n useEffect(() => {\n- update(({ reset }) => {\n+ update(({ reset, get }) => {\n reset(filters);\n- reset(_activeFields({ modal: false }));\n reset(similarityParameters);\n \n+ const newDataset = transformDataset(dataset);\n+\n+ const oldDataset = get(atoms.dataset);\n+ oldDataset && console.log(oldDataset.id, newDataset.id);",
"line": null,
"original_line": 108,
"original_start_line": null,
"path": "app/packages/app/src/Root/Datasets/Dataset.tsx",
"start_line": null,
"text": "@user1:\nStray console.log?\n\n@author:\nYes, thanks"
}
] |
81b0953addd1b6b11854df796bb64acce028989d
|
diff --git a/app/packages/app/src/Root/Datasets/Dataset.tsx b/app/packages/app/src/Root/Datasets/Dataset.tsx
index cce7f8843d3..d0085586cdb 100644
--- a/app/packages/app/src/Root/Datasets/Dataset.tsx
+++ b/app/packages/app/src/Root/Datasets/Dataset.tsx
@@ -1,18 +1,17 @@
import { Route, RouterContext } from "@fiftyone/components";
+import { toCamelCase } from "@fiftyone/utilities";
import React, { useContext, useEffect } from "react";
import { graphql, usePreloadedQuery } from "react-relay";
+import { useRecoilValue } from "recoil";
import DatasetComponent from "../../components/Dataset";
import { useStateUpdate } from "../../utils/hooks";
import { DatasetQuery } from "./__generated__/DatasetQuery.graphql";
import { datasetName } from "../../recoil/selectors";
-import { useRecoilValue } from "recoil";
import transformDataset from "./transformDataset";
import { filters } from "../../recoil/filters";
-import { _activeFields } from "../../recoil/schema";
import { State } from "../../recoil/types";
import { similarityParameters } from "../../components/Actions/Similar";
-import { toCamelCase } from "@fiftyone/utilities";
const Query = graphql`
query DatasetQuery($name: String!, $view: JSONArray) {
@@ -99,7 +98,6 @@ export const Dataset: Route<DatasetQuery> = ({ prepared }) => {
useEffect(() => {
update(({ reset }) => {
reset(filters);
- reset(_activeFields({ modal: false }));
reset(similarityParameters);
return {
diff --git a/app/packages/app/src/utils/hooks.ts b/app/packages/app/src/utils/hooks.ts
index 13561f6c82a..b6b5af6ce2e 100644
--- a/app/packages/app/src/utils/hooks.ts
+++ b/app/packages/app/src/utils/hooks.ts
@@ -38,6 +38,7 @@ import { getDatasetName } from "./generic";
import { RouterContext } from "@fiftyone/components";
import { RGB } from "@fiftyone/looker";
import { DatasetQuery } from "../Root/Datasets/__generated__/DatasetQuery.graphql";
+import { _activeFields } from "../recoil/schema";
export const useEventHandler = (
target,
@@ -310,7 +311,7 @@ export const useStateUpdate = () => {
const { colorscale, config, dataset, state } =
resolve instanceof Function ? resolve(t) : resolve;
- const { get, set } = t;
+ const { get, reset, set } = t;
if (state) {
const view = get(viewAtoms.view);
@@ -351,9 +352,9 @@ export const useStateUpdate = () => {
dataset.evaluations = Object.values(dataset.evaluations || {});
const groups = resolveGroups(dataset);
- const current = get(sidebarGroupsDefinition(false));
+ const currentSidebar = get(sidebarGroupsDefinition(false));
- if (JSON.stringify(groups) !== JSON.stringify(current)) {
+ if (JSON.stringify(groups) !== JSON.stringify(currentSidebar)) {
set(sidebarGroupsDefinition(false), groups);
set(
aggregationAtoms.aggregationsTick,
@@ -361,6 +362,11 @@ export const useStateUpdate = () => {
);
}
+ const previousDataset = get(atoms.dataset);
+ if (!previousDataset || previousDataset.id !== dataset.id) {
+ reset(_activeFields({ modal: false }));
+ }
+
set(atoms.dataset, dataset);
}
diff --git a/fiftyone/server/query.py b/fiftyone/server/query.py
index f76e4f39521..f94bec5d629 100644
--- a/fiftyone/server/query.py
+++ b/fiftyone/server/query.py
@@ -175,9 +175,7 @@ async def resolver(
view = fov.DatasetView._build(ds, view or [])
if view._dataset != ds:
d = view._dataset._serialize()
- dataset.id = (
- ObjectId()
- ) # if it is not the root dataset, change the id (relay requires it)
+ dataset.id = view._dataset._doc.id
dataset.media_type = d["media_type"]
dataset.sample_fields = [
from_dict(SampleField, s)
@@ -285,7 +283,7 @@ def serialize_dataset(dataset: fod.Dataset, view: fov.DatasetView) -> t.Dict:
if view is not None and view._dataset != dataset:
d = view._dataset._serialize()
data.media_type = d["media_type"]
- data.id = ObjectId()
+ data.id = view._dataset._doc.id
data.sample_fields = [
from_dict(SampleField, s)
for s in _flatten_fields([], d["sample_fields"])
|
{
"difficulty": "medium",
"estimated_review_effort": 3,
"problem_domain": "Bug Fixes"
}
|
|
voxel51__fiftyone-1283@8c0bc4a
|
voxel51/fiftyone
|
Python
| 1,283 |
Relax `opencv-python-headless` version requirement
|
Removes the pin on `opencv-python-headless`. The original reason for this was to prevent building wheels from source for new installs during the few hours the source dist was available, but binary dists were not.
`opencv-python-headless` is a popular package that is only updated a few times a year, so a loose requirement is likely best.
### Issues
* Resolves #1282
|
2021-09-22T17:12:28Z
|
[FR] update opencv-python-headless
### Proposal Summary
Currently this repo requires opencv-python-headless<=4.4.0.46. To cut a long story short, there are no wheels available for Python 3.9 and I am unable to install fiftyone (I am using docker `image: jupyter/scipy-notebook:latest`). However, version `4.5.3.56` installs without issue, and I propose updating the requirement for this dependency.
|
[
{
"body": "### Proposal Summary\r\nCurrently this repo requires opencv-python-headless<=4.4.0.46. To cut a long story short there are no wheels available for python3.9 and I am unable to install fiftyone (I am using docker `image: jupyter/scipy-notebook:latest`). However version `4.5.3.56` is available for install without issue, and I propose updating the requirement for this dependency.",
"number": 1282,
"title": "[FR] update opencv-python-headless"
}
] |
634f707fe4c02f925906efced047d681e6f2d1ca
|
{
"head_commit": "8c0bc4a90ff7dde7f03929b8883ba4441b6d876c",
"head_commit_message": "rm opencv-python-headless pin",
"patch_to_review": "diff --git a/setup.py b/setup.py\nindex 84cb5a72929..ed5c33b62d7 100644\n--- a/setup.py\n+++ b/setup.py\n@@ -78,6 +78,7 @@ def get_version():\n \"mongoengine==0.20.0\",\n \"motor>=2.3,<3\",\n \"numpy\",\n+ \"opencv-python-headless>=4.4,<5\",\n \"packaging\",\n \"pandas\",\n \"Pillow>=6.2\",\n@@ -96,12 +97,6 @@ def get_version():\n \"universal-analytics-python3>=1.0.1,<2\",\n # internal packages\n \"voxel51-eta>=0.5.2,<0.6\",\n- # ETA dependency - restricted to a maximum version known to provide\n- # wheels here because it tends to publish sdists several hours before\n- # wheels. When users install FiftyOne in this window, they will need to\n- # compile OpenCV from source, leading to either errors or a\n- # time-consuming installation.\n- \"opencv-python-headless<=4.4.0.46\",\n ],\n classifiers=[\n \"Development Status :: 4 - Beta\",\n"
}
|
[
{
"diff_hunk": "@@ -78,6 +78,7 @@ def get_version():\n \"mongoengine==0.20.0\",\n \"motor>=2.3,<3\",\n \"numpy\",\n+ \"opencv-python-headless>=4.4,<5\",",
"line": null,
"original_line": 81,
"original_start_line": null,
"path": "setup.py",
"start_line": null,
"text": "@user1:\nIs there a reason to include a version requirement at all? I'm not aware of anything we're using that has or will change anytime soon.\n\n@author:\nI'll remove"
}
] |
275d1bf698d882303b53a8011e958c7f6423aa09
|
diff --git a/setup.py b/setup.py
index 84cb5a72929..7df55703079 100644
--- a/setup.py
+++ b/setup.py
@@ -78,6 +78,7 @@ def get_version():
"mongoengine==0.20.0",
"motor>=2.3,<3",
"numpy",
+ "opencv-python-headless",
"packaging",
"pandas",
"Pillow>=6.2",
@@ -96,12 +97,6 @@ def get_version():
"universal-analytics-python3>=1.0.1,<2",
# internal packages
"voxel51-eta>=0.5.2,<0.6",
- # ETA dependency - restricted to a maximum version known to provide
- # wheels here because it tends to publish sdists several hours before
- # wheels. When users install FiftyOne in this window, they will need to
- # compile OpenCV from source, leading to either errors or a
- # time-consuming installation.
- "opencv-python-headless<=4.4.0.46",
],
classifiers=[
"Development Status :: 4 - Beta",
|
{
"difficulty": "low",
"estimated_review_effort": 2,
"problem_domain": "Dependency Updates & Env Compatibility"
}
|
|
voxel51__fiftyone-4236@2af48de
|
voxel51/fiftyone
|
Python
| 4,236 |
Lazily connect to database when needed
|
Closes #182
Closes #1964
Closes #1804
Closes #3189
## What changes are proposed in this pull request?
Lazily connect to the database so that `fiftyone` can be imported without a database connection.
Most of the work was testing.
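A minimal sketch of the lazy-connection idea, reusing the existing `fiftyone.core.odm.establish_db_conn()` helper seen in the tracebacks below; the module-level flag and `_connect_if_needed()` wrapper are illustrative, not necessarily the helpers this PR actually adds:
```python
import fiftyone.core.config as foc
import fiftyone.core.odm as foo

_connected = False

def _connect_if_needed():
    # Defer the MongoDB connection until a database-backed operation
    # (e.g. creating a Dataset) runs, so `import fiftyone` alone never
    # touches the database
    global _connected
    if not _connected:
        foo.establish_db_conn(foc.load_config())
        _connected = True
```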
## How is this patch tested? If it is not, please explain why.
### Lazy connection testing
A sampling of things users might want to be able to do without connecting to a database
#### Version
BEFORE
```shell
$ FIFTYONE_DATABASE_URI=invalid.com:27017 fiftyone --version
...
Traceback (most recent call last):
File "/Users/stuart/dev/fiftyone/fresh_venv/bin/fiftyone", line 5, in <module>
from fiftyone.core.cli import main
File "/Users/stuart/dev/fiftyone/fiftyone/__init__.py", line 25, in <module>
from fiftyone.__public__ import *
File "/Users/stuart/dev/fiftyone/fiftyone/__public__.py", line 17, in <module>
_foo.establish_db_conn(config)
File "/Users/stuart/dev/fiftyone/fiftyone/core/odm/database.py", line 216, in establish_db_conn
_validate_db_version(config, _client)
File "/Users/stuart/dev/fiftyone/fiftyone/core/odm/database.py", line 301, in _validate_db_version
raise ConnectionError("Could not connect to `mongod`") from e
ConnectionError: Could not connect to `mongod`
```
AFTER
```shell
$ FIFTYONE_DATABASE_URI=invalid.com:27017 fiftyone --version
FiftyOne v0.24.0, Voxel51, Inc.
```
#### Config
BEFORE
```shell
$ FIFTYONE_DATABASE_URI=invalid.com:27017 fiftyone config
...
ConnectionError: Could not connect to `mongod`
```
AFTER
```shell
$ FIFTYONE_DATABASE_URI=invalid.com:27017 fiftyone config
FiftyOne v0.24.0, Voxel51, Inc.
```
#### Zoo
BEFORE
```shell
$ FIFTYONE_DATABASE_URI=invalid.com:27017 fiftyone zoo datasets download quickstart
...
ConnectionError: Could not connect to `mongod`
```
AFTER
```shell
$ FIFTYONE_DATABASE_URI=invalid.com:27017 fiftyone zoo datasets download quickstart
Dataset already downloaded
```
#### Data model
BEFORE
```python
import fiftyone as fo
...
ConnectionError: Could not connect to `mongod`
```
AFTER
```python
import fiftyone as fo
test_sample = fo.Sample(
"/path/to/sample.png",
ground_truth=fo.Detections(
detections=[
fo.Detection(label="thing", bounding_box=[0.5, 0.5, 0.5, 0.5])
]
),
)
assert test_sample.ground_truth.detections[0].label == "thing"
```
#### Actually requires DB stuff
BOTH
```python
import fiftyone as fo
ds = fo.Dataset()
...
ConnectionError: Could not connect to `mongod`
```
### Correctness Testing
Made sure things still work for normal operation.
This is where I'll likely need some input / ideas.
Works for me both with `database_uri` set and with FiftyOne spinning up `mongod` itself.
1. Unit tests pass
2. `fiftyone datasets list`
3. `fiftyone app launch` and launch app from shell
1. Load dataset
2. Manipulate view via session
3. Save view
4. Filter via sidebar
5. Patches
6. Compute/view embeddings+viz
7. simple dataset-using operator (example_target_view)
4. Debug mode: run `server/main.py` and `yarn dev` separately
5. Make sure the migration gets checked/run on first DB usage as well
6. mongoengine documents can be saved before other operations have established the connection
Other
1. Spins up one `mongod` that is shared by other FiftyOne processes (that are using the DB), and it is not killed until the last user is done.
## Release Notes
### Is this a user-facing change that should be mentioned in the release notes?
- [ ] No. You can skip the rest of this section.
- [x] Yes. Give a description of this change to be included in the release
notes for FiftyOne users.
<pre>
Importing ``fiftyone`` no longer attempts a database connection; the connection is deferred until it is actually required.
</pre>
### What areas of FiftyOne does this PR affect?
- [ ] App: FiftyOne application changes
- [ ] Build: Build and test infrastructure changes
- [x] Core: Core `fiftyone` Python library changes
- [ ] Documentation: FiftyOne documentation changes
- [ ] Other
<!-- This is an auto-generated comment: release notes by coderabbit.ai -->
## Summary by CodeRabbit
- **Refactor**
- Improved database migration process based on environment variable configuration.
- Enhanced database connection management in the repository factory module.
- **Tests**
- Optimized unit tests for plugin secrets management with streamlined code.
<!-- end of auto-generated comment: release notes by coderabbit.ai -->
|
2024-04-05T15:38:30Z
|
Lazily start DB service to reduce import time?
Currently, running `fiftyone config`, which simply loads and prints one's FO config, takes ~2 seconds to execute on my machine, because `import fiftyone` currently triggers a DB service to be started, among other things.
Can we adopt a lazy initialization strategy where the DB service is only started when absolutely necessary?
There are many valid use cases, like the CLI, or utilizing many of the utility methods in `fiftyone.utils`, where no DB connection is needed.
[FR] Reduce FiftyOne import time
### System information
- **OS Platform and Distribution (e.g., Linux Ubuntu 16.04)**: Macos 12.0.1
- **FiftyOne installed from (pip or source)**: pip
- **FiftyOne version (run `fiftyone --version`)**: v0.15.1
- **Python version**: 3.10.4
### Commands to reproduce
`fiftyone`
or
`fiftyone --version`
or
`python -c "import fiftyone"`
### Describe the problem
Importing fiftyone in general is VERY slow, often taking 4-6 seconds.
`import fiftyone`
This is not desirable if fiftyone is needed for command line applications.
The main culprits seem to be:
- submodules in fiftyone reference back to the public API, which means importing submodules imports everything, e.g. `fiftyone.core.dataset` contains `import fiftyone`
- In `fiftyone/__public__.py` the following files seem to be the culprits:
+ **core.config**: 548.17ms
+ **core.odm**: 1019.60ms
+ **core.plots**: 620.60ms
+ core.session: 54.95ms
+ load_config(): 214.52ms
+ **establish_db_conn()**: 3761.12ms
### Code to reproduce issue
`import fiftyone`
`from fiftyone.core.dataset import Dataset`
- It should be noted that this seems to have back references to `fiftyone.__public__`. This should not be the case across the library. Imports in submodules should make direct references to the components they need to speed up importing. Importing this should be expected to be faster than importing the entire public interface.
### Other info / logs
Output of running: `python3 -c "import fiftyone"` (With modifications given below)
Custom logs for each import from `fiftyone/__public__.py`
```bash
N/A: 0.00ms
core.config: 548.17ms
core.odm: 1019.60ms
load_config(): 214.52ms
load_annotation_config(): 0.19ms
load_app_config(): 0.07ms
N/A: 0.00ms
core.config: 278.28ms
core.odm: 582.77ms
load_config(): 58.93ms
load_annotation_config(): 0.11ms
load_app_config(): 0.04ms
establish_db_conn(): 0.01ms
core.aggregations: 2.35ms
core.config: 0.00ms
core.dataset: 122.92ms
core.expressions: 0.01ms
core.fields: 0.01ms
core.frame: 0.00ms
core.labels: 0.01ms
core.metadata: 0.00ms
core.models: 0.00ms
core.odm: 0.00ms
core.plots: 620.60ms
core.sample: 0.00ms
core.stages: 2.01ms
core.session: 54.95ms
core.utils: 0.01ms
core.view: 0.00ms
utils.eval.classification: 12.39ms
utils.eval.detection: 0.00ms
utils.eval.segmentation: 0.00ms
utils.eval.quickstart: 25.54ms
establish_db_conn(): 3761.12ms
core.aggregations: 2.75ms
core.config: 0.01ms
core.dataset: 68.54ms
core.expressions: 0.01ms
core.fields: 0.01ms
core.frame: 0.00ms
core.labels: 0.01ms
core.metadata: 0.00ms
core.models: 0.00ms
core.odm: 0.01ms
core.plots: 344.27ms
core.sample: 0.00ms
core.stages: 1.26ms
core.session: 31.29ms
core.utils: 0.01ms
core.view: 0.00ms
utils.eval.classification: 6.16ms
utils.eval.detection: 0.00ms
utils.eval.segmentation: 0.00ms
utils.eval.quickstart: 13.43ms
```
<details>
<summary>Code Modification of `__public__.py` Used To Generate the Above</summary>
<br>
<p>
import time
_t = time.time()
def print_elapsed(msg: str = None):
    global _t
    t = time.time() - _t
    print(f'{msg}: {t*1000:.2f}ms' if msg else f'{t*1000:.2f}ms')
    _t = time.time()
print_elapsed('N/A')
import fiftyone.core.config as _foc
print_elapsed('core.config')
import fiftyone.core.odm as _foo
print_elapsed('core.odm')
config = _foc.load_config()
print_elapsed('load_config()')
annotation_config = _foc.load_annotation_config()
print_elapsed('load_annotation_config()')
app_config = _foc.load_app_config()
print_elapsed('load_app_config()')
_foo.establish_db_conn(config)
print_elapsed('establish_db_conn()')
from .core.aggregations import (
Bounds,
Count,
CountValues,
Distinct,
HistogramValues,
Mean,
Std,
Sum,
Values,
)
print_elapsed('core.aggregations')
from .core.config import AppConfig
print_elapsed('core.config')
from .core.dataset import (
Dataset,
list_datasets,
dataset_exists,
load_dataset,
delete_dataset,
delete_datasets,
delete_non_persistent_datasets,
get_default_dataset_name,
make_unique_dataset_name,
get_default_dataset_dir,
)
print_elapsed('core.dataset')
from .core.expressions import (
ViewField,
ViewExpression,
VALUE,
)
print_elapsed('core.expressions')
from .core.fields import (
ArrayField,
BooleanField,
ClassesField,
DateField,
DateTimeField,
DictField,
EmbeddedDocumentField,
EmbeddedDocumentListField,
Field,
FrameNumberField,
FrameSupportField,
FloatField,
GeoPointField,
GeoLineStringField,
GeoPolygonField,
GeoMultiPointField,
GeoMultiLineStringField,
GeoMultiPolygonField,
IntField,
IntDictField,
KeypointsField,
ListField,
ObjectIdField,
PolylinePointsField,
StringField,
TargetsField,
VectorField,
)
print_elapsed('core.fields')
from .core.frame import Frame
print_elapsed('core.frame')
from .core.labels import (
Label,
Attribute,
BooleanAttribute,
CategoricalAttribute,
NumericAttribute,
ListAttribute,
Regression,
Classification,
Classifications,
Detection,
Detections,
Polyline,
Polylines,
Keypoint,
Keypoints,
Segmentation,
Heatmap,
TemporalDetection,
TemporalDetections,
GeoLocation,
GeoLocations,
)
print_elapsed('core.labels')
from .core.metadata import (
Metadata,
ImageMetadata,
VideoMetadata,
)
print_elapsed('core.metadata')
from .core.models import (
apply_model,
compute_embeddings,
compute_patch_embeddings,
load_model,
Model,
ModelConfig,
EmbeddingsMixin,
TorchModelMixin,
ModelManagerConfig,
ModelManager,
)
print_elapsed('core.models')
from .core.odm import KeypointSkeleton
print_elapsed('core.odm')
from .core.plots import (
plot_confusion_matrix,
plot_pr_curve,
plot_pr_curves,
plot_roc_curve,
lines,
scatterplot,
location_scatterplot,
Plot,
ResponsivePlot,
InteractivePlot,
ViewPlot,
ViewGrid,
CategoricalHistogram,
NumericalHistogram,
)
print_elapsed('core.plots')
from .core.sample import Sample
print_elapsed('core.sample')
from .core.stages import (
Concat,
Exclude,
ExcludeBy,
ExcludeFields,
ExcludeFrames,
ExcludeLabels,
Exists,
FilterField,
FilterLabels,
FilterKeypoints,
Limit,
LimitLabels,
GeoNear,
GeoWithin,
GroupBy,
MapLabels,
Match,
MatchFrames,
MatchLabels,
MatchTags,
Mongo,
Shuffle,
Select,
SelectBy,
SelectFields,
SelectFrames,
SelectLabels,
SetField,
Skip,
SortBy,
SortBySimilarity,
Take,
ToPatches,
ToEvaluationPatches,
ToClips,
ToFrames,
)
print_elapsed('core.stages')
from .core.session import (
close_app,
launch_app,
Session,
)
print_elapsed('core.session')
from .core.utils import (
pprint,
pformat,
ProgressBar,
)
print_elapsed('core.utils')
from .core.view import DatasetView
print_elapsed('core.view')
from .utils.eval.classification import (
evaluate_classifications,
ClassificationResults,
BinaryClassificationResults,
)
print_elapsed('utils.eval.classification')
from .utils.eval.detection import (
evaluate_detections,
DetectionResults,
)
print_elapsed('utils.eval.detection')
from .utils.eval.segmentation import (
evaluate_segmentations,
SegmentationResults,
)
print_elapsed('utils.eval.segmentation')
from .utils.quickstart import quickstart
print_elapsed('utils.eval.quickstart')
</p>
</details>
### What areas of FiftyOne does this bug affect?
- [x] `App`: FiftyOne application issue
- [x] `Core`: Core `fiftyone` Python library issue
- [ ] `Server`: Fiftyone server issue
### Willingness to contribute
The FiftyOne Community encourages bug fix contributions. Would you or another
member of your organization be willing to contribute a fix for this bug to the
FiftyOne codebase?
- [ ] Yes. I can contribute a fix for this bug independently.
- [ ] Yes. I would be willing to contribute a fix for this bug with guidance
from the FiftyOne community.
- [x] No. I cannot contribute a bug fix at this time.
[FR] Make fiftyone --version work even if FiftyOne can't be imported?
Currently, `fiftyone --version` requires `import fiftyone as fo` to be runnable, which in turn requires, among other things, the ability to successfully connect to the database.
This presents a problem in the following situation:
- I install an incompatible (eg old) version of `fiftyone` for my database
- I try to `import fiftyone` and get an error (as expected)
- Unsure what the problem is, I want to create a GitHub issue to get some help
- As instructed by FiftyOne's issue template, I try to run `fiftyone --version` to report my package version
- I get the same error!
Thus the FR: it would be ideal if `fiftyone --version` were runnable even for a greater range of "broken" installations. So at least I can obtain my basic system information.
One way to retrieve this info is:
```py
import pkg_resources
print(pkg_resources.get_distribution("fiftyone").version)
```
It may be better to continue to access it via `fiftyone.constants.VERSION`, but in such a way that database connection issues don't interfere with the process...
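For reference, a standard-library alternative to the `pkg_resources` snippet above (Python 3.8+), again avoiding any `fiftyone` import and thus any database connection:
```python
from importlib.metadata import version

print(version("fiftyone"))
```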
[FR] Do not create a database connection during import
### Proposal Summary
Do not setup a database connection during import.
### Motivation
Setting up a database connection can be time consuming and error prone. Doing it implicitly during an import is unexpected (for me). It leads to errors and creates global state (that we all hate). This is a bad code smell.
There is a very good alternative: having a client object that represents the connection.
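A purely hypothetical sketch of what such a client object could look like; this API does not exist in FiftyOne and is only meant to illustrate the proposal:
```python
from pymongo import MongoClient

class FiftyOneClient:
    """Hypothetical explicit connection object (not a real FiftyOne API)."""

    def __init__(self, uri="mongodb://localhost:27017"):
        # The connection is created when the caller asks for it,
        # not as a side effect of `import fiftyone`
        self._client = MongoClient(uri)

    def load_dataset(self, name):
        ...  # dataset operations would be methods on the client
```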
### What areas of FiftyOne does this feature affect?
- [x] App: FiftyOne application
- [x] Core: Core `fiftyone` Python library
- [ ] Server: FiftyOne server
### Details
We included FiftyOne in our multi-GPU training code and it crashed because of it. Multiple processes tried to connect to the same port. This is an unwanted side-effect of doing an import in a file that is not used during the training.
### Willingness to contribute
The FiftyOne Community welcomes contributions! Would you or another member of your organization be willing to contribute an implementation of this feature?
- [ ] Yes. I can contribute this feature independently
- [ ] Yes. I would be willing to contribute this feature with guidance from the FiftyOne community
- [x] No. I cannot contribute this feature at this time
|
Related: the DB is also spinning up unnecessarily (?) when connecting to a remote session.
@tylerganter do you have any thoughts on this? I don't have a good understanding of what operations should cause the DB to spin up. Another reason (besides import time) why lazily starting the DB would be good is that MongoDB logs an error when its port is already in use, which isn't relevant to tasks that aren't using the database.
Hi @nmichlo, per your benchmarking, most of the import time (~4 seconds) is due to establishing a MongoDB connection, which is mandatory for using FiftyOne (that's where all metadata is stored).
By default, importing FiftyOne spins up a MongoDB instance in a background process. This process is a singleton on the machine, however, so, if you import FiftyOne in multiple processes, imports after the first one will be faster.
One option to speed up imports is to [run MongoDB outside of FiftyOne](https://voxel51.com/docs/fiftyone/user_guide/config.html#configuring-a-mongodb-connection), in which case you'll avoid the 4 seconds of MongoDB-related setup at import.
I am not entirely sure about the architecture of FiftyOne, but some of the APIs that fiftyone provides are extremely useful on their own and I do not see any reason why they should require that MongoDB is initialised to use them?
Could fiftyone be configured to use an alternative more lightweight backend?
- Maybe something like `tinydb`?
With regards to Mongo DB
- Could initialisation be lazily run, only when required?
- Could initialisation be done in a background thread? And calls that actually need mongo access then become blocked?
It seems like there is still potential for import time optimisation:
- Many external libraries are loaded when only some features are used
- Not all the import time seems to be initialising mongo, but it is the majority.
Everything in FiftyOne requires MongoDB, because that's where all `Dataset` objects' contents are stored. So deferring MongoDB startup is not an option, nor is swapping it out for another backend, because FiftyOne's API is tightly coupled with MongoDB-specific features.
That is unfortunate.
I hope in future there might be workarounds for this, especially with regards to dataset conversion and uploading to CVAT. These are useful APIs on their own, without the fiftyone visualisation tools, but they are limited by the startup times.
To reproduce
```python
from multiprocessing import Process

def main():
    import fiftyone

if __name__ == "__main__":
    processes = [Process(target=main) for i in range(2)]
    for p in processes:
        p.start()
    for p in processes:
        p.join()
```
Works with 1 process. Does not with 2.
```
Subprocess ['.../.venv/lib/python3.10/site-packages/fiftyone/db/bin/mongod', '--dbpath', '.../.fiftyone/var/lib/mongo', '--logpath', '.../.fiftyone/var/lib/mongo/log/mongo.log', '--port', '0', '--nounixsocket'] exited with error 100:
{"t":{"$date":"2023-06-12T15:49:34.228Z"},"s":"I", "c":"CONTROL", "id":20697, "ctx":"-","msg":"Renamed existing log file","attr":{"oldLogPath":".../.fiftyone/var/lib/mongo/log/mongo.log","newLogPath":".../.fiftyone/var/lib/mongo/log/mongo.log.2023-06-12T15-49-34"}}
```
Hi @pietermarsman, this is something we definitely agree is worth having! I can't give an estimate of when this might be done, though, because it's not a quick fix, unfortunately.
In your particular case, you could try spinning up your own mongodb instance and [pointing fiftyone to it before importing.](https://docs.voxel51.com/user_guide/config.html#configuring-a-mongodb-connection) I tested this and it worked for me locally with your script above.
e.g.,
```
# No
python pieter.py
# Yes (with mongod running at localhost:27017 of course)
FIFTYONE_DATABASE_URI=mongodb://localhost:27017 python pieter.py
```
As an aside, even without configuring a database URI as @swheaton described, FiftyOne *is* designed to support concurrent usage in multiple separate processes. For example, if you import FiftyOne in multiple shells, a single child process will run the database and shut it down only when all processes are finished.
The specific problem that this issue has identified is that FiftyOne does not seem to support `multiprocessing` when no custom database URI is configured.
Actually I think it works with `multiprocessing` but the problem here is that the two processes both try to start a mongod at the exact same time. The toy script works if I add a 4s delay, for example.
```python
from multiprocessing import Process
import time

def main(i):
    time.sleep(i)
    import fiftyone

if __name__ == "__main__":
    processes = [Process(target=main, args=(i*4,)) for i in range(2)]
    for p in processes:
        p.start()
    for p in processes:
        p.join()
```
Yeah that makes sense. The problem only arises when two processes simultaneously decide they need to be the one that spawns the `mongod` subprocess.
Either way, just confirming that using a custom `database_uri` is the most robust approach to multiple connections.
Same problem here. In my case I have a huge pipeline with a YAML config that defines how an experiment should be configured (I would like to configure a Mongo connection in this file). The pipeline has some top-level imports (one of which contains `import fiftyone` itself). All in all, this means that I have to read the YAML and execute some code before importing files, which is a huge pain, because I can't have top-level imports.
Hi @YuseqYaseq, configuring the connection with the environment variable `FIFTYONE_DATABASE_URI` is what we can recommend right now. Again, cleaning this up is something we are thinking about.
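For the YAML-driven pipeline described above, one possible shape of that workaround (a sketch only; the file name and key are made up) is to set the environment variable from the config before anything imports `fiftyone`:
```python
import os

import yaml  # requires PyYAML

with open("experiment.yaml") as f:
    cfg = yaml.safe_load(f)

# Must happen before any module that does `import fiftyone` is loaded
os.environ["FIFTYONE_DATABASE_URI"] = cfg["mongo_uri"]

import fiftyone as fo  # noqa: E402
```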
|
[
{
"body": "Currently, running `fiftyone config`, which simply loads and prints one's FO config, takes ~2 seconds to execute on my machine, because `import fiftyone` currently triggers a DB service to be started, among other things.\r\n\r\nCan we adopt a lazy initialization strategy where the DB service is only started when absolutely necessary?\r\n\r\nThere are many valid use cases, like the CLI, or utilizing many of the utility methods in `fiftyone.utils`, where no DB connection is needed.\r\n",
"number": 182,
"title": "Lazily start DB service to reduce import time?"
},
{
"body": "### System information\r\n\r\n- **OS Platform and Distribution (e.g., Linux Ubuntu 16.04)**: Macos 12.0.1\r\n- **FiftyOne installed from (pip or source)**: pip\r\n- **FiftyOne version (run `fiftyone --version`)**: v0.15.1\r\n- **Python version**: 3.10.4\r\n\r\n### Commands to reproduce\r\n\r\n`fiftyone`\r\nor\r\n`fiftyone --version`\r\nor\r\n`python -c \"import fiftyone\"`\r\n\r\n### Describe the problem\r\n\r\nImporting fiftyone in general is VERY slow often 4-6 seconds.\r\n`import fiftyone`\r\n\r\nThis is not desirable if fiftyone is needed for command line applications.\r\n\r\nThe main culprits seem to be:\r\n- submodules in fiftyone reference back to the public api which means importing submodules imports everything. Eg. `fiftyone.core.dataset` contains `import fiftyone`\r\n- In `fiftyone/__public__.py` the following files seem to be the culprits:\r\n + **core.config**: 548.17ms\r\n + **core.odm**: 1019.60ms\r\n + **core.plots**: 620.60ms\r\n + core.session: 54.95ms\r\n + load_config(): 214.52ms\r\n + **establish_db_conn()**: 3761.12ms\r\n\r\n### Code to reproduce issue\r\n\r\n`import fiftyone`\r\n\r\n`from fiftyone.core.dataset import Dataset`\r\n- Should be noted that this seems to have back references to `fiftyone.__public__`. This should bot be the case across the library. Imports in submodules should make direct references to the components they need to speed up importing. Importing this should be expected to be faster than importing the entire public interface.\r\n\r\n### Other info / logs\r\n\r\n\r\nOutput of running: `python3 -c \"import fiftyone\"` (With modifications given below)\r\n\r\nCustom logs for each import from `fiftyone/__public__.py`\r\n```bash\r\nN/A: 0.00ms\r\ncore.config: 548.17ms\r\ncore.odm: 1019.60ms\r\nload_config(): 214.52ms\r\nload_annotation_config(): 0.19ms\r\nload_app_config(): 0.07ms\r\nN/A: 0.00ms\r\ncore.config: 278.28ms\r\ncore.odm: 582.77ms\r\nload_config(): 58.93ms\r\nload_annotation_config(): 0.11ms\r\nload_app_config(): 0.04ms\r\nestablish_db_conn(): 0.01ms\r\ncore.aggregations: 2.35ms\r\ncore.config: 0.00ms\r\ncore.dataset: 122.92ms\r\ncore.expressions: 0.01ms\r\ncore.fields: 0.01ms\r\ncore.frame: 0.00ms\r\ncore.labels: 0.01ms\r\ncore.metadata: 0.00ms\r\ncore.models: 0.00ms\r\ncore.odm: 0.00ms\r\ncore.plots: 620.60ms\r\ncore.sample: 0.00ms\r\ncore.stages: 2.01ms\r\ncore.session: 54.95ms\r\ncore.utils: 0.01ms\r\ncore.view: 0.00ms\r\nutils.eval.classification: 12.39ms\r\nutils.eval.detection: 0.00ms\r\nutils.eval.segmentation: 0.00ms\r\nutils.eval.quickstart: 25.54ms\r\nestablish_db_conn(): 3761.12ms\r\ncore.aggregations: 2.75ms\r\ncore.config: 0.01ms\r\ncore.dataset: 68.54ms\r\ncore.expressions: 0.01ms\r\ncore.fields: 0.01ms\r\ncore.frame: 0.00ms\r\ncore.labels: 0.01ms\r\ncore.metadata: 0.00ms\r\ncore.models: 0.00ms\r\ncore.odm: 0.01ms\r\ncore.plots: 344.27ms\r\ncore.sample: 0.00ms\r\ncore.stages: 1.26ms\r\ncore.session: 31.29ms\r\ncore.utils: 0.01ms\r\ncore.view: 0.00ms\r\nutils.eval.classification: 6.16ms\r\nutils.eval.detection: 0.00ms\r\nutils.eval.segmentation: 0.00ms\r\nutils.eval.quickstart: 13.43ms\r\n```\r\n\r\n<details>\r\n<summary>Code Modification of `__public__.py` Used To Generate the Above</summary>\r\n<br>\r\n<p>\r\nimport time\r\n\r\n_t = time.time()\r\ndef print_elapsed(msg: str = None):\r\n global _t\r\n t = time.time() - _t\r\n print(f'{msg}: {t*1000:.2f}ms' if msg else f'{t*1000:.2f}ms')\r\n _t = time.time()\r\n\r\nprint_elapsed('N/A')\r\nimport fiftyone.core.config as _foc\r\nprint_elapsed('core.config')\r\nimport 
fiftyone.core.odm as _foo\r\nprint_elapsed('core.odm')\r\n\r\nconfig = _foc.load_config()\r\nprint_elapsed('load_config()')\r\nannotation_config = _foc.load_annotation_config()\r\nprint_elapsed('load_annotation_config()')\r\napp_config = _foc.load_app_config()\r\nprint_elapsed('load_app_config()')\r\n\r\n_foo.establish_db_conn(config)\r\nprint_elapsed('establish_db_conn()')\r\n\r\nfrom .core.aggregations import (\r\n Bounds,\r\n Count,\r\n CountValues,\r\n Distinct,\r\n HistogramValues,\r\n Mean,\r\n Std,\r\n Sum,\r\n Values,\r\n)\r\nprint_elapsed('core.aggregations')\r\nfrom .core.config import AppConfig\r\nprint_elapsed('core.config')\r\nfrom .core.dataset import (\r\n Dataset,\r\n list_datasets,\r\n dataset_exists,\r\n load_dataset,\r\n delete_dataset,\r\n delete_datasets,\r\n delete_non_persistent_datasets,\r\n get_default_dataset_name,\r\n make_unique_dataset_name,\r\n get_default_dataset_dir,\r\n)\r\nprint_elapsed('core.dataset')\r\nfrom .core.expressions import (\r\n ViewField,\r\n ViewExpression,\r\n VALUE,\r\n)\r\nprint_elapsed('core.expressions')\r\nfrom .core.fields import (\r\n ArrayField,\r\n BooleanField,\r\n ClassesField,\r\n DateField,\r\n DateTimeField,\r\n DictField,\r\n EmbeddedDocumentField,\r\n EmbeddedDocumentListField,\r\n Field,\r\n FrameNumberField,\r\n FrameSupportField,\r\n FloatField,\r\n GeoPointField,\r\n GeoLineStringField,\r\n GeoPolygonField,\r\n GeoMultiPointField,\r\n GeoMultiLineStringField,\r\n GeoMultiPolygonField,\r\n IntField,\r\n IntDictField,\r\n KeypointsField,\r\n ListField,\r\n ObjectIdField,\r\n PolylinePointsField,\r\n StringField,\r\n TargetsField,\r\n VectorField,\r\n)\r\nprint_elapsed('core.fields')\r\nfrom .core.frame import Frame\r\nprint_elapsed('core.frame')\r\nfrom .core.labels import (\r\n Label,\r\n Attribute,\r\n BooleanAttribute,\r\n CategoricalAttribute,\r\n NumericAttribute,\r\n ListAttribute,\r\n Regression,\r\n Classification,\r\n Classifications,\r\n Detection,\r\n Detections,\r\n Polyline,\r\n Polylines,\r\n Keypoint,\r\n Keypoints,\r\n Segmentation,\r\n Heatmap,\r\n TemporalDetection,\r\n TemporalDetections,\r\n GeoLocation,\r\n GeoLocations,\r\n)\r\nprint_elapsed('core.labels')\r\nfrom .core.metadata import (\r\n Metadata,\r\n ImageMetadata,\r\n VideoMetadata,\r\n)\r\nprint_elapsed('core.metadata')\r\nfrom .core.models import (\r\n apply_model,\r\n compute_embeddings,\r\n compute_patch_embeddings,\r\n load_model,\r\n Model,\r\n ModelConfig,\r\n EmbeddingsMixin,\r\n TorchModelMixin,\r\n ModelManagerConfig,\r\n ModelManager,\r\n)\r\nprint_elapsed('core.models')\r\nfrom .core.odm import KeypointSkeleton\r\nprint_elapsed('core.odm')\r\nfrom .core.plots import (\r\n plot_confusion_matrix,\r\n plot_pr_curve,\r\n plot_pr_curves,\r\n plot_roc_curve,\r\n lines,\r\n scatterplot,\r\n location_scatterplot,\r\n Plot,\r\n ResponsivePlot,\r\n InteractivePlot,\r\n ViewPlot,\r\n ViewGrid,\r\n CategoricalHistogram,\r\n NumericalHistogram,\r\n)\r\nprint_elapsed('core.plots')\r\nfrom .core.sample import Sample\r\nprint_elapsed('core.sample')\r\nfrom .core.stages import (\r\n Concat,\r\n Exclude,\r\n ExcludeBy,\r\n ExcludeFields,\r\n ExcludeFrames,\r\n ExcludeLabels,\r\n Exists,\r\n FilterField,\r\n FilterLabels,\r\n FilterKeypoints,\r\n Limit,\r\n LimitLabels,\r\n GeoNear,\r\n GeoWithin,\r\n GroupBy,\r\n MapLabels,\r\n Match,\r\n MatchFrames,\r\n MatchLabels,\r\n MatchTags,\r\n Mongo,\r\n Shuffle,\r\n Select,\r\n SelectBy,\r\n SelectFields,\r\n SelectFrames,\r\n SelectLabels,\r\n SetField,\r\n Skip,\r\n SortBy,\r\n SortBySimilarity,\r\n 
Take,\r\n ToPatches,\r\n ToEvaluationPatches,\r\n ToClips,\r\n ToFrames,\r\n)\r\nprint_elapsed('core.stages')\r\nfrom .core.session import (\r\n close_app,\r\n launch_app,\r\n Session,\r\n)\r\nprint_elapsed('core.session')\r\nfrom .core.utils import (\r\n pprint,\r\n pformat,\r\n ProgressBar,\r\n)\r\nprint_elapsed('core.utils')\r\nfrom .core.view import DatasetView\r\nprint_elapsed('core.view')\r\nfrom .utils.eval.classification import (\r\n evaluate_classifications,\r\n ClassificationResults,\r\n BinaryClassificationResults,\r\n)\r\nprint_elapsed('utils.eval.classification')\r\nfrom .utils.eval.detection import (\r\n evaluate_detections,\r\n DetectionResults,\r\n)\r\nprint_elapsed('utils.eval.detection')\r\nfrom .utils.eval.segmentation import (\r\n evaluate_segmentations,\r\n SegmentationResults,\r\n)\r\nprint_elapsed('utils.eval.segmentation')\r\nfrom .utils.quickstart import quickstart\r\nprint_elapsed('utils.eval.quickstart')\r\n</p>\r\n</details>\r\n\r\n### What areas of FiftyOne does this bug affect?\r\n\r\n- [x] `App`: FiftyOne application issue\r\n- [x] `Core`: Core `fiftyone` Python library issue\r\n- [ ] `Server`: Fiftyone server issue\r\n\r\n### Willingness to contribute\r\n\r\nThe FiftyOne Community encourages bug fix contributions. Would you or another\r\nmember of your organization be willing to contribute a fix for this bug to the\r\nFiftyOne codebase?\r\n\r\n- [ ] Yes. I can contribute a fix for this bug independently.\r\n- [ ] Yes. I would be willing to contribute a fix for this bug with guidance\r\n from the FiftyOne community.\r\n- [x] No. I cannot contribute a bug fix at this time.\r\n",
"number": 1804,
"title": "[FR] Reduce FiftyOne import time"
},
{
"body": "Currently, `fiftyone --version` requires `import fiftyone as fo` to be runnable, which in turn requires, among other things, the ability to successfully connect to the database.\r\n\r\nThis presents a problem in the following situation:\r\n- I install an incompatible (eg old) version of `fiftyone` for my database\r\n- I try to `import fiftyone` and get an error (as expected)\r\n- Unsure what the problem is, I want to create a GitHub issue to get some help\r\n- As instructed by FiftyOne's issue template, I try to run `fiftyone --version` to report my package version\r\n- I get the same error!\r\n\r\nThus the FR: it would be ideal if `fiftyone --version` were runnable even for a greater range of \"broken\" installations. So at least I can obtain my basic system information.\r\n\r\nOne way to retrieve this info is:\r\n\r\n```py\r\nimport pkg_resources\r\n\r\nprint(pkg_resources.get_distribution(\"fiftyone\").version)\r\n```\r\n\r\nBut it may be better to continue to access it via `fiftyone.constants.VERSION`, but in such a way that database connection issues don't interfere with the process...",
"number": 1964,
"title": "[FR] Make fiftyone --version work even if FiftyOne can't be imported?"
},
{
"body": "### Proposal Summary\r\n\r\nDo not setup a database connection during import. \r\n\r\n### Motivation\r\n\r\nSetting up a database connection can be time consuming and error prone. Doing it implicitly during an import is unexpected (for me). It leads to errors and creates global state (that we all hate). This is a bad code smell. \r\n\r\nThere is a very good alternative: having a client object that represents the connection. \r\n\r\n### What areas of FiftyOne does this feature affect?\r\n\r\n- [x] App: FiftyOne application\r\n- [x] Core: Core `fiftyone` Python library\r\n- [ ] Server: FiftyOne server\r\n\r\n### Details\r\n\r\nWe including FiftyOne in our multi-gpu training code and it crashed because of it. Multiple processes tried to connect to the same port. This is an unwanted side-effect of doing an import in a file that is not used during the training. \r\n\r\n### Willingness to contribute\r\n\r\nThe FiftyOne Community welcomes contributions! Would you or another member of your organization be willing to contribute an implementation of this feature?\r\n\r\n- [ ] Yes. I can contribute this feature independently\r\n- [ ] Yes. I would be willing to contribute this feature with guidance from the FiftyOne community\r\n- [x] No. I cannot contribute this feature at this time\r\n",
"number": 3189,
"title": "[FR] Do not create a database connection during import"
}
] |
85339fcecf03978435fed6fd94565d1c63f58dd6
|
{
"head_commit": "2af48de0332c1254bea0e992ff1b8067e4387b37",
"head_commit_message": "Revert \"reorder\"\n\nThis reverts commit 3b26be28c9456042be521fa31cb53ab7a0f22bca.",
"patch_to_review": "diff --git a/docs/generate_docs.bash b/docs/generate_docs.bash\nindex 058c58d14dc..8ec30aedc68 100755\n--- a/docs/generate_docs.bash\n+++ b/docs/generate_docs.bash\n@@ -125,7 +125,7 @@ fi\n \n echo \"Building docs\"\n # sphinx-build [OPTIONS] SOURCEDIR OUTPUTDIR [FILENAMES...]\n-sphinx-build -M html source build $SPHINXOPTS\n+_FIFTYONE_FORCE_DB_CONNECT_ON_IMPORT=1 sphinx-build -M html source build $SPHINXOPTS\n \n # Remove symlink to fiftyone-teams\n if [[ -n \"${PATH_TO_TEAMS}\" ]]; then\ndiff --git a/fiftyone/__init__.py b/fiftyone/__init__.py\nindex 5ec4db198bc..fb5b3321eb3 100644\n--- a/fiftyone/__init__.py\n+++ b/fiftyone/__init__.py\n@@ -25,9 +25,15 @@\n from fiftyone.__public__ import *\n \n import fiftyone.core.logging as _fol\n-import fiftyone.migrations as _fom\n \n-_fol.init_logging()\n+# The old way of doing things, migrating database on import. If we\n+# REALLY need to do this, for example doc build, we can.\n+if (\n+ _os.environ.get(\"FIFTYONE_DISABLE_SERVICES\", \"0\") != \"1\"\n+ and \"_FIFTYONE_FORCE_DB_CONNECT_ON_IMPORT\" in _os.environ\n+):\n+ import fiftyone.migrations as _fom\n \n-if _os.environ.get(\"FIFTYONE_DISABLE_SERVICES\", \"0\") != \"1\":\n _fom.migrate_database_if_necessary()\n+\n+_fol.init_logging()\ndiff --git a/fiftyone/__public__.py b/fiftyone/__public__.py\nindex 57c91c4b0ac..ff900961f47 100644\n--- a/fiftyone/__public__.py\n+++ b/fiftyone/__public__.py\n@@ -5,16 +5,21 @@\n | `voxel51.com <https://voxel51.com/>`_\n |\n \"\"\"\n+import os as _os\n \n import fiftyone.core.config as _foc\n-import fiftyone.core.odm as _foo\n \n config = _foc.load_config()\n annotation_config = _foc.load_annotation_config()\n evaluation_config = _foc.load_evaluation_config()\n app_config = _foc.load_app_config()\n \n-_foo.establish_db_conn(config)\n+# The old way of doing things, connecting to database on import. 
If we\n+# REALLY need to do this, for example doc build, we can.\n+if \"_FIFTYONE_FORCE_DB_CONNECT_ON_IMPORT\" in _os.environ:\n+ import fiftyone.core.odm as _foo\n+\n+ _foo.establish_db_conn(config)\n \n from .core.aggregations import (\n Aggregation,\ndiff --git a/fiftyone/core/odm/database.py b/fiftyone/core/odm/database.py\nindex c3244c29b25..c498cf15769 100644\n--- a/fiftyone/core/odm/database.py\n+++ b/fiftyone/core/odm/database.py\n@@ -27,6 +27,7 @@\n \n import fiftyone as fo\n import fiftyone.constants as foc\n+\n from fiftyone.core.config import FiftyOneConfigError\n import fiftyone.core.fields as fof\n import fiftyone.core.service as fos\n@@ -34,6 +35,7 @@\n \n from .document import Document\n \n+fom = fou.lazy_import(\"fiftyone.migrations\")\n foa = fou.lazy_import(\"fiftyone.core.annotation\")\n fob = fou.lazy_import(\"fiftyone.core.brain\")\n fod = fou.lazy_import(\"fiftyone.core.dataset\")\n@@ -220,26 +222,37 @@ def establish_db_conn(config):\n \n connect(config.database_name, **_connection_kwargs)\n \n- config = get_db_config()\n- if foc.CLIENT_TYPE != config.type:\n+ db_config = get_db_config()\n+ if foc.CLIENT_TYPE != db_config.type:\n raise ConnectionError(\n \"Cannot connect to database type '%s' with client type '%s'\"\n- % (config.type, foc.CLIENT_TYPE)\n+ % (db_config.type, foc.CLIENT_TYPE)\n )\n \n+ # Migrate the database if necessary after establishing connection.\n+ # If we are forcing immediate connection upon import though, migration\n+ # is being performed at that time, and we cannot do it here as well,\n+ # or else it'll be a circular import.\n+ if (\n+ os.environ.get(\"FIFTYONE_DISABLE_SERVICES\", \"0\") != \"1\"\n+ and \"_FIFTYONE_FORCE_DB_CONNECT_ON_IMPORT\" not in os.environ\n+ ):\n+ fom.migrate_database_if_necessary()\n+\n \n def _connect():\n global _client\n if _client is None:\n global _connection_kwargs\n \n- _client = pymongo.MongoClient(\n- **_connection_kwargs, appname=foc.DATABASE_APPNAME\n- )\n- connect(fo.config.database_name, **_connection_kwargs)\n+ establish_db_conn(fo.config)\n \n \n def _async_connect(use_global=False):\n+ # Regular connect here first, to ensure connection kwargs are established\n+ # for below.\n+ _connect()\n+\n global _async_client\n if not use_global or _async_client is None:\n global _connection_kwargs\ndiff --git a/fiftyone/factory/repo_factory.py b/fiftyone/factory/repo_factory.py\nindex e224b9f1fe2..3badb603227 100644\n--- a/fiftyone/factory/repo_factory.py\n+++ b/fiftyone/factory/repo_factory.py\n@@ -15,11 +15,19 @@\n MongoDelegatedOperationRepo,\n )\n \n-db: Database = foo.get_db_conn()\n+_db: Database = None\n+\n+\n+def _get_db():\n+ global _db\n+ if _db is None:\n+ _db = foo.get_db_conn()\n+ return _db\n \n \n class RepositoryFactory(object):\n repos = {}\n+ db = None\n \n @staticmethod\n def delegated_operation_repo() -> DelegatedOperationRepo:\n@@ -30,7 +38,9 @@ def delegated_operation_repo() -> DelegatedOperationRepo:\n RepositoryFactory.repos[\n MongoDelegatedOperationRepo.COLLECTION_NAME\n ] = MongoDelegatedOperationRepo(\n- collection=db[MongoDelegatedOperationRepo.COLLECTION_NAME]\n+ collection=_get_db()[\n+ MongoDelegatedOperationRepo.COLLECTION_NAME\n+ ]\n )\n \n return RepositoryFactory.repos[\n"
}
|
[
{
"diff_hunk": "@@ -25,9 +25,15 @@\n from fiftyone.__public__ import *\n \n import fiftyone.core.logging as _fol\n-import fiftyone.migrations as _fom\n \n-_fol.init_logging()\n+# The old way of doing things, migrating database on import. If we\n+# REALLY need to do this, for example doc build, we can.\n+if (\n+ _os.environ.get(\"FIFTYONE_DISABLE_SERVICES\", \"0\") != \"1\"",
"line": null,
"original_line": 32,
"original_start_line": null,
"path": "fiftyone/__init__.py",
"start_line": null,
"text": "@user1:\n@author can you explain why the docs build is causing us problems here? Would love to avoid needing `_FIFTYONE_FORCE_DB_CONNECT_ON_IMPORT` if at all possible\n\n@author:\nYeah I really hate it also 😫😫\n\nBasically it is a mongoengine problem.\n\nAutodoc tries to go in and touch every member and inherited member of each class. Which includes a field on the mongoengine document called \"objects\" which has a getter that requires the db connection 🤬\n\nSo it goes in, the connection has not been made, the \"objects\" getter fails, and then doc build fails.\n\nI tried to play with autodoc settings including to ignore inherited members or specifically skip \"objects\", but didn't come up with anything that worked.\n\nI'll try to think of other solutions, but brainstorming welcome\n\n@user1:\nI see. One way to clean up a bit would be to only call `migrate_database_if_necessary()` inside `establish_db_conn()`.\r\n\r\nTo limit the number of places that `_FIFTYONE_FORCE_DB_CONNECT_ON_IMPORT` needs to exist.\n\n@user1:\nyou tried added `exclude-members` [to the conf](https://github.com/voxel51/fiftyone/blob/85339fcecf03978435fed6fd94565d1c63f58dd6/docs/source/conf.py#L73\r\n)? \r\n\r\n\r\n```py\r\nautodoc_default_options = {\r\n ...\r\n \"exclude-members\": \"objects\"\r\n}\r\n```\r\n\n\n@author:\nOk perhaps I do not understand sphinx directives like I thought I did, but adding to `conf.py` works where directly to the automodule directive did not.\r\nI thought I had tried that too but oh well glad it works!"
},
{
"diff_hunk": "@@ -5,16 +5,21 @@\n | `voxel51.com <https://voxel51.com/>`_\n |\n \"\"\"\n+import os as _os\n \n import fiftyone.core.config as _foc\n-import fiftyone.core.odm as _foo\n \n config = _foc.load_config()\n annotation_config = _foc.load_annotation_config()\n evaluation_config = _foc.load_evaluation_config()\n app_config = _foc.load_app_config()\n \n-_foo.establish_db_conn(config)\n+# The old way of doing things, connecting to database on import. If we\n+# REALLY need to do this, for example doc build, we can.\n+if \"_FIFTYONE_FORCE_DB_CONNECT_ON_IMPORT\" in _os.environ:",
"line": null,
"original_line": 19,
"original_start_line": null,
"path": "fiftyone/__public__.py",
"start_line": null,
"text": "@user1:\nIf the env var needs to stay, you could turn it into more of a feature by adding it to the [FiftyOneConfig](https://github.com/voxel51/fiftyone/blob/85339fcecf03978435fed6fd94565d1c63f58dd6/fiftyone/core/config.py#L63):\r\n\r\n```py\r\n self._lazy_database_connection = self.parse_bool(\r\n d,\r\n \"_lazy_database_connection\",\r\n env_var=\"_FIFTYONE_LAZY_DATABASE_CONNECTION\",\r\n default=True,\r\n )\r\n```\r\n\r\nPrivate parameters are not included in `print(fo.config)` or `$ fiftyone config` so it's both more cleanly organized and hidden by default.\r\n\r\nHaving said that, perhaps we should just make this a fully-documented public setting. It would have the benefit of allowing users to switch back to immediate db connections in the event that we discover some issue with lazy connections."
}
] |
5cdb08027d2acd1f7fa591916439d96a1ddd44b5
|
diff --git a/docs/source/conf.py b/docs/source/conf.py
index 49b1f2279b8..bee158f640e 100644
--- a/docs/source/conf.py
+++ b/docs/source/conf.py
@@ -75,6 +75,7 @@
"inherited-members": True,
"member-order": "bysource",
"autosummary": True,
+ "exclude-members": "objects",
}
autodoc_inherit_docstrings = True
autoclass_content = "class"
diff --git a/fiftyone/__init__.py b/fiftyone/__init__.py
index 5ec4db198bc..1c933e5c853 100644
--- a/fiftyone/__init__.py
+++ b/fiftyone/__init__.py
@@ -7,8 +7,8 @@
| `voxel51.com <https://voxel51.com/>`_
|
"""
+
from pkgutil import extend_path as _extend_path
-import os as _os
#
# This statement allows multiple `fiftyone.XXX` packages to be installed in the
@@ -25,9 +25,6 @@
from fiftyone.__public__ import *
import fiftyone.core.logging as _fol
-import fiftyone.migrations as _fom
-_fol.init_logging()
-if _os.environ.get("FIFTYONE_DISABLE_SERVICES", "0") != "1":
- _fom.migrate_database_if_necessary()
+_fol.init_logging()
diff --git a/fiftyone/__public__.py b/fiftyone/__public__.py
index 57c91c4b0ac..5cb33e79b9d 100644
--- a/fiftyone/__public__.py
+++ b/fiftyone/__public__.py
@@ -7,15 +7,12 @@
"""
import fiftyone.core.config as _foc
-import fiftyone.core.odm as _foo
config = _foc.load_config()
annotation_config = _foc.load_annotation_config()
evaluation_config = _foc.load_evaluation_config()
app_config = _foc.load_app_config()
-_foo.establish_db_conn(config)
-
from .core.aggregations import (
Aggregation,
Bounds,
diff --git a/fiftyone/core/odm/database.py b/fiftyone/core/odm/database.py
index c3244c29b25..cee408c975b 100644
--- a/fiftyone/core/odm/database.py
+++ b/fiftyone/core/odm/database.py
@@ -6,6 +6,7 @@
|
"""
import atexit
+import dataclasses
from datetime import datetime
import logging
from multiprocessing.pool import ThreadPool
@@ -15,7 +16,6 @@
from bson import json_util, ObjectId
from bson.codec_options import CodecOptions
from mongoengine import connect
-import mongoengine.errors as moe
import motor.motor_asyncio as mtr
from packaging.version import Version
@@ -27,13 +27,11 @@
import fiftyone as fo
import fiftyone.constants as foc
+import fiftyone.migrations as fom
from fiftyone.core.config import FiftyOneConfigError
-import fiftyone.core.fields as fof
import fiftyone.core.service as fos
import fiftyone.core.utils as fou
-from .document import Document
-
foa = fou.lazy_import("fiftyone.core.annotation")
fob = fou.lazy_import("fiftyone.core.brain")
fod = fou.lazy_import("fiftyone.core.dataset")
@@ -54,26 +52,36 @@
#
# All past and future versions of FiftyOne must be able to deduce the
# database's current version and type from the `config` collection without an
-# error being raised so that migrations can be properly run and, if necsssary,
+# error being raised so that migrations can be properly run and, if necessary,
# informative errors can be raised alerting the user that they are using the
# wrong version or type of client.
#
# This is currently guaranteed because:
# - `DatabaseConfigDocument` is declared as non-strict, so any past or future
# fields that are not currently defined will not cause an error
-# - All declared fields are optional and we have promised ourselves that
+# - All declared fields are optional, and we have promised ourselves that
# their type and meaning will never change
#
-class DatabaseConfigDocument(Document):
[email protected](init=False)
+class DatabaseConfigDocument:
"""Backing document for the database config."""
- # strict=False lets this class ignore unknown fields from other versions
- meta = {"collection": "config", "strict": False}
+ version: str
+ type: str
+
+ def __init__(self, conn, version=None, type=None, *args, **kwargs):
+ # Create our own __init__ so we can ignore extra kwargs / unknown
+ # fields from other versions
+ self._conn = conn
+ self.version = version
+ self.type = type
- version = fof.StringField()
- type = fof.StringField()
+ def save(self):
+ self._conn.config.replace_one(
+ {}, dataclasses.asdict(self), upsert=True
+ )
def get_db_config():
@@ -82,19 +90,18 @@ def get_db_config():
Returns:
a :class:`DatabaseConfigDocument`
"""
+ conn = get_db_conn()
save = False
-
- try:
- # pylint: disable=no-member
- config = DatabaseConfigDocument.objects.get()
- except moe.DoesNotExist:
- config = DatabaseConfigDocument()
+ config_docs = list(conn.config.find())
+ if not config_docs:
save = True
- except moe.MultipleObjectsReturned:
- cleanup_multiple_config_docs()
-
- # pylint: disable=no-member
- config = DatabaseConfigDocument.objects.first()
+ config = DatabaseConfigDocument(conn)
+ elif len(config_docs) > 1:
+ config = DatabaseConfigDocument(
+ conn, **cleanup_multiple_config_docs(conn, config_docs)
+ )
+ else:
+ config = DatabaseConfigDocument(conn, **config_docs[0])
if config.version is None:
#
@@ -128,39 +135,27 @@ def get_db_config():
return config
-def cleanup_multiple_config_docs():
+def cleanup_multiple_config_docs(conn, config_docs):
"""Internal utility that ensures that there is only one
:class:`DatabaseConfigDocument` in the database.
"""
- # We use mongoengine here because `get_db_conn()` will not currently work
- # until `import fiftyone` succeeds, which requires a single config doc
- # pylint: disable=no-member
- docs = list(DatabaseConfigDocument.objects)
- if len(docs) <= 1:
- return
+ if not config_docs:
+ return {}
+ elif len(config_docs) <= 1:
+ return config_docs[0]
logger.warning(
"Unexpectedly found %d documents in the 'config' collection; assuming "
"the one with latest 'version' is the correct one",
- len(docs),
+ len(config_docs),
)
+ # Keep config with latest version. If no version key, use 0.0 so it sorts
+ # to the bottom.
+ keep_doc = max(config_docs, key=lambda d: Version(d.get("version", "0.0")))
- versions = []
- for doc in docs:
- try:
- versions.append((doc.id, Version(doc.version)))
- except:
- pass
-
- try:
- keep_id = max(versions, key=lambda kv: kv[1])[0]
- except:
- keep_id = docs[0].id
-
- for doc in docs:
- if doc.id != keep_id:
- doc.delete()
+ conn.config.delete_many({"_id": {"$ne": keep_doc["_id"]}})
+ return keep_doc
def establish_db_conn(config):
@@ -220,26 +215,30 @@ def establish_db_conn(config):
connect(config.database_name, **_connection_kwargs)
- config = get_db_config()
- if foc.CLIENT_TYPE != config.type:
+ db_config = get_db_config()
+ if foc.CLIENT_TYPE != db_config.type:
raise ConnectionError(
"Cannot connect to database type '%s' with client type '%s'"
- % (config.type, foc.CLIENT_TYPE)
+ % (db_config.type, foc.CLIENT_TYPE)
)
+ if os.environ.get("FIFTYONE_DISABLE_SERVICES", "0") != "1":
+ fom.migrate_database_if_necessary(config=db_config)
+
def _connect():
global _client
if _client is None:
global _connection_kwargs
- _client = pymongo.MongoClient(
- **_connection_kwargs, appname=foc.DATABASE_APPNAME
- )
- connect(fo.config.database_name, **_connection_kwargs)
+ establish_db_conn(fo.config)
def _async_connect(use_global=False):
+ # Regular connect here first, to ensure connection kwargs are established
+ # for below.
+ _connect()
+
global _async_client
if not use_global or _async_client is None:
global _connection_kwargs
@@ -372,6 +371,11 @@ async def _do_async_aggregate(collection, pipeline):
return [i async for i in collection.aggregate(pipeline, allowDiskUse=True)]
+def ensure_connection():
+ """Ensures database connection exists"""
+ _connect()
+
+
def get_db_client():
"""Returns a database client.
diff --git a/fiftyone/core/odm/document.py b/fiftyone/core/odm/document.py
index e3784594501..2d89431694a 100644
--- a/fiftyone/core/odm/document.py
+++ b/fiftyone/core/odm/document.py
@@ -17,6 +17,7 @@
import fiftyone.core.utils as fou
+from .database import ensure_connection
from .utils import serialize_value, deserialize_value
@@ -645,6 +646,8 @@ def _save(
"Cannot save an abstract document"
)
+ ensure_connection()
+
if validate:
self.validate(clean=clean)
diff --git a/fiftyone/factory/repo_factory.py b/fiftyone/factory/repo_factory.py
index e224b9f1fe2..2b031588da0 100644
--- a/fiftyone/factory/repo_factory.py
+++ b/fiftyone/factory/repo_factory.py
@@ -5,17 +5,23 @@
| `voxel51.com <https://voxel51.com/>`_
|
"""
-import pymongo
+
from pymongo.database import Database
-import fiftyone as fo
import fiftyone.core.odm as foo
from fiftyone.factory.repos.delegated_operation import (
DelegatedOperationRepo,
MongoDelegatedOperationRepo,
)
-db: Database = foo.get_db_conn()
+_db: Database = None
+
+
+def _get_db():
+ global _db
+ if _db is None:
+ _db = foo.get_db_conn()
+ return _db
class RepositoryFactory(object):
@@ -30,7 +36,9 @@ def delegated_operation_repo() -> DelegatedOperationRepo:
RepositoryFactory.repos[
MongoDelegatedOperationRepo.COLLECTION_NAME
] = MongoDelegatedOperationRepo(
- collection=db[MongoDelegatedOperationRepo.COLLECTION_NAME]
+ collection=_get_db()[
+ MongoDelegatedOperationRepo.COLLECTION_NAME
+ ]
)
return RepositoryFactory.repos[
diff --git a/fiftyone/migrations/runner.py b/fiftyone/migrations/runner.py
index fe05ea3f5f5..fe2511453b2 100644
--- a/fiftyone/migrations/runner.py
+++ b/fiftyone/migrations/runner.py
@@ -94,7 +94,9 @@ def migrate_all(destination=None, error_level=0, verbose=False):
)
-def migrate_database_if_necessary(destination=None, verbose=False):
+def migrate_database_if_necessary(
+ destination=None, verbose=False, config=None
+):
"""Migrates the database to the specified revision, if necessary.
If ``fiftyone.config.database_admin`` is ``False`` and no ``destination``
@@ -105,11 +107,14 @@ def migrate_database_if_necessary(destination=None, verbose=False):
destination (None): the destination revision. By default, the
``fiftyone`` client version is used
verbose (False): whether to log incremental migrations that are run
+ config (None): an optional :class:`DatabaseConfigDocument`. By default,
+ DB config is pulled from the database.
"""
if _migrations_disabled():
return
- config = foo.get_db_config()
+ if config is None:
+ config = foo.get_db_config()
head = config.version
default_destination = destination is None
diff --git a/tests/no_wrapper/multiprocess_tests.py b/tests/no_wrapper/multiprocess_tests.py
index da1d04ce581..d36452897e3 100644
--- a/tests/no_wrapper/multiprocess_tests.py
+++ b/tests/no_wrapper/multiprocess_tests.py
@@ -15,14 +15,18 @@
class MultiprocessTest(unittest.TestCase):
def test_multiprocessing(self):
- with multiprocessing.Pool(1, _check_process) as pool:
- for _ in pool.imap(_check_process, [None]):
+ food.establish_db_conn(fo.config)
+ port = food._connection_kwargs["port"]
+ with multiprocessing.Pool(2, _check_process, [port]) as pool:
+ for _ in pool.imap(_check_process, [port, port]):
pass
-def _check_process(*args):
+def _check_process(port):
assert "FIFTYONE_PRIVATE_DATABASE_PORT" in os.environ
- port = os.environ["FIFTYONE_PRIVATE_DATABASE_PORT"]
+ env_port = os.environ["FIFTYONE_PRIVATE_DATABASE_PORT"]
+ assert port == int(env_port)
+ food.establish_db_conn(fo.config)
assert int(port) == food._connection_kwargs["port"]
diff --git a/tests/unittests/plugins/secrets_tests.py b/tests/unittests/plugins/secrets_tests.py
index a92760a0d1a..d5b71fa9502 100644
--- a/tests/unittests/plugins/secrets_tests.py
+++ b/tests/unittests/plugins/secrets_tests.py
@@ -1,6 +1,6 @@
import os
import unittest
-from unittest.mock import MagicMock, Mock, patch
+from unittest.mock import MagicMock, patch
import pytest
@@ -22,7 +22,6 @@ def __init__(self, key, value):
class TestExecutionContext:
-
secrets = {SECRET_KEY: SECRET_VALUE, SECRET_KEY2: SECRET_VALUE2}
operator_uri = "operator"
plugin_secrets = [k for k, v in secrets.items()]
@@ -75,7 +74,7 @@ def test_secrets_property(self):
assert k in context._secrets
assert context._secrets[k] == v
assert context.secrets.get(k) == v
- except Exception as e:
+ except Exception:
pytest.fail(
"secrets proproperty items should be the same as _secrets items"
)
@@ -89,7 +88,7 @@ def test_secret_property_on_demand_resolve(self, mocker):
)
context._secrets = {}
assert "MY_SECRET_KEY" not in context.secrets.keys()
- secret_val = context.secrets["MY_SECRET_KEY"]
+ _ = context.secrets["MY_SECRET_KEY"]
assert "MY_SECRET_KEY" in context.secrets.keys()
assert context.secrets["MY_SECRET_KEY"] == "mocked_sync_secret_value"
assert context.secrets == context._secrets
@@ -117,7 +116,7 @@ def test_operator_add_secrets(self):
self.assertListEqual(operator._plugin_secrets, secrets)
-class PluginSecretResolverClientTests(unittest.TestCase):
+class PluginSecretResolverClientTests:
@patch(
"fiftyone.plugins.secrets._get_secrets_client",
return_value=fois.EnvSecretProvider(),
@@ -127,7 +126,7 @@ def test_get_secrets_client_env_secret_provider(self, mocker):
assert isinstance(resolver.client, fois.EnvSecretProvider)
-class TestGetSecret(unittest.TestCase):
+class TestGetSecret:
@pytest.fixture(autouse=False)
def secrets_client(self):
mock_client = MagicMock(spec=fois.EnvSecretProvider)
@@ -136,27 +135,20 @@ def secrets_client(self):
return mock_client
@pytest.fixture(autouse=False)
- def plugin_secrets_resolver(self):
+ def plugin_secrets_resolver(self, secrets_client):
resolver = fop.PluginSecretsResolver()
resolver._registered_secrets = {"operator": ["MY_SECRET_KEY"]}
+ resolver._instance.client = secrets_client
return resolver
- @patch(
- "fiftyone.plugins.secrets._get_secrets_client",
- return_value=fois.EnvSecretProvider(),
- )
@pytest.mark.asyncio
- async def test_get_secret(
- self, secrets_client, plugin_secrets_resolver, patched_get_client
- ):
+ async def test_get_secret(self, secrets_client, plugin_secrets_resolver):
result = await plugin_secrets_resolver.get_secret(
key="MY_SECRET_KEY", operator_uri="operator"
)
assert result == "mocked_secret_value"
- secrets_client.get.assert_called_once_with(
- key="MY_SECRET_KEY", operator_uri="operator"
- )
+ secrets_client.get.assert_called_once_with("MY_SECRET_KEY")
class TestGetSecretSync:
diff --git a/tests/unittests/utils_tests.py b/tests/unittests/utils_tests.py
index 44c7960d5e4..d2a755cfecd 100644
--- a/tests/unittests/utils_tests.py
+++ b/tests/unittests/utils_tests.py
@@ -444,16 +444,14 @@ def test_multiple_config_cleanup(self):
orig_config = foo.get_db_config()
# Add some duplicate documents
- d = dict(orig_config.to_dict())
- for _ in range(2):
- d.pop("_id", None)
- db.config.insert_one(d)
+ db.config.insert_one({"version": "0.14.4", "type": "fiftyone"})
+ db.config.insert_one({"version": "0.1.4", "type": "fiftyone"})
# Ensure that duplicate documents are automatically cleaned up
config = foo.get_db_config()
self.assertEqual(len(list(db.config.aggregate([]))), 1)
- self.assertEqual(config.id, orig_config.id)
+ self.assertEqual(config, orig_config)
from bson import ObjectId
|
{
"difficulty": "medium",
"estimated_review_effort": 4,
"problem_domain": "Performance Optimizations"
}
|
voxel51__fiftyone-4351@5e91389
|
voxel51/fiftyone
|
Python
| 4,351 |
Gracefully handle None-valued tag fields
|
## Change log
Resolves #3546
By convention, all non-required FO fields should be nullable, but the implementations of `tag_samples()`, `untag_samples()`, `tag_labels()`, and `untag_labels()` use the `$addToSet` operator, which gracefully handles missing fields but unfortunately cannot handle `null` fields. So if a user clears their tags via `clear_sample_field("tags")`, for example, then all tagging operations will start to fail.
Note that the default behavior is to set `tags = []`, so having `tags` fields that contain `None` is likely **rare**.
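A minimal pymongo sketch of the two behaviors described above (assuming a local MongoDB; the database/collection names are placeholders):
```py
from pymongo import MongoClient
from pymongo.errors import WriteError

coll = MongoClient()["scratch"]["tag_demo"]
coll.delete_many({})

# A missing field is handled gracefully by $addToSet
coll.insert_one({"_id": 1})
coll.update_one({"_id": 1}, {"$addToSet": {"tags": {"$each": ["test"]}}})
print(coll.find_one({"_id": 1})["tags"])  # ['test']

# A null-valued field is not
coll.insert_one({"_id": 2, "tags": None})
try:
    coll.update_one({"_id": 2}, {"$addToSet": {"tags": {"$each": ["test"]}}})
except WriteError as e:
    print(e)  # Cannot apply $addToSet to non-array field ... non-array type null
```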
## Implementation choice
Why use `try-except`? I considered two alternatives:
1. Refactor methods like `tag_samples()` to use two separate batch edits: one for documents whose `tags` field contains a list, and a separate one for documents whose `tags` field is missing/null:
```py
view1 = self.exists("tags").match_tags(tags, bool=False, all=True)
view1._edit_sample_tags({"$addToSet": {"tags": {"$each": list(tags)}}})
view2 = self.exists("tags", bool=False)
view2._edit_sample_tags({"$set": {"tags": tags}})
```
The drawback here is that this *always* requires two full passes over the samples collection, which would roughly halve performance just to handle the null case, which I assume is **rare**.
2. Encode the conditional operation in the update operation. Unfortunately, `$addToSet` is not available in aggregation pipelines, so the only way to achieve this would be something like this:
```py
# current
update = {"$addToSet": {"tags": {"$each": tags}}}

# new aggregation pipeline-style
pipeline = [
    {
        "$cond": {
            "if": {"$gt": ["$tags", None]},
            "then": {"$setUnion": ["$tags", tags]},
            "else": tags,
        },
    }
]
```
Unfortunately, `$setUnion` is not a 1-1 replacement for `$addToSet`: it will silently remove existing duplicates and makes no guarantee that the ordering of the existing tags will stay the same (a quick sketch of this behavior follows below). So, really a non-starter.
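A quick pymongo sketch of that `$setUnion` behavior (assuming a local MongoDB; names are placeholders):
```py
from pymongo import MongoClient

coll = MongoClient()["scratch"]["setunion_demo"]
coll.delete_many({})
coll.insert_one({"_id": 1, "tags": ["b", "a", "a"]})  # duplicate "a", specific order

doc = next(
    coll.aggregate([{"$project": {"tags": {"$setUnion": ["$tags", ["c"]]}}}])
)
print(doc["tags"])  # e.g. ['a', 'b', 'c'] -- duplicate dropped, original order not preserved
```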
<!-- This is an auto-generated comment: release notes by coderabbit.ai -->
## Summary by CodeRabbit
- **Bug Fixes**
- Improved error handling in tagging functionalities to address issues with null values in tags, ensuring more robust data operations.
- **Documentation**
- Updated method descriptions in the dataset module for clarity and accuracy.
- **Tests**
- Added tests to verify the robustness of the tagging functionalities when encountering null values.
<!-- end of auto-generated comment: release notes by coderabbit.ai -->
|
2024-05-05T19:13:15Z
|
[BUG] Clearing the tags field and then tagging samples raises error
To reproduce:
```python
import fiftyone.zoo as foz
dataset = foz.load_zoo_dataset("quickstart").clone()
dataset.clear_sample_field("tags")
dataset.tag_samples("test")
# ValueError: Cannot apply $addToSet to non-array field. Field named 'tags' has non-array type null
```
This is happening because `clear_sample_field` sets the values of the field to `None` for each sample, but `tag_samples()` then tries to add a value to what it expects to be a list.
We should probably add a check to `tag_samples()` to verify that the field isn't `None` before adding samples. Alternatively, we could have `clear_sample_field("tags")` check if you are clearing the tags field and set it to `[]` instead of `None`, but that field could still be set to `None` in other ways, so it seems the first approach would be preferred.
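A rough sketch of that first approach, stated in FiftyOne terms (the dataset name is a placeholder):
```python
import fiftyone as fo

dataset = fo.load_dataset("quickstart-copy")

# Replace missing/None-valued tags with [] before tagging
none_view = dataset.exists("tags", bool=False)
none_view.set_field("tags", []).save()

dataset.tag_samples("test")  # now succeeds
```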
|
[
{
"body": "To reproduce:\r\n```python\r\nimport fiftyone.zoo as foz\r\n\r\ndataset = foz.load_zoo_dataset(\"quickstart\").clone()\r\ndataset.clear_sample_field(\"tags\")\r\ndataset.tag_samples(\"test\")\r\n\r\n# ValueError: Cannot apply $addToSet to non-array field. Field named 'tags' has non-array type null\r\n```\r\n\r\nThis is happening because `clear_sample_field` sets the values of the field to `None` for each sample, but `tag_samples()` then tries to add a value to what it expects is a list.\r\n\r\nWe should probably add a check to `tag_samples()` to verify that the field isn't `None` before adding samples. Alternatively, we could have `clear_sample_field(\"tags\")` check if you are clearing the tags field and set it to `[]` instead of `None`, but that field could still be set to `None` in other ways, so it seems the first approach would be preferred.",
"number": 3546,
"title": "[BUG] Clearing the tags field and then tagging samples raises error"
}
] |
ec20c512099e97a2ee012442dca62560696f48e6
|
{
"head_commit": "5e913890deef4191a6b42fecd41302e5c286265c",
"head_commit_message": "handle None-valued tag fields",
"patch_to_review": "diff --git a/fiftyone/core/collections.py b/fiftyone/core/collections.py\nindex 43098224227..ab15faf4f27 100644\n--- a/fiftyone/core/collections.py\n+++ b/fiftyone/core/collections.py\n@@ -1611,7 +1611,27 @@ def tag_samples(self, tags):\n \n # We only need to process samples that are missing a tag of interest\n view = self.match_tags(tags, bool=False, all=True)\n- view._edit_sample_tags(update)\n+\n+ try:\n+ view._edit_sample_tags(update)\n+ except ValueError as e:\n+ #\n+ # $addToSet cannot handle null-valued fields, so if we get an error\n+ # about null-valued fields, replace them with [] and try again.\n+ # Note that its okay to run $addToSet multiple times as the tag\n+ # won't be added multiple times.\n+ #\n+ # For future reference, the error message looks roughly like this:\n+ # ValueError: Cannot apply $addToSet to non-array field. Field\n+ # named 'tags' has non-array type null\n+ #\n+ if \"null\" in str(e):\n+ none_tags = self.exists(\"tags\", bool=False)\n+ none_tags.set_field(\"tags\", []).save()\n+\n+ view._edit_sample_tags(update)\n+ else:\n+ raise e\n \n def untag_samples(self, tags):\n \"\"\"Removes the tag(s) from all samples in this collection, if\n@@ -1680,9 +1700,31 @@ def _tag_labels(self, tags, label_field, ids=None, label_ids=None):\n tags = list(tags)\n update_fcn = lambda path: {\"$addToSet\": {path: {\"$each\": tags}}}\n \n- return self._edit_label_tags(\n- update_fcn, label_field, ids=ids, label_ids=label_ids\n- )\n+ try:\n+ return self._edit_label_tags(\n+ update_fcn, label_field, ids=ids, label_ids=label_ids\n+ )\n+ except ValueError as e:\n+ #\n+ # $addToSet cannot handle null-valued fields, so if we get an error\n+ # about null-valued fields, replace them with [] and try again.\n+ # Note that its okay to run $addToSet multiple times as the tag\n+ # won't be added multiple times.\n+ #\n+ # For future reference, the error message looks roughly like this:\n+ # ValueError: Cannot apply $addToSet to non-array field. 
Field\n+ # named 'tags' has non-array type null\n+ #\n+ if \"null\" in str(e):\n+ _, tags_path = self._get_label_field_path(label_field, \"tags\")\n+ none_tags = self.filter_labels(label_field, F(\"tags\") == None)\n+ none_tags.set_field(tags_path, []).save()\n+\n+ return self._edit_label_tags(\n+ update_fcn, label_field, ids=ids, label_ids=label_ids\n+ )\n+ else:\n+ raise e\n \n def untag_labels(self, tags, label_fields=None):\n \"\"\"Removes the tag from all labels in the specified label field(s) of\ndiff --git a/fiftyone/core/dataset.py b/fiftyone/core/dataset.py\nindex 67029587cc4..091c5710274 100644\n--- a/fiftyone/core/dataset.py\n+++ b/fiftyone/core/dataset.py\n@@ -1778,7 +1778,7 @@ def clear_sample_field(self, field_name):\n The field will remain in the dataset's schema, and all samples will\n have the value ``None`` for the field.\n \n- You can use dot notation (``embedded.field.name``) to clone embedded\n+ You can use dot notation (``embedded.field.name``) to clear embedded\n frame fields.\n \n Args:\n@@ -1792,7 +1792,7 @@ def clear_sample_fields(self, field_names):\n The field will remain in the dataset's schema, and all samples will\n have the value ``None`` for the field.\n \n- You can use dot notation (``embedded.field.name``) to clone embedded\n+ You can use dot notation (``embedded.field.name``) to clear embedded\n frame fields.\n \n Args:\n@@ -1807,7 +1807,7 @@ def clear_frame_field(self, field_name):\n The field will remain in the dataset's frame schema, and all frames\n will have the value ``None`` for the field.\n \n- You can use dot notation (``embedded.field.name``) to clone embedded\n+ You can use dot notation (``embedded.field.name``) to clear embedded\n frame fields.\n \n Only applicable to datasets that contain videos.\n@@ -1824,7 +1824,7 @@ def clear_frame_fields(self, field_names):\n The fields will remain in the dataset's frame schema, and all frames\n will have the value ``None`` for the field.\n \n- You can use dot notation (``embedded.field.name``) to clone embedded\n+ You can use dot notation (``embedded.field.name``) to clear embedded\n frame fields.\n \n Only applicable to datasets that contain videos.\ndiff --git a/tests/unittests/view_tests.py b/tests/unittests/view_tests.py\nindex ffb6e32da1c..491e5a742fd 100644\n--- a/tests/unittests/view_tests.py\n+++ b/tests/unittests/view_tests.py\n@@ -3314,6 +3314,31 @@ def test_tag_samples(self):\n tags = self.dataset.count_values(\"tags\")\n self.assertDictEqual(tags, {})\n \n+ def test_tag_samples_none(self):\n+ view = self.dataset[:2]\n+\n+ view.clear_sample_field(\"tags\")\n+\n+ for tags in view.values(\"tags\"):\n+ self.assertIsNone(tags)\n+\n+ view.untag_samples(\"test\")\n+ view.untag_samples([\"test1\", \"test2\"])\n+\n+ counts = view.count_sample_tags()\n+ self.assertDictEqual(counts, {})\n+\n+ view.tag_samples(\"test\")\n+ view.tag_samples([\"test1\", \"test2\"])\n+\n+ counts = view.count_sample_tags()\n+ self.assertDictEqual(counts, {\"test\": 2, \"test1\": 2, \"test2\": 2})\n+\n+ view.set_field(\"tags\", []).save()\n+\n+ for tags in view.values(\"tags\"):\n+ self.assertListEqual(tags, [])\n+\n def test_tag_labels(self):\n self._setUp_classification()\n self._setUp_detections()\n@@ -3344,6 +3369,58 @@ def test_tag_labels(self):\n tags = self.dataset.count_label_tags(\"test_dets\")\n self.assertDictEqual(tags, {})\n \n+ def test_tag_labels_none(self):\n+ self._setUp_classification()\n+ self._setUp_detections()\n+\n+ # Test classifications\n+ view = self.dataset.filter_labels(\"test_clf\", 
F(\"confidence\") > 0.95)\n+ view.clear_sample_field(\"test_clf.tags\")\n+\n+ for tags in view.values(\"test_clf.tags\"):\n+ self.assertIsNone(tags)\n+\n+ view.untag_labels(\"test\", label_fields=\"test_clf\")\n+ view.untag_labels([\"test1\", \"test2\"], label_fields=\"test_clf\")\n+\n+ counts = view.count_label_tags()\n+ self.assertDictEqual(counts, {})\n+\n+ view.tag_labels(\"test\", label_fields=\"test_clf\")\n+ view.tag_labels([\"test1\", \"test2\"], label_fields=\"test_clf\")\n+\n+ counts = view.count_label_tags()\n+ self.assertDictEqual(counts, {\"test\": 1, \"test1\": 1, \"test2\": 1})\n+\n+ view.set_field(\"test_clf.tags\", []).save()\n+\n+ for tags in view.values(\"test_clf.tags\"):\n+ self.assertListEqual(tags, [])\n+\n+ # Test detections\n+ view = self.dataset.filter_labels(\"test_dets\", F(\"confidence\") > 0.7)\n+ view.clear_sample_field(\"test_dets.detections.tags\")\n+\n+ for tags in view.values(\"test_dets.detections.tags\", unwind=True):\n+ self.assertIsNone(tags)\n+\n+ view.untag_labels(\"test\", label_fields=\"test_dets\")\n+ view.untag_labels([\"test1\", \"test2\"], label_fields=\"test_dets\")\n+\n+ counts = view.count_label_tags()\n+ self.assertDictEqual(counts, {})\n+\n+ view.tag_labels(\"test\", label_fields=\"test_dets\")\n+ view.tag_labels([\"test1\", \"test2\"], label_fields=\"test_dets\")\n+\n+ counts = view.count_label_tags()\n+ self.assertDictEqual(counts, {\"test\": 3, \"test1\": 3, \"test2\": 3})\n+\n+ view.set_field(\"test_dets.detections.tags\", []).save()\n+\n+ for tags in view.values(\"test_dets.detections.tags\", unwind=True):\n+ self.assertListEqual(tags, [])\n+\n def test_match(self):\n self.sample1[\"value\"] = \"value\"\n self.sample1.save()\n"
}
|
[
{
"diff_hunk": "@@ -1778,7 +1778,7 @@ def clear_sample_field(self, field_name):\n The field will remain in the dataset's schema, and all samples will\n have the value ``None`` for the field.\n \n- You can use dot notation (``embedded.field.name``) to clone embedded\n+ You can use dot notation (``embedded.field.name``) to clear embedded\n frame fields.",
"line": null,
"original_line": 1782,
"original_start_line": 1781,
"path": "fiftyone/core/dataset.py",
"start_line": null,
"text": "@user1:\nGood eye but now I think the word \"frame\" is another copy pasta accident.\r\n```suggestion\r\n You can use dot notation (``embedded.field.name``) to clear embedded\r\n sample fields.\r\n```"
},
{
"diff_hunk": "@@ -1792,7 +1792,7 @@ def clear_sample_fields(self, field_names):\n The field will remain in the dataset's schema, and all samples will\n have the value ``None`` for the field.\n \n- You can use dot notation (``embedded.field.name``) to clone embedded\n+ You can use dot notation (``embedded.field.name``) to clear embedded\n frame fields.",
"line": null,
"original_line": 1796,
"original_start_line": 1795,
"path": "fiftyone/core/dataset.py",
"start_line": null,
"text": "@user1:\n```suggestion\r\n You can use dot notation (``embedded.field.name``) to clear embedded\r\n sample fields.\r\n```"
}
] |
e08fb57295bba4b9263436be43c9d6015e99517c
|
diff --git a/fiftyone/core/collections.py b/fiftyone/core/collections.py
index 43098224227..ab15faf4f27 100644
--- a/fiftyone/core/collections.py
+++ b/fiftyone/core/collections.py
@@ -1611,7 +1611,27 @@ def tag_samples(self, tags):
# We only need to process samples that are missing a tag of interest
view = self.match_tags(tags, bool=False, all=True)
- view._edit_sample_tags(update)
+
+ try:
+ view._edit_sample_tags(update)
+ except ValueError as e:
+ #
+ # $addToSet cannot handle null-valued fields, so if we get an error
+ # about null-valued fields, replace them with [] and try again.
SWE-CARE: A Comprehensiveness-aware Benchmark for Code Review Evaluation
Dataset Description
SWE-CARE (Software Engineering - Comprehensive Analysis and Review Evaluation) is a comprehensiveness-aware benchmark for evaluating Large Language Models (LLMs) on repository-level code review tasks. The dataset features real-world code review scenarios from popular open-source Python and Java repositories, with comprehensive metadata and reference review comments.
Dataset Summary
- Repository: inclusionAI/SWE-CARE
- Paper: CodeFuse-CR-Bench: A Comprehensiveness-aware Benchmark for End-to-End Code Review Evaluation
- Languages: Python
- License: Apache 2.0
- Splits:
  - test: 671 instances (primary evaluation set)
  - dev: 7,086 instances (development/training set)
Dataset Structure
Data Instances
Each instance in the dataset represents a code review task with the following structure:
{
"instance_id": "voxel51__fiftyone-2353@02e9ba1",
"repo": "voxel51/fiftyone",
"language": "Python",
"pull_number": 2353,
"title": "Fix issue with dataset loading",
"body": "This PR fixes...",
"created_at": "2023-01-15T10:30:00Z",
"problem_statement": "Issue #2350: Dataset fails to load...",
"hints_text": "Comments from the issue discussion...",
"resolved_issues": [
{
"number": 2350,
"title": "Dataset loading error",
"body": "When loading datasets..."
}
],
"base_commit": "abc123...",
"commit_to_review": {
"head_commit": "def456...",
"head_commit_message": "Fix dataset loading logic",
"patch_to_review": "diff --git a/file.py..."
},
"reference_review_comments": [
{
"text": "Consider adding error handling here",
"path": "src/dataset.py",
"diff_hunk": "@@ -10,5 +10,7 @@...",
"line": 15,
"start_line": 14,
"original_line": 15,
"original_start_line": 14
}
],
"merged_commit": "ghi789...",
"merged_patch": "diff --git a/file.py...",
"metadata": {
"problem_domain": "Bug Fixes",
"difficulty": "medium",
"estimated_review_effort": 3
}
}
Data Fields
Core Fields
- instance_id (string): Unique identifier in the format repo_owner__repo_name-PR_number@commit_sha_short
- repo (string): GitHub repository in the format owner/name
- language (string): Primary programming language (Python or Java)
- pull_number (int): GitHub pull request number
- title (string): Pull request title
- body (string): Pull request description
- created_at (string): ISO 8601 timestamp of PR creation
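For example, an instance_id can be split back into its parts. The helper below is an illustrative sketch only (it is not part of the dataset tooling) and assumes the PR number follows the last hyphen and the short SHA follows the last "@":

def parse_instance_id(instance_id: str):
    # "voxel51__fiftyone-2353@02e9ba1" -> owner, name, PR number, short SHA
    head, commit_sha_short = instance_id.rsplit("@", 1)
    repo_part, pull_number = head.rsplit("-", 1)  # repo names may contain "-"
    repo_owner, repo_name = repo_part.split("__", 1)
    return repo_owner, repo_name, int(pull_number), commit_sha_short

print(parse_instance_id("voxel51__fiftyone-2353@02e9ba1"))
# ('voxel51', 'fiftyone', 2353, '02e9ba1')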
Problem Context
- problem_statement (string): Combined title and body of resolved issue(s)
- hints_text (string): Relevant comments from issues prior to the PR
- resolved_issues (list): Array of resolved issues with:
  - number (int): Issue number
  - title (string): Issue title
  - body (string): Issue description
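As an illustration, these context fields can be concatenated with the diff into a single review prompt. The helper below is a hypothetical sketch and is not the prompt format used in the paper:

def build_review_prompt(instance: dict) -> str:
    # Hypothetical prompt assembly; the paper's exact format may differ.
    parts = [
        "Problem statement:\n" + instance["problem_statement"],
        "Hints from the issue discussion:\n" + (instance["hints_text"] or "(none)"),
        "Patch to review:\n" + instance["commit_to_review"]["patch_to_review"],
    ]
    return "\n\n".join(parts)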
Code Changes
- base_commit (string): Base commit SHA before changes
- commit_to_review (dict): The commit being reviewed:
  - head_commit (string): Commit SHA to review
  - head_commit_message (string): Commit message
  - patch_to_review (string): Git diff of changes to review
- merged_commit (string): Final merged commit SHA
- merged_patch (string): Final merged changes (ground truth)
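To illustrate how these fields relate, the sketch below reconstructs the code state a reviewer would have seen by checking out base_commit and applying patch_to_review on top of it. It assumes a full local clone of the repository; the function name and patch file name are placeholders, not part of the dataset tooling:

import subprocess
from pathlib import Path

def checkout_state_under_review(repo_dir: str, instance: dict) -> None:
    # Start from the pre-change commit, then layer the patch under review on top.
    subprocess.run(["git", "checkout", instance["base_commit"]], cwd=repo_dir, check=True)
    patch_path = Path(repo_dir) / "_patch_to_review.diff"
    patch_path.write_text(instance["commit_to_review"]["patch_to_review"])
    subprocess.run(["git", "apply", patch_path.name], cwd=repo_dir, check=True)
    patch_path.unlink()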
Reference Reviews
- reference_review_comments (list): Human code review comments with:
  - text (string): Review comment text
  - path (string): File path being reviewed
  - diff_hunk (string): Relevant code diff context
  - line (int): Line number in new version
  - start_line (int): Start line for multi-line comments
  - original_line (int): Line number in original version
  - original_start_line (int): Original start line
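For example, the per-comment path information makes it easy to see where reviewers focused. A minimal sketch, assuming the test split is loaded with the datasets library as in the Usage section below:

from collections import Counter
from datasets import load_dataset

dataset = load_dataset("inclusionAI/SWE-CARE", split="test")
instance = dataset[0]

# Count reference comments per reviewed file for one instance.
files = Counter(c["path"] for c in instance["reference_review_comments"])
print(files.most_common(3))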
Metadata
- metadata (dict): LLM-classified attributes:
  - problem_domain (string): Category like "Bug Fix", "Feature", "Refactoring", etc.
  - difficulty (string): "Easy", "Medium", or "Hard"
  - estimated_review_effort (int): Scale of 1-5 for review complexity
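Because these attributes are stored per instance, they can be used to slice the benchmark. A hedged sketch, assuming metadata is exposed as a nested dict when loaded with the datasets library (label casing should be checked against the data itself; the example instance above uses lowercase values):

from datasets import load_dataset

test = load_dataset("inclusionAI/SWE-CARE", split="test")

# Keep only bug-fix instances with medium difficulty.
subset = test.filter(
    lambda ex: ex["metadata"]["problem_domain"] == "Bug Fixes"
    and ex["metadata"]["difficulty"] == "medium"
)
print(len(subset))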
Data Splits
| Split | Instances | Description |
|---|---|---|
| test | 671 | Primary evaluation set for benchmarking |
| dev | 7,086 | Development set for training/fine-tuning |
Usage
Loading the Dataset
from datasets import load_dataset
# Load the test split (default for evaluation)
dataset = load_dataset("inclusionAI/SWE-CARE", split="test")
# Load the dev split
dev_dataset = load_dataset("inclusionAI/SWE-CARE", split="dev")
# Load both splits
full_dataset = load_dataset("inclusionAI/SWE-CARE")
Using with SWE-CARE Evaluation Framework
from swe_care.utils.load import load_code_review_dataset
# Load from Hugging Face (default)
instances = load_code_review_dataset()
# Access instance data
for instance in instances:
    print(f"Instance: {instance.instance_id}")
    print(f"Repository: {instance.repo}")
    print(f"Problem: {instance.problem_statement}")
    print(f"Patch to review: {instance.commit_to_review.patch_to_review}")
    print(f"Reference comments: {len(instance.reference_review_comments)}")
Running Evaluation
See the GitHub repository for detailed documentation and examples.
Evaluation Metrics and Baseline Results
See the paper for comprehensive evaluation metrics and baseline results on various LLMs.
Additional Information
Citation
If you use this dataset in your research, please cite:
@misc{guo2025codefusecrbenchcomprehensivenessawarebenchmarkendtoend,
title={CodeFuse-CR-Bench: A Comprehensiveness-aware Benchmark for End-to-End Code Review Evaluation in Python Projects},
author={Hanyang Guo and Xunjin Zheng and Zihan Liao and Hang Yu and Peng DI and Ziyin Zhang and Hong-Ning Dai},
year={2025},
eprint={2509.14856},
archivePrefix={arXiv},
primaryClass={cs.SE},
url={https://arxiv.org/abs/2509.14856},
}
Contributions
We welcome contributions! Please see our GitHub repository for:
- Data collection improvements
- New evaluation metrics
- Baseline model results
- Bug reports and feature requests
License
This dataset is released under the Apache 2.0 License. See LICENSE for details.
Changelog
- v0.2.0 (2025-10): Expanded dataset to 671 test instances
- v0.1.0 (2025-09): Initial release with 601 test instances and 7,086 dev instances