# DISBench: DeepImageSearch Benchmark

DISBench is the first benchmark for context-aware image retrieval over visual histories. It contains 122 queries across 57 users and 109,467 photos, requiring multi-step reasoning over corpus-level context.
## Download

### Option A: Hugging Face (Recommended)

```shell
huggingface-cli download RUC-NLPIR/DISBench --local-dir DISBench
```

### Option B: Manual Download

```shell
python download_images.py --photo-ids-path photo_ids --images-path images
```
## File Structure

```
DISBench/
├── queries.jsonl            # 122 annotated queries
├── metadata/
│   └── {user_id}.jsonl      # Photo metadata per user
├── images/
│   └── {user_id}/
│       └── {photo_id}.jpg   # Photo files
├── photo_ids/
│   └── {user_id}.txt        # Photo IDs and hashes per user
└── download_images.py       # Image download script
```
## Data Format

### queries.jsonl

Each line is a JSON object representing one query:

```json
{
  "query_id": "1",
  "user_id": "10287726@N02",
  "query": "Find photos from the musical performance identified by the blue and white event logo on site, where only the lead singer appears on stage.",
  "answer": ["7759256930", "7759407170", "7759295108", "7759433016"],
  "event_type": "intra-event"
}
```
| Field | Type | Description |
|---|---|---|
| `query_id` | string | Unique query identifier |
| `user_id` | string | User whose photo collection to search |
| `query` | string | Natural language query (text-only) |
| `answer` | list[string] | Ground-truth target photo IDs |
| `event_type` | string | `"intra-event"` or `"inter-event"` |
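Since each line is an independent JSON object, the file can be read with the standard library alone. A minimal sketch of loading and filtering queries (the function names are illustrative, and the path assumes the file structure above):

```python
import json

def load_queries(path="DISBench/queries.jsonl"):
    """Load DISBench queries: one JSON object per line."""
    with open(path, encoding="utf-8") as f:
        return [json.loads(line) for line in f if line.strip()]

def queries_for_user(queries, user_id, event_type=None):
    """Filter queries by user and, optionally, by event type."""
    return [
        q for q in queries
        if q["user_id"] == user_id
        and (event_type is None or q["event_type"] == event_type)
    ]
```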
### metadata/{user_id}.jsonl

Each line is a JSON object representing one photo's metadata:

```json
{
  "photo_id": "4517621778",
  "metadata": {
    "taken_time": "2010-04-10 13:52:57",
    "longitude": -1.239802,
    "latitude": 51.754123,
    "accuracy": 16.0,
    "address": "Y, Cherwell Street, St Clement's, East Oxford, Oxford, Oxfordshire, England, OX4 1BQ, United Kingdom",
    "capturedevice": "Panasonic DMC-TZ5"
  }
}
```
| Field | Type | Description |
|---|---|---|
| `photo_id` | string | Unique photo identifier |
| `metadata.taken_time` | string | Capture time in `YYYY-MM-DD HH:MM:SS` format |
| `metadata.longitude` | float | GPS longitude. Missing if unavailable. |
| `metadata.latitude` | float | GPS latitude. Missing if unavailable. |
| `metadata.accuracy` | float | GPS accuracy level. Missing if unavailable. |
| `metadata.address` | string | Reverse-geocoded address. Missing if unavailable. |
| `metadata.capturedevice` | string | Camera/device name. Missing if unavailable. |
> **Note:** Optional fields (`longitude`, `latitude`, `accuracy`, `address`, `capturedevice`) are omitted entirely when unavailable; they will not appear as keys in the JSON object.
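Because optional keys are omitted rather than set to `null`, readers should access them with `dict.get` (or equivalent). A minimal sketch with illustrative helper names:

```python
import json

def load_metadata(path):
    """Map photo_id -> metadata dict for one user's metadata .jsonl file."""
    records = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            if line.strip():
                obj = json.loads(line)
                records[obj["photo_id"]] = obj["metadata"]
    return records

def photo_location(meta):
    """Return (latitude, longitude), or None when the GPS keys are absent."""
    lat, lon = meta.get("latitude"), meta.get("longitude")
    return (lat, lon) if lat is not None and lon is not None else None
```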
### images/{user_id}/{photo_id}.jpg

Photo files organized by user. Each user's collection contains approximately 2,000 photos accumulated chronologically from their photosets.
### photo_ids/{user_id}.txt

Each line contains one photo ID and its hash on AWS storage, in the format `{photo_id}\t{hash}`:

```
1205732595	c45044fd7b5c9450b2a11adc6b42d
```
| Field | Type | Description |
|---|---|---|
| `photo_id` | string | Unique photo identifier |
| `hash` | string | Hash of the photo on AWS storage |
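The tab-separated format is easy to parse directly; a minimal sketch (the function name is illustrative, not part of the dataset tooling):

```python
def parse_photo_ids(path):
    """Parse one tab-separated photo-ID/hash pair per line into a dict."""
    mapping = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.rstrip("\n")
            if line:
                photo_id, file_hash = line.split("\t", 1)
                mapping[photo_id] = file_hash
    return mapping
```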
## Dataset Statistics
| Statistic | Value |
|---|---|
| Total Queries | 122 |
| Intra-Event Queries | 57 (46.7%) |
| Inter-Event Queries | 65 (53.3%) |
| Total Users | 57 |
| Total Photos | 109,467 |
| Avg. Targets per Query | 3.84 |
| Avg. History Span | 3.4 years |
| Query Retention Rate | 6.1% (122 / 2,000 candidates) |
| Inter-Annotator IoU | 0.91 |
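The inter-annotator agreement above is an IoU over target photo-ID sets. A minimal sketch of that computation (how pairs of annotators and queries are aggregated is the authors' choice and is not specified in this card):

```python
def set_iou(a, b):
    """Intersection-over-union of two collections of photo IDs."""
    a, b = set(a), set(b)
    if not a and not b:
        return 1.0  # two empty annotations agree trivially
    return len(a & b) / len(a | b)
```

For example, two annotators agreeing on 3 of 4 total IDs score 0.75.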
## Data Source

DISBench is constructed from YFCC100M, which preserves a hierarchical structure of users → photosets → photos. All images are publicly shared under Creative Commons licenses. Photoset boundaries are used during construction but are not provided to models during evaluation.
## License
The DISBench dataset follows the Creative Commons licensing terms of the underlying YFCC100M data. Please refer to individual image licenses for specific usage terms.