Context as Memory: Scene-Consistent Interactive Long Video Generation with Memory Retrieval
Paper: arXiv 2506.03141
To prepare the dataset for use, merge the parts into a single zip file using the following command:
cat Context-as-Memory-Dataset_* > Context-as-Memory-Dataset.zip
After extracting Context-as-Memory-Dataset.zip, the dataset will be organized as follows:
Context-as-Memory-Dataset
├── frames
│ ├── AncientTempleEnv_0
│ │ ├── 0000.png
│ │ ├── 0001.png
│ │ ├── 0002.png
│ │ └── ...
│ ├── AncientTempleEnv_1
│ │ ├── 0000.png
│ │ ├── 0001.png
│ │ ├── 0002.png
│ │ └── ...
│ └── ...
│
├── jsons
│ ├── AncientTempleEnv_0.json
│ ├── AncientTempleEnv_1.json
│ └── ...
│
├── overlap_labels
│ ├── AncientTempleEnv_0
│ │ ├── 0.json
│ │ ├── 1.json
│ │ ├── 2.json
│ │ └── ...
│ ├── AncientTempleEnv_1
│ │ ├── 0.json
│ │ ├── 1.json
│ │ ├── 2.json
│ │ └── ...
│ └── ...
│
└── captions.txt
- `frames/`: 100 subdirectories, each containing 7,601 video frame images.
- `jsons/`: 100 JSON files, each storing the camera pose (position + rotation) of every frame in the corresponding long video.
- `overlap_labels/`: 100 subdirectories, each containing 7,601 JSON files, where each file records the indices of the frames that overlap with that frame.
- `captions.txt`: Captions annotated for segments of a long video, each covering a given starting frame to an ending frame.
- `tools.py`: Converts (x, y, z, yaw, pitch) into an RT matrix, and can select a specific frame as the reference frame to align the RTs of all other frames to its coordinate system.
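The (x, y, z, yaw, pitch) → RT conversion can be sketched as below. The axis conventions and rotation order (yaw about the up axis, then pitch) are assumptions for illustration; the dataset's `tools.py` defines the authoritative conversion.

```python
import math

def pose_to_rt(x, y, z, yaw, pitch):
    """Build a 4x4 camera-to-world RT matrix from position (x, y, z) and
    yaw/pitch angles in degrees.

    Assumed convention (not verified against tools.py): yaw rotates about
    the z (up) axis, pitch about the y axis, composed as R = R_yaw @ R_pitch.
    """
    cy, sy = math.cos(math.radians(yaw)), math.sin(math.radians(yaw))
    cp, sp = math.cos(math.radians(pitch)), math.sin(math.radians(pitch))
    # R = R_yaw @ R_pitch, written out element by element
    r = [
        [cy * cp, -sy, cy * sp],
        [sy * cp,  cy, sy * sp],
        [-sp,     0.0, cp],
    ]
    # Append the translation column and the homogeneous bottom row
    return [row + [t] for row, t in zip(r, (x, y, z))] + [[0.0, 0.0, 0.0, 1.0]]
```

Aligning to a reference frame then amounts to left-multiplying every frame's RT by the inverse of the reference frame's RT, which `tools.py` also provides.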