HORA: Hand–Object to Robot Action Dataset
Dataset Summary
HORA (Hand–Object to Robot Action) is a large-scale multimodal dataset that converts human hand–object interaction (HOI) demonstrations into robot-usable supervision for cross-embodiment learning. It combines HOI-style annotations (e.g., MANO hand parameters, object pose, contact) with embodied-robot learning signals (e.g., robot observations, end-effector trajectories) under a unified canonical action space.
HORA is constructed from three sources/subsets:
- HORA(Mocap): custom multi-view motion capture system with tactile-sensor gloves (includes tactile maps).
- HORA(Recordings): custom RGB(D) HOI recording setup (no tactile).
- HORA(Public Dataset): derived from multiple public HOI datasets and retargeted to robot embodiments (6/7-DoF arms).
Overall scale: ~150k trajectories across all subsets.
Key Features
- Unified multimodal representation across subsets, covering both HOI analysis and downstream robotic learning.
- HOI modalities: MANO hand parameters (pose/shape + global transform), object 6DoF pose, object assets, hand–object contact annotations.
- Robot modalities: wrist-view & third-person observations, and end-effector pose trajectories for robotic arms, all mapped to a canonical action space.
- Tactile (mocap subset): dense tactile map for both hand and object (plus object pose & assets).
Dataset Statistics
| Subset | Tactile | #Trajectories | Notes |
|---|---|---|---|
| HORA(Mocap) | ✅ | 63,141 | 6-DoF object pose + assets + tactile map |
| HORA(Recordings) | ❌ | 23,560 | 6-DoF object pose + assets |
| HORA(Public Dataset) | ❌ | 66,924 | retargeted cross-embodiment robot modalities |
| Total | — | 153,625 (~150k) | |
Supported Tasks and Use Cases
HORA is suitable for:
- Imitation Learning (IL) / Visuomotor policy learning
- Vision–Language–Action (VLA) model training and evaluation
- HOI-centric research: contact analysis, pose/trajectory learning, hand/object dynamics
Data Format
Example Episode Structure
Each episode/trajectory may include:
HOI fields
- `hand_mano`: MANO parameters (pose/shape, global rotation/translation)
- `object_pose_6d`: 6-DoF object pose sequence
- `contact`: hand–object contact annotations
- `object_asset`: mesh/texture id or path
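To make the HOI fields concrete, the sketch below builds a hypothetical in-memory view of one episode. Shapes follow common MANO conventions (45 joint-pose dims plus 3 global-rotation dims, 10 shape betas, 778 mesh vertices); the field names come from this card, but the exact shapes and the asset path are illustrative assumptions, not the file's guaranteed layout.

```python
import numpy as np

T = 120  # number of frames in this illustrative episode

# Hypothetical per-episode HOI fields; shapes are assumptions based on
# standard MANO conventions, not the dataset's exact on-disk layout.
episode = {
    "hand_mano": {
        "pose": np.zeros((T, 48)),   # axis-angle joint pose + global rotation
        "shape": np.zeros(10),       # shape betas (constant per subject)
        "trans": np.zeros((T, 3)),   # global translation
    },
    "object_pose_6d": np.zeros((T, 7)),         # per-frame [pos (3), quat (w, x, y, z)]
    "contact": np.zeros((T, 778), dtype=bool),  # per-vertex contact flags
    "object_asset": "assets/example_object.obj",  # hypothetical mesh path
}
```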
Robot fields
Global Attributes
task_description: Natural language instruction for the task (stored as HDF5 attribute).total_demos: Total number of trajectories in the file.
Observations (`obs` group)
- `agentview_rgb`: JPEG byte stream (variable-length `uint8`). Decodes to `(T, 480, 640, 3)`.
- `eye_in_hand_{side}_rgb`: JPEG byte stream (variable-length `uint8`). Decodes to `(T, 480, 640, 3)`.
- `{prefix}_joint_states`: arm joint positions in radians. Shape `(T, N_dof)`.
- `{prefix}_gripper_states`: gripper joint positions. Shape `(T, N_grip)`.
- `{prefix}_eef_pos`: end-effector position in the robot base frame. Shape `(T, 3)`.
- `{prefix}_eef_quat`: end-effector orientation `(w, x, y, z)` in the robot base frame. Shape `(T, 4)`.
- `object_{name}_pos`: ground-truth object position in the world frame. Shape `(T, 3)`.
- `object_{name}_quat`: ground-truth object orientation `(w, x, y, z)` in the world frame. Shape `(T, 4)`.
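The quaternion fields use scalar-first `(w, x, y, z)` ordering, while many common libraries (e.g. SciPy's `Rotation`) default to scalar-last, so consumers typically need a small conversion step. A minimal numpy sketch of scalar-first quaternion to rotation matrix:

```python
import numpy as np

def wxyz_to_matrix(q):
    """Convert a scalar-first quaternion (w, x, y, z) to a 3x3 rotation matrix."""
    w, x, y, z = q / np.linalg.norm(q)  # normalize defensively
    return np.array([
        [1 - 2 * (y * y + z * z), 2 * (x * y - w * z),     2 * (x * z + w * y)],
        [2 * (x * y + w * z),     1 - 2 * (x * x + z * z), 2 * (y * z - w * x)],
        [2 * (x * z - w * y),     2 * (y * z + w * x),     1 - 2 * (x * x + y * y)],
    ])

# Identity quaternion -> identity rotation.
R_id = wxyz_to_matrix(np.array([1.0, 0.0, 0.0, 0.0]))

# 90 degrees about z maps the x-axis onto the y-axis.
R_z90 = wxyz_to_matrix(np.array([np.cos(np.pi / 4), 0.0, 0.0, np.sin(np.pi / 4)]))
```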
Actions & States
Note: for multi-robot setups, the fields below concatenate data from all robots in order (e.g., `[robot0, robot1]`).
- `actions`: joint-space control targets. Shape `(T, N_dof + 1)`. Format: `[joint_positions, normalized_gripper]`, where the gripper value is in `[0, 1]`.
- `actions_ee`: Cartesian control targets. Shape `(T, 7)`. Format: `[pos (3), axis-angle (3), normalized_gripper (1)]`.
- `robot_states`: robot base pose in the world frame. Shape `(T, 7 * N_robots)`. Format: `[pos (3), quat (4)]` per robot, with quaternion in `(w, x, y, z)` order.
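Because multi-robot fields are concatenated in robot order, the `actions` array has to be split back into per-robot chunks before being routed to each controller. A sketch, assuming a hypothetical bimanual setup with two 7-DoF arms (so each robot contributes `N_dof + 1 = 8` dims):

```python
import numpy as np

def split_actions(actions, dofs):
    """Split concatenated multi-robot joint actions into per-robot arrays.

    actions: (T, sum(dof_i + 1)) array -- [joint_positions, normalized_gripper]
             per robot, concatenated in robot order.
    dofs:    per-robot joint counts, e.g. [7, 7] for two 7-DoF arms.
    """
    chunks, start = [], 0
    for dof in dofs:
        width = dof + 1  # joints + one normalized gripper dim
        chunks.append(actions[:, start:start + width])
        start += width
    assert start == actions.shape[1], "dofs do not match action width"
    return chunks

# Illustrative: T = 5 steps, two 7-DoF arms -> concatenated action width 16.
acts = np.zeros((5, 16))
left, right = split_actions(acts, [7, 7])
```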
Tactile fields (mocap only)
- `tactile_hand`: dense tactile map for the hand (time × sensors/vertices)
- `tactile_object`: dense tactile map for the object
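As a consumer-side sketch, assuming `tactile_hand` is a `(T, N_sensors)` float array of pressure readings (the card does not specify units or dtype), per-frame contact intensity can be summarized by reducing over the sensor axis:

```python
import numpy as np

# Hypothetical tactile map: T frames x N sensors of pressure readings in [0, 1].
T, n_sensors = 100, 1024
rng = np.random.default_rng(0)
tactile_hand = rng.random((T, n_sensors))

mean_pressure = tactile_hand.mean(axis=1)    # (T,) average pressure per frame
active_sensors = (tactile_hand > 0.9).sum(axis=1)  # (T,) sensors above an assumed threshold
```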