ILLIXR plugins

This page details the structure of ILLIXR's plugins and how they interact with each other.

audio_pipeline

Launches one thread for binaural recording and one for binaural playback. Audio output is not yet routed to the system's speakers or microphone, but the plugin's compute workload is still representative of a real system. By default, this plugin is enabled (see native configuration).

debugview

Renders incoming frames from the graphics pipeline for debugging live executions of the application.

depthai

Enables access to the DepthAI library.

gldemo [1]

Renders a static scene (into left and right eye buffers) given the pose from pose_prediction.

Topic details:

  • Calls pose_prediction
  • Publishes rendered_frame to eyebuffer topic.
  • Publishes image_handle to image_handle topic.
  • Asynchronously reads time_point from vsync_estimate topic.

ground_truth_slam

Reads the ground truth from the same dataset as the offline_imu plugin. The ground-truth data can be compared against head-tracking results (e.g., from VIO, the IMU integrator, or the pose predictor) for accuracy. Timing information is taken from the offline_imu measurements.

Topic details:

  • Publishes pose_type to true_pose topic.
  • Publishes Eigen::Vector3f to ground_truth_offset topic.
  • Asynchronously reads imu_type from imu topic.

gtsam_integrator

Uses the GTSAM library (upstream) to integrate over all IMU samples since the last published visual-inertial pose, providing a fast pose every time a new IMU sample arrives.

hand_tracking

Detects and identifies hands in an image using CPU-based calculations. The output from this plugin can be used to track hand movements and recognize hand gestures.

Topic details:

  • Synchronously reads one of monocular_cam_type from webcam topic, binocular_cam_type from cam topic, or cam_type_zed from cam_zed topic; the source is selectable at run time via an environment variable.
  • Asynchronously reads camera_data from cam_data topic, only once, as the values are static.
  • If reading from webcam:
    • Asynchronously reads pose_type from pose topic.
    • Asynchronously reads one of depth_type from depth topic or rgb_depth_type from rgb_depth topic, depending on which is available.
  • If reading from cam:
    • Asynchronously reads pose_type from pose topic.
    • Asynchronously reads one of depth_type from depth topic or rgb_depth_type from rgb_depth topic, if either is available (neither is required).
  • If reading from cam_zed, no additional data are required.
  • Publishes ht_frame to ht topic.
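
A minimal sketch of the run-time input selection described above. The environment-variable name `HT_INPUT` and the fallback behavior are hypothetical, not the plugin's actual interface:

```cpp
#include <cstdlib>
#include <iostream>
#include <string>

// Choose the input topic from an environment variable at run time.
// "HT_INPUT" and the default of "cam" are illustrative names only.
std::string select_input_topic() {
    const char* choice = std::getenv("HT_INPUT");
    std::string c = choice ? choice : "cam"; // default: binocular camera
    if (c == "webcam" || c == "cam" || c == "cam_zed")
        return c;
    std::cerr << "Unknown input '" << c << "', falling back to cam\n";
    return "cam";
}
```

The returned topic name would then be used to subscribe to the corresponding camera topic.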

hand_tracking_gpu

Detects and identifies hands in an image using GPU-based calculations. The output from this plugin can be used to track hand movements and recognize hand gestures. This plugin is currently experimental.

Topic details:

  • Synchronously reads one of monocular_cam_type from webcam topic, binocular_cam_type from cam topic, or cam_type_zed from cam_zed topic; the source is selectable at run time via an environment variable.
  • Asynchronously reads camera_data from cam_data topic, only once, as the values are static.
  • If reading from webcam:
    • Asynchronously reads pose_type from pose topic.
    • Asynchronously reads one of depth_type from depth topic or rgb_depth_type from rgb_depth topic, depending on which is available.
  • If reading from cam:
    • Asynchronously reads pose_type from pose topic.
    • Asynchronously reads one of depth_type from depth topic or rgb_depth_type from rgb_depth topic, if either is available (neither is required).
  • If reading from cam_zed, no additional data are required.
  • Publishes ht_frame to ht topic.

hand_tracking.viewer

Reads the output of the hand_tracking plugin and displays the results on the screen. This is most useful for debugging. The capabilities of this plugin will be merged into the debugview plugin in the future.

Topic details:

  • Synchronously reads ht_frame from ht topic.

lighthouse

Enables lighthouse tracking using the libsurvive library.

native_renderer

Constructs a full rendering pipeline utilizing several ILLIXR components.

offline_cam

Reads camera images from files on disk, emulating real cameras on the headset (feeds the application input measurements with timing similar to an actual camera).

offline_imu

Reads IMU data files on disk, emulating a real sensor on the headset (feeds the application input measurements with timing similar to an actual IMU).

offload_data

Writes frames and poses output from the asynchronous reprojection plugin to disk for analysis.

Topic details:

  • Synchronously reads texture_pose from texture_pose topic.

offload_rendering_client

Receives encoded frames over the network, sent by offload_rendering_server.

offload_rendering_server

Encodes and transmits frames to one of the offload_rendering_clients.

offload_vio

Four plugins which work in unison to allow head tracking (VIO) to be performed remotely.

Topic details:

  • offload_vio.device_rx
    • Asynchronously reads a string from vio_pose topic.
    • Synchronously reads imu_type from imu topic.
    • Publishes pose_type to slow_pose topic.
    • Publishes imu_integrator_input to imu_integrator_input topic.
  • offload_vio.device_tx
    • Asynchronously reads binocular_cam_type from cam topic.
    • Publishes a string to compressed_imu_cam topic.
  • offload_vio.server_rx
    • Asynchronously reads a string from compressed_imu_cam topic.
    • Publishes imu_type to imu topic.
    • Publishes binocular_cam_type to cam topic.
  • offload_vio.server_tx
    • Asynchronously reads imu_integrator_input from imu_integrator_input topic.
    • Synchronously reads pose_type from slow_pose topic (published by open_vins).
    • Publishes a string to vio_pose topic.
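
As an illustration of what "publishes a string" means here, a toy wire format packing one IMU sample into bytes. The real plugins define their own serialization, so the struct and layout below are assumptions for illustration only:

```cpp
#include <cstdint>
#include <cstring>
#include <string>

// Hypothetical payload: timestamp plus 3-axis gyro and accel readings,
// packed into a std::string as a stand-in for the serialized data that
// offload_vio.device_tx publishes on the compressed_imu_cam topic.
struct imu_sample {
    int64_t ts_ns;
    float   gyro[3];
    float   accel[3];
};

std::string pack(const imu_sample& s) {
    std::string buf(sizeof(imu_sample), '\0');
    std::memcpy(buf.data(), &s, sizeof(imu_sample));
    return buf;
}

imu_sample unpack(const std::string& buf) {
    imu_sample s{};
    std::memcpy(&s, buf.data(), sizeof(imu_sample));
    return s;
}
```

The string travels over the network between device and server, where `unpack` recovers the sample on the receiving side.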

openni

Enables an interface to the OpenNI library.

open_vins

An alternative head-tracking implementation (upstream) that uses an MSCKF (Multi-State Constrained Kalman Filter) to determine poses from camera and IMU data.

openwarp_vk

Provides a Vulkan-based reprojection service.

orb_slam3

Utilizes the ORB_SLAM3 library to enable real-time head tracking.

passthrough_integrator

Provides IMU integration.

realsense

Reads images and IMU measurements from an Intel RealSense camera.

record_imu_cam

Writes imu_type and binocular_cam_type data to disk.

record_rgb_depth

Writes rgb_depth_type data to disk.

rk4_integrator

Uses RK4 (fourth-order Runge-Kutta) integration to integrate over all IMU samples since the last published visual-inertial pose, providing a fast pose every time a new IMU sample arrives.

tcp_network_backend

Provides network communications over TCP.

timewarp_gl [1]

Asynchronous reprojection of the eye buffers. The timewarp ends right before vsync, so it can deduce when the next vsync will be.

Topic details:

  • Calls pose_prediction
  • Publishes hologram_input to hologram_in topic.
  • If using Monado
    • Asynchronously reads rendered_frame from eyebuffer topic.
    • Publishes time_point to vsync_estimate topic.
    • Publishes texture_pose to texture_pose topic if ILLIXR_OFFLOAD_ENABLE is set in the environment.
  • If not using Monado
    • Publishes signal_to_quad to signal_quad topic.
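
The vsync_estimate values above amount to simple arithmetic: since the timewarp finishes right before a vsync, the next vsync is one display period away. The function and its parameters below are illustrative, not ILLIXR's actual implementation:

```cpp
#include <cstdint>

// Estimate the next vsync given the last observed vsync and the display
// period, skipping forward if `now` is already past one or more vsyncs.
// Assumes now_ns >= last_vsync_ns.
int64_t next_vsync_ns(int64_t last_vsync_ns, int64_t period_ns, int64_t now_ns) {
    int64_t elapsed        = now_ns - last_vsync_ns;
    int64_t periods_passed = elapsed / period_ns + 1; // next boundary after `now`
    return last_vsync_ns + periods_passed * period_ns;
}
```

Downstream plugins (e.g., gldemo) read this estimate to schedule rendering so a fresh frame is ready at the display boundary.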

timewarp_vk

Asynchronous reprojection of the eye buffers. The timewarp ends right before vsync, so it can deduce when the next vsync will be.

webcam

Uses a webcam to capture images for input into the hand_tracking plugin. This plugin is useful for debugging and is not meant to be used in a production pipeline.

zed

Reads images and IMU measurements from the ZED Mini. Unlike offline_imu, zed additionally provides RGB and depth data.

Note

This plugin implements two threads: one for the camera, and one for the IMU.

zed.data_injection

Reads images and pose information from disk and publishes them to ILLIXR.

Below this point, we will use Switchboard terminology. Read the API documentation on Switchboard for more information.

Dataflow diagram (image): the current dataflow between all plugins and services. Dark blue boxes represent plugins and cyan boxes represent services. Data types are represented by cylinders labelled topic <data_type>; service data types are represented by octagons. Solid lines point from the plugin/service that publishes to the data type. Dashed lines point from data types to the plugin/service that reads them synchronously. Dotted lines point from data types to the plugin/service that reads them asynchronously.
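
The synchronous/asynchronous distinction in the diagram can be modeled with a toy topic type: a synchronous reader consumes every published value in order, while an asynchronous reader just samples the most recent one. This is a single-threaded conceptual sketch of the semantics, not Switchboard's actual (thread-safe, callback-driven) API:

```cpp
#include <deque>
#include <optional>

// Minimal model of a Switchboard-style topic.
template <typename T>
class topic {
    std::deque<T>    queue_;   // pending values for the synchronous reader
    std::optional<T> latest_;  // most recent value for asynchronous readers
public:
    void put(const T& val) {
        queue_.push_back(val);
        latest_ = val;
    }
    // Synchronous read: take the oldest unconsumed value, if any.
    std::optional<T> read_sync() {
        if (queue_.empty()) return std::nullopt;
        T v = queue_.front();
        queue_.pop_front();
        return v;
    }
    // Asynchronous read: peek at the newest value without consuming it.
    std::optional<T> read_async() const { return latest_; }
};
```

This is why, for example, ground_truth_slam can read the imu topic asynchronously (it only needs the latest timestamp) while offload_data reads texture_pose synchronously (it must record every frame).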

See Writing Your Plugin to extend ILLIXR.

Plugin Interdependencies

Some plugins require other plugins to be loaded in order to work. The table below gives a listing of the plugin interdependencies.

Plugin           Requires         Provided by plugin
debugview        pose_prediction  fauxpose, pose_lookup, pose_prediction
gldemo           pose_prediction  fauxpose, pose_lookup, pose_prediction
native_renderer  app              vkdemo
                 display_sink     display_vk
                 pose_prediction  fauxpose, pose_lookup, pose_prediction
                 timewarp         timewarp_vk
timewarp_gl      pose_prediction  fauxpose, pose_lookup, pose_prediction
timewarp_vk      display_sink     display_vk
                 pose_prediction  fauxpose, pose_lookup, pose_prediction
vkdemo           display_sink     display_vk
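
A loader could verify these interdependencies with a simple lookup: for each service a plugin requires, check whether any plugin providing it is loaded. The plugin and service names come from the table above, but the checking logic itself is hypothetical, not part of ILLIXR:

```cpp
#include <map>
#include <set>
#include <string>
#include <vector>

// service name -> plugins that provide it
using provider_map = std::map<std::string, std::set<std::string>>;

// Return the required services that no loaded plugin provides.
std::vector<std::string> missing_services(const std::vector<std::string>& required,
                                          const std::set<std::string>&    loaded,
                                          const provider_map&             providers) {
    std::vector<std::string> missing;
    for (const auto& svc : required) {
        bool satisfied = false;
        auto it = providers.find(svc);
        if (it != providers.end())
            for (const auto& p : it->second)
                if (loaded.count(p)) { satisfied = true; break; }
        if (!satisfied) missing.push_back(svc);
    }
    return missing;
}
```

For example, loading gldemo together with pose_lookup satisfies its pose_prediction requirement, whereas loading gldemo alone does not.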

See Getting Started for more information on adding plugins to a profile file.


  [1] ILLIXR has switched to a Vulkan backend, so OpenGL-based plugins may not work on every system.