API Reference

This section contains the complete API documentation for 🌈 regenbogen 🌈, automatically generated from class definitions and docstrings in the source code.

Core Module

The core module contains the fundamental building blocks of the framework: pipelines and nodes.

Base Node class for regenbogen framework.

All processing nodes inherit from this base class and implement the process method.

class regenbogen.core.node.InputPort(node: Node)[source]

Bases: object

Container for node inputs accessible as attributes.

class regenbogen.core.node.InputReference(node: Node, attr_name: str)[source]

Bases: object

Reference to a node’s input attribute.

class regenbogen.core.node.Node(name: str | None = None, **kwargs)[source]

Bases: ABC

Base class for all processing nodes in the regenbogen framework.

Each node represents a single processing step in the pipeline and communicates through standardized data interfaces.

Nodes declare their inputs and outputs using type annotations on the process() method.

Example

    class MyNode(Node):
        def process(self, frame: Frame) -> ObjectModel:
            model = ObjectModel(name="my_object")  # build the result here
            return model

static connect(source: OutputReference, target: InputReference) → None[source]

Explicitly connect an output to an input.

Parameters:
  • source – Output reference

  • target – Input reference

abstractmethod process(input_data: Any) → Any[source]

Process input data and return output.

Parameters:

input_data – Input data following framework interfaces

Returns:

Output data following framework interfaces

class regenbogen.core.node.OutputPort(node: Node)[source]

Bases: object

Container for node outputs accessible as attributes.

class regenbogen.core.node.OutputReference(node: Node, attr_name: str)[source]

Bases: object

Reference to a node’s output attribute.

connect(target: InputReference) → OutputReference[source]

Connect this output to an input.

Parameters:

target – Input reference to connect to

Returns:

Self for chaining
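Because connect returns the reference itself, one output can be wired to several inputs in a single expression. A stdlib-only sketch of that pattern (the classes below are illustrative stand-ins, not the actual OutputReference implementation):

```python
class _Input:
    def __init__(self, name):
        self.name = name

class _Output:
    """Illustrative stand-in for OutputReference: connect() returns self."""
    def __init__(self, name):
        self.name = name
        self.targets = []

    def connect(self, target):
        # Record the edge, then return self so calls can be chained.
        self.targets.append(target)
        return self

frame_out = _Output("frame")
# Fan the same output into two consumers in one expression:
frame_out.connect(_Input("detector.frame")).connect(_Input("logger.frame"))
print([t.name for t in frame_out.targets])
```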

Pipeline class for regenbogen framework.

The Pipeline orchestrates the execution of nodes in sequence or as a directed graph.

class regenbogen.core.pipeline.Pipeline(name: str = 'Pipeline', enable_rerun_logging: bool = False, rerun_recording_name: str | None = None, rerun_spawn_viewer: bool = True)[source]

Bases: object

Pipeline for orchestrating the execution of processing nodes.

The pipeline manages the flow of data between nodes and provides utilities for debugging, profiling, and visualization.

add_node(node: Node) → Pipeline[source]

Add a node to the pipeline.

Parameters:

node – Node to add to the pipeline

Returns:

Self for method chaining

clear()[source]

Clear all nodes from the pipeline.

get_node(name: str) → Node | None[source]

Get a node by name.

Parameters:

name – Name of the node to find

Returns:

Node if found, None otherwise

log_pipeline_graph(entity_path: str = 'pipeline_graph')[source]

Log the pipeline structure as a graph visualization to Rerun.

Parameters:

entity_path – Entity path for the graph visualization

process(input_data: Any) → Any[source]

Process input data through all nodes in sequence.

Parameters:

input_data – Input data for the first node

Returns:

Output data from the last node
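Conceptually, sequential processing is a left fold of the input through the node list: each node's output becomes the next node's input. A stdlib-only sketch of that behaviour (the lambdas stand in for real Node.process implementations):

```python
from functools import reduce

def run_sequence(nodes, input_data):
    """Feed input through each processing step in order; each output
    becomes the next step's input, and the last output is returned."""
    return reduce(lambda data, node: node(data), nodes, input_data)

steps = [lambda x: x + 1, lambda x: x * 2]  # stand-ins for node process methods
print(run_sequence(steps, 3))  # (3 + 1) * 2 = 8
```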

process_dataset(dataset_path: str, **kwargs) → List[Any][source]

Process an entire dataset through the pipeline.

Parameters:
  • dataset_path – Path to the dataset

  • **kwargs – Additional parameters for dataset loading

Returns:

List of processing results

process_stream(input_data: Any = None) → Iterator[Any][source]

Process a stream of data through the pipeline.

This method is designed to work with nodes that generate multiple outputs (like VideoReaderNode). The first node should return an iterator/generator, and each item from that iterator will be processed through the remaining nodes.

Parameters:

input_data – Input data for the first node (can be None for source nodes like VideoReader)

Yields:

Output data from the last node for each input item
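The streaming contract described above — the first node yields items, and the remaining nodes run once per item — can be sketched with plain generators (the functions are illustrative stand-ins, not the actual VideoReaderNode):

```python
def fake_source(_=None):
    # Stand-in for a source node such as VideoReaderNode: yields frames.
    yield from ["frame0", "frame1", "frame2"]

def fake_detector(frame):
    # Stand-in for a downstream processing node.
    return frame.upper()

def stream(nodes, input_data=None):
    """First node produces an iterator; each item flows through the rest."""
    source, *rest = nodes
    for item in source(input_data):
        for node in rest:
            item = node(item)
        yield item

# Each yielded value has passed through all remaining nodes:
print(list(stream([fake_source, fake_detector])))
```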

remove_node(name: str) → bool[source]

Remove a node by name.

Parameters:

name – Name of the node to remove

Returns:

True if node was removed, False if not found

Graph-based pipeline for directed acyclic graph execution.

Supports complex pipelines with branching and merging using natural output reuse. Can be used explicitly or connections can be discovered automatically from nodes.

class regenbogen.core.graph_pipeline.GraphPipeline(name: str = 'GraphPipeline', enable_rerun_logging: bool = True, rerun_recording_name: str | None = None, rerun_spawn_viewer: bool = True)[source]

Bases: object

Pipeline supporting directed acyclic graph (DAG) execution.

Nodes connect by referencing output attributes from other nodes. Multiple nodes can read from the same output (natural branching).

Can be used explicitly or connections can be discovered automatically.

Example 1 - PyTorch style (no explicit pipeline):

    source = SourceNode(name="src")
    processor = ProcessorNode(name="proc")(source)
    output = execute(processor)

Example 2 - Explicit connections:

    source = SourceNode(name="src")
    processor = ProcessorNode(name="proc")
    source.outputs.frame.connect(processor.inputs.frame)
    output = execute(processor)

Example 3 - Using Pipeline class:

    pipeline = GraphPipeline()
    pipeline.add_node(source)
    pipeline.add_node(processor, inputs={"frame": source.outputs.frame})
    results = pipeline.process()

add_node(node: Node, inputs: Dict[str, Any] | None = None) → GraphPipeline[source]

Add a node to the graph pipeline.

Parameters:
  • node – Node to add

  • inputs – Dictionary mapping input parameter names to output attributes from other nodes (e.g., {“frame”: bop_node.outputs.frame})

Returns:

Self for method chaining

classmethod from_nodes(*nodes: Node, name: str = 'AutoPipeline') → GraphPipeline[source]

Create a pipeline from a set of nodes, discovering connections automatically.

Parameters:
  • *nodes – Nodes to include in pipeline

  • name – Pipeline name

Returns:

GraphPipeline instance

log_pipeline_graph(entity_path: str = 'pipeline_graph')[source]

Log the pipeline structure as a DAG visualization to Rerun.

Parameters:

entity_path – Entity path for the graph visualization

process(input_data: Any = None) → Dict[str, Any][source]

Execute the pipeline on input data.

Parameters:

input_data – Input data for source nodes

Returns:

Dictionary mapping node names to their outputs

process_stream(input_data: Any = None)[source]

Process streaming data through the pipeline.

Parameters:

input_data – Input data for source nodes

Yields:

Dictionary mapping node names to their outputs for each item

validate() → tuple[source]

Validate the pipeline structure.

Returns:

Tuple of (is_valid, list of error messages)
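DAG validation of this kind typically checks for unknown dependencies and cycles. A minimal sketch using Kahn's algorithm over a name-to-dependencies mapping (this graph encoding is an assumption for illustration, not the pipeline's internal representation):

```python
def validate_dag(deps):
    """Return (is_valid, errors) for a {node: [dependency, ...]} mapping."""
    errors = [f"{n} depends on unknown node {d}"
              for n, ds in deps.items() for d in ds if d not in deps]
    # Kahn's algorithm: repeatedly remove nodes whose deps are resolved;
    # if nothing is removable but nodes remain, there is a cycle.
    remaining = {n: {d for d in ds if d in deps} for n, ds in deps.items()}
    while remaining:
        ready = [n for n, ds in remaining.items() if not ds]
        if not ready:
            errors.append("cycle detected among: " + ", ".join(sorted(remaining)))
            break
        for n in ready:
            del remaining[n]
        for ds in remaining.values():
            ds.difference_update(ready)
    return (not errors, errors)

print(validate_dag({"src": [], "proc": ["src"], "sink": ["proc"]}))  # valid chain
print(validate_dag({"a": ["b"], "b": ["a"]}))  # cycle reported
```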

regenbogen.core.graph_pipeline.execute(*nodes: Node, return_all: bool = False) → Any[source]

Execute a graph of connected nodes without explicit Pipeline.

Parameters:
  • *nodes – One or more nodes to execute (in any order)

  • return_all – If True, return dict of all outputs; if False, return last node’s output

Returns:

Output from the last node, or dict of all outputs if return_all=True
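An executor like this has to run nodes in dependency order, feeding each node the outputs of its upstream connections. A hedged, stdlib-only sketch of that idea (the callable-and-dict wiring below is assumed for illustration, not the package's internal representation):

```python
def execute_graph(funcs, deps):
    """Run {name: callable} nodes in dependency order; each callable
    receives the outputs of its dependencies. Returns all outputs."""
    outputs, done = {}, set()
    while len(done) < len(funcs):
        progressed = False
        for name, fn in funcs.items():
            if name in done or any(d not in done for d in deps.get(name, [])):
                continue  # not ready yet
            outputs[name] = fn(*(outputs[d] for d in deps.get(name, [])))
            done.add(name)
            progressed = True
        if not progressed:
            raise ValueError("cycle or missing dependency")
    return outputs

results = execute_graph(
    {"src": lambda: 10, "double": lambda x: x * 2, "sum": lambda a, b: a + b},
    {"double": ["src"], "sum": ["src", "double"]},
)
print(results)  # outputs for every node, keyed by name
```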

Data Interfaces

Standardized data structures used for communication between nodes in the pipeline.

Core data interfaces for regenbogen framework.

These interfaces define the standardized data structures used for communication between nodes in the pipeline.

class regenbogen.interfaces.BoundingBoxes(boxes: ndarray[Any, dtype[float32]], scores: ndarray[Any, dtype[float32]], labels: ndarray[Any, dtype[int32]], class_names: List[str] = <factory>, metadata: Dict[str, Any] = <factory>)[source]

Bases: object

Bounding boxes interface for object detection results.

boxes

Bounding boxes as numpy array (N, 4) in [x1, y1, x2, y2] format

Type:

numpy.ndarray[Any, numpy.dtype[numpy.float32]]

scores

Confidence scores as numpy array (N,)

Type:

numpy.ndarray[Any, numpy.dtype[numpy.float32]]

labels

Class labels as numpy array (N,)

Type:

numpy.ndarray[Any, numpy.dtype[numpy.int32]]

class_names

List of class names corresponding to labels

Type:

List[str]

metadata

Additional metadata dictionary

Type:

Dict[str, Any]

boxes: ndarray[Any, dtype[float32]]
class_names: List[str]
labels: ndarray[Any, dtype[int32]]
metadata: Dict[str, Any]
scores: ndarray[Any, dtype[float32]]
class regenbogen.interfaces.ErrorMetrics(add: float | None = None, add_s: float | None = None, projection_error: float | None = None, runtime: float | None = None, metadata: Dict[str, Any] = <factory>)[source]

Bases: object

Error metrics interface for evaluation results.

add

Average Distance (ADD) error

Type:

float | None

add_s

Average Distance error for symmetric objects (ADD-S)

Type:

float | None

projection_error

2D projection error

Type:

float | None

runtime

Processing runtime in seconds

Type:

float | None

metadata

Additional metadata dictionary

Type:

Dict[str, Any]

add: float | None = None
add_s: float | None = None
metadata: Dict[str, Any]
projection_error: float | None = None
runtime: float | None = None
class regenbogen.interfaces.Features(descriptors: ndarray[Any, dtype[float32]] | None = None, keypoints: ndarray[Any, dtype[float32]] | None = None, keypoints_3d: ndarray[Any, dtype[float32]] | None = None, embeddings: ndarray[Any, dtype[float32]] | None = None, metadata: Dict[str, Any] = <factory>)[source]

Bases: object

Features interface containing descriptors and embeddings.

descriptors

Feature descriptors as numpy array or None

Type:

numpy.ndarray[Any, numpy.dtype[numpy.float32]] | None

keypoints

2D keypoints as numpy array (N, 2) or None

Type:

numpy.ndarray[Any, numpy.dtype[numpy.float32]] | None

keypoints_3d

3D keypoints as numpy array (N, 3) or None

Type:

numpy.ndarray[Any, numpy.dtype[numpy.float32]] | None

embeddings

Feature embeddings as numpy array or None

Type:

numpy.ndarray[Any, numpy.dtype[numpy.float32]] | None

metadata

Additional metadata dictionary

Type:

Dict[str, Any]

descriptors: ndarray[Any, dtype[float32]] | None = None
embeddings: ndarray[Any, dtype[float32]] | None = None
keypoints: ndarray[Any, dtype[float32]] | None = None
keypoints_3d: ndarray[Any, dtype[float32]] | None = None
metadata: Dict[str, Any]
class regenbogen.interfaces.Frame(rgb: ndarray[Any, dtype[uint8]], idx: int | None = None, depth: ndarray[Any, dtype[float32]] | None = None, intrinsics: ndarray[Any, dtype[float64]] | None = None, extrinsics: ndarray[Any, dtype[float64]] | None = None, pointcloud: PointCloud | None = None, metadata: Dict[str, Any] = <factory>, masks: Masks | None = None)[source]

Bases: object

Frame interface containing RGB, depth, camera intrinsics/extrinsics, and pointcloud data.

rgb

RGB image as numpy array (H, W, 3)

Type:

numpy.ndarray[Any, numpy.dtype[numpy.uint8]]

idx

Optional global frame index

Type:

int | None

depth

Depth image as numpy array (H, W) or None

Type:

numpy.ndarray[Any, numpy.dtype[numpy.float32]] | None

intrinsics

Camera intrinsics matrix (3, 3) or None

Type:

numpy.ndarray[Any, numpy.dtype[numpy.float64]] | None

extrinsics

Camera extrinsics matrix (4, 4) or None

Type:

numpy.ndarray[Any, numpy.dtype[numpy.float64]] | None

pointcloud

PointCloud instance or None

Type:

regenbogen.interfaces.PointCloud | None

metadata

Additional metadata dictionary

Type:

Dict[str, Any]

depth: ndarray[Any, dtype[float32]] | None = None
extrinsics: ndarray[Any, dtype[float64]] | None = None
idx: int | None = None
intrinsics: ndarray[Any, dtype[float64]] | None = None
masks: Masks | None = None
metadata: Dict[str, Any]
pointcloud: PointCloud | None = None
rgb: ndarray[Any, dtype[uint8]]
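A common use of the depth and intrinsics fields is backprojecting depth pixels into camera-space 3D points. A stdlib-only sketch of the standard pinhole backprojection (the helper is illustrative; it is not claimed to be a utility the package provides):

```python
def backproject(depth, fx, fy, cx, cy):
    """Turn a depth map (list of rows, metres) plus pinhole intrinsics
    (focal lengths fx, fy and principal point cx, cy) into a list of
    (X, Y, Z) camera-space points, skipping invalid zero depth."""
    points = []
    for v, row in enumerate(depth):
        for u, z in enumerate(row):
            if z <= 0:
                continue  # missing / invalid depth reading
            points.append(((u - cx) * z / fx, (v - cy) * z / fy, z))
    return points

# Toy 2x2 depth map with the principal point at the origin pixel:
pts = backproject([[1.0, 2.0], [0.0, 4.0]], fx=1.0, fy=1.0, cx=0.0, cy=0.0)
print(pts)  # one point per valid depth pixel
```

In practice the same arithmetic is done vectorised over the (H, W) depth array with the (3, 3) intrinsics matrix; the loop form above just makes the per-pixel math explicit.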
class regenbogen.interfaces.Masks(masks: ndarray[Any, dtype[bool_]], boxes: ndarray[Any, dtype[float32]], scores: ndarray[Any, dtype[float32]], labels: ndarray[Any, dtype[int32]] | None = None, class_names: List[str] | None = None, metadata: Dict[str, Any] = None)[source]

Bases: object

Segmentation masks interface for instance segmentation results.

masks

Binary segmentation masks as numpy array (N, H, W) where N is number of instances

Type:

numpy.ndarray[Any, numpy.dtype[numpy.bool_]]

boxes

Bounding boxes as numpy array (N, 4) in [x1, y1, x2, y2] format

Type:

numpy.ndarray[Any, numpy.dtype[numpy.float32]]

scores

Confidence scores as numpy array (N,)

Type:

numpy.ndarray[Any, numpy.dtype[numpy.float32]]

labels

Class labels as numpy array (N,) or None

Type:

numpy.ndarray[Any, numpy.dtype[numpy.int32]] | None

class_names

List of class names corresponding to labels

Type:

List[str] | None

metadata

Additional metadata dictionary

Type:

Dict[str, Any]

boxes: ndarray[Any, dtype[float32]]
class_names: List[str] | None = None
labels: ndarray[Any, dtype[int32]] | None = None
masks: ndarray[Any, dtype[bool_]]
metadata: Dict[str, Any] = None
scores: ndarray[Any, dtype[float32]]
class regenbogen.interfaces.ObjectModel(mesh_vertices: ndarray[Any, dtype[float32]] | None = None, mesh_faces: ndarray[Any, dtype[int32]] | None = None, mesh_normals: ndarray[Any, dtype[float32]] | None = None, pointcloud: PointCloud | None = None, name: str = '', metadata: Dict[str, Any] = <factory>)[source]

Bases: object

Object model interface containing mesh or reference pointcloud data.

mesh_vertices

Mesh vertices as numpy array (N, 3) or None

Type:

numpy.ndarray[Any, numpy.dtype[numpy.float32]] | None

mesh_faces

Mesh faces as numpy array (M, 3) or None

Type:

numpy.ndarray[Any, numpy.dtype[numpy.int32]] | None

mesh_normals

Mesh normals as numpy array (N, 3) or None

Type:

numpy.ndarray[Any, numpy.dtype[numpy.float32]] | None

pointcloud

PointCloud instance or None

Type:

regenbogen.interfaces.PointCloud | None

name

Object name/identifier

Type:

str

metadata

Additional metadata dictionary

Type:

Dict[str, Any]

mesh_faces: ndarray[Any, dtype[int32]] | None = None
mesh_normals: ndarray[Any, dtype[float32]] | None = None
mesh_vertices: ndarray[Any, dtype[float32]] | None = None
metadata: Dict[str, Any]
name: str = ''
pointcloud: PointCloud | None = None
class regenbogen.interfaces.PointCloud(points: ndarray[Any, dtype[float32]], colors: ndarray[Any, dtype[float32]] | None = None, normals: ndarray[Any, dtype[float32]] | None = None, metadata: Dict[str, Any] = <factory>)[source]

Bases: object

Point cloud interface.

points

3D points as numpy array (N, 3)

Type:

numpy.ndarray[Any, numpy.dtype[numpy.float32]]

colors

Point colors as numpy array (N, 3) or None

Type:

numpy.ndarray[Any, numpy.dtype[numpy.float32]] | None

normals

Point normals as numpy array (N, 3) or None

Type:

numpy.ndarray[Any, numpy.dtype[numpy.float32]] | None

metadata

Additional metadata dictionary

Type:

Dict[str, Any]

colors: ndarray[Any, dtype[float32]] | None = None
metadata: Dict[str, Any]
normals: ndarray[Any, dtype[float32]] | None = None
points: ndarray[Any, dtype[float32]]
class regenbogen.interfaces.Pose(rotation: ndarray[Any, dtype[float64]], translation: ndarray[Any, dtype[float64]], scores: Dict[str, float] = <factory>, metadata: Dict[str, Any] = <factory>)[source]

Bases: object

Pose interface containing rotation, translation and confidence scores.

rotation

Rotation matrix (3, 3) or quaternion (4,)

Type:

numpy.ndarray[Any, numpy.dtype[numpy.float64]]

translation

Translation vector (3,)

Type:

numpy.ndarray[Any, numpy.dtype[numpy.float64]]

scores

Confidence scores dictionary

Type:

Dict[str, float]

metadata

Additional metadata dictionary

Type:

Dict[str, Any]

metadata: Dict[str, Any]
rotation: ndarray[Any, dtype[float64]]
scores: Dict[str, float]
translation: ndarray[Any, dtype[float64]]
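A (3, 3) rotation matrix and (3,) translation vector are often assembled into a single 4x4 homogeneous transform before composing or applying poses. A stdlib-only sketch using nested lists in place of numpy arrays for self-containment:

```python
def to_homogeneous(rotation, translation):
    """Stack a 3x3 rotation and 3-vector translation into the 4x4 matrix
    [[R, t], [0, 0, 0, 1]] suitable for chaining transforms by matrix
    multiplication."""
    return [rotation[i] + [translation[i]] for i in range(3)] + [[0.0, 0.0, 0.0, 1.0]]

identity_r = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
T = to_homogeneous(identity_r, [0.5, 0.0, 2.0])
print(T)  # rotation block, translation column, bottom row [0, 0, 0, 1]
```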

Utility Functions

Helper utilities for visualization and device management.

Rerun logging utility for regenbogen framework.

This module provides functionality to log intermediate pipeline results to Rerun for interactive visualization and debugging.

class regenbogen.utils.rerun_logger.RerunLogger(recording_name: str = 'regenbogen_pipeline', enabled: bool = True, spawn: bool = True)[source]

Bases: object

Logger for pipeline intermediate results using Rerun.

This class handles logging various data types (images, point clouds, poses, etc.) to Rerun for interactive visualization during pipeline execution.

log_bounding_boxes(boxes: BoundingBoxes, entity_path: str = 'detections')[source]

Log BoundingBoxes to Rerun.

Parameters:
  • boxes – BoundingBoxes object containing detection results

  • entity_path – Entity path for logging

log_features(features: Features, entity_path: str = 'features')[source]

Log Features to Rerun.

Parameters:
  • features – Features object containing keypoints and descriptors

  • entity_path – Entity path for logging

log_frame(frame: Frame, entity_path: str = 'frame', log_poses: List[Pose] | None = None, log_object_ids: List[int] | None = None, object_models: Dict[int, ObjectModel] | None = None)[source]

Log a frame to Rerun with optional ground truth poses and object models.

Parameters:
  • frame – Frame containing RGB, depth, and intrinsics

  • entity_path – Entity path for the frame

  • log_poses – Optional list of ground truth poses to visualize

  • log_object_ids – Optional list of object IDs corresponding to poses

  • object_models – Optional dictionary of object models for visualization

log_graph_dag(nodes: list, node_inputs: dict, entity_path: str = 'dag_graph')[source]

Log a directed acyclic graph (DAG) structure to Rerun showing actual connections.

Parameters:
  • nodes – List of Node objects

  • node_inputs – Dictionary mapping nodes to their input connections

  • entity_path – Entity path for the graph visualization

log_masks(masks: Masks, entity_path: str = 'segmentation')[source]

Log instance segmentation masks to Rerun.

Parameters:
  • masks – Masks object containing segmentation results

  • entity_path – Entity path for logging

log_metadata(metadata: Dict[str, Any], entity_path: str = 'metadata')[source]

Log metadata as text.

Parameters:
  • metadata – Metadata dictionary

  • entity_path – Entity path for logging

log_object_model(model: ObjectModel, entity_path: str = 'object_model', pose: Pose | None = None, scale_factor: float = 1.0)[source]

Log an ObjectModel to Rerun with optional pose transformation.

Parameters:
  • model – ObjectModel containing mesh or pointcloud data

  • entity_path – Entity path for logging

  • pose – Optional pose to transform the object model

  • scale_factor – Scale factor to apply to the model (e.g., 0.001 for mm to meters)

log_pipeline_graph(nodes: list, entity_path: str = 'pipeline_graph')[source]

Log a pipeline structure as a graph visualization to Rerun.

Parameters:
  • nodes – List of pipeline nodes

  • entity_path – Entity path for the graph visualization

log_pipeline_step(step_name: str, data: Any, step_index: int)[source]

Log pipeline step results with automatic type detection.

Parameters:
  • step_name – Name of the pipeline step

  • data – Output data from the pipeline step

  • step_index – Index of the step in the pipeline
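
Automatic type detection here amounts to dispatching on the runtime type of the step's output. A sketch of that dispatch pattern (the handlers and messages are illustrative; the real logger routes to the log_* methods documented above):

```python
def detect_and_describe(data):
    """Pick a handler based on the output's runtime type, falling back
    to metadata-style text logging for unknown types."""
    handlers = {
        dict: lambda d: f"metadata dict with {len(d)} key(s)",
        list: lambda d: f"batch of {len(d)} items",
        str: lambda d: f"text: {d}",
    }
    handler = handlers.get(type(data), lambda d: f"unhandled type {type(d).__name__}")
    return handler(data)

print(detect_and_describe({"fps": 30}))
print(detect_and_describe([1, 2, 3]))
print(detect_and_describe(4.2))
```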

log_pointcloud(points: ndarray, entity_path: str = 'pointcloud', colors: ndarray | None = None)[source]

Log a pointcloud to Rerun.

Parameters:
  • points – Point cloud array (N, 3)

  • entity_path – Entity path for logging

  • colors – Optional colors array (N, 3)

log_pose(pose: Pose, entity_path: str = 'pose')[source]

Log a Pose object to Rerun.

Parameters:
  • pose – Pose object containing rotation and translation

  • entity_path – Entity path for logging

set_time_sequence(timeline_name: str, sequence_number: int)[source]

Set the time sequence for timeline-based logging.

Parameters:
  • timeline_name – Name of the timeline (e.g., “frame”)

  • sequence_number – Sequence number for this point in time