Product 05 · Open Source
by dyslexAI LLC
Synthetic training data for the built world.
Open-source pipeline for generating photorealistic synthetic 3D training data. Domain and taxonomy agnostic — define your own object classes and scene configurations, and kubric-stair renders annotated datasets for any computer vision task.
Core Capabilities
kubric-stair generates photorealistic synthetic scenes with automatic annotations (bounding boxes, segmentation masks, and depth maps) for training and benchmarking any computer vision model. Define your domain, bring your taxonomy, and render at scale.
Blender-based 3D rendering with physically accurate lighting, materials, and camera models. Every frame is training-ready.
Define the object classes that matter to your problem. dyslexAI uses it for stairs, railings, and electrical panels — but the pipeline works with whatever you need to detect.
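A custom taxonomy can be as simple as a mapping from class names to label IDs and asset sources. The structure below is an illustrative sketch under assumed field names, not kubric-stair's actual config schema:

```python
# Hypothetical taxonomy sketch: class names mapped to the integer IDs used
# in segmentation masks and to the 3D asset directories a renderer might
# sample from. Field names are illustrative, not kubric-stair's schema.
TAXONOMY = {
    "stair":            {"label_id": 1, "assets": "assets/stairs/"},
    "railing":          {"label_id": 2, "assets": "assets/railings/"},
    "electrical_panel": {"label_id": 3, "assets": "assets/panels/"},
}

def label_id(class_name: str) -> int:
    """Look up the integer ID used in segmentation masks for a class."""
    return TAXONOMY[class_name]["label_id"]
```

Swapping in a different domain means replacing the entries, not the pipeline.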
Bounding boxes, segmentation masks, and depth maps generated automatically. No manual labeling required.
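Annotations fall out of the render because the engine already knows which pixels belong to which object. As one illustration of the idea (generic NumPy, not kubric-stair's internals), a tight bounding box can be read straight off an instance segmentation mask:

```python
import numpy as np

def bbox_from_mask(mask: np.ndarray, instance_id: int):
    """Return (x_min, y_min, x_max, y_max) for one instance in a mask,
    or None if the instance is not visible in the frame."""
    ys, xs = np.nonzero(mask == instance_id)
    if ys.size == 0:
        return None
    return int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max())

# Toy 5x5 mask with instance 2 occupying a 2x2 patch.
mask = np.zeros((5, 5), dtype=np.int32)
mask[1:3, 2:4] = 2
print(bbox_from_mask(mask, 2))  # (2, 1, 3, 2)
```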
Fully open source. Use it for your own domain, extend the pipeline, or contribute back.
Use Cases
Train RF-DETR, YOLO, and other detection models on your object classes. Photorealistic scenes with perfect ground-truth annotations — no manual labeling pipeline.
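Detection trainers commonly consume COCO-style JSON, so generated boxes can be packaged per frame in that layout. A sketch of the standard COCO record shape (`bbox` is `[x, y, width, height]`); the helper name and export format here are assumptions, not kubric-stair's documented output:

```python
import json

def coco_record(image_id: int, file_name: str, boxes: list) -> dict:
    """Package one rendered frame's boxes as a COCO-style record.
    `boxes` holds (category_id, x_min, y_min, w, h) tuples."""
    return {
        "images": [{"id": image_id, "file_name": file_name}],
        "annotations": [
            {"id": i, "image_id": image_id, "category_id": cat,
             "bbox": [x, y, w, h], "area": w * h, "iscrowd": 0}
            for i, (cat, x, y, w, h) in enumerate(boxes)
        ],
    }

record = coco_record(0, "frame_0000.png", [(1, 12, 40, 64, 128)])
print(json.dumps(record, indent=2))
```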
Augment small real-world datasets with synthetic diversity. Randomized lighting, materials, camera angles, and component configurations multiply your effective dataset size.
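The multiplication comes from domain randomization: each render draws scene parameters from configured ranges, so one scene definition yields many distinct frames. A minimal sketch of the sampling step, with illustrative parameter names and ranges rather than kubric-stair's config keys:

```python
import random

def sample_scene_params(rng: random.Random) -> dict:
    """Draw one randomized scene configuration. Ranges are illustrative."""
    return {
        "sun_elevation_deg":  rng.uniform(10.0, 80.0),   # lighting angle
        "light_intensity":    rng.uniform(0.5, 2.0),     # relative exposure
        "camera_height_m":    rng.uniform(1.2, 1.8),     # eye-level jitter
        "camera_yaw_deg":     rng.uniform(0.0, 360.0),   # viewpoint
        "material_roughness": rng.uniform(0.1, 0.9),     # surface variation
    }

rng = random.Random(42)  # fixed seed keeps the dataset reproducible
params = [sample_scene_params(rng) for _ in range(3)]
```

Seeding the generator is what makes a synthetic dataset reproducible: the same seed replays the same sequence of scene configurations.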
Benchmark monocular depth estimation models against known ground-truth depth. Every rendered frame includes pixel-perfect depth maps for quantitative evaluation.
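With per-pixel ground truth available, the standard monocular-depth metrics reduce to a few array operations. A NumPy sketch using the usual AbsRel and RMSE definitions (nothing specific to kubric-stair):

```python
import numpy as np

def depth_metrics(pred: np.ndarray, gt: np.ndarray) -> dict:
    """Absolute relative error and RMSE over valid (gt > 0) pixels."""
    valid = gt > 0                      # skip pixels with no ground truth
    p, g = pred[valid], gt[valid]
    abs_rel = float(np.mean(np.abs(p - g) / g))
    rmse = float(np.sqrt(np.mean((p - g) ** 2)))
    return {"abs_rel": abs_rel, "rmse": rmse}

gt = np.full((4, 4), 2.0)               # toy 4x4 frame, all at 2 m
pred = gt + 0.2                         # uniform 0.2 m over-estimate
print(depth_metrics(pred, gt))          # abs_rel ≈ 0.1, rmse ≈ 0.2
```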
Reproducible, configurable synthetic data for any computer vision research domain. Cite it, fork it, extend it — Apache 2.0 means no licensing barriers.
kubric-stair is open source and available now. Clone the repo, run the pipeline, and start generating training data for your models.
View on GitHub