Use a preset training mode below or customize one to your needs.
All settings influence the final annotation structure and data output format.
Select which sensors should be simulated in the synthetic dataset.
Randomly generated variations in environmental conditions such as rain, fog, or clear skies.
Randomly varied lighting angles, produced by changing the light source's position.
Randomized material or texture variations applied to objects.
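As a rough illustration of how these randomization settings could be sampled per rendered frame, here is a minimal Python sketch. The parameter names, value ranges, and configuration structure are assumptions for illustration only, not the tool's actual schema.

```python
import random

# Hypothetical randomization config; names and ranges are illustrative only.
randomization_config = {
    "weather": ["clear", "rain", "fog"],           # environmental conditions
    "sun_elevation_deg": (10.0, 80.0),             # light source position range
    "sun_azimuth_deg": (0.0, 360.0),
    "texture_sets": ["default", "worn", "matte"],  # material/texture variants
}

def sample_scene_parameters(cfg):
    """Draw one random scene configuration per rendered frame."""
    return {
        "weather": random.choice(cfg["weather"]),
        "sun_elevation_deg": random.uniform(*cfg["sun_elevation_deg"]),
        "sun_azimuth_deg": random.uniform(*cfg["sun_azimuth_deg"]),
        "texture": random.choice(cfg["texture_sets"]),
    }

print(sample_scene_parameters(randomization_config))
```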
Minimum and maximum distances define the range of camera positions, and step sets the spacing between samples (see the sketch after these camera settings). Note: there is no Camera Roll setting because roll can be added programmatically during training by rotating the images, eliminating the need to generate roll variations here.
Controls how close or far the virtual camera is from the target.
Rotates the camera left and right around the object.
Tilts the camera up and down to view from above or below.
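A minimal sketch of how the distance range and the roll-augmentation note above could translate into code. The numeric values, function names, and the use of NumPy and Pillow are assumptions for illustration, not part of the tool itself.

```python
import numpy as np
from PIL import Image

# Sample camera distances from a min/max range with a fixed step
# (values are illustrative).
min_distance_m, max_distance_m, step_m = 2.0, 10.0, 0.5
distances = np.arange(min_distance_m, max_distance_m + step_m, step_m)

# Roll does not need to be rendered: it can be added during training
# by rotating the already-generated images as an augmentation.
def add_roll_augmentation(image: Image.Image, max_roll_deg: float = 15.0) -> Image.Image:
    roll = np.random.uniform(-max_roll_deg, max_roll_deg)
    return image.rotate(roll, resample=Image.BILINEAR, expand=False)
```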
Select a known camera model or manually enter the stereo camera separation (baseline). This value affects depth estimation fidelity.
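To show why the stereo separation matters, here is a minimal sketch of the standard pinhole stereo depth relation; the focal length, baseline, and disparity values are illustrative assumptions.

```python
def depth_from_disparity(disparity_px: float, focal_length_px: float, baseline_m: float) -> float:
    """Standard pinhole stereo relation: depth = f * B / d.
    A wider baseline (camera separation) yields larger disparities at the
    same depth, which generally improves depth resolution at range."""
    if disparity_px <= 0:
        raise ValueError("Disparity must be positive")
    return focal_length_px * baseline_m / disparity_px

# Example: 800 px focal length, 0.12 m baseline, 16 px disparity -> 6.0 m depth
print(depth_from_disparity(16.0, 800.0, 0.12))
```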