In Q4 2025, we will release NODAR Cloud for customers who want to create 3D point clouds from their 2D image pairs. Until then, NODAR will process images for qualified customers. This may require a quote, depending on the level of effort. The more closely the customer's data matches the format specified below, the lower the price will be.

The processing options are shown in the following block diagram. The customer supplies:

  1. Left and right images
  2. Left and right intrinsic camera parameters
  3. A rough estimate of camera positions (extrinsic camera parameters), e.g., from a CAD model
  4. Object detection parameters (described below in griddetect_config.ini)

NODAR can then offer three levels of processing:

  1. Hammerhead. NODAR processes customer data and returns point clouds, rectified images, confidence maps, and depth maps.
  2. Hammerhead + GridDetect. NODAR processes customer data and returns the data products listed in #1, plus occupancy maps.
  3. Hammerhead + GridDetect + Analysis. NODAR processes customer data and returns the data products in #1 and #2, plus true positive and false positive (TP/FP) detection rates. This requires manual labeling of the images for objects of interest (e.g., bricks, lumber, tires, animals, humans, etc.).
graph TB
  A[Left & Right Images] --> NODAR
  B[Left & Right Camera Intrinsic Parameters] --> NODAR
  C[Camera Extrinsic Parameters from CAD Model] --> NODAR

  D[Colorized Point Clouds]
  E[Occupancy Maps]
  F[Rectified Images]
  G[Confidence Maps]
  H[Depth Maps]

  NODAR --> Hammerhead
  Hammerhead --> D --> Analysis
  Hammerhead --> F
  Hammerhead --> G
  Hammerhead --> H
  
  J[Object Detection Parameters] --> GridDetect
  D --> GridDetect
  GridDetect --> E --> Analysis

  F -- Labeling --> Analysis --> I[TP/FP Summary]

  classDef no-outline stroke:none,fill:none
  class A,B,C,NODAR,D,E,F,G,H,I,J no-outline
  

The steps to get your images processed by NODAR are shown in the following block diagram and explained below.

graph LR
  A[Step 0 <br/> Qualify]
  B[Step 1 <br/> Format Data]
  C[Step 2 <br/> Upload Data]
  D[Step 3 <br/> Email to Notify]
  A --> B --> C --> D

The data should be organized in the following directory structure:

customer_name/
├── topbot/
│   ├── 000000000.tiff
│   ├── 000000001.tiff
│   ├── 000000002.tiff
│   ├── ...
│   └── 000000123.tiff
├── griddetect_config.ini (optional for GridDetect processing)
└── rectification_config.ini
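
As a convenience, here is a minimal Python sketch of assembling and zipping this structure, following the naming rules described in the paragraph below. The captures/ source folder, the pair_files list, and the assumption that each file in topbot/ is one combined stereo frame are illustrative; only the directory layout, the nine-digit numbering, and the customer_name.zip name come from this document.

import shutil
from pathlib import Path

# Hypothetical input: one combined "topbot" TIFF per stereo pair, in capture order.
pair_files = sorted(Path("captures").glob("*.tiff"))

root = Path("customer_name")
(root / "topbot").mkdir(parents=True, exist_ok=True)

# Nine-digit, zero-padded filenames starting at 000000000.tiff
for i, src in enumerate(pair_files):
    shutil.copy(src, root / "topbot" / f"{i:09d}.tiff")

# rectification_config.ini (and optionally griddetect_config.ini) sit at the top level
shutil.copy("rectification_config.ini", root / "rectification_config.ini")

# Zip the whole directory into customer_name.zip
shutil.make_archive("customer_name", "zip", root_dir=".", base_dir=str(root))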

The directory should be zipped into customer_name.zip. The image files are numbered with nine-digit, zero-padded filenames, starting at 000000000.tiff. The configuration file, rectification_config.ini, is defined as follows (with example values):

# Enable rectification
#   0 = Don't rectify input images (use this when the input images are already rectified)
#   1 = Rectify images according to the parameters in this file
enable = 1

##### Camera 1 - "Left"
# Camera model
#   0 = OpenCV pinhole model, which is a variant of the Brown-Conrady lens
#       distortion model. Uses k1..k6 and p1..p2.
#       <https://docs.opencv.org/4.x/d9/d0c/group__calib3d.html#ga7dfb72c9cf9780a347fbe3d1c47e5d5a>
#       <https://docs.opencv.org/4.x/dc/dbb/tutorial_py_calibration.html>
#   1 = OpenCV fisheye model, which is a radial distortion model with
#       fisheye (equidistant) distortion. Uses k1..k4.
#       <https://docs.opencv.org/4.x/db/d58/group__calib3d__fisheye.html>
i1_model = 0
# Camera focal length in units of pixels
i1_fx = 5368.72291
i1_fy = 5368.72291
# Principal point in units of pixels
i1_cx = 1458.95296
i1_cy = 936.28799
# Lens distortion parameters
i1_k1 = -0.13332
i1_k2 = 0.98883
i1_k3 = -5.9473
i1_k4 = 0.0
i1_k5 = 0.0
i1_k6 = 0.0
i1_p1 = 0.001
i1_p2 = 0.00053

##### Camera 2 - "Right"
i2_model = 0
i2_fx = 5370.75916
i2_fy = 5370.75916
i2_cx = 1431.27415
i2_cy = 935.70973
i2_k1 = -0.13181
i2_k2 = 0.80715
i2_k3 = -3.7122
i2_k4 = 0.0
i2_k5 = 0.0
i2_k6 = 0.0
i2_p1 = -6e-05
i2_p2 = -0.00035

##### Extrinsic camera parameters
# Relative translation from left to right camera in **meters**
#   T1 = Tx = translation along the x-axis
#   T2 = Ty = translation along the y-axis
#   T3 = Tz = translation along the z-axis
# The left camera is at the origin (0,0,0)
# x-axis points from the left camera to the right camera
# y-axis points down from the left camera
# z-axis points along the line-of-sight of the left camera
T1 = 1.0
T2 = 0.0
T3 = 0.0
# Relative rotation from left to right camera in **degrees**
#   phi = (pitch) rotation around the x-axis
#   theta = (yaw) rotation around the y-axis
#   psi = (roll) rotation around the z-axis
# The order of the rotations applied to the right camera is
# Pitch then Yaw then Roll (R = Rz * Ry * Rx).
phi = 0.231
theta = -0.520
psi = -0.152
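
Because the camera models above follow OpenCV's conventions, these values map directly onto OpenCV's calibration structures. The following is a minimal sketch (not NODAR's implementation) of loading the example values into cv2.stereoRectify. The image size is illustrative, and the sketch assumes the config's R and T use the same first-to-second-camera convention as OpenCV; confirm the sign conventions against NODAR's documentation before relying on the output.

import numpy as np
import cv2
from scipy.spatial.transform import Rotation

# Left camera (i1_* values above).
# OpenCV's 8-coefficient distortion order is (k1, k2, p1, p2, k3, k4, k5, k6).
K1 = np.array([[5368.72291, 0.0, 1458.95296],
               [0.0, 5368.72291, 936.28799],
               [0.0, 0.0, 1.0]])
D1 = np.array([-0.13332, 0.98883, 0.001, 0.00053, -5.9473, 0.0, 0.0, 0.0])

# Right camera (i2_* values above).
K2 = np.array([[5370.75916, 0.0, 1431.27415],
               [0.0, 5370.75916, 935.70973],
               [0.0, 0.0, 1.0]])
D2 = np.array([-0.13181, 0.80715, -6e-05, -0.00035, -3.7122, 0.0, 0.0, 0.0])

# Extrinsics: translation in meters, rotation built as R = Rz * Ry * Rx from
# (phi, theta, psi) in degrees, per the comments above (scipy's extrinsic "xyz"
# sequence composes the matrices in exactly that order).
T = np.array([1.0, 0.0, 0.0])
R = Rotation.from_euler("xyz", [0.231, -0.520, -0.152], degrees=True).as_matrix()

image_size = (2880, 1860)  # (width, height); illustrative, not taken from the config

R1, R2, P1, P2, Q, roi1, roi2 = cv2.stereoRectify(K1, D1, K2, D2, image_size, R, T)
map1x, map1y = cv2.initUndistortRectifyMap(K1, D1, R1, P1, image_size, cv2.CV_32FC1)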

The coordinate system is drawn below.

[Figure: stereo camera coordinate system]

You can get the code to convert Euler angles to/from rotation matrices here.
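
For reference, here is a minimal NumPy sketch of that conversion under the convention stated in the config comments above (R = Rz * Ry * Rx, angles in degrees). It is illustrative only and should be checked against NODAR's reference code.

import numpy as np

def euler_to_matrix(phi_deg, theta_deg, psi_deg):
    """Build R = Rz(psi) @ Ry(theta) @ Rx(phi), with angles given in degrees."""
    phi, theta, psi = np.deg2rad([phi_deg, theta_deg, psi_deg])
    Rx = np.array([[1, 0, 0],
                   [0, np.cos(phi), -np.sin(phi)],
                   [0, np.sin(phi), np.cos(phi)]])
    Ry = np.array([[np.cos(theta), 0, np.sin(theta)],
                   [0, 1, 0],
                   [-np.sin(theta), 0, np.cos(theta)]])
    Rz = np.array([[np.cos(psi), -np.sin(psi), 0],
                   [np.sin(psi), np.cos(psi), 0],
                   [0, 0, 1]])
    return Rz @ Ry @ Rx

def matrix_to_euler(R):
    """Recover (phi, theta, psi) in degrees from R = Rz @ Ry @ Rx (non-degenerate case)."""
    theta = np.arcsin(-R[2, 0])
    phi = np.arctan2(R[2, 1], R[2, 2])
    psi = np.arctan2(R[1, 0], R[0, 0])
    return np.rad2deg([phi, theta, psi])

# Example with the values from rectification_config.ini above
R = euler_to_matrix(0.231, -0.520, -0.152)
print(matrix_to_euler(R))  # ~ [0.231, -0.520, -0.152]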