KingKong & OAK Usage and Programming

Work with OAK & libcamera

Use libcamera to initialize the camera

In this section, we will learn how to use libcamera (through the Picamera2 Python library) to initialize and configure the camera and to obtain image data.

from picamera2 import Picamera2

# Initialize libcamera via the Picamera2 library
picam2 = Picamera2()
# Set camera configuration: a 1920x1200 preview stream
picam2.configure(picam2.create_preview_configuration(main={"size": (1920, 1200)}))
# Turn on the camera
picam2.start()

Build OAK Pipeline

Next, we will create an OAK pipeline to connect the camera and the neural network model.

First, we need to specify the path to the neural network model, then create the pipeline and define the individual nodes and connections.

from pathlib import Path

import depthai as dai

# Neural network model path
parentDir = Path(__file__).parent
nnPath = str((parentDir / Path('./models/mobilenet-ssd_openvino_2021.4_8shave.blob')).resolve().absolute())

# Create pipeline
pipeline = dai.Pipeline()

# Define sources and outputs
# Create neural network node
nn = pipeline.create(dai.node.MobileNetDetectionNetwork)
# Create image input node (receives frames from the host)
xinFrame = pipeline.create(dai.node.XLinkIn)
# Create output node
nnOut = pipeline.create(dai.node.XLinkOut)

xinFrame.setStreamName("inFrame")
nnOut.setStreamName("nn")

# Properties
# Set the confidence threshold and neural network model path
nn.setConfidenceThreshold(0.5)
nn.setBlobPath(nnPath)
nn.input.setBlocking(False)

# Linking
xinFrame.out.link(nn.input)
nn.out.link(nnOut.input)

Use libcamera to obtain and display image data

Here we will get the image data from libcamera, then convert it to the size required by the OAK model and pass it into the OAK input pipeline.

frame = picam2.capture_array()

# Wrap the resized frame in a DepthAI ImgFrame and send it to the device
# (qIn is the device input queue, e.g. qIn = device.getInputQueue("inFrame"))
img = dai.ImgFrame()
img.setData(to_planar(frame, (300, 300)))
img.setWidth(300)
img.setHeight(300)
qIn.send(img)

After the OAK neural network finishes inference, the detection results are read from the OAK output queue.

# Try to fetch a detection result without blocking
# (qDet is the device output queue, e.g. qDet = device.getOutputQueue("nn"))
inDet = qDet.tryGet()

if inDet is not None:
    detections = inDet.detections

Finally, we draw the detection results on the image and display it.

# Picamera2 delivers RGB frames; convert to BGR for OpenCV display
frame = cv2.cvtColor(frame, cv2.COLOR_RGB2BGR)
for detection in detections:
    # Map the normalized <0..1> detection coordinates to pixel coordinates
    bbox = frameNorm(frame, (detection.xmin, detection.ymin, detection.xmax, detection.ymax))
    cv2.putText(frame, labelMap[detection.label], (bbox[0] + 10, bbox[1] + 20), cv2.FONT_HERSHEY_TRIPLEX, 0.5, 255)
    cv2.putText(frame, f"{int(detection.confidence * 100)}%", (bbox[0] + 10, bbox[1] + 40), cv2.FONT_HERSHEY_TRIPLEX, 0.5, 255)
    cv2.rectangle(frame, (bbox[0], bbox[1]), (bbox[2], bbox[3]), (255, 0, 0), 2)

# Print the OAK chip temperature
temp = device.getChipTemperature()
print("average: {}, css: {}, dss: {}, mss: {}, upa: {}".format(temp.average, temp.css, temp.dss, temp.mss, temp.upa))

# Show the frame
cv2.imshow(name, frame)
cv2.waitKey(1)
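The drawing loop above uses a `frameNorm` helper that is not defined in this section. Detections arrive with coordinates normalized to the <0..1> range, so a sketch (assuming a NumPy BGR frame) scales x values by the frame width, y values by the frame height, and clips values outside the valid range:

```python
import numpy as np

def frameNorm(frame: np.ndarray, bbox: tuple) -> np.ndarray:
    # frame.shape is (height, width, channels); x coordinates (even
    # indices in the bbox tuple) scale by width, y coordinates by height
    normVals = np.full(len(bbox), frame.shape[0])
    normVals[::2] = frame.shape[1]
    return (np.clip(np.array(bbox), 0, 1) * normVals).astype(int)
```

The clipping matters because MobileNet-SSD can report slightly negative coordinates for objects touching the frame edge, which would otherwise produce invalid rectangles.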

Flow Chart

OAK API Reference

For more detailed tutorials on OAK, please refer to: