OAK SR
Introduction
This script builds a depthai-based Pipeline that handles camera input and object-detection neural-network inference. Based on the arguments it receives, it configures the camera(s) and the neural-network model and wires them into a streaming pipeline.
Source code
create_pipeline_for_sr.py
# coding=utf-8
from typing import Union

import depthai as dai


def create_pipeline(**kwargs):
    model_data = kwargs.get("model_data")
    config_data = kwargs.get("config_data")
    nn_config = config_data.nn_config
    color = kwargs.get("color", False)
    res = (
        kwargs.get("color_res", dai.ColorCameraProperties.SensorResolution.THE_720_P)
        if color
        else kwargs.get("mono_res", dai.MonoCameraProperties.SensorResolution.THE_720_P)
    )

    # Create pipeline
    pipeline = dai.Pipeline()

    # Define sources and outputs
    mono_right = pipeline.create(dai.node.ColorCamera) if color else pipeline.create(dai.node.MonoCamera)
    mono_right.setBoardSocket(dai.CameraBoardSocket.CAM_C)
    mono_right.setResolution(res)
    mono_right.setFps(kwargs.get("fps", 30))

    detection_network = (
        create_stereo(pipeline, res=res, mono_right=mono_right, **kwargs)
        if kwargs.get("spatial", False)
        else pipeline.create(dai.node.YoloDetectionNetwork)
    )

    xout_image = pipeline.create(dai.node.XLinkOut)
    nn_out = pipeline.create(dai.node.XLinkOut)
    xout_image.setStreamName("image")
    nn_out.setStreamName("nn")

    if not color:
        # Mono frames need resizing/format conversion before they reach the NN
        image_manip = pipeline.createImageManip()
        image_manip.initialConfig.setResize(nn_config.nn_width, nn_config.nn_height)
        image_manip.initialConfig.setKeepAspectRatio(not kwargs.get("fullFov", False))
        image_manip.initialConfig.setFrameType(dai.RawImgFrame.Type.BGR888p)
    else:
        image_manip = None

    # Network specific settings
    detection_network.setConfidenceThreshold(nn_config.NN_specific_metadata.confidence_threshold)
    detection_network.setNumClasses(nn_config.NN_specific_metadata.classes)
    detection_network.setCoordinateSize(nn_config.NN_specific_metadata.coordinates)
    detection_network.setAnchors(nn_config.NN_specific_metadata.anchors)
    detection_network.setAnchorMasks(nn_config.NN_specific_metadata.anchor_masks)
    detection_network.setIouThreshold(nn_config.NN_specific_metadata.iou_threshold)
    detection_network.setBlob(model_data)
    # detection_network.setNumInferenceThreads(2)
    detection_network.input.setBlocking(False)
    detection_network.input.setQueueSize(1)

    # Linking
    if color:
        mono_right.preview.link(detection_network.input)
    else:
        mono_right.out.link(image_manip.inputImage)
        image_manip.out.link(detection_network.input)

    if kwargs.get("syncNN", False):
        detection_network.passthrough.link(xout_image.input)
    elif color and kwargs.get("high_res", False):
        mono_right.video.link(xout_image.input)
    elif color:
        mono_right.preview.link(xout_image.input)
    else:
        mono_right.out.link(xout_image.input)

    detection_network.out.link(nn_out.input)

    return pipeline


def create_stereo(pipeline, **kwargs):
    res = kwargs.get("res")
    mono_right = kwargs.get("mono_right")

    # Left camera of the stereo pair (the right camera is passed in by the caller)
    mono_left = (
        pipeline.create(dai.node.ColorCamera) if kwargs.get("color", False) else pipeline.create(dai.node.MonoCamera)
    )
    mono_left.setBoardSocket(dai.CameraBoardSocket.CAM_B)
    mono_left.setResolution(res)
    mono_left.setFps(kwargs.get("fps", 30))

    # Stereo depth, aligned to the right camera (CAM_C)
    stereo = pipeline.createStereoDepth()
    stereo.initialConfig.setConfidenceThreshold(245)
    stereo.setLeftRightCheck(kwargs.get("lr_check", False))
    stereo.setExtendedDisparity(kwargs.get("extended_disparity", False))
    stereo.setSubpixel(kwargs.get("subpixel", False))
    stereo.setDepthAlign(dai.CameraBoardSocket.CAM_C)

    # Spatial detection network: fuses YOLO detections with depth
    detection_network: Union[dai.node.NeuralNetwork, dai.node.YoloSpatialDetectionNetwork] = pipeline.create(
        dai.node.YoloSpatialDetectionNetwork
    )
    detection_network.setDepthLowerThreshold(100)  # mm
    detection_network.setDepthUpperThreshold(10_000)  # mm
    detection_network.setBoundingBoxScaleFactor(0.3)

    # Linking
    mono_left.out.link(stereo.left)
    mono_right.out.link(stereo.right)
    stereo.depth.link(detection_network.inputDepth)

    return detection_network
Usage
Call create_pipeline to build a dai.Pipeline object. The function accepts the following keyword arguments (a depth-enabled example follows the list):

- model_data: path to the neural-network model blob; passed straight to setBlob.
- config_data: the parsed model configuration; the script reads config_data.nn_config (input size and the NN_specific_metadata fields).
- color: boolean, whether to use the color camera. Defaults to False.
- color_res: resolution of the color camera. Defaults to dai.ColorCameraProperties.SensorResolution.THE_720_P.
- mono_res: resolution of the mono camera. Defaults to dai.MonoCameraProperties.SensorResolution.THE_720_P.
- fps: camera frame rate. Defaults to 30.
- spatial: boolean, whether to use the spatial object-detection network (adds stereo depth). Defaults to False.
- fullFov: boolean, whether to use the full field of view (do not preserve aspect ratio when resizing for the NN). Defaults to False.
- syncNN: boolean, whether to output the NN passthrough frame so the image stream is synchronized with the detections. Defaults to False.
- high_res: boolean, whether to output the high-resolution video stream (color only). Defaults to False.
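When spatial=True, create_stereo additionally reads lr_check, extended_disparity and subpixel from the same keyword arguments. A depth-enabled call on the mono cameras might look like this (a minimal sketch; the values are only illustrative):

pipeline = create_pipeline(
    model_data=model_data,    # model blob, as above
    config_data=config_data,  # parsed configuration object, as above
    color=False,              # use the mono cameras
    spatial=True,             # adds StereoDepth + YoloSpatialDetectionNetwork
    lr_check=True,            # stereo left-right check
    subpixel=True,            # subpixel disparity mode
    fps=30,
)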
Example
Below is a usage example:

import depthai as dai

from create_pipeline_for_sr import create_pipeline

# Path to the model blob
model_data = "path/to/model.blob"

# create_pipeline reads config_data.nn_config... as attributes, so config_data
# must be a parsed configuration object rather than a raw file path.
# load_config is a hypothetical stand-in for your own config loader;
# the FAQ below shows how to build such an object by hand.
config_data = load_config("path/to/config.json")

# Create the pipeline
pipeline = create_pipeline(model_data=model_data, config_data=config_data, color=True)

# Open the device; the pipeline starts as soon as the device is created with it
with dai.Device(pipeline) as device:
    # Get the output queues
    image_queue = device.getOutputQueue(name="image", maxSize=4, blocking=False)
    nn_queue = device.getOutputQueue(name="nn", maxSize=4, blocking=False)

    while True:
        # Fetch data from the output queues
        image_data = image_queue.get()
        nn_data = nn_queue.get()
        # process the data ...
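In the loop above, image_data is a dai.ImgFrame and nn_data is a dai.ImgDetections message whose bounding boxes are normalized to the 0..1 range. A minimal way to consume them (getCvFrame() requires opencv-python) might be:

frame = image_data.getCvFrame()  # BGR numpy array
for det in nn_data.detections:
    # Normalized coordinates; scale by the frame size to get pixels
    print(det.label, round(det.confidence, 2), det.xmin, det.ymin, det.xmax, det.ymax)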
FAQ
Q: Which depth cameras does this script support?
A: The script uses the depthai library and supports the OAK-D, the OAK-D-SR, and OAK-FFC setups that use OV9*82 camera modules as the left and right cameras.
Q: How do I specify a custom model and configuration?
A: Pass them via the model_data and config_data arguments when calling create_pipeline.
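model_data is handed straight to detection_network.setBlob(), so the path to your custom .blob file can be passed as-is. config_data only needs to expose the attributes that create_pipeline reads (nn_config.nn_width, nn_config.nn_height and the NN_specific_metadata fields). A hand-built stand-in with placeholder values, which you could use instead of a loader, might look like this:

from types import SimpleNamespace

# Placeholder values only; use the metadata exported with your own model.
config_data = SimpleNamespace(
    nn_config=SimpleNamespace(
        nn_width=640,
        nn_height=640,
        NN_specific_metadata=SimpleNamespace(
            classes=80,
            coordinates=4,
            anchors=[],          # empty for anchor-free YOLO variants
            anchor_masks={},     # keys/values depend on the model
            iou_threshold=0.5,
            confidence_threshold=0.5,
        ),
    )
)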
Q: Are other types of neural-network models supported?
A: Currently the script supports the YoloDetectionNetwork and YoloSpatialDetectionNetwork node types.
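With spatial=True, the "nn" stream delivers dai.SpatialImgDetections instead of dai.ImgDetections, so each detection additionally carries an XYZ position in millimetres (aligned to CAM_C by setDepthAlign). Reusing nn_queue from the example above, a reading loop might look like:

nn_data = nn_queue.get()  # dai.SpatialImgDetections when spatial=True
for det in nn_data.detections:
    coords = det.spatialCoordinates  # dai.Point3f, values in millimetres
    print(f"label={det.label} conf={det.confidence:.2f} "
          f"XYZ=({coords.x:.0f}, {coords.y:.0f}, {coords.z:.0f}) mm")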