UseFrameOutputProps

interface UseFrameOutputProps extends Partial<FrameOutputOptions>

Properties

allowDeferredStart?

iOS
optional allowDeferredStart: boolean

Allow this output to start later in the capture pipeline startup process.

Enabling this lets the camera prioritize outputs needed for preview first, then start the CameraFrameOutput shortly afterwards.

This can improve startup behavior when preview responsiveness is more important than receiving frame-processor frames immediately.

Inherited from

FrameOutputOptions.allowDeferredStart
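
As a minimal sketch (assuming the `useFrameOutput` hook documented here), a configuration that prioritizes preview startup latency:

```typescript
// Sketch: let the preview start first; this output attaches shortly after.
const frameOutput = useFrameOutput({
  allowDeferredStart: true,
  onFrame(frame) {
    'worklet'
    // Frames may begin arriving slightly after the preview is visible.
    frame.dispose()
  }
})
```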


dropFramesWhileBusy?

optional dropFramesWhileBusy: boolean

Whether to drop new Frames when they arrive while the Frame Processor is still executing.

  • If set to true, the CameraFrameOutput will automatically drop any Frames that arrive while your Frame Processor is still executing to avoid exhausting resources, at the risk of losing information since Frames may be dropped.
  • If set to false, the CameraFrameOutput will queue up any Frames that arrive while your Frame Processor is still executing and immediately call it once it is free again, at the risk of exhausting resources and growing RAM.

Default

true

Inherited from

FrameOutputOptions.dropFramesWhileBusy
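
For example, a sketch (using the `useFrameOutput` hook from this reference) of a queueing configuration for workloads where processing every Frame matters more than latency:

```typescript
// Sketch: queue Frames instead of dropping them. Only safe if the
// Frame Processor disposes each Frame quickly - otherwise RAM grows.
const frameOutput = useFrameOutput({
  dropFramesWhileBusy: false, // default is true
  onFrame(frame) {
    'worklet'
    // Process every Frame, including those that arrived while busy.
    frame.dispose()
  }
})
```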


enableCameraMatrixDelivery?

iOS
optional enableCameraMatrixDelivery: boolean

Whether the CameraFrameOutput attaches a Camera Intrinsic Matrix to the Frames it produces.

Intrinsic Matrices are only supported if video stabilization is 'off'.

See

Frame.cameraIntrinsicMatrix

Default

false

Inherited from

FrameOutputOptions.enableCameraMatrixDelivery
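
A minimal sketch (iOS only, assuming video stabilization is 'off') of reading the intrinsic matrix via the `Frame.cameraIntrinsicMatrix` property referenced above:

```typescript
// Sketch: deliver camera intrinsics alongside each Frame.
const frameOutput = useFrameOutput({
  enableCameraMatrixDelivery: true,
  onFrame(frame) {
    'worklet'
    const intrinsics = frame.cameraIntrinsicMatrix
    // e.g. use focal length / principal point for AR or undistortion
    frame.dispose()
  }
})
```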


enablePhysicalBufferRotation?

optional enablePhysicalBufferRotation: boolean

Enable (or disable) physical buffer rotation.

Setting enablePhysicalBufferRotation to true introduces processing overhead.

Default

false

Inherited from

FrameOutputOptions.enablePhysicalBufferRotation


enablePreviewSizedOutputBuffers?

optional enablePreviewSizedOutputBuffers: boolean

Deliver smaller, preview-sized output buffers for Frame Processing.

This is useful for ML and computer vision workloads where full-resolution buffers are unnecessary and would only increase memory bandwidth and processing costs.

Other camera outputs (for example CameraVideoOutput) keep using the selected full-resolution CameraFormat.

Default

false

Inherited from

FrameOutputOptions.enablePreviewSizedOutputBuffers
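
As a sketch (assuming the `useFrameOutput` hook documented here), a typical ML configuration; any concurrent video recording via CameraVideoOutput still uses the full-resolution CameraFormat:

```typescript
// Sketch: smaller, preview-sized buffers for a computer-vision workload.
const frameOutput = useFrameOutput({
  enablePreviewSizedOutputBuffers: true,
  onFrame(frame) {
    'worklet'
    // Run detection on the smaller buffer - less memory bandwidth.
    frame.dispose()
  }
})
```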


onFrame()?

optional onFrame: (frame: Frame) => void

A callback that will be called for every Frame the Camera sees.

This must be a synchronous function, like a Worklet.

The Frame must be disposed as soon as it is no longer needed to avoid stalling the Camera pipeline.

Worklet

Example

const frameOutput = useFrameOutput({
  onFrame(frame) {
    'worklet'
    // some frame processing
    frame.dispose()
  }
})

onFrameDropped()?

optional onFrameDropped: (reason: FrameDroppedReason) => void

A callback that will be called every time the Camera pipeline has to drop a Frame.

If FrameDroppedReason is 'out-of-buffers', a Frame was dropped because the onFrame(...) callback has been running longer than one frame interval.

If your Frame Processor drops a lot of Frames you should speed it up - for example by enabling enablePreviewSizedOutputBuffers to process smaller buffers, by using the 'native' pixelFormat to avoid conversions, or by disposing each Frame as soon as it is no longer needed.
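
A minimal sketch (assuming the `useFrameOutput` hook and the 'out-of-buffers' reason documented here) of monitoring drops to spot a too-slow Frame Processor:

```typescript
// Sketch: log dropped Frames so slow processing is visible during development.
const frameOutput = useFrameOutput({
  onFrame(frame) {
    'worklet'
    frame.dispose()
  },
  onFrameDropped(reason) {
    if (reason === 'out-of-buffers') {
      console.warn('Frame dropped - onFrame() ran longer than one frame interval!')
    }
  }
})
```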


pixelFormat?

optional pixelFormat: TargetVideoPixelFormat

Sets the TargetVideoPixelFormat of the CameraFrameOutput.

Discussion

It is recommended to use 'native' if possible, as this will use a zero-copy GPU-only path. Other formats almost always require conversion at some point, especially on Android.

If you need CPU-access to pixels, use 'yuv' instead of 'rgb' as a next best alternative, as 'rgb' uses ~2.6x more bandwidth than 'yuv' and requires additional conversions as it is not a Camera-native format.

Only use 'rgb' if you really need to stream Frames in an RGB format.

It is recommended to use 'native' and design your Frame Processing pipeline to be fully GPU-based, such as performing ML model processing on the GPU/NPU and rendering via Metal/Vulkan/OpenGL by importing the Frame as an external sampler/texture (or via Skia/WebGPU which use NativeBuffer zero-copy APIs), as the Frame's data will already be on the GPU then. If you use a non-'native' pixelFormat in a GPU pipeline, your pipeline will be noticeably slower as CPU <-> GPU downloads/uploads will be performed on every frame.
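
The recommendations above can be sketched as follows (assuming the `useFrameOutput` hook documented here):

```typescript
// Sketch: 'native' for GPU-only pipelines - a zero-copy path.
// Use 'yuv' only when CPU pixel access is required; avoid 'rgb'
// (~2.6x the bandwidth of 'yuv', plus extra conversions).
const gpuOutput = useFrameOutput({
  pixelFormat: 'native',
  onFrame(frame) {
    'worklet'
    // Import the Frame as an external texture (Metal/Vulkan/OpenGL)
    // or via Skia/WebGPU - the data stays on the GPU.
    frame.dispose()
  }
})
```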

Inherited from

FrameOutputOptions.pixelFormat