useFrameOutput
function useFrameOutput(__namedParameters: UseFrameOutputProps): CameraFrameOutput

Use a CameraFrameOutput.
The onFrame(...) callback will be called for every Frame the Camera sees. It is a synchronous JS function running on the CameraFrameOutput's thread, also known as a "worklet".
Note
useFrameOutput(...) requires react-native-worklets to be installed.
Discussion
You must dispose the Frame after your Frame Processor has finished processing, otherwise subsequent Frames may be dropped (see onFrameDropped(...)).
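The disposal requirement can be sketched as a try/finally pattern, so the Frame is released even if processing throws. The Frame interface below is a hypothetical minimal stand-in for illustration; the real type comes from the Camera library:

```typescript
// Hypothetical minimal Frame shape for illustration only; the real
// Frame interface is provided by the Camera library.
interface Frame {
  dispose(): void
}

// Sketch: always dispose the Frame, even when processing throws -
// otherwise subsequent Frames could be dropped.
function withFrame(frame: Frame, process: (f: Frame) => void): void {
  try {
    process(frame)
  } finally {
    frame.dispose()
  }
}
```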
Discussion
Choosing an appropriate pixelFormat depends on your Frame Processor's usage. While the most commonly used format in visual recognition models is 'rgb', it is far from the most efficient format for a Camera pipeline, as it requires an additional conversion and uses ~2.6x more bandwidth than 'yuv'.
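The ~2.6x figure can be derived from per-pixel sizes, assuming 4:2:0-subsampled 8-bit YUV and 4-byte RGBA buffers (a common layout; actual strides and plane padding vary by device):

```typescript
// YUV 4:2:0: full-resolution Y plane plus quarter-resolution U and V
// planes -> 1 + 0.25 + 0.25 = 1.5 bytes per pixel.
const yuv420BytesPerPixel = 1 + 0.25 + 0.25

// RGBA8888: 4 bytes per pixel.
const rgbaBytesPerPixel = 4

// Bandwidth ratio of an RGB(A) stream over a YUV 4:2:0 stream.
const ratio = rgbaBytesPerPixel / yuv420BytesPerPixel
console.log(ratio.toFixed(2)) // ~2.67
```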
If you render to native Surfaces (e.g. via GPU pipelines or Media Encoders), you may also be able to use 'native', which chooses whatever the currently selected CameraFormat's nativePixelFormat is and requires zero conversions. Use 'native' with caution, as your CameraFormat's nativePixelFormat might also be a RAW format like 'raw-bayer-packed96-12-bit', or a vendor-specific private format ('private').
Examples:
- MLKit natively supports YUV, so streaming in 'yuv' is most efficient.
- OpenCV natively supports YUV, so streaming in 'yuv' is most efficient.
- LiteRT supports YUV, but converts to RGB internally, so streaming in 'rgb' directly is more efficient as the conversion is handled in the Camera pipeline.
- react-native-skia supports both YUV and RGB, so streaming in 'yuv' is more efficient.
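The rules of thumb above can be collapsed into a small helper. This is illustrative only: the consumer names and the PixelFormat union below are taken from this page's discussion, not from the library's actual types:

```typescript
// Pixel formats as described in the discussion above (not the
// library's real type definition).
type PixelFormat = 'yuv' | 'rgb' | 'native'

// Pick the stream format based on what the consumer natively accepts,
// following the examples above.
function preferredPixelFormat(
  consumer: 'mlkit' | 'opencv' | 'litert' | 'skia'
): PixelFormat {
  switch (consumer) {
    case 'litert':
      // LiteRT converts to RGB internally, so let the Camera
      // pipeline do the conversion and stream 'rgb' directly.
      return 'rgb'
    default:
      // MLKit, OpenCV and react-native-skia consume YUV directly.
      return 'yuv'
  }
}
```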
Example

```typescript
const frameOutput = useFrameOutput({
  onFrame(frame) {
    'worklet'
    // some frame processing
    frame.dispose()
  }
})
```