Media summit in Prague 2017


Meeting notes

CEC status

The latest CEC status is always available here:

Explicit synchronization

In-fences are easy; there are no major issues in the RFC.

Out-fences, however, are subject to buffer reordering issues. A new event has been created to report buffer order to userspace. Fences are created at QBUF time, but can't be used by userspace until an event reports which buffer will be available next.

Not all drivers perform reordering. As an optimization, a capability flag could tell userspace that buffers are always ordered and that events are not needed. The capability flag should likely be exposed at/after setting the format, as out-of-order delivery depends on the video format.

Reordering is mostly useful for codecs, but can also occur when drivers recycle buffers internally after capture errors. While recycling is allowed by V4L2, we could forbid it when fences are used. In that case all buffers would be returned to userspace in order, and errors would be reported with the error flag set.

We may need to prevent capture queues from reordering buffers as well; basically, if in-fences are used, we don't let the buffers reorder.

So when there is no reordering we can return the buffer's fence at QBUF time, with no need for the OUT_FENCE event in these cases.

If the user doesn't pick up the outstanding OUT_FENCE event before DQBUF, just set the fence to -1 at DQBUF time.

Discussion on whether timestamps are enough for audio/video synchronization, and what kind of timestamps to use. Wall clock timestamps are not usable for synchronization; monotonic is the way to go. Converting device timestamps to monotonic in the kernel can be troublesome.
Timestamping with fences: timestamps related to buffers are only available at DQBUF time (unless SoF events are being used, and only some hardware supports them). This is too late for some use cases.

Capture to networking (a.k.a. partial fences): we need to move from a frame-based API to a slice-based API. The problem isn't limited to fences; the whole V4L2 API is based on frames. Latency could be lowered this way, but it isn't clear whether frame-based handling is the main source of latency in any use case. To support very low latency (mostly for professional video use cases) we would need to redesign all buffer handling in V4L2. We'll postpone this for now until someone comes up with a convincing use case and enough resources to implement it.

Subsystem Maintenance

Media subsystem documentation

Request API

    Laurent: check the suitability of DRM tooling for V4L2
    Mauro: check whether patchwork support can be improved.
    Hans: tentatively (not this year) convert the old videobuf documentation to videobuf2.
    Alexandre: MC Request API (REQUEST_CMD), with request components through V4L2 devices, request creation and queueing through MC
    Hans: control framework API: sync with Alexandre by 2nd week of November at the latest.
    Laurent: find a video of a corresponding developer workstation security presentation given at ELC-E

GPG key fingerprints