Hardware Accelerated QtMultimedia Backend for Raspberry Pi and OpenGL Shaders on Video
EDIT: The software described in this post only supports Qt 5.
In some previous posts I developed a custom QML component that renders video in a QML scene using hardware-accelerated decoding and rendering, without passing buffers through the ARM CPU. This resulted in good performance on the Raspberry Pi, even with 1080p high-profile h264 videos.
Many bugs still need to be fixed and the code could use some refactoring, but it shows that the approach is feasible and works well. So I decided to move on to the next step: modifying Qt so that the “standard” QtMultimedia module can use the same decoding/rendering implementation. This integrates better with Qt and allows users to simply recompile an application without changing anything in its implementation.
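To make the point concrete, this is roughly what an unmodified application looks like: a plain QML scene using the standard MediaPlayer and VideoOutput elements, no custom component involved. With the accelerated backend installed, the same code should get hardware decoding transparently (the video path below is just a placeholder, not a file from the project):

```qml
import QtQuick 2.0
import QtMultimedia 5.0

Rectangle {
    width: 1280; height: 720
    color: "black"

    // Standard QtMultimedia element: nothing backend-specific here.
    MediaPlayer {
        id: player
        source: "file:///home/pi/test_1080p.mp4"  // hypothetical path
        autoPlay: true
    }

    // Standard video sink; the accelerated backend renders into it.
    VideoOutput {
        anchors.fill: parent
        source: player
    }
}
```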
The QtMultimedia module uses gstreamer on Linux to provide multimedia capabilities: gstreamer is unfortunately not hardware accelerated on the Pi unless you use something like gst-omx (which never worked properly for me).
Therefore, I started to look into the QtMultimedia module sources in Qt 5 and found out, as I was hoping, that the Qt developers have done, as usual, a very good job designing it, providing the classic plugin structure for multimedia backends as well. Unfortunately, also as usual, there is not much documentation on how to implement a new backend, but it is not that difficult to work out by looking at the existing implementations.
In the end, I came up with the following structure: a new QtMultimedia backend providing minimal MediaPlayer and VideoOutput functionality, built on a “library version” of the PiOmxTextures sample code, which in turn uses a “bundled” version of omxplayer with the OpenMAX texture-render component as the video sink.
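In Qt 5, a multimedia backend is an ordinary Qt plugin that QtMultimedia selects through its JSON metadata. As a rough sketch of what the descriptor of such a mediaservice plugin looks like (the key name here is my own placeholder, not necessarily what the actual plugin uses):

```json
{
    "Keys": ["openmaxil"],
    "Services": ["org.qt-project.qt.mediaplayer"]
}
```

The service identifier is the one MediaPlayer requests at runtime; Qt’s plugin loader matches it against the descriptors of the backends it finds in the mediaservice plugin directory, which is why no change to the QtMultimedia module itself is needed.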
As said, the Qt developers have done a good job: I didn’t have to change anything in the QtMultimedia module; the entire implementation lives in the plugin.
The result is pretty good: I don’t see many differences from the previous custom QML component (the decoding and rendering code is the same, and the QML component is implemented on exactly the same principle, so nothing has really changed).
I’m only beginning to experiment with this and have just tried a couple of things. In the video you can see the “standard” qmlvideo and qmlvideofx examples provided with the Qt sources.
Code is available on GitHub here: https://github.com/carlonluca/pot.
Have fun! 😉