# QML Components for Video Decoding and Rendering POC Code Available

As requested, I'm sharing the sources behind the demo videos I posted recently. I tested these components with a few videos and they seem to work reasonably well, even for 1080p h264 high profile with 5.1 audio. The current implementation uses a player class that decodes the data and a surface class that renders it. Rendering the same video on multiple surfaces also seems to work.

Beware that the code is not complete; it is only a proof of concept showing how this can be implemented. If you need to use it in production code, you'll have to put quite a bit of work into it. There are many TODOs left and no testing has been run on the classes. The cleanup code must be completely rewritten, and only the pause/resume/stop commands are implemented at the moment. Also consider going through the relevant code looking for leaks: I didn't pay much attention to them while implementing because my plan was to refactor later, sorry.

Only 1080p resolution is currently supported; I never even tried anything different, so you'll probably have to look around and see where I hardcoded those values (I was in a hurry :-)).
There are many unused classes in the code; I left those there only because they might be useful for new implementations.

I started working on other things recently, so I really have very little time to spend on this. But I see that many people are interested, so I decided that incomplete code is better than no code. Also, I have to say I have no practical need for these components; I only worked on this as a challenge in my spare time. Now that there is no challenge anymore, I have to admit I lost some interest and I'm looking for a new one 😀

This is the github URL of the repo (PiOmxTextures is the project directory): https://github.com/carlonluca/pi.

The current implementation of the OMX_MediaProcessor class uses the components implemented in the omxplayer code, with modifications to some of them. The modified sources are placed in the omxplayer_lib directory of the project: I chose this architecture to make it relatively simple to merge future changes from the omxplayer sources.

## How to build

To build the project, you'll need a build of the Qt libraries, version 5.0.0 at least. Instructions on how to build Qt can be found around the web; I also wrote a quick article on that if you need it (this is the updated version for 5.0.1).

Once you have your Qt build and Qt Creator set up, you can open the .pro file. You should also have the Raspberry Pi sysroot somewhere on your system, known to Qt Creator. The project has the same dependencies as omxplayer, so you need those as well. I tested this only against a specific build of ffmpeg, which is the one omxplayer was using when I last merged: to compile it you can use the compile_ffmpeg.sh script, which is included in the source tree. Just running it and passing the number of compilation threads to use should be sufficient:

```
git clone https://github.com/carlonluca/pi
cd pi/PiOmxTextures/tools
./compile_ffmpeg.sh n
cd ../../
mkdir PiOmxTextures-build-rasp-release
cd PiOmxTextures-build-rasp-release
path_to_qmake/qmake "DEFINES+=CONFIG_APP" ../PiOmxTextures
make -jn
```

Note that the sample application renders a file whose path is hardcoded in the QML file. Change that if you want to test the sample code.

As I said, this is just a proof of concept and I do not have much time to maintain it. If you want to help fix the code and merge new changes, open a pull request!
Hope this code might be of help to other open source projects! Feel free to leave a comment if you have any! Bye!

## 25 thoughts on “QML Components for Video Decoding and Rendering POC Code Available”

1. Thanks Luca for your pointers.
The following settings helped resolve the color depth issue in QML:

```cpp
// "window" is the application's QQuickWindow instance
// (format()/setFormat() are instance methods inherited from QWindow)
QSurfaceFormat curSurface = window->format();
curSurface.setSamples(24);
curSurface.setRedBufferSize(8);
curSurface.setGreenBufferSize(8);
curSurface.setBlueBufferSize(8);
window->setFormat(curSurface);
```

Now, how do I optimize rendering performance? I am seeing somewhat jittery video when rendering in QML compared to smooth video when playing directly in omxplayer. Any idea where to look for the source of the issue?

Please note we are getting this performance issue with the default 16-bit rendering as well.

2. Images are also losing colors in gradient areas when rendered directly in QML. Any idea what is going wrong with QML?

3. This comment has been removed by the author.

4. Yes.
1. I tried both fbset -depth 24 and 32.
2. In addition, I also confirmed hardcoding 8/8/8/8 in libqeglfs.so where it is enumerated from the driver.
Do I need to retest both of the above with 24-bit only?

5. I am seeing a video quality issue (it looks like a color quality problem) when comparing playback between omxplayer directly and the OMX component within QML that Luca published. Is this because a color conversion (YUV->RGB) happens between the OMX.egl.video.renderer used in the OMX pipeline and the QML domain? Could this problem be resolved by exploring direct YUV rendering support in QML? If someone has explored this part, I am looking for help/suggestions to resolve the issue.

6. What you report seems to be a few days old. Please refer to GitHub.

```
cd ffmpeg
git clone git://source.ffmpeg.org/ffmpeg.git ffmpeg_src
git checkout master
cd ffmpeg_src
```

should be

```
cd ffmpeg
git clone git://source.ffmpeg.org/ffmpeg.git ffmpeg_src
cd ffmpeg_src
git checkout master
```

otherwise it operates on the Pi git repo, not the FFmpeg one!

8. Hi. Thank you very much for your work!

First of all: I tried revision b30c3acffdb148e13897123640a41acece5ab6cf, which I suppose is the one you're working on. I got no build issues with that. Consider that in the last few weeks I merged recent sources of omxplayer, which are based on a more recent version of ffmpeg. Maybe you're building against a different version of ffmpeg. The compile_ffmpeg.sh script now clones the correct revision of ffmpeg and builds it.
If I'm not mistaken, CodecID was removed in a recent version of ffmpeg.

As for the lib, I started to do that work a week ago and it is now available in git, as I need it for another project. Anyway, it is not complete, it only exports the OMX_MediaProcessor class. If you want to merge your changes it will surely be very useful!

For the resource leaks, I have to say I never tested with more than a couple of media files 🙂 Any work you do on that is surely useful, and I don't plan to work on it soon.
Consider that there are leaks I intentionally left behind for testing, and I added TODO comments. You might want to have a look at those.

9. Hi,
Awesome effort… I've managed to use your stuff and turn it into a shared library for use in another project. But I cloned your repo the other day, tried compiling, and a few things broke. I'm using FFmpeg and the buildFFMpeg script, but getting:

```
In file included from ../PiOmxTextures/omxplayer_lib/DllAvFormat.h:27:0,
                 from ../PiOmxTextures/omxplayer_lib/OMXCore.h:51,
../PiOmxTextures/omxplayer_lib/DllAvCodec.h:75:46: error: use of enum 'CodecID' without previous declaration
```

Any ideas where to start looking? Cheers

Also, are you interested in the plugin stuff I've done? I also added an autoplay property and am currently tracking down a resource leak: if you change media 97 times, it deadlocks (consistently, and a reboot is the only way to clear it!). Once I've got these sorted, I'll send you a pull request if you want.
Matthew

10. Ignoring the audio track is not supported. You'll have to implement it yourself.

OMXImage has been left unmaintained since I started working on video, so you'll have to debug it to see what is wrong.

11. Anonymous says:

I would also like to know the easiest way to disable audio decoding. Could you explain a little better?

I was not able to get OMXImage to function. I am not sure how I should use it, so I simply tried this:

```
OMXImage {
    source: "path/to/image.jpg"
}
```

The problem is that the app seems to go into an infinite loop. Here's the output: http://pastebin.com/3pBrC7cW

OMXVideoSurface and OMXMediaProcessor are working fine. Thank you for the efforts and the code.

12. I never worked on something like this, but it should be sufficient to just discard the audio packets coming from ffmpeg.

13. I've been playing with this, and while trying to disable the audio processing of a given media stream, I'm finding that both audio and video end up missing, although the application output states that the texture is being properly set up…

Any ideas on where to go to achieve this? I noticed there is a remark in the code about later improvements for runtime audio track selection; in this case it would be more a matter of not processing the track at all, without modifying the original media (which will still contain it).

14. Well, yes… provide that header 🙂 I think the package you need is libboost-dev, but libboost-all-dev should bring that in. So check that the file is in your sysroot.

15. Thanks for your excellent work on this subject! I keep finding myself reading articles on your blog whenever I have trouble…

I tried to download and run your example, but ran into a bit of trouble compiling it with Boost. I installed Boost using apt-get install libboost-all-dev, but I still receive boost/config.hpp not found errors.

Do you have any advice for getting this up and running?

16. Anonymous says: