QtCreator and kdesrc-build Integration

In KDE we have this awesome meta-build system called “kdesrc-build”. It allows developers to compile tens or even hundreds of KDE (and also external!) projects with only a few command line interactions. For a few years now, specifically since the migration from SVN to Git, it has been the de-facto standard way for many contributors to update and build the libraries on top of which they develop their applications. This post is about how to integrate it efficiently into a QtCreator workflow. (Footnote: this does not mean that you should use QtCreator, especially if you are happy with KDevelop or other IDEs; this post just explains how the interaction works nicely if you do use QtCreator.)

Before looking at QtCreator, let’s look a little bit deeper into kdesrc-build. I will not explain how to set it up, how to use it in detail, or how to configure it to use a self-built Qt. I simply assume that you have already configured kdesrc-build and that it works on your system.

In this case, you will have two important files:

  • ~/.kdesrc-buildrc : this is your main configuration file that defines where Qt is installed, where kdesrc-build shall check out source code, where build directories shall be created and where the final artifacts shall be installed.
  • ~/.config/kde-env-master.sh : this file provides a ready-to-use environment for starting applications in exactly the same setup as they are compiled with kdesrc-build.

Use QtCreator on the Same Source & Build Directories

Now let’s assume that you have a certain project checked out and compiled with kdesrc-build and then want to work on it with QtCreator.

QtCreator has the concept of configurable “Kits”. A Kit is always the combination of a Qt version, a compiler, a debugger, a CMake version, a target device (if you are not an embedded developer, this is always your desktop computer), and a few more things. The important concept here is that QtCreator assumes that you take all these decisions when creating a Kit; wherever possible, QtCreator will then enforce these settings and overwrite anything that looks different. Thus, for interacting with a kdesrc-build build and install directory, we have to create a Kit that is correctly configured for your specific kdesrc-build configuration.

Kit configuration window in QtCreator

Configure the environment

Until recently, QtCreator only supported configuring the build environment and used this also as the runtime environment when starting applications from QtCreator. In the version I am using (4.14.1), there is still only this one main environment; for later versions you might need to adapt the following accordingly.

Into this environment you have to set several environment variables, which I took from the generated kde-env-master.sh. Either copy the values yourself or just take the following snippet and adapt the KDEDIR and QTDIR variables.

KDEDIR=/opt/kde/install
QTDIR=/opt/qt5-build/qtbase
LD_LIBRARY_PATH=${QTDIR}/lib:${KDEDIR}/usr/lib:$KDEDIR/usr/lib/x86_64-linux-gnu/:${LD_LIBRARY_PATH}
PATH=${QTDIR}/bin:${KDEDIR}/usr/bin:${PATH}
QML2_IMPORT_PATH=${QTDIR}/qml:$KDEDIR/usr/lib/x86_64-linux-gnu/qml/:${QML2_IMPORT_PATH}
QML_IMPORT_PATH=${QML2_IMPORT_PATH}:${QML_IMPORT_PATH}
QT_PLUGIN_PATH=${QTDIR}/plugin:${QT_PLUGIN_PATH}:${KDEDIR}/usr/lib/x86_64-linux-gnu/plugins/
XDG_DATA_DIRS=${KDEDIR}/usr/share:${XDG_DATA_DIRS}

Another crucial point is to adapt the CMake configuration so that it finds all build artifacts from libraries that are built with kdesrc-build. For this, add one additional line to the Kit’s CMake configuration (again, just adapt the path):

CMAKE_PREFIX_PATH:STRING=/opt/kde/install/usr/;%{Qt:QT_INSTALL_PREFIX}

And, well, that’s it.

Now, you can simply:

  1. Open a CMake-based project in QtCreator (if you want to use QMake, you also have to configure the mkspec values, which I omitted above) from your <KDESRC-SRC-FOLDER>/src/ folder.
  2. Go to the “Projects” tab and add the “kdesrc-build” Kit for your project.
  3. Click the project button at the lower left of the main window (the one that looks like a computer) and select the “kdesrc-build” Kit.
  4. Press compile or run, and everything is done within the kdesrc-build setup.

Qt Based Journald API Abstraction (& yet another journald browser)

On modern Linux systems you will usually find systemd as the init system. Along with it comes journald as a logging backend with many nice and cool features (which I will not describe here; the Internet has answers for you 😉 ). But journald is also a really nice logging data sink on embedded devices with roughly the power of a smartphone or a Raspberry Pi.

When analyzing logs of embedded devices, you usually do not work on the device “directly”, meaning you do not use the tiny konsole application of your smartphone to browse through the logs. Instead, you either (1) grab the full log database from the device for offline analysis or (2) read the logs online via a network stream. Both are easily doable with journald. For the first use case you can simply copy the database from /var/log/journal (please remember to configure journald to use persistent logs!) and access it on your developer system via “journalctl -D <path>”, getting all the nice processing tooling from journalctl (journalctl is the default CLI front-end for journald). For the second variant, you can start the journal remote service on the target device and receive the online stream of log information on your host system for analysis.

For the second case there are a few GUI applications available that nicely solve this problem for you, e.g. qjournalctl (which parses the journalctl CLI input/output) or ksystemlog (yet with the focus on being a generic front-end for various log sinks). However, neither supports parsing non-system offline logs.

KJournald

This was the trigger for my new pet project “kjournald”, which is meant as a Qt-based model/view abstraction API on top of the C-style journald API. The goal is to provide an abstraction that allows easy creation of QtQuick applications with journald browsing functionality. I can think of some use cases, like a small Plasma KCM in the system settings or even something for Plasma Mobile, such that you have basic (or advanced, if you want 🙂 ) log browsing capabilities directly on your smartphone. Moreover, I imagine that it might be a possible replacement in existing places that also use journald data, by offering a small, self-contained and unit-tested library.

Journald-Browser

With the main goals of testing my own API and providing a reference usage of the library, a journald-browser application is also provided. At the moment, I do not see it as a stand-alone product but rather as a demonstration that a full journald browser can be built with a few hundred lines of QML and a little bit of C++ glue code.

Current state of the journald-browser: unit filter, priority filter, boot selection, rainbow coloring, basic highlight support; works with system logs and offline logs.

Current State

Everything is still in an early project state, yet all unit tests pass 🙂 This means the API is not stable yet and I know of many areas for improvement, especially in the browser application. But on the other hand, I am using it daily to analyze the journald logs that I get my hands on, and for me it is already quite helpful.

Since my focus originally was on offline logs, those are the type of logs for which support works quite well right now. But both online local journals (i.e. new log entries are appended while the log is open) as well as remote logs will follow soon.

Please feel free to pull it, patch it and provide features and bugfixes back!

RPi4 and Yocto Updates

I recently obtained a brand new Raspberry Pi 4 and took the free days around Christmas to play around with it a little bit. And I must say, I am very pleased with this device!

Raspberry Pi 4 B

The important updates for me, compared to the older Pis, are:

  • Two displays can be connected! Either two HDMI or one DSI plus one HDMI.
  • It has a VideoCore VI GPU (very different from the VideoCore IV in the RPi3), which is driven by the Mesa V3D driver.

My goal was to get a Yocto-built multi-display plasma-mobile environment onto the device. Except for two magic lines in the /boot/config.txt configuration that enabled multi-display output for me, it nearly worked out of the box.

RPi4 Multi-Display Setup

The important configuration change, compared to the default configuration as provided by meta-raspberrypi, consists of the following two lines that I had to add to the /boot/config.txt boot configuration:

dtoverlay=vc4-fkms-v3d
max_framebuffers=2

Without these lines, the second screen always displayed just the Raspberry’s rainbow boot screen but was never detected. I tested with both DSI+HDMI and HDMI+HDMI, and both screens were always correctly detected at boot with this configuration.

Running Qt on the Yocto-Built Image

With the above configuration added, I was able to run a simple multi-screen QtWayland compositor on the device. Note that I built Qt with

PACKAGECONFIG_append = " gbm kms eglfs"

and QtWayland with

PACKAGECONFIG_append_raspberrypi4 = " wayland-drm-egl-server-buffer"

With these options set and all requirements installed, the compositor runs via

export XDG_RUNTIME_DIR=/var/run/
export GALLIUM_HUD=fps # gives nice profiling information about fps
export QT_QPA_EGLFS_ALWAYS_SET_MODE=1
export QT_WAYLAND_CLIENT_BUFFER_INTEGRATION=linux-dmabuf-unstable-v1
qmlscene Compositor.qml -platform eglfs

It is important to note that qmlscene internally sets the Qt::AA_ShareOpenGLContexts attribute, which you have to set yourself when running a compositor from your own main file.
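
For reference, a minimal sketch of what such a custom main() could look like (the file name and loading mechanism here are just an illustration):

#include <QGuiApplication>
#include <QQmlApplicationEngine>
#include <QUrl>

int main(int argc, char *argv[])
{
    // qmlscene sets this attribute implicitly; with a custom main() it must be
    // set before the application object is created
    QCoreApplication::setAttribute(Qt::AA_ShareOpenGLContexts, true);

    QGuiApplication app(argc, argv);
    QQmlApplicationEngine engine(QUrl::fromLocalFile(QStringLiteral("Compositor.qml")));
    return app.exec();
}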

With this compositor running, I could run a simple spinning rectangle application via

export XDG_RUNTIME_DIR=/var/run/
qmlscene SpinningRectangle.qml -platform wayland-egl

Plasma Mobile

The final step though was to get our KDE Demo Setup running. Since there were no live conferences this year, some parts had become slightly outdated. So, this was a good opportunity to update our meta layers:

  • meta-kf5 is now updated to KDE Frameworks 5.77.0. Note that we also cleaned up the license statements a few months ago, which was only possible due to much better license information via the SPDX/REUSE conversion of frameworks.
  • meta-kde also gained an update to the latest Plasma release and to the latest KDE Applications release. The number of provided applications in addition to Plasma is still small, but I also used the opportunity to add some more KDE Edu applications (Blinken, Marble, Kanagram, KHangman, GCompris).

Final Result

Plasma mobile running with two screens \o/

PS: My whole test configuration is available in my (quick and dirty) umbrella test repository, in which I have all the used meta layers integrated as submodules.

Performance when using QPainter with QSceneGraph

When using a profiler to look into your programs, sometimes it feels like looking behind the stage of a magician and suddenly grasping the trick behind the magic… Quite recently, I had an application in front of me which demanded a surprising amount of CPU time. In a nutshell, this application has some heavy computational operations at its core and (primarily) produces a rectangular 2D output image, which is rendered with QPainter to display the results. This output is updated once every few milliseconds and is embedded inside a QtQuick window. The handover of the rendered QImage is done by a harmless-looking Q_PROPERTY.

So, I wondered: how big can the impact of handing over a QImage to the QSG renderer be? In particular, as we all know, copying a big chunk of memory is a CPU-expensive operation which should be avoided if possible. To get proper profiling results, I created a simple test application. This application just creates a QtQuick scene with a QQuickPaintedItem-derived render object, which updates its output every millisecond (and thus renders whenever the render loop iterates). I use a big output rectangle of 640×640, because I want to focus on the memory copying effect, which is more obvious with bigger outputs.
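
For illustration, a rough sketch of the kind of test item used here (not the exact benchmark code; class name and painting details are simplified):

#include <QPainter>
#include <QQuickPaintedItem>
#include <QTimer>

class OutputItem : public QQuickPaintedItem
{
    Q_OBJECT
public:
    explicit OutputItem(QQuickItem *parent = nullptr)
        : QQuickPaintedItem(parent)
    {
        // QQuickPaintedItem::Image is the default render target; switching to
        // setRenderTarget(QQuickPaintedItem::FramebufferObject) is the change
        // discussed below
        auto timer = new QTimer(this);
        connect(timer, &QTimer::timeout, this, [this]() { update(); });
        timer->start(1); // request a repaint roughly every millisecond
    }

    void paint(QPainter *painter) override
    {
        // stand-in for the real, computationally produced 2D output
        painter->fillRect(boundingRect(), m_toggle ? Qt::darkBlue : Qt::darkGreen);
        m_toggle = !m_toggle;
    }

private:
    bool m_toggle = false;
};
// made available to QML e.g. via qmlRegisterType<OutputItem>("Benchmark", 1, 0, "OutputItem")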

When using QQuickPaintedItem::Image as render target for the QQuickPaintedItem object, on my computer I can see a quite constant 30% CPU usage (one core) and the following flamegraph when looking into the process with Perf (sudo perf record --call-graph dwarf -p $(pidof qmlwithqpainter-experiment) -o perf-qimage.data) and visualizing the result with Hotspot:

However, when simply changing the render target to QQuickPaintedItem::FramebufferObject, the application’s CPU usage drops to about 11-12% (of one core) and I get the following result:

Actually, this change is to be expected! We get rid of a quite expensive copy operation that has to be done on every image update. Let’s look into the QQuickPaintedItem documentation for confirmation:

When the render target is a QImage, QPainter first renders into the image then the content is uploaded to the texture. When a QOpenGLFramebufferObject is used, QPainter paints directly onto the texture.

So, what is the story I want to tell? When facing performance problems inside an application, one can only guess until looking at the problem with a decent profiler (and Perf + Hotspot is an excellent combination for that!). And even then, you have to think about what your application is doing where much of the CPU time is lost, and ponder whether there are better code paths for your specific situation. In my example, the output still looks the same after the change, but note that I lost all of the fancy anti-aliasing of the QImage-based rendering, and resizing the output has now become a much more expensive operation.

Hence, for my scenario this change made sense, because the CPU usage drops from 30% to 11% and I do not need to support resizing operations. For other scenarios, this might be different.

REUSE Machine Readable License Information

Some weeks ago I wrote about SPDX identifiers and how they can be used to annotate source code files with machine-readable license information. In this post I now want to compile the things I learned after looking more deeply into this topic and how it might be applied to KDE.

SPDX identifiers are an important step towards allowing tools to automatically read and check license information. However, like most standards, the SPDX specification is quite general, for many people cumbersome to read, and allows many options for how to use the identifiers; as a developer, I just want a small howto that explains how I have to state my license information. Another point is that, in my experience, any source code annotation with machine-readable information is pointless unless you have a tool that automatically checks its correctness. Otherwise, there is a big burden on code reviews, which would have to check tiny syntactical requirements from a specification. If you look deeply into the license headers used in KDE (I did this), there is a shocking number of different license statements that often state exactly the same thing. This might be due to different formatting or typos, but also due to actual mistakes when trying to pick the correct license text for a use case, which then somehow got mixed up.

REUSE.software

While doing research on best practices for applying machine-readable license information, I was pointed to the REUSE.software initiative, which was started by the Free Software Foundation Europe (FSFE) to provide a set of recommendations that make licensing easier. What they provide is (in my opinion) a really good policy for how to uniformly state SPDX-based license headers in source files, how to put license texts into the repository, and a way to automatically check the syntactical correctness of the license statements with a small conformance testing tool.

I really like the simplicity of their approach, where I mean simplicity in the amount of documentation you have to read to understand how to do it correctly.
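
In practice, a REUSE-conformant header boils down to two machine-readable comment lines at the top of each source file, roughly like the following (the copyright holder here is just a placeholder):

// SPDX-FileCopyrightText: 2019 Jane Doe <jane.doe@example.org>
// SPDX-License-Identifier: LGPL-2.1-or-later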

Meanwhile in KF5…

As already said, I want to see machine-readable license information in KDE Frameworks in order to increase their quality and to make them easier to use outside of KDE. The first step before introducing any system for machine-readable identifiers is to understand what we have inside our repositories right now.

Disclaimer: I know that there are many license parsing tools out there in the wild and I know that several of them are even well established.

Yet, I looked into what we have inside our KF5 repositories and what has to be detected: most of our licenses are GPL*, LGPL*, BSD-*-Clause, MIT, or a GPL/LGPL variant with the “any later version accepted by the membership of KDE e.V. (or its successor approved by the membership of KDE e.V.), which shall act as a proxy […] of the license.” addition. After a lot of reasoning, I came to the conclusion that for the specific use case of detecting the license headers inside KDE projects (even if focused only on Frameworks right now) it makes most sense to have a small tool that only works for this use case. The biggest argument for me was that we need a way to deal with the many historic license statements from up to 20 years ago.

Thus, I started a small tool in a scratch repository, named it licensedigger, and began the adventure of parsing all license headers of KDE Frameworks. Of all source files, I am done with about 90% right now. Yet, I have neglected to look into KHTML, KJS, KDELibs4Support and the other porting-aid frameworks so far. Specifically for the Tier 1 and Tier 2 frameworks I am mostly done, and even beasts like KIO can be detected correctly right now. I am still working on increasing the number of headers that are detected.

The approach I took is the following:

  • For every combination of licenses there is one regular expression; it only has to be tolerant regarding whitespace and any “*” characters from comment decorations (a minimal sketch of this idea follows after the list).
  • For every license check there is a unit test consisting of a plaintext license header and an original source code file, which guarantees that the header is found.
  • Licenses are internally handled with SPDX markers.
  • For a new license, or a new header statement for an already contained license, the license name must be stated multiple times to ensure that copy-paste errors with licenses are minimized.
  • It is OK if the tool only detects ~95% of the license headers, as long as it clearly marks unknown headers; the remaining 2-3 files per repository can then be identified by hand.
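
As an illustration of that idea, a minimal sketch of such a check could look like the following (the function names and the heavily shortened license text are made up for this post; this is not licensedigger’s actual code):

#include <QRegularExpression>
#include <QString>

// strip the usual comment decoration and collapse whitespace, so that pure
// formatting differences do not matter for the comparison
QString normalizeHeader(const QString &header)
{
    QString text = header;
    text.remove(QLatin1Char('*'));
    return text.simplified();
}

// one canonical expression per license (heavily shortened here for brevity)
bool isLgpl21OrLater(const QString &header)
{
    static const QRegularExpression expression(QStringLiteral(
        "This library is free software.*GNU Lesser General Public License.*"
        "version 2\\.1 of the License.*any later version"));
    return expression.match(normalizeHeader(header)).hasMatch();
}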

At the moment, the tool can be run to provide a list of licenses for any folder structure; e.g. pointing it to “/opt/kde/src/frameworks/attica” or even to “/opt/kde/src/frameworks” will produce a long list of file names with their licenses. A next (yet simple) step will be to introduce a substitution mode that replaces the found headers with SPDX markers, and further to add the license files in a way that is compatible with REUSE.

Please note that there has not yet been a discussion on the KDE mailing list about whether this (i.e. the REUSE way) is the right way to go. I will send the respective mails soon. This post is mostly meant to provide a little bit of background before starting that discussion, so that I can keep the mails shorter.

The Road Towards KF6 & SPDX License Identifiers

Last weekend, I had the opportunity to join the planning sprint for KDE Frameworks 6 in Berlin. KF6 will be the next major release of the KDE Frameworks (a set of add-on libraries that make your life much easier when developing applications on top of Qt), which will be based on Qt6. There are several blogs out in the wild about the goals for this release. Mainly, we aim for the following:

  • Getting a better separation between logic and platform UI + backend, which will help much on non-Linux systems such as Android, macOS, and Windows.
  • Cleaning up dependencies and making it easier to use the existing Tier 3 frameworks. Note that the Framework libraries are organized in Tiers, which define a layer-based dependency tree. Tier 1 libraries may only depend on Qt; Tier 2 libraries may depend on Qt and Tier 1 libraries; and Tier 3 libraries may depend on Qt, Tier 1 and Tier 2 libraries; you see the problem with Tier 3 😉

For details about the framework splittings and cleanups I want to point to the excellent blog posts by David, Christoph 1 / 2 / 3, Kevin, Kai Uwe, Volker, and Nico. However, in this post I want to focus on one of my pet projects in the KF6 cleanup:

Software Package Data Exchange (SPDX)

With KF6, I want to see SPDX license identifiers introduced into KDE Frameworks in order to ease the re-use of the frameworks in other projects. This follows the same approach that, for example, the Linux kernel took over the last years.

The problem that the SPDX markers address is the following: when publishing source code under an open source license, each source code file shall explicitly state the license it is released under. The usual way this is done is that a developer copies a license header text from the KDE licensing policies wiki, from another source file, or from somewhere else on the internet, and puts it at the top of their newly created source code file. The result is that today we have many slightly different license headers all over our frameworks’ source files (even if they only differ in formatting). Yet, these small differences make it very hard to introduce automatic, static-analysis-style checks for source code licenses. The problem becomes even more urgent when one wants to check that a library, which consists of several source files with different licenses, contains only compatible licenses.

The SPDX headers solve this problem by introducing a standardized language that annotates every source code file with license information in the SPDX syntax. This syntax is rich enough to express all of our existing license information and it can also cover more complicated cases, e.g. dual-licensed source files. For example, an “LGPL 2.1 or any later version” license header of a source file looks as follows:

// SPDX-License-Identifier: LGPL-2.1-or-later
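
The same syntax scales to more complicated cases; a dual-licensed file, for example, can state both options in one expression (the license choice here is only an illustration):

// SPDX-License-Identifier: GPL-2.0-only OR GPL-3.0-only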

The full list of all existing SPDX markers is available in the SPDX license registry.

The first step now is to define how to handle the GPL and LGPL license headers that specifically mention KDE, as there is no direct equivalent in the SPDX registry. This is a question we are about to discuss with OSI. After deciding that, we have to discuss in the KDE community whether SPDX is the way to go (gladly, there has been no objection yet to my mail to the community list) and adapt our KDE licensing policy. And the final big step will then be to get the tooling ready for checking all existing license headers and replacing them (after review) with SPDX markers.

PS: Many thanks to MBition for hosting the KF6 sprint in their offices and to the KDE e.V. for the travel support!

 

Smart Pointers in Qt Projects

Actually, a smart pointer is quite simple: it is an object that manages another object by a certain strategy and cleans up memory when the managed object is not needed anymore. The most important types of smart pointers are:

  • A unique pointer models access to an object that is exclusively owned by someone. The object is destroyed and its memory is freed when the managing instance destroys the unique pointer. Typical examples are std::unique_ptr and QScopedPointer.
  • A shared pointer is a reference-counting pointer that models shared ownership of an object by several managing instances. If all managing instances release their share of the ownership, the managed object is automatically destroyed. Typical examples are std::shared_ptr and QSharedPointer.
  • A weak pointer is a pointer to an object that is managed by someone else. The important use case here is being able to ask whether the object is still alive and can be accessed. One example is std::weak_ptr, which can point to an object managed by a std::shared_ptr; it can be used to check whether that object still exists and to obtain a shared pointer for accessing it. Another example is QPointer, which is a different kind of weak pointer and can be used to check whether a QObject still exists before accessing it.
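
As a small illustration of the weak pointer idea, the following sketch (with Foo being a placeholder type) checks whether the observed objects are still alive before accessing them:

#include <QObject>
#include <QPointer>
#include <memory>

struct Foo { void doSomething() {} };

void observe(QObject *someQObject, const std::shared_ptr<Foo> &sharedFoo)
{
    QPointer<QObject> guard(someQObject);  // observes the QObject, does not own it
    if (guard) {                           // becomes null once the QObject is destroyed
        guard->setObjectName(QStringLiteral("still alive"));
    }

    std::weak_ptr<Foo> weak = sharedFoo;   // observes the shared_ptr-managed object
    if (auto locked = weak.lock()) {       // yields a shared_ptr if the object still exists
        locked->doSomething();
    }
}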

For all these pointers one should always keep one rule in mind: NEVER EVER destroy the managed object by hand, because the managed object must only be managed by the smart pointer. Otherwise, how could the smart pointer know whether the object can still be accessed?! E.g. the following code leads directly to a crash because of a double delete:

{
    auto foo = std::make_unique<Foo>();
    delete foo.get();
} // crash because of double delete when foo gets out of scope

This problem is obvious; now let’s look at the less obvious problems one might encounter when using smart pointers with Qt.

QObject Hierarchies

QObject instances and instances of QObject-derived classes can have a parent object set, which ensures that child objects are destroyed whenever the parent is destroyed. E.g., think about a QWidget-based dialog where all elements of the dialog have the QDialog as parent and get destroyed when the dialog is destroyed. However, when looking at smart pointers, there are two problems that we must consider:

1. Smart-pointer-managed objects must not have a QObject parent

It’s as simple as the paragraph’s headline: when you set a QObject parent on an object that is managed by a smart pointer, Qt’s cleanup mechanism destroys your precious object whenever the parent is destroyed. You might be lucky and always destroy your smart pointer before the QObject parent is destroyed (and nothing bad will happen), but future developers or users of your API might not.
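
To make the danger concrete, here is a minimal sketch of the resulting double delete (parentObject is a placeholder for some other QObject that gets destroyed first):

auto child = std::make_unique<QObject>();
child->setParent(parentObject);   // parentObject now also claims ownership of *child

delete parentObject;              // Qt destroys all children, including *child
// when the unique_ptr goes out of scope, it deletes the same object a second time:
// double delete, undefined behavior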

2. Smart pointers call delete by default, not deleteLater

Calling delete on a QObject that actively participates in the event loop is dangerous and might lead to a crash. So, do not do it! However, all smart pointers that I am aware of call delete to destroy the managed object. So, you actively have to take care of this problem by specifying a custom cleanup handler/deleter function. For QScopedPointer there already exists QScopedPointerDeleteLater as a predefined cleanup handler that you can specify. You can do the same for std::unique_ptr, std::shared_ptr and QSharedPointer by defining a custom deleter function and specifying it when creating the smart pointer.
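
A minimal sketch of how such a deleter can look (the DeleteLaterDeleter name is made up for this post; only QScopedPointerDeleteLater is provided by Qt):

#include <QObject>
#include <QScopedPointer>
#include <memory>

// a custom deleter that defers destruction to the event loop
struct DeleteLaterDeleter
{
    void operator()(QObject *object) const
    {
        if (object) {
            object->deleteLater();
        }
    }
};

template<typename T>
using unique_qobject_ptr = std::unique_ptr<T, DeleteLaterDeleter>;

void example()
{
    // Qt's predefined cleanup handler for QScopedPointer
    QScopedPointer<QObject, QScopedPointerDeleteLater> scoped(new QObject);

    // the same idea for std::unique_ptr and std::shared_ptr
    unique_qobject_ptr<QObject> unique(new QObject);
    std::shared_ptr<QObject> shared(new QObject, DeleteLaterDeleter());
}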

Wrestling for Object Ownership with the QQmlEngine

Besides the QObject ownership, there is another, more subtle problem that one should be aware of when injecting objects into the QQmlEngine. When using QtQuick in an application, there is often the need to inject objects into the engine (I will not go into detail here, but for further reading see https://doc.qt.io/qt-5/qtqml-cppintegration-topic.html). The important fact to be aware of is that at this point a heuristic decides whether the QML engine and its garbage collector assume ownership of the injected objects, or whether the ownership is assumed to be on the C++ side (and thus managed by you and your smart pointers).

The general rule for the heuristic is described by the QQmlEngine::ObjectOwnership enum. Here, make sure you note the difference between QObjects returned via a Q_PROPERTY and QObjects returned from a call to a Q_INVOKABLE method. Moreover, note that the description there misses the special case that when an object has a QObject parent, CppOwnership is also assumed. For a detailed discussion of the issues (which might surface as a surprisingly hard-to-understand stack trace coming from the depths of the QML engine), I suggest reading this blog post.
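
To illustrate that difference, here is a small sketch (the class and member names are made up for this post):

#include <QObject>
#include <memory>

class Provider : public QObject
{
    Q_OBJECT
    // an object returned through a property keeps C++ ownership by default
    Q_PROPERTY(QObject *settings READ settings CONSTANT)
public:
    QObject *settings() const { return m_settings.get(); }

    // an object returned from an invokable method is assumed to be owned by the
    // QML/JavaScript engine, unless it already has a QObject parent
    Q_INVOKABLE QObject *createReport() { return new QObject(); }

private:
    std::unique_ptr<QObject> m_settings = std::make_unique<QObject>();
};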

Summing up the QML part: when you are using a smart pointer, you will hopefully not set any QObject parent (which automatically would have told the QML engine not to take ownership…). Thus, when making the object available to the QML engine, you must be very much aware of the way you put the object into the engine and, if needed, you must call the QQmlEngine::setObjectOwnership() static method to explicitly mark your objects as being handled by you (otherwise, bad things will happen).
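
The call itself is a one-liner; a minimal sketch, with object being the instance you expose to QML:

#include <QQmlEngine>

void keepCppOwnership(QObject *object)
{
    // explicitly keep ownership on the C++ side for an object exposed to QML
    QQmlEngine::setObjectOwnership(object, QQmlEngine::CppOwnership);
}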

Conclusion

Despite the issues above, I very much favor the use of smart pointers. Actually, I am constantly switching to smart pointers in all projects I am managing or contributing to. However, one must be a little bit careful and conscious about the side effects when using them in Qt-based projects. Even though they bring much simpler memory management, they do not relieve you of the need to understand how memory is managed in your application.

PS: I plan to follow up soon with a post about how one could avoid those issues with the QML integration on an architectural level; but so much for now, this post is already too long 🙂

DMA-Buf Support in QtWayland for Client Buffers

This post summarized in one sentence: QtWayland’s dev branch gained support for the linux-dmabuf-unstable-v1 protocol (version 3), which allows QtWayland compositors to be used on top of the open-source etnaviv driver and, moreover, brings a performance improvement on all drivers.

With a Few More Details

Sharing buffers between Wayland clients and the Wayland compositor is a good idea to avoid unnecessary buffer copies. For buffer sharing, however, descriptors are needed that explain the client’s buffer memory layout to the compositor (look here for more details about DMA buffer modifiers and here for more background about buffer sharing). For this task, there is a Wayland protocol extension called “linux_dmabuf_unstable_v1”, which introduces a communication interface between Wayland client and compositor to provide buffers in the form of file handles and to describe them with so-called buffer modifiers, such that the compositor is able to understand the memory organization of the received buffers.

During the hacking hours of last Akademy I started to look into this topic and into how to introduce the DMA client buffer handling interface into the QtWayland compositor framework. My main focus for this protocol extension was not on the rendering speed aspect alone, though, but on making QtWayland-based compositors available on i.MX6 hardware with the etnaviv open source driver (for details on why linux_dmabuf_unstable_v1 is required for this, see this blog post about making Weston compatible with etnaviv).

Fortunately, some initial base work had already been done and I had two extremely helpful guys on IRC whom I could bother with questions (many thanks to Johan Helsing and Daniel Stone!). After much testing, code improvements and refactoring during the last weeks, mostly by testing my patch on i.MX6 development boards, the patch finally reached a decent quality and was merged upstream into Qt’s dev branch, which will make the linux-dmabuf-unstable-v1 support available with the Qt 5.13 release. However, I can assure you that this patch (with minor changes) also works quite well on Qt 5.10, 5.11 and 5.12 (I tested with all of these versions but will not provide support or compatibility patches 😉 ). Also thanks to my employer for letting me spend some of my work time fighting memory leaks and getting the patch into a decent quality.

Using linux-dmabuf-unstable-v1 with QtWayland

The protocol extension, which is available in Qt’s dev branch, is opt-in. So the QtWayland compositor must be told explicitly to use DMA buffers. This can be done by setting the following environment variable:

QT_WAYLAND_CLIENT_BUFFER_INTEGRATION=linux-dmabuf-unstable-v1

Please also note that the extension is built conditionally, based on whether the libdrm headers are found or not. In case the extension is not there (starting the compositor with WAYLAND_DEBUG=1 is a good idea for gaining insights), please have a look at Qt’s configuration logs. Moreover, you want to use Mesa version 18.1.5 or newer, or have this patch applied; otherwise surfaces are not properly updated on a buffer change.

In case you work on an i.MX6 with etnaviv, you also want to use this patch to avoid leaking a buffer every time a client window is destroyed.

Syntax Highlighter for Wayland Traces

When debugging window compositing problems with the Wayland client-server protocol, it is often a good idea to set the environment variable “WAYLAND_DEBUG=1” and to take a deep look at the messages that are sent via this protocol. But as always, a lot of output is generated and highlighting can help very much. So far, you could use Johan’s excellent QML-based highlighter with many cool features (e.g. rainbow colors for different objects).

However, in my workflow I usually already have Kate open and simply want to paste a trace into it and use Kate’s cool syntax highlighting features. So, yesterday I sat down and created an initial set of highlighting rules for Wayland trace logs. These rules are already merged and will be available with the next KF5 release.

If you do not want to wait until the next KF5 release, just save the wayland-trace.xml file to “~/.local/share/org.kde.syntax-highlighting/syntax/” in your home folder, restart Kate, and then select the highlighting scheme “Other -> Wayland Trace”.

FOSDEM 2017 & the QtWayland Compositor Framework

This will be a rather short blog post, but since I completely missed writing it before this year’s FOSDEM, let me just give you a short pointer to my current talk: this year, for the first time, I submitted a talk to the Embedded & Automotive DevRoom. If you think that this sounds crazy: actually, what we see on modern embedded devices, like in cars or even bigger machines, tends to gain a complexity similar to the good old Linux desktop environments. In terms of multiple processes, window compositing and UI requirements, a lot of such demands are already on the table…

Recently, I looked into the QtWayland Compositor framework, which is an awesome new tool if you want to create a small but use-case-specific Wayland compositor, as is often the case in the embedded world. The framework was just released as stable API with Qt 5.8. If you want to read more, I just gave a talk about it yesterday:

Have fun 🙂