"Qt for GTK Developers" - a talk at UbuCon 2009, Göttingen, Germany

This Sunday, I gave a presentation at UbuCon 2009, the German Ubuntu developers and users conference, held in the wonderful historic town of Göttingen in northern Germany. The conference covers a variety of distribution development topics, with about 250 participants and a 5-track presentation schedule (!). The talk I submitted was about a topic that has fascinated me a lot lately - the convergence of Free Software desktop technologies under the hood, which brings Qt developers in touch with GTK-based technologies more and more, and vice versa - and about our experience at KDAB with developing such technologies. It was called "Qt for GTK Developers", purposefully slightly on the provocative side, with a smirk. After all, I am used to working in areas with unusual risk conditions, so why the heck not? Also, it comes nowhere close to the stress levels created by parenting. The talk covered how Qt and GTK are both used for developing base desktop services, which toolkit dominates in what areas, and what our guesses are on what the future brings. The central part was an overview of Qt technologies and practices, presented in contrast to GTK. The talk consisted of four sections, dubbed "Everything was better in the old days", "Everyone does what he wants", "Qt is not what it used to be", and "This is as good as it gets". There were quite a few good laughs between the audience and me. Read on for more details.

Everything was better in the old days
It started out with a flashback to the GCDS, and how the Gnome and KDE communities collided and formed a new sun of creativity. Are Tracker, Soprano, Strigi, and Nepomuk all doing the same thing, and does it really make sense to run two separate indexing services on the desktop just to be able to use both the KDE and Gnome infrastructure? Is DBus a Gnome, a KDE, or an obsolete technology? Users want consistent desktop styles - do we then always have to duplicate them? Will Evolution and Kontact store, index, and synchronize the same information from the user's data, or two independent copies? The discussion culminated in uncomfortable questions: does the desktop have an identity for the regular Linux user? Are KDE and Gnome converging into one user experience where both the distributions and the users mix and match whatever applications, window manager, and workspace they please? What do the lofty names for the core technologies actually mean in the long run? There is, of course, no ready-made answer to these questions.

Everyone does what he wants
This section discussed the current situation in more detail. GTK and Qt take rather different approaches to development and packaging: GTK is more of a bundle of technologies which, combined, make up the user experience, while Qt integrates its modules into one (huge) package. The difference in the development model is obvious - Qt is developed by a single company, while in the GTK world, multiple smaller companies specialize in developing individual GTK technologies, potentially backed by the larger GTK-using companies funding their development efforts. The differing license models of Qt and GTK were discussed, with a focus on what difference those make today, and how they influence new software projects when the GUI toolkit is selected. This was about as much abstract reasoning as a human being can take at a time, so the next section had some code. Some.

Qt is not what it used to be
This section was an overview of what Qt is today, and how it changed from a GUI toolkit into a comprehensive application framework. Qt is still mostly seen as purely a GUI toolkit by those not using it, so I presented which modules it consists of, which tools come with it, and what programming practices it facilitates. The fact that Qt has fully dynamic integration of DBus, for example, took many by surprise. The XML modules (and the fact that they are written as part of Qt, and tend not to break with the next Qt version), the WebKit integration, the SVG renderer, QtScript - all of these are apparently largely unknown outside of the Qt developers' world. Then I presented an overview of the Qt development tools, both the fully fledged GUI applications and the command line tools that are part of the build process. The resource system lifted a few eyebrows ("Could I use that to link some resources, and access them from a GTK program if I write a wrapper that uses Qt to get to the files? Sure.") I explained how Qt is starting to be used for defining platform APIs as well, while at the same time integrating with glib. It is amazing to see how the different technologies circle around each other. I demonstrated a few bits of Qt code, starting, of course, with "Hello World". I showed a demo of QGraphicsView, and the QUnitTest-based QThreadPool performance test I also used at the DevDays (me being me, there had to be some multi-threading in the presentation). It was apparent that the ease and straightforwardness with which such examples can be written in Qt surprised some in the audience. I did point out that the fact that platform APIs are developed using Qt does not automatically mean that the KDE guys get their way - this seemed comforting, in a weird way of shared misery.

This is as good as it gets
The outlook and summary wrap-up is probably the part where I made the fewest friends, so to say. I explained why I think that natively compiled languages will dominate desktop applications for quite some time to come. The inefficiency of running many small desktop applications in (J)VMs seems to impede the use of Java for the kind of applications we develop. I declared Mono to be a bad idea (tm), because with it the Free Software world is trotting along behind Microsoft. It means we are re-implementing their idea of how future (Windows?) applications should be developed, again. That must be a waste of Free Software engineering resources - we are innovators, not followers. To be precise and to avoid misunderstandings: I did not criticize C# as a bad programming language, but the process by which the .NET API is developed further and extended. And then I said I think Vala is a bad idea as well, because I find it highly improbable that a couple of people, no matter how brilliant, could define a strikingly better programming language in a few weeks than whole research departments in a community process over many years. Even more doubt is cast by the fact that it was invented solely to get rid of C, without admitting that C++ would have been a better choice. As expected, these statements started an interesting and somewhat heated discussion. No animals or humans were harmed in the process, and I made it out safely to the train station. Overall, this was a very interesting encounter. The organizers of the conference did a very good job - the weekend flat rate for food and drinks is a really great idea. The slides will come up on the conference site in a few days.


Things I like about Vala:

  • It has lots of nice high level features.
  • It is natively compiled.
  • It has some kind of memory management (refcounts).
  • It automatically generates a C (glib) interface so C applications can use Vala libs.

What I don't like about Vala:

  • No real garbage collector.
  • Stone age exception handling. In modern languages a try statement introduces no overhead whatsoever, therefore in a normal course execution a try-catch-black does no add overhead. When an exception is thrown the stack is walked and lookup tables are used to find the right catch very fast. Is is of course not possible to implement when your language compiles to C, because there is no try-catch in C. In garbage collected languages this can be implemented very efficiently, because no decrefs or destructors have to be called while the stack is walked (whole activation records can be skipped, when there is no appropriate catch or finally). In Vala a returned value has to be checked (conditional branch!) even in the normal course. For this reason using exceptions is discouraged in Vala. As a matter of fact the most expensive operation when throwing an exception in Java is the allocation of the exception object (I think as long as no stacktrace has to be printed and therefore the stack has not to be analysed in this detailed way). I think Vala should be honest and just not implement exception handling using try-catch.

By Mathias Panzenböck at Wed, 10/21/2009 - 16:39

Everything was better in the old days...

This wouldn't be a problem if the primary point of computers were still just running applications - the ones which fit your needs best, no matter which toolkit they are written in. And the lack of applications is a predominant problem anyway. The way many desktop developers tend to forget about that is like reckoning without one's host. People seem to ignore the fact that a technology like KIO is pretty useless if it's only linked by 10% of the applications...

Sometimes I wonder whether the motivation for having two desktops is not having two different desktop flavours (which I would understand), but rather that we have two different toolkit fanclubs. Desktop developers don't even "cheat" on their favorite toolkit once in a while. Sharing code seems like an impossible thought for many of them. Will FOSS ever be able to compete with commercial desktops when design is primarily driven by such weird feelings? Sometimes I doubt it.

Deciding on a single style and framework for basic platform APIs is child's play for companies like MS or Apple - compared to FOSS desktops. When I, for instance, once argued with KDE developers that GObject+C might be a good option for some (really) low-level platform APIs, everyone was upset, pointing out how much better Qt/C++ was. But that misses the point. In such cases it's important that you decide, not so much which technology you decide for.

By nf2 at Fri, 10/23/2009 - 00:17