Upstream and Downstream: why packaging takes time

Here in the KDE office in Barcelona some people spend their time on purely upstream KDE projects and some of us are primarily interested in making distros work, which means our users can get all the stuff we make. I've been asked why we don't just automate the packaging and go and do more productive things. One view of working on a distro like Kubuntu is that it's just a way to package up the hard work done by others and take all the credit. I don't deny that, but there's quite a lot to the packaging of all that hard work; for a start, there's a lot of it these days.

"KDE" used to be released once every nine months or less frequently. But yesterday I released the first bugfix update to Plasma, to make that happen I spent some time on Thursday with David making the first update to Frameworks 5. But Plasma 5 is still a work in progress for us distros, let's not forget about KDE SC 4.13.3 which Philip has done his usual spectacular job of updating in the 14.04 LTS archive or KDE SC 4.14 betas which Scarlett has been packaging for utopic and backporting to 14.04 LTS. KDE SC used to be 20 tars, now it's 169 and over 50 langauge packs.


If we were packaging it without any automation, as used to be done, it would take an age, but of course we do automate the repetitive tasks: the KDE SC 4.13.97 status page shows all the packages and highlights obvious problems. But with 169 tars even running the automated script takes a while, and then you have to fix any patches that no longer apply. We have policies to dissuade carrying patches; any patches should be upstream in KDE or on their way upstream, but sometimes it's unavoidable that we have some to maintain, and these often need small changes for each upstream release.


Much of what we package are libraries, and if one small bit changes in a library, any application which uses it can crash. This is ABI, and the rules for binary compatibility in C++ are nuts. Not infrequently someone in KDE will alter a library's ABI without realising. So we maintain symbols files listing all the exported symbols. These can often feel like more trouble than they're worth: they need updating when a new version of GCC produces different symbols, or when symbols disappear and on investigation turn out to be marked private so nobody should be using them anyway. But if you miss a change and apps start crashing, as nearly happened in KDE PIM last week, then people get grumpy.
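For readers who haven't seen one, a symbols file maps each exported (mangled) C++ symbol to the upstream version that introduced it; a small sketch with invented library and symbol names:

```
libfoo.so.1 libfoo1 #MINVER#
 _ZN3Foo3barEv@Base 1.0.0
 _ZN3Foo7setNameEi@Base 1.2.0
```

At build time dpkg-gensymbols diffs the symbols actually exported by the freshly built library against this list; a vanished symbol fails the build, which is how an accidental ABI break gets caught before users see crashes.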


Debian, and so Ubuntu, documents the copyright licence of every file in every package. This is a very slow and tedious job, but it's important that it's done both upstream and downstream, because if you don't, people won't want to use your software in a commercial setting and at worst you could end up in court. So I maintain the licensing policy, not infrequently have to fix bits which are incorrectly or unclearly licenced, and answer questions; just today I was reviewing for Eike whether a KCM in Frameworks had to be LGPL licenced. We write a copyright file for every package, and again this can feel like more trouble than it's worth; there's no easy way to automate it, but by some readings of the licence texts it's necessary to comply with them, and it's just good practice. It also means that if someone starts making claims like requiring licensing for already distributed binary packages, I'm in an informed position to correct such nonsense.
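Debian copyright files increasingly use the machine-readable DEP-5 format; a minimal sketch, with the upstream name and copyright holders invented for illustration:

```
Format: https://www.debian.org/doc/packaging-manuals/copyright-format/1.0/
Upstream-Name: libfoo

Files: *
Copyright: 2010-2014 Jane Hacker <jane@example.com>
License: LGPL-2.1+

Files: cmake/*
Copyright: 2012 Someone Else <someone@example.com>
License: BSD-3-clause
```

Every file in the tarball has to be covered by some Files stanza, which is why each new upstream release means re-checking the headers.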


When we were packaging KDE Frameworks from scratch we had to find a description of each Framework. Despite policies for metadata, some were quite underdescribed, so we had to go and search for a sensible description for them. In fact, not infrequently we'll need to package a new library which doesn't even have a sensible paragraph describing what it does. We need to be able to make a package show something of a human face.


A recent addition to the world of .deb packaging is MultiArch, which allows i386 packages to be installed on amd64 computers, as well as some even more obscure combinations (powerpc on ppc64el, anyone?). This lets you run Skype on your amd64 computer without messy kludges like the ia32-libs package. However, it needs quite a lot of attention from packagers of libraries: marking which packages are multiarch, and which depend on other multiarch or architecture-independent packages. Even after packaging KDE Frameworks I'm not entirely comfortable with doing it.
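The marking itself is a couple of extra fields in debian/control, though deciding where they belong is the hard part; a hypothetical library stanza:

```
Package: libfoo1
Architecture: any
Multi-Arch: same
Pre-Depends: ${misc:Pre-Depends}
Depends: ${shlibs:Depends}, ${misc:Depends}
```

Multi-Arch: same declares that the i386 and amd64 builds can be co-installed, which only works if every file they ship is either identical across architectures or installed under an architecture-qualified path such as /usr/lib/x86_64-linux-gnu.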

Splitting up Packages

We spend lots of time splitting up packages. When, say, Calligra gets released, it's all in one big tar, but you don't want all of it on your system: maybe you just want to write a letter in Calligra Words, and Krita has lots of image and other data files which take up space you don't care for. So for each new release we have to work out which of the installed files go into which .deb package. It takes time, and worse, occasionally we get it wrong, but if you don't want heaps of stuff on your computer you don't need, then it needs to be done. It's also needed for library upgrades: if there's a new version of libfoo and not all the programs have been ported to it, then you can install libfoo1 and libfoo2 on the same system without problems. That's not possible with distros which don't split up packages.
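The split itself is expressed in per-package .install files listing which built files land in which .deb; a made-up example for a source that produces a library, its development files and an application:

```
# debian/libfoo1.install
usr/lib/*/libfoo.so.1*

# debian/libfoo-dev.install
usr/include/foo/
usr/lib/*/libfoo.so

# debian/foo-app.install
usr/bin/foo
usr/share/applications/foo.desktop
```

When upstream adds or moves files between releases, these lists are what has to be reworked, and a missed file is exactly the "occasionally we get it wrong" case.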

One messy side effect of this is that when a file moves from one .deb to another .deb built from the same source, maybe because Debian chose to split it another way and we want to follow them, a Breaks/Replaces/Conflicts needs adding. This is a pretty messy part of .deb packaging: you need to specify which version is Broken/Replaced/Conflicted-with, and depending on the type of move you need to specify some combination of these three fields, but even experienced packagers seem to be unclear on which. And then if a backport (with files in their original places) is released which has a newer version than the one you specified in the Breaks/Replaces/Conflicts, it just refuses to install and stops halfway through installing, until a new upload is made which updates the Breaks/Replaces/Conflicts version in the packaging. I'd be interested in how this is solved in the RPM world.
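For the common case of a file moving to another package, the package gaining the file declares both fields against versions that still shipped it in the old place; sketched with invented names and versions:

```
Package: foo-data
Breaks: foo-app (<< 4.13.97-1)
Replaces: foo-app (<< 4.13.97-1)
```

The << comparison names the last version that shipped the file in foo-app; a backport carrying a newer version but the old file layout falls outside this range, which is what causes the half-installed failure described above.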

Debian Merges

Ubuntu is forked from Debian, and to piggyback on their work (and add our own bugs while taking the credit) we merge in Debian's packaging at the start of each cycle. This is fiddly work involving going through the diff (and for patches that's often a diff of a diff) and the changelog to work out why each alteration was made. Then we merge them together. It takes time and it's error prone, but it's what allows Ubuntu to be one of the most up-to-date distros around, even while much of the work that goes into maintaining universe packages not part of some flavour has slowed down.

Stable Release Updates

You have Kubuntu 14.04 LTS but you want more? You want bugfixes too? Oh, but you want them without the possibility of regressions? Ubuntu has a quite strict definition of what's allowed in after an Ubuntu release is made; this is because once upon a time someone uploaded a fix for X which had the side effect of breaking X on half the installs out there. So updates can only get into the archive for certain packages with a track record of making bugfix releases without sneaking in new features or breaking bits. They need to be tested, have some time pass to allow for wider testing, be tested again using the versions compiled in Launchpad, and then be released. KDE makes bugfix releases of KDE SC every month and we update them in the latest stable and LTS releases, as 4.13.3 was this week. But it's not a process you can rush, and it will usually take a couple of weeks. That 4.13.3 update was even later than usual because we were busy with Plasma 5 and whatnot. And it's not perfect: a bug in Baloo did get through with 4.13.2. But it would be even worse if we did rush it.


Ah, but you want new features too? We don't allow new features into the normal updates because they have more chance of introducing regressions. That's why we make backports, either in the kubuntu-ppa/backports archive or in the Ubuntu backports archive. This involves running the package through another automation script to change whatever needs changing for the backport, then compiling it all, testing it and releasing it. Maintaining and running that backport script is quite faffy, so sending your thanks is always appreciated.

We have an allowance to upload new bugfix (micro) releases of KDE SC to the Ubuntu archive because KDE SC has a good track record of fixing things and not breaking them. When we come to wanting to update Plasma we'll need to argue for another allowance. One controversial issue with KDE Frameworks is that there are no bugfix releases, only monthly releases with new features. These are unlikely to get into the Ubuntu archive; we can try to argue the case that with automated tests and other processes the quality is high enough, but it'll be a hard sell.

Crack of the Day
Project Neon provides packages of daily builds of parts of KDE from Git. And there are weekly ISOs made from this too. These guys rock. The packages are monolithic and install into /opt so they can live alongside your normal KDE software.


You should be able to run KDELibs 4 software on a Plasma 5 desktop. I spent quite a bit of time ensuring this is possible by having no overlapping files between kdelibs/kde-runtime and KDE Frameworks and some parts of Plasma. This wasn't done primarily for Kubuntu; many of the files could have been split out into .deb packages shared between KDELibs 4 and Plasma 5, but other distros which just install packages in a monolithic style benefitted. Some projects like Baloo didn't ensure they were co-installable, which is fine for Kubuntu as we can separate the libraries that need to be co-installed from the binaries, but other distros won't be so happy.

Automated Testing
Increasingly KDE software comes with its own test suite. Test suites are something that has come late to free software (and maybe software in general), but now that they're here we can have higher confidence that the software works as intended. We run these test suites as part of the package compilation process, and not infrequently find that the test suite doesn't run at all; I've been told that in the past packagers weren't expected to use them. And of course tests fail.
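With the dh sequencer, the package build runs the upstream tests via dh_auto_test; a sketch of a debian/rules override that wraps them in a virtual X server, since many KDE tests need a display (xvfb-run assumed available via a build-dependency):

```
#!/usr/bin/make -f

%:
	dh $@

# Run the upstream test suite under a dummy X server.
override_dh_auto_test:
	xvfb-run -a dh_auto_test
```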

Obscure Architectures
In Ubuntu we have some obscure architectures. 64-bit ARM is likely to be a useful platform in the years to come. I'm not sure why we care about 64-bit PowerPC; I can only assume someone has paid Canonical to care about it. Not infrequently we find software compiles fine on normal PCs but breaks on these obscure platforms, and we need to debug why that is. This can be a slow process on ARM, which takes an age to do anything, or even slower where I don't have access to a machine to test on at all, but it's all part of being part of a distro with many use cases.

Future Changes
At Kubuntu we've never shared infrastructure with Debian despite having 99% the same packaging. This is because Ubuntu, to an extent, defines itself as the technical awesomeness of Debian with smoother processes. But for some time Debian has used Git while we've used the slower Bzr (it was an early plan to make Ubuntu take over the world of distributed revision control with Bzr, but then Git came along and turned out to be much faster, even if harder to get your head around), and they've also moved to team maintainership, so at last we're planning shared repositories. That'll mean many changes in our scripts but should remove much of the headache of merges each cycle.

There's also a proposal to move our packaging to daily builds so we won't have to spend a lot of time updating packaging at every release. I'm skeptical that the hassle of the infrastructure for this, plus fixing packaging problems as they occur each day, will be less work than doing it for each release, but it's worth a try.

ISO Testing
Every six months we make an Ubuntu release (which includes all the flavours, of which Ubuntu [Unity] is the flagship and Kubuntu is the most handsome), and there are alphas and betas before that which all need to be tested to ensure they actually install and run. Some of the pain of this has reduced since we did away with the alternate (text-mode debian-installer) images, but we're nowhere near where Ubuntu [Unity] or openSUSE is with OpenQA, where automated installs run all the time in various setups and some magic detects problems. I'd love to have this set up.

I'd welcome comments on how any workflow here can be improved or how it compares to other distributions. It takes time, but in Kubuntu we have a good track record of contributing fixes upstream, and we are all part of KDE as well as Kubuntu. As well as the tasks I list above, such as checking copyright or co-installability, I currently do Plasma releases; I just saw Harald do a Phonon release, and Scott has just applied for a KDE account for fixes to PyKDE. And as ever we welcome more people to join us; we're in #kubuntu-devel where free hugs can be found, and we're having a whole day of Kubuntu love at Akademy.


Thanks for letting us see a little bit of what packagers actually do every day. And thanks a lot for doing it. Sounds like it's not always fun ;)

By lhugs at Wed, 08/13/2014 - 21:41

So let me give my view on this as one of the packagers involved with Fedora KDE packaging. (Please note that these are my personal perceptions, I am not speaking for Fedora as a whole or even the Fedora KDE SIG as a whole.)

As I see it, we definitely share some of the problems. Some are less drastic in Fedora though, and some are entirely specific to Kubuntu.

> Patches
This one, we of course have, too. We also have policies discouraging patches, but we also end up carrying some for various reasons.

> Symbols
Silent ABI breaks are indeed a major annoyance. Around here, we do not really have the tools to automatically detect those, so we more or less just punt on this issue, i.e., we basically do 2 things:

  1. If things are released together (as kdelibs and kde-workspace were in the Software Compilation times), we always try to build the applications against the matching libraries, and use Requires with '=' versioning for known repeat offenders to forcefully version-lockstep them. (This used to be the case for kde-workspace and kdelibs before the permafreeze.)
  2. All the other cases are handled in a purely reactive way: If people report crashes, we rebuild it.

> Copyright
Fedora is also a very picky distribution on this issue, but we handle things in a somewhat different way. We do not ship a "copyright" file. Instead, we ship each upstream COPYING file (in fact, we are required to ship the GPL COPYING with every single GPL-licensed package, the only exception being if it Requires another subpackage of the same source package that already ships the license; in fact, this is actually required by the GPL if you read it closely!), and a one-line License tag which lists the names of the licenses involved (which should be preceded by comments saying which license applies to what parts if there is more than one). But to fill in the License tag properly, we also need to check the licenses of all the files in the tarball. We also need upstream to ship the COPYING files for all licenses involved, which is way too often not the case. (Our guidelines say that we do not need to add missing COPYING files if upstream does not ship them, but that we should report it to upstream and try to get them to add the files in future releases.)
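In spec-file terms that means a one-line License tag plus shipping the upstream license texts; a skeletal fragment with invented details:

```
# GPLv2+: the applications; LGPLv2+: the shared libraries
License: GPLv2+ and LGPLv2+

%files
%doc COPYING COPYING.LIB README
```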

> Descriptions
Here, we found simply reusing the Summary tag to do wonders. ;-) (To those who don't understand RPM: this just takes the contents of the Summary tag, adds a dot at the end, and makes that the description.) But of course having a real description would be better. This is something where we could try to work together in a better way, rather than every distro reinventing the wheel.

> Multiarch
For this, we use FHS multilib (/usr/lib64) rather than Debian multiarch (/usr/lib/x86_64-linux-gnu). We also actually multilib all our libraries (in particular, all the 64-bit libraries go to /usr/lib64), which means we don't have to figure out what should be multilib and what not. We do sometimes have multilib-related issues though, mainly stuff that hardcodes "lib" as the install destination, which should at least use lib${LIB_SUFFIX} if it isn't going to use the LIB_INSTALL_DIR. We also try to have -libs subpackages for applications that include shared libraries, so that we don't end up with the entire application getting needlessly multilibbed by our repository compose tools (i.e., we don't want to unnecessarily have e.g. amarok.i686 in the x86_64 repository).
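Such a -libs subpackage is just a second %package stanza that owns the shared objects; a hypothetical skeleton:

```
%package libs
Summary: Runtime libraries for foo

%description libs
Shared libraries needed by applications built against foo.

%files libs
%{_libdir}/libfoo.so.1*
```

Only the small libs subpackage then gets multilibbed by the repository compose tools, not the whole application.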

> Splitting up Packages
This is something we've been increasingly doing now. In some cases, it got forced on us because upstream split the tarballs, but even where they didn't, we have moved to doing more splits recently. (User requests for it have always been there, and with most of the monolithic modules gone, the arguments for keeping the remaining ones unsplit are not as strong as they used to be.) In some cases like Calligra/KOffice, we have been doing split packages for years though.

What we do not do in Fedora is to split out every library and to give it a package name that includes the soversion. RPM does automatic Provides and Requires on sonames, so it will already enforce that we get the correct soversion without encoding it in the package name. And in Fedora, we try to avoid shipping multiple versions of the same library whenever possible; instead, we try hard to get everything to build with the same version, even if it means backporting or reverting some patch(es). We only ship compatibility versions of a library when it really cannot be avoided, and then we only suffix the non-default versions of the library with a version number, and we use a user-readable version number rather than a soversion, e.g., the KDE 3 kdelibs is kdelibs3, not kdelibs4.

As for the Breaks/Replaces/Conflicts stuff, we do not have a distinction between "Breaks" and "Conflicts" in RPM. RPM does not handle configuration the way dpkg does with debconf, so there is no such difference to be made in RPM. We have Obsoletes (i.e., your Replaces) and Conflicts. Otherwise, the usage is very similar. But at least in KDE packaging, we do not really run into the issue you describe with backports because we'll typically backport the file moves, too. (We are allowed to do packaging changes in updates as long as the upgrade path is reasonable, i.e., as long as our users do not end up with some of the stuff they had installed magically removed.)
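To sketch the parallel (names and versions invented; this is an illustration, not Fedora's exact convention): a wholesale package rename carries Obsoletes plus Provides, while a file moving between co-existing subpackages typically just needs a versioned Conflicts on the last version that shipped it in the old place:

```
# new package takes over everything from the old name
Obsoletes: foo-old < 4.13.97-1
Provides:  foo-old = 4.13.97-1

# a file moved here out of foo-app in 4.13.97-1
Conflicts: foo-app < 4.13.97-1
```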

> Debian Merges
This is an issue that we do not have in Fedora. The RHEL packagers probably share some of your issues (with Fedora as the upstream distro instead of Debian), but as a non-Red-Hat Fedora packager, I cannot really relate to that.

> Stable Release Updates
> Backports
There, the Fedora policies, even though they have become more stringent than in the past, are still much more lenient than yours. We are allowed to push bugfix releases of any package by policy. We also have an allowance to push feature releases of the KDE SC (and by the way, the kernel also gets updated with feature releases in Fedora). I think the plan for KF5 is that we'll just push all the new releases as updates. Assuming they don't bump their sonames, we will probably be allowed to do that, especially considering that no bugfix-only releases are planned. (Soname bumps are not entirely impossible, but a lot harder to argue for, especially if they mean rebuilding dozens of packages as a grouped update. The guidelines definitely strongly discourage them.)

> Crack of the Day
This is something we haven't really been looking into. We do not currently have such a service, nor am I aware of any plans to put one up in the near future. The people working on Plasma 5 packaging for Fedora have been manually doing some preview ISOs with Plasma 5, but the packages only track the releases (not daily snapshots), and the live images are then built manually from those packages.

> Co-installability
This is also important to us, thank you for the work you did on this issue!

> Automated Testing
I must admit that most of the KDE packages in Fedora still don't run the testsuites. And if tests fail in a Fedora package, the "fix" is typically to add "|| :" to the invocation.
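The idiom works because ":" is the shell's no-op builtin, so "|| :" discards a failing exit status and the build carries on; a minimal demonstration, with "false" standing in for a failing test suite:

```shell
#!/bin/sh
set -e            # abort the build on any error...
false || :        # ...except this one: the failure is swallowed by the no-op
echo "build continues"
```

Running this prints "build continues" instead of aborting at the failed command.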

> Obscure Architectures
Fedora currently has 3 primary architectures: i686, x86_64 and armv7hl. All other architectures are secondary, i.e., if our packages are broken on those, it does not break the primary architectures. In theory, secondary architectures are entirely the responsibility of the secondary architecture maintainers, but of course, in practice, we will try to help them get our packages working. That can indeed be quite a mess. (s390x can be particularly "fun", for example.) For the primary ones, we are of course on the hook for fixing any issues. (One major annoyance there has been the qreal=float nonsense on ARM in Qt 4. Thankfully, that has now been fixed in Qt 5; qreal is now double by default everywhere in current 5.x releases. But it's of course too late to clean this up for Qt 4.)

> Future Changes
Fedora is also seeing changes right now (the "" plans; and no, I don't know whether that term came up before or after "Plasma Next" ;-) ) for which we're still trying to figure out how they affect us KDE packagers. In particular, Fedora is now going to focus on 3 Products (a Workstation based on GNOME 3, Server and Cloud), and we would like a fourth, KDE-based, Product. The outcome on that is still open.

> ISO Testing
In Fedora, KDE Plasma is a release-blocking desktop environment and so we get tested under the same process as Fedora as a whole. There is a bunch of testcases that the images have to pass. This forms a matrix, with the test cases as lines and the images to test as columns. This matrix is filled in manually by people testing the images. If the matrix is all green, the release is obviously good to go. If not, the QA team decides what counts as a blocker, what is eligible as a freeze override while waiting for the blockers to get fixed, and what is just not worth the risk of an attempted fix at all. There has been some talk of automating QA for a long time in Fedora, but most of the QA is still done manually around here.

There are some additional issues we encounter here in Fedora:

Shared Dependencies

I'm surprised you haven't brought up that one as well, given that you also draw from repositories shared with Ubuntu [Unity].

One problem we have had more than once is that some core system library got upgraded in Fedora because GNOME required the newer version, but KDE was not ready for it yet. The solution was almost always some ugly kludge.

For PolicyKit 1, we ended up shipping the GNOME authentication agent even for KDE for a release because the KDE one wasn't ready yet.

For NetworkManager 0.9, Fedora 15 shipped with a really ugly hack where they patched a compatibility API emulating NetworkManager 0.8 into 0.9. The big problem was that every NetworkManager update that was pushed to Fedora 15 (and there were many) would break one of the 2 APIs (while fixing the other, so the users of the desktop environment that was not affected would all give positive feedback to get the update rushed out to stable and the other would break). In addition, the compatibility API supported only what the snapshot of Plasma-networkmanagement shipped in Fedora 15 GA happened to use; so, when support for system connections was added in upstream Plasma-networkmanagement, that could not be used. We ended up pushing an update to something that used the real 0.9 API as soon as it became available (despite some known issues with migration of settings; after yet another NetworkManager update totally breaking the compatibility API, it was just not possible to wait anymore), and a later NetworkManager update dropped the compatibility API entirely. That was a lot of churn in a stable release.

For BlueZ 5, we had to ship Fedora 20 with a prerelease of the new BlueDevil that had some bugs; those should now be fixed in post-release updates though. (That was probably the nicest solution, but in the other 2 cases I mentioned, not even such a prerelease was available for us to ship.) In less drastic cases, we sometimes have to backport support for a newer system library in some application from master.

With the post-release updates that we do, we sometimes have the opposite problem, i.e. that the new version wants a newer system library than what we have available. There too, the solutions vary. Sometimes, we can just upgrade the library along with the KDE application that requires it. Sometimes, we can get the application to build with the older library by reverting some patches. And sometimes, we just do not push the new version because of the dependency issues.

Patent-encumbered Dependencies

This one probably affects us more than any other distro. Please, oh pretty please, do not depend directly on something like FFmpeg! For multimedia, please use GStreamer 1 (for KDE applications, probably through an abstraction, i.e., through either Phonon or QtGStreamer), which handles all codecs as plugins. That way, we can just build your package without worrying about patents, and users can plug in the codecs they want from a third-party repository (once for all applications!) and they'll just work. And yes, GStreamer can do encoding, decoding, transcoding (yes, even from file to file), streaming in both directions and more. If you depend directly on FFmpeg, then we can only build your application without FFmpeg support or not at all. Similar considerations go for any other dependency known to be patent-encumbered, but FFmpeg is the prime offender there.

Non-Free Dependencies

In a similar vein, please do not depend on libraries that are not Free (as in Speech) Software. For example, for a long time, we were unable to package Simon because it relied on HTK, which is not even redistributable. (Simon also had a mandatory dependency on Julius, which comes with a custom BSD-style-but-not-quite license. The freeness of that license was also unclear, but it was eventually ruled to be acceptable for Fedora.) These days, Simon also supports PocketSphinx which is Free Software and can replace both HTK and Julius.

Even if the dependency is optional, it still means that Fedora users will not be able to use the features that rely on the dependency.

Bundled Libraries

One thing Fedora really does not like by policy is bundled libraries. Please never bundle libraries, whatever the reason. You are just making extra work for us, because we will have to package the library separately anyway, but now in addition we will have to get rid of your bundled copy. The worst is patent-encumbered or non-Free bundled libraries, because we cannot distribute those at all, so in that case, we actually have to unpack your tarball, rm -rf the bundled crap and repack the tarball! That is something we really hate having to do. (For bundled libraries with acceptable licensing and without patent issues, we can rm -rf them in the %prep step of the specfile, but for encumbered stuff, that is not sufficient.) In short, there's not much that annoys us more than a 3rdparty/ffmpeg directory, please don't do that!

By Kevin Kofler at Thu, 08/14/2014 - 03:33

Based on the indications from Kevin and Jonathan, I would like to add the view of an openSUSE KDE community packager. Within openSUSE a few community members are packaging KDE, so the below might not necessarily reflect the opinion of the other members.

As indicated also by Kevin, we share most of the issues/problems mentioned. But I believe we have found some alternative solutions for them. Below I will just mention those that are different for openSUSE.

> Symbols
Silent ABI breaks are also an issue for openSUSE; however, we utilize the power of the openSUSE OBS to rebuild all dependent packages. This means that an update of, e.g., kdebase-workspace would not only rebuild this package, but also all other packages that depend on kdebase-workspace. A checking mechanism within OBS validates whether the new build of a package differs from the previous build. If it does not, the new build is disregarded. This prevents users from updating unnecessary components.

> Copyright
Fedora is not the only distribution that is picky regarding licensing and copyrights; openSUSE faces similar requirements. Like Fedora, we are required to have the RPM License tag and the upstream COPYING files. Every new package that is submitted goes through a legal review where the source is reviewed against the existing COPYING files. If something is found to be incorrect, we submit upstream bug reports to get the licensing fixed. Like Fedora, we do not ship our own COPYING files with the packages.

> Multiarch
We follow the same principles as Fedora, trying to split the shared libraries from the binaries as much as possible. By using the soversion in the package names of those shared libraries, we are able to make them co-installable with older or newer versions.

> Splitting up Packages
The same methodology as Fedora is used. As people might know, some time ago we had a small project named KlyDE to see how far we could actually split packages to create a desktop where people were not required to install the items that they didn't want (e.g. Activities, semantic desktop, social), of course without breaking the desktop. Although successful, we only applied it in certain areas in order not to confuse the user with too many packages.

> Stable Release Updates
> Backports
Like Fedora, openSUSE allows releasing bugfixes for any package in older openSUSE releases. Especially for this, a maintenance process has been set up that supports this type of fix. The only things excluded from these updates are those that incorporate new functionality. In other words, it would be impossible to update KDE 4.12 to KDE 4.13 as a maintenance update. But up to now we have been pushing the 4.13.1, 4.13.2, etc. releases as maintenance updates, as they are bugfix releases. With the help of the openSUSE OBS, however, we defined a special repository where the latest KDE release is built for all supported openSUSE releases. This way users can add this repository to their package management and always run the latest KDE SC release.

> Crack of the Day
openSUSE at this moment offers 5 repositories for different KDE versions:
* The latest official KDE SC release based on kdelibs4
* The development repository where the upcoming KDE 4.x SC release is built (including betas and RCs)
* The latest official release based on KDE Frameworks
* The unstable repository where we provide daily updates based on Git snapshots for packages based on kdelibs4
* The unstable repository where we provide daily updates based on Git snapshots for packages based on Frameworks

> Co-installability
Thanks a lot for your work here. Unfortunately we didn't reach 100%, where we would be able to offer a full KDELibs 4 environment alongside a full Frameworks 5 environment. But at this moment we are just using the RPM Conflicts tag to remove the conflicting package, and we try to split the KDE 4 packages as much as possible so that the conflicting files are really gone without losing too much functionality.

> Automated Testing
> ISO Testing
openSUSE doesn't utilize the test suites that come with the packages. Packages are built, and issues are identified and resolved based on manual testing done by the packaging team. However, those packages that end up in the distribution itself are tested against pre-defined scenarios in the automated QA process. For openSUSE the KDE desktop environment (at this moment KDE 4) is an important product, and therefore it has to pass the QA process before updates are accepted into the distribution. Of course the automated QA process cannot test everything, but at least the KDE desktop can be started without errors.

>Shared Dependencies
Since openSUSE offers multiple desktops in a single distribution, this is one of the biggest areas for issues. Sometimes GNOME jumps to a newer dependency version faster than KDE does, and vice versa. Past examples were GStreamer 1.0 support and BlueZ 5. Like Fedora, openSUSE 13.1 shipped with a pre-release/git snapshot of BlueDevil, just to support the BlueZ 5 that GNOME required. Fortunately the relationship between the GNOME and KDE packaging teams is very good, and we always try to resolve these issues in the best way possible.

>Patent-encumbered Dependencies
>Non-Free Dependencies
Like Fedora, these types of dependencies cause issues for openSUSE too. Fortunately we are able to move such packages to an alternative community build service (Packman) and build the full functionality there. Recently, however, we have seen quite some work on building a skeleton package for ffmpeg that would allow us to build against it. This ffmpeg package does not itself provide the actual functionality; the user has to install ffmpeg from Packman to get it.

>Bundled Libraries
Similar issues as mentioned by Kevin. The target for openSUSE is to build as much as possible against the system libraries. However, the openSUSE policies seem a little looser regarding source files: it is considered fine if the tarball contains the sources for ffmpeg, as long as we do not build those libraries. This allowed us to build the Chromium web browser, but also the VLC libraries that are required for the phonon-vlc backend. VLC itself is built with the open source plugins/codecs, but can easily be replaced with a full VLC package from the VLC or Packman repositories.

By Raymond Wooninck at Thu, 08/14/2014 - 10:16

For splitting packages, you actually split the packages even more than we do in Fedora. For example, as you mentioned, you do the per-library splits with soname-based naming that Debian and Ubuntu/Kubuntu also do, which is something we don't do in Fedora. (We typically do one -libs subpackage for multilib purposes and that's it.)

The update policies actually put you somewhere in between Fedora and Kubuntu, as you're allowed to push any bugfix release (like us), but no feature releases (whereas we can and do get exceptions for those on a case-by-case basis).

As for GStreamer, that's actually one of the few cases where we do ship both the old and the new version in parallel in Fedora. So we just kept building KDE stuff against GStreamer 0.10 up to and including Fedora 20. For the upcoming Fedora 21, we have moved all the Qt/KDE stack to GStreamer 1 though (except for the QtMobility Multimedia Kit, which is not used by KDE applications anymore, only by qutim; we'd take patches to port that to GStreamer 1 too, but so far there aren't any).

One problem we do have is that the relationship between the GNOME and KDE packaging teams is not as good as in openSUSE. Typically, the GNOME maintainers are not very willing to compromise on those dependencies. They think that blocking a new system library until KDE is ready for it amounts to holding innovation hostage, and they absolutely want to ship the latest version of GNOME completely, with all its dependencies. They always argue that they are the default desktop and that Fedora needs to ship the best possible GNOME experience. So we have to work with whatever libraries they ship. :-( In some cases, the changes haven't even been communicated in a timely manner, but rushed in somewhere around the distribution's feature freeze with no advance warning whatsoever, though at least that has improved lately. This attitude is how we ended up with that horrible "hybrid NetworkManager" (0.9 with 0.8 API hacked in) hack in Fedora 15.

By Kevin Kofler at Thu, 08/14/2014 - 13:42

The standard openSUSE policy is indeed to split libraries out into separate packages to allow multiple versions to be installed. However, for KDE we were granted quite a few exceptions to this policy: a lot of libraries are packaged together in our kdelibs, workspace and kdepimlibs packages. Only after KDE moved to git did we have to split libraries from the main package, as this also happened upstream. An example is KDEEDU: libkdeedu was packaged separately, and for such packages we have to follow the policies and include the soname in the package name.

openSUSE 13.1 also shipped with both GStreamer 1.0 and GStreamer 0.10, and it looks like this will still be the case for the upcoming 13.2, as I still see a lot of GTK-based programs using the older GStreamer version. As you also indicated, for KDE we have almost reached the point where it only depends on GStreamer 1.0. However, the recent update of phonon-gstreamer means that for older openSUSE releases we have to stay with the older phonon-gstreamer backend, as the new one no longer supports 0.10.

By Raymond Wooninck at Thu, 08/14/2014 - 15:07

I also think it was a quite unfortunate decision from upstream to drop GStreamer 0.10 support from the latest Phonon-GStreamer. It had actually been present (with #ifdef conditionals) all the time in the development branch, but then commit 719517f4521a875a3f0036b5b271171107961c0c dropped it. (The committer happens to also be a Fedora KDE packager, but he hasn't talked to us about it, at least both Rex Dieter and I were caught by surprise.) That said, the conditionals might actually have been broken already due to bitrot, I haven't actually tried building the snapshots against 0.10. We will also not be able to push the new Phonon-GStreamer to Fedora 20.

By Kevin Kofler at Fri, 08/15/2014 - 17:29

>you do the per-library splits with soname-based naming that Debian [...] In Fedora, we try to avoid shipping multiple versions of the same library whenever possible; instead, we try hard to get everything to build with the same version.

openSUSE, in general, also ships only one library version. The naming, however, enables the coexistence of multiple versions (you usually need to get them from elsewhere, like an old distro version or online repositories with newer versions), should there be some externally-maintained program (like mplayer/ffmpeg) or a developer wanting it. In Fedora, it would be quite impossible right now to install a hypothetical new version of libuuid-devel+libuuid without getting unhappy reports from rpm that this and that depend on the existing one.

By cjk at Fri, 08/15/2014 - 03:41

When I saw your blog post I decided to answer in another one, as I quickly realized it would be a long text. But I think quite a lot has already been said here, so I'll focus on some "details".


  • Thanks *a lot* for bringing this up.
  • Also thanks *a lot* to the other packagers who replied.
  • I mostly concentrate on packaging Qt stuff, and I must admit they are a wonderful upstream with respect to many things, but most especially in taking care not to break API/ABI. Thank you, Qt Project!
  • These are also my personal comments, no "official words" here.

Bundled libraries
As Kevin said, using his text with some Debian modifications: one thing Debian really does not like, by policy, is bundled libraries. Please never bundle libraries, whatever the reason. You are just making extra work for us, because we will have to package the library separately anyway, but now in addition we have to get rid of your bundled copy, or redo the copyright review, even if we don't use it. The worst are patent-encumbered or non-free bundled libraries, because we cannot distribute those at all, so in that case we actually have to unpack your tarball, rm -rf the bundled crap and repack the tarball! That is something we really hate having to do. (For bundled libraries with acceptable licensing and without patent issues, we can rm -rf them at the right step, but for encumbered stuff that is not sufficient.) In short, there's not much that annoys us more than a 3rdparty/ffmpeg directory, please don't do that! And if you can't avoid it (you can, but let's leave it there), give your friendly packager the option to use the system-provided one at build time.
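The rm-rf-and-repack dance described above can be sketched in a few lines of shell; the tarball and directory names below are made up, and the +dfsg suffix is the Debian convention for a stripped tarball:

```shell
# Fake "upstream" tarball with a bundled ffmpeg copy, for illustration only
mkdir -p foo-1.0/3rdparty/ffmpeg
touch foo-1.0/3rdparty/ffmpeg/encumbered.c foo-1.0/main.c
tar czf foo-1.0.tar.gz foo-1.0 && rm -rf foo-1.0

# The actual repack: unpack, delete the encumbered directory,
# repack under a +dfsg name so the stripped tarball is distinguishable
tar xzf foo-1.0.tar.gz
rm -rf foo-1.0/3rdparty/ffmpeg
mv foo-1.0 foo-1.0+dfsg
tar czf foo_1.0+dfsg.orig.tar.gz foo-1.0+dfsg
tar tzf foo_1.0+dfsg.orig.tar.gz   # the ffmpeg directory is gone
```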

OK, here I'll sneak in a major "issue" I have with Qt but not with KDE: the CLA. To the best of my knowledge, I'm not able to forward other people's patches upstream unless the patch is licensed under a permissive enough license, like a BSD one. The real problem here is that people sometimes do not understand that Qt's CLA doesn't require copyright reassignment, so they won't forward the patches themselves :-( So every time I receive a patch that touches the code, I normally need to do a great deal of work to get it upstreamed. Mind you: not technical but social work.

Symbols files
As Ubuntu does, we keep "symbols files", which help a lot in finding API/ABI breakages. They also help us do other wonderful things, like knowing which packages depend on certain symbols if needed (think of private symbols, for example).
In my point of view, the worst problem with them is filtering the changes. As we in Debian have 11 official archs (and arm64 will probably join soon), we also get different symbols for the same API on different archs. pkg-kde-tools' symbolshelper helps us a lot with that, but it's still *very* time consuming.
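For anyone who hasn't seen one, a Debian symbols file is just a text list mapping each exported symbol to the first version that shipped it; the library and the mangled names below are invented for illustration:

```
libfoo.so.5 libfoo5 #MINVER#
 _ZN3Foo3barEv@Base 1.0.0
 _ZN3Foo7setSizeEi@Base 1.2.0
 (optional)_ZN3Foo8internalEv@Base 1.0.0
```

At build time dpkg-gensymbols diffs the freshly built library against this file; a symbol that disappears fails the build, which is how the kind of ABI breakage mentioned in the post gets caught before users ever see a crash.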

Transitions
This is mostly Debian specific (I don't know about Ubuntu's case) and is quite related to the symbols above: if there is an incompatible change, a.k.a. an API/ABI change, we need to rebuild all the stuff that links against the library that changed. This is called a "transition"; it starts when the offending package gets uploaded to unstable and lasts until the package and all its dependencies land in testing. We now have 30000+ source packages in the archive, so transitions happen quite often. This also means they need to be coordinated, or else they might overlap; if they do overlap, they need to get into testing at the same time, and if some package involved in both of them fails to build from source, both transitions are stuck. We have a team called the "Release team" who do a wonderful job coordinating this, but it's still a very interesting process. Main point here: it's definitely not just "push to unstable and be done" when an API/ABI or some other major change happens.

Many supported archs
As I said above, in Debian we have many official archs. This is actually *great*, but it also requires more work from us. No, it's definitely not OK if your code works on little endian but does not work on big endian machines. No, it's not OK if your app uses a Linux-only header/definition/whatever when you could use something else that is just as easy as the original code. Of course, a developer does not need to know every arch quirk: that's where we packagers (normally with a *great* deal of help from "porters", people looking after the packages of a specific arch) need to chime in, provide a patch, build the package again and upload.
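The endianness point is easy to demonstrate even without a big-endian box at hand: od interprets raw bytes as words in host byte order, so the same two bytes come out as a different 16-bit value per arch, 0001 on little endian and 0100 on big endian. Any code that reads raw bytes straight into integers without byte-swapping has the same problem:

```shell
# Read the two bytes 0x01 0x00 back as a single 16-bit word in
# host byte order; the printed value depends on the machine.
printf '\1\0' | od -An -tx2
```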
There is also the build-time factor here. As an example, Qt 5's WebKit used to take ~20 hours to build on mipsel with DWARF debugging symbols. I reduced the build time at the expense of not building debugging symbols at all. Using stabs would have helped, but memory consumption at link time was also overkill, so that was the best solution we could find.

The packager's machines
Of course, of all the above points this is the most personal one. At least for now we Debian packagers mostly build the packages for one arch and push that; the rest are built on what we call "the buildd service". Not so long ago, while discussing something with an upstream (not Qt), he told me "just make -j6, upload and be done". My first reaction was "I *wish* I could do that". My "most capable" system is a dual core with "just" 4 GB of RAM, so I have to resort to swap to be able to build beasts like WebKit. And that's not to mention that this same machine is what earns me my pay check, so I can't do it at just any time; I normally leave it building at night. The point here: build times might not be as fast as you think, even for packagers.

Wrap up
Don't underestimate your downstreamers: we also have to do quite a lot of work to get your code to shine as much as possible. And we *really* want that to happen :)

By Lisandro Damián... at Fri, 08/15/2014 - 02:56

You make a good point there with the Qt CLA. This is indeed a major annoyance for Qt packaging. It is even more complicated for us in Fedora because Red Hat Legal apparently ruled that Red Hat employees cannot sign the CLA in their function as Red Hat employees, but recommends that they just sign it as individuals. But the Red Hat developers outside of Qt/KDE maintainership are not willing to do that. So we have interoperability fixes from the Fedora CUPS maintainer and from a gnome-shell developer working for Red Hat stuck in the queue blocking on this issue.

For the transitions issue, in Fedora, if something fails to build with the updated library, we just leave the dependency broken in Rawhide until it gets fixed. Compared to the past, we now try harder to avoid broken dependencies in Rawhide, but we cannot withhold some transitions forever. If the dependencies don't get fixed as release time approaches, the package with broken dependencies can get axed.

As for builds, I typically don't do any local build, I just send the package straight to Koji (our build system). If it fails, I fix it and send it back to Koji until it builds.

By Kevin Kofler at Fri, 08/15/2014 - 18:03

Have you tried to convince them to use something like a BSD license for their patches? That way you can push a patch upstream yourself, adding Lars Knoll as a reviewer and mentioning the original author and license in the commit message. I pushed some Red Hat employees' aarch64/arm64 patches that way.

It's still social engineering mostly, but it might help.

By Lisandro Damián... at Sat, 08/16/2014 - 01:59