Planet KDE

GSoC Post #7: KCM Access Done!

Thu, 08/06/2015 - 21:22

It’s been a while since my last post, but that doesn’t mean there hasn’t been any progress. Last time, I was working on getting K3b to install codecs. After loads of issues, I finally got it to build properly, but then we noticed that K3b still uses kdelibs4, which means PackageKit-Qt4 would have to be used. Since my machine wasn’t equipped with PackageKit-Qt4 or kdelibs, I decided to move over to an application that uses KF5.

KCM Access

KCM Access has a nice screen reader tab from which you can start the on-screen narration. It has a checkbox for enabling the screen reader, which stays active even when no screen reader is installed. My task here was to add a button that installs Orca if it isn’t installed already, and to disable the checkbox in that case. Again, this button installs Orca via PackageKit-Qt5, using a method similar to the one used previously.

It’s done and under review!

Next Up

As soon as this is reviewed, I’ll move on to the next application, which as of now could be either KCM Locale or K3b.

Quick tip – sending notifications from the desktop to you phone with kdeconnect-cli

Thu, 08/06/2015 - 19:39

This entry is part of my “script addicts” series for i3wm.

Put this in the scripts that create notifications (replace <device-id> with the ID reported by kdeconnect-cli -l):

kdeconnect-cli -d <device-id> --ping-msg "${notificationMsg}"

That’s it! Now the notifications will follow you around whenever you’re not in front of your workstation.
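If several scripts send notifications this way, a tiny wrapper keeps the device ID in one place. Here is a minimal sketch; the device ID `myphone` is just a placeholder (substitute the ID that `kdeconnect-cli -l` prints on your machine), and the wrapper falls back to printing the command when kdeconnect-cli isn’t installed, so it can be dry-run:

```shell
#!/bin/sh
# DEVICE_ID is a placeholder -- substitute the ID printed by
# `kdeconnect-cli -l` on your own machine.
DEVICE_ID="myphone"

# Build the command line as a string; handy for dry runs.
build_ping_cmd() {
    printf 'kdeconnect-cli -d %s --ping-msg "%s"' "$DEVICE_ID" "$1"
}

# Send the notification, or just print the command when
# kdeconnect-cli is not available on this machine.
notify_phone() {
    if command -v kdeconnect-cli >/dev/null 2>&1; then
        kdeconnect-cli -d "$DEVICE_ID" --ping-msg "$1"
    else
        build_ping_cmd "$1"
        echo
    fi
}
```

Then any script can call `notify_phone "backup finished"` instead of repeating the full command line.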

Goodbye Akademy 2015, See You Randa 2015

Thu, 08/06/2015 - 13:53

OK, Akademy 2015 ended last week. This is my second Akademy, though the first full one.

A Coruña sits on the Atlantic coast, and on my way there I ran into rainy spells. The rain felt familiar after several years of living in northern France (Paris and the Pays de la Loire), but this time I actually found it quite enjoyable, knowing that I had left behind a 39°C Lyon. So yes, A Coruña is milder than other parts of Spain. I shared my car with Sandro Knauß, who came to Lyon from Germany by train, so the full-day trip was quite nice and KDE-hacking oriented. But rest assured, we also managed to talk about lots of other topics.

The venue and the hosting in Rialta were just perfect. The local team did an awesome job organizing the event. They had it all: a welcome party (we arrived just in time for the Queimada), sponsored food during the weekend (thanks, Blue Systems), essential goodies (they even carried the VIM T-shirt), a “social event” that was really a party where I had an excellent time, the day trip, and schedules that weren’t difficult to follow. Rialta has a free swimming pool, and I actually managed to use it.

Akademy is about KDE technology, but also about meeting like-minded people. Getting along is really easy once the language barrier is set aside, and I really enjoyed just sitting there hacking with others, then having a beer and discussing technical issues or ideas. I already miss those spontaneous late-evening hacking moments.

Speaking of KDE technology, we are at a turning point, with Plasma Mobile becoming available. KDE is now ready to take on mobile platforms, and that’s pretty cool. I look forward to the moment when I’ll have a Linux smartphone running both KDE software and Android applications (with Shashlik, of course). I’ll do my best to help, and I already plan to support KSecrets Service on mobile.

KSecrets Service had its own BoF. The updated slides are here. I’m working on implementing it right now, and that brings us to Randa, where I intend to continue even further; hopefully I’ll even have a working version by that time.

Randa is a great location for hacking. In fact, no, not Randa, but the venue in Randa is quite perfect for that :-) They have that big room under the roof, upstairs, where I look forward to hacking between BoFs and Swiss meals. Some people who couldn’t make it to Akademy will go to Randa, so I look forward to meeting them there.

Last but not least, I’d like to thank KDE e.V. and the sponsors for organizing these events and for providing travel reimbursement.

Announcing the draft Federated Cloud Sharing API

Thu, 08/06/2015 - 13:04

Here you go: link

I believe that federation of cloud services is the next important step for truly secure and flexible file sync and share cloud software. Because of that, we have been working on the needed technologies for a while and now have the first draft of an open specification ready.
The goal of the Federated Cloud Sharing API is to be a common language for sharing files across different file sharing server implementations. That only works if a wide audience provides input, which is why we started the Open Cloud Mesh initiative and have already been discussing earlier drafts with partners and other open source projects.

The document describes an API which consists of two parts, WebDAV for file transfer and a simple REST based API to initiate sharing and exchange metadata. It is a very simple and pragmatic model, re-using as much existing technology as possible to ease implementation and migration. Rather than re-thinking the entire infrastructure of the web to enable federation, it combines the existing model of email (username @ server) and file transport protocols.
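As a purely illustrative sketch of the model described above (the endpoint path and JSON field names below are placeholders of my own, not taken from the draft — the linked specification defines the real wire format), announcing a share boils down to one REST call to the server half of the recipient’s email-style cloud ID, after which the file itself travels over WebDAV:

```shell
#!/bin/sh
# Illustration only: the endpoint and field names are invented
# placeholders; consult the draft specification for the real ones.
RECIPIENT="alice@cloud.example.org"

# Split the email-style federated cloud ID into user and server.
USER_PART="${RECIPIENT%@*}"
SERVER_PART="${RECIPIENT#*@}"

# Metadata announcing the share; the recipient's server would then
# fetch the file itself over WebDAV.
PAYLOAD=$(printf '{"shareWith":"%s","name":"%s"}' "$RECIPIENT" "report.odt")

# The announcement would be a single POST, along the lines of:
#   curl -X POST "https://$SERVER_PART/ocm/shares" -d "$PAYLOAD"
echo "$PAYLOAD"
```

The point is the simplicity: the only pieces needed are the existing email-style addressing scheme, one metadata call, and WebDAV for the transfer itself.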

For more information and background on the architecture, read my earlier blog post.

For more general information on federation, see my preceding post about Next Generation File Sync and Share technology.

This draft is more than theory: it is already implemented in ownCloud, introduced as ‘server to server sharing’ with ownCloud 7 and matured to its current state, which you can try out in ownCloud 8.1. See the video below for Federated Cloud Sharing in action.

GSoC 2015 Week #7-10 with Amarok

Thu, 08/06/2015 - 11:10
I haven't posted here for quite some time and a lot has happened over the last few weeks. Blame my habit of procrastination for not posting more frequently ;)

  • Ported from KAction to QAction, KMenu to QMenu with the help of the porting scripts.
  • Added KF5::GlobalAccel, KF5::KIO components, Qt5::Sql, Qt5::Quick, Qt5::ScriptTools, KF5::PlasmaQuick, KF5::NotifyConfig and KF5::Archive components.
  • KGlobal::mainComponent().aboutData() is replaced with KAboutData::applicationData() which contains information such as authors, license, etc.
  • KGlobalAccel::setGlobalShortcut is used instead of setGlobalShortcut to set global shortcuts.
  • QApplication::type() no longer exists, hence the qApp macro is used with appropriate casting in src/PluginManager.cpp and other files.
  • In TrayIcon.cpp, a QMap has been created that maps each QString to its corresponding QAction (a job previously done by KActionCollection), so the calls to actionCollection()->action() are replaced by calls to a function defined in the class itself.
  • KGlobalSettings::CompletionPopup is replaced with KCompletion::CompletionPopup.
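The porting scripts mentioned above automate the mechanical part of such renames. As a very simplified stand-in (the real scripts in the kde-dev-scripts repository handle far more classes and corner cases than a blind substitution), the KAction/KMenu renames boil down to rewrites like:

```shell
#!/bin/sh
# Simplified stand-in for the KF5 porting scripts: perform the
# mechanical KAction -> QAction and KMenu -> QMenu renames,
# covering both the class names and the #include lines.
# The real scripts cover many more classes and corner cases.
port_kf5_renames() {
    sed -e 's/KAction/QAction/g' \
        -e 's/KMenu/QMenu/g' "$1"
}
```

Running `port_kf5_renames src/foo.cpp > src/foo.cpp.new` and reviewing the diff is the safe way to apply such rewrites before committing them.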

Now, after all this, I was getting linking errors while linking amaroklib, so I decided it was high time to port the code in src/context. During that port, I realized it was going to take a LOT of time to complete. We may have to move to QML, as Plasma 5 only supports QML for widgets, so Mark suggested leaving it aside for the moment. To get around it, I tried to disable the compilation of src/context and the files that depend on it. Well, I should have guessed that it wouldn’t work (the effect was similar to falling dominoes: for each file I disabled, I had to disable one or more files that depended on it), so in the end I commented out the code that was causing the linking errors. I have marked each commented-out piece of code appropriately (with "FIXME: disabled temporarily for KF5 porting") so that it can be re-enabled later and we won’t have to deal with obscure bugs due to its absence.

Apart from this, I had to remove the second argument of the KPluginInfo constructor (KPluginInfo(const QString &filename, const char *resource = 0) has changed to KPluginInfo(const QString &filename)). I am still unsure whether this change is correct.

Sadly, over the last week I messed up my system and had to repeat quite a bit of the work, but that didn’t take much of my time. The port seems to be proceeding nicely, and I am now reading about QNetworkAccessManager in order to replace QHttp, which is no longer present in the Qt5 API.

Cheers !!!

Evolving KDE – framework and next steps

Thu, 08/06/2015 - 08:20

One of the outcomes of the survey we did for Evolving KDE was that we need to get more clarity on our vision, strategy and focus. At Akademy we had many discussions to explore more how we all see this topic. We discussed what different contributors think KDE’s vision and focus should be. We tried to clarify what it actually means for KDE to have a vision, strategy and focus. And we talked about ways to get to a vision that would work for KDE.

Here’s a visualization of how I see the different parts fitting together:
Evolving KDE overview

Some time ago we created KDE’s manifesto. It answers the question “Who are we?”. What we need to work out now is our vision. It will answer the question “What do we aspire to do?”. Different teams inside KDE can then also define their own local visions, which can overlap more or less with the global one. The manifesto and the vision together will give us a framework in which we can develop our strategy. The strategy answers the question of how we want to go about achieving our vision. From the strategy we can derive a number of concrete actions that will get us closer to where we want to be.

The goal of all this is to give us clarity on who we are, what we do, why we do it and what everyone’s part in the big picture is.

Over the next weeks we will summarize the different ideas and thoughts concerning the vision that were brought up at Akademy and open it up for wider input and discussion. We will also hold another office hour on IRC. Depending on how all this goes we will have a sprint to work on it more.

Akademy 2015: An awesome experience.

Wed, 08/05/2015 - 23:22

So this was my very first Akademy, and I had been excited about attending ever since I started contributing to KDE a couple of years back. It feels great to have finally made it. I had some visa problems at the New Delhi airport, because of which I reached A Coruña quite late and missed the entire first day of the conference. Still, I’m glad I could at least reach Rialta by sunset that day and attend all the remaining days of Akademy.

Day 2 began with Lydia Pintscher’s keynote on “Evolving KDE”, which discussed how KDE began, where it is going, and how it needs to change, among many other things. It was really interesting and very motivating; the best way the day could have started. It was followed by an array of other talks. One I loved a lot was given by Andrew Lake, who spoke about visions and how important they are for any product’s growth; his examples of “this is how *not* to write a vision” visions were hilarious. Overall it was very interesting, fun to attend, and of course very informative. Vishesh Handa then spoke about file searching in Baloo, comparing how file searching and indexing happen on various other platforms with the more efficient way KDE does it. After the first half of the talks there was the group photo (glad I didn’t miss this :D ).


In the second half of the day I gave a very brief lightning talk about my previous year’s GSoC project on implementing interactive tours in Marble. It finished rather quickly, and I still regret not extending it with a few demos by creating some tours on the spot. Still, I hope people didn’t find it too boring. What felt very good, though, was when Valentin Rusu came to me later that day to tell me about flightradar24, a website that shows real flights taking place right now all across the globe. Although these are real-time flights and not like the tours in Marble (Marble tours are only virtual and don’t deal with actual real-time airplane flights), the link he sent me is pretty awesome and you should check it out here:

Apart from all this, there was a question-and-answer session with the KDE e.V., where various topics were raised, many interesting questions asked and answered, and many secrets revealed, including the “secret handshake” ( :P ). It was totally awesome. Finally the day ended with the Akademy awards, where two of my favourite KDE applications, KDevelop and KDE Connect, received awards they totally deserved, so I was very happy about it. The organizers were given a big thanks as well, along with all the sponsors, and Frederik Gladhorn from Qt said something interesting: “Be a good coder. But you need to be a good person before that”, which was awesome to hear!

So, after the first two days of talks, the BoFs started on day 3, and they were great. Lydia’s BoF on “Evolving KDE” was very interesting to attend: for a couple of hours we discussed key points about what KDE actually is and how it should be defined, given the many different kinds of projects it supports. It gave us a well-detailed insight into the direction KDE is heading as a whole, and it ended with a decision to come up with a well-designed vision for KDE, among other things, over the next few days. It was good listening to everything that was discussed in this BoF.

The next day there was a party on the terrace of Espacio Coruña, which was awesome. I took the opportunity to meet a lot of people I hadn’t interacted with in person before. I had really wanted to meet Lydia Pintscher for a long time, and I finally did; it was great to talk and discuss stuff with her in person, exchanging ideas, opinions, and whatnot. I caught up with Aleix Pol as well, and the conversations were enlightening indeed. David Edmundson is a fun guy to hang out with, and so are Andreas, Vishesh, Pinak, Devaja and everyone else. Akademy is totally awesome mainly because of this: it’s such a great way of letting you have fun conversations in person with so many people you had only conversed with online before. The food was great, along with the beer and wine, and there were even some freestyle dance steps performed towards the end. A totally awesome evening it was.

On one of the following days there was a tour of the aquarium as well. Viewing the shark from so close up was an out-of-this-world experience. After that came a very long and tiring climb to the top of the Tower of Hercules, from which the view was magnificent; it was beautiful to see almost the whole city from up there.

I would like to thank KDE e.V. a ton for sponsoring my trip to this year’s Akademy, without which I would have missed out on such a wonderful experience. Thanks to Jose Milan for organizing such an amazing event, and to all the other organizers who played a part in it. It was so well planned; I met a lot of people, explored a bit of Spain, saw a live shark, and so much more. I would love to attend the Akademys held in the years to come. Thanks again for giving me such an awesome experience to keep in my memories forever! Until next time, ciao! :)


A Frank Look at Simon: Where To Go From Here

Wed, 08/05/2015 - 20:56

As I had previously announced, I am resigning my active positions in Simon and KDE Speech.

As part of me handing over the project to an eventual successor, I had announced a day-long workshop on speech recognition basics for anyone who’s interested. Mario Fux of Randa fame took me up on that offer. In a long and intense Jitsi meeting we discussed basic theory, went through all the processes involved in creating and adapting language- and acoustic models and looked at the Simon codebase. But maybe most importantly of all, we talked about what I also want to outline in this blog post: What Simon is, what it could and should be, and how to get there.


As some of you may know, Simon started as a speech control solution for people with speech impediments. This use case required immense flexibility when it comes to speech model processing. But flexibility doesn’t come cheap – it eliminates many simplifying assumptions. This increases the complexity of the codebase substantially, makes adding new features more difficult and ultimately also leads to a more confusing user interface.
Just to give you an example of how deep this problem runs, I remember fretting over what to call the smallest lexical element of the recognition vocabulary. Can we even call it a “word”? It may actually not be in Simon – this is up to the user’s configuration.

Now, everyone reading this would be forgiven for asking why, almost 9 years in, Simon hasn’t simply been streamlined yet. The answer is that removing what makes Simon difficult to understand and difficult to maintain would necessarily also mean removing most of what makes Simon a great tool for what it was engineered to be: an extremely flexible speech control platform, allowing even people with speech impediments to control all kinds of systems in (almost) all kinds of environments.


Over the years, it became clear that Simon’s core functionality could also be useful to a much wider audience.
Eventually, this led to the decision to shoehorn a “simple” speech recognition system for end users into the Simon concept.

The logic was simple: Putting both use cases in the same application allowed for easier branding, and additional developers, who would hopefully be attracted by the prospect of working on a speech recognition system for a larger audience, would automatically improve the system for both use-cases. Moreover, the shared codebase could ease maintenance and further development.

In hindsight, however, this was a mistake.

Ostensibly making Simon easier to use meant that all the complexity, which was purposefully still there to support the core use case, needed to be wrapped in another layer of “simplification”, which in practice only further complicated the codebase. For the end-user, this was problematic as well, as the low-level settings were basically only hidden under a thin veil of convenience over a power-user system.


In my opinion, it’s time to treat Simon’s two personalities as two separate projects that simply share common libraries for common tasks.

Simon itself should stay the tool for power users, allowing them to fiddle with vocabulary transcription, grammar rules and adaptation configurations. It’s really quite good at this, and there is a genuine need, as Simon’s adoption shows.
The new project should be a straightforward, dictation-enabled command-and-control system for the Plasma desktop: Plasma’s answer to the speech recognition built into Windows and OS X, so to say. This project’s task would be vastly simpler than Simon’s, which allows for a substantially leaner codebase. Let’s look at a small list of simplifying assumptions that could never hold in Simon, but which would be appropriate for this new project:

  • As the system will be dictation enabled, it will necessarily only work for languages where a dictation-capable acoustic model already exists. Therefore, the capability to create acoustic models from scratch is not required.
  • As dictation capable speech models would need to be built anyway, a common model architecture can be enforced, removing the need to support HTK / Julius.
  • As generic speech models (base models) will be used, the pronunciations of words can be assumed to be known (for example, following the “rules” for “US English”). Therefore, users would not need to transcribe their words, as this can be done automatically through grapheme to phoneme conversion (the g2p model would be part of the speech model distribution). This, together with the switch from Grammars to N-Grams would eliminate the need for what were the entire “Vocabulary” and “Grammar” sections in the Simon UI.

But talk is cheap. Let’s look at a prototype. Let’s look at Lera.

Lera's Main UI

Lera’s main user interface is a simple app indicator that stays out of the way. Clicking on it opens the configuration dialog.

Lera's Configuration

Lera’s configuration dialog (a mockup, non-functional) is an exercise in austerity. A drop-down lets the user choose the speech model to use, which should default to the system’s language if a matching speech model is available. A list of scenarios, which should be auto-enabled based on installed applications, shows what can be controlled and how. The user should be able to improve performance by going through training (in the second tab) and to configure when Lera should be listening (in the third tab).

Here’s the best part: Lera is a working prototype. Only the core functionality, the actual decoding, is implemented, but it works out of the box, powered by an improved version of the speech model I presented on this blog in 2013, enabling continuous “dictation” in English (the model is available in Lera’s git repository; so far, the only output produced is a small popup showing the recognition result).

Lera in Action

I implemented this prototype mostly to show off what I think the future of open-source speech recognition should look like, and how you could get started to get there. Lera’s whole codebase has 1099 lines, 821 of which are responsible for recording audio. The actual integration of the SPHINX speech recognizer is only a handful of lines. The model too, is built with absolute simplicity in mind. There’s no “secret sauce”, just a basic SPHINX acoustic model, built from open corpora (see the readme in the model folder).


If anything, Lera is a starting point. The next steps would be to move Simon’s “eventsimulation” library into a separate framework, to be shared between Lera and Simon. Lera could then use this to type out the recognition results (see Simon’s Dictation plugin). Then, I would suggest porting a simplified notion of “Scenarios” to Lera, which should only really contain a set of commands, and maybe context information (vocabulary and “grammar” can be synthesized automatically from the command triggers). The implementation of training (acoustic model adaption) would then complete a very sensible, very usable version 1.0.

Sadly, as I mentioned before, I will not be able to work on this any longer. I do, however, consider open-source speech recognition to be an important project, and would love to see it continued. If Lera kindled your interest, feel free to clone the repo and hack on it a little. It’s fun. I promise.

Fiber Update; WebEngine vs CEF

Wed, 08/05/2015 - 17:43

Fiber has seen some active development, and over the course of a long 3-day weekend full of hacking I’m glad to say that exactly 0 progress has been made! Of course that would be a bit of a fib, I’ve spent the weekend re-factoring all of the profiles code and organising the codebase structure.

I also spent a good chunk of my time reading Qt and KDE coding guidelines and documentation on how files and classes should be structured, and then I applied that information to Fiber. The result now is well commented code, and consistent naming conventions in-line with other Qt/KDE projects.

But re-factoring code isn’t what I’m really interested in talking about…

WebEngine vs CEF

When I started Fiber I worked under the assumption that WebEngine would be the engine for this browser; it’s an official Qt extension, is actively developed, and isn’t going anywhere. After Fiber came into the light, I had comments and emails pointing me to CEF, the “Chromium Embedded Framework”, as an alternative to WebEngine.

After doing research it’s severely divided my thoughts on what to use.

What is it?

CEF started as a Chromium-based project meant to provide a stable API on top of the rapidly changing engine, something non-Qt applications could use as easily and reliably as Qt applications use WebView. While it started off as just one implementation, CEF’s API is well-defined and stable enough that it has turned into a sort of pseudo-standard. Servo, Mozilla’s new wonder engine, is actually building itself to be a native CEF implementation, meaning that a future Firefox will effectively be a CEF-based browser.

CEF, despite not being so well known, is actually used by some very high-profile companies, which lends credence to the longevity of the project. Adobe, Valve, Blizzard, Amazon, and other big names have thrown their chips behind CEF. Valve in particular bases its Steam client on it.

Pros and Cons

Cons

Not everything is rosy and bright in the world of CEF; there are always downsides. The first and biggest downside is the fact that CEF doesn’t have a Qt library. The Qt guys didn’t decide on this arbitrarily as they have a different goal for the WebEngine API. At minimum CEF means more complicated work than using an established Qt API.

CEF and having multiple engine options also means that we may see two entirely different sets of bugs coming in, depending on whether or not a person is running Fiber-Chromium or Fiber-Servo in the future. This doesn’t even include potential future CEF implementations; who knows what might show up in 5 years.

I would also like Fiber to be extremely portable, which makes CEF more of a concern; WebEngine currently supports mobile, but CEF will only have Android support ‘in the future’. Since Plasma Mobile includes a more malleable stack I have less doubts that Fiber will run fine on that, but I would like to see Fiber eventually run on Android.

Finally, CEF will add a lot of weight to the browser as an external dependency, to the tune of at least 40MB at minimum. This is more due to WebEngine being part of Qt and already being on the system – but CEF isn’t, and so the rendering engine is a separate binary to distribute. If a distro ever eyes Fiber as a default browser it means there’s over 40 extra reasons to consider a slimmer browser which makes use of more common libraries. Granted, just about every major browser is packing pretty big binaries anyway – but it’s still wasted space.


Pros

One thing that’s fairly well known is that WebEngine doesn’t have a particularly deep API (yet). For most applications this is fine, as the utility is just used to display some content and hook a few things in so the app can talk to the page efficiently — fine for the use case that Qt envisions. For a full-on browser, though, WebEngine lacks a lot of important interaction points. Although I doubt Fiber will initially make meaningful use of deeper integration, as time goes on that’s a serious advantage to give up, especially since I don’t know the roadmap for WebEngine and what they plan to add.

WebEngine and WebView also have bad security reputations. I don’t know the specific issues; I just know they are prominent enough that Fedora chose not to ship it. CEF doesn’t seem to have this perception, as far as I know. That said, I’m not a guru-level C++ programmer, so I’m under no illusion that I won’t introduce my own security shortcomings; at least I won’t have to worry about breaking downstream applications in the quest to fix those issues.

There’s concern in the web development community of a WebKit/Blink monoculture. Outside of Gecko, there’s no longer any rendering engine variety for the Linux community. While I doubt Blink will ever “slack off”, the fact is Blink has undue weight over the web because of its sheer dominance. With more variety it means Blink has to keep in-line with the wider browser community rather than driving it. Gecko, Trident, Edge, and Servo all push Webkit/Blink harder in the same way many desktop environments push KDE to be better.

But the absolute biggest advantage, in my opinion, is that CEF will offer Servo in the future as well as Blink. It means we will be highly portable in our choice of rendering engines, able to swap them out quickly. If we tightly integrate with WebEngine, Fiber won’t have that mobility in the future, but through CEF, if one of the engines gains a significant technical lead, we can change engines easily.

The Poll

We have three options for Fiber’s engine of choice, and I’d like the people who may ultimately use the browser to decide, as I’m truly torn!

  1. Stick with WebEngine. It’s simple, easy, fast, supported. Fiber already has WebEngine running.
  2. Start with WebEngine and just get the browser up and running, then make the transition to CEF later. It would be fast at first, but it will complicate things later on when we make the transition, especially if there are several extension APIs connected with WebEngine.
  3. Write Fiber to use CEF natively. This will probably result in a more performant browser, and will allow deep integration; though it would still be slower to develop than just using WebEngine.

Finally, if you have comments, please post them below and elaborate on what drove your decision.

Report from Akademy 2015

Wed, 08/05/2015 - 15:59

A week has passed since I got back from Akademy, so it’s high time for a little report.

I’ve enjoyed a lot meeting old and new friends from KDE. Lot of good times shared :)

(Photo by Alex Merry; you can find lots of other cool photos at this link.)

This year I gave a quick little talk presenting the results of my work on GCompris. You can find it, along with all the other recorded talks, on this page, if you haven’t watched them already.


Also I could discuss some ideas for next things to come, so stay tuned ;)

Thanks a lot to KDE e.V. for the support; that was another awesome experience.

Endocode is hiring: Linux and Systemd Engineer

Wed, 08/05/2015 - 13:09

Endocode is looking to add skilled engineers to its existing team of Linux and systemd experts. We want engineers who are excited to contribute to projects that form the basis of modern Linux systems and have the experience and skills to do so.

Our engineers work at the cutting edge of Linux kernel development. Kernel features like cgroups and namespaces introduce exciting new capabilities like containers and lightweight Linux distros ideal for clustered environments, and these are areas we focus heavily on.

Another technology that Endocode focuses on is systemd, which makes use of many features unique to the Linux kernel, often driving the development of new kernel features or improvements to existing ones. Its adoption has accelerated rapidly over the past couple of years, and this has driven increased demand for systemd expertise, a demand that Endocode is well positioned to meet. We work closely with upstream developers to make sure that we can provide the best support possible for our clients and improve systemd for everyone.

Considering all this, an ideal candidate would be someone who describes themselves as comfortable in both user and kernel space.

You’ll be joining a team of experienced, motivated engineers and have the chance to work with and/or on open source software on a daily basis. You’ll have the chance to do this in Berlin, a city with a vibrant technology scene, excellent nightlife, and ideal conditions for families.

Deadline for applications:

28th Aug 2015


Akademy 2015 – an unforgettable experience

Wed, 08/05/2015 - 07:04

So, I’ve been to my first Akademy.

I had no idea what to expect, but the experience was unforgettable. I finally had the opportunity to meet part of the KDE community face to face. I’m sad that not all contributors could come, so we could see for ourselves just how many we are.

Akademy 2015 group photo

It is my understanding that the schedule of every Akademy is like this:

  • 2 days of talks, where the current state and future plans of various projects are presented.
  • 5 days of BoF meetings and workshops.

At the talks, Plasma Mobile was at the centre of attention, but many other projects had plenty to show off. The sessions were recorded and are available here, split by day.

During the BoFs, but also outside of them, I had the opportunity to meet and work side by side with some of the core KDE developers, during which I updated my KDE ToDo list.

My first BoF was with the translations team (i18n/l10n), during which we talked about the prospect of introducing Pootle or another web application into our translation process.

Over those days I participated in many BoFs, where we discussed and planned various things about the future of some KDE projects.

After all that, here’s my ToDo list:

  • Test Pootle with SVN
  • Finish translating GCompris in Romanian (text and audio)
  • Work with Aleix Pol on a KDevelop plugin for the development of Android applications in QML
  • Test and fix the KDevelop plugin for custom Makefiles (with emphasis on the custom include paths part)

I hope to finish these soon, so I can take on new challenges.

This year I went to Akademy together with my brother, George Raoul BOGDAN, who is also a KDE contributor. During this Akademy he translated over 600 paragraphs for GCompris into Romanian, which will be available in the near future.

Thank you to the Ubuntu Community for the funds that made this participation possible.

I love Akademy 2015

Zanshin 0.2.2 transition release

Tue, 08/04/2015 - 22:58

Three years, five months and eleven days... yes, it's the elapsed time since our last release announcement. But don't despair! We're still alive and kicking.

We've been busy working on our next release which is much more ambitious than the previous one. As part of this future release, we had to adjust a bit how we store some information. That is why today we are announcing a transitional minor release.

Behold Zanshin 0.2.2!

From the user’s point of view you should not see any difference from the previous bugfix release, but Zanshin 0.2.2 is here to help the data transition toward the future Zanshin 0.3.

It is especially interesting for users who want to test the bleeding-edge, upcoming version as it is being prepared. By using Zanshin 0.2.2 they will get forward and backward compatibility for their data, allowing them to easily switch between versions.

As usual, grab it while it is fresh, it is available on a wide range of distros. Note however that most distributions will need a bit of time to catch up, so they might provide 0.2.1 for a little while longer.

Fruits of Akademy

Tue, 08/04/2015 - 20:40

For the second time I had the chance to attend Akademy, this time in cold and rainy La Coruña. It has been a week of interesting talks, good food (except for one Tortilla incident), and hacking.

My personal highlight was obviously the introduction of Plasma Mobile on Saturday, an event that brought a tear of joy to more than one eye. Later that day I gave a talk titled “Qt’s Road to Mobile Domination”, looking back on what has improved with Qt on mobile platforms as well as what we can expect in upcoming releases.

One interesting BoF I attended was about LiMux, where two people from Munich shared their experiences with the Plasma desktop in an enterprise environment and what we could do to ease their eventual transition to Plasma 5 in a few years. A notable topic was Kiosk support, i.e. locking down the UI so the user can’t mess it up; that very day, I wrote a proof of concept for making Plasma config dialogs smarter by automatically disabling elements whose backing config is marked immutable.
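For readers unfamiliar with the Kiosk framework: KConfig marks whole groups or individual entries as immutable with a `[$i]` marker in the rc file, and applications can query that flag at runtime. Here is a minimal illustrative sketch; the file name, group names, and entry values are invented for the example:

```ini
# Hypothetical excerpt from a KConfig rc file, e.g. ~/.config/exampleapprc
# The [$i] marker locks a single entry against user changes:
[Wallpaper]
Image[$i]=/usr/share/wallpapers/corporate.png

# Appending [$i] to the group header locks the entire group:
[ScreenLocking][$i]
Timeout=5
```

A Kiosk-aware config dialog can then check the flag (for example via something like `KConfigGroup::isEntryImmutable()` or `isImmutable()`) and call `setEnabled(false)` on the corresponding widget instead of letting the user edit a setting that will never be saved.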

KRunner History is Back

Supposedly this was one of the reasons I still saw quite a few people running Plasma 4 during the conference, but now there’s no more reason not to make the switch! ;)

Both the history dropdown … as well as auto-completion as-you-type … are back

Press the down arrow key in an empty KRunner to show the recent entries. They’re not shown automatically for obvious reasons ;) After invoking a search result, its query moves to the top, so repeatedly used commands never fall out of the list – there’s a clear-history button in the settings.

Improved Calendar Navigation

The fancy zooming calendar overview that I wrote back in February has finally landed and will be part of Plasma 5.4; but it’s even cooler: it now supports pinch-to-zoom if you happen to have a touch screen (or you’re on the phone).
Pinch to Calendar

On Tuesday, together with the Visual Design Group, we had a three-hour talk-through of that Phoronix article™ where we discussed every single issue mentioned. Seriously. We do care. Even about the little things.

Session Switcher “text field” … fixed ;)

Additionally, I did a bit of cleanup in PowerDevil, so it no longer pesters you with a notification that your battery is supposedly broken; that message has been moved into the battery monitor. Also, it no longer messes with your screen brightness by default when plugging the AC adaptor in or out.

It has been an amazing week with many great people that just passed by way too quickly. Looking forward to Akademy 2016!

All the yummies at Akademy 2015

Tue, 08/04/2015 - 18:53

A Coruña is a beautiful place to host a get together with a bunch of really inspiring, motivated, friendly people. Even better when those folks are from the KDE community.

This was my second Akademy and I had a fantastic time. The talks were, hands down, awesome. Now, I’m one of those people who yells at movie trailers that give away too much of the story, so I actively avoid trailers for movies I already know I want to see. I delight in being surprised. Apparently I was the only person completely in the dark about the Plasma Mobile effort, and what a delight it was to be surprised! Another highlight for me was WikiFM/WikiToLearn. Holy crap, what an excellent project!

On the Visual Design Group side, it was quite enjoyable seeing several projects work through defining a vision. Few things provide a better foundation for great UI design, among other things, than a well-prepared project vision. Thomas also led a great discussion on the recent Phoronix article describing the experience of switching to GNOME for a week and then back to KDE. That, plus lots of little impromptu sessions here and there.

Lydia's community keynote and the Evolve KDE BoF were also very enlightening. My primary take-away is how thoughtful the KDE community is about where it has been, where it is, and where it wants to go. It is never easy for anyone to look in the mirror to find opportunities to become better; it necessarily means identifying aspects of yourself that make you not quite what you aspire to be. I was inspired by the willingness of folks in the community and at Akademy to do just that: a desire to work together to make our community an even more incredible place to produce fantastic, enabling free software for everyone. No peanut-gallery nay-saying. No insufferable pessimism. Just a take-off-the-gloves, step-down-from-the-pulpit willingness to learn together and work together toward an ambitious future. I. Freaking. Love it.

Till next year!!!

First days at Red Hat

Tue, 08/04/2015 - 14:59

As I mentioned in my last post, I left my previous employer after quite some years; since July 1st I have been working for Red Hat.

So, it’s been one month since I joined Red Hat, and it has been quite an experience so far. Keeping in mind where I come from – an infrastructure-focused company of a couple dozen people – Red Hat is something entirely different. They are huge. Like, *really* big. And that shows everywhere: organization, processes, structure, reach, customers, employees, possibilities, etc. Also, these days Red Hat is much more than just Linux: other huge chunks of Red Hat are Middleware, there are several virtualization products, they are serious about software-defined storage, and they indeed have a very specific idea of what Cloud means and how to do it – all backed by products which are in turn backed by pretty vivid community projects (with colorful names such as Drools, Byteman and CapeDwarf).

All in all, it’s a lot to learn – and as usual I will use the blog to try to digest everything. Most likely this will focus on technologies I don’t yet have a clue about – like the aforementioned drooling midgets. But I might also reiterate everything else I have to know in my own words to learn it better: the subscription model, product variations, all the shiny stuff you print glossy papers about but have to explain anyway.

It might not be the most interesting for others – but it is vital for me. And I’m actually looking forward to learning, well, really a lot in a short time :)

Filed under: Business, Fedora, Linux, Politics, Technology, Thoughts

Akademy 2015 – Phones, CI, and Kubuntu

Tue, 08/04/2015 - 12:39

Last week KDE’s annual world summit, Akademy, happened. And how exciting it was.

Akademy always starts off with two days of ever so exciting talks on a number of engaging subjects. But this year particularly interesting things happened courtesy of Blue Systems.

First Plasma Mobile took the stage with a working prototype running on the Nexus 5 using KWin as Wayland compositor. This is particularly enjoyable as working on the prototype, currently built on Kubuntu, made me remember the Kubuntu phone and tablet ports we did some 4 years ago.

Plasma Mobile was followed by a presentation on Shashlik, technology meant to enable running Android applications on Linux systems that aren’t Android. So I can finally run Candy Crush on my desktop. Huzzah!

Rohan Garg and I also talked for a bit about our efforts to bring continuous integration and delivery to Kubuntu and Debian, to integrate our packaging against KDE’s git repositories and, as a byproduct, offer daily new binaries of most software produced by KDE.

After a weekend of thrilling talks, Akademy tends to continue with a week of discussion and hacking in Birds of a Feather sessions.

Ever since the Ubuntu Developer Summits were discontinued, it has been common practice for the Kubuntu team to hold a Kubuntu Day at Akademy instead, to discuss long-term targets and get KDE contributors’ thoughts and input. Real-life meetings are so very important to a community: not just because discussing face to face is easier, making everyone more efficient and reducing the chance of people misunderstanding one another and getting frustrated, but also because of the social interaction they allow, which is an important aspect of community building. There is something uniquely family-like about sharing a drink or having a team dinner.

A great many things were discussed pertaining to Kubuntu, ranging from Canonical’s IP rights policy and how it endangers what we try to achieve, to websites, support, and the ever so scary GCC 5 transition that José Manuel Santamaría put a great deal of effort into making as smooth as possible.

In the AppStream/Muon BoF session Matthias Klumpp gave us an overview on how AppStream works and we discussed ways to unblock its adoption in Kubuntu to replace the currently used app-install-data.

Muon, the previously Kubuntu-specific software manager that is now part of the Plasma Workspace experience, is getting further disentangled from Debian-specific systems, as the package manager UI is going to move to a separate Git repository. Tighter integration into the overall workspace and design of Plasma is also the goal for future development.

As always Akademy was quite the riot. I’d like to thank the Akademy team for organizing this great event and the Ubuntu community for sponsoring my attendance.


❤ KDE ❤

Randa Meetings 2015 – The countdown begins

Tue, 08/04/2015 - 12:25

In a bit more than a month, 50+ KDE contributors will arrive in Randa, Switzerland and work again on different topics for a whole week. The motto this year is:

Bring Touch to KDE!

From September 6th to September 13th, developers, artists, documentation writers, translators and other contributors will meet in Randa and continue their work on great KDE software that’s free for anyone to use, modify and distribute. We will welcome people from Africa, America, Asia and Europe (missing Australia and Antarctica, there will be plenty of penguins though).

Six groups are going to bring KDE’s great software to touch and mobile devices:

  • digiKam – Manage your photographs like a professional with the power of Open Source
  • KDE Connect – coordinate communication between all your devices
  • KDE Multimedia – Audio, Movie, Music, Sound and Video
  • KDE PIM – personal information management including email, address books, calendars, tasks, news feeds and more
  • QMLweb – Bring QML to the web
  • Touch and Mobile – the biggest group, working on the main topic

It will be an intense, creative and productive week, and anyone is welcome to visit. There won’t be a designated Open Day, but tell us that you want to visit and we’ll have interesting things for you to see.


For this week to happen, we need more than the motivation of Free Software contributors and local helpers. Travel costs, accommodation and other expenses need to be paid. Thanks to your financial support in previous years, we have held five successful Randa Meetings, creating exceptional value for users. With your help, the sixth will be the best ever, and we will be able to plan more events in the future.

Don’t hesitate, make a donation, spread the word, ask questions or make recommendations, take a look at the list of participants or read the experience of a developer last year.


GCompris at Akademy 2015

Mon, 08/03/2015 - 22:27

Akademy 2015 was an awesome time for GCompris. First, my wife Zohra and I decided to hold a little booth to showcase how well GCompris runs on different platforms. We had a GNU/Linux PC running KDE, an Android tablet, an iPad and a MacBook running Mac OS X. To our own surprise, the booth was quite successful; not everyone in the KDE community is involved in children’s education, and many discovered GCompris for the first time. Everybody was surprised by how comprehensive GCompris is.

On the conference side, the GCompris talk attracted a large number of attendees. After the usual project history, I dug a little into the difficulty of supporting many platforms, the stats from the Google Play Store, and the commercial effort behind the project. You can see the video here and the slides here.

At Akademy there are developers from all around the world. We used that opportunity to record voices that were missing from our voice data set. To be precise, we recorded Taiwanese, Galician, Spanish, Italian, Romanian and Dutch. We were pleasantly surprised by how welcome this initiative was; thank you to everyone who took the time to help with it.

As it was quite hard to know what’s missing for each locale, we created a new page that reports which voices and data files are missing from GCompris.

Following the success of the recording sessions, we will redo the operation at Randa Meetings 2015.