Migrating to Akonadi, Part 2

In addition to the migration path for PIM applications, there are similar options for the actual data access facilities, i.e. the addressbook and calendar plugins traditionally used to access PIM data in local files, on groupware servers, and so on.

In the Akonadi universe this task is handled by programs we developers refer to as Akonadi resources.
Based on a suggestion by Till Adam, I created two such resources based on our traditional data access plugins, one for addressbook plugins and one for calendar plugins.
This should allow us to port the functionality currently implemented by our traditional plugins one by one, and it also decouples the porting efforts for applications and storage facilities.
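The idea of wrapping an old plugin behind the new resource interface is a classic adapter. A minimal sketch of that pattern, with invented names (this is not the real Akonadi or KResource API, just an illustration of the bridge idea):

```python
# Illustrative only: a "bridge" resource that wraps a legacy-style
# data access plugin behind the interface the new storage layer
# expects. All class and method names here are invented.

class LegacyAddressBookPlugin:
    """Stands in for a traditional KResource addressbook plugin."""
    def load(self):
        # A real plugin would read a local file or talk to a
        # groupware server; we return canned data instead.
        return [{"name": "Jane Doe", "email": "jane@example.com"}]

class BridgeResource:
    """Adapts a legacy plugin to a resource-style interface, so
    applications can be ported independently of the backend."""
    def __init__(self, plugin):
        self.plugin = plugin

    def retrieve_items(self):
        # Translate whatever the plugin delivers into the items
        # the new storage layer hands out to applications.
        return self.plugin.load()

resource = BridgeResource(LegacyAddressBookPlugin())
print(resource.retrieve_items())
```

Because the bridge owns the plugin, either side of it can later be replaced — the application moves to the new API, or the plugin is swapped for a native backend — without touching the other.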

The following image shows three setups: one for each "end" of the transition and one intermediate step, using the example of addressbook storage on a Kolab server:
[image:3302 size=preview]
(click here for full size)

  • Traditional setup
    In the traditional setup based on the KResource framework, each application would use plugins to access the storage device directly.
    Compared to the other two setups it looks a lot simpler, and in some ways it is. However, this approach is also more primitive: a lot of work is done multiple times, e.g. each application maintains its own connections to the Kolab server and each application transfers all the data itself.
  • Migration setup
    The main advantage of this setup is to fold the multiple instances of data access into one.
    To keep the diagram simple I assumed that the accessing KAddressBook has already been ported to use Akonadi natively, but of course the setup would also work with KAddressBook in its traditional form using the "akonadi" plugin described in part 1.
  • Future setup
    If your first impression of this setup is that it is less flexible than the previous one because it no longer uses plugins, be assured it is not. Different types of storage devices are now handled by their own dedicated resource programs, each of which can be specifically tailored to the capabilities of the storage device it works with. Quite like plugins, but with added bonuses such as not taking down an end-user application in case of a crash. Pretty much comparable to how KDE has been using KIO slaves for document-centric data access.

    If your second impression is that this setup introduces overhead because its resources are processes of their own, be assured it does not.
    Both the traditional and the migration setup require one plugin per type of PIM data and storage device, potentially resulting in more than one connection per application to a remote storage, whereas a new-style resource can handle all the data types its storage device supports simultaneously.
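The crash-isolation bonus mentioned above is easy to demonstrate in miniature. This toy sketch (plain Python, nothing Akonadi-specific) runs a "resource" as a separate process that crashes; the "application" merely observes the exit code and keeps running:

```python
# Toy demonstration of process isolation: the "resource" runs in its
# own process, so a crash in it cannot take down the application.
import subprocess
import sys

# Stand-in for a resource process that crashes hard.
crashing_resource = "import os; os._exit(1)"

proc = subprocess.run([sys.executable, "-c", crashing_resource])

# The application sees the failure but survives it, which is the
# point of moving storage access out of the application process.
print("resource exited with code", proc.returncode)
print("application still running")
```

With in-process plugins, the same crash would have terminated the application along with the plugin; with an out-of-process resource, the application can simply restart it.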

Btw, a similar approach could also be used to create a migration path for applications currently using libedata-book and libedata-cal, i.e. applications currently using the Evolution Data Server as their storage service. Ideally this would use a GLib based Akonadi library similar to our libakonadi, but since these applications already expect to communicate with a local server, one could also implement an Akonadi agent using libakonadi that just "looks" like EDS protocol-wise (probably using the D-Bus based EDS implementation Ross Burton created for the embedded EDS).