Planet KDE - http://planetKDE.org/

Google Summer of Code progress with Okular

Fri, 05/29/2015 - 07:03
It's been around a month now and I have been working on and off on my project, as college exams are keeping me a bit busy. I have started encountering the challenges expected in a GSoC project :).

Now for some details about my work. The work I have done so far is related to the layers feature.

The layers feature is almost done. A list of layers is generated in the left sidebar, and toggling the visibility of layers also works.

Check out the code here.

Here are screenshots of the layers feature:

Before toggling a layer:



After toggling layers:




Technical details
The layers feature assumes that the generator provides a pointer to a QAbstractItemModel representing the layers, with support for Qt::CheckStateRole. Toggling a checkbox should automatically change the visibility of the corresponding layer.
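
To make that contract concrete, here is a minimal sketch of what such a model could look like on the generator side. The class and member names are purely illustrative (not actual Okular API); the point is only that exposing Qt::CheckStateRole and handling it in setData() is all the sidebar needs.

// Hypothetical sketch of a generator-side layers model; class and member
// names are illustrative only, not actual Okular API.
#include <QAbstractListModel>
#include <QStringList>
#include <QVector>

class LayersModel : public QAbstractListModel
{
    Q_OBJECT
public:
    explicit LayersModel(QObject *parent = nullptr) : QAbstractListModel(parent) {}

    int rowCount(const QModelIndex &parent = QModelIndex()) const override
    {
        return parent.isValid() ? 0 : m_names.count();
    }

    QVariant data(const QModelIndex &index, int role) const override
    {
        if (!index.isValid())
            return QVariant();
        if (role == Qt::DisplayRole)
            return m_names.at(index.row());
        if (role == Qt::CheckStateRole)
            return m_visible.at(index.row()) ? Qt::Checked : Qt::Unchecked;
        return QVariant();
    }

    bool setData(const QModelIndex &index, const QVariant &value, int role) override
    {
        if (!index.isValid() || role != Qt::CheckStateRole)
            return false;
        m_visible[index.row()] = (value.toInt() == Qt::Checked);
        // a real generator would toggle the layer in the document here
        // and trigger a repaint of the affected pages
        emit dataChanged(index, index);
        return true;
    }

    Qt::ItemFlags flags(const QModelIndex &index) const override
    {
        return QAbstractListModel::flags(index) | Qt::ItemIsUserCheckable;
    }

private:
    QStringList m_names;     // one entry per layer
    QVector<bool> m_visible; // current visibility per layer
};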


A few things which are left:
  • Selection of appropriate icon in the left sidebar.
  • Changing check box to icons.
  • Proper working of search bar in the layers view.

Typing together alone

Fri, 05/29/2015 - 00:00

Editing ODT documents can be done in the web browser with WebODF. This experience can be shared with others by doing collaborative editing. Why edit documents alone when you can do it together, right?

Innocent Until Proven Guilty

Thu, 05/28/2015 - 20:18

In case you missed the latest news, Jonathan Riddell has been accused by the Ubuntu Community Council (CC) of breaking the Ubuntu Code of Conduct (CoC) and has been asked to resign from his position as leader of the Kubuntu project (a title which actually does not exist and which he never claimed to hold).

I had the chance of meeting Jonathan when I joined Canonical in 2009. I was a bit intimidated during my first Canonical real-life meeting, but Jonathan took me around and went out of his way to introduce me to many of my then new colleagues.

Since then he has always been one of the friendliest persons I know. We often shared rooms during Canonical, Ubuntu or KDE events and went on to be colleagues again at Blue Systems. I believe Jonathan's kindness is one of the reasons why the Kubuntu community has grown into such a welcoming and closely-knit group of people.

Sometimes passion carries us too far and we say or do things we should not, but until now all I have found is accusations and no proof of any such behavior from Jonathan. I am certainly biased, but since breaking the Ubuntu CoC is so unlike the Jonathan I know, I stand by his side. The CC should post real pointers to the repeated CoC breakage Jonathan is accused of. "Innocent until proven guilty": that is how real justice works. Publish proof of what you claim. Until such pointers are published, it all sounds like the CC is having a hard time getting precise answers to Jonathan's questions and opted to get rid of him instead of pushing for more answers.

PS: Before you ask: yes, I read all the long email threads and IRC logs I could find. While I have found some rough exchanges, I don't think they qualify as breaking the Ubuntu CoC.

Kubuntu: Statement from a not so important Kubuntu Developer.

Thu, 05/28/2015 - 01:56
I support Jonathan Riddell

First, I hate drama; no, I loathe drama.
I have refrained from much more than the occasional social share up to this point.
I do, however, stand by our fearless leader (pun intended; Jonathan has never claimed to be the leader).

As I sit here packaging what has to be my millionth package, I wonder:
why do I work so hard, for free, in what has become such a hostile environment?
For the following reasons:
  • Jonathan: who has taught me so much and removed the barrier of entry for me (it took me well over a decade to get through this barrier). Not to mention he has a heart of gold; I am having a hard time believing the accusations. I do, however, know his frustrations, as he was trying to get information for the people affected by the issues.
  • Kubuntu team: Every single one of them I consider family. Great teachers and great friends.
  • Kubuntu community: Our wonderful community of users. Time to test! An extremely great bunch.

It truly saddens me to see all this FUD being thrown around by folks that up until recently I had great respect for.
A couple of things do not sit well with me at all:
1) Absolutely zero communication to the Kubuntu Council about the "issues" with Jonathan prior to the shocking "request".
2) The Kubuntu Council asked (repeatedly) for one thing: proof. This still has not been provided.
So what was supposed to happen here? Evidently bow down, walk away and happily work away, silenced.
This is NOT the open source / FLOSS way. At least not to my understanding. Perhaps I have misunderstood the meaning all these years.

The result of all of this… My motivation to dedicate every waking hour to my passion, open source software, is depleting rather quickly. At least in the corporate environment there is a paycheck at the end of the week.

I will stick by Jonathan and the rest of the team until the bitter end, but not at the capacity that I was. So with that said:
I will support our current releases with bugfix KDE releases. I have currently packaged 15.04.1, which is in testing; Plasma 5.3.1 is in the works.

And yes, I will work on 4.14.3 for trusty, but it will take time as it has to be done by hand.

I also want to note that the super awesome folks at KDE are not affected by my recent woes; I will continue my Continuous Integration support!

Cheers,
Scarlett

Qt on Android Episode 7

Wed, 05/27/2015 - 17:05

In the last two Qt on Android episodes we learned how to use basic JNI on Android and how to use an external IDE to easily manage the Java part. In this episode, it is time to move forward and focus on extending the Java part of our Qt on Android application, and on how to interact with it using JNI in a “safe way”.

In this part we are going to implement an SD card listener. This is quite a useful example for applications that use SD cards to store their data, because if the application doesn’t close all its open files immediately when it gets the notification, it will be killed by the Android OS.

As we’ve seen in Episode 5, it’s quite easy to call a Java method from C/C++ and a C/C++ function from Java, but it doesn’t work in all cases. But why not?

To understand why not, we first need to understand the Qt on Android architecture.

Architecture diagram:


A few words about the architecture diagram.

  • the left (blue) rectangle represents the Android UI thread
  • the right (green) rectangle represents the main Qt thread, where the main QEventLoop is running (read Episode 1 if you want to learn more about the Android UI & Qt threads)
  • the top (black) rectangle is the Java part of your application. As you can see, the biggest part of it runs on the Android UI thread. The only case when the Java part runs on the Qt thread is when we call it from C/C++ on the Qt thread (as most of the JNI calls will come from there).
  • the bottom (black) rectangle is the C/C++ (Qt) part of your application. As you can see, the biggest part of it runs on the Qt thread. The only case when the C/C++ part runs on the Android UI thread is when it’s called from the Java part on the Android UI thread (as most of the Java callbacks will come from there).

Ok … so what’s the problem? Well, the problem is that there are SOME Android APIs that MUST be called from the Android UI thread, and when we call a Java method from C/C++ we do it from the Qt thread. That means we need a way to run that code on the Android UI thread, not on the Qt thread. To do such a call, from the C/C++ Qt thread to the Java Android UI thread, we need three steps:

  1. call a Java method from the C/C++ Qt thread. The Java method will be executed on the Qt thread, so we need a way to access the Android APIs on the Android UI thread.
  2. our Java method uses Activity.runOnUiThread to post a runnable on the Android UI thread. This runnable will be executed by the Android event loop on the Android UI thread.
  3. the runnable accesses the Android APIs from the Android UI thread.

The same problem occurs when Java calls a C/C++ function, because Java will call our C/C++ functions from the Android UI thread and we need a way to pass that notification to the Qt thread. Again, there are three steps involved:

  1. call a C/C++ function from the Android UI thread.
  2. use QMetaObject::invokeMethod to post a method call on the Qt event loop.
  3. the Qt event loop will execute that function on the Qt thread.

Extending the Java part:

Before you start, make sure you read Episode 6 one more time, because you’ll need it to easily manage the Java files. The first step is to create a custom Activity by extending QtActivity and defining a method which will post our Runnable.

// src/com/kdab/training/MyActivity.java
package com.kdab.training;

import org.qtproject.qt5.android.bindings.QtActivity;

public class MyActivity extends QtActivity
{
    // this method is called by C++ to register the BroadcastReceiver.
    public void registerBroadcastReceiver()
    {
        // Qt is running on a different thread than Android.
        // In order to register the receiver we need to execute it in the Android UI thread
        runOnUiThread(new RegisterReceiverRunnable(this));
    }
}


The next step is to change the default activity in AndroidManifest.xml, from:

<activity ... android:name="org.qtproject.qt5.android.bindings.QtActivity" ... >

to:

<activity ... android:name="com.kdab.training.MyActivity" ... >

We need to do this to make sure that our custom Activity will be instantiated when the application starts.

The next step is to define our RegisterReceiverRunnable class. The run method of this class will be called on the Android UI thread; in it we register our SDCardReceiver listener.

// src/com/kdab/training/RegisterReceiverRunnable.java
package com.kdab.training;

import android.app.Activity;
import android.content.Intent;
import android.content.IntentFilter;

public class RegisterReceiverRunnable implements Runnable
{
    private Activity m_activity;

    public RegisterReceiverRunnable(Activity activity)
    {
        m_activity = activity;
    }

    // this method is called on Android Ui Thread
    @Override
    public void run()
    {
        IntentFilter filter = new IntentFilter();
        filter.addAction(Intent.ACTION_MEDIA_MOUNTED);
        filter.addAction(Intent.ACTION_MEDIA_UNMOUNTED);
        filter.addDataScheme("file");

        // this method must be called on Android Ui Thread
        m_activity.registerReceiver(new SDCardReceiver(), filter);
    }
}


Let’s check what the SDCardReceiver class looks like:

// src/com/kdab/training/SDCardReceiver.java
package com.kdab.training;

import android.content.BroadcastReceiver;
import android.content.Context;
import android.content.Intent;

public class SDCardReceiver extends BroadcastReceiver
{
    @Override
    public void onReceive(Context context, Intent intent)
    {
        // call the native method when it receives a new notification
        if (intent.getAction().equals(Intent.ACTION_MEDIA_MOUNTED))
            NativeFunctions.onReceiveNativeMounted();
        else if (intent.getAction().equals(Intent.ACTION_MEDIA_UNMOUNTED))
            NativeFunctions.onReceiveNativeUnmounted();
    }
}

SDCardReceiver overrides the onReceive method, then uses the declared native functions to send the notification to C/C++.

The last step is to declare the native functions that we used in SDCardReceiver:

// src/com/kdab/training/NativeFunctions.java
package com.kdab.training;

public class NativeFunctions
{
    // define the native functions
    // these functions are called by the BroadcastReceiver object
    // when it receives a new notification
    public static native void onReceiveNativeMounted();
    public static native void onReceiveNativeUnmounted();
}

Architecture diagram Java:

Let’s see the summary of the Java part calls on our architecture diagram:

[Architecture diagram: summary of the Java-side calls]

Extending the C/C++ part:

Now let’s see how we extend the C/C++ part. To illustrate how to do it, I’m using a simple widget application.

The first thing we need to do is to call the registerBroadcastReceiver method.

// main.cpp
#include "mainwindow.h"
#include <QApplication>
#include <QtAndroid>

int main(int argc, char *argv[])
{
    QApplication a(argc, argv);

    // call registerBroadcastReceiver to register the broadcast receiver
    QtAndroid::androidActivity().callMethod<void>("registerBroadcastReceiver", "()V");

    MainWindow::instance().show();
    return a.exec();
}


 

// native.cpp
#include <jni.h>
#include <QMetaObject>
#include "mainwindow.h"

// define our native static functions
// these are the functions that Java part will call directly from Android UI thread
static void onReceiveNativeMounted(JNIEnv * /*env*/, jobject /*obj*/)
{
    // call MainWindow::onReceiveMounted from Qt thread
    QMetaObject::invokeMethod(&MainWindow::instance(), "onReceiveMounted",
                              Qt::QueuedConnection);
}

static void onReceiveNativeUnmounted(JNIEnv * /*env*/, jobject /*obj*/)
{
    // call MainWindow::onReceiveUnmounted from Qt thread, we wait until the called function finishes
    // in this function the application should close all its opened files, otherwise it will be killed
    QMetaObject::invokeMethod(&MainWindow::instance(), "onReceiveUnmounted",
                              Qt::BlockingQueuedConnection);
}

// create a vector with all our JNINativeMethod(s)
static JNINativeMethod methods[] = {
    {"onReceiveNativeMounted", "()V", (void *)onReceiveNativeMounted},
    {"onReceiveNativeUnmounted", "()V", (void *)onReceiveNativeUnmounted},
};

// this method is called automatically by Java after the .so file is loaded
JNIEXPORT jint JNI_OnLoad(JavaVM* vm, void* /*reserved*/)
{
    JNIEnv* env;
    // get the JNIEnv pointer.
    if (vm->GetEnv(reinterpret_cast<void**>(&env), JNI_VERSION_1_6) != JNI_OK)
        return JNI_ERR;

    // search for Java class which declares the native methods
    jclass javaClass = env->FindClass("com/kdab/training/NativeFunctions");
    if (!javaClass)
        return JNI_ERR;

    // register our native methods
    if (env->RegisterNatives(javaClass, methods,
                             sizeof(methods) / sizeof(methods[0])) < 0) {
        return JNI_ERR;
    }
    return JNI_VERSION_1_6;
}


In native.cpp we register the native functions. From our static native functions we use QMetaObject::invokeMethod to post the slot calls to the Qt thread.

 

// mainwindow.h
#ifndef MAINWINDOW_H
#define MAINWINDOW_H

#include <QMainWindow>

namespace Ui {
class MainWindow;
}

class MainWindow : public QMainWindow
{
    Q_OBJECT

public:
    static MainWindow &instance(QWidget *parent = 0);

public slots:
    void onReceiveMounted();
    void onReceiveUnmounted();

private:
    explicit MainWindow(QWidget *parent = 0);
    ~MainWindow();

private:
    Ui::MainWindow *ui;
};

#endif // MAINWINDOW_H

// mainwindow.cpp
#include "mainwindow.h"
#include "ui_mainwindow.h"

MainWindow::MainWindow(QWidget *parent) :
    QMainWindow(parent),
    ui(new Ui::MainWindow)
{
    ui->setupUi(this);
}

MainWindow::~MainWindow()
{
    delete ui;
}

MainWindow &MainWindow::instance(QWidget *parent)
{
    static MainWindow mainWindow(parent);
    return mainWindow;
}

// Step 6
// Callback in Qt thread
void MainWindow::onReceiveMounted()
{
    ui->plainTextEdit->appendPlainText(QLatin1String("MEDIA_MOUNTED"));
}

void MainWindow::onReceiveUnmounted()
{
    ui->plainTextEdit->appendPlainText(QLatin1String("MEDIA_UNMOUNTED"));
}


The MainWindow class is used just to add some text to our plainText control when it gets a notification. Calling these functions from the Android thread might be very harmful to our application's health – it might lead to crashes or unexpected behavior – so they MUST be called from the Qt thread.

Architecture diagram C/C++:

This is the summary of C/C++ calls on our architecture diagram:

[Architecture diagram: summary of the C/C++-side calls]

Architecture diagram Java & C/C++:

This is the summary of all the calls that we’ve done in C/C++ and in Java.

[Architecture diagram: all Java and C/C++ calls]

Here you can download the example source code.

Thank you for your time!

The post Qt on Android Episode 7 appeared first on KDAB.

Google Summer of Code 2015 with KDE

Wed, 05/27/2015 - 14:05
Hello,
I will be using this blog primarily as a means to communicate progress on my project, "Porting of Amarok to Qt5/KF5", with the KDE organization under the GSoC 2015 program. My mentors for the project are Mark Kretschmann and Myriam Schweingruber.
Amarok has a huge codebase and I believe there will be a lot of commits involved. I will try to be as verbose as possible on the changes made to the codebase and I plan on posting frequent updates here through the summer.

I look forward to a very productive summer along with the open source community.

Cheers

transactional b-trees and what-not

Wed, 05/27/2015 - 13:28


Over the last few months I've been reading more than the usual number of papers on a selection of software development topics that are of recent interest to me. The topics have been fairly far flung as there are a few projects I have been poking at in my free time.

By way of example, I took a couple of weeks reading about transitory trust algorithms that are resistant to manipulation. That is a pretty interesting problem with some rather elegant (partial) solutions which are actually implementable at the individual agent level, though computationally impractical if you wish to simulate a whole network, which thankfully was not what I was interested in. (So they are reasonable for implementing real-world systems with, though not for simulations or for finding definitive solutions to specific problems.)

This past week I've been reading up on a variety of B-tree algorithms. These have been around since the early 1970s and are extremely common in all sorts of software, so one might expect that after 40+ years of continuous use of such a simple concept there'd be very little to talk about, but it's quite a vast territory. In fact, each year for the last two decades Donald Knuth has held a public lecture around Christmas-time about trees. (Yes, they are Christmas Tree Lectures. ;) Some of the papers I've been reading were published in just the last few years, with quite a bit of interesting research having gone on in this area over the last decade.

The motivation for reading up on the topic is that I've been looking for a tree that is well suited to storing the sorts of indexes that Akonadi Next calls for. They need to be representable in a form that multiple processes can access simultaneously, without problems with multiple readers and (at least) one writer; they also need to support transactions, and in particular read transactions, so that once a query is started the data being queried remains consistent at least until the query is complete, even if an update is happening concurrently. Preferably without blocking, or at least with as little blocking as possible. Bonus points for being able to roll back transactions and for keeping representations of multiple historic versions of the data in certain cases.

In the few dozen papers I downloaded onto the tablet for evening reading, I came across Transactions on the Multiversion B+-Tree which looks like it should do the trick nicely and is also (thankfully) nice and elegant. Worth a read if you're into such things.

As those who have been following Akonadi Next development know, we are using LMDB for storage, and it does a very nice job of that but, unfortunately, does not provide the "secondary" indexes on data which Akonadi Next needs. Of course one can "fake" this by inserting the values to be indexed (say, the dates associated with an email or calendar event) as keys, with the value being the key of the actual entry, but this is not particularly beautiful for various reasons, including the following (a small sketch of the trick appears after the list):

  • this requires manually cleaning up all indexes rather than having a way to efficiently note that a given indexed key/value pair has been removed and have the indexes cleaned up for you
  • some data sets have a rather low cardinality which would be better represented with approaches such as bitmap indexes that point to buckets (themselves perhaps trees) of matching values
  • being able to index multiple boolean flags simultaneously (and efficiently) is desirable for our use cases (think: "unread mails with attachments")
  • date range queries of the sort common in calendars ("show this month", "show this week", etc.) could also benefit from specialized indexes
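
For illustration, here is a minimal sketch of that trick against the plain LMDB C API: a second, manually maintained database maps the indexed value (a date) to the key of the real entry. The database names and keys are invented for the example, this is not Akonadi Next code, and error handling is omitted.

#include <lmdb.h>
#include <cstring>

// Minimal sketch of a hand-rolled "secondary index" in LMDB.
// Database names and keys are invented for illustration only;
// assumes the MDB_env was set up with mdb_env_set_maxdbs(env, 2) or more.
static void storeWithDateIndex(MDB_env *env,
                               const char *itemKey,   // e.g. "item:42"
                               const char *itemBlob,  // the serialized entity
                               const char *date)      // e.g. "2015-05-27"
{
    MDB_txn *txn = nullptr;
    mdb_txn_begin(env, nullptr, 0, &txn);

    MDB_dbi items, dateIndex;
    mdb_dbi_open(txn, "items", MDB_CREATE, &items);
    // MDB_DUPSORT: many items may share the same date
    mdb_dbi_open(txn, "date-index", MDB_CREATE | MDB_DUPSORT, &dateIndex);

    // primary record: itemKey -> blob
    MDB_val k { std::strlen(itemKey), const_cast<char *>(itemKey) };
    MDB_val v { std::strlen(itemBlob), const_cast<char *>(itemBlob) };
    mdb_put(txn, items, &k, &v, 0);

    // the "secondary index", maintained by hand: date -> itemKey
    MDB_val dk { std::strlen(date), const_cast<char *>(date) };
    mdb_put(txn, dateIndex, &dk, &k, 0);

    mdb_txn_commit(txn);
    // Deleting the item later means remembering to delete the matching
    // date-index entry as well -- exactly the manual bookkeeping lamented above.
}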

I could go on. It's true that these are the sorts of features that your typical SQL database server provides "for free", but in our case it ends up being anything but "free", due to overhead and to design constraints from schema enforcement. So I have been looking at what we might be able to use to augment LMDB with the desired features, and so the hunt for a nice B+-tree design was on. :) I have no idea what this will all lead to, if anything at all, as it is purely an evening research project for me at the moment.

The application-facing query system in Akonadi Next is slowly making its way towards something nice, but that's another topic for another day.

It is official, Marble is coming to Android

Wed, 05/27/2015 - 09:12
First, I would like to announce that I have been chosen as a Google Summer of Code student, and my task is to provide a working version of Marble on Android by the end of the summer. This is very important for Marble, because Marble is currently only available on the desktop and on some rare mobile platforms (Maemo, MeeGo), but not on the most widespread platform, Android. That is a pity, because it is more and more common for education systems to use TVs, tablets and smartphones with Android, so they can't use Marble as an educational tool. The supported Android platforms will be Android v2.3.3 (API level 10) and higher, because the port is done with Qt for Android.
The work has been started. Stay tuned...

Challenges and opportunities

Wed, 05/27/2015 - 09:01

Challenges are a normal part of life; and seeing opportunities is a skill all of us can get better at. This past week, though, has been something new.

The Ubuntu community and philosophy have been home to me. The Ubuntu Code of Conduct is not just about individual conduct, but about how we make a community. In fact, the first sentence is "Ubuntu is about showing humanity to one another: the word itself captures the spirit of being human."[1] This is my kind of place, where we not only have high ideals, but live them out in our practice. And so it has been for many years.

So it was a complete shock to get a secret email from the Community Council to me as a Kubuntu Council member announcing that Jonathan Riddell had been asked to step down from Kubuntu leadership. We (the KC) recently met with the CC, and there was no discussion of any issues they had with Jon. They never wrote to us asking for feedback or discussion.

Jonathan's questions to the CC about a legal issue and that of funds donated to the flavors were not personal, but done on behalf of the Ubuntu community, and on behalf of us, the Kubuntu Council and the Kubuntu community as a whole. We are still concerned about both these issues, but that pales in comparison to the serious breach in governance we've experienced this past week.

The Code of Conduct states: "We expect participants in the project to resolve disagreements constructively. When they cannot, we escalate the matter to structures with designated leaders to arbitrate and provide clarity and direction."

The CC did not follow this basic procedure. The Community Council is full of great people; a couple of them are personal friends. However, they are unelected. In contrast, we members of the Kubuntu Council stand for election every other year.[2] [Note: Oops, I've been pointed at https://wiki.ubuntu.com/CommunityCouncil/Restaffing which states that Ubuntu members vote to elect CC members. Sorry for the error.]

We have had a number of emails back and forth during the week.[3] What has stood out to me is the contrast between their approach and our own. They have focussed on their feelings (feelings about working with Jon), whereas we continue to point out facts and ask them to follow the Code of Conduct. Naturally, we all experienced emotions about the situation, but emotion is not a basis for decision-making.

Of course, the members of the CC may perceive the situation entirely differently.

I wish I knew how this conflict will work out long-term. The Council supports Jonathan, and continues to ask for resolution of the issues he has raised with the CC on the community list. We did so formally yesterday.

Jon is the person who brought KDE to Ubuntu, and Ubuntu to KDE, and has always functioned as a bridge between the two projects and the two communities. He will continue to do this as long as he is able, and we rely on his faithfulness for the success of Kubuntu. He is the magnet who draws new developers to us, and his loss would spell the end of Kubuntu-the-project.

The CC did not follow the basic procedure and bring the issue they had with Jon to us, the Kubuntu Council. We await their return to this principle as we work to find a way forward. We are determined to find a way to make this work.

1. http://www.ubuntu.com/about/about-ubuntu/conduct
2. http://www.kubuntu.org/kubuntu-council
3. https://skitterman.wordpress.com/2015/05/26/information-exchange-between-the-ubuntu-community-council-and-the-kubuntu-council/

Making Sense of the Kubuntu/Canonical Leadership Spat

Wed, 05/27/2015 - 04:39

By now the news has spread quite quickly; the Canonical Community Council (or “CC” for short) had attempted to boot Jonathan Riddell as a community leader, asking him to “take an extended break” from the Kubuntu Council (“KC” for short) citing personality conflicts and breaches of Canonical codes of conduct.

So, what just happened? On the various news sites and through some broken telephones there are several misconceptions about what happened. I took an interest because I know nothing of the structure around Canonical, and I wanted to know how all this relates to Kubuntu, especially since I know Jonathan is a Blue Systems employee.

This isn’t going to be a post about the he-said-she-said arguments, but is more just my research into how all this fits together and what it really means.

What is the Community Council? How does it work?

The Community Council is the highest governing body representing the Ubuntu umbrella of projects, including its derivatives. The group is open to anyone, but as of my research only one of the seven members of the council is not a Canonical employee. Of those seven members, one is Mark Shuttleworth, who has tie-breaking votes.

The group manages infrastructure and communication for Canonical to allocate its resources for Ubuntu and derivatives. An important part of this event is the mandate that the council operates transparently to the wider community, the idea being that they would also serve as a bridge between the commercial arm of Canonical and the open-source community at large.

What is the Kubuntu Council?

Just like any larger governing body, the Community Council has offshoots representing larger projects, which are tasked with relaying project-specific notices to the mothership. The Kubuntu Council is one such offshoot, managing the KDE-oriented Kubuntu project.

When the system works, the idea is that the Kubuntu Council will take care of project-level matters independently, form a todo list of upstream needs of the Kubuntu project and pass it back to the Community Council. In turn, the Community Council will keep the individual projects abreast of the goings-on.

So… Canonical Owns Kubuntu?

This is where it gets sticky.

Canonical owns the trademark for Kubuntu – so as a ‘brand’ they own Kubuntu. Beyond that, Canonical does not directly fund Kubuntu; instead, they offer infrastructure in the form of repositories and servers, where Kubuntu is allowed to piggyback off the Canonical/Ubuntu project network and work more closely with upstream resources.

But Canonical does not employ the Kubuntu staff; previously they did employ staff but Blue Systems stepped in when Canonical cut funding. Blue Systems has since become a much larger part of what drives Kubuntu than Canonical.

In over-simplified terms Canonical owns the franchise and Blue Systems runs the hottest ‘non-headquarters’ location.

Who is Jonathan Riddell?

Jonathan is an ex-Canonical employee who was scooped up by Blue Systems after Canonical cut funding.

Part of Canonical cutting Kubuntu funding was terminating Jonathan as an employee of Canonical. He essentially retained his position in all community aspects of Canonical, just without the pay-cheque: he is a Kubuntu Council member, has access to the Canonical infrastructure, and helps manage the Kubuntu project.

Blue Systems picked him up and he is able to work full-time in an almost identical capacity to the one he had as a Canonical employee.

What was the Ruckus?

Mainly, there are some conflicts between Riddell and members of the core Community Council. Riddell had repeatedly pushed several issues which the council was unable to resolve, leading to frustration on both sides, because Jonathan was obligated to push important issues and the Community Council was the place he was supposed to do it.

In the end both sides showed the stress they were under, at which point the Community Council privately decided they would oust Jonathan from the Kubuntu Council. The KC replied arguing that the decision was not made transparently, questioned how much power the Community Council should have over the Kubuntu Council roster, and was incensed by the CC not retracting the decision before a transparent conversation. The Kubuntu Council didn’t want to negotiate with a gun to their heads.

Who Ultimately Gives the Orders? Can Canonical Fire ‘Kubuntu Employees’?

This goes back to the sticky ownership issue; Canonical technically owns Kubuntu, but has relinquished a great deal of control over the project to focus on Ubuntu. Canonical can’t and doesn’t have the right to “fire” Jonathan, as he works for Blue Systems now – and that’s not what they tried to do – but Canonical does have the right to control their internal governing bodies, and how much access Jonathan has to their network.

The reason Kubuntu was able to reject an authoritative attempt is because it had simply never happened before, and because of the very real danger that it would fracture the Kubuntu project.

By removing Jonathan from his position in the Kubuntu community, it also affects his value for Blue Systems. If he were removed, it brings into question what Blue Systems and the community would do in response; Riddell is a Blue Systems employee and carries significant community favour from KDE users.

We’ve seen this before when a community rejects a commercial governing body – they fork. In this case instead of moving from OpenOffice to LibreOffice, we’d see it go from Kubuntu to ‘KuBlue’… or a more catchy name.

In short: yes, Canonical can remove people from its own internal councils, but it might have terrible fallout if done improperly. They can’t really tell the Kubuntu crew what to do, but when it comes to the infrastructure they could tell them what they can’t do.

What Happens Now?

Right now Canonical is exerting control over projects using their infrastructure much like a company would manage employees; if someone isn’t in perfect sync they can be moved, removed, or suspended.

The problem with this strategy is that communities don’t like being dictated to, and in attempting to do so Canonical rubbed the community the wrong way. The Community Council literally gave an order and the Kubuntu Council said “no”. So what happens now?

The first thing that can happen is… Nothing. Birds will sing, grass will grow, and the KC will make the CC grit their teeth a bit. Maybe Jonathan will be removed after a more transparent meeting, maybe not. If the KC doesn’t remove Jonathan, then it may force Canonical into an awkward situation where it must either admit it doesn’t have control, or get nasty and start cutting off infrastructure.

Second, if this is resolved, Canonical may revise its community strategy, put in safeguards for these situations, and possibly enforce a more formal structure over the ad-hoc sub-community model. This would need to apply to all communities, as singling out specific projects would simply inflame the situation, and it would help prevent other projects from ending up in a similar situation in the future.

Third, instead of a split the Kubuntu crew might attempt to separate their internal governance a bit; possibly designating a separate group to work with Canonical while the main leadership remains as-is. Canonical can work with their partners effectively without disturbing the leadership.

The next thing that may happen could be the start of a more gradual separation; Kubuntu as a project may slowly take on more infrastructure, growing apart and leaving the nest – maybe with Canonical’s blessing and the transfer of the Kubuntu trademark. Who knows.

Lastly (and by far the least likely), both sides would calmly file into a room before sizing up chairs to throw at each other, terrible words being said about people’s mothers before forking Kubuntu into Librebuntu.

In the End… ?

In the end, I think we all simply hope that projects, companies, communities, and benevolent dictators can all work together in relative harmony. The situation isn’t ideal, but a major part of building strong communities is occasionally finding out something doesn’t work – and fixing it; hopefully to the benefit of everyone involved.

That’s my breakdown of the politics; I hope it helped and provided insight into this whole messy affair. I hope it all gets sorted out in the long run. If I have anything wrong, please do let me know in the comments and I’ll make the relevant corrections.

Personally, I hope Canonical will put more structure in place when it comes to the Community Council and its offshoots.


Google Summer of Code 2015 Kick-Off

Tue, 05/26/2015 - 21:36


This year I have been accepted for the second time into the Google Summer of Code programme. The community bonding period is over and it is time to start development. This year I’m doing a project for the LabPlot application that has the aim of integrating the VTK library for 3D data visualization.

I hope this summer will be rich and productive and that my contribution to the LabPlot project will be valuable for its users.

And thank you, Google, for the sticker. I have successfully glued it to my new laptop :)



Reaffirmed on the Kubuntu Council

Tue, 05/26/2015 - 16:24

I’d like to thank all the Kubuntu members who just voted to re-affirm me on the Kubuntu Council.

Scott Kitterman’s blog post has the juicy details of the unprecedented and astonishing move by the Ubuntu Community Council asking me to step down as Kubuntu leader. I’ve never claimed to be a leader and never used or been given any such title, so it’s a strange request without foundation, made without following the normal documented channels of consultation or Code of Conduct reference.

I hope and expect Kubuntu will continue and plan to keep working on the 15.10 release along with the rest of the community who I love dearly.

 


Interview with Andrei Rudenko

Tue, 05/26/2015 - 12:09
[Image: Monk, Diablo 3 contest entry]
Could you tell us something about yourself?

My name is Andrei Rudenko, I’m a freelance illustrator who graduated from the Academy of Fine Arts (as a painter) in Chisinau (Moldova). I have many hobbies: I like icon/UI design and photography, I’ve learned a few programming languages and make games in my spare time, and I also have about 10 releases on music labels as 2R. For now I’m trying to improve my skills in illustration and game development.

Do you paint professionally, as a hobby artist, or both?

Both, it is good when your hobby is your job.

What genre(s) do you work in?

I like surrealism, critical realism. I don’t care about genre much, I think the taste and culture in art is more important.

Whose work inspires you most — who are your role models as an artist?

I really like the Renaissance artists, Russian Wanderers, also Jacques-Louis David, Caravaggio, Anthony van Dyck, and Roberto Ferri.

When did you try digital painting for the first time?

I think around 2010, trying to paint in Photoshop, but I didn’t like drawing with it, and I left it until I found Krita.

What makes you choose digital over traditional painting?

Digital painting has its advantages: speed, tools, Ctrl+Z. For me it is a place for experiments, which I can then use in traditional painting.

How did you find out about Krita?

When I became interested in Linux and open source, I found Krita, and it had everything that I needed for digital painting. For me it is important to recapture that feeling of painting with traditional materials.

What was your first impression?

As soon as I discovered the powerful brush engine, I realized that this was what I had been looking for for a long time.

What do you love about Krita?

I like its tools (as I have already said, the brush engine) and the large variety of settings. I like the team who are developing Krita, very nice people. And of course it is free.

What do you think needs improvement in Krita? Is there anything that really annoys you?

I think better vector graphics tools, for designers. Also some fixes for pixel artists.

What sets Krita apart from the other tools that you use?

The possibility to customize it the way you like.

If you had to pick one favourite of all your work done in Krita so far, what would it be, and why?

The Monk for the Diablo 3 contest; there was a lot of work in it, and a lot that still needs to be done. But Krita gave me everything I needed to make this art.

What techniques and brushes did you use in it?

Most of the time I use the color smudge brush (with dulling), like in traditional oil painting. For details, a simple circle brush; for leaves I made a special brush with scattering. I made almost all the brushes myself, and the patterns for the brushes I made from my texture photos.

Where can people see more of your work?

http://andreyrudenko.deviantart.com/
https://dribbble.com/Rudenko
https://twitter.com/AndreiRudenko

Anything else you’d like to share?

Thank you for inviting me to this interview. And thank you, Krita team, for Krita. ;)

Reducing relocations with Q_STRINGTABLE

Tue, 05/26/2015 - 08:52

Qt is a native library at the heart. As a native (C++) library, it already outperforms most higher-level language libraries when it comes to startup performance. But if you’re using native languages, you usually do so because you need to get the most out of the available hardware and being just fast may not be fast enough. So it should come as no surprise that we at KDAB are looking into how to speed things up even more.

A Look At Dynamic Linking

One source of startup delays in native applications is the dynamic linker. You can read all about how it works on Unix in Ulrich Drepper’s excellent article, How To Write Shared Libraries. For the purposes of this article, it is sufficient to understand that the final link step of a native application is performed at startup-time. In this step, the application, as well as the libraries it uses, are loaded into memory and adapted to the specific memory location they have been loaded to. Memory locations may differ from application to application because of security reasons (address space randomisation) or simply because one application loads more libraries than another, requiring a different memory layout, esp. on 32-bit platforms.

With this — very simplified — view of things in mind, let’s look at what “adapting to the specific memory location” actually involves.

Most of the library code is compiled as position-independent code, which means that jumps, as well as data references, are always expressed as an offset from the current execution position (commonly called the Program Counter – PC). That offset doesn’t change when the library is loaded at a different memory location, so code, for the most part, does not need to be adapted.

But a library does not only consist of code. It also contains data, and as soon as one piece of data points to another (say, a pointer variable which references a function), the content of that pointer suddenly becomes dependent on the actual position of the library in memory. Note that the trick used in code (offsetting from the PC) doesn’t work here.

So the linker is forced to go in and patch the pointer variable to hold the actual memory location. This process is called relocation. By performing relocations, the dynamic linker changes the data from how it is stored on disk, which has several drawbacks: First, the data (actually, the whole memory page – usually 4KiB) is no longer backed on-disk, so if memory gets tight, it has to be copied to swap instead of just being dropped from memory, knowing that it can always be loaded back from disk. Second, while unmodified data is shared among processes, once the data is modified in one process, the data is copied and no longer shared (copy-on-write), and this can be a real memory waster on systems where many applications use the same library: All the library copies living in different application address spaces are duplicated instead of shared, increasing the total memory footprint of the system.

V-Tables And String Tables

If all of the above was a bit abstract for you, let’s look at some concrete examples:

In a C++ library, the virtual function call mechanism is a major source of relocations, because vtables are simply lists of function pointers, all entries of which require relocation. But short of reducing the number of virtual functions (something Trolltech originally did for Qt 4), there’s not much one can do about those.

But there is a class of relocations that are 100% avoidable, with some work: string tables. In their simplest form, they come as an array of C strings:

enum Type {
    NoType,
    TypeA,
    TypeB,
    TypeC,

    _NumTypes
};

const char * const type2string[] = {
    "",
    "A",
    "B",
    "C",
};
static_assert(sizeof type2string / sizeof *type2string == _NumTypes);

But the above is just a short-cut for the following:

const char __string_A[2] = "A"; // ok, no relocation
const char __string_B[2] = "B"; // ditto
const char __string_C[2] = "C"; // ditto

const char * const type2string[4] = {
    // oops, 4 entries each requiring relocation:
    &__string_A[1], // optimisation: common suffix is shared
    &__string_A[0],
    &__string_B[0],
    &__string_C[0],
};

You can view this as a mapping between a zero-based integer and a string, with the integer implicitly encoded in the string position in the array. In the more complex form, the string table maps something other than a zero-based integer:

static const struct {
    QRgb color;
    const char * name;
} colorMap[] = {
    { qRgb(0xFF, 0xFF, 0xFF), "white" },
    { qRgb(0xFF, 0x00, 0x00), "red" },
    { qRgb(0x00, 0xFF, 0x00), "green" },
    // ...
};

Here, too, what we colloquially call a “string” is actually a pointer-to-const-char, and therefore in need of relocation at dynamic link time.

One Solution

So the underlying problem here is that strings are inherently reference types — they are only a pointer to the data stored elsewhere. And we learned that data referring to other data causes relocations. So it would seem that the easiest way to avoid relocations is to store the data directly, and not reference it. The two examples above could be rewritten as:

const char type2string[][2] = {
    "",
    "A",
    "B",
    "C",
}; // ok, type2string is an array of const char[2], no relocs

static const struct {
    QRgb color;
    const char name[6]; // ok, name is const char[6]
} colorMap[] = {
    // same as before
};

In both cases, the string data is now stored in-line, and no relocations are necessary anymore.

But this approach has several drawbacks. First, it wastes some space if the strings are not all of the same length. In the above examples, that waste is not very large, but consider what happens if the colorMap above gets a member whose name is “azure light blue ocean waves”. Then the name member needs to be at least of size 31. Consequently, less than two of those structs now fit into one cache line, reducing scanning performance significantly — for both lookups: by-color as well as by-name, which is the second problem.

So, this simple approach that requires no changes to the code or data except to fix the declaration of the string member works well only if the strings are of essentially the same length. In particular, just one outlier pessimises the lookup performance of the whole lookup table.

A Better Solution

Data-Oriented Design suggests that we should prefer to separate data of different type. We can apply this in the colorMap case and hold colors and names in different arrays:

static const QRgb colors[] = {
    qRgb(0xFF, 0xFF, 0xFF),
    ...
};
static const char names[][6] = {
    "white",
    ...
};

We still have the gaps within the names array, but at least the colors are out of the way now. We can then compress the string data the way moc has been doing since Qt 4.0:

static const QRgb colors[] = {
    qRgb(0xFF, 0xFF, 0xFF),
    qRgb(0xFF, 0x00, 0x00),
    ...
};
static const char names[] = {
    "white\0"
    "red\0"
    ...
};
static const uint nameOffsets[] = {
    0, 6, 10,
    ...
};
// the i-th name is names[nameOffsets[i]]

We just concatenate all strings into one, with NULs as separators, and record the start of each one in an offset table. Please take a moment to digest this. We now have reached a point where there are no relocations, not more than sizeof(uint) bytes wasted per-entry (could be reduced to sizeof(ushort) or sizeof(uchar) for smaller tables, which is less than the sizeof(const char*) with which we started out), and nicely separated lookup keys and values.
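
To make the lookup side concrete, here is a minimal hand-written sketch of a find() over these tables. It is not the generated Q_STRINGTABLE code, just an illustration of how the offset table is consumed; with the names sorted, the linear scan could be replaced by a binary search over nameOffsets, which is where the O(log N) lookup mentioned further down comes from.

#include <cstring>

// Illustrative only -- not the generated Q_STRINGTABLE code.
// Looks up a color by name using the colors/names/nameOffsets tables above.
static const QRgb *findColor(const char *needle)
{
    const std::size_t count = sizeof nameOffsets / sizeof *nameOffsets;
    for (std::size_t i = 0; i < count; ++i) {
        // the i-th name starts at names + nameOffsets[i]
        if (std::strcmp(names + nameOffsets[i], needle) == 0)
            return &colors[i];
    }
    return nullptr; // not found
}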

But we have created an unmaintainable beast. The largest such table in Qt is ca. 650 entries in size. One problem is that key and value are now separated — those two arrays better stay in sync. The even larger problem is that no-one is calculating the offset table for us!

So, while this technique of avoiding relocations is pretty well-known, it is hardly ever applied in practice because it essentially forces you to write a code generator to create these intricately-connected sets of tables from a human-readable description.

Enter Q_STRINGTABLE

The key insight now is that C++ comes with powerful code generators built in: both Template Meta-Programming (TMP), at least in C++11, and the good ol’ C preprocessor can be used here.

Using the preprocessor, the colorMap example can be written like this:

#define COLORS \
    (("white", qRgb(0xFF, 0xFF, 0xFF))) \
    (("red", qRgb(0xFF, 0x00, 0x00))) \
    ... /*end*/

Q_STRINGTABLE_DATA_UNSORTED(ColorMap, COLORS)
#undef COLORS

First, you describe the key-value pairs as a sequence of 2-tuples: ((.,.)) ((.,.)) ..., then you feed that into a magic macro (here, the one for when the strings are not sorted), and voilà, you get all three tables generated for you, including a nice find() function for looking up values by string. To use:

if (const QRgb *color = ColorMap::find("red")) {
    // found
} else {
    // not found
}

Obviously, if you sort the data (one of the things that’s not done automatically for you, yet), you can use Q_STRINGTABLE_SORTED instead and get an O(log N) find() method.

Next week, we’ll look at both the Q_STRINGTABLE API and implementation in more depth. This will also reveal why Q_STRINGTABLE, despite its usefulness, has not been accepted into Qt, yet. If you can’t wait to start playing with it, head over to the Qt-Project Gerrit: Long live Q_STRINGTABLE!. The header file implementing all of this has minimal dependencies (Boost.PP and <QtGlobal>).

Stay tuned!

The post Reducing relocations with Q_STRINGTABLE appeared first on KDAB.

A proud Linux history – 15 years ago and the Brazilian ATM

Mon, 05/25/2015 - 14:06

Some time ago I passed by one of the branches of Banrisul, a bank in southern Brazil, and saw a change: the ATMs are evolving. The ATMs are moving to modern code, and I don't know what they are using now, but it is the past that is the real story here.

Most old Linux guys remember that one of the first bank ATMs in the world running Linux (or at least the first openly shown) was made here at this bank. Here's a picture from the wonderful Linux Journal article by John MadDog Hall (I hope he will not mind that I'm citing him here).

 

The Banrisul "Tux" ATM, picture from John MadDog Hall

 

The history I want to share with you is how that "marble Tux" happened. Yes, it was a production machine that you see in the picture, and it was running everywhere in Brazil for at least 10 years.

So, a 25-year-old boy, in this case me, the guy typing now, who was working at an ILOG graphical toolkit partner, suddenly decided to look for Linux jobs. I was out of university for one year, but had already been infected by open source and Linux for more than three years, and thought it could be done.

Lucky me, there was a company locally in Curitiba hiring Linux guys for a short-term prototype project in C, and it was the chance I foresaw to enter the Linux job world for good. This company was Conectiva, and this prototype ended up being my first job in the company. At that time it was a confluence of the universe, since all the players involved believed it could be done: the bank, through the manager Carlos Eduardo Wagner; the corporate development manager from Conectiva, João Luis Barbosa; and the PERTO ATM company, which was moving to Linux.

And then they needed the suicide guys, meaning me and Ruben Trancoso, who made the mainframe comm network stack.

To sum up: 3 months, four different ATMs with their original specific DOS code, and one brand new ATM designed by PERTO to be used for the first time in this project. That's it.

We didn't have many requirements at that time: mostly keep the same original face and make it work. Despite everything, we got the base code ported quickly, but still, it was 2000, and the Linux graphics stack and licensing were not yet heavily clarified. Qt was out of the question, and Gtk was not suitable for the older environments. Setting aside other toolkits, I decided to go with pure X11 code, which at least took one layer of code bug testing off our side, despite the inherent difficulty, and coming from a guy who was already used to C++ toolkits (ILOG Views, today owned by IBM).

But it worked, and it paid off the effort. Then one day the manager sits down at your side and says: "We have a big meeting with the bank directors to show the prototype, is it ready?". The interface was already exactly the same as the older DOS interfaces, and that was our initial target.

My answer was a sound yes, and Ruben's as well, but I asked if I could "pimp up" the interface a little. It was a demo anyway, and didn't need to be the final result. I just didn't tell them what I would be doing, since I had some idea, but not THE FINAL idea.

So I opened GIMP, picked the Conectiva logo, and put it in the top right corner, as a proud developer of his company, and to show that it was done by us, here, in Brazil. I knew this was just for testing and would never go to production.

And, as aesthetically minded as a developer can be, I felt the lower left corner was visibly empty and unbalanced; it could have something else there, but it couldn't be too "loud" in terms of graphics, so I decided that an embossed figure would be OK-ish. As I started drumming my fingers I heard someone around the office saying something... Linux..., and again the word Linux, so I thought it needed to be something Linux-related, obviously. But there is no Linux text logo, no official one at least; the only thing was Tux. So I placed that embossed Tux, proud that at least I, my colleagues and some guys at Banrisul would see what we had achieved. Again, I knew that this was for demo day and that in production the clean face would be back.

Then the day of the demo and approval came. My manager from Banrisul came back and said everyone was happy with the results, everything worked as expected, with only one single remark (which I was already expecting): the logo had to go.

The CONECTIVA logo.

Not one single remark about that embossed shadow Tux there.

And then again, the machine went to a bank office for a real public test; again, no remark about the Tux logo, and some people outside even noticed the penguin.

The rest is history. I left Banrisul after the work and went back to Conectiva engineering and KDE, and several other Conectiva staff went there to finish the code they knew better than me, polishing or removing old DOS tidbits. And 15 years later, you can still see some Tux happily providing money and services to customers.

I remember the day John MadDog took that picture at one FISL; I remember a crazy Miguel de Icaza jumping over the machine taking pictures as well at FISL. Banrisul was smart to place a machine right beside the stairs at the entrance of FISL, where thousands of geeks passed by daily at every conference.

Never intended, well executed 😀

Hitting the ground running

Mon, 05/25/2015 - 09:06

Today is officially the first day of coding for this year's Google Summer of Code. For the next three months I will be working on bringing animation to Krita. There's a lot of work ahead, but I have a solid plan to work with.

[Image: Timeline docker wireframes]

In addition to the implementation plan from our sprint, we have been discussing the user interface design with some of the animators among our users. Scott Petrovic has made some very nice wireframes based on these. The discussion is still ongoing and constructive feedback is always welcome.

Even though coding officially starts today, I am not starting everything from scratch. As mentioned in my previous post, I have a partially working prototype to build upon. One can already add, move, delete and duplicate keyframes on a paint layer, as well as play the animation in real time. The animation can also be saved and loaded, albeit in an experimental file format.

However, the code is still in a rough state. There are a number of major issues with it, including crashes and even data loss. Due to a number of technical shortcuts taken for the sake of faster prototyping, it is cumbersome and unintuitive to use in places. For instance, in order to play the animation, one must have visited each frame in order to populate the playback cache. In short, it's a minefield of bugs and missing features.

I will start this week by finishing a refactoring of the prototype towards the final design and looking into some of the major issues, especially one relating to data loss with undo/redo operations. Hopefully in a couple of weeks I can get to implementing new features. I for one am looking forward to seeing fully functional animation playback and onion skinning in Krita.

Interview with Griatch

Mon, 05/25/2015 - 09:01

Surprise of Trolls, by Griatch

Could you tell us something about yourself?

I, Griatch, am from Sweden. When not doing artwork I am an astrophysicist, mainly doing computer modeling of astronomical objects. I also spend time writing fiction, creating my own music and being the lead developer of Evennia, an open-source, professional-quality library for creating multiplayer text games (muds). I also try to squeeze in a roleplaying game or two now and then, as well as a beer at the local pub.

Do you paint professionally, as a hobby artist, or both?

A little bit of both. I’m mainly a hobby painter, but lately I’ve also taken on professional work and I’m currently commissioned to do all artwork and technical illustration for an upcoming book on black holes (to be published in Swedish). Great fun!

What genre(s) do you work in?

I try to be pretty broad in my genres and have dabbled in anything from fantasy and horror to sci-fi, comics and still life. I mostly do fantasy, sci-fi and other fantastical imagery but I often go for the mundane aspects of those genres, portraying scenes and characters doing non-epic things. I try to experiment a lot but like to convey or hint at some sort of story in my artwork.

Whose work inspires you most — who are your role models as an artist?

There are too many to list, including many involved in the Krita project! One thing you quickly learn as an artist (and in any field, I’ve found) is that no matter how well you think you are doing for yourself, there are always others who are way better at it. Which is great since it means you can learn from them!

How did you get to try digital painting for the first time?

I did my first digital drawing with a mouse on an Amiga 500 back in the mid-nineties. I used the classical program Deluxe Paint. You worked in glorious 32 colours (64 with the “halfbrite” hardware hack) on a whopping 320×240 pixel canvas. I made fantasy pictures and a 100+ frame animation in that program, inspired by the old Amiga game Syndicate.

But even though I used the computer quite a bit for drawing, digital art was at the time something very different from analogue art – pixel art is cool but it is a completely separate style. So I kept doing most of my artwork in traditional media until much later.

What made you choose digital over traditional painting?

I had painted in oils since I was seven and kept doing so up until my university years. I dropped the oils when moving to a small student apartment – I didn’t have the space for the equipment nor the willingness to sleep in the smell. So I drew in charcoal and pencils for many years. I eventually got a Linux machine in the early 2000s, and whereas my first tries with GIMP were abysmal (it was not really useful for me until version 2+), I eventually made my first GIMP images, based on scanned originals. When I got myself a Wacom Graphire tablet I quickly transitioned to using the computer exclusively. With the pen I felt I could do pretty much anything I could on paper, with the added benefits of undo’s and perfect erasing. I’ve not looked back since.

How did you find out about Krita?

I’ve known about Krita for a long time, I might have first heard about it around the time I started to complement my GIMP work with MyPaint for painting. Since I exclusively draw in Linux, the open-source painting world is something I try to keep in touch with.

What was your first impression?

My first try of Krita was with an early version, before the developers stated their intention of focusing on digital painting. That impression was not very good, to be honest. The program had a very experimental feel to it and felt slow, bloated and unstable. The kind of program you made a mental note of for the future but couldn’t actually use yet. Krita has come a long way since then and today I have no stability or performance issues.

What do you love about Krita?

Being a digital painter, I guess I should list the brush engines and nice painting features first here. And these are indeed good. But the feature I find myself most endeared with is the transform tool. After all my years of using GIMP, where applying scale/rotate/flip/morph etc is done by separate tools or even separate filters, Krita’s unified transform tool is refreshing and a joy to use.

What do you think needs improvement in Krita? Is there anything that really annoys you?

I do wish more GUI toolkits would support the GTK2 style of direct keyboard shortcut assignment: hover over the option in the menu, then press the key combination you want to assign to that menu item. Fast and easy, no scrolling or searching through lists of functions deep in the keyboard shortcut settings. I would also like to see keyboard shortcuts assigned to all the favourite brushes, so you can swap mid-stroke rather than having to move the pen around on the pop-up menu.

Apart from this, with the latest releases most of my previous reservations about the program have actually melted away. Besides stability concerns, another reason I was slow to adopt Krita in the past was that Krita seems to want to do it all. Krita has brushes, filters, even vector tools under the same umbrella. I did (and still often do) my painting in MyPaint, my image manipulation in GIMP and my vector graphics in Inkscape – each doing one aspect very well, in traditional Unix/Linux fashion. For the longest time Krita’s role in this workflow was … unclear. However, the latest versions of Krita have improved the integration between its parts a lot, making it actually viable for me to stay in Krita for the entire workflow when creating a raster image.

The KDE forum and bug-reporting infrastructure that Krita relies on effectively hides it from view as one of many KDE projects. Compared to the pretty and modern Krita main website, the KDE web pages you reach once you dive deeper are bland and frankly off-putting, a generic place to which I have no particular urge to contribute. That the Krita KDE forum can’t even downscale an image for you, but requires you to rescale it yourself before uploading, is so old-fashioned that it’s clear the place was never originally intended to host art. So yes, this part is an annoyance, unrelated to the program itself as it is.

What sets Krita apart from the other tools that you use?

The transform tool mentioned above and the sketch-brush engine which is great fun. The perspective tool is also a very cool addition, just to name a few things. Krita seems to have the development push, support and ambition to create a professional and polished experience. So it will be very interesting to follow its development in the future.

If you had to pick one favourite of all your work done in Krita so far, what would it be, and why?

“The Curious Look”. This is a fun image of a recurring character of mine. Whereas I had done images in Krita before, this one was the first I decided to make in Krita from beginning to end.

What techniques and brushes did you use in it?

This is completely hand-painted, using only one of Krita’s sketch brushes, which I was having great fun with!

Where can people see more of your work?

You can find my artwork on DeviantArt here: http://griatch-art.deviantart.com/
I have made many tutorials for making art in OSS programs: http://griatch-art.deviantart.com/journal/Tutorials-237116359
I also have a Youtube channel with amply commented timelapse painting videos: https://www.youtube.com/user/griatch/videos

Anything else you’d like to share?

Nothing more than wishing the Krita devs good luck with the future development of the program!

A neat UNIX trick for your bash profile

Sun, 05/24/2015 - 15:12

Hi folks! I have been spending a lot of time with KStars lately. I will write a detailed account of the work done so far, but here’s something I found interesting. This, I think, is a handy ‘precautionary’ trick that every newbie should adopt to avoid pushing to the wrong git repo or branch.

Here’s what you do. Open up Konsole and type cd ~ (this should take you to your home directory). Now what we need to do is add one line to your .bashrc file.

$ nano .bashrc

This opens your .bashrc file in the nano editor (you could use vim or emacs instead).

Add the line export PS1='\W$(__git_ps1 "(%s)")> ' to the part of the ‘if’ block that pertains to bash completion. In my case, this is how my .bashrc looks.

  if [ -f /usr/share/bash-completion/bash_completion ]; then
    . /usr/share/bash-completion/bash_completion
    # Show the current git branch in the prompt
    export PS1='\W$(__git_ps1 "(%s)")> '
  elif [ -f /etc/bash_completion ]; then
    . /etc/bash_completion
  fi
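
If __git_ps1 is not defined in your shell, the prompt will just print an error instead of the branch. The function comes from git’s own prompt helper (git-prompt.sh in git’s contrib directory); the path below is the one used on Debian/Ubuntu and is an assumption that may differ on your distribution. Here is a minimal sketch that loads it explicitly and also enables the dirty-state marker:

  # Load __git_ps1 explicitly if it is not already defined
  if ! type -t __git_ps1 > /dev/null; then
    # Assumed Debian/Ubuntu location of git's prompt helper; adjust for your distro
    [ -f /usr/lib/git-core/git-sh-prompt ] && . /usr/lib/git-core/git-sh-prompt
  fi
  # Append * (unstaged) or + (staged) next to the branch name
  export GIT_PS1_SHOWDIRTYSTATE=1
  export PS1='\W$(__git_ps1 "(%s)")> '

With the dirty-state flag on, the prompt reads something like kstars(master *)> while you have uncommitted changes.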

What this does is change the prompt in your Konsole. Whenever you enter a git repository, the prompt shows the current directory name together with the git branch you are currently on (hence the "(%s)" format string passed to the __git_ps1 function). This is how my kstars repository now looks:

~> cd Projects/kstars/
kstars(gsoc2015-constellationart)> git branch
* gsoc2015-constellationart
  master
kstars(gsoc2015-constellationart)> git checkout master
Switched to branch 'master'
Your branch is up-to-date with 'origin/master'.
kstars(master)>

Now you always know which branch you are on without typing git branch. Pretty neat! 😎
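
Since the whole point is to avoid pushing to the wrong repository, you can take the precaution one step further with a small confirmation helper in the same .bashrc. This is only a sketch: gpush is a made-up name, not a git command, and you may prefer a plain git push once you trust your prompt.

  # Show the branch and the remote URL, then ask before actually pushing
  gpush() {
    local branch remote url answer
    branch=$(git rev-parse --abbrev-ref HEAD) || return 1
    remote=${1:-origin}
    url=$(git config --get "remote.$remote.url")
    echo "About to push '$branch' to $remote ($url)"
    read -p "Continue? [y/N] " answer
    [ "$answer" = "y" ] && git push "$remote" "$branch"
  }

Typing gpush inside the kstars checkout above would print the branch and remote URL and wait for a y before doing anything.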
