The main new feature of this release is a new tool that allows you to easily extract data from image files – the datapicker. This tool was contributed by Ankit Wagadre during GSoC 2015, who continued to work on it even after the summer program was over (see his final report).
The process of data extraction consists mainly of the following steps:
- Import an image containing plots and curves where you want to read the data points from
- Select the plot type (cartesian, polar, etc.)
- Select three reference points and provide values for them. With the help of these points, the logical coordinate system is determined.
- Create a new datapicker curve and set the type of the error bars
- Switch to the mouse mode “Set Curve Points” and start selecting points on the imported image – the coordinates of the selected points are determined and added to the spreadsheet “Data”
It is possible to add more than one datapicker curve. This is useful in case the imported image contains several curves that need to be digitized. The datapicker curve currently selected in the Project Explorer is the “active” one – points clicked on the datapicker image are calculated and added to its data spreadsheet. Calculated values are stored in different columns in the data spreadsheets of the datapicker. These columns behave exactly the same as columns in usual spreadsheets and can be directly used as source columns for curves in your own plots.
The datapicker supports the process of data extraction with several helpers. To place the points more precisely, a magnifying glass with different magnification levels is available. Also, the last selected point can be shifted with the help of the navigation keys. Furthermore, when reading data points that have error bars, the datapicker automatically creates bars indicating the end points of the error bars. Those bars can be pulled with the mouse until the required length (the distance to the data point) is reached.
The procedure for extracting data from an imported plot as described above is feasible when dealing with a limited number of points. In case the curves in the imported image are given as solid lines, the datapicker tool in LabPlot allows you to read them (semi-)automatically. For this, after a new datapicker curve has been added as described above, switch to the mouse mode “Select Curve Segments”. The curves on the plot are recognized and highlighted. By clicking on a highlighted curve (or one of its segments), points along this curve are created. The length of a segment and the density of the created points (the separation between two points) are adjustable parameters.
In many cases the plot is not as simple as above (a single black curve on a white background) and contains grid lines, many curves of different colors and thicknesses, and a non-white background. In such a case the automatic detection fails (too many or no objects are highlighted). To help the datapicker determine the curve(s) correctly, the user has to limit the allowed ranges in the HSV (or HSI) colour space. To subtract a non-white background, it is possible to limit the range for the foreground colour, too. Internally, each pixel of the image is converted to black and white, where only the pixels fitting into the user-defined ranges for hue, saturation, value, intensity and foreground are set to black.
In the screenshots below, the blue curves in the original image were isolated by appropriately reducing the allowed ranges in the colour space (note the peak for blue in the histogram for the hue). The transformed black-and-white image contains only the curves the user is interested in, and it is now an easy task for the datapicker to determine the curves and to place points on them.
As another new feature in this release, a custom point was implemented, which allows the user to add a symbol to the plot at a user-defined position; see the screenshot below:
Such a point can be moved freely around the plot area with the mouse, or by directly specifying its coordinates in the corresponding dock widget.
LabPlot now accepts drag&drop events. You can drag the file you want to import from your file manager and drop it on LabPlot – the import dialog will pop up and you can proceed with importing the data as usual.
Finally, some speed-up in the rendering of the image view of the matrix was achieved by utilizing multi-threading, and support for GSL 2.x was added. The release also includes several bug fixes; the details can be found in the ChangeLog file.
10 places to get free 3d printing files
Well, like I said in my last post, 3D printing isn’t only for printing Eiffel Towers, and Thingiverse isn’t the only website where you can find 3D models to print.
Ark, the file archiver and compressor developed by KDE, has seen the addition of several new features as well as bugfixes in Applications 16.04. This blog post gives a short summary of the changes.
New features

Properties dialog
Ark got a properties dialog that shows various information about the currently opened archive. This includes e.g. archive type, compressed and uncompressed size, as well as MD5, SHA-1 and SHA-256 cryptographic hashes. The hashes can be selected with the mouse for easy copying. The properties dialog can be accessed in the Archive menu or using the keyboard shortcut ALT+RETURN.

Ark’s new Properties dialog.
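If you want to cross-check the hashes shown in the dialog, the standard command-line tools produce the same values. A minimal sketch (using a stand-in file, `sample.txt`):

```shell
# compute the SHA-256 checksum of a file, as displayed in Ark's properties dialog
printf 'hello' > sample.txt
sha256sum sample.txt
# → 2cf24dba5fb0a30e26e83b2ac5b9e29e1b161e5c1fa7425e73043362938b9824  sample.txt
```

`md5sum` and `sha1sum` work the same way for the other two hashes.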
Unarchiver is a free and open-source archive decompressor that supports e.g. RAR archives. A new plugin for this decompressor was developed by Elvis Angelaccio and added in Ark 16.04. Ark previously required unrar for opening/decompressing RAR archives. The new plugin is now used if unrar is not installed on the system. This is relevant because unrar is not completely free software and hence not easily available in some distributions.
See this blog post by Elvis for more details on the unarchiver plugin.
Support for new compression formats
Ark can now compress/decompress TAR archives using three new compression formats: lzop, lzip and lrzip. Support for lrzip requires the lrzip executable to be installed. Additionally, support for creating tar.Z archives had been broken for some time, but this should now be fixed.

Ark’s New Archive dialog showing support for several new compression formats.
Runtime check for executables
Ark now checks whether the executables needed to handle certain archive types are installed. If an executable is not found in the PATH, the corresponding formats are not displayed in the Open/New dialogs. Previously, an error would be displayed if the user attempted to open/create an archive for which a needed executable was not installed.
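The idea is the same as the classic shell idiom for detecting a helper binary; a minimal sketch (using lrzip as the example executable):

```shell
# does the helper exist anywhere in PATH? (exit status 0 = found)
if command -v lrzip >/dev/null 2>&1; then
    echo "lrzip found"
else
    echo "lrzip missing"
fi
```

Ark performs an equivalent lookup at startup and simply hides the formats whose helpers are missing.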
Improved password widget
Ark now uses the new password widget developed by Elvis Angelaccio (KNewPasswordWidget) when asking for a password to protect an archive. This means e.g. that the user gets nice color feedback when the second entered password is different from the first. There is also a “Show password” icon that can be clicked.

Ark’s new password widget showing a red background due to non-identical passwords.
Polishing of the user interface
Ark’s menus and toolbars were polished to hopefully achieve a more user-friendly, intuitive and modern interface.
Firstly, the status bar is now only displayed when a job is executing. This makes sense since it was only being used for displaying a progress bar when a job was running.
The menu system was re-organized. There is now an Archive and a File menu, which contain actions affecting the archive and files within the archive, respectively. Also, the toolbar was restructured to be less cluttered.

The user interface of Ark was polished in 16.04. For example, the statusbar is now hidden and the menus and toolbar were reorganized.
Ark is now increasingly using its message widget instead of message boxes for displaying error, warning or information messages to the user. Additionally, various error messages were improved to be more user-friendly.

The message widget showing an information message to the user.
A bunch of bugs were fixed in the 16.04 release, several of them 6-7 years old. The bugfixes are too numerous to list here, but some of the most important ones are mentioned below.
When extracting, Ark now presents an error message if the destination partition is full. Previously, Ark would either fail silently or the user interface would remain in a “busy” state indefinitely.
The quick-extract menu is used to quickly extract to destination folders that have been used previously. This menu had not worked since the port to KDE Frameworks 5 in Applications 15.04. Thanks to new Ark contributor Chantara Tith (tctara), the menu is now fixed and works properly.
Other bugfixes include: extracting huge files via drag’n’drop no longer fills up the memory, DrKonqi is used again for handling crashes, and overwriting archives now works as intended.
Testing and feedback
The 16.04 beta is now out, while the release candidate will be out on April 6th and the final release on April 20th. Please test the new features and provide feedback either as comments on this blog post or as bugs on KDE’s bugzilla. Hopefully, we can squash some more bugs before the final release.
There are several new features planned for Ark 16.08. These include the ability to set the compression level when creating new archives, and a plugin configuration page to allow users to e.g. select which plugins to use for various archive types.
Thanks to Elvis Angelaccio and Chantara Tith for their development work and Thomas Pfeiffer for providing feedback on user interface changes.
Today I released Yokadi 1.0.2, the command-line todo list. It comes with a few fixes:
- Use a more portable way to get the terminal size. This makes it possible to use Yokadi inside Android terminal emulators like Termux
- Sometimes the task lock used to prevent editing the same task description from multiple Yokadi instances was not correctly released
- Deleting a keyword from the database caused a crash when a t_list returned tasks which previously contained this keyword
Download it or install it with pip3 install yokadi.
Could you tell us something about yourself?
Well I’ve been drawing for most of my life and I’ve always wanted to make art for games, so I pursued a degree at Collins College and I graduated with my Bachelor of Science in Game Design.

Do you paint professionally, as a hobby artist, or both?
Both! I’m freelance at the moment. While I enjoy drawing for fun, and most of my experience has been from doing it as a hobby, I’m more than ecstatic to draw something for someone, and when they’re offering to pay for it, it does give that motivation to make it the best I can do. After all, if you’re paying for something, you expect the artist to put in their heart and soul!

What genre(s) do you work in?
Would you consider “Anime and Manga Inspired” as a genre? Most of my art borrows from Anime styles, typically Takehito Harada’s style.
Other than that, I can usually imitate any genre and style if I really want to.

Whose work inspires you most — who are your role models as an artist?
Takehito Harada is hands down my favorite artist; I own all of his art books and study his drawing style a lot. I hope to be able to replicate his style someday.

How and when did you get to try digital painting for the first time?
The first time I ever painted digitally was in 2013. I had just started my college classes and had gotten my first graphics tablet ever; before that I mostly stayed away from digital painting, since I lacked a tablet and using vectors or a mouse was foreign and confusing to me.

What makes you choose digital over traditional painting?
It’s so much more convenient! I had always wanted to go digital because it has nigh-infinite resources, coupled with the benefit of a wonderful community out there that creates many cool tools and brushes, which overall makes the drawing experience a blast. Before digital I could spend up to two days working on a single piece; now I can finish a drawing in hours. It is a real time saver.

How did you find out about Krita?
My college instructor and fellow classmates showed me the Kickstarter back in May 2015; I had been excited from the very first moment I saw it.

What was your first impression?
It looked really cool, and both my instructor and classmates seemed impressed by the program, which of course had an effect on me since I looked up to them as artists.

What do you love about Krita?
The community and developer support; it’s really awesome and it gives me the encouragement to keep making more works.

What do you think needs improvement in Krita? Is there anything that really annoys you?
The ‘file layer’ tool. It’s a really powerful tool, but it’s a little bit lacking. I originally thought it was just like the “smart object” tool from Photoshop, only to find out you can’t scale or transform your ‘file layer’. That’s a shame, because you could use that tool to draw higher-resolution details in another file, import it, scale it down just like ‘smart objects’ in Photoshop, and retain the detail. I would really like to see that feature improved in the future.

What sets Krita apart from the other tools that you use?
Well for one, it doesn’t take forever to do things in. Even when drawing huge images I don’t get the slowdown or pen lag I get while using other products!

If you had to pick one favourite of all your work done in Krita so far, what would it be, and why?
I’d say it’d be as of this time my most recent finished piece, my Undertale OC Eris Luna.
I used a ton of brushes and played with more features while drawing it, and I got to learn quite a bit from it.

What techniques and brushes did you use in it?
I mostly stuck to the Default Ink brushes and FX Brushes, but I used a handful of the ‘Deevad’ brushes in it as well.
A technique I use for shading involves about three to four layers painted over the image in solid colours set to 25% opacity. Then I blend all of the shading layers out to create a soft effect, or at least the effect I desire. To be honest I don’t really follow any strict techniques; I have a habit of just doing anything necessary to get the picture looking the way I want it!

Where can people see more of your work?
I hope you had a wonderful read. I’m hoping for my Twitch channel to begin growing; right now I’m very small, but I feel that over time, and with the more works I do, I may even become a name people recognize! Thank you for this wonderful opportunity and I look forward to making many more wonderful works in Krita.
The call for papers and the call for tools for WSL – Workshop de Software Livre (Workshop on Free Software), the academic conference held together with FISL – Fórum Internacional de Software Livre (International Free Software Forum), are open!
WSL publishes scientific papers on several topics of interest for free and open source software communities, like social dynamics, management, development processes, motivations of contributors communities, adoption and case studies, legal and economic aspects, social and historical studies, and more.
This edition of WSL has a specific call for tools, for papers describing software. This call ensures that the software described has been peer-reviewed, is consistent with the principles of FLOSS, and that its source code will be preserved and accessible for a long time.
All accepted and presented papers will be published in the WSL open access repository. This year we are working hard to provide an ISSN and DOIs for the publications.
The deadline is April 10. Papers can be submitted in Portuguese, English or Spanish.
See the official WSL page to get more information.
It’s been two weeks since we got back from the Sprint @CERN: we came back with a lot of work done, but there is much more to be done. That’s why I’ve decided to post something about our post-Sprint activity up to now.
These weeks have been full of important work for us, and since I’m part of the editor team, let me say a few words about what we are working on.
We are focusing very much on organizational aspects such as the introduction of new users, our internal structure, solving some bugs related to how math is rendered in some browsers, and the conversion from TeX (or any other format) to MediaWiki. In my opinion, our strength is the fact that we are a young – and extremely motivated – community. All these points are very important to us, since we understand that the future of the community is based on them; that’s why we are trying to do our best to deepen these aspects and not leave anything to chance.
One of the things that I really appreciate is the attention we are paying to new users: we really care about them! We thought about creating the role of “tutors” (i.e. more experienced users) to help newcomers, because we want everybody to be properly introduced to the community and to feel free to ask about any doubts. Moreover, we’ve also decided to make the editing experience even more user-friendly than it is, by using buttons and interactive tools on the personal user page and in the editor environment. In short: new users are really important to us and we care about them, which is why this point has been so central in these weeks.
We are working, we are growing and the best is yet to come: #operation1000 is coming!
(Guest post by Nathan Lovato)
Game Art Quest, the 2D game art training series with Krita, is finally coming out publicly! Every week, two new videos will be available on YouTube: one on Tuesdays, and one on Thursdays. This first volume of the project won’t cost you a penny. You can thank the Kickstarter backers for that: they are the ones who voted for a free release.
The initial goal was to produce a one-hour long introduction to Krita. But right now, it’s getting both bigger and better than planned! I’m working on extra course material and assignments to make your learning experience as insightful as possible.
Note that this training is designed with intermediate digital art enthusiasts in mind. Students, dedicated hobbyists, or young professionals. It might be a bit fast for you if you are a beginner, although it will allow you to learn a lot of techniques in little time.
Subscribe to Gdquest on YouTube to get the videos as soon as they come out: http://youtube.com/c/gdquest
Here are the first two tutorials:
Hurrah, Launchpad now handles git, and can generate daily builds directly from this clone!
So Ubuntu users now have 3 repositories to get the latest Kdenlive:
- ppa:kdenlive/kdenlive-master is the development branch, with the very latest feature additions, to give us feedback on the app’s evolution
- ppa:kdenlive/kdenlive-testing is the feature-frozen branch, starting from the first beta, with bugfix updates as soon as we find solutions
- ppa:kdenlive/kdenlive-stable is the latest robust official release
As all these require Qt5/KF5, builds are activated for Wily (for some time) and Xenial (LTS to be released soon)... Live in the present ;)
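For reference, switching to one of these PPAs is the usual three commands on Ubuntu (shown here for the stable PPA; substitute the repository that fits your needs):

```shell
# add the stable Kdenlive PPA, refresh package lists and install/upgrade
sudo add-apt-repository ppa:kdenlive/kdenlive-stable
sudo apt-get update
sudo apt-get install kdenlive
```

If you previously used another PPA, remove it first with `sudo add-apt-repository --remove` to avoid mixing packages.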
This work is now owned by a team on Launchpad; you are welcome to join if you want to co-maintain the packages. I deleted my own unmaintained PPA and invite users to switch to one of the above.
If daily builds become available on other distributions, maintainers are welcome to advertise their work on our wiki, and we will be glad to relay the info here!
Happy testing and editing!
Can't reach the server from the LAN

Of course, I sync files on my desktop between my laptop and phone. The desktop client is set up with the IP address of the server in my living room. But my phone and laptop, configured to connect to my public DynDNS URL (so they work when I'm traveling), can't connect from the home network. Triple-uncool. I like my photos from my phone to be auto-uploaded when I connect to wifi at home; and more importantly, my laptop should sync when I get home from travel!
Danimo blamed my router - a Cisco (Linksys) E4200. That was (once upon a time) an expensive, high-end router. Sadly, having been abandoned by its manufacturer, it has become an expensive, high-end liability. I can't even log into the administration interface: browsers tell me that the connection is insecure. There are more issues, like the slow WLAN-LAN transfer speeds I experienced, and I'm not even talking about security here. Linus once eloquently expressed his feelings towards NVIDIA, a resentment I now feel towards Cisco.
OpenWRT to the rescue

I learned my lesson. No router that cannot run an open source firmware will get into my house. While I don't feel any need whatsoever to fiddle with things that do their job, Linksys screwed up here: they left me on broken software long before I had any need for new hardware.
After some digging, I learned that TP-Link has been (mostly inadvertently) a decent citizen for OpenWRT fans. So, even if they'd abandon their router like Linksys/Cisco did, there was a future. I bought a TP-Link Archer C7. Affordable and it can run OpenWRT.
After setting it up initially, things worked. For a day. After that, no amount of fiddling could make it work again. Magic. Today I gave up on the original firmware and installed OpenWRT. It was easy - as easy as upgrading to a new TP-Link firmware: download the OpenWRT firmware, go to the upgrade interface, select it, hit start. A while later you can visit the web interface. Which is a tad more complicated, but not much - and noticeably more capable. It didn't take me any longer than on the original firmware to set up my wifi and guest networks.
How to make it work

But that alone didn't solve the problem. I had to resort to a web search and found a neat trick, which I'm happy to share (assuming 192.168.1.11 is your server on your LAN):
- Log into your router over ssh
- Add to your /etc/dnsmasq.conf file the following: address=/example.com/192.168.1.11
- Add to your /etc/hosts file: 192.168.1.11 example.com
Essentially, the DNS server in OpenWRT (dnsmasq) will hand out your local server's address to local clients... It thus breaks when you use a different DNS server than the one provided by the router via DHCP.
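The two configuration steps above boil down to a couple of lines run on the router over ssh (192.168.1.11 and example.com are the example values from this post; substitute your own):

```shell
# answer local DNS queries for example.com with the server's LAN address
echo 'address=/example.com/192.168.1.11' >> /etc/dnsmasq.conf

# also resolve it in the router's own hosts file
echo '192.168.1.11 example.com' >> /etc/hosts

# restart dnsmasq so the new entries are served to DHCP clients
/etc/init.d/dnsmasq restart
```

After the restart, clients on the LAN that use the router for DNS resolve example.com to the local address instead of the public DynDNS one.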
I'd be happy to hear from other and/or better solutions. Heck, this might only work for a day or might be horrible or maybe I changed something else which made it work. What do I know...
For now I have created a Maxima session, and in the upcoming week I will work on the different backends, implementing the properties section and the various actions needed to manage Cantor's session. I hope to integrate Cantor's worksheet completely by the midterm evaluation.
The first phase of GSoC 2015, the community bonding period, has ended. I have tried my best to interact with my mentor and community members, but I was mostly occupied with my university examinations during most of the time.
Looking forward to the coding period I have started working on integration of Cantor's UI into LabPlot. I will push all my commits to branch integrate-cantor of LabPlot.
Looking forward to learning, coding and developing this summer!
Below are the screenshots of LabPlot.
As you can see in the above screenshots, Cantor's session is integrated. The variable manager, Print, Print Preview and all other relevant actions in Cantor have also been implemented in LabPlot. With the implementation of all these, I have successfully achieved my midterm evaluation target. I have also been improving the code implemented so far and incorporating my mentor's suggestions.
I will now move on to extracting variables from Cantor's session so that we can use them to create new plots inside LabPlot. After that I will work on saving Cantor's data along with LabPlot's data, so that users can save and load both worksheets.
I have learned a lot during this journey so far. I learned how discussion plays an important role in the development of software. I learned about some of the best practices that should be followed and their importance in real-life code.
That is not all! My upcoming weeks will see more coding, and hence I am prepared to learn even more during that time.
For this demonstration I will be using Python3 and numpy.
Let's first create a Python3 session inside LabPlot.
I will be referring to the script from here.
So let's execute the script and transfer every variable to lists as LabPlot for now supports only lists and tuple data structure. The screenshot bellow shows the final script when written.
Every variable that is either a list or a tuple is converted to a column inside LabPlot; these columns are child objects of the main CAS session. They can then be used to generate plots as shown below:
Finally, we choose the columns/variables we want to use and plot the graph. The following graph is generated if we plot the "t vs s1" and "t vs s2" columns from the above data:
I have created a parser for every backend that converts the variables to columns, but testing for the Sage and R backends is still left. I will be testing those two parsers extensively during this week.
Below are some more screenshots showing zoomed in and zoomed out plots.
Moreover, the user can now save all the data of the session as well as the plots inside one LabPlot .lml file, and it can be loaded the next time the user wants to use it.
I hope that with the integration of the two applications, users will have a richer experience while plotting and using a CAS.
It was amazing to meet Stephanie and Catherine in person. They shared their GSoC experience and how such a big event has been managed and successfully organised continuously for the past 12 years.
I also met Mike McQuaid and had a nice chat about Homebrew and GitHub (where he is currently working).
It was a great experience, and I hope to stay in touch with all the new developers I met :)
At SMC, we’ve been continuously working on improving the fonts for Malayalam – updating to the newer OpenType standard (mlm2), adding new glyphs, supporting new Unicode points, fixing shaping issues, reducing complexity and the compiled font size, involving new contributors, etc.
Recently, out of scratching my own itch, I decided it was high time to fix an annoyance present in all our fonts: the combination of the Virama (U+0D4D ് ) with quote marks (‘ ” ‘ ’ “ ” etc.) used to overlap into an ugly amalgam. Usually the Virama (് ) connects/combines two consonants, which turns all three into a new glyph – for example സ+്+ന is shaped with a new glyph സ്ന. (Note that you need a traditional orthography font installed to see the distinction in this example; many of them are available here.) The root of the problem is that when the Virama (് ) appears individually in a word such as “സ്വപ്നം”, connecting two consonants like പ and ന, it is positioned above the x-height of most glyphs, and it cannot have much left and right bearing, to avoid ugly spacing between the consonants പ and ന. Because of the small side bearings, when a quote mark follows it, the quote mark gets slightly juxtaposed into the Virama glyph and renders rather badly. The issue is quite prominent when you professionally typeset a book or article in Malayalam using XeTeX or SILE.
FontForge’s tools made it easy to write OpenType lookup rules for horizontal pair kerning, to allow more space between the Virama (് ) and quote marks. You can see the before and after effect of the change with the Rachana font in the screenshot.
Other fonts like AnjaliOldLipi, Meera and Chilanka also got this feature, and those will be available with the new release in the pipeline. I have plans to expand this further to use it with the post-base vowels of വ(്വ) and യ(്യ), given the abundant stacked glyphs that Malayalam has.
We are preparing Kubuntu Xenial Xerus (16.04 LTS) for distribution on April 21, 2016. With this Beta 2 pre-release, you can see what we are trying out in preparation for our next stable version. We would love to get some testing by our users.
Plasma 5, the next generation of KDE’s desktop, has been rewritten to make it smoother to use while retaining the familiar setup. The fifth set of updates to Plasma 5 is the default in this version of Kubuntu. Plasma 5.6 should be available in backports soon after the release of the LTS.
Kubuntu comes with KDE Applications 15.12, containing all your favorite apps from KDE, including Dolphin. Even more applications have been ported to KDE Frameworks 5, but those which aren’t should fit in seamlessly. Non-KDE applications include LibreOffice 5.1 and Firefox 45.
Please see the Release Notes for more details, including where to download and known problems. We welcome help to fix those final issues; please join the Kubuntu-Devel mailing list, or just hop into #kubuntu-devel on freenode to connect with us.
1. Kubuntu-devel mail list: https://lists.ubuntu.com/mailman/listinfo/kubuntu-devel
“OpenBlackMischief PhotoPaintStormTool Kaikai has a frobdinger tool! Krita will never amount to a row of beans unless Krita gets a frobdinger tool, too!”
The cool thing about open source is that you can add features yourself, and even if you cannot code, you can talk directly with the developers about the features you need in your workflow. Try that with closed-source proprietary software! But often the communication goes awry, leaving both users with bright ideas and developers with itchy coding fingers unsatisfied.
This post is all about how to work, first together with other artists, then with developers, to create good feature requests – feature requests that are good enough that they can end up being implemented.
For us as developers it’s sometimes really difficult to read feature requests, and we have a really big to-do list (600+ items at the time of writing, excluding our own dreams and desires). So a well-written feature proposal is very helpful for us and will lodge the idea better into our consciousness. Conversely, a demand for a frobdinger tool because another application has it, so Krita must have it too, is not likely to get far.
Writing proposals is a bit of an art form in itself, and pretty difficult to do right! Asking for a copy of a feature in another application is almost always wrong because it doesn’t tell us the most important thing:
What we primarily need to know is HOW you intend to use the feature. This is the most important part. All Krita features are carefully considered in terms of the workflow they affect, and we will not start working on any feature unless we know why it is useful and how exactly it is used. Even better, once we know how it’s used, we as developers can start thinking about what else we can do to make the workflow even more fluid!
Good examples of this approach can be found in the pop-up palette using tags, the layer docker redesign of 3.0, the onion skin docker, the time-line dockers and the assistants.
Feature requests should start on the forum, so other artists can chime in. What we want is for a consensus about workflow and use-cases to emerge – something our UX people can then try to grok and create a design for. Once the design emerges, we’ll try an implementation, and that needs testing.
For your forum post about the feature you have in mind, check this list:
- It is worth investigating first whether Krita already has similar functionality that might just need to be extended to solve the problem. We in fact kind of expect that you have used Krita for a while before making feature requests. Check the manual first!
- If your English is not very good or you have difficulty finding the right words, make pictures. If you need a drawing program, I heard Krita is pretty good.
- In fact, mock-ups are super useful! And why wouldn’t you make them? Krita is a drawing program made for artists, and a lot of us developers are artists ourselves. Furthermore, this gets past that nasty problem called ‘communication problems’. (Note: If you are trying to post images from photobucket, pasteboard or imgur, it is best to do so with [thumb=][/thumb]. The forum is pretty strict about image size, but thumb gets around this)
- Focus on the workflow. Say you need to prepare a certain illustration, comic or matte painting, and you would be (much) more productive if you could just do whatever it is. Tell us about your problem and be open to suggestions about alternatives. A feature request should be an exploration of possibilities, not a final demand!
- The longer your request, the more formatting is appreciated. Some of us are pretty good at reading long incomprehensible texts, but not all of us. Keep to the ABC of accuracy, brevity, clarity. If you format and organize your request we’ll read it much faster and will be able to spend more time on giving feedback on the exact content. This also helps other users to understand you and give detailed feedback! The final proposal can even be a multi-page pdf.
- We prefer it if you read and reply to other people’s requests rather than start from scratch. For animation we’ve had the request for importing frames, instancing frames, adding audio support, from tons of different people, sometimes even in the same thread. We’d rather you reply to someone else’s post (you can even reply to old posts) than list it amongst other newer requests, as it makes it very difficult to tell those requests apart, and it turns us into bookkeepers when we could have been programming.
Keep in mind that the Krita development team is insanely overloaded. We’re not a big company, we’re a small group of mostly volunteers who are spending way too much of our spare time on Krita already. You want time from us: it’s your job to make life as easy as possible for us!
So we come to: Things That Will Not Help.
There’s certain things that people do to make their feature request sound important but are, in fact, really unhelpful and even somewhat rude:
- “Application X has this, so Krita must have it, too”. See above. Extra malus points for using the words “industry standard”, double so if it refers to an Adobe file format.
We honestly don’t care if application X has feature Y, especially as long as you do not specify how it’s used.
Now, instead of thinking ‘what can we do to make the best solution for this problem’, it gets replaced with ‘oh god, now I have to find a copy of application X, and then test it for a whole night to figure out every single feature… I have no time for this’.
We do realize that for many people it’s hard to think in terms of workflow instead of “I used to use this in ImagePainterDoublePlusPro with the humdinger tool, so I need a humdinger tool in Krita” — but it’s your responsibility when you are thinking about a feature request to go beyond that level and make a good case: we cannot play guessing games!
- “Professionals in the industry use this”. Which professionals? What industry? We cater to digital illustrators, matte painters, comic book artists, texture artists, animators… These guys don’t share an industry. This one is peculiar because it is often applied to features that professionals never actually use. There might be hundreds of tutorials for a certain feature, and it still isn’t actually used in people’s daily work.
- “People need this.” For the exact same reason as above. Why do they need it, and who are these ‘people’? And what is it, exactly, what they need?
- “Krita will never be taken seriously if it doesn’t have a glingangiation filter.” Weeell, Krita is quite a serious thing, used by hundreds of thousands of people, so whenever this sentence shows up in a feature request, we feel it might be a bit of emotional blackmail: it tries to get us upset enough to work on it. Think about how that must feel.
- “This should be easy to implement.” Well, the code is open and we have excellent build guides, so why doesn’t the feature request come with a patch then? The issue with this is that very likely it is not actually all that easy. Telling us how to implement a feature based on a guess about Krita’s architecture, instead of telling us the problem the feature is meant to solve, makes life really hard!
A good example of this is the idea that because Krita has an OpenGL accelerated canvas, it is easy to have the filters be done on the GPU. It isn’t: the GPU accelerated canvas is currently pretty one-way, and filters would be a two-way process. Getting that two-way process right is very difficult, and is also the difference between GPU filters being faster than regular filters or being unusable. And that problem is only the tip of the iceberg.
Some other things to keep in mind:
- It is actually possible to get your needed features into Krita outside of the Kickstarter sprints by funding it directly via the Krita foundation, you can mail the official email linked on krita.org for that.
- It’s also actually possible to start hacking on Krita and make patches. You don’t need permission or anything!
- Sometimes developers have already had the feature in question on their radar for a very long time. Their thinking might already be quite advanced on the topic and then they might say things like ‘we first need to get this done’, or an incomprehensible technical paragraph. This is a developer being in deep thought while they write. You can just ask for clarification if the feedback contains too much technobabble…
- Did we mention we’re overloaded already? It can easily be a year or two, three before we can get down to a feature. But that’s sort of fine, because the process from idea to design should take months to a year as well!
To summarize: a good feature request:
- starts with the need to streamline a certain workflow, not with the need for a copy of a feature in another application
- has been discussed on the forums with other artists
- is illustrated with mock-ups and examples
- gets discussed with UX people
- and is finally prepared as a proposal
- and then it’s time to find time to implement it!
- and then you need to test the result
(Adapted from Wolthera’s forum post on this topic).
So Plasma 5.6 has seen the revival of the Weather widget that is part of KDE Plasma Add-ons module (revival as in: ported from Plasma4 API to Plasma5 API).
(And even got the interesting honour to be mentioned as the selling item in the headline of a Plasma 5.6 release news article by a bigger German IT newsite, oh well :) )
For 5.6, this revival concentrated on restoring the widget to its old state, without any bugs. But that’s not where things are to end…
And you, yes, you, are invited and even called upon to help with improving the widget, especially with connecting to more weather data providers, incl. your favourite one.
For a start, let’s list the current plans for the Weather widget when looking at the next Plasma release, 5.7:
- Overhaul of look (IN PROGRESS)
- Breeze-style weather state icons (IN PROGRESS)
- also temperature displayed in compact widget variant, like on panel (TODO)
- support for more weather data providers (YOUR? TODO)
- privacy control which weather data providers are queried on finding weather stations (YOUR? TODO)
The KDE meta sprint at CERN (of the groups WikiToLearn, Plasma, VDG, techbase wiki cleanup) at the beginning of this March, where I mainly went for techbase wiki cleanup work, was of course a great occasion to also work face-to-face with Plasma developers and VDG members on issues around the Weather widget, and so we did. Marco helped me to learn more about Plasma5 technologies, which resulted in some small bugs being fixed in the widget still in time for the Plasma 5.6 release.
Ken presented me the drafts for the look & design and the config of the weather widget that he had prepared before the sprint, which we then discussed. Some proposals for the config could already be applied in time for Plasma 5.6 release (those without need for changed UI texts, to not mess the work of the translators). The rest of it, especially the new look & layout of the widget, is soon going to be transferred from our minds, the notes and the photos taken from the sketches on the whiteboard at CERN into real code.
Ken also did a Breeze style set of the currently used weather type icons. While touching the icons, a few icon naming issues are going to be handled as well (e.g. resolving an inconsistent naming pattern or deviation from the weather icon names actually defined in the XDG icon naming spec (see Table 12. Standard Status Icons)). This should soon be done as well.

Temperature display in compact widget variant
One thing that has been missing also from the old version of the Weather widget is the display of the current temperature in the compact widget variant, where now only the current weather condition is shown (e.g. when on the panel). While some weather data providers (like wetter.com) do not provide that detail, so there is nothing to display, others do, and often it is pretty nice to know (a clear sky can happen with temperatures of any kind, so it’s good information for deciding whether, and which kind of, jacket to put on before stepping outside). First code is drafted.
Now, finally, the areas where you can improve things for yourself and others:

Supporting more weather data providers
The Weather widget does not talk to any weather data providers directly. Instead it talks to a weather dataengine (currently part of the Plasma Workspace component), to query for any weather stations matching the entered location search string when configuring the widget and to subscribe to the data feed for a given weather station from a given weather data provider.
That weather dataengine itself again also does not talk directly to any weather data providers. Instead it relies on an extendable set of sub-dataengines, one per weather data provider.
The idea here is to have, in the one weather dataengine, an abstraction layer (heh, after all this is KDE software ;) ) which maps all weather data services onto a generic one, so any UI, like the Weather widget, only needs to talk one language to whoever delivers the data.
Which works more. Or less. Because of course there are quite some weather data specifications out there, what else did we expect :P And possibly the spec interpretations also vary as usual.
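To illustrate the idea of that abstraction layer, here is a self-contained C++ sketch of how provider-specific condition strings could be folded into one generic vocabulary. All names here (the enum, the tables, the raw condition strings) are made up for illustration; the real generic keys live in the weather dataengine and ion sources, not in this snippet.

```cpp
#include <map>
#include <string>

// Illustrative stand-in for the engine's generic weather vocabulary.
enum class Condition { ClearSky, FewClouds, Rain, Snow, Unknown };

// Each provider ("ion") reports conditions in its own vocabulary; the
// abstraction layer's job is to fold them all into one generic set, so
// consumers never need provider-specific code.
Condition normalizeCondition(const std::string &provider, const std::string &raw)
{
    // Tiny illustrative per-provider tables; real mappings are far larger.
    static const std::map<std::string, Condition> envcan = {
        {"Sunny", Condition::ClearSky},
        {"Mainly Sunny", Condition::FewClouds},
        {"Rain", Condition::Rain},
        {"Snow", Condition::Snow},
    };
    static const std::map<std::string, Condition> noaa = {
        {"Fair", Condition::ClearSky},
        {"Partly Cloudy", Condition::FewClouds},
        {"Light Rain", Condition::Rain},
        {"Snow Showers", Condition::Snow},
    };
    const auto &table = (provider == "envcan") ? envcan : noaa;
    const auto it = table.find(raw);
    return it != table.end() ? it->second : Condition::Unknown;
}
```

The varying specs mentioned above show up exactly in such tables: two providers rarely agree on strings, units or granularity, which is why every new ion needs its own mapping work.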
((You might think: “Well, screw that over, there is only one user of the weather dataengine, so integrate that directly into the Weather widget!”
While that might be true right now, it does not need to stay this way. There are ideas like showing the weather forecast also with the days in Plasma calendar widgets. Or having a wallpaper updater reflecting the current weather by showing matching images, yes, not everyone has the joy to work close to a window, enjoy if you do. And also alternative weather widgets with another UI, remember also the WeatherStation widget in kdeplasma-addons (still waiting for someone to port it), which focussed on details of the current weather. These kind of usages might prefer a normalized model of weather data as well, instead of needing custom code for each and any provider again. Actually long-term such a model would be interesting outside of Plasma spheres, like for any calendaring apps or e.g. a plugin for Marble-based apps showing weather states over a map))
While I only took over a kind of maintainership in the last weeks, and so did not design the current system myself, I still think it’s a good approach, having in mind reusable UIs and relative independence from any given weather data provider. So for now I do not plan to dump it, simply for lack of a more promising alternative.
So, given you are still reading this and thus showing me and yourself your interest :) let’s have a closer look:
The sub-dataengines for the different weather data providers have been named “ion”s. On the code level they are subclasses of a class IonInterface, which itself is a subclass of Plasma::DataEngine.
See here for the header defining IonInterface: https://quickgit.kde.org/?p=plasma-workspace.git&a=blob&f=dataengines%2Fweather%2Fions%2Fion.h
This header and the respective lib libweather_ion are public and should be installed with the devel packages of Plasma Workspace. This means you should be able to develop your custom ion as a 3rd-party developer without the need to check out the plasma-workspace repository and develop it in there.
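To give a feel for the shape of an ion, here is a self-contained sketch of the pattern. Be warned that the base class below is a plain stdlib stand-in, not the real IonInterface (which derives from Plasma::DataEngine and uses Qt types), and the pipe-separated source naming is only modeled loosely on the real code: check ion.h and the existing ions for the exact API before writing your own.

```cpp
#include <map>
#include <string>

// Stand-in for IonInterface / Plasma::DataEngine: sources are identified by
// strings, and the engine publishes key/value data per source. This mock only
// exists to make the sketch compile without the Plasma devel headers.
class FakeIonBase {
public:
    virtual ~FakeIonBase() = default;
    // Called when a consumer (e.g. the Weather widget) requests a source.
    virtual bool updateIonSource(const std::string &source) = 0;
    // Data published so far, keyed by source name.
    std::map<std::string, std::map<std::string, std::string>> published;
protected:
    void setData(const std::string &source, const std::string &key,
                 const std::string &value) {
        published[source][key] = value;
    }
};

// A toy ion for a fictional provider "examplewx". A real ion would issue an
// asynchronous network request here and publish the data when the reply
// arrives, rather than answering synchronously.
class ExampleIon : public FakeIonBase {
public:
    bool updateIonSource(const std::string &source) override {
        // The real engine uses a similar "ion|task|argument" source string
        // convention; here only one hard-coded weather request is handled.
        if (source == "examplewx|weather|Berlin") {
            setData(source, "Temperature", "21");
            setData(source, "Condition", "clear sky");
            return true;
        }
        return false; // unknown station or malformed source string
    }
};
```

The real plugin is additionally registered with the dataengine plugin machinery so the weather dataengine can discover it; again, the existing ions in plasma-workspace are the authoritative reference for that boilerplate.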
Plasma 5.6 itself installs three such ions:
- wetter.com: adapter to service of the private company running wetter.com
- envcan: adapter to service of Environment Canada, by Government of Canada
- noaa: adapter to service of USA’s National Oceanic and Atmospheric Administration
Find their sources here: https://quickgit.kde.org/?p=plasma-workspace.git&a=tree&f=dataengines%2Fweather%2Fions
In that source folder you will also spot a bbcukmet ion, for the BBC weather service. While it was ported to at least build and install with Plasma5, the service API of BBC as used by the code seems to have been either changed or even removed. So that ion is currently disabled from the build (uncomment the #add_subdirectory(bbcukmet) to re-add it to the build). Investigations welcome.
Another old ion, though one already removed during the revival, was a more fun one: there used to be a Debian “weather” service (random related blog post), which reported the status of Debian-based distros, by number of working packages, as weather reports, and this ion connected to it. But the service died some years ago, so the ion was just dead code (find the unported code in versions of “Plasma 5.5” and before).
Talking about funny weather reports: why not write an ion for the CI system Jenkins, e.g. with build.kde.org, which while perhaps not being able to give forecasts at least reports the current build state, with builds mapped to stations. After all the short report for a build uses the weather metaphor, see https://build.kde.org/ :)
For more serious weather data provider ions again, as you surely know or can guess, there are many more public & private weather data providers than the 3 currently supported. And they not only may have better coverage of your local area, but might also provide more, or better-suited, data.
Please also see the proposal for using “weather data from the open data initiatives” in a comment on the first blog post on the Weather widget.
It would be great to have a larger selection of weather data providers supported in Plasma 5.7. So while having your ion as 3rd-party plugin somewhere else is fine, consider maintaining it inside one of the Plasma repositories, either with the existing ions in the repo plasma-workspace or as part of addons in the repo kdeplasma-addons. This should ensure more users and also more co-developers.
Do a good check of the licenses for using the data services of the weather data providers. Especially public ones should be fine given their purpose, but if in doubt after reading the details, ask the providers.

Privacy control
One issue in the old and current Weather widget code is that when searching for a weather station suiting the user’s desire, as expressed by the location search string, all currently installed ions are queried. Which of course is a problem from a privacy point of view. And it will get worse the more ions there are.
So there needs to be a way to limit the scope of ions that would be queried. Given that dataengines by design should be usable by all kinds of dataengine consumers, a Weather widget-only solution might only be a short jump here. There are very possibly other Plasma modules talking to remote services as well, like geolocation services. And ideally one would be able to control system-wide (so even beyond Plasma scope) which remote services are allowed and which are not.
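Whatever UI the privacy control eventually gets, the underlying mechanism could be as simple as an allowlist that is consulted before the station search fans out to the ions. A minimal sketch, with entirely hypothetical names (no such function exists in the widget today):

```cpp
#include <set>
#include <string>
#include <vector>

// Hypothetical privacy filter: of all installed ions, only those the user
// explicitly allowed are queried during a weather station search.
std::vector<std::string> ionsToQuery(const std::vector<std::string> &installed,
                                     const std::set<std::string> &allowed)
{
    std::vector<std::string> result;
    for (const auto &ion : installed) {
        if (allowed.count(ion)) {
            result.push_back(ion); // user opted in to this provider
        }
    }
    return result;
}
```

The hard part is not this filtering but deciding where the allowed-set lives (per widget, per Plasma, system-wide) and how the user edits it.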
Until such a global solution exists, a Weather widget-only solution is surely better than nothing. Still, it needs to be designed UI-wise, so this is a job to be picked up by someone :)

Getting started with your own ion
So while I am currently impeded these days from hacking, among other things by waiting for new development-capable hardware to be delivered (disadvantage of small towns: you need to use an online shop for special hardware desires, and there the latency unit is days, not minutes, which is especially bad when the wrong stuff is delivered and then holidays also get into the game; looking on the bright side of life, my old hardware only broke right after the CERN sprint, not before :) )…
… Do not hesitate to look into things already. I would have liked to provide a KAppTemplate-package with an ion sample in this post already, so you could experiment right away. But perhaps you are experienced enough to get a new ion working by looking at the existing ones. If not, in a few weeks hopefully there should be a template, so watch out.
And do not hesitate to ask for support on #plasma on irc or on the Plasma mailinglist. I received lots of help there with the Weather widget porting, and so should you when trying to write an ion :)