Hi! Everything all right with you?
Today was the first day of the Randa Meetings, a KDE sprint in Randa, a village about 300 km from Geneva, Switzerland.
My flight left Rio de Janeiro at 11pm last Friday night for Frankfurt, where I had a connection to Geneva. I missed that connection because of the delayed departure in Rio and the time the plane spent taxiing to the arrival gate. Luckily, the Lufthansa service desk understood the problem and booked me on the next flight.
After that, I took a train to Visp and then to Randa, and arrived at Mario's house after midnight.
So, today I went to the big house and started to code… After a few pieces of advice from Tomaz, I made several changes to the Umbrello code, which in the end left me with more than 500 lines deleted and 200 added.
Also, I received my welcome package from Google Summer of Code! =D
And it’s awesome! \o/
Today I had the opportunity to talk with a lot of the nice KDE people who are around here, and to give them paçoca, a traditional candy I brought from Brazil made of sugar and peanuts mixed together. I think they liked it. Have doubts? Please watch the video:
Over the next days I will write more about the experience I'm having here, but for now you can see my photos and follow me on Instagram (@lays147) for more. This is my first trip outside Brazil, and so far everything has been amazing.
The Randa Meetings are already happening, and you can support us through this link.
Sitting on Lake Zurich and reflecting over things was a great way to get started. http://manifesta.org/2015/11/pavillon-of-reflections-for-zurich-in-2016/
After spending a bit of time on a train, I climbed part of a mountain together with Adriaan, up to the snow, where I could throw a snowball at him. We also designed a couple of new frameworks on our climbing trip; maybe they will be presented later.
1. Planet moons

This small addition required me to change the structure of PlanetNode. Because moons sometimes have to be drawn in front of the planet, there is now a PlanetMoonsNode that holds a pointer to a PlanetNode and a bunch of PointSourceNodes that represent the moons. Whenever the z-order changes, PlanetMoonsNode reorders all the nodes in the node tree to match.

Jupiter and its moons

2. Horizon and Ground

At first sight, drawing the horizon and ground might seem trivial (I thought so too), but it turned out that there is no built-in support for concave polygons in either OpenGL or the Qt Quick Scene Graph, and the only way to draw a filled concave polygon in OpenGL is to convert it into a set of triangles. So I spent some time finding the best algorithm or library to triangulate the polygon used for drawing the ground, and decided to use this library (a standalone version of the tessellator from the OpenGL Utility Library (GLU)). It produces perfect results and performs well on both my laptop and my Android tablet.

3. Lines

This is a major part of SkyMapLite. The Equator, the Ecliptic, Constellation Bounds and Lines, the Horizontal Coordinate Grid and the Equatorial Coordinate Grid were all ported to SkyMapLite. Unfortunately, lines caused a significant loss of performance on Android due to the many calls to routines that convert the equatorial and horizontal coordinates of an object to a QPoint. There are a few ways to optimize this (e.g. use SkyMesh to skip checking lines that are not visible in the current sky hemisphere, or optimize the Projector), but I decided to first finish porting all components to SkyMapLite and then come back to optimization.

Another problem that turned out to be less trivial than expected was caused by the Equatorial and Horizontal Coordinate Grids. In KStars they are drawn with dotted lines, but the Qt Quick Scene Graph doesn't support that. First I tried custom shaders, but people on the Qt Interest mailing list gave me the idea of using QSGVertexColorMaterial to draw different line segments with different colors.

4. Labels

Another major part of KStars that is now available in SkyMapLite. At first I thought that each SkyNode (e.g. PlanetNode and PointSourceNode) could have its own label as a child node, but that way it would be impossible to control the drawing order of labels, as other planets or stars could overlap them. So I created a LabelsItem class that holds and updates all labels; a label can be instantiated on request from LabelsItem.

The Qt Quick Scene Graph doesn't offer tools for drawing text, so I decided to store labels as textures. To do that you call SkyMapLite::textFromTexture(), which returns a QSGTexture with the text. This approach has one problem: if we want labels to have different sizes at different zoom levels, then on each zoom change we will need to create a new texture or resize the existing one. I'm already working on this.
There are two types of labels, LabelNode and GuideLabelNode; the latter can be rotated by a given angle, but in most cases LabelNode is used.
Also, I changed the structure of SkyMapLite and its children. Previously, each SkyComponent was represented by a QQuickItem-derived class that was reparented to SkyMapLite (e.g. SolarSystemSingleComponent -> PlanetsItem). Now I have moved everything inside SkyMapLite; PlanetsItem, for example, is now derived from QSGOpacityNode. This way we have only one clipping node and one texture cache. The reason I hadn't done this before is that the rendering of QQuickItems and everything else happen in two separate threads that shouldn't intersect, and I thought we might run into problems if I delegated all the data work to QSGNodes. It turned out, however, that multiple clipping nodes cause a loss of performance, so the transition to this scheme was inevitable. I haven't faced any problems with the approach yet; it works better and there is much less duplicate code now.
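As an aside, the kind of concave-polygon triangulation that GLU's tessellator performs for the ground polygon can be sketched with a simple ear-clipping routine. This is an illustrative Python sketch, not the actual KStars code or the GLU algorithm:

```python
# Illustrative ear-clipping triangulation of a simple, counter-clockwise
# polygon (possibly concave, like the ground polygon) into triangles.

def cross(o, a, b):
    # z-component of (a - o) x (b - o); > 0 means a left (convex) turn
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def point_in_triangle(p, a, b, c):
    # p inside (or on the border of) the CCW triangle abc
    return cross(a, b, p) >= 0 and cross(b, c, p) >= 0 and cross(c, a, p) >= 0

def triangulate(poly):
    idx = list(range(len(poly)))
    triangles = []
    while len(idx) > 3:
        for i in range(len(idx)):
            a, b, c = idx[i - 1], idx[i], idx[(i + 1) % len(idx)]
            if cross(poly[a], poly[b], poly[c]) <= 0:
                continue  # reflex corner, not an ear
            if any(point_in_triangle(poly[j], poly[a], poly[b], poly[c])
                   for j in idx if j not in (a, b, c)):
                continue  # another vertex lies inside, not an ear
            triangles.append((a, b, c))
            del idx[i]  # clip the ear and continue with the smaller polygon
            break
    triangles.append(tuple(idx))
    return triangles

# A concave, L-shaped "ground" polygon, counter-clockwise
ground = [(0, 0), (2, 0), (2, 1), (1, 1), (1, 2), (0, 2)]
triangles = triangulate(ground)  # a simple n-gon yields n - 2 triangles
```

The resulting index triples are exactly what OpenGL needs to render the polygon as a filled triangle set.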
Current state of KStars Lite on desktop
And on my Nexus 7 tablet
What's next? My next stop is stars and DSOs. If everything goes well, I will finish SkyMapLite by the GSoC midterm evaluations and then proceed to optimizing KStars Lite.
The Randa meeting starts this week, and I'll be working with the KDE multi-platform group led by Aleix Pol, spending my time on both Flatpak (formerly xdg-app) and Snappy.
During this past week I have been brought into some technical discussions about deployment on both platforms, so I intend to work closely with other interested developers on solving problems that affect either platform, as there is a lot of overlap. Tackling these independently doesn't make sense.
These two emerging technologies both have a lot of potential to revolutionise Linux packaging and distribution with either being a huge boost over the current state.
Both are going to become relevant in the Linux world over the next few years, and the important thing is making sure our software works best for our users whatever the platform.
So far this week I've spent some time fixing KDE Flatpak applications, in particular several problems we've encountered with Dolphin, namely getting it to load plugins and making KIO slaves work.
In the meantime I've also been trying out Snappy, packaging and running a few applications.
Over the week I'll write some more in depth blog posts, exploring the state of each, where we have problems deploying our apps, and hopefully some concrete solutions :)
Be sure to sponsor the sprint to help pay for developers from around the world to come together to work on important projects and be sure to follow PlanetKDE for blog posts about software developments from the people here.
Being a Google Summer of Code’16 student, this is my first milestone report blog. Hope you’ll find it detailed enough!
Task 1: Review current whole database implementation including database schema hosted as XML.
- Understand all three parts of the digiKam database:
- CoreDB – hosts all albums, items, search data…
- ThumbsDB – hosts image thumbnails.
- FaceDB – hosts image histograms for face recognition.
- Analyze the ‘databaseserver’ code in core (which relies on D-Bus; work is ongoing to minimize the D-Bus dependency and improve digiKam’s stability on non-Linux systems).
- Understand how the common code fragments work for both MySQL and SQLite (DB settings are extracted from the config file and the functions then behave accordingly).
- Understand how SQL queries are used and implemented in the source.
- Read the dbconfig.xml.cmake.in file thoroughly to understand how the database schema is set up (CREATE TABLE/index/trigger statements for both SQLite and MySQL).
- Understand database version updates through the schema updater files (for all three DBs).
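To illustrate the idea of hosting schema statements in XML: the sketch below (purely illustrative Python; the element and attribute names are made up, and digiKam's real dbconfig format differs in detail) executes backend-specific statements from such a file against SQLite.

```python
# Hypothetical sketch: a database schema hosted as XML, with one statement
# set per backend, executed against an in-memory SQLite database.
import sqlite3
import xml.etree.ElementTree as ET

schema = """<databaseconfig>
  <dbaction name="CreateTables" backend="SQLite">
    <statement>CREATE TABLE Albums (id INTEGER PRIMARY KEY, relativePath TEXT)</statement>
    <statement>CREATE INDEX idx_path ON Albums (relativePath)</statement>
  </dbaction>
</databaseconfig>"""

conn = sqlite3.connect(":memory:")
root = ET.fromstring(schema)
for action in root.iter("dbaction"):
    if action.get("backend") == "SQLite":      # pick the right backend's statements
        for stmt in action.iter("statement"):
            conn.execute(stmt.text)

tables = [row[0] for row in conn.execute(
    "SELECT name FROM sqlite_master WHERE type='table'")]
```

The same XML file can then carry a parallel `backend="MySQL"` block, which is what lets one config file serve both database engines.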
I ran into many doubts while reading the source code, plus several concept-related questions, but all of them were resolved with my mentors’ help and guidance.
Like Christoph, I’m going to Randa! It’s a long train ride, but that means I can get some hacking done on the way.
You can support the Randa meetings! Click on the image for fundraiser information. The Randa meetings are one of the biggest sprints in KDE. Each year a tightly focused group gets together to work on KDE technology for one goal. This year the goal is KDE technology on every device.
While most of the participants seem to be going to the meeting for the purpose of getting more KDE applications on Windows, macOS or Android — indeed platforms where our technology can make a difference for developers and where our applications can make a difference for Freedom — I’m going with a slightly different purpose. I’m there for our traditional niche platforms: the BSDs. But also for packaging in a traditional sense, and for building our software effectively and efficiently.
There’s a lot of infrastructure in the KDE software (git) repositories, information about dependencies and build orders and where to find sources and stuff like that. For the traditional packagers, though, packaging is largely an artisanal process: discover what software is released this time, under what names and in which directory; figure out what new dependencies there are by trying to compile the new stuff in an environment that worked for the previous release; adjust sources for invalid assumptions. Lather, rinse, repeat.
If containerized apps are really a thing, I’d hope to be able to build FreeBSD(-ish) containers from the same metadata as other containers are built from.
As preparation, I tried to build KDE (as in, the software stack needed for Zanshin) from source on a Linux distro. I failed. To me, that suggests that we shouldn’t forget the devices running Linux, either, and the packagers that work there.
This year's Kickstarter fundraising campaign for Krita was more nerve-wracking than the previous two editions. Although we ended up 135% funded, around the middle of the campaign we were almost afraid we wouldn't make it. It may only have been the release of Krita 3.0 that turned the campaign around. Here's my chaotic and off-the-cuff analysis of this campaign.
We were ambitious this year and once again decided upon two big goals: text and vector, because we felt both are real pain points in Krita that really need to be addressed. I think now that we probably should have made both into super-stretch goals one level above the 10,000 euro Python stretch goal and let our community decide.
Then we could have made the base level one stretch goal of 15,000 euros; we'd have been "funded" on the second day and met the Kickstarter expectation that a successful campaign is funded immediately. We could then have opened the PayPal pledges really early in the campaign and advertised that option properly.
We also hadn't thought some stretch goals through in sufficient depth, so sometimes we weren't totally sure ourselves what we were offering people. This contrasts with last year, where the stretch goals were precisely defined. (But during development they became gold-plated: a 1500-euro stretch goal should be two weeks of work, which sometimes became four or six weeks.)
We did have a good story, though, which is the central part of any fundraiser. Without a good story that can be summarized in one sentence, you'll get nowhere. And text and vector have been painful for our users for years now, so that part was fine.
We're also really well-oiled when it comes to preparation: Irina, Wolthera and I sat together for a couple of weekends to first select the goals, then figure out the reward levels and possible rewards, and then write the story and other text. We have lists of people to approach, and lists of things that need to be written in time to have them translated into Russian and Japanese -- all of that is pretty well-oiled.
Not that our list of rewards was perfect: we had to make some in-campaign additions, and we made at least one mistake by adding a new 25-euro level when the existing 25-euro rewards had sold out. The existing rewards re-used overstock from last year, but for the new level we have to have new goodies made, which means our cost for those rewards is higher than we thought. Not so high that those 25-euro pledges don't help fund development, but it's still a mistake.
Our video was very good this year: about half of the plays were watched to the end, which is an amazing score!
Kickstarter is becoming a tired formula
Already after two days, people were saying on the various social media sites that we wouldn't make it. The impression with Kickstarter these days is that if you're not 100% funded in one or two days, you're a failure. Kickstarter has also become that site where you go for games, gadgets and gags.
We also noticed less engagement: fewer messages and comments on the kickstarter site itself. That could have been a function of a less attractive campaign, of course.
That Kickstarter still hasn't got a deal with Paypal is incredible. And Kickstarter's campaign tools are unbelievably primitive: from story editor to update editor (both share the same wysiwyg editor which is stupidly limited, and you can only edit updates for 30 minutes) to the survey tools, which don't allow copy and paste between reward levels or any free text except in the intro. Basically, Kickstarter isn't spending any money on its platform any more, and it shows.
It is next to impossible to get news coverage for a fundraising campaign
You'd think that "independent free software project funds full-time development through community, not commercial, support" would make a great story, especially when the funding is a success and the results are visible for everyone. You'd think that especially the free software oriented media would be interested in a story like this. But, with some exceptions, no.
Last year, I was told by a journalist reporting on free and open source software that there are too many fundraising campaigns to cover. He didn't want to drown his readers in them, and it would be unethical to ignore some and cover others.
But are there so many fundraisers for free software? I don't know, since none get into the news. I know about a few, mostly in the graphics software category -- Synfig, Blender, Jehan's campaign for ZeMarmot, the campaign by the Software Freedom Conservancy, KDE's Randa campaign. But that's really just a handful.
I think that the free and open source news media are doing their readers a disservice by not covering campaigns like ours; and they are doing the ecosystem a disservice. Healthy, independent projects that provide software in important categories, like Krita, are essential for free software to prosper.
Without the release, we might not have made it. But doing a Kickstarter is exhausting: it's only a month, but it feels like two or three. Doing a release and a Kickstarter together is doubly exhausting. We did raise Krita's profile and userbase to a whole other level, though! (Which also translates into a flood of bug reports; Bugzilla has basically become unmanageable for us: we badly need more triagers and testers!)
Right now, I'd like to take a few days off, and Dmitry is smartly taking a few days off, but there's still so much on my backlog that it's not going to happen.
I also had a day job for three days a week during the campaign, during which I wasn't available for social media work or promo, and I really felt that to be a problem. But I need that job to fund my own work on Krita...
Kickstarter lets one know where the backers are coming from, and Kickstarter itself is a source of backers: about 4500 euros came from Kickstarter itself. Next up is Reddit with 3000 euros, Twitter with 1700, Facebook with 1400, krita.org with 1000 and BlenderNation with 900. After that, the long tail starts. So, in the absence of news coverage, social media is really important, and the Blender community has once again proven to be much bigger than most people in the free software community realize.
The campaign was a success, and the result pretty much the right size, I think. If we had double the result, we would have had to find another freelancer to work on Krita full-time. I'm not sure we're ready for that yet. We've also innovated this year, by deciding to offer artists in our communities commissions to create art for the rewards. That's something we'll be setting in motion soon.
Another innovation is that we decided to produce an art book with work by Krita artists. Calls for submissions will go out soon! That book will also go into the shop, and it's kind of an exercise for the other thing we want to do this year: publish a proper Pepper and Carrot book.
If book sales help fund development further, we might skip a year of Kickstarter-like fundraising, in the hope that a new platform will spring up that offers a fresh way of doing fundraising.
As you might know, I’m the new maintainer of KApiDox, and with that I’m more or less responsible for the api.kde.org website.
KApiDox used to be a script for generating the KDE Frameworks (KF5) API documentation. This is changing, however: I would like to generate the documentation for all KDE APIs with this tool.
It is on its way, but before going further I need your input.
We worked with Thomas (@Colomar) on a little survey about the API. It will take less than 5 minutes of your time but will help a lot. To be helpful, you don’t need to actually use the KApiDox scripts, only the api.kde.org website.
Please answer the questions here: http://survey.kde.org/index.php/612593/lang-en
It will help me a lot to make things better!
Cheers and have fun!
In the past week, I worked on the code reviews I got, and as a result I changed the design of the classes throughout. The way it works now is that there is a central dispatcher, the daemon, which handles all the jobs. I chose this design since it is how KIMAP jobs were originally meant to be managed; my mentor and Daniel Vratil helped me decide this.
I previously said that I’d move the update to a new thread. That might not have been entirely true 😉. From what I know, IDLE is an async process, so there might be some actual lag there. I’ll add this to my TODO for the testing phase and move on to the other things on my list for now.
The IMAP client was looking good, so I started to implement a parser. I must say, this took more time than I expected. I was set on using the Qt JSON library, which is pretty good for parsing JSON. But to parse JSON, I first need JSON data. I looked through the docs, and nothing seemed good enough. QtWebKit seemed overkill; my mentor and I were both against using it, since there’s no GUI in our application and it would have been a waste of precious resources. I checked some other third-party C++ libraries, like SGML for Qt and Google’s Gumbo Parser, but nothing seemed appropriate. SGML was a bit (well, more than a bit) on the slow side. Gumbo seemed good at first, but since it was third-party I was worried, and maybe about the license too; I didn’t feel like adding a dependency. Finally I settled for regex (no, wait, whattt? I know! Before you all charge at me: I didn’t use it). I contacted my mentor and Dan, and they were against it (well, me too, but what choice did I have?). To those of you who still think it would have been okay to use regexes: read this. Dan told me to use QXmlStreamReader. My initial reaction was: whattt? How can I parse HTML with an XML parser? Hence I never even googled it. On second thought, though, HTML, if formatted nicely, is just plain XML (now do you get it? Just the tags!). I tested it with my IMAP client and everything seemed good. It’s fast and reliable, and it’s in the QtCore module. So: speed, check; reliability, check; no added dependencies, check. What else could I have asked for (a tank?)? So, after running the HTML through QXmlStreamReader, I had the data between those script tags. Next I parsed that data using the Qt JSON library (QJsonDocument, QJsonArray, QJsonObject) and stored the required fields in maps. With the power and consistency of QVariant(Map), things became quite easy.
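The same idea can be sketched in a few lines of Python (an analogue of, not the actual, QXmlStreamReader + QJsonDocument code): treat nicely formatted HTML as XML, pull the text out of the script tags, and hand it to a JSON parser. The markup below is made up for illustration.

```python
# Parse well-formed HTML with an XML parser, then feed the contents of the
# <script> tags to a JSON parser -- the trick described above.
import json
import xml.etree.ElementTree as ET

html = """<html><head>
<script type="application/json">{"merchant": "Example Air", "total": 199}</script>
</head><body><p>Your booking is confirmed.</p></body></html>"""

root = ET.fromstring(html)                               # HTML-as-XML
payloads = [json.loads(s.text) for s in root.iter("script")]
```

The caveat is the same as with QXmlStreamReader: this only works when the HTML is well-formed, which is exactly the case for machine-generated schema.org markup in emails.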
I got code reviews from my mentor, and we discussed some things for the next stages. In the next part, I’ll polish the extractors I have written, implement them as plugins, and load those plugins in the client. This will make writing extractors for other schemas, and hence adding support for them, very easy. I’ll also be working with an SQLite database to cache the fetched emails. One thing I noticed just now is that I’m following the incremental methodology from software engineering: adding new features one by one, building and testing at each step. I never thought I’d see the impact of my coursework this early 😉
See you later.
Tomorrow I will start travelling to the Randa Meetings 2016.
I hope I can help make it easier to spread KF5-based software on platforms like Windows & Mac.
Let's see what can be done; it will be an interesting week ;=)
If you want to support us, please consider donating; we have a fundraising campaign running!
In less than 48 hours people will arrive in Switzerland (actually, the first participant has already arrived) and this year’s edition of the Randa Meetings will start under the motto: KDE technology on every device.
From Sunday the 12th to Sunday the 19th of June, around 40 people are going to work hard, discuss tirelessly and decide upon new ideas and directions. These people need some energy too, and thus we went out today and bought some nice stuff for them:
It’s been some months, and this time has been mostly about maturing what we already had and making it useful for others:
- Improved the runtime, with updates to newer versions of Qt and KDE Frameworks; some functionality issues were fixed.
- We published the runtime so that developers can test their applications against it.
- Added several recipes for KDE Applications (help! testers required).
- We got some initial documentation for developers.
Now it’s time to make this work. I find it already close to magic how we can compile on one distro and it works on another. I must admit, I’m excited. But many things still need work; they should be simple, but we need to put in the time.
We also need to compile the applications, start using them and see where the limitations are, especially regarding sandboxing. In the end, we want to bring KDE applications to those GNU/Linux users who cannot reach our stable releases.
Most of it will happen in Randa; let’s see how far we get! Join us!
Another great step for the project: now you can add new keywords to extensions, or remove them!
There are some restrictions:
- you're not allowed to remove mandatory keywords (this still needs some polishing)
- you're not allowed to add mandatory keywords, since those must be available when you create the extension
- no duplicates
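The restrictions above can be sketched as a small validation routine. This is purely illustrative Python (the function name and signature are hypothetical, not the project's actual code):

```python
# Hypothetical sketch of the three keyword rules: mandatory keywords can
# be neither removed nor (re-)added, and duplicates are dropped.
def update_keywords(current, mandatory, to_add, to_remove):
    """Return the updated keyword list, enforcing the restrictions."""
    for kw in to_remove:
        if kw in mandatory:
            raise ValueError(f"cannot remove mandatory keyword: {kw}")
    for kw in to_add:
        if kw in mandatory:
            raise ValueError(f"cannot add mandatory keyword: {kw}")
    result = [kw for kw in current if kw not in to_remove]
    for kw in to_add:
        if kw not in result:  # no duplicates
            result.append(kw)
    return result

updated = update_keywords(["name", "description"], ["name"],
                          to_add=["tags"], to_remove=["description"])
```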
The new keywords
There are still many things to do and refine :)
Over the last few months I have been poking away at a refactoring of eimap, the IMAP library that Kolab's IMAP filter/proxy uses behind the scenes. The refactoring consolidated quite a bit of code that was duplicated between the various supported IMAP commands, and fixed a few bugs along the way. It reduced the line count, makes implementing new commands even easier, and has allowed improvements that affect all commands (usually because they relate to the core IMAP protocol) to be made in one central place. This was released as eimap 0.2 the other week and has made its way through the packaging process for Kolab. It is a significant milestone on eimap's path to being considered "stable".
Guam 0.8 was tagged last week and takes full advantage of eimap 0.2. This has entered the packaging phase now, but you can grab guam 0.8 here:
Highlights of these two releases include:
- several new IMAP commands supported
- all core IMAP response handling is centralized, making the implementation for each command significantly simpler and more consistent
- support for multi-line, single-line and binary response command types
- support for literals continuation
- improved TLS support
- fixes for metadata fetching
- support for automated interruption of passthrough state to send structured commands
- commands receive server responses for commands they put into the queue
- ported to eimap 0.2
- limit processcommandqueue messages in the FSM's mailbox to one in the per-session state machine
- be more expansive in what is supported in LIST commands for the groupware folder filter rule
- init scripts for both sysv and systemd
One change that did not make it into 0.8 was the ability to define, per network interface, which port Guam listeners bind to. This is already merged for 0.9, however. I have also received some interest in using Guam with other IMAP servers, so it looks likely that Guam 0.8 will get testing with Dovecot in addition to Cyrus.
Caveats: If you are building by hand using the included rebar build, you may run into some issues with the lager dependencies, depending on what versions of lager and friends are installed globally (if any). If so, change the dependencies in rebar.config to match what is installed. This is largely down to rebar 2.x being a little limited in its ability to handle such things. We are moving to rebar3 for all the erlang packages, so eimap 0.3 and guam 0.9 will both use rebar3. I have guam already building with rebar3 in a 0.9 feature branch, and it was pretty painless and produces something even a little nicer already. As soon as I fix up the release generation, this will probably be the first feature branch to land in the develop branch of guam for 0.9!
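For instance (the version tag below is only an example; match it to whatever is installed on your system), a rebar 2.x dependency entry in rebar.config can be pinned like this:

```erlang
%% rebar.config -- pin lager to a specific tag so it matches the
%% globally installed version (tag shown here is illustrative)
{deps, [
    {lager, ".*", {git, "https://github.com/erlang-lager/lager.git", {tag, "3.2.1"}}}
]}.
```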
It is also known that the test suite for Guam 0.8 is broken. I have this building and working again in the 0.9 branch, and will probably be doing some significant changes to how these tests are run for 0.9.
I joined Kolab Systems just over 1.5 years ago, and during that time I have put a lot of my energy and time into working with the amazing team of people here to improve our processes and execution of those processes around sales, communication, community engagement, professional services delivery, and product development. They have certainly kept me busy and moving at warp 9, but the results have certainly been their own reward as we have moved together from strength to strength across the board.
One place that this has been visible is the strengthening of our relationship with Red Hat and IBM, which has culminated in two very significant achievements this year. First, Kolab is available on the Power 8 platform thanks to a fantastic collaboration with IBM. For enterprise customers and ISP/ASPs alike who need to be able to deliver Kolab at scale in minimum rack space, this is a big deal.
For those with existing Power 8 workloads, it also means that they can bring in a top-tier collaboration suite with quality services and support backing it up on their already provisioned hardware platform; put more simply: they won't have to support an additional x86-based pool of servers just for Kolab.
To help introduce this new set of possibilities, we have organized a series of open tech events called the Kolab Tasters in coordination with IBM and Red Hat.
Besides enjoying local beverages and street food with us at these events, attendees will be able to experience Kolab on Red Hat Enterprise Linux on Power 8 first-hand on the demo stations that will be available around the event site. Presentations from Kolab Systems, IBM and Red Hat form the main part of the agenda for each of these events, and will give attendees a deep understanding of how the open technologies from IBM (Power 8), Red Hat (Linux OS), and Kolab Systems (Kolab) deliver fantastic value and freedom, especially when used together.
The first events scheduled are:
- Zürich, Switzerland on the 14th June, 2016
- Vienna, Austria on the 22nd June, 2016
- Bern, Switzerland on 28th June, 2016
There are some fantastic speakers lined up for these events, including Red Hat's Jan Wildeboer and Dr. Wolfgang Meier, who is director of hardware development at IBM. At the Vienna event, we will also be celebrating the official opening of Kolab Systems Austria, which has already begun to support the needs of partners, customers and government in the beautiful country of Austria from our office in Vienna.
Events in Germany, starting in Frankfurt, will be scheduled soon, and we will be doing a "mini-taster" at the Kolab Summit which is taking place in Nürnberg on the 24th and 25th of June. Additional events will be scheduled in accordance with interest over the next year. I expect this to become a semi-regular road-show, in fact.
And speaking of the Kolab Summit: it is also going to be a fantastic event. Co-hosted with the openSUSE Conference, we will be sharing the technical roadmap for Kolab for 2016-2017; unveiling our partner program for ISPs, ASPs and system integrators, which we incrementally rolled out earlier this year and which is now ready for broad adoption; listening to guest speakers on timely topics such as Safe Harbor in the EU and taking Kolab into vertical markets; and, of course, having a busy "hallway session" where you can meet and talk with key developers, designers, management and sales people from the Kolabiverse.
You can still book your free tickets to these events from their respective websites:
These days I’m working to improve my skills at preparing, testing and deploying complex IT systems like mail servers or database clusters.
To accomplish this, I started using Ansible to speed up operations.
With Ansible it is quite easy to set up a configuration template and the procedure to bring up a new service or re-configure an existing one.
Unlike other automation tools such as Puppet, it doesn’t require any kind of specialized server: it uses SSH to access all the servers, which can also be a good way around firewall/network ACL issues.
I’m thinking about migrating all my shell scripts to an Ansible structure, but first I have to run some tests.
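As a sketch of what that looks like (the host group, paths and the Postfix example are hypothetical, not my actual setup), a minimal playbook that installs a mail server and deploys a templated configuration could be:

```yaml
# Hypothetical playbook: install Postfix and render a Jinja2 config template
- hosts: mailservers
  become: true
  tasks:
    - name: Install Postfix
      apt:
        name: postfix
        state: present

    - name: Deploy main.cf from a template
      template:
        src: templates/main.cf.j2
        dest: /etc/postfix/main.cf
      notify: restart postfix

  handlers:
    - name: restart postfix
      service:
        name: postfix
        state: restarted
```

The handler only fires when the template task actually changes the file, which is what makes re-running the same playbook against an already configured server safe.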
Of course, we should have posted this yesterday. Or earlier today! But when around midnight we opened the Champagne (only a half-bottle, and it was off, too! Mumm, booh!), we all felt we were at the end of a long, long month! We, that’s Boudewijn, Irina and Wolthera, gathered in Deventer for the occasion (and also for Google Summer of Code). Over the past month, hundreds of bugs have been fixed, we’ve gone through an entire release cycle, and we managed another successful Kickstarter campaign! Exhaustion had set in, and we went for a walk around scenic Deventer to look at cows, sheep, dogs, swans, piglets, ducklings, budgerigars and chickens, and lots of fresh early summer foliage.
But not all was laziness! Yesterday, all Kickstarter backers got their surveys, and over half have already returned them! Today, the people who backed us through paypal got their surveys, and we got a fair return rate as well! Currently, the score looks like this:
- 24. Python Scripting Plugin: 414
- 8. SVG Import/Export: 373
With runners up…
- 21. Flipbook/Sketchbook 176
- 2. Composition Guides 167
- 1. Transform from pivot point 152
- 7. Vector Layers as Mask 132
- 13. Arrange Layers 129
The special goals selected by the 1500 euro backers are Improving the Reference Docker and — do what you developers think most fun! That’s not an official stretch goal, but we’ve got some ideas…
I tend to write a blog post every Thursday but I was late this week.
Two weeks have passed. Here is what Neverland can do for now:
- Create a new theme:
- Delete a theme:
- Compile sass for a theme:
- “Watch” a theme, though for some reason it doesn’t forward the stdout from BrowserSync:
In order to see that output, you have to run gulp directly:
It runs, but the code still needs a lot of cleanup.
Then I faced the harder part: how to organize the default WordPress theme. I had been thinking of the WP default themes: twentyfourteen, twentyfifteen, twentysixteen…? It was all so confusing, so I went to ask my Jedi Master, and his answer was:
“I would probably more base it on a framework based theme”
“What is a `framework based theme` ?”
“Like underscores, or something more specific, like sage”
It was the first time I’ve ever heard about them. Sounds interesting.
Sage is more than a theme: it adopts many modern development tools like Sass, Gulp and Bower, and it’s DRY. Underscores, on the other hand, is just a simple starter theme.
“Your choice” he said.
That’s it for this week.
P.S.: I’ve just won a LaraconUS 2016 ticket, but I don’t have enough money to go. If you are interested, leave your email here 😉
It was great to have conversations with the contributors who visited us as well as some downtime with the team. It's been a busy time since we announced our new endeavor. And it continues to be awesome to get so many supportive comments and feedback on what we're up to! People are excited about our open strategy and appreciate the fact that there is a solid company behind it. The flood of incoming requests for information and support from customers presents a good problem. So let me point out, again, that we're hiring!
Once again “Wiki, what’s going on?”. Today I’m here to give you some updates on our work with the WikiToLearn community.
First of all, our activity of the last two days: the participation sprint. On Wednesday Daniele (@Mte90) came to Milan for a two-day sprint on a theme that is very important to us: participation. What can we do to improve the way we work? How can we help new users participate and feel involved? These and many other points matter to us, and we’d like to arrive at good decisions and conventions that lead straight to success.
During the first day we had a great brainstorming session and discussed important points about how we intend to plan our work. We realized that both our internal organization and the way we involve new users have some gaps that need to be fixed: communication channels, user experience, “taskization”, and detailed documentation to help people not get lost on our website.
On the second day we discussed different workflows for new users and tried to understand how to better organize our internal structure and way of working. We also had an important discussion to clarify how to engage successful students who do not feel involved in the community. We wrote down all the ideas that came up, trying to find their pros and cons: this is (more or less) the first time we have tried to take notes live during a discussion. I think this is an extremely useful practice, and we can adopt it in future discussions.
There is another thing I’m proud of: remember the course I was writing, which I told you about in the last part of “Wiki, what’s going on”? Well, I’m happy to announce that the online course and the PDF book are now complete!
The article Wiki, what’s going on? (Part 4 – Participation Sprint) first appeared on Blogs from WikiToLearn.