April 18, 2014

HUD for the command line

HUD shown over terminal app with commands visible

Most expert users know how powerful the command line is on their Ubuntu system, but one of the common criticisms of it is that the commands themselves are hard to discover and their exact syntax hard to remember. To help a little bit with this, I've created a small patch to the Ubuntu Terminal which adds entries to the HUD so that they can be searched by how people might think of the feature. Hopefully this will provide a way to introduce people to the command line, and provide experienced users with some commands that they might not have known about on their Ubuntu Phone. Let's look at one of the commands I added:

UnityActions.Action {
  text: i18n.tr("Networking Status")
  keywords: i18n.tr("Wireless;Ethernet;Access Points")
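  // "\x03" is the Ctrl+C control character: interrupt anything running, then run nm-tool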
  onTriggered: ksession.sendText("\x03\nnm-tool\n")
}

This command quite simply prints out the status of the networking on the device. But some folks probably don't think of it as networking; they just want to search for the wireless status. By using the HUD keywords feature we're able to add a list of other possible search strings for the command. Now someone can type "wireless status" into the HUD and find the command that they need. This is a powerful way to discover new functionality. Plus (and this is really important) these can all be translated into the user's local language.

It is tradition in my family to spend this weekend looking for brightly colored eggs that have been hidden. If you update your terminal application I hope you'll be able to enjoy the same tradition this weekend.

Project naming

Number two in an occasional series of “time I wish I could have back” topics related to releasing proprietary software projects as free software.

What’s in a name?

It was famously said that there are 2 hard problems in computer programming: cache invalidation, naming things, and off-by-one errors.

Naming a project is a pain. Everyone has their favourite, everyone’s an expert, and there are a dozen different creative processes that people will suggest. Also, project names are often subject to approval by legal and brand departments, which are 2 departments most engineers don’t want anything to do with. So how do you come up with an acceptable name without spending weeks talking about it? Here are some guidelines.

Avoid anything related to the company name or products

You don’t want to have corporate trademark guidelines, certification programmes, etc. impacting the way your community can use the project name, and avoiding names related to company assets will make it easier to transfer trademark ownership to an independent non-profit, should you decide to do that in the future. In terms of maintaining a clear separation in people’s minds between the community project and your company’s products, it’s also a good idea to avoid “line extension” by reusing a product name.

Outside of that, the number one thing to remember is:

Project names are not the most important thing about your project

What’s important is what problems you solve, and how well you solve them. The name will grow on people if they’re using the project regularly. You can even end up with interesting discussions like “is it pronounced Lee-nooks or Lie-nucks?” which can galvanise a nascent community. Also, remember that:

Project names can be changed

How many projects fall foul of existing trademarks, end up rebranding soon after launch, or are forced to change names because of a change of corporate sponsor or a fork? Firefox, Jitsi, WildFly, Jenkins, LibreOffice, Joomla, Inkscape all started life under different names, and a rename has not prevented them from going on to be very successful projects. The important thing, in an open source project, is to start small so that you don’t have a huge amount invested in the old name, if circumstances require you to change it.

Avoid a few pitfalls

Avoid using anything which is related to the trademarks of competing companies or projects, unless it is pretty abstract (Avid to Diva, Mozilla to Mosaic Killer, Eclipse to Sun).

That said, don’t worry too much about trademarks. Yes, do a quick search for related projects when you have a shortlist, and check out the USPTO. But just because there is a Gnome Chestnut Farms in Bend, Oregon doesn’t mean you can’t call your free software desktop environment GNOME. Domain of use is a powerful constraint; take advantage of it.

Avoid potentially politically incorrect or “bad language” words. Also, avoid artificially smart acronyms. The Flexible Add-on Release Tracker might seem like a good idea, but… don’t.  GIMP is a notable exception here to both rules, and countless days have been spent defending the choice of name over the years.

Do worry about the domain name. This will be the primary promotion mechanism. People shouldn’t spend time trying to figure out if your project is hosted at “sandbla.st” or “sandblast.org” or “ProjectSandblast.org”. Make sure you get a good domain name.

Empower a small group to choose

The decision on a name should belong to 1, 2 or 3 people. No more. Once you realise that names are not the most important thing, and that the name can be changed if you mess up badly, that frees you from getting buy-in from everyone on the development team. The “committee” should include the project leaders (the person or people who will be identified as the maintainers afterwards), and one person who is good at facilitating naming discussions (perhaps someone from your Brand department to ensure their buy-in for the result). Beyond that, do not consider surveys, general calls for names, or any other process which gives a sense of ownership of the process to more than 2 or 3 people. This way lies many weeks and months of bikeshedding arguments.

Have a process

  1. Start with a concept and work from there. Break out the Thesaurus, make a list of related concepts.
  2. Names can be abstract or prosaic, it doesn’t really matter. Discourse is one of the most wonderfully prosaic project names I’ve seen, while StackOverflow has nothing to do with a questions & answers forum. Ansible is a made-up word, and Puppet and Chef both wonderfully evoke orchestration while being dictionary words.
  3. Keep the shortlist to names which are short and pronounceable in multiple languages.
  4. Cull ruthlessly – don’t keep “maybe” names. If you get to the end, go back to the concepts list and start again.
  5. If you get to a shortlist of 2 or 3 and can’t decide, use random() to pick the winner or go with the choice of the project leader.

In general, don’t spend too much time on it. You should be able to get a couple of candidate names in a few days of discussion, submit them to Legal for a trademark review, and spend your time on what really matters, understanding your users’ problems and solving them as well as you can.

Of course, this is easier said than done – good luck!

April 17, 2014

Get involved in the Fedora.next web efforts!

Lately I’ve been blogging about the proposal for new Fedora websites to account for the Fedora.next effort. So far, the proposal has been met with a warm reception and excitement! (Yay!)

We Really Would Love Your Help

Two very important things that I’d like to make clear at this point:

  • This plan is not set in stone. It’s very sketchy, and needs more refinement and ideas. There is most certainly room to join in and contribute to the plan! Things are still quite flexible; we’re still in the early stages!
  • We would love your help! I know this usually goes without saying in FLOSS, but I still think it is worth saying. We would love more folks – with any skillset – to help us figure this out and make this new web presence for Fedora happen!

Are you interested in helping out? Or perhaps you’d just like to play around with our assets – no strings attached – for fun, or follow along on the progress at a lower level than just reading these blog posts? Let’s talk about where the action is happening so you can get in on it! :)

How To Get Involved

Up until this point, the Fedora.next web ideas and mockups have been scattered across various blogs, Fedora people pages, and git repos. We talked a bit last week in #fedora-design about centralizing all of our assets in one place to make it easier to collaborate and for new folks to come on board and help us out. Here’s what we’ve set up so far:

  • A Fedora Design GitHub group – I’ve already added many of our friends from the Fedora Design team. If you’d like to be included, let me know your GitHub username!
  • nextweb-assets git repo – This repo has the Inkscape SVG source for the mockups and diagrams I’ve been blogging here. Please feel free to check them out, remix them, or contribute your own! I tried to set up a sensible directory structure. I recommend hooking this repo up to SparkleShare for a nice workflow with Inkscape.
  • mockups-getfedora git repo – This repo holds the prototype Ryan has been working on for the new getfedora.org ‘Brochure Site’ in the proposal.

We also, of course, have #fedora-design on freenode IRC for discussing the design, as well as the design-team mailing list for discussion.

The Fedora Websites team will be setting up a branch for the new websites work sometime by the end of next week. For now, you can take a look at the mockups-getfedora repo. You also might want to set up a local copy of the Fedora websites repo by following these instructions to get familiar with the Fedora websites workflow.

Okay, I hope this makes it abundantly clear that we’d love your help and gives you some clear steps towards starting to get involved should you be interested. Please don’t hesitate to get in touch with me or really anyone on the design team or websites team if you’d like to get started!

Back from PyCon

I'm back from Montreal, settling back in.

The PiDoorbell tutorial went well, in the end. Of course just about everything that could go wrong, did. The hard-wired ethernet connection we'd been promised didn't materialize, and there was no way to get the Raspberry Pis onto the conference wi-fi because it used browser authentication (it still baffles me why anyone still uses that! Browser authentication made sense in 2007 when lots of people only had 802.11g and couldn't do WPA; it makes absolutely zero sense now).

Anyway, lacking a sensible way to get everyone's Pis on the net, Deepa stepped up as network engineer for the tutorial and hooked up the router she had brought to her laptop's wi-fi connection so the Pis could route through that.

Then we found we had too few SD cards. We didn't realize why until afterward: when we compared the attendee count to the sign-up list we'd gotten, we had quite a few more attendees than we'd planned for. We had a few extra SD cards, but not enough, so I and a couple of the other instructors/TAs had to loan out SD cards we'd brought for our own Pis. ("Now edit /etc/network/interfaces ... okay, pretend you didn't see that, that's the password for my home router, now delete that and change it to ...")
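
For those who haven't set up Pi networking by hand: the entries we were dictating live in /etc/network/interfaces, and on the Raspbian of that era a wi-fi stanza looks roughly like this (SSID and passphrase invented for illustration):

auto wlan0
iface wlan0 inet dhcp
    wpa-ssid "MyHomeNetwork"
    wpa-psk "not-my-real-passphrase"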

Then some of the SD cards turned out not to have been updated with the latest packages, Mac users couldn't find the drivers to run the serial cable, Windows users (or was it Macs?) had trouble setting static ethernet addresses so they could ssh to the Pi, all the problems we'd expected and a few we hadn't.

But despite all the problems, the TAs: Deepa (who was more like a co-presenter than a TA), Serpil, Lyz and Stuart, plus Rupa and I, were able to get everyone working. All the attendees got their LEDs blinking, their sonar rangefinders rangefinding, and the PiDoorbell script running. Many people brought cameras and got their Pis snapping pictures when the sensor registered someone in front of it. Time restrictions and network problems meant that most people didn't get the Dropbox and Twilio registration finished to get notifications sent to their phones, but that's okay -- we knew that was a long shot, and everybody got far enough that they can add the network notifications later if they want.

And the most important thing is that everybody looked like they were having a good time. We haven't seen the reviews (I'm not sure if PyCon shares reviews with the tutorial instructors; I hope so, but a lot of conferences don't) but I hope everybody had fun and felt like they got something out of it.

The rest of PyCon was excellent, too. I went to some great talks, got lots of ideas for new projects and packages I want to try, had fun meeting new people, and got to see a little of Montreal. And ate a lot of good food.

Now I'm back in the land of enchantment, with its crazy weather -- we've gone from snow to sun to cold breezes to HOT to threatening thunderstorm in the couple of days I've been back. Never a dull moment! I confess I'm missing those chocolate croissants for breakfast just a little bit. We still don't have internet: it's nearly 9 weeks since Comcast's first visit, and their latest prediction (which changes every time I talk to them) is a week from today.

But it's warm and sunny this morning, there's a white-crowned sparrow singing outside the window, and I've just seen our first hummingbird (a male -- I think it's a broad-tailed, but it'll take a while to be confident of IDs on all these new-to-me birds). PyCon was fun -- but it's nice to be home.

Choosing a license

One in a series of indeterminate length I am calling “mostly unimportant questions which take an inordinate amount of time to resolve when releasing a project as free software”. For the next topic, I’m hesitating between “naming”, “logo/icon/mascot design” and “mailing lists or forums”.

 

Choosing a license

Free software projects need licenses. But choosing a license is such a pain that most GitHub projects don’t even bother (resulting in an initiative by GitHub to rectify this). And when taking a closed source project and making it free software, the topic of license choice will take a huge amount of time and effort.

I have found the following questions accelerate things nicely.

  1. Does the project exist as part of a greater ecosystem (eg. Apache, Eclipse, GNOME, Perl, Ruby)?
  2. If so, is there a predominant license in that ecosystem (eg EPL for Eclipse, MPL for Mozilla, MIT for Ruby gems)? Then use that license.
  3. Does your business model depend on you having total control of the project for the foreseeable future? (Aside: If so, consider changing your business model) Consider GPL v3+/proprietary dual license
  4. Do you want to grow a vibrant developer community around your project? If not, why not? Avoid dual license, copyright assignment
  5. Do you want to grow a vibrant service partner/extensions ecosystem, including proprietary extensions, around your project? Avoid GPL v2+ or v3+ – prefer MPL v2 or Apache v2
  6. Do you have any dependencies whose licenses you must comply with (eg. GPL v2 hard dependency)? Ensure you can distribute result under a compliant license
  7. Do you have concerns about the patent portfolios of potential project contributors? Choose from GPL v3, MPL v2, Apache v2 for stronger patent protection for contributors
  8. Do you believe that all contributors to the project, including extensions, should be subject to the same rules? Choose GPL v3
  9. Do you believe that the source code is free, and people should do whatever they want with it as long as they give you credit? Choose MIT or Apache v2
  10. After answering these questions, are you considering a license outside of (L)GPL v3, MPL v2, Apache v2 or MIT? Don’t.

After all of this, there are still situations which can lead to different outcomes – perhaps you want to join a specific non-profit later, and your license choice will be influenced by that. Perhaps you have a dependency currently which you plan to work around later, and you might dual license source code contributions under multiple free software licenses to allow relicensing easily (as OpenOffice.org and Mozilla have done). But the answers to the 10 questions above will at least reduce the scope of your search to one or two licenses.

Any considerations I have missed? Comments welcome!

Writing more

I realised recently that most of my writing has been of the 140-character format… I plan to rectify this, starting today.

What is GOM¹

Under that name is a simple idea: making it easier to save, load, update and query objects in an object store.

I'm not the main developer of this piece of code, but I contributed a large number of fixes to it while porting another piece of code to it as a test of the API. Much of the credit for the design of this very useful library goes to Christian Hergert.

The problem

It's possible that you've already implemented a data store inside your application, hiding your complicated SQL queries in a separate file because they contain injection security issues. Or you've used the filesystem as the store and thrown away the ability to search particular fields without loading everything into memory first.

Given that SQLite pretty much matches our use case - it offers good search performance, it's a popular and thus well-documented project, and its files can be manipulated through a number of first-party and third-party tools - wrapping its API to make it easier to use is probably the right solution.

The GOM solution

GOM is a GObject based wrapper around SQLite. It will hide SQL from you, but still allow you to call to it if you have a specific query you want to run. It will also make sure that SQLite queries don't block your main thread, which is pretty useful indeed for UI applications.

For each table, you would have a GObject, a subclass of GomResource, representing a row in that table. Each column is a property on the object. To add a new item to the table, you would simply do:

item = g_object_new (ITEM_TYPE_RESOURCE,
                     "column1", value1,
                     "column2", value2,
                     NULL);
gom_resource_save_sync (item, NULL);

We have a number of features which try to make it as easy as possible for application developers to use gom, such as:
  • Automatic table creation for string, string arrays, and number types as well as GDateTime, and transformation support for complex types (say, colours or images).
  • Automatic database version migration, using annotations on the properties ("new in version")
  • Programmatic API for queries, including deferred fetches for results
Currently, the main net loss in terms of lines of code, when porting from SQLite, is the verbosity of declaring properties with GObject. That will hopefully be fixed by the GProperty work planned for the next GLib release.
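
As a sketch of the programmatic query API (assuming a GomRepository has already been opened as repository, and treating the exact signatures as approximate), finding the rows matching a column value looks something like this:

GValue value = G_VALUE_INIT;
GomFilter *filter;
GomResourceGroup *group;
GError *error = NULL;

g_value_init (&value, G_TYPE_STRING);
g_value_set_string (&value, "some-value");

/* Roughly: SELECT * FROM items WHERE column1 = 'some-value' */
filter = gom_filter_new_eq (ITEM_TYPE_RESOURCE, "column1", &value);
group = gom_repository_find_sync (repository, ITEM_TYPE_RESOURCE,
                                  filter, &error);

/* Results are deferred: fetch the rows you actually need */
gom_resource_group_fetch_sync (group, 0,
                               gom_resource_group_get_count (group),
                               &error);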

The future

I'm currently working on some missing features to support a port of the grilo bookmarks plugin (support for column REFERENCES).

I will also be making (small) changes to the API to allow changing the backend from SQLite to another one, such as XML, or a binary format. Obviously the SQL "escape hatches" wouldn't be available with those backends.

Don't hesitate to file bugs if there are any problems with the API, or its documentation, especially with respect to porting applications already using SQLite directly. Or if there are bugs (surely not).

Note that JavaScript support isn't ready yet, due to limitations in gjs.

¹: « SQLite don't hurt me, don't hurt me, no more »

April 16, 2014

CopyClay overhaul

Hi guys!

Long time without posting! Yes, mainly due to internal work that really isn't worth commenting on publicly; but being internal doesn't make it any less interesting or important, at least to me and 3DCoat. There's also the fact that I'm in Cuba again :P
Finally CopyClay is getting close to its goal. Many new features have been added, and others got a major overhaul and improvement!

Hope you like it!

 


Design Hub Idea (Fedora.next website redesign)

So a couple of weeks ago we talked about a proposal for the new Fedora website that Ryan Lerch, Matthew Miller, and I came up with. The feedback we’ve gotten thus far has been overwhelmingly positive, so I’ve put some time into coming up with less vague and hand-wavy ideas as to what a particular sub-hub on the Fedora ‘Community Hub’ might look like. Remember, this thing we talked about:

diagram_communityhub_subhubs

We’re talking about what one of those individual little hubs might look like. The theoretical examples above are very Fedora team-centric; I would like us to follow a model a little more flexible than that in the spirit of Reddit. E.g., it should be easy to break out a new subhub for a specific topic, or a cross-team collaboration / project, etc. So the sub-hubs won’t necessarily be along team lines.

A Sub-hub for the Design Team

sub

Okay, okay, not that kind of sub. (I have a sandwich graphic too, just waiting for its opportunity. :) ) I understand pretty deeply how the Fedora design team works, the workflows and processes we’re involved with, so I figured it’d make the most sense to mock up a subhub for that team. The lovely Tatica volunteered to be the subject of this mockup. :)

This is going to be an obnoxiously big one. We’ll walk through it. Here goes:

design-hub-idea_notes

Alert Box

The first thing that should hit you is the purple alert box. (I think the color should probably be customizable from a pre-selected set of choices on a per-subhub basis.) From looking around at various online communities and forums and chatting with folks, it seems a common meme for organizing online communities is having a set of guidelines for how the community is run. The idea with this box is that the community owners / mods can set the message, and it’ll be displayed to newcomers to the hub, or to everybody if it is ever updated. It can be dismissed and won’t show up again unless the content is changed. It also links to a fuller set of community rules and guidelines.

Moderator Box

This is kind of a meta help box. It’s in the right sidebar, towards the top. It has a list of the group owners / mods; you can click on their names to get more info about them. It also has a link to the community rules & guidelines (helpful in case you closed out the alert box). One idea we’ve been kicking around is letting people notify the mods of any issues from this widget; the tension there is making sure it doesn’t become a spam outlet.

Custom subhub banner

Following Reddit’s lead, there’s a space below the main navbar designated for the subhub’s branding. Some of the subreddit artwork I’ve seen isn’t the best quality though. We’ll probably offer a design team service to design the banners for different subhubs in the system. We can also provide a precanned set of nice backgrounds that teams can choose from. The way we’re thinking the banner will work is you can set a repeatable background tile, and then set a graphic that will be displayed left, center, or right.

User / profile config center

This isn’t mocked up yet; the vision there is that it would let you visit your profile page and would also provide a lot of the functionality you have in the FAS2 website today: change your password, change your ssh key, location, etc., as well as manage your group memberships.

Messaging center

This one is also not yet mocked up. It will likely be getting its own blog post soon. There are a lot of different types of messages/notifications a user could get, so I think we need to sit down and catalogue the ones we know about before mocking anything up. I think it might also be cool to have a place to save/store stuff you like, such as a favorites list you can refer back to later.

Nav bar

Okay, so here’s the idea with the navbar. It’s another Reddit-inspired thing. Users logging in for the first time with fresh FAS accounts by default will have a few select hubs in their navbar – perhaps ‘front,’ ‘planet,’ and ‘announce.’ (‘front’ could be maybe some kind of aggregation of the most popular content; planet would be a hub that basically repeats Fedora planet maybe, announce would basically mirror the Fedora project announce-list.)

Once a user joins different FAS groups – or if they are already a member of FAS groups – the hubs associated with the groups they are a member of could appear in their navbar. So here, you see Tatica has ‘designteam,’ ‘ambassadors,’ ‘marketing,’ and ‘LATAM’ subhubs in her navbar, as an example.

You can customize your nav bar by hitting the ‘edit’ button on the far right side of it. Maybe there could be a directory of subhubs across the system when you click on the ‘hubs’ logo in the upper right, and you can add them to your nav from there as well.

Subhub meta bar

This is the topmost widget in the right-hand sidebar. It gives you an idea of how many people are ‘members’ of the subhub (analogous to how many people are members of the FAS group it’s associated with), and how many people follow the hub (‘subscribers’). It also provides a mechanism for you to subscribe or unsubscribe from the hub.

Example Hyperkitty post

There’s an example post, ‘Fedora Design Github org,’ that I posted to the design-team mailing list a few days ago. This is meant to show how a post from Hyperkitty could appear in this system. The thought / hope is that we could use the Hyperkitty/Mailman API to send comments, or at the very least simply display them and link back out to Hyperkitty for replying and reading other posts.

Alternatively, we could just have a widget for the design-team mailing list, and not integrate posts into the news stream on the hub. We could instead show some of the Hyperkitty widgets currently displayed on list overviews, like the most active threads list or the most recent threads list. That’s another way to go. I’m not sure what’s best yet. Maybe we give subhub owners a choice, depending on how much they actively use the mailing list or not in their particular community.

Glitter Gallery post

We have another example post below the Hyperkitty one; this one is an example Glitter Gallery post. You can view the artwork and make comments on it, and the comments should get sent back to Glitter Gallery.

Example blog post

Further down the main news feed area we have a small snippet of a blog post to show how that would look in the subhub context. The idea here is that the design team has a subplanet on planet.fedoraproject.org associated with it – planet.fedoraproject.org/design – so those posts could show up in the chronological news stream as well.

Chat widget

Another idea in the right-hand sidebar – inspired by waartaa and ideally driven by it – a little chat client that connects to #fedora-design on freenode IRC, where the design team tends to hang out. I do not think the backlog should be blanked on every page load – I think there should always be at least a few hundred lines of backlog stored with the subhub so anybody coming in can follow the conversation from before they joined that they missed out on. It’ll let folks catch up and participate. :)

Nuancier widget

This is just a simple little widget to show that widgets don’t have to be complex – this one drives users who haven’t yet voted on the Fedora supplemental wallpapers to go and vote!

Ticket Widget

The design team has a trac queue where we accept requests for artwork and design from across the project. It might be nice to inspire folks to help out with a ticket by having available tickets listed right there. It could be good motivation too: if someone finishes a ticket or posts something to a ticket, that would be shown in the feed as well. When you do good work and complete something, or submit something and are looking for feedback, you’d get more exposure by having that broadcast to the subhub news feed.

Some thoughts

Okay, so hopefully that little tour through the mockup made sense. What do you think?

Overall, I would like to point out that as with Hyperkitty, the design principle here is the same – we do not want to displace folks who are already happy with the tools they use and force them to log into this web system and use only that. If someone posts a reply to a mailing list post through this hubs system, the reply should get sent back to the mailing list as a reply and should be perfectly readable by folks using only a mail client to receive postings by email. If someone sends a message in the chat, folks using a traditional IRC client in that channel should be able to see that message and communicate with the sender without issue. The hope here is to bring things together to make it easier and less intimidating for newcomers without sacrificing anything on the current contributors’ side.

I’d love to hear your thoughts in the comments!

Krita 2.8.2 Released

Today the Krita team releases the second bugfix release of Krita 2.8.

Most of the development work at the moment is going into some big issues for 2.9, like the resources manager, MVC refactoring and HDR color selectors, but there are some nice improvements:

  • add support for reading PSD layer groups
  • new splash screen with recent files and links to the Krita website
  • save tags with special characters properly (bug 332708)
  • fix removing tags
  • restore native file dialogs on Windows
  • fix a bunch of memory leaks

And expect more bug fixes for 2.8.3!

Linux users can get updates from their distributions, Windows users can download installers from the kritastudio.com website.

http://heap.kogmbh.net/downloads/krita_x64_2.8.2.0.msi
http://heap.kogmbh.net/downloads/krita_x86_2.8.2.0.msi




The SELinux Coloring Book

selinux-comic-book-thumb

Dan Walsh had a great idea for explaining SELinux policy concepts in a fun way – creating an SELinux coloring book! He wrote up a script, I illustrated it using my Wacom in Inkscape on Fedora, and we turned it into an opensource.com article. Still, we needed physical coloring books, and what better place to hand them out than at the Red Hat Summit?

We got them printed up and shipped off to the Summit (some in assorted volunteers’ baggage :) ), and they’ve been so popular that Dan is getting close to running out, except for a reserve he’s kept for the SELinux for Mere Mortals talk later today. We also handed out some slightly imperfect misprints in the Westford Red Hat office, and we’ve been told a co-worker’s daughter brought hers to pre-school and it was a big hit – the other kids want their own. When it comes to SELinux, we’re starting ‘em young on the setenforce 1 path. :)

How might you get your own copy? Well, we’ve made the coloring book, including the text and artwork, available under a Creative Commons Attribution ShareAlike license. So download, print, share, remix, and enjoy! :)

No Starch Press GIMP Books 30% off!

You know that I like the No Starch Press GIMP books – I even had my hand in the production of one of them. The other is always in reach at my desk. Over at Reddit there is a coupon code for getting 30% off of them – valid until May 6.

(I once got some $$$ and a crate of books from them for doing a technical review, but this post is not paid for by anyone.)


April 15, 2014

IFC support in FreeCAD

We have recently reached a very important point with the Architecture module of FreeCAD: import and export of IFC files. The support for IFC is not fully compliant yet, but I believe it is stable enough that I can talk about it here. The IFC format is a very important foundation of any decent BIM workflow....

Blenderart Mag Issue #44 now available

Welcome to Issue #44, “Mech Mayhem”

There is an artistry to a well designed machine that even the most clueless of us can appreciate. Watching all the little moving parts work together in harmony is a sight to behold and a wonder to those of us with no concept of how it all goes together.

The magic of robots, gears and all manner of mechanical moving parts is to be found in our latest issue of Blenderart Magazine, as well as a look at how to realistically texture them. It is an issue full of mechanical goodness waiting for your reading pleasure.

So grab your copy today. Also be sure to check out our gallery of wonderful images submitted by very talented members of our community.

Table of Contents:

  • Tutorial: Mech Bust
  • Tutorial: Mech Textures
  • A Weight Scale
  • Mechanical Systems
  • Retracting Landing Gears
  • Creating Comics Character

And Lot More…

April 14, 2014

JDLL 2014 report

The 2014 "Journées du Logiciel Libre" took place in Lyon like (almost) every year this past weekend. It's a francophone free software event over 2 days with talks, and plenty of exhibitors from local Free Software organisations. I made the 600-metre trip to the venue, and helped man the GNOME booth with Frédéric Peters and Alexandre Franke's moustache.



Our demo computer was running GNOME 3.12, using Fedora 20 plus the GNOME 3.12 COPR repository which was working pretty well, bar some teething problems.

We kept the great GNOME 3.12 video running in Videos, showcasing the video websites integration, and regularly demo'd new applications to passers-by.

The majority of people we talked to were pretty impressed by the path GNOME has taken since GNOME 3.0 was released: the common design patterns across applications, the iterative nature of the various UI elements, the hardware integration or even the online services integration.

The stand-out changes for users were the Maps application which, though a bit bare bones still, impressed users, and the redesigned Videos.

We also spent time with a couple of users dispelling myths about the "lightness" of certain desktop environments or the "heaviness" of GNOME. We're constantly working on reducing resource usage in GNOME, be it sluggishness due to the way certain components work (with the applications binary cache), memory usage (cf. the recent gjs improvements), or battery usage (cf. my wake-up reduction posts). The use of gnome-shell on tablet-grade hardware for desktop machines shows that we can offer a good user experience on hardware that's not top-of-the-line.

Our booth was opposite the ones from our good friends from Ubuntu and Fedora, and we routinely pointed to either of those booths for people that were interested in running the latest GNOME 3.12, whether using the Fedora COPR repository or Ubuntu GNOME.

We found a couple of bugs during demos, and promptly filed them in Bugzilla, or fixed them directly. In the future, we might want to run a stable branch version of GNOME Continuous to get fixes for embarrassing bugs quickly (such as a crash when enabling Zoom in gnome-shell which made an accessibility enthusiast tut at us).


GNOME and Rhône

Until next year in sunny Lyon.

(and thanks Alexandre for the photos in this article!)

April 13, 2014

Blender website downtime

The DNS service we use – from www.powerdns.net – has been failing a lot recently, last weekend even for more than 12 hours. That meant that all blender.org domains were not accessible for that period. That's not acceptable.

We are working on solving this issue.

-Ton-

April 12, 2014

Episode 197: The LGM 2014 Group Photo

LGM 2014 Group Photo

Download the Video! (37:06, 124 MB)

Download the Companion File! (130 MB)

Watch at YouTube

I went to the Libre Graphics Meeting 2014 in Leipzig to get either a boost for my motivation or to find an end for this project. It turned out to be a booster.

It was a really good time – even with missing the first day and the (for me) most interesting talks because I had to work that day. Another day was spent at the Zoo with Pat David and his wife, more time in the coffee room, Milchbars and restaurants. All the time having good conversations and learning a lot.

And then I got the honors of shooting the traditional group image. I assume it was not my track record of famous group pictures but the 36 megapixel resolution of my D800 that led to that decision.

In this episode I cover the post processing of the image and how I blended my previously taken image into the group.

The TOC

00:00:00 Intro
00:01:14 The LGM Group Photo
00:04:05 Rotating the image
00:06:05 Adjusting contrast and brightness with the curves tool
00:09:23 Getting myself into the image
00:10:54 Registering the layers
00:14:38 Merging with a layer mask
00:17:55 Ways to change the brush size
00:18:50 Cleaning up the layer mask
00:20:38 Curves adjustments on a selection
00:24:28 Cropping the image
00:25:58 Sampling a fill colour out of the image
00:27:14 Sharpening with wavelets
00:31:12 Adding the SVG logo
00:34:04 Scaling down and exporting
00:36:38 Good bye!
00:37:06 EOF

Meet the GIMP Video Podcast by Rolf Steinort is licensed under a Creative Commons Attribution 4.0 Unported License.
Permissions beyond the scope of this license may be available at http://meetthegimp.org.


Entangle at LibreGraphicsMeeting 2014, plugin system and Windows porting

Libre Graphics Meeting, 2014

Last week I took some time off from my day job working on virtualization to travel to Leipzig for this year’s LibreGraphicsMeeting. This was my first visit to LGM, coming about after Jehan Pagès suggested that I present a talk about Entangle in order to attract more developer & user interest. I had a 20-minute slot at the end of the first day where I went through the basic concepts / motivation behind the project, then did a short live demo, finishing up with a look at some of the problems facing / future needs of the project. Historically my talks have always involved death-by-bullet-point style LibreOffice slides, but for LGM I decided it was time to up my game. So I turned to the GNOME Pinpoint application:

“a simple presentation tool that hopes to avoid audience death by bullet point and instead encourage presentations containing beautiful images and small amounts of concise text in slides”

I took this to heart, creating a set of slides which consisted entirely of beautiful images I’ve taken myself overlaid with just one or two words of text. I found it quite a liberating experience to give a presentation using such a slide deck, since it freed me from having to read the slides myself or worry that I’d forgotten to talk about a particular bullet point. Hopefully those in the audience found the talk a more fluid & engaging experience as a result. After the talk, and in the days following, I had interesting discussions about Entangle with a number of other attendees, so going to LGM definitely achieved the goal of attracting more interest.

At the end of the last day, we had the usual conference group photo, but Jakub Steiner then raised the bar by getting out his quad-copter with camera mounted to shoot a group video from the air.

Application plugins

Since the very start of the project, it has always been the intention for users to be able to extend the core interface with custom plugins. Thus far though, this has been all talk, with little real action. Entangle is written using GTK/GObject with a fairly well structured object hierarchy / codebase to make it easier to understand / extend and integrates with the libpeas library for plugin infrastructure. What has been missing is proper annotations on the internal APIs to enable GObject Introspection to be able to correctly call them from non-C languages. The past few weeks I’ve been working hard to address all the warnings displayed by g-ir-scanner with the goal that all internal APIs are fully annotated with calling conventions.
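
For illustration, these annotations are just specially formatted gtk-doc comments telling g-ir-scanner who owns each pointer; a hypothetical getter (not actual Entangle code) would be annotated along these lines:

/**
 * entangle_example_get_name:
 * @object: (transfer none): the object to query
 *
 * Returns: (transfer full) (nullable): a newly allocated copy of the
 *   name, to be freed with g_free(), or %NULL if unset
 */
gchar *entangle_example_get_name (EntangleExample *object);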

I had tentatively set up libpeas to use the ‘gjs’ loader to enable plugins to be written in JavaScript, inspired by the fact that GNOME Shell is extendable using JavaScript. At LGM, Martin Owens (IIRC) mentioned in passing to me that he thought Entangle would be better off using Python. Though we didn’t get into a deep discussion on this matter at the time, it got me thinking about whether JavaScript was the right choice. A few months back I did start hacking on a GNOME Shell extension for displaying an astronomy weather forecast in the top bar, which naturally enough was in JavaScript. One of the frustrating things I found with that effort was a general lack of documentation / examples about JavaScript usage in the context of non-browser apps, as well as the lack of a good archive of reusable library functions. While the GNOME platform APIs are, of course, all available for usage, there’s more to the world than GNOME. In particular I needed code for calculating sunset/sunrise times and lunar phases. With this experience in mind, and since there are no plugins yet written for Entangle, I’ve decided to switch it to use Python for its plugin system. This means that as well as having access to the GNOME platform APIs, plugin authors will be able to leverage the entire Python ecosystem, which is truly enormous. Since Entangle requires GTK3, which is pretty modern, I decided I might as well ignore Python 2.x, so Entangle is going to require Python 3 only. This avoids a painful decision later about whether/when to switch from 2 to 3.
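
The mechanics of the switch are tiny; in libpeas it boils down to something like this at startup (a sketch, not the actual Entangle code):

/* Enable the Python 3 plugin loader; C plugins stay loadable as before */
PeasEngine *engine = peas_engine_get_default ();
peas_engine_enable_loader (engine, "python3");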

To actually prove that the plugin system is useful, I’ve also started work on a simple demonstration plugin. The goal of the plugin is to provide a way to turn the UI into a “selfie” photobox. This was inspired by the fact that a user has already used Entangle for such a system in the past. The plugin basically just hides the menu bar, toolbar and camera settings panels, adds a large “capture” button and then puts the app into fullscreen mode. Working on this helped me identify a few issues in the codebase which were making life harder than it needed to be – for example there was no easy way to get access to the GtkBuilder object associated with a window, so getting hold of individual widgets was kind of tedious. With the first cut of the plugin now written and working, it should serve as nice example code for users interested in figuring out how to write more plugins. At the top of the list for plugins I want to work on next is something to automate the shooting of an entire sequence of images. For example with astrophotography it is common to take sequences of many hundreds of images over several hours. You quickly become bored of pressing the shutter button every 30 seconds, so automation is very useful.

Windows porting

One of the memorable quotes from LGM was

“5% of the Windows market is larger than 95% of the Linux [desktop] market”

While I’m not personally interested in using Windows on the desktop, preferring GNOME 3 and open source software exclusively, I realize that not everyone shares this viewpoint. There are plenty of people who want to continue to use Windows for whatever reasons they have, but wish to be able to use cool open source applications like LibreOffice, GIMP, or Digikam. The Nikon-provided software for DSLR control is not cheap, so there’s clearly an opportunity for Entangle to be useful for Windows users. The key factor from my POV is minimising the overhead / costs of maintaining the port. I don’t want to have to use Windows on a day-to-day basis for this porting work, so getting it working using Mingw64 and hopefully WINE is a pre-requisite.

Entangle depends on a number of 3rd party libraries, the most critical of which are of course GTK3 and libgphoto2. GTK3 is available for Mingw64 in Fedora already, but libgphoto2 is not in such good shape. There is clear evidence that someone has done some amount of work porting libgphoto2 to Windows in the past, but it is unclear to me if it was ever complete, as today’s codebase certainly does not work. One evening’s hacking was enough to fix the blocking compile errors exposed when cross-compiling with Mingw64, but that left many, many hundreds of compile warnings. Even with the native Linux toolchain, libgphoto2 spews several hundred compiler warning messages, which really obscure the warnings relevant to Mingw64. So I’ve spent several more evenings cleaning up compiler warnings in libgphoto2 on Linux, resulting in a 17-patch series just submitted upstream. Now I can focus on figuring out the rest of the Mingw64 portability problems in the codebase. There’s still a way to go before Entangle will be available on Windows, but at least it looks like it will be a feasible effort overall.

April 11, 2014

Mechanic-Sister Concept

According to the screenplay of Morevna Project, the main character (Ivan) has three sisters. One of them is Mechanic-Sister, who sleeps with a wrench under her pillow.


Mechanic-Sister
Artwork by Anastasia Majzhegisheva

LGM Leipzig

Another great Libre Graphics Meeting is behind us and I’m grateful for being able to take part in it. Big thanks to everyone making it happen, particularly the GIMP folks for allowing an old affiliate to share the Wilberspace.

There were some great talks, quite a few relating to Blender this year, which I hope will become a trend :) Peter Sikking demonstrated how to present (yet again). Even though I’ve been fully aware of the direction of GEGL-based non-destructive editing GIMP is taking, the way Peter showed the difference between designing for a given context versus mimicking was fun to watch. Chris Lilley showed us the way forward for the gnome-emoji project with SVG support in OpenType. There was so much going on besides the main talks that I managed to miss many, including Pitivi’s and Daniel’s on Entangle.

Allan and I presented what we do within the GNOME project and how to get involved. Kind of ran out of time though, guess who’s to blame. The GIMP folks set up a camera, so hopefully there will be footage of the talks available. Really enjoyed my time, always like coming back with the need to create more things.

Martin Owens deserves a shoutout for being an awesome Inkscape developer trying to address some rough spots we’ve bumped into over the years. Almost made me want to follow the Inkscape mailing list again :) Hopefully soon, we’ll be able to ditch the window opening verb madness we use for gnome-icon-theme-symbolic export.

Watch on Youtube

April 10, 2014

LGM 2014, one more year of awesomeness in Libre graphics software

Here’s my report of the Libre Graphics Meeting 2014 that took place in Leipzig last week:
-Very nice people
-Awesome projects
-Productive connections
-Time was flying (and a quadcopter drone too… ;P )

Seriously, it has been one more impressive meeting; big thanks to the organisers, who did very good work!
And again, very big thanks to KDE e.V. for supporting me so I could represent Krita there.
I gave a workshop about managing all kinds of assets in Krita, and participants were very happy about it. I also improvised a little lightning talk to relay the Krita Steam early access announcement that happened the same week.

Then, lots of unexpected productive discussions:
-I spent a lot of time talking with Tom Lechner and learned some cool fanzine production tips that make me want to make some now. And he's a crazy good independent comics artist, so it was very inspiring to be able to discuss comics-related topics with him.
He has also developed incredibly good new tools in his Laidout software; I definitely must give them a try! And I hope to see some of these tools included in other libre graphics software at some point, as he's been working on the Tool Sharing concept.

-I met Manuel Quiñones, the one who made the xsheet-mypaint branch two years ago for a local animation production in Argentina. He is now working on a new "Xsheet" animation software from scratch, using libmypaint for the brushes and GEGL as the "canvas engine". Again, it was really good to be able to meet him and discuss animation-related projects, and how his xsheet software could be used in combination with the Krita animation plugin that is in progress.

-David Tschumperlé from Gmic was there too for the first time, so it was great to finally meet him personally, as we worked together recently on the colorize-comics filter. I hope to send him good ideas for some new Gmic filters soon.

And of course all the other presentation and workshop topics were immensely interesting for someone working in graphics: font creation, raw photography workflow, all kinds of 3D work with Blender, Inkscape and SVG spec evolutions, the Libre Graphics Magazine team talks…

It was cool to see the GNOME design group talk, but it made me think that the new Visual Design Group in the KDE community was sorely missed, so I hope some of them will be able to come next year.

Look forward to the conference videos that should be online soon, and the first LGM-people aerial-group-video recorded by Jakub Steiner from his funny quadcopter!

Krita: Russian Translations Updated!

A picture by Georgiy Syptchenko
after a well-known series by David Revoy :)


Thanks to Georgiy Syptchenko from the Krita Russian Community [0], Krita's translations into Russian have improved significantly recently!

We have already done three translation updates in the Krita Lime repository, and there are new changes yet to come!

So if you happen to speak Russian and want to help us with testing our translations, please follow this manual [1] and install updated translation packages!



[0] - http://vk.com/ilovefreeart
[1] - http://dimula73.blogspot.ru/2014/03/krita-lime-localization-support.html

April 08, 2014

Interview with Tago Franceschi


How did you first find out about open source communities? What is your opinion about them?

In 2005, a friend told me about Ubuntu, and I have been discovering it ever since. I love the open source philosophy; I think it's a great project and all those who participate are awesome people!

What was your first take on Krita when you tried it?

My first impression of Krita was very positive: an intuitive interface and excellent management of shortcuts. And, I don't know, for me, after using Photoshop Elements and GIMP for several years, it was love at first sight!

What do you think needs improvement in Krita? Also, anything that you really hate about Krita?

For improvements, I don't know; the only thing that makes me mad is the management of adjustments made with curves. I would prefer bars with more or less ... I hope you understand what I mean …

In your opinion, what sets Krita apart from the other tools that you may be using?

In the past two years I have used only Krita. I think it's because of the responsiveness of the brush; with the tools that I used previously I didn't have the same feeling.

If you had to pick one favorite of all your work done in Krita so far, what would it be?

Good question! Perhaps "bellezza sul lago".

What is it that you like about it? What brushes did you use in it?

In it I was able to combine, I think with a good result, different styles (impressionism and realism) in a single work. I used the default brush, with variation in size, opacity and shape (round and square).

Would you like to share it with our site visitors?

Sure, no problem!

 

April 07, 2014

Mon 2014/Apr/07

  • Upgraded GPG Key

    I've upgraded my GPG key from an old, 1024-bit DSA key to a new, 4096-bit RSA one.

    You can get my new GPG key here. Also, here is my transition statement which you can verify; it is signed with both keys.

    To do the upgrade I used this tutorial, and to find out how to sign the transition statement with both the old and new keys, I looked here.
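
    If you need to do the same, the dual signing at the heart of it boils down to a single gpg invocation along these lines (the key IDs here are placeholders):

    gpg --digest-algo SHA512 --clearsign -u OLDKEYID -u NEWKEYID transition-statement.txt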

Snow-Hail while preparing for Montreal

Things have been hectic in these last few days before I leave for Montreal, with last-minute preparation for our PyCon tutorial, Build your own PiDoorbell - Learn Home Automation with Python, next Wednesday.

[Snow-hail coming down on the Piñons] But New Mexico came through on my next-to-last full day with some pretty interesting weather. A windstorm in the afternoon gave way to thunder (but almost no lightning -- I saw maybe one indistinct flash), which gave way to a strange fluffy hail that got gradually bigger until it eventually grew to pea-sized snowballs, big enough and snowy enough to capture well in photographs as they came down on the junipers and in the garden.

Then after about twenty minutes the storm stopped and the sun came out. And now I'm back to tweaking tutorial slides and thinking about packing while watching the sunset light on the Rio Grande gorge.

But tomorrow I leave it behind and fly to Montreal. See you at PyCon!

April 05, 2014

Benchmarking on OSX: HTTP timeouts!

I’ve been doing some HTTP benchmarking on OSX lately, using ab (ApacheBench). After a large volume of requests, I always ended up with connection timeouts. I used to blame my application and had mentally filed it as “must investigate”.

I was wrong.

The problem here was OSX, which seems to have only roughly 16000 ports available for connections. A port that was used by a closed connection is only released after 15 seconds. A quick calculation (16000 ports / 15 seconds ≈ 1067) shows that you can only sustain a rate of about 1000 connections per second. Try to do more and you’ll end up with timeouts.

That’s not acceptable for testing pretty much anything that scales.

 

Here’s the workaround: you can control the 15-second release delay with sysctl:

sudo sysctl -w net.inet.tcp.msl=100

There’s probably a good reason why it’s in there (msl is the TCP “maximum segment lifetime”, which governs how long closed connections linger in the TIME_WAIT state), so you might want to revert this value once you are done testing:

sudo sysctl -w net.inet.tcp.msl=15000

 

Alternatively, you could just use Linux if you want to get some real work done.

Slides from “Dear designer, have these cool tools”

At Libre Graphics Meeting this year, the Libre Graphics magazine team gave two talks. Below, the deck from our second talk, titled “Dear designer, have these cool tools”, made with scri.ch.

[slide images]

April 04, 2014

Reading suggestions from the “Beyond the women in tech talk” panel at LibrePlanet 2014

Below, we present to you a set of reading suggestions created collaboratively (in real time, on an etherpad) during the “Beyond the women in tech talk” panel held at LibrePlanet 2014. This list is the work of a number of different people who attended the panel. A version which shows revision history can be found at http://piratepad.net/lp2014-beyond-women

 
reading list — suggestions from the LP 2014 Beyond Women in Tech talk

for those coming to this after not seeing the talk — a list of resources for folks looking to educate themselves. 

Published here: https://twitter.com/fsf/status/447461813849190401 & https://status.fsf.org/notice/45714

http://bit.ly/journalists-of-color  — list of 140+ journalists of color (with links to personal blogs, etc) covering a variety of topics (via http://www.diversify.journalismwith.me/)

http://www.blackgirldangerous.org/ – queer, POC, atypically abled voices. The blog’s purpose is NOT to educate about privilege, but it is extremely informative if you are willing to think critically about yourself re: what the writers are saying and their experiences

http://www.crunkfeministcollective.com/ – “The Crunk Feminist Collective (CFC) will create a space of support and camaraderie for hip hop generation feminists of color, queer and straight, in the academy and without”

http://www.theatlantic.com/ta-nehisi-coates/ – madprime’s suggestion, Ta-Nehisi Coates, prompting this question. TNC regularly writes about the black male perspective & helps me understand the (or rather, his) African American perspective. TNC’s audience has a lot of white liberals, and he’s figured out how to make us listen (or me, at least!). (but sometimes he talks about a geeky thing or two like D&D http://www.theatlantic.com/entertainment/archive/2013/01/growing-up-in-the-caves-of-chaos/267107/ ;-) )

http://www.slate.com/articles/life/counter_narrative.html written by http://tressiemc.com/ 
http://melissaharrisperry.com/
http://satifice.com/
http://blogs.scientificamerican.com/urban-scientist/ – “A hip hop maven blogs on urban ecology, evolutionary biology & diversity in the sciences”
http://www.racialicious.com/ – “Racialicious is a blog about the intersection of race and pop culture.”
http://colorlines.com/ – “Colorlines is a daily news site where race matters, featuring award-winning investigative reporting and news analysis.”
https://www.takebackthetech.net/write/recent-posts – “Take Back the Tech! is a collaborative campaign to reclaim information and communication technologies (ICT) to end violence against women (VAW).”  (They have a feed of posts, but are not primarily a blog org.)

For expediency & to avoid mainstream aggregates & for more decentralization, use RSS! http://lzone.de/liferea/

http://planeteria.org/wfs/

http://empowermentors.org/page/view-page-slug/1/home

http://libcom.org/node/43011  – Not Your Mom’s Trans 101 – Asher

http://ovum.com/authors/nishant-shah/
Nishant Shah rec – http://dmlcentral.net/blog/nishant-shah/how-can-we-make-open-education-truly-open




April 03, 2014

XDG Summit: Day #4

During the wee hours of the morning, David Faure posted a new mime applications specification which will allow setting up per-desktop default applications: for example, watching films in GNOME Videos when in GNOME, but in DragonPlayer when in KDE. Up until now, this was implemented differently in at least KDE and GNOME, even to the point that GTK+ applications would use the GNOME default when running on a KDE desktop, and vice-versa.
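
Concretely, the idea is that implementations consult a desktop-prefixed file before the generic mimeapps.list; a hypothetical ~/.config/gnome-mimeapps.list (file name and contents illustrative, check the final spec) could contain:

[Default Applications]
video/x-matroska=totem.desktop

while a KDE-prefixed equivalent points the same MIME type at dragonplayer.desktop.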

This is made possible using XDG_CURRENT_DESKTOP as implemented in gdm by Lars. This environment variable will also allow implementing more flexible OnlyShowIn and NotShowIn desktop entry fields (especially for desktops like Unity, implemented on top of GNOME, or GNOME Classic, also implemented on top of GNOME) and desktop-specific GSettings/dconf configurations (again, very useful for GNOME Classic). The environment variable supports applying custom configuration in sequence (first GNOME Classic, then GNOME, in that example).
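To make that concrete, here is a rough sketch (a hypothetical helper, not code from gdm or any of the implementations mentioned here) of how a consumer might check the colon-separated XDG_CURRENT_DESKTOP list:

#include <stdbool.h>
#include <stdlib.h>
#include <string.h>

/* Return true if `wanted` appears in the colon-separated
 * XDG_CURRENT_DESKTOP list, e.g. "GNOME-Classic:GNOME". */
bool current_desktop_contains(const char *wanted)
{
    const char *env = getenv("XDG_CURRENT_DESKTOP");
    if (env == NULL)
        return false;

    char *copy = strdup(env);   /* strtok() modifies its argument */
    bool found = false;
    for (char *tok = strtok(copy, ":"); tok != NULL; tok = strtok(NULL, ":")) {
        if (strcmp(tok, wanted) == 0) {
            found = true;
            break;
        }
    }
    free(copy);
    return found;
}

A configuration lookup could then try current_desktop_contains("GNOME-Classic") before falling back to the plain GNOME settings, which is the sequencing described above.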

Today, Ryan and David discussed the desktop file cache, making it faster to access desktop file data without hitting scattered files. The partial implementation used a custom structure, but, after many kdbus discussions earlier in the week, Ryan came up with a format based on serialised GVariant, the same format as kdbus messages (but implementable without implementing a full GVariant parser).

We also spent quite a bit of time writing out requirements for a filesystem notification API that would support some of the unloved desktop use cases. Those use cases are currently supported by neither inotify nor fanotify.

That wraps up our face-to-face meeting. Ryan and David led a Lunch'n'Learn in the SUSE offices for engineers excited about better application integration in the desktops, irrespective of toolkit.

Many thanks to SUSE for the accommodation as well as hosting the meeting in sunny Nürnberg. Special thanks to Ludwig Nussel for the morning biscuits :)

April 02, 2014

Freedesktop Hackfest: Day #3

Wednesday, Mittwoch. Half of the hackfest has now passed, and we've started to move onto other discussion items that were on our to-do list.

We discussed icon-theme-related simplifications, especially for application developers and system integrators. As those changes would extend into bundle implementation (being pretty close to an exploded-tree bundle), we chose to postpone this discussion so that the full solution can include things like .service/.desktop merges and Intents/Implements desktop keys.

David Herrman helped me out with testing some Bluetooth hardware (which might have involved me trying to make Mario Strikers Charged work in a Wii emulator on my laptop ;)

We also discussed a full-fledged shared inhibition API, and we agreed that the best thing to do would be to come up with an API to implement at the desktop level. The desktop could then proxy that information to other session- and/or system-level implementations.

David Faure spent quite a bit of time cleaning up after my bad copy/pasted build system for the idle inhibit spec (I copied a Makefile with "-novalidate" as an option, and the XML file was full of typos and errors). He also fixed the KDE implementation of the idle inhibit to match the spec.

Finally, I spent a little bit of time getting kdbus working on my machine, as this seemed to trigger the infamous "hidden cursor bug" without fail on every boot. Currently wondering why gnome-shell isn't sending any events at all before doing a VT switch and back.

Due to the Lufthansa strike and the long journey times, tomorrow is going to be the last day of the hackfest for most of us.

Krita on Steam: Early Access is Now Open!


Kiki says Thank You!

Yesterday -- yes, April 1st, but it wasn't a joke! -- Krita went into Early Access mode on Steam!

The version of Krita on offer on Steam is Krita Gemini, which can switch between desktop mode and tablet mode, depending on whether you're using it on your tablet, your television set, your laptop or your desktop.

It's early access, so there are still bugs, and we'll provide frequent updates! And, of course, we're still working on all the fun Steam cloud integration features. Right now, we only have Windows builds for Steam. The goal is to publish the first full release on Windows and Linux for Steam in May, with all the integration with the Steam platform: Big Picture Mode, Workshop, and using the Steam cloud for settings and brushes. If you have ideas for things that would be great to see, the usual place is open for suggestions, or use the Steam community page.

The early access price is €22,99.

Krita is free software under the GNU General Public License v2+. All the work for Krita on Steam is open and public. But adding the Steam features, building and creating packages does take a lot of effort! And that's why there's a price tag on the binaries.

And, of course, the Early Access price is lower than the final price will be, so if you're on Steam, grab your chance - Krita Gemini's Steam store page is open for business right now!

Web-based IRC, if you’re at #lgm 2014

If you’re at LGM 2014 in Leipzig this week and having trouble accessing the #lgm IRC through the usual *ahem* channels, try this web-based client.

April 01, 2014

A proposal for Fedora’s website (considering Fedora.next)

(I’d like to apologize upfront, in that I meant to post this about a month or so ago. You might be aware that the Red Hat Summit is coming up in 2 weeks, and I’ve had a few odds and ends to take care of for that event that cut to the front of the line on my task list because of their imminent deadlines!)

So, Fedora.next is shaking Fedora up a bit – enough that our current fedoraproject.org website is going to need a bit of a gut reno to appropriately reflect the new world of Fedora! A few weeks back, Ryan Lerch and I had an informal brainstorming session about how to account for Fedora.next with fedoraproject.org. We came up with what we thought was a pretty workable concept, and met with Matthew Miller a few days later to see what he thought. Here’s the whiteboard of what we came up with:

Fedora.next whiteboard

Whoah, what’s going on here? Okay, let’s walk through this.

The Proposal

There’s several website components to this proposal. We’ll go through each one-by-one. We have some thumbnail mockups of each site to give you a vague idea of the kind of thing we’re thinking of – there are no larger, detailed mockups at this point, except that I think Ryan is working on a prototype of the Brochure site with the websites team, which I think he is planning to blog about soon.

A Fedora ‘Brochure’ Site

fedora-next_brochure

So first, let’s create a Fedora website to allow people to learn about Fedora – the Operating System, you know, the sausage we’re making here – and to be able to download it. Ryan and I started calling this the ‘brochure’ site because it’s really informational and aimed at people who have never heard of Fedora before and want to learn more about what it is. It’s also aimed at people who know what Fedora is and use it, but who simply want to download it and get on with their lives. It’s not primarily aimed at contributors.

Some of the sites we were inspired by when coming up with this idea:

  • getfirefox.org – simple and clean website introducing what Firefox is, with a prominent single download button, features list, and tour as well as links out to more information about add-ons, support, and Mozilla.
  • google.com/chrome – short and sweet introduction to Chrome, with a prominent single download button, screenshots, feature list, and some chromebook / chromecast stuff.
  • android.com – clean layout with basic information about Android. It also includes news updates, which the other two didn’t.

Do you think this makes sense to have? It might be cool if we could give it its own domain to separate it out from contributor-centric stuff.

A Fedora User Support Site

fedora-next_user-support

Right now, there are a lot of places within Fedora where you can get help. To a newbie, it’s not really clear what the best place to go is. We’ve had ask.fedoraproject.org set up for a while, and it has the potential to become a really useful knowledgebase of help for Fedora users – if only it had enough prominence for more folks to start using it. In this proposal, then, we’re elevating ask.fedoraproject.org to a more prominent position – and linking it strongly with the brochure site. We’ll skin it to match the new brochure site as well.

The idea for this site is to be targeted primarily at Fedora users. We could, however, have another instance set up for contributors to ask questions about how to do things within Fedora and get help.

Further on down the line, it’d be really sweet to have Fedora desktop integration with the support site, so you can ask questions and get notifications when answers to your questions are available from the desktop. Maybe?

What do you think?

The Fedora ‘Community Hub’

diagram_communityhub

Okay, so here’s where things get more complex. Full disclosure, too – we’ve tried doing something like this a couple of times now and honestly failed to get wide adoption and also to expand the functionality across Fedora’s disciplines. Hopefully, third time’s the charm! :) Perhaps there is something inherent in this idea that is fatally flawed, though. What do you think?

Well, let’s talk through it, first. It’s inspired by a lot of currently-popular social media sites; we’d really like it to be a place that Fedora contributors would feel compelled to visit daily and refresh throughout the day, depending on how actively they are working on Fedora that given day. (Whether or not we’ll achieve that, of course, we really can’t say or guarantee.) Here’s some of the features this hub would have:

  • Logged out mode has new user signup flow: Similar to how Twitter and Facebook operate when you’re logged out of them in a web browser, we were thinking that if you’re not logged into the hub, we don’t know what content would be the most appropriate to show you, so rather than risk a bad experience, let’s just prompt you to log in. This also gives us a nice clean space to promote new account signups. From the logged-out page, we could create a guided Fedora Account System (FAS) account creation flow (that uses FAS as a backend, of course) to help ease users into becoming contributors.
  • FAS integration: Speaking of FAS, we were thinking it might be nice if some of the basic account management tasks you do in FAS today were available from this site. E.g., change your password, email address, sign up for groups, that sort of thing. Maybe we could build out a groups UI and make the FAS groups we have more social (well, the groups that serve as more than just an ACL for something else. Maybe we could filter out the ACL groups?)
  • Messaging / notification tray: Another inspiration from various social media sites, we were thinking about having a messaging / notification tray along the top of every page in the hub. If there is new content available in one of the streams you’re subscribed to, or maybe if someone has sent you a message or you have a build that’s finished or what not, it’ll pop up as an additional item in your messaging tray. You can see in the ‘logged in’ thumbnail mockup above that the messaging tray has been expanded and shows a bunch of items. Click on any given item and you’ll be taken to the full details for that item, whether it’s a finished build in koji, a new update in bodhi to a package you watch, or a Fedora badge you’ve just been awarded.
  • Reddit-like spaces: Fedora is a really big project. We’re big enough and have enough teams and efforts and things going on that I think a global nav to encompass all that we do would be too unwieldy and out-of-date a few days after it was created. We’re more fluid than that. So I really like the idea of how Reddit organizes space within itself – users create spaces (‘subreddits’) that operate roughly under their own governance, and can customize those spaces (albeit in somewhat limited ways, a logo and custom banner I think.) New spaces can be created on an as-needed basis. Every space is basically a forum; each post has threaded comments and there is a voting system both for posts and for comments so the best content bubbles up to the top. We had the idea that we could organize the community hub in this fluid fashion – prepopulate the system with some established spaces, like for the Fedora Design team, Infrastructure team, and maybe the working groups and Fedora.next and projects like that. We’d allow Fedora community members to create and manage their own spaces too, and allow them to customize those spaces to their teams’ needs.

Let’s talk about that last point a little bit more in-depth, since it’s really core to how we were thinking this community hub could be organized.

Sub-hub-hubs and a bottle of rum!

diagram_communityhub_subhubs

Ideally, the spaces for various Fedora teams and projects on the community hub wouldn’t be limited to forums – we could build out custom widgets that each space could make use of, and tie those widgets into data coming in from fedmsg, for example.

You can see some (very, very rough) thumbnail mockups of what different hubs might look like above. Let me walk you through my brain using these thumbnails as a guide:

  • We’ve got an Ambassadors hub that shows a map of current ambassador activity, a swag gallery (lower right) and some discussions (courtesy Hyperkitty?) in the lower left…
  • There’s a Development hub with a list of builds and updates along the right column and devel-list discussions as well as announcements in the main content area…
  • Of course there is a hub for the Design Team with a widget to show recent mockups / designs, design-team list discussions, an asset search box, and a list of recent tickets along the right…
  • Finally, a Marketing hub with recent articles about Fedora from the press aggregated on top, with marketing-list discussions on the bottom, and recent Fedora magazine post along the right sidebar.

The specific content I walked you through in each thumbnail mockup above? It could all be totally bogus. It is a lot of work to figure out, for each team and project, what the right content to offer would be, how to place it, and how to design widgets to best hold it. To a certain extent, we’d like each subhub moderator/admin to create the default content offering, and maybe even allow users to customize further on top of those defaults for their individual needs. We do have to offer some basic tools and feeds, though, to make any of that possible.

You can see in our initial whiteboard that we walked through with Matthew (in the lower left corner) a list of the ways that different teams / efforts have overloaded the wiki to achieve what they needed:

  • development docs
  • personal pages
  • SIG pages
  • strategy
  • workflow (e.g., QA, design, rel-eng)
  • packaging policy
  • meeting minutes
  • team organization
  • asset management (e.g., the design team’s SVGs and PDFs!)
  • event management (pre-Flock, sign up for FUDcon in this wiki table!)
  • standard test procedures
  • UI specs (like this)
  • user documentation
  • community policy

Maybe some of these things would be better served by mechanisms made for working with them? Maybe we could design and build widgets or tie-ins to pre-existing infrastructure (like meetbot for meeting minutes, as an example) that would help manage these better than a wiki?

Getting flashbacks of early-2000s web portal design? Yeh. We’re not trying to go there, honestly. Think of a model more similar to Facebook’s groups and apps, if you use Facebook.

On a per-user basis, the user might have a favorites bar or nav bar along the top prepopulated with the subhubs affiliated with the Fedora groups they are a member of. The user could customize this, to remove the subhubs they don’t want in their navbar and to add additional ones.

Does any of this make sense?

Well, what do you think? Ryan and I met with the Fedora websites team a few weeks ago or so to talk through the idea after working up this proposal with Matthew Miller. They were very supportive of the idea. So, we have had some designer-crazy checks up to this point; it can take a village to suss out designer-crazy sometimes, though, so do let us know your thoughts on this!

If need be, we can modify the plan, come up with a new one, or start blazing forward towards making this one a reality. We’ll need your feedback to figure out what to do. So, what do you think?

Addendum

I found this lurking in my Fedora People ~/public_html from 2009 – we’ve thought about this kind of split of the website before, and even thought about hubs (although not necessarily a platform for fluid hubs as in this proposal):

fedora-model-1

By the way! The scribbly font I used in the mockups is called ‘Redacted’ – it’s an Open Font License font that is really handy for when you want to whip out some quick mockups and don’t want to bother with lorem ipsum text. You can snag it from this link.

Freedesktop Summit: Day #2

Today, Ryan carried on with writing the updated specification for startup notification.

David Faure managed to get Freedesktop.org specs updated on the website (thanks to Vincent Untz for some chmod'ing), and removed a number of unneeded items in the desktop file specification, with help from Jérôme.

I fixed a number of small bugs in shared-mime-info, as well as preparing for an 8-hour train ride.

Lars experimented with techniques to achieve a high score at 2048, as well as discussing various specifications, such as the possible addition of an XDG_CURRENT_DESKTOP envvar. That last suggestion descended into a full-room eye-rolling session, especially when xdg-open code was shown.

XDG Hackfest: Day #1

I'm in Nürnberg this week for the Freedesktop Hackfest, aka the XDG Summit, aka the XDG Hackfest aka... :)

We started today with discussions about desktop actions: whether to show specific "Edit" or "Share" sub-menus, and how to implement them. We decided that this could be done through specific desktop keys which a file manager could use. It wasn't thought generally useful enough to require a specification for now.

The morning stretched into a discussion of "splash screens". A desktop implementor running on low-end hardware is interested in having a placeholder window show up as soon as possible, in some cases even before the application has linked and the toolkit is available. The discussion then descended into somewhat edge-case territory, such as text editors launching either new windows or new tabs depending on a number of variables.

Specific implementation options were discussed after a nice burrito lunch. We've decided that the existing X11 startup notification would be ported to D-Bus, using signals instead of X messages. Most desktop shells would support both versions for a while. Wayland clients that want startup notification would be required to use the D-Bus version of the specification. In parallel, we would start passing workspace information along with the DESKTOP_STARTUP_ID envvar/platform data.

Jérôme, David and I cleared up a few bugs in shared-mime-info towards the end of the day.

Many thanks to SUSE for the organisation, and accommodation sponsorship.

Update: Fixed a typo

Release Notes: Mar 2014

What’s the point of releasing open-source code when nobody knows about it? In “Release Notes” I give a round-up of recent open-source activities.

A slightly calmer month; nonetheless, here are some things you might enjoy:

 

angular-debounce (New, github)

Tiny debouncing function for Angular.JS. Debouncing is a form of rate-limiting: it prevents rapid-firing of events. You can use this to throttle calls to an autocomplete API: call a function multiple times and it won’t get called more than once during the time interval you specify.

One distinct little feature I added is the ability to flush the debounce. Suppose you are periodically sending a user’s input to the backend as it is being entered. You’d throttle that with debounce, but at the end of the process you’ll want to send it out immediately, and only if it’s actually needed. The flush method does exactly that.
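angular-debounce itself is JavaScript, but the flush semantics are easy to sketch in a language-neutral way. Here is a rough, hypothetical C version, polling-based for simplicity (the real module integrates with Angular’s timers instead):

#include <stdbool.h>
#include <time.h>

typedef struct {
    void (*fn)(void);      /* the rate-limited function */
    double wait_s;         /* required quiet period, in seconds */
    struct timespec last;  /* time of the most recent call attempt */
    bool pending;          /* true if a call is waiting to fire */
} Debounce;

static double seconds_since(const struct timespec *t)
{
    struct timespec now;
    clock_gettime(CLOCK_MONOTONIC, &now);
    return (now.tv_sec - t->tv_sec) + (now.tv_nsec - t->tv_nsec) / 1e9;
}

/* Call this instead of fn(); it only records the attempt. */
void debounce_call(Debounce *d)
{
    clock_gettime(CLOCK_MONOTONIC, &d->last);
    d->pending = true;
}

/* Run periodically; fires fn() once wait_s seconds have passed
 * without a new call attempt. */
void debounce_poll(Debounce *d)
{
    if (d->pending && seconds_since(&d->last) >= d->wait_s) {
        d->pending = false;
        d->fn();
    }
}

/* Fire immediately, but only if a call is actually pending. */
void debounce_flush(Debounce *d)
{
    if (d->pending) {
        d->pending = false;
        d->fn();
    }
}

The point to notice is that debounce_flush() fires only while a call is still pending, which matches the "send it out immediately, and only if it’s actually needed" behaviour described above.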

A second benefit of using an Angular.JS implementation of debounce is that it integrates with the event loop. A consequence of that is that the framework for E2E tests (Protractor) is aware of the debouncing and can take it into account.

 

angular-gettext (Updated, website, original announcement)

A couple of small feature additions to angular-gettext, but nothing shocking. I’m planning a bigger update to the documentation website, which will describe most of these.

 

ensure-schema (New, github)

Working with a NoSQL store (like MongoDB) is really refreshing in that it frees you from having to manage database schemas. You really feel this pain when you go back to something like PostgreSQL.

The ensure-schema module is a very early work-in-progress module to lessen some of that pain. You specify a schema in code and the module ensures that your database will be in that state (pretty much what it says on the box).

var schema = function () {
    // Declare the tables and fields the database must contain.
    this.table("values", function () {
        this.field('id', 'integer', { primary: true });
        this.field('value', 'integer', { default: 3 });
    });
    this.table("people", function () {
        this.field('id', 'integer', { primary: true });
        this.field('first_name', 'text');
        this.field('last_name', 'text');
        // Composite index across the two name fields.
        this.index('uniquenameidx', ['first_name', 'last_name'], true);
    });
};
// Bring the connected database in line with the schema.
ensureSchema('postgresql', db, schema, function (err) {
    // Do things
});

It supports PostgreSQL and SQLite (for now). One thing I specifically do not try to do is database abstractions: there are other tools for that. This means you’ll have to write specific schemas for each storage type.

There’s a good reason for that: you should pick your storage type based on its strengths and weaknesses, and once you’ve picked one, there’s no reason not to use all of its capabilities.

This module is being worked out in the context of the project where I use it, so things could change.

 

Testing with Angular.JS (New, article)

Earlier last month I gave a presentation for the Belgian Angular.JS Meetup group:

ngmeetup-testing.001

The slides from this presentation are now available as an annotated article. You can read it over here.

March 31, 2014

Magic Lantern @ LGM in Leipzig

… on Wednesday, April 2nd. Their talk will begin at 18:30 local time in the New Paulinum of the University of Leipzig.

Magic Lantern was started in 2009 by Trammell Hudson to bring professional video recording and advanced photographic features to Canon EOS DSLR cameras.

The project and its feature set expanded: custom video overlays, raw video recording, time-lapse video, manual audio control and more now belong to it. With these tools Magic Lantern greatly improved usability in many areas over the bare Canon firmware, and it is now used daily by many professional photographers, journalists and film makers.

March 29, 2014

Happy Document Freedom Day! Have some SVG tools

We dusted off the cover of issue 1.2, for which we had laboriously traced a set of illustrations taken from the Lello Universal encyclopedia, scanned and published by El Bibliomata from the Sevilla Faculty library — be sure to see their other sets!

To mark Document Freedom Day, we’re releasing the source file for all the vector traced images — in SVG, of course!

dfd-svg-toolkit

You can get the SVG file here. Happy Document Freedom Day!

March 28, 2014

Project Gooseberry, why it matters

I can’t express enough how important this Gooseberry project is, for me personally and for a lot of people out there. There are so many solutions for urgent issues coming together in Gooseberry – it is really mind-blowing sometimes.

This is what Gooseberry is for me:

- Open Movies as a Blender development model.
Open Source software works very well as in-house software, as an ongoing, flexible development process. This is the opposite of commercial programs, which have a more distinct product life cycle.
Imagine: Gooseberry is going to be an 18-month animation-studio simulation! With so many wonderful technical-creative challenges to solve, and we can all be part of it.

- Raise the bar – make a feature animation film.
Every animator or artist who has done a couple of shorts understands the excitement of the prospect of doing a feature animation film one day. It’s really a different medium; it’s a new technical and creative challenge – risky but rewarding. It’s also a medium that brings you a new and massive audience. This would be the ultimate advertisement for Blender, as well as for FOSS in general.

- Investigate using Cloud services and features for open source projects
Software is moving into the cloud; Adobe and Autodesk are working hard on it. They present this as a “benefit for the users”, but they actually just pull up an Iron Curtain to safely hide their software behind. No more piracy, no reverse engineering… total control!
I don’t want to wait for us to lose this fight. We can find out for ourselves what the real user benefits are, but in openness and by truly respecting user freedom. The Gooseberry teams will use the cloud for sharing and collaboration. With you too!

- Building the world’s largest free/open 3d content & education repository
We shouldn’t underestimate how much importance the open movies and the open game have had for education and training. Not only for the free data, but especially for the tutorials, the making-of videos, and the training DVDs we made with these teams. This massive dataset should be kept around, renewed, updated and working.

- New business model for Blender Institute and Foundation
We can’t keep selling paper and plastic with open/free data forever… that did a lot for us; it helped Blender grow, hire developers and do big projects. But the revenues are going down. A pile of DVDs is nice on your bookshelf, but not to actually use. Online sharing – in the cloud – is a much better solution for the data.
I believe in a future of subscription models for cool content/training/data/services. Especially if that enables us to become a media producer ourselves!

- Occupy Bay Area, Occupy Hollywood?
There’s a real growing unrest out there about how a few greedy people control this business -  making their billions – while others lose jobs in the same week their company has won an Oscar. Yep, Mark Z. buys another toy for billions, which he makes by selling our digital lives. And we nerds just line up for yet another Marvel super hero movie again. Meanwhile the powers that be prepare for a separated internet – with fast and “free” commercial channels – and a slow, expensive one for the remains of the open internet we love.

I’m not fit for politics, nor do I feel much like protesting or mud slinging. I’m a maker – I’m interested in finding solutions together and doing experiments with taking back control over our digital lives, our media, and especially get back ownership as creative people again – and make a decent living with it.

So that’s Gooseberry for me. An experiment, but with potential impact!

I know there’s some skepticism out there, about the project concept and about the slow funding start. But well – we’re learning, and we’re developing well to get the message and the website to work optimally. It’s also inventing something new, and that you can only do by trying it.

The key is that I have a vision, and the guts to live by that vision. I’m not led by polls, nor by common opinions or what others think might be more successful. I’m also not a billionaire. Not a movie star. It’s just me :) And one thing is for sure: I cannot do this alone.

http://cloud.blender.org/gooseberry/

Thanks,

Ton Roosendaal
Chairman Blender Foundation

Ton Roosendaal in 1992, with his first SGI.

Krita 2.9 (pre-alpha): Updated Fill Tool!


Preface

The Fill Tool has been present in Krita since the ancient days. It was first implemented back in 2004, and since then there have been only minor changes to the algorithm it uses. Now we are glad to announce a major update!

The old implementation used a somewhat optimized, but still conventional, flood-fill algorithm that iterated through all the pixels recursively, using a huge array as a map to store which pixels had been visited and which had not. It had two obvious drawbacks: it ate your memory (~100MiB for the map of a 10k x 10k image) and it was rather slow. The new algorithm is free from these disadvantages!

Scanline Flood Fill Algorithm

The new algorithm uses larger entities than single pixels: "scanlines", each of which is effectively a chunk of a row of the image. The filling process looks like a game: several scanlines travel through the image, checking or filling the pixel data. Each scanline can travel in either the upward or the downward direction. When two scanlines of opposite directions meet, they eat each other and both disappear! The process continues while there is at least one scanline alive :)

The rules for breeding and eating scanlines are a bit complicated, but they guarantee that not a single pixel will ever be visited twice! And that is without keeping any map of visited pixels, and therefore without hogging your RAM! The experiments we conducted showed that flood-filling a 10k by 10k pixel image needs only about 1MiB of memory. Just compare that to the 100MiB demanded by the conventional algorithm!
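To make the span-based idea concrete, here is a minimal, textbook scanline fill over an 8-bit single-channel image. This is only a sketch: Krita's actual implementation, with its bidirectional scanlines and the breeding/eating rules, is more sophisticated and avoids revisiting pixels entirely.

#include <stdint.h>
#include <stdlib.h>

typedef struct { int x, y; } Seed;

void scanline_fill(uint8_t *img, int w, int h, int sx, int sy, uint8_t nv)
{
    uint8_t ov = img[sy * w + sx];   /* color being replaced */
    if (ov == nv)
        return;

    int cap = 64, top = 0;
    Seed *st = malloc(cap * sizeof *st);
    st[top++] = (Seed){ sx, sy };

    while (top > 0) {
        Seed s = st[--top];
        if (img[s.y * w + s.x] != ov)
            continue;                 /* already filled via another run */

        /* Expand the seed into a full horizontal run and fill it. */
        int lx = s.x, rx = s.x;
        while (lx > 0 && img[s.y * w + lx - 1] == ov) lx--;
        while (rx < w - 1 && img[s.y * w + rx + 1] == ov) rx++;
        for (int x = lx; x <= rx; x++)
            img[s.y * w + x] = nv;

        /* Seed the rows above and below: one seed per matching run. */
        for (int dy = -1; dy <= 1; dy += 2) {
            int y = s.y + dy;
            if (y < 0 || y >= h)
                continue;
            for (int x = lx; x <= rx; x++) {
                if (img[y * w + x] != ov)
                    continue;
                if (top == cap) {
                    cap *= 2;
                    st = realloc(st, cap * sizeof *st);
                }
                st[top++] = (Seed){ x, y };
                while (x <= rx && img[y * w + x] == ov)
                    x++;              /* skip the rest of this run */
            }
        }
    }
    free(st);
}

The stack holds one entry per horizontal run rather than one flag per pixel, which is where the memory savings described above come from.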

Real Performance Tests

The tests showed that the performance of the new Fill Tool greatly depends on whether the user needs complex compositing or not. That is why we introduced two new modes for the Fill Tool:
  • Advanced Mode — the mode supporting all the features of the tool, such as applying a selection, rendering the result with a user-supplied composite op, and growing or feathering the selection area. This mode works about 1.4 times faster than the old implementation;

  • Fast Mode — just fill the area with color! No compositing or selections are supported, but thanks to these limitations it achieves up to 2 times the performance of the Advanced Mode, which makes it almost 3 times faster than the old conventional algorithm!
And, of course, these speed benefits are nothing in comparison to the memory savings we achieve!

Conclusion

The new Flood Fill algorithm is already present in Krita master and is available to all users who have Krita Lime [0] installed! Just update and have fun with your painting!

And yes, this work would be impossible without the support from Krita Foundation! Become a sponsor of the Krita Project and help us move the painting world further!

Monthly Donation through Paypal

Krita Development Funding

One-time donation through Paypal

 
[0] -  http://dimula73.blogspot.ru/2013/05/krita-lime-ppa-always-fresh-versions.html

PS:
Thanks to Timothée Giet for the nice title image!

    Krita 2.8.1 Released

    Hot on the heels of Krita 2.8.0, we're releasing Krita 2.8.1! This release contains a lot of bug fixes and improved support for the Surface Pro 2 tablet on Windows.

    • Support for Surface Pro 2 on Windows
    • Fixed several memory leaks
    • Save single-layer CMYK images correctly to PSD
    • BUG:331805: Do not let the selection grow bigger than the image on invert
    • BUG:329945: Fix the unsharp mask filter
    • Fix tablet support on OSX
    • Fix convolution operations when the resulting alpha is 0
    • BUG:332022: Fix mirror mode in color smudge and filter ops
    • Make the warp tool handles less obtrusive
    • BUG:331758: Make the transform tool scale filter selector work
    • BUG:332070: Fix crash when selecting a template with stylus double-click
    • BUG:331950: Set the document modified status when changing layer properties
    • BUG:331890: Fix loading of multi-layered PSD files, including 16-bit ones
    • BUG:331708: Fix crash in undo/redo of transformations
    • BUG:331759: Improve performance of the OpenGL canvas in some cases
    • Make it possible to use the OpenGL canvas on more GPU/driver combinations
    • Fix crash in the pixelize filter
    • Fix the emboss filter to apply to the whole image
    • Fix crash when loading an image that has a broken colorspace ID
    • BUG:331702: Fix crash when loading a 16 bit/channel PSD image
    • Fix crash in the oilpaint filter
    • Fix crash when applying a gradient
    • Fix crash when trying to create a selection in the artistic text tool
    • Fix not being able to add new canvas input shortcuts
    • Improve the crash reporter on Windows

    Download Krita 2.8.1 for Windows from the Krita Studio website, or update on Linux using your package manager.

    Krita 2.8.2 will contain improved support for UC-Logic/evdev-based tablets on Linux.

     

    March 27, 2014

    Email is not private

    Microsoft is in trouble this week -- someone discovered Microsoft read a user's Hotmail email as part of an internal leak investigation (more info here: Microsoft frisked blogger's Hotmail inbox, IM chat to hunt Windows 8 leaker, court told). And that led The Verge to publish the alarming news that it's not just Microsoft -- any company that handles your mail can also look at the contents: "Free email also means someone else is hosting it; they own the servers, and there's no legal or technical safeguard to keep them from looking at what's inside."

    Well, yeah. That's true of any email system -- not just free webmail like Hotmail or Gmail. I was lucky enough to learn that lesson early.

    I was a high school student in the midst of college application angst. The physics department at the local university had generously given me an account on their Unix PDP-11 since I'd taken a few physics classes there.

    I had just sent off some sort of long, angst-y email message to a friend at another local college, laying my soul bare, worrying about my college applications and life choices and who I was going to be for the rest of my life. You know, all that important earth-shattering stuff you worry about when you're that age, when you're sure that any wrong choice will ruin the whole rest of your life forever.

    And then, fiddling around on the Unix system after sending my angsty mail, I had some sort of technical question, something I couldn't figure out from the man pages, and I sent off a quick question to the same college friend.

    A couple of minutes later, I had new mail. From root. (For non-Unix users, root is the account of the system administrator: the person in charge of running the computer.) The mail read:

    Just ask root. He knows all!
    followed by a clear, concise answer to my technical question.

    Great! ... except I hadn't asked root. I had asked my friend at a college across town.

    When I got the email from root, it shook me up. His response to the short technical question was just what I needed ... but if he'd read my question, did it mean he'd also read the long soul-baring message I'd sent just minutes earlier? Was he the sort of snoop who spent his time reading all the mail passing through the system? I wouldn't have thought so, but ...

    I didn't ask; I wasn't sure I wanted to know. Lesson learned. Email isn't private. Root (or maybe anyone else with enough knowledge) can read your email.

    Maybe five years later, I was a systems administrator on a Sun network, and I found out what must have happened. Turns out, when you're a sysadmin, sometimes you see things like that without intending to. Something goes wrong with the email system, and you're trying to fix it, and there's a spool directory full of files with randomized names, and you're checking on which ones are old and which are recent, and what has and hasn't gotten sent ... and some of those files have content that includes the bodies of email messages. And sometimes you see part of what's in them. You're not trying to snoop. You don't sit there and read the full content of what your users are emailing. (For one thing, you don't have time, since typically this happens when you're madly trying to fix a critical email problem.) But sometimes you do see snippets, even if you're not trying to. I suspect that's probably what happened when "root" replied to my message.

    And, of course, a snoopy and unethical system administrator who really wanted to invade his users' privacy could easily read everything passing through the system. I doubt that happened on the college system where I had an account, and I certainly didn't do it when I was a sysadmin. But it could happen.

    The lesson is that email, if you don't encrypt it, isn't private. Think of email as being like a postcard. You don't expect Post Office employees to read what's written on the postcard -- generally they have better things to do -- but there are dozens of people who handle your postcard as it gets delivered who could read it if they wanted to.

    As the Verge article says, "Peeking into your clients' inbox is bad form, but it's perfectly legal."

    Of course, none of this excuses Microsoft's deliberately reading Hotmail mailboxes. It is bad form, and amid the outcry Microsoft has changed its Hotmail snooping policies somewhat (saying they'll only snoop deliberately in certain cases).

    But the lesson for users is: if you're writing anything private, anything you don't want other people to read ... don't put it on a postcard. Or in unencrypted email.

    Development snapshots

    The latest testing snapshots for the development version of Synfig Studio are available for download. ...

    More Tablets Supported by Krita

    The brand new graphics tablet support code Dmitry Kazakov has worked on for Krita 2.8 is bearing fruit. In the past week, Dmitry has improved and fixed support for many UC-Logic-based tablets on Linux. These tablets use the evdev driver, but they all use the driver in a different way! Some of them are very strange; one even reports itself as a keyboard to X11.

    Examples of these tablets are Monoprice, Bosto, Huion and Genius. Check out his blog for the details! This is on Linux, of course, but Dmitry has also fixed support for the stylus and eraser of Microsoft's Surface tablets.

    This work has been made possible by the Krita Foundation using the development fund. Without sponsorship, Krita wouldn't see such rapid improvement, so if you use Krita, consider subscribing to the development fund!

    Monthly Donation through Paypal

    Krita Development Funding

    One-time donation through Paypal

    Krita: new extended tablet support!

    Traditionally, Krita has supported various types of tablets on Linux, but this support was limited to the tablets handled by the wacom driver. The list of supported hardware included (obviously) all the Wacom-branded tablets, plus some rare species that tried to resemble the branded devices. That was the best one could get from Qt's tablet code. From now on, Krita can also work with all the devices handled by the evdev X11 driver!



    The list of devices supported by evdev is really vast. It includes such not-very-expensive brands as Monoprice, Bosto, Huion and Genius. So far we have tested Krita on two devices: the Bosto 19MA [0] and the Genius G-Pen 560. Both work fine with Krita and have really nice pressure support! Right now we are also working on support for the Huion tablet supplied to the Krita project by huiontablet.com!

    Bosto kingtee 19MA now works with Krita!
    So if you have a tablet device which used to refuse to work with Krita on Linux, test it now!

    I also did a small cross-test of my old Wacom Graphire2 and the Genius G-Pen 560. What I noticed is that the lines generated by the Wacom tablet are more stable and smooth, whereas the lines from the Genius tablet are a bit shaky. I don't know the exact reason for it, but I have a feeling that Wacom does some internal filtering of the coordinates generated by the hardware, which gives us better lines. Anyway, even if you own this Genius device, Krita allows you to work around the issue: just enable the Weighted Smoothing painting algorithm and it will produce results just like Wacom's!

    And if you would like to know the technical details about why Qt's tablet code supports wacom-driver-based tablets only...

    Qt's code doesn't fully support the interface of the XDeviceMotionEvent. Qt expects the values of all six axes of the tablet to be sent to the client in each event. This is true for the wacom driver, but it is not guaranteed to be true for other XInput devices. A motion event may also deliver only the values that changed since the last event. It even has special fields for that:

    typedef struct
        {
        /* ... skipped ... */
        unsigned char axes_count;
        unsigned char first_axis;
        int           axis_data[6];
        } XDeviceMotionEvent;

    The axes_count field tells us how many axes are really delivered to the client, and first_axis tells us which absolute axis the first element of axis_data corresponds to.

    Now Krita can handle that effectively, as well as recognize which sensor each axis is assigned to! [1]
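    As a rough illustration (a hypothetical helper, not Krita's actual code), handling this correctly means merging each partial event into a persistent per-axis state:

    #include <X11/extensions/XInput.h>

    #define MAX_AXES 6

    /* Last known value of every axis, kept across events. */
    static int axis_state[MAX_AXES];

    static void merge_motion_event(const XDeviceMotionEvent *ev)
    {
        /* Only axes_count values are present, starting at first_axis;
         * all other axes keep their previously reported values. */
        for (int i = 0; i < ev->axes_count; i++) {
            int axis = ev->first_axis + i;
            if (axis < MAX_AXES)
                axis_state[axis] = ev->axis_data[i];
        }
    }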


    [0] - https://groups.google.com/forum/#!topic/bosto-user-group/sL_O4VoopVk
    [1] - unlike Gimp's way, where the user must manually investigate which sensor each axis is connected to ;P

    Open standards allow the unexpected

    surpriseUmbrellas

    One of the best things about open standards is their ability to surprise. A proprietary standard is designed with one purpose in mind. And, because only authorized parties have access to the standard’s specification, those designed purposes are generally where its utility stops. Because open standards have publicly-available specifications, anyone who’s interested can develop new tools and purposes.

    Here are a couple of great examples of people using SVG's open spec to do the unexpected.

    Sozi: A fantastic tool for presentations. An extension to Inkscape, it lets users build zooming presentations, using the capabilities of SVG animation.

    Design with Git: Visual version control for SVGs. This project uses SVG’s readable, comparable source code to let designers track the history of their work.

    Read the rest of our Document Freedom Day series on SVG:

    Celebrating Document Freedom Day, celebrating our favourite open standard
    Working together, developing open standards
    SVG or: How we learned to stop worrying and love document freedom

    March 26, 2014

    My GNOME 3.12 in numbers

    1 new GNOME Videos, 1 updated Bluetooth panel, 2 new thumbnailers, 9 grilo sources, and 1 major UPower rework.

    I'm obviously very attached to the GNOME Videos UI changes, the first major UI rework in its 12-year existence.


    GNOME Videos watching itself

    SVG or: How we learned to stop worrying and love document freedom

    130-libregraphicsmag_issue11

    SVG was one of the brightest revelations when we switched to libre design tools; in fact, it’s one of the major reasons that made us switch from the Adobe toolchain we learned in college to a 100% F/LOSS workflow for design.

    After years of working with closed formats, SVG seemed like a dream: it’s viewable and editable in a wide set of free and proprietary tools, it’s based on a familiar XML syntax, and can even be viewed on a modern web browser.

    How the death of FreeHand shows why open formats matter

    Back in 2001, when we started our communication design studies, Macromedia FreeHand was the most widely used vector graphics software. At the time, with one single iMac G3 in the classroom, we took turns, in pairs, designing our very first double-page layout using FreeHand. That was the first assignment using this tool, and the one that got us into computer-assisted design.

    FreeHand was a faithful companion during our five years of studies. In 2005, Macromedia was bought by Adobe. By the time we graduated, in 2006, it became clear that the FreeHand days would soon be over. Illustrator, Adobe's main vector graphics software, was the only cool alternative. So if you wanted to continue working in vector graphics, you knew what you were supposed to do: learn Illustrator, even if you did not like that tool.

    Looking back on five years of work, sadly enclosed in FreeHand's proprietary format, we knew that moving to another proprietary tool would mean going through the exact same process again in a few years. No other software could open FH files, and soon FreeHand would be incompatible with the most recent operating systems.

    We didn’t want the situation to repeat itself, and we didn’t want our activity to be dependent on the whims of corporations that we don’t have any relationship with. That was when we searched and started to learn about Standard formats and Free Software. We found SVG and we knew it was the right format for our vector work. So we installed Inkscape and began our quest in designing with F/LOSS.

    Reverse Engineers

    The closed black boxes of proprietary formats shackle designers to specific tools, and force them into certain workflows that depend on those tools. And because tools like FreeHand can quickly and unexpectedly reach their end of life, we end up with many lost, undocumented formats, like old scrolls written in undecipherable languages whose message we'll probably never be able to read. However, in the same way old scrolls invite crafty cryptographers to devise ways to decipher them, there are crafty hackers tirelessly working to release these formats from their orphan state by reverse-engineering them.

    One beautiful example of this is Valek Filippov's and Fridrich Strba's work in reverse-engineering the FreeHand file format for the LibreOffice project. The mostly invisible nature of this kind of work makes it even more important to draw attention to it; Valek and Fridrich have been busy with this endeavour for years, and we're crossing our fingers waiting for the day when we can finally rescue our old work from its still-impenetrable black box.

    Possibilities for an open format

    And what can we do with an open format? So much! During the last few years, we’ve done many promising experiments with SVG. We tried our hand at SVG business card generators, using the sed tool to find and replace text based on CSV files; we’ve set up automated command-line vector workflows using svg2pdf for auto-export and pdftk for post-processing; and we’ve been sending SVG files to our clients that they can open directly in their browser, without the need for specialised tools or using clunky interchange formats like PDF.

    All this is only possible because SVG is an open, documented, standard format. There are hardly any excuses to keep on using closed formats that limit our intentions and force us to use tools we might not want or even need. Open formats are empowering, and every inch of progress the world makes in making open formats better and more widespread helps us all grow.

    Read the rest of our Document Freedom Day series on SVG:

    Celebrating Document Freedom Day, celebrating our favourite open standard

    Working together, developing open standards

    Open standards allow the unexpected

    GNOME Software on Ubuntu (II)

    So I did a bit more hacking on PackageKit, appstream-glib and gnome-software last night. We’ve now got screenshots from Debian (which are not very good) and long application descriptions from the package descriptions (which are also not very good). It works well enough now, although you need PackageKit from master as well as appstream-glib and gnome-software.

    Three screenshots of GNOME Software running on Ubuntu Saucy (captured 2014-03-26)

    This is my last day of hacking on the Ubuntu version, but I’m hopeful other people can take what I’ve done and continue to polish the application so it works as well as it does on Fedora. Tasks left to do include:

    • Get aptcc to honour the DOWNLOADED filter flag so we can show applications in the ‘Updates’ pane
    • Get aptcc to respect the APPLICATION filter to speed up getting the installed list by an order of magnitude
    • Get gnome-software (or appstream-glib) to use the system stock icons rather than the shitty ones shipped in the app-install-data package
    • Find a way to load localized names and descriptions from the app-install-data gettext archive and add support to appstream-glib. You’ll likely need to call dgettext(), bindtextdomain() and bind_textdomain_codeset() (see the sketch after this list)
    • Find out how to populate the ‘quality’ stars in gnome-software, which might actually mean adding more data to the app-install desktop files. This is the kind of data we need.
    • Find out why aptcc sometimes includes the package summary in the licence detail position
    • Improve the package-description-to-human-readable-text code to preserve bullet points and convert them to a UTF-8 dot
    • Get the systemd offline-updates code working, which is completely untested
    • Find out why aptcc seems to use a SHA1 hash for the repo name (e.g. pkcon repo-list)
    • Find out why aptcc does not set the data part of the package-id to be prefixed with installed: for installed packages
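    For the gettext item above, the lookup might end up looking roughly like this; this is a sketch only, and the "app-install-data" domain name and the locale directory are assumptions, not verified paths:

    #include <libintl.h>

    /* Translate an app-install-data string; the domain name and the
     * /usr/share/locale directory here are guesses, not verified. */
    static const char *localized_app_string(const char *msgid)
    {
        static int initialised = 0;
        if (!initialised) {
            bindtextdomain("app-install-data", "/usr/share/locale");
            bind_textdomain_codeset("app-install-data", "UTF-8");
            initialised = 1;
        }
        return dgettext("app-install-data", msgid);
    }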

    If you can help with any of this, please grab me on #PackageKit on freenode.

    Interview with Eric Lee

    Would you like to tell us something about yourself?

     

    I am the Animation Director for a post-production facility in the Midwest, Steelehouse Productions. I have been there for over 11 years doing all sorts of things involving art and animation, including 2D hand-drawn animation, 3D animated series, producing comic books, some stop motion, IP development, commercial work... the list goes on. You can go check out the Animation Reel at www.steelehouse.com for a taste of what I am talking about. You can also catch a small sampling of my work at http://eric3dee.deviantart.com

    Concurrently, I am also a husband, a father (2 boys), a musician, a DIY tinkerer, and a puppet-maker among other things.


    Do you paint professionally or as a hobby artist?

     

    Yes. Painting is both a passion of mine, and something I am blessed enough to do for money.


    When and how did you end up trying digital painting for the first time?

     

    I have been painting "tradigitally" ever since I can remember owning a Wacom tablet... possibly 15 years ago?


    What is it that makes you choose digital over traditional painting?

     

    Computers have always played a large role in my life and interests, and they are a natural fit for me in my art. I am very excited about the ways technology brings new opportunities and efficiencies to art and animation. In many ways it is cheaper, certainly more versatile, and provides an infinite canvas (if you will) of new and exciting ways of expression. At times I do miss the tactile messiness of getting your hands dirty with a traditional medium, but I am always drawn to the seemingly limitless possibilities of working digitally.


    How did you first find out about open source communities? What is your opinion about them?

     

    I have been aware of open source for a long time now, since the birth of popular programs such as Gimp and Blender. I love the openness of ideas, and how easy it is for users like me to play a role in shaping the development. I love being able to be on the "bleeding edge" of technology through testing and playing with early builds. And above all, I love that it is FREE! Over the years I have come into contact with many people wanting to get started in digital art and animation, and many have always used cost of entry as an excuse, but nowadays that excuse is simply not valid. Programs such as Krita and Blender are of such high quality that in some ways they can even trump their commercial competitors. The accessibility of such programs allows for a much larger and faster growth of talent within the industry. I am a big proponent of open source development.


    Have you worked for any FOSS project or contributed in some way?

     

    I haven't. I have always played around with many open source programs, but have done very little other than provide feedback and do some bug testing.


    How did you find out about Krita?

     

    A very talented Youtube artist named Sycra posted a short video review about Krita. I was shocked that I hadn't heard of it sooner. His channel is definitely worth checking out: https://www.youtube.com/user/Sycra


    What was your first take on it?

     

    I was immediately drawn in by the familiar interface and the brush smoothing options, somewhat of a mixture of Photoshop and SAI. But to my surprise, the more I played with Krita, the more I fell in love with all sorts of unexpected features, especially the pop-up palette!

     

    What do you love about Krita?

     

    Krita has an extremely powerful and versatile brush engine, one that is far superior to Photoshop's (in my opinion). Dynamic brushes, wet brushes, tone brushes and more are very fun to use and lead to all sorts of interesting effects. I also love navigating the canvas in Krita. The OpenGL integration (even on my sub-par tablet graphics card) makes navigation very fluid, and the pop-up palette is the absolute most indispensable tool I didn't know I needed! Photoshop could learn a lot from the pop-up palette. These are just some of the standouts in Krita's robust toolset.

     

    What do you think needs improvement in Krita? Also, anything that you really hate?

     

    There is little I dislike about Krita, mostly regarding speed on my computer. In Photoshop I can tweak HSV and Levels with the preview updating almost instantaneously; in Krita it is quite a bit slower. There is always room to improve. I would encourage the Krita developers to continue to streamline the canvas navigation, as that is one of the best things about using Krita; possibly beefing up the pop-up palette, or adding more pop-up palettes (for switching layers/opacity, switching to eraser mode, etc.). I run Krita on a tablet (a Samsung Series 7 Slate, quite similar to the Surface Pro), so the more I can use Krita in full-screen canvas mode, and the less I need to access my keyboard, the better.

    What I am most interested in seeing is the addition of animation tools to Krita. As a professional animator, I am dying to see this implemented. The current selection of commercial 2D animation software is severely lacking in its approach, specifically by limiting itself to a vector-only engine (with the exception of TVPaint, which is expensive and has a very hard to navigate interface). Using a Flash-like timeline alongside Krita's killer brush engine would be fantastic, and I am dying to get involved and play a part in shaping it in some way.
    Lastly, an OS X build would be fantastic. At work we are almost exclusively Mac-based, so until Krita releases a Mac build I will have to stick to ArtRage and Photoshop ;)

     

    In your opinion, what sets Krita apart from the other tools that you use?

     

    Krita, in my opinion, simply takes some of the best features of popular paint apps, combines them into a great GUI, and improves the heck outta them. As already stated, the brush engine, interface, and navigation tools are simply the best I have found for my workflow.


    If you had to pick one favourite of all your work done in Krita so far, what would it be?

     

    Probably my Talespin fan art piece of Baloo. It was my first big effort with Krita (admittedly, I have only been using it since last fall and haven't had time to do many), and I was very pleased with the end result. I am also very happy with the tiger speed paint, as I got to play around with the symmetry function.

     

    What is it that you like about it? What brushes did you use in it?

     

    I used just about every brush I could. There are so many great brushes and variants (Vasco Basquéhas' and Muse's brushsets are a must!) that I am still trying to whittle down my absolute favorites for my pop-up palette. Using the dynamic sketch brushes and the halftone brushes gives a real modern edge to the "tradigital" approach.

     

    Would you like to share it with our site visitors?

     

    Attached are a bunch of pieces I have all done completely in Krita.

     

     

     

     

    March 25, 2014

    Hyperkitty at the 0th SpinachCon

    SpinachCon – The gnu has spinach in his teeth!

    As part of the greater LibrePlanet 2014 festivities, Deb Nicholson organized the first SpinachCon at Industry Lab in Cambridge MA, sponsored by the Open Invention Network.

    What is SpinachCon?

    “What the heck is a SpinachCon?” you may ask. Well, the idea is that there are free software projects that have niggling usability issues. Like pointing out to a friend that they have spinach stuck in their teeth, SpinachCon is an event where free software fans can show up and let the participating free software projects know whether or not they have ‘spinach in their teeth,’ and between what teeth that spinach might be. :) It’s basically a fun and informal free software usability testing session open to anyone who would like to drop by and help out by testing software.

    [Photo from SpinachCon]

    Hyperkitty joined three other free software projects – MediaGoblin, LibreOffice, and Inkscape – in the event.

    How it worked

    SpinachCon was 5 hours long, from noon until 5 PM the Friday before LibrePlanet. Each project had its own table. Ryan and I brought laptops that we set up on the Hyperkitty table, open and logged in as me to Fedora's Hyperkitty test server. Deb provided user questionnaire sheets that we stacked on the table, and I had also written out a set of six tasks for users to complete during their testing. I transcribed those into a text file in gedit, and we used USB sticks that Deb brought to transfer the file to the other laptop. We kept the gedit buffer open on each laptop.

    Lunch was generously provided by the Open Invention Network, so we both grabbed some pizza and salad and let Deb know when we were ready for our first testers after we finished eating. We had a steady stream of testers throughout the entire length of the event.

    Our per-user workflow was as follows:

    1. Open up the task list text file in gedit and save a copy coded with the tester number. (This completely fell apart because the ability to save a gedit buffer under another file name has been taken away in Fedora rawhide. :( So we ended up copy/pasting buffers from one tab to another; it was hectic and stressful, and our numbers got out of sync with our testers, so we can't associate questionnaires with tester task text files.)
    2. Introductions with tester, ask tester to sit down at appropriate laptop.
    3. Ask tester if they’ve used mailman before, and explain to them a little bit about what Hyperkitty is and how it works.
    4. Show the tester the gedit buffer and explain it has a list of tasks for them to complete, and that we’d like them to take notes and answer the questions as they complete the test.
    5. Sit with the tester and watch them complete the tasks, answering any questions they have and noting any issues or comments that came up as they went along.
    6. Thank tester for completing test, ask them to fill out questionnaire. (We forgot to have one of our testers fill out a questionnaire. Oops!)

    Ideas for Running a Smoother Test Next SpinachCon

    Based on some of our goofs / mishaps this time around, here are some suggestions for the next time we go to SpinachCon – please feel free to use these to prepare your project for SpinachCon in the future!

    • Bring enough laptops! SpinachCon was held at a co-working space that didn't have a computer lab. We didn't realize until the week of the event that we'd need to bring laptops for testers to use. I was hoping to use my own laptop to take notes, but it turned out to be fine not to have it for that. We brought two laptops, so we were able to run two tests at once. That worked well, since there were only two of us (me and Ryan) admining the test. I'd make sure you have one laptop per test admin so you're not stuck waiting on a laptop to open up.
    • Grab enough blank questionnaire sheets! We ran out of questionnaire sheets and weren’t sure if there were any more blank ones available. I did track Deb down and get more, but I think for a 5-hour event, if each team has at least 20 sheets, they shouldn’t run out.
    • Bring plenty of paper and pens! I brought a paper notebook to take notes in and misplaced it. I ended up using the back of one of the filled out questionnaires to take some notes. Some testers brought laptops and wanted to use their own laptops, so we needed somewhere to write down the URL for them to go to. We also only had one pen at our table. Bring pens! Bring paper notebooks!
    • Take photos! I took photos of our testing process, with the permission of the folks sitting at the table. (I’ve used them in this blog post.) This is a good idea to do at the event, because it makes recap blog posts like this more interesting. :)
    • Assign numbers to testers and use them to correlate the data collected! If you are saving data about users on the computer and want to correlate that with your testing sheets, assign your testers a number before their test begins, and coordinate with any other test admins. How Ryan and I could have coordinated (and didn't think to, until it was too late) was one admin gets even numbers, the other gets odd. So my testers would be #1, #3, #5, etc., and his could have been #2, #4, #6, and so on. Write the tester number on the questionnaire sheet at the beginning of the test, save out the blank task question buffer in gedit as 'test-$NUMBER.txt' where $NUMBER is the tester's assigned number, then hand the same sheet to the tester to fill out when they've completed the tasks. (A note for me: use vim, not gedit.)
    • Collect data on paper rather than electronically! Can you tell I'm frustrated by the issues we ran into collecting the users' notes in text files? If I had been smart and prepared, I would have typed out the task list in LibreOffice, printed out a stack (at least 20 copies) and had the users refer to it and write their feedback using pen and paper during the test. Then it would have been easier to correlate the task sheets with the questionnaires – just staple them, or write on them, or fold them together, etc.
    • Test out your tasks before finalizing them! We realized after the first test that the wording in one of the tasks was confusing. Oops. We were able to update the gedit buffers (with great pain) on both laptops afterwards, but yeah. Test your test before you run it. Doh!

    [Another photo from SpinachCon]

    What were the tasks for Hyperkitty?

    Ooh! Do you want to try at home and let us know how you did? You know, that would be quite lovely! Anyhow, here’s the list of tasks:

    1. You’ve got a crazy inbox and you’re really not happy about signing up for high-traffic mailing lists. Based on this, would you rather sign up for the devel list or the astronomy list?
    2. Some of the lists on this system are kind of dead; some have a lot going on. Name some of the lists where there's a lot going on – people actively holding discussions.
    3. You’ve heard about Fedora’s development list (“devel”) and decided to look into it. Who are you likely to hear from if you read that list?
    4. In March 2012, there was a discussion about GPT and Fedora 17 on the devel list. Can you find it?
    5. Find a post you like and add a tag to it so you can find it later.
    6. Start a new discussion on the Fedora users list (“users.”)

    If you’d like to play at home, visit lists.stg.fedoraproject.org and let us know how you did in the comments!

    So, what did you learn about Hyperkitty?

    Our full test results, including the full text from the users' task notes buffers as well as the user questionnaires (listed in full per user as well as across users per question), are available on the Hyperkitty trac wiki.

    At a high-level, we drew a few pretty strong conclusions from the testing experience:

    1. Hyperkitty’s search needs more work.

    Only 1 of the 8 testers was able to find the GPT posting from March 2012 in task #4! Suggestions the users offered here included:

    • listing more than 10 results per search results page
    • allowing users to sort the search results
    • allowing users to search only within a certain time period (e.g., only in March 2012)
    • simplifying the search results listing of posts
    • making the search box easier to find / more apparent
    • having an ‘advanced search’ page with a lot more searching tools
    • filtering search results to only threads you were involved in

    2. The users enjoyed Hyperkitty’s look and feel.

    Multiple testers asked me about how to set up Hyperkitty for their own usage after completing the test. Hyperkitty earned the following average user ratings on the user questionnaire:

    • Aesthetics: 8.4 out of 10
    • Intuitive: 8.0 out of 10
    • User-Friendly: 7.6 out of 10
    • Professional: 8.6 out of 10

    3. The left-hand nav / filters for the mailing list overview weren’t visible to users.

    Only one of the 8 testers discovered the left nav on the Hyperkitty list overview that would have allowed them to sort the lists by popularity (making task #2 super-easy!).

    4. Users didn’t seem to notice the search box.

    Only a few of the users noticed the search box. Most users, while on the devel list overview page, noticed the year/month listings in the left nav and clicked to March 2012 when completing task #4. Only a few tried to use search, and most only after browsing to March 2012 first.

    5. Tagging needs more work.

    Aaron, one of our testers, had a lot of interesting thoughts on making tags useful. For example, we should suggest tags already in use in the system (maybe via tag cloud, maybe via some other mechanism) to prevent users coming up with different tags for the same thing (or at least, preventing that as much as possible.) Some of the users also had a hard time finding the tags – they are associated with threads, not individual posts, so they are on the right-hand side of the thread view. Another issue that came up – some users weren’t sure if the tags were for their own personal usage or if they were viewable to everyone in the system. (The latter is the case!)

    Summary

    Overall we got a ton of great ideas, feedback, and information from SpinachCon, and it was a really valuable experience. Deb mentioned we may be doing another SpinachCon in the Boston area later this spring; I am totally excited to bring Hyperkitty to SpinachCon again then!

    BTW, here’s a screenshot of how Hyperkitty looked to testers, for posterity / comparisons at the next SpinachCon:

    [Screenshot: Hyperkitty as shown to testers]

    Thank you to the Open Invention Network for sponsoring such a great event!

    GNOME Software on Ubuntu

    After an afternoon of hacking on appstream-glib, I can show the fruits of my labours:

    [Screenshot 1]

    This needs gnome-software and appstream-glib from git master (or gnome-apps-3.14 in jhbuild) and you need to manually run PackageKit with the aptcc backend (--enable-aptcc).

    [Screenshot 2]

    It all kinda works with the data from /usr/share/app-install/*, but the icons are ugly as they are included in all kinds of sizes and formats, and there are no long descriptions except for the two (!) installed applications new enough to ship local AppData files. Also, rendering all those svgz files is muuuuch slower than a pre-processed png file like we ship with AppStream. The installed view also seems not to work. Only the C locale is present too, as I've not worked out how to get all the translations from an external gettext file in appstream-glib. I'd also love to know how the Ubuntu Software Center gets long descriptions and screenshots. But it kinda works. Thanks.

    Working together, developing open standards

    A snippet of SVG code generated in Inkscape

    Scalable Vector Graphics (SVG for short) is an XML-based format for vector graphics, as the name might imply. It’s an open standard developed by the World Wide Web Consortium, which you may know of already by its short form: the W3C. Because SVG is an open standard, its specification is public. Anyone can read the rules and guidelines that make up the SVG format. Anyone can make software that parses or produces SVG. And because the specification is public, that reading, parsing, programming and changing can go on for ever. File formats based on open standards never have to die. Your SVG could be immortal.

    Let’s talk about the specification a little. For those not already rabidly interested in standards, the specification is the standard: it’s the document defining what a particular standard is and how it can be implemented. The specification makes everything else possible. The SVG specification has been under development since 1998. It grows and changes a bit, but stays stable. In its current form, SVG 1.1, it defines a language, and ultimately a format, with a diverse set of capabilities. It includes the features most of us know, like vector shapes, paths and text rendering; and features many may not know about, like animation and interactivity.

    One of the joys of SVG is that it really is under active development, working up to a new major release of the specification. If you take a look at the archives of the SVG Working Group mailing list, you’ll see people discussing features and implementations. The contributors to those discussions aren’t just W3C employees (in fact, the majority of them aren’t). Many of them work for companies with an interest in SVG, or are just involved members of the public. It’s a diverse group of people helping to develop and troubleshoot the standard.

    SVG embodies one of the great features of open standards development: it unites a whole collection of different players and stakeholders in a group effort to make something good. Companies that might otherwise spend their time, effort and labour building closed systems instead end up working together to build something everyone can use.

     

    Read the rest of our Document Freedom Day series on SVG:

    Celebrating Document Freedom Day, celebrating our favourite open standard
    SVG or: How we learned to stop worrying and love document freedom
    Open standards allow the unexpected

    March 24, 2014

    Celebrating Document Freedom Day, celebrating our favourite open standard

    This Wednesday, it’s Document Freedom Day, a worldwide day for recognizing and celebrating open standards. Open standards, put simply, are standards, file formats and codecs that are usable and implementable by everyone and anyone. In our work on Libre Graphics magazine, we use open standards every day, whether it’s in our print design work or in something as simple as an HTML web page.

    For Document Freedom Day this year, we want to spend some time talking about one of our very favourite open standards: SVG, the Scalable Vector Graphics format. If you’ve ever picked up a copy of Libre Graphics, you’ve encountered an image originally produced as an SVG. For our illustration work, for posters, and for elements like our logo, we make heavy use of this versatile file format. We already showcase beautiful SVG images frequently in our “Best of” section. This week, we’re making a point of talking about the format itself.

    We invite you to read along from now until Friday as we post a series of entries about our favourite open standard.

    Read the rest of our Document Freedom Day series on SVG:

    Working together, developing open standards
    SVG or: How we learned to stop worrying and love document freedom
    Open standards allow the unexpected

    GNOME Software 3.12.0 Released!

    Today I released gnome-software 3.12.0 — with a number of new features and a huge number of bugfixes:

    [Screenshot: the GNOME Software 3.12 main view]

    I think I’ve found something interesting to install — notice the auto-generated star rating which tells me how integrated the application is with my environment (i.e. is it available in my language) and if the application is being updated upstream. Those thumbnails look inviting:

    [Screenshot: the application details page]

    We can continue browsing while the application installs — also notice the ‘tick’ — this will allow me to create and modify application folders in gnome-shell so I can put the game wherever I like:

    [Screenshot: browsing while an application installs]

    The updates tab looks a little sad; there’s no update metadata on rawhide for my F20 GNOME 3.12 COPR, but this looks a lot more impressive on F20 or the yet-to-be-released F21. At the moment we’re using the AppData metadata in place of update descriptions there. Yet another reason to ship an AppData file.

    [Screenshot: the updates tab]

    We can now safely remove sources, which means removing the applications and addons that we installed from them. We don’t want applications sitting around on our computer not being updated and causing dependency problems in the future.

    [Screenshot: the software sources dialog]

    Development in master is now open, and we’ve already merged several large patches. The move to libappstream-glib is a nice speed boost, and other more user-visible features are planned. We also need some documentation; if you’re interested please let us know!

    March 22, 2014

    Synfig optimization - First results

    This month we have been working on optimizations for Synfig rendering. It's time to share some results....

    March 21, 2014

    EmbroiderModder 2 kickstarter launch

    The world needs open source embroidery software.

    Direct Kickstarter link here.

    Some of EmbroiderModder 2's kickstarter rewards include designs and stitched pieces by me.


    Flicker Morning

    [Northern Flicker on our deck] "There's a woodpecker sitting on the patio", Dave said, shortly after we'd both gotten up. He pointed down through the gap where you can see the patio from upstairs. "It's just sitting there. You can go down and look through the door; it doesn't seem to mind."

    Sure enough, a female northern flicker was sitting on the concrete patio deck, immobile except for her constantly blinking eyes and occasionally swiveling head. Definitely not a place you'd normally expect to see a woodpecker.

    Some twenty minutes earlier, I remembered, I'd heard a couple of thumps on the roof outside the bedroom, and seen the shadow of wings through the drawn shades. I've heard of birds flying into windows and getting stunned, but why would one fly into a roof? A mystery, but I was sure the flicker's presence was related to the thumps I'd heard.

    I kept an eye out while I made coffee and puttered around with normal morning chores. She wasn't budging from that spot, though she looked relatively alert, keeping her eyes open even while sitting immobile.

    I called around. (We still don't have internet to the house -- Comcast keeps giving us the runaround about when they'll dig their trench, and I'm not entirely convinced they've even applied for the permit they said they'd applied for three weeks ago. Maybe we need to look into Dish.) The Santa Fe raptor center had a recorded message suggesting that injured birds be put in a cool dark box as a first treatment for shock. The Española Wildlife Center said if I thought she was injured and could catch her, they could take her in.

    I did suspect she was injured -- by now she'd been there for 45 minutes or more, without moving -- but I decided to give her some time to recover before going for a capture. Maybe she was just in shock and needed time to gather herself before trying to fly. I went on with my morning chores while keeping an eye out for coyotes and ravens.

    For two hours she remained there. The sun came out from behind the clouds and I wondered if I should give her some shade, food or water, but decided to wait a while. Then, as I was going back to the bird book to verify what kind of flicker she was and what gender, she suddenly perked up. Swiveling her head around and looking much more alert than before, she raised herself a little and took a few steps, to one side and then the other. More head swiveling. Then suddenly, as I was reaching for my camera again, she spread her wings and flew off. A little heavily and stiffly, but both wings looked okay.

    So our morning's flicker adventure has a happy ending.

    March 19, 2014

    AppStream Logs, False Positives and You

    Quite a few people have asked me how the AppStream distro metadata is actually generated for their app. The actual extraction process isn't trivial, and on Fedora we also do things like supply missing AppData files for some key apps and replace some upstream screenshots on others.

    In order to make this more transparent, I’m going to be uploading the logs of each generation run. If you’ve got a few minutes I’d appreciate you finding your application there and checking for any warnings or errors. The directory names are actually Fedora package names, but usually it’s either 1:1 or fairly predictable.

    If you’ve got an application that’s being blacklisted when it shouldn’t be, or a GUI application that’s in Fedora but not in that list, then please send me an email or grab me on IRC. The rules for inclusion are here. Thanks.

    March 18, 2014

    Announcing Appstream-Glib

    For a few years now Appstream and AppData adoption has been growing. We’ve got client applications like GNOME Software consuming the XML files, and we’ve got several implementations of metadata generators for a few distros now. We’ve also got validation tools we’re encouraging upstream applications to use.

    The upshot of this was the same code was being duplicated across 3 different projects of mine, all with different namespaces and slightly different defined names. Untangling this mess took a good chunk of last week, and I’ve factored out 2759 lines of code from gnome-software, 4241 lines from createrepo_as, and the slightly less impressive 178 lines from appdata-tools.

    The new library has a simple homepage, and so far a single release. I’d encourage people to check this out and provide early comments, as I’m going to switch gnome-software over to it as soon as it branches for 3-12. I’m also planning on switching createrepo_as and appdata-tools for the next releases too, so things like jhbuild modulesets need to be updated and tested by somebody.

    Appstream-Glib 0.1.0 provides just enough API to make sense for a first release, but I’m going to be continuing to abstract out useful functionality from the other projects to share even more code. I’ve spent a few long nights profiling the XML parsing code, and I’m pleased to say the load time of gnome-software is 160ms faster with this new library, and createrepo_as completes the metadata generation 4 minutes faster. Comments, suggestions and patches very welcome. There’s a Fedora package linked from the package review bug if you’d rather test that. Thanks.

    Interview #2 @ FSCONS (English, Hacker Public Radio)

    It was more than a year ago, on the day of the release of the Morevna Project Demo, that Julia Velkova and I gave a talk at FSCONS 2012 (Göteborg, Sweden). That was also the first public screening of the Demo.

    I have mentioned the two interviews that happened back then – the first one, in Russian, was published in January 2013. The second interview, for Hacker Public Radio (in English), was published just recently!

    Check it out here:
    http://hackerpublicradio.org/eps.php?id=1458

    For me this publication is like some kind of “FSCONS report” and a good chance to take a look back. Not all plans came true; some took unexpected twists. But I am really happy to remember that time – a lot of good feelings and memories.

    Thanks to Kenneth Frantzen and Hacker Public Radio!

    a Maslow hierarchy of software making

    This morning, I whipped up a Maslow hierarchy for breakfast, one for the activity of software making:

    the Maslow hierarchy of software making

    My thoughts started where one normally starts explaining the hierarchy: at the bottom. I recalled what I wrote here a while ago:

    ‘Everyone knows that in order to get software, it needs to get built, i.e. code developed.’

    And with that I had my first, ‘physiological’ level of software making: to build. Without it, there is nothing. This is the hammer and saw level; just cut it to size and nail it together.

    You would be surprised how much software is made like this—yes, also for top dollar.

    moving on up

    The next level is to engineer. At this point one is not just cobbling code together, there is also thought and structure involved, on a technological level. This is all about avoiding chaos, being able to scale up—in size and complexity—and code being maintainable in the future.

    We can use these two levels to illustrate a common source of strife in software development outsourcing. The customer thinks they will get an engineered solution for their money; the supplier thinks they can just build whatever passes the acceptance tests.

    Maslow’s second level is called safety. Somehow that matches quite well with what software engineering is about.

    a giant leap

    One level up is a whole new ballgame: to plan features. This requires having a product vision and picking only those features that make the product stronger, while saying ‘no’ to 95% of the features that users, marketing and engineering come up with.

    This is the entry level for putting some purpose into the activity; to make software that matters. It is not easy to make it up to here; respect for those who do. It takes a visionary person, confident enough to deal firmly but correctly with all the wild impulses that come from users and engineering.

    The corresponding Maslow level is called love. Indeed, to plan features is the first step of putting some love into the software you are making.

    accident prune

    The fourth level is to take the random factor out of software making: to specify. Define first what needs to be made, before engineering figures out how it can be built. This roots out the standard practice of the how determining what the result will be.

    I am totally not a fan of heavyweight, bureaucratic specifications. Just keep in mind: the point is that a project should be able to tip the code, swap out the complete engineering crew and be confident to resurrect exactly the same software from spec—just with different bugs.

    full potential

    And now we reach the top, the level of complete product realisation: to design. The focus moves to product value delivery, by addressing user needs, while taking into account what is feasible with technology.

    A software project that is operating smoothly on all five levels of the hierarchy is one that is delivering its full potential. It can concentrate on realising its vision and through that be a leader in its market.

    Conversely, a project where one or more levels of the hierarchy are missing, or not functioning well, will spend a considerable amount of its energy on trying to fill the void(s). The organisation may not quite be able to put its finger on what is wrong, but spend a lot of communication, meetings, time and action on correcting it.

    Maslow’s top level is summed up as

    ‘morality, creativity, spontaneity, problem solving, lack of prejudice and acceptance of facts’

    That’s a pretty good trait‐requirements list for a designer. Stated the other way around, to design is a human need of the highest level.

    use it

    Now we can work the diagram, in true Maslow style:

    An organisation will only be motivated to fix the lowest level that is found lacking.

    As we have already seen, the build level trumps anything. Problems on this level—e.g. lack of developers, or ‘physiological’ bugs (crashing)—will crowd out any other considerations. Moving up, if the engineering is found lacking, then an organisation will not be inclined to take feature‐planning, specification, or design seriously.

    If an organisation fails to plan features then design and/or specification initiatives will be in vain. Is the specification process found lacking? Then it will be hard for an organisation to become design‑driven.

    Working the diagram, we can understand how software‐making organisations act.

    postscript

    Here is something I noticed when I looked at the hierarchy diagram: it doesn’t mention software or anything related, does it?

    Turns out the diagram is general for any design discipline; it is a Maslow hierarchy of making anything.

    March 14, 2014

    FreeCAD Architecture tutorial

    This is a copy of an original article I wrote on the FreeCAD wiki (http://www.freecadweb.org/wiki/index.php?title=Arch_tutorial), about doing BIM modeling with FreeCAD, copied here for archiving purposes. I advise you to read the original article; the quality is much better. Introduction This tutorial aims at giving you the basics to work with the Arch Workbench. I will...

    March 12, 2014

    Mini Debian Women Conference 2014


    This weekend we’ll be participating in the MiniDebConf in Barcelona.

    Two days of talks, discussions, the celebration of the tenth anniversary of Debian Women, and PGP keysigning, among other fun things.

    Be sure to check the full schedule, register and contribute to the crowdsourcing campaign that will help support this wonderful event.

    Libre Graphics magazine at Libre Graphics Meeting

    At the start of April, the 9th annual Libre Graphics meeting will take place in Leipzig, Germany. The program of talks and events is up now, and it looks fantastic. Here’s what we’ll be talking about:

    Beating the drums: Why we made gender an issue
    Since 2010, Libre Graphics magazine has been showcasing high quality art and design made with F/LOSS. We’ve also been publishing articles which offer critical perspectives on art and design practice in F/LOSS and Free Culture contexts. In winter 2014, we published an issue called “Gendering F/LOSS.” Building on years of discussion in diverse F/LOSS communities, we used the issue to look at the state of gender in F/LOSS art and design. In this presentation, we explain why we made gender an issue, literally.

    Dear designer, have these cool tools
    Getting designers to switch their tools is always a hard task. Convincing them to abandon their proprietary tools for F/LOSS ones is an even harder challenge. We know that the “You can’t do professional design without going the Adobe way” meme is untrue, and it is our personal itch to disprove it. For that, we’re cooking up a kit of tools and assets that can help anyone wanting to try Free Software tools for their design practice. The Libre Graphics magazine is one of the ways we have to prove that one can design and *print* with an F/LOSS based toolchain. What tactics can we resort to in order to get other designers out of their proprietary habits?

    Activity, Preparing to LGM 2014

    *Shaking off the dust*

    It's been quite a while since I last posted here. Some will smile at the sight of this post, especially those who know why I was absent (a state which should change around October this year).

    Not too long ago, I was invited to the Libre Graphics Meeting of 2014, one of the largest (if not the largest) annual gatherings of users and developers of Open-Source tools and programs for graphics. After having to turn down the invitation once or twice before due to problematic dates, this year I managed to arrange a "vacation" on the relevant dates and I'll actually be going!

    Yesterday I finished all of the arrangements (including setting up an Ubuntu dual-boot on my laptop, which turned out to be way easier than I thought), and I can start thinking about LGM itself. It's going to be the first time I attend a real convention like this, and the fact that this first time is abroad and far from home does add its share to my excitement :)

    The program sounds very interesting, with the potential to cover many topics which I never had the chance to learn (such as color management and font design). Also, meeting people whom I've talked to on IRC over the last 4 years but never met in person should be one of the greatest parts of LGM from my point of view.

    So, before I depart, it's time for some thanks. The first of them is to the GIMP family - the people who always care, help and greet you with a smile after you've been missing for some time. The second thanks goes to you - the (usually anonymous) supporters of GIMP; the donations you make to GIMP also help to reimburse the traveling expenses for developer gatherings like LGM.

    So, I'll see you there,
    ~ Barak

    March 10, 2014

    Testing with Angular.JS

    On Mar 5, 2014, I gave a presentation for the Belgian Angular.JS Meetup group:

    [Title slide of the presentation]

     

    The slides from this presentation are now available as an annotated article. You can read it over here. Now is a good time to start testing your code, if you aren’t already doing so.
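
    To give a flavour of what that looks like in practice, here is a minimal Jasmine-style unit test using the standard angular-mocks helpers; the 'myApp' module and 'greeter' service are hypothetical, purely for illustration:

    // Karma + Jasmine + angular-mocks assumed to be set up.
    describe('greeter', function () {
      // 'myApp' is a hypothetical module name for this sketch.
      beforeEach(module('myApp'));

      it('greets people by name', inject(function (greeter) {
        // inject() resolves the (hypothetical) greeter service
        // from the module's injector.
        expect(greeter.greet('World')).toBe('Hello, World!');
      }));
    });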

    MyPaint on Twitter: @MyPaintApp

    [MyPaint bird mascot]
    MyPaint now has a Twitter account, for announcements, Q&A, feature discussion and general development chatter. Follow us on @MyPaintApp!

    Krita Lime: Localization Support

    After some time of really hard work we are happy to announce that our Krita Lime packages now support localization!

    If you want to see Krita in your own language, just install a corresponding package...

    apt-get install krita-testing-l10n-[your_language_id]

    ... and run Krita!

    If you want to see Krita in a language that differs from your system language, type

    KDE_LANG=[your_language_id] krita


    Happy painting with Krita! :)

    PS:
    And for some languages the help of translators is really needed! Join our KDE translators team on l10n.kde.org and we will move KDE forward together! :)


    March 07, 2014

    Czech Myself Before I Wreck Myself

    My “Make Art Not Law” talk is finally online!

    Text and slides here.

    Czech translation and subtitles here.


    Krita Training Session Next Month

    For the second year, next month I’ll run a professional training session about drawing with Krita at ActivDesign in Rennes.
    It will take place the week right after the Libre Graphics Meeting, from 7 to 9 April.
    It is still possible to register if you’re interested (note: French-speaking session).

    ActivDesign

    Besides, if you want some local personal training about Krita, in French like this time but also in English, you can always contact me directly on my contact page.

    Here is a little illustration made for the guide to Krita 2.8's features PDF file; as you must already know, 2.8 was released a few days ago ;)

    [Illustration: Birdy Girl]

    The ‘paint-order’ property in SVG 2

    Introduction

    SVG 2 has a new ‘paint-order’ property which can be used to select the order in which the fill, stroke, and markers are painted. This is of especially great use for text where having the stroke painted on top of the fill leads to distorted glyphs.

    Sample text showing the effect of the ‘paint-order’ property.

    As of March 2014, Chrome and Firefox Nightly support the ‘paint-order’ property. Inkscape trunk also has rendering support for the property (if compiled in).
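
    For a quick illustration, here is a minimal hand-written snippet (not one of the test files below, just a sketch): without the property, the black stroke would eat into the orange fill of the glyphs; with ‘paint-order: stroke’, the stroke is painted first and the fill stays crisp on top.

    <svg xmlns="http://www.w3.org/2000/svg" width="400" height="60">
      <!-- Stroke painted under the fill, keeping the glyphs readable. -->
      <text x="10" y="40" font-size="36"
            fill="orange" stroke="black" stroke-width="3"
            paint-order="stroke">
        Stroked text
      </text>
    </svg>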

    Test SVGs

    [Test image: squares with different orders of ‘fill’, ‘stroke’, and ‘markers’]

    Each square has the indicated value of the ‘paint-order’ property.

    [Test image: lines of text with different orders of ‘fill’, ‘stroke’, and ‘markers’]

    Each line of text has the indicated value of the ‘paint-order’ property.

    March 06, 2014

    Blender 2.70 Release Candidate

    The release candidate for the upcoming Blender 2.70 release is now available for download.

    This is a build for users to test and find issues before we make the final release. If you find bugs, please report them to our bug tracker.

    New features include initial support for volumetrics in Cycles, and faster rendering of hair and textures. The motion tracker now supports weighted tracks and has improved planar tracking. For mesh modeling there are new Laplacian deform and wireframe modifiers, along with more control in the bevel tool. The game engine now supports object levels of detail.

    The first results from the new user interface project are also in this release, with dozens of changes to make the interface more consistent and powerful. This is also the first release of the multithreaded dependency graph, which makes modifier and constraint evaluation faster in scenes with multiple objects.

    For more details, see the release notes for Blender 2.70.

    March 05, 2014

    New Perspective Tool for Krita

    I have been meaning to write about this fabulous new tool that is the brainchild of none other than David Revoy (the mockup can be seen @ New Perspective Tool). Well, like many of you out there I am also mesmerized by this new tool, but upon me is the task of […]

    March 04, 2014

    Krita 2.8.0 Released

    Today, the Krita team releases Krita 2.8. Krita 2.8 is a big milestone release, since this is the first release that is ready for end users on Windows! Krita 2.8 also has many new features, hundreds of bug fixes, performance improvements, usability fixes and look-and-feel improvements. Let's join David Revoy for a tour of...

    What's new in Krita 2.8?

    Krita 2.8 running in Trisquel GNU/Linux 6.0, showing the default interface. The character on the canvas is Kiki the Cyber Squirrel, Krita's Mascot.

    Author of screenshot and mascot: Tyson Tan

    Krita 2.8 highlights:

    Windows version

    Krita 2.8 will be the first stable Krita release for Windows. We have been making experimental builds for Windows for about a year now, and a lot of testers helped us to stabilize them. While this is not really a new feature for the Linux user, the step and the work on it were so huge that it merits first rank in this feature list! Most work on the Windows port has been done by KO GmbH, in cooperation with Intel.

    Development: Dmitry Kazakov, Boudewijn Rempt.

    Better tablet support:

    Krita has relied on Qt's graphics tablet support since Krita 2.0. We consciously dropped our own X11-level code in favour of the cross-platform API that Qt offered. And apart from the lack of support for non-Wacom tablets, this was mostly enough on X11. On Windows, the story was different, and we were confronted with problems with offsets, bad performance, and no support for tablets with built-in digitizers like the Lenovo Helix.

    So, with leaden shoes, we decided to dive in and do our own tablet support. This was mostly done by Dmitry Kazakov during a week-long visit to Deventer, sponsored by the Krita Foundation. We now have our own code on X11 and Windows, though still based on Qt's example. Drawing is much, much smoother because we can process much more information, and issues with offsets are gone.

    Photo: David Revoy testing Krita with four tablets.

    Development: Dmitry Kazakov, Boudewijn Rempt.

    New high-quality scaling mode for the OpenGL canvas

    Krita was one of the first painting applications with support for using OpenGL to render the image. And while OpenGL gave us awesome performance when rotating, panning or zooming, rendering quality was lacking a bit.

    That's because by default, OpenGL scales using some fast, but inaccurate algorithms. Basically, the user had the choice between grainy and blurry rendering.

    Again, as part of his work sponsored by the Krita Foundation, Dmitry took the lead and implemented a high-quality scaling algorithm on top of the modern, shader-based architecture Boudewijn had originally implemented.

    The result? Even at small zoom levels, the high-quality scaling option gives beautiful and fast results.

    Image by Timothee Giet

    Development: Dmitry Kazakov, Boudewijn Rempt.

    Krita Gemini

    On Linux, the Krita Sketch user interface is now bundled with Krita 2.8, and users can switch from one interface to another. Krita Sketch and Krita Gemini were developed by KO GmbH together with Intel. On Windows, Krita Gemini will be available through Valve's Steam platform.

    https://www.youtube.com/watch?v=qiSGuCzRZFk

    https://www.youtube.com/watch?v=lLwFJ0jF37s

    Development: Arjen Hiemstra, Dan Leinir Turthra Jensen, Timothée Giet, Boudewijn Rempt

    Wrap Around mode

    The Wrap Around mode (activate it with the W key, or in View > Wrap around mode) tiles the artwork on the canvas border for easy creation of tiled textures. It is only visualized in OpenGL mode (Settings > Configure Krita > Display).

    Development: Dmitry Kazakov.

    New default presets, better tagging system

    Krita 2.8 offers a new set of brush presets, with new icons following standards. Here is a screenshot with a sample of them.

    Additionally, it is now easier and faster to assign tags to resources -- with a single right click.

    Development: Sascha Suelzer

    Resources: Timothée Giet, Ramon Miranda, Wolthera, David Revoy, and other Krita community artists.

    Layer-picker

    Directly select a layer by pressing the 'R' key and clicking with mouse/stylus on the canvas.

    Development: Dmitry Kazakov.

    Icons: David Revoy

    Custom transparency checkboxes:

    Transparency checkboxes represent the transparent colors. Now the colors and size of the checkboxes are configurable: You can change it in Settings > Preferences > Transparency Checkboxes. Here is a demo with a dark checker theme to show the halo around the Krita 256x256 *.png logo:

    Development: Boudewijn Rempt.

    New palette docker

    It’s now easier to switch palettes with the Palette docker. Adding and removing a color can be performed directly in the docker. A set of new palette presets made by talented authors is now also bundled with Krita by default. You can display the Palette docker via Settings > Dockers > Palette.

    Development: Sven Langkamp

    Resources: Tim Von Rueden, Spencer Goldade, Richard Fhager, Kim Taylor, David Revoy.

    Pseudo Infinite canvas

    If you scroll the canvas a lot in one direction, you’ll notice a big button with an arrow on it appearing on the border of the screen. A single click on this big button will extend the canvas automatically in that direction.

    This feature will help you to focus on drawing and never worry about the available drawing surface.

    Additional options for the crop tool

    The crop tool can now ‘grow’ (you can crop a document outside the canvas limit and extend the canvas) and also gets decorations (thirds guidelines, middle crosshair).

    Development: Camilla Boemann

    Better color pickers

    The color pickers get new icons and more options.

    • Ctrl + LeftClick --- Pick from merged image to Fg color
    • Ctrl + Alt + LeftClick --- Pick from current layer to Fg color
    • Ctrl + RightClick --- Pick from merged image to Bg color
    • Ctrl + Alt + RightClick --- Pick from current layer to Bg color

    You can change the shortcuts (like Alt) for the Color pickers in the Settings > Configure Krita > Canvas Input Settings and unfold ‘Alternate Invocation’.

    Development: Dmitry Kazakov, icons: David Revoy

    New Color balance filter

    Color balance changes the overall colors of an artwork via sliders and is used for color correction. It’s an ideal filter to give an extra mood to your artwork (warmer, cooler) or enhance black and white studies. You can find the filter here: Filter > Adjust > Color balance (Ctrl+B)

    https://www.youtube.com/watch?v=Jos-2j4bgIE

    Development: Sahil Nagpal, Dmitry Kazakov

    Initial support for G'mic

    Use hundreds of famous G'Mic filters directly in Krita. It’s a first implementation, still experimental and unstable. Also, it does not yet take the selection into account and doesn't show a preview.

    You can find the feature under the menu: Layer > Apply G'Mic actions.

    Development: Lukáš Tvrdý, David Tschumperlé.

    Clone array tool

    This new feature creates a number of clones of the current layer so you can paint on it as if the tiles were repeated. The feature is convenient for the creation of isometric tiles.

    You can find the feature in Layer > Clone Array

    Note that ‘clone layer’ children can be moved independently from the parent layer with the move tool. You can clone any base layer and position the clones to your liking; they keep the dynamic properties of a live clone.

    https://www.youtube.com/watch?v=IFKgqhTmM3w

    Images, tests and videos by Paul Geraskin.

    Development: Dmitry Kazakov.

    More custom shortcuts

    Krita gets a new panel in the preferences (Settings > Configure Krita > Canvas Input Settings) to offer you the possibility to customize all canvas-related shortcuts. That means all the zoom and color picker keys are now configurable.

    Development: Arjen Hiemstra

    More compact, better looking

    A lot of work was done to make the Krita 2.8 user interface more compact, and dim the saturation of the icons to let the user focus on the canvas.

    Other new Features:

    • Make it possible to copy the projection of a group layer
    • Make it possible to drop images on the startup page
    • Isolate layer or mask: right-click on the layer or mask, select "isolate" and temporarily work only on that layer or mask.
    • Make it possible to load ACT palette files
    • Make the channel docker work on the image, not the individual layer
    • Add a wraparound layer move mode

    Removed Features

    • OpenShiva filter and generator scripting language. This is replaced by the G'Mic plugin.

    Improvements of old features

    • Make it possible to end paths and selection paths with Shift-Click, Enter, Esc and clicking on a handle shown over the first point
    • Improve the reference images docker
    • Make predefined brush tips use a size parameter rather than scale
    • Update the default brush presets largely (new icons, new presets, better organization)
    • Make the fill layer command obey the alpha lock switch
    • Make the PSD import filter ignore malformed resource blocks
    • The resource tagging system has been hugely improved
    • Implemented anisotropic spacing for the Krita brushes. Now if you change the 'ratio' option of the brush, the horizontal and vertical spacing will be relative to the width and height of the brush respectively.
    • Added support for 16 bit color depths in Color Balance and Dodge and Burn Filter
    • Improve painting of sharp corners with the Drawing Angle sensor enabled ("fan corners feature")
    • Improve the UI of the Sobel Filter
    • Add support for loading single-layer PSD Grayscale images
    • Improve the image docker: display a color picker when hovering over the image in the docker, scale filenames to fit, and use the theme highlight for the selected icons
    • Use 'size' instead of 'scale' to scale the predefined brushes
    • Make the fill tool obey the layer alpha lock state
    • Rework the brush outline cursor and add a combined brush outline and dot or crosshair cursor mode. Brush outlines now also behave sensibly with very big and very small brushes.
    • Add an option to hide the preset strip and/or scratchpad in the brush editor
    • Make it possible to copy the projection of a group layer to the clipboard
    • Add the filter name to the filter layer or mask name
    • Make it possible to drag and drop an image on the startup window
    • Improve rendering of vector layers
    • Apply thickness parameter to the hatching brush
    • Add a shortcut (shift + Z) to undo points added to paths
    • Allow the multibrush to use an angled axis and have an option to show the axis
    • Improve mirroring of layers or masks (and make it four times as fast)
    • Improve the layout of many dialogs: imagesize, layersize, phong bumpmap, canvas size, modify selection, jpeg and jp2 export
    • Cut memory usage of pattern resources in half
    • Cut runtime memory usage when switching predefined brushes
    • Update the default workspaces, adding versions for high and low res screens
    • Pixel and vector selections can be converted to each other
    • Updated line smoothing algorithms
    • Fix saving compositions
    • New erase toggle icon
    • Fix a memory leak when using the brightness/contrast curve
    • Save resolution info to OpenRaster files
    • Make handling custom input profiles more robust, also when updating (this should be the first 2.7.9.x release where you shouldn't need to remove the input settings folder)
    • Add a reset button to the input profile editor
    • Fix wraparound mode for the selection painting tools
    • Crop selection masks when activating the wraparound mode
    • Fix painting the cursor outline when there is no cursor outline
    • Make painting on high bit depth images much faster when the OpenGL canvas is enabled
    • Fix updates of the canvas rulers
    • Fix moving of a selection out of a layer
    • Fix saving indexed PNG files with current versions of libpng
    • Update to the latest G'Mic version and enable the G'Mic plugin on Windows
    • Make the G'Mic dialog resize when selecting a filter (fixes layout issues)
    • Add a crash handler for Windows that uploads minidumps (the website that goes with it is not done yet!) and offers a clean restart

    Optimizations

    • Rewrite the OpenGL canvas: it's now much faster and more robust, as well as more extensible.
    • Rewrite the tablet support to support non-Wacom tablets on Windows and Linux and have better support for Wacom tablets. It is now possible to use multiple tablets (like Cintiq + Intuos) and issues with offsets are gone.
    • Freehand lines are now much smoother and more precise
    • Load all resources in the background, as soon as possible
    • Fix memory leak when downscaling an image
    • Fix memory leak when making selections
    • Make painting gradients much faster
    • Make selections much faster
    • Painting with predefined brushes is 20% faster

    The Krita 2.8 release has been made possible by:

    • the KDE project and community which provided the infrastructure, the foundations and frameworks Krita has been built on as well as the community we are proud to be part of.
    • the Krita Foundation which, supported by the Krita user community, has been able to sponsor Dmitry Kazakov as a full-time developer on Krita during this development period. Consider joining the Development Fund to make sure Krita's development will continue at its current break-neck pace!
    • KO GmbH, responsible for most of the work of making Krita work on Windows and providing commercial support for Krita users, as well as the Krita on Steam effort.

    Many Linux distributions will offer Krita 2.8 in their backports repositories. Windows users can download Krita 2.8 here, provided by KO GmbH.

    Muses Training DVD

    The best way to get to know Krita is through the Muses DVD! Check out the contents, or order your copy now!

    The regular price is € 32,50 including shipping.

    March 03, 2014

    Development priority for March

    Our fundraising campaign has reached its goal and we are happy to announce the development priority for March!...

    March 01, 2014

    The status of frame-by-frame animation features

    We are happy to publish a short video, demonstrating the latest results in development of frame-by-frame animation features for Synfig....

    Release Notes: Feb 2014

    What’s the point of releasing open-source code when nobody knows about it? In “Release Notes” I give a round-up of recent open-source activities.

    Since this is the first instalment of what will hopefully be a regular thing, I’ll look back a couple of months into the past. A long (and not even complete) list of things, mostly related to web technology. I hope you’ll find something useful in it.

     

    angular-encode-uri (New, github)

    A trivial plugin for doing URI encoding in Angular.JS view, something it oddly doesn’t do out of the box.

     

    angular-gettext (Updated, website, original announcement)

    The nicest way to do translations in Angular.JS is getting even nicer, with improved coverage of strings and file types, built-in support for asynchronous loading and more flexibility.

    But most of all: rock-solid stability. Angular-gettext is now in use for some nice production deployments and it just works.

    Highlights:

    • The website is now open-source and on github.
    • There’s an ongoing effort to split the grunt plugins up into the actual grunt bits and a more generic library. There’s also a Gulp plugin coming, so you can use any tooling you want.
    • Functionality for loading translations asynchronously.
    • Now usable without jQuery loaded.
    • Better handling of translation strings in directives.
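
    As a quick taste, this is roughly what the asynchronous loading looks like; a sketch built on the documented gettextCatalog service, with the catalog path being an assumption:

    angular.module('myApp', ['gettext']).run(function (gettextCatalog) {
      // Switch the active language and fetch its catalog asynchronously.
      // The /languages/nl.json path is made up for this sketch.
      gettextCatalog.setCurrentLanguage('nl');
      gettextCatalog.loadRemote('/languages/nl.json');
    });

    In templates, strings are then marked with the translate directive, e.g. <h1 translate>Hello!</h1>.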

     

    angular-import-scope (New, github)

    Angular.JS structures your data in nested scopes. Which is great, except when page structure doesn’t work like that and you need the inner data on a much higher level (say in the navigation). With import-scope, you can import the scope of a lower-level ui-view somewhere higher up.


     

    angular-select2 (New, github)

    A select2 plugin for Angular.JS that actually works, with ng-options support.

     

    connect-body-rewrite (New, github, DailyJS coverage)

    A middleware plugin for Connect.JS that helps you transform request bodies on the fly, typically on the result of a proxied call. Used in connect-livereload-safe and connect-strip-manifest (see below).
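
    The basic shape is a pair of callbacks: one that decides whether a response should be rewritten, and one that transforms the body. A minimal sketch, following the accept/rewrite pattern from the plugin's README (double-check the option names against the repo):

    var connect = require('connect');
    var bodyRewrite = require('connect-body-rewrite');

    var app = connect();
    app.use(bodyRewrite({
      // Only rewrite HTML responses.
      accept: function (res) {
        var type = res.getHeader('content-type');
        return type && type.match(/text\/html/);
      },
      // Transform the body before it is sent on.
      rewrite: function (body) {
        return body.replace(/<\/body>/i, '<p>Rewritten!</p></body>');
      }
    }));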

     

    connect-livereload-safe (New, github)

    A Connect.JS middleware plugin to inject livereload. What’s wrong with connect-livereload? Well, I ran into some freak issues where modifying the DOM during load breaks Angular.JS. This plugin avoids that.

     

    connect-strip-manifest (New, github)

    Connect.JS middleware to strip the HTML5 app cache manifest. Makes it possible to disable the caching in development, without having weird tricks in your HTML file.

     

    grunt-git (Updated, github)

    A pile of new git commands supported, with a much improved test suite.

     

    grunt-unknown-css (New, github)

    Lets you analyze HTML files to figure out which classes don’t exist anymore in the CSS. Good for hunting down obsolete style declarations.

     

    grunt-usemin-uglifynew (New, github)

    A plugin for grunt-usemin that reuses existing .min.js files. This speeds up compilation of web apps and lets you use the minified builds provided by library authors.

     

    json-inspect (New, github)

    Get JSON context information out of a string. Lets you build text editors that are aware of the structure of the JSON underneath them.

    Suppose you have this:

    {
      "string": "value",
      "number": 3,
      "object": {
        "key": "val"
      },
      "array": [
        1,
        2
      ]
    }

    With json-inspect you can figure out what it means if the cursor is at a given position:

    var context = jsonInspect(myJson, 2, 6);
    // { key: 'string', start: 4, end: 21, value: 'value' }
    var context = jsonInspect(myJson, 9, 5);
    // { key: 'array.1', start: 93, end: 102, value: 2 }

     

    mapbox-dist (New, github)

    A compiled version of Mapbox.JS, which you can use with Bower.

     

    Nested Means (New, github)

    A data quantization scale that handles non-uniform distributions gracefully.

    Or in human language: a Javascript module that calculates how you can make a meaningful legend for colorizing a map based on long tail data. If you used a linear scale, you’d end up with two colors: maximum and minimum. Nested means tries to adjust the legend to show more meaningful data.


    A linear scale would map everything to white and dark-green. Nested means calculates a scale that maps to many colors.
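
    To make the idea concrete, here is a toy sketch of nested means in plain Javascript; this illustrates the concept only and is not the module's actual API:

    // Split the data at the mean, then split each half at its own mean.
    // Three break points give four classes that follow the distribution.
    function mean(values) {
      return values.reduce(function (sum, v) { return sum + v; }, 0) / values.length;
    }

    function nestedMeans(values) {
      var m = mean(values);
      var low = values.filter(function (v) { return v < m; });
      var high = values.filter(function (v) { return v >= m; });
      return [mean(low), m, mean(high)];
    }

    // Long-tail data: a linear scale would lump almost everything together.
    console.log(nestedMeans([1, 2, 2, 3, 4, 5, 8, 13, 400, 1200]));
    // -> [4.75, 163.8, 800]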

     

    node-trackchange (New, github)

    An experiment in using Harmony Proxies for tracking changes to objects. Here’s an example:

    var orig = {
      test: 123
    };
    // Create a wrapper that tracks changes.
    var obj = ChangeTracker.create(orig);
    // No changes initially:
    console.log(obj.__dirty); // -> []
    // Do things (mutate through the wrapper so the proxy sees the writes):
    obj.test = 1;
    obj.other = true;
    // Magical change tracking!
    console.log(obj.__dirty); // -> ['test', 'other']

    You can even wrap constructors. This ensures that each created instance automatically has change tracking built-in:

    var TestType = ChangeTracker.createWrapper(OrigType);
    var obj = new TestType();
    console.log(obj.__dirty);

     

    pofile (New, github)

    A gettext .po parser and serializer, usable in the browser and on the backend. The angular-gettext module is powered by this library.
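
    A minimal sketch of reading a catalog with pofile; PO.load and the items array are part of its documented surface, and the file name is made up:

    var PO = require('pofile');

    // Load and parse a catalog, then dump its translations.
    PO.load('nl.po', function (err, po) {
      if (err) { throw err; }
      po.items.forEach(function (item) {
        // msgstr is an array, to accommodate plural forms.
        console.log(item.msgid, '->', item.msgstr.join(' / '));
      });
    });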


    Blog it or it didn’t happen.

    February 28, 2014

    Freestyle Fiction 01 – part C

    Here is the third part of my little comics project “Freestyle Fiction”, of course once again all made from scratch using Krita on Linux and a good old Intuos3 graphics tablet.
    I used the 2.8 development version to keep tracking bugs and to check that my usual workflow was not broken before the 2.8 release (which should happen in a few days now).

    If you missed the first parts, go get those first here:
    PART A
    PART B

    And so the next pages:
    Freestyle Fiction 01 – part C – French
    Freestyle Fiction 01 – part C – English

    comics preview picture

    New house, no internet

    [My new office] I'm writing this from my new home office in our new house, as I listen to the wind howl and watch out the big windows to see lightning over the Sangre de Cristo mountains across the valley.

    We're nestled in the piñon-juniper woodlands of northern New Mexico. It's a big jump from living in Silicon Valley.

    [The house is nestled in pinon-juniper woodland] Coyotes roam the property, though we don't catch a glimpse that often, and I think I saw a grey fox the first morning we were here. These past few weeks, Sandhill cranes have been migrating far overhead, calling their wild cries; sometimes they catch a thermal (once right over our house) and circle for a while, gaining altitude for their trip north.

    And lightning -- summer thunderstorms were something I very much looked forward to (back in San Jose we got a thunderstorm maybe once every couple of years) but I didn't expect to see one so early. (I'm hoping the rain and wind will blow all the pollen off the junipers, so I can stop sneezing some time soon. Who knew juniper was such a potent allergen?)

    And the night sky -- for amateur astronomers it looks like heaven. We haven't had a telescope set up yet (we're still unpacking and sorting) but the Milky Way is unbelievable.

    [My new office, from the outside] We're in love with the house, too, though it's been neglected and will need a lot of work. It's by architect Bart Prince and it's all about big windows and open spaces. Here's me looking up at the office window from the garden down below.

    Of course, not everything is perfect. To start with, in case anyone's been wondering why I haven't been around online much lately, we have no internet to the house until the cable company gets a permit to dig a trench under the street. So we're doing light networking by mi-fi and making trips to the library to use their internet connection, and it may be a few more weeks yet before we have a connection of our own.

    I'm sure I'll miss the Bay Area's diversity of restaurants, though at the moment I'm stuffed with lamb, green chile and sopaipillas (a New Mexican specialty you can't really get anywhere else).

    And of course I'll miss some of the people and the geeky gatherings, living in a small town that isn't packed with Linux and Python and tech women's user groups like the Bay Area. Still, I'm looking forward to the adventure.

    And now, I'm off to the library to post this ...

    February 27, 2014

    Krita 2.8 (Beta3) is now in Krita Lime!

    As you may already know, for various reasons the release of Calligra 2.8, which should have happened last Wednesday, was postponed for at least a week. That is sad, but there was nothing we could do about it :(

    But the Krita team didn't want to let its users down! So we created our own packages based on the 2.8 release branch and published them on Krita Lime! Now you can install not only the 'krita-testing' package, which is based on the quite unstable master branch, but also the 'krita-2.8' package, which is almost exactly what is going to be published in a week as Krita 2.8! :)

    To install the stable Krita packages for any still supported Ubuntu distro (Precise, Quantal or Saucy) you need to do a couple of steps:
    1. Check that you don't have any original calligra or krita packages provided by your distribution or project-neon (if you have one, the installer will report a conflict)
    2. Add the PPA to repositories list:
      sudo add-apt-repository ppa:dimula73/krita
    3. Update the cache:
      sudo apt-get update
    4. Install Krita 2.8:
      sudo apt-get install krita-2.8 krita-2.8-dbg
    We've also prepared new packages for Windows users:
    For 64-bit: Krita_x64_2.7.9.12
    and 32-bit: Krita_x86_2.7.9.12

      2.8 Release Candidate

      Still labelled as 2.8 beta3, 2.7.9.12 is pretty close to what we hope to release next Wednesday:

      http://heap.kogmbh.net/downloads/krita_x64_2.7.9.12.msi
      http://heap.kogmbh.net/downloads/krita_x86_2.7.9.12.msi

      The XP package is being built and will be available later on.

      There is also an Ubuntu package available in the Krita Lime repository. Read all about it on Dmitry's blog!

      Compared to the .11 build, there are the following improvements:
      • Fix saving compositions (BUG:331310)
      • New erase toggle icon
      • Fix a memory leak when using the brightness/contrast curve (BUG:330479)
      • Save resolution info to OpenRaster files (BUG:321106)
      • Make handling custom input profiles more robust, also when updating (this should be the first 2.7.9.x release where you shouldn't need to remove the input settings folder)
      • Add a reset button to the input profile editor
      • Fix wraparound mode for the selection painting tools
      • Crop selection masks when activating the wraparound mode (BUG:330372)
      • Fix painting the cursor outline when there is no cursor outline (BUG:330570)
      • Make painting on high bit depth images much faster when the OpenGL canvas is enabled (BUG:331347)
      • Fix updates of the canvas rulers (BUG:330129)
      • Fix moving of a selection out of a layer (BUG:324373)
      • Fix saving indexed PNG files with current versions of libpng
      • Update to the latest G'Mic version and enable the G'Mic plugin on Windows
      • Make the G'Mic dialog resize when selecting a filter (fixes layout issues)
      • Add a crash handler for Windows that uploads minidumps (the website that goes with it is not done yet!) and offers a clean restart
      The final release is expected for next Wednesday.

      And if you're new to Krita, don't forget that the Muses Training DVD by Ramon Miranda is shipping now!

      February 26, 2014

      Call for submissions: Libre Graphics Magazine 2.3

      libre-type-collage-negative

      Since our first issue, back in 2010, Libre Graphics Magazine has paid close attention to fonts and type — after all, they have been basic ingredients of any magazine ever since the Sumerian clay tablets hit the streets of Uruk. In the last few years, we have witnessed a massive burst of creativity in the area of type design, with F/LOSS and libre fonts playing a major role (and we’re not just thinking of Lobster).

      For the upcoming issue 2.3, together with guest-editor Manuel Schmalstieg (Greyscale Press), we will zoom in on the Free & Libre type design scene, talk with the people behind the fonts, and reflect on the way they change our everyday lives.
      But let’s not forget, fonts are always bound to their real-world context — words, phrases, stories. We will take a bumpy ride through the unstable terrain of digital publishing, where open web standards (CSS, JS, EPUB) serve as a guiding light amongst the battling digital distribution giants.

      Finally, having examined “Collaboration” back in issue 1.3, we will perform another reality-check, to see how distributed workflows and software production paradigms have permeated (and enriched) our writing and design practices.

      Does your work fit into that raster?

      We welcome your submissions for articles, showcases of your work, interviews and anything else you might suggest. Proposals for submissions (no need to send us the completed work right away) can be sent to submissions@libregraphicsmag.com. The deadline for submissions is March 23, 2014.
      Libre Graphics Magazine is published under a free license (Creative Commons Attribution Share-Alike). All included submissions will also be published under CC-BY-SA (or a compatible license).

      Out now: Libre Graphics magazine issue 2.2, Gendering F/LOSS

      We’re very pleased to announce the release of issue 2.2 of Libre Graphics magazine. This issue, built around the theme “Gendering F/LOSS,” engages with discussions around representation and gendered work in Free/Libre Open Source Software and Free Culture. We invite you to buy the print edition of the issue or download the PDF from http://libregraphicsmag.com. We invite both potential readers and submitters to download, view, write, pull, branch and otherwise engage.

      Why Gendering F/LOSS?

      In the world of F/LOSS, and in the larger world of technology, debate rages over the under-representation of women and the frat house attitude occasionally adopted by developers. The conventional family lives of female tech executives are held up as positive examples of progress in the battle for gender equity. Conversely, pop-cultural representations of male developers are evolving, from socially awkward, pocket-protectored nerds to cosmopolitan geek chic. Both images mask the diversity of styles and gender presentations found in the world of F/LOSS and the larger tech ecology. Those images also mask important discussions about bigger issues: is it okay to construct such a strict dichotomy between “man” and “woman” as concepts; how much is our work still divided along traditional gender lines; is it actually enough to get more women involved in F/LOSS generally, or do we need to push for specific kinds of involvement; do we stop at women, or do we push for a more inclusive understanding of representation?

      This issue looks at some of the thornier aspects of gender in F/LOSS art and design. In discussing gendered work, the push for greater and greater inclusion in our communities, and representations of gender in our artistic practices, among others, we hope to add and amplify voices in the discussion.

      Gendering F/LOSS is the second issue in volume two of Libre Graphics magazine (ISSN 1925-1416). Libre Graphics magazine is a print publication devoted to showcasing and promoting work created with Free/Libre Open Source Software. We accept work about or including artistic practices which integrate Free, Libre and Open software, standards, culture, methods and licenses. To find out more about the purpose of Libre Graphics magazine, read our manifesto: http://libregraphicsmag.com/manifesto

      February 25, 2014

      Extend GNOME Videos with Lua

      As you've probably seen in my previous post, the new Videos UI has part of its interface focused on various channels from online sources, such as Blip.tv, or videos from the Guardian.

      Grilo recently grew support for Lua sources, which means you can write about 100 lines of Lua and easily integrate videos from an online source into Videos.

      The support isn't restricted to videos: GNOME Music, GNOME Photos and a number of other applications will also be able to be extended this way.

      Small tutorial by example

      Our example is that of a source that would fetch the list of Ogg Theora streams from Xiph.org's streaming directory.

      First, define the "source": the name is what will show up in the interface, supported_keys lists the metadata fields that we'll be filling in for each media item, and supported_media mentions we only show videos, so the source can be skipped in music players.


      source = {
        id = 'grl-xiph-example',
        name = 'Xiph Example',
        supported_keys = { 'id', 'title', 'url', 'type' },
        supported_media = 'video',
      }

      We'll then implement one of the source methods, to browse/list items in that source. First, we cheat a bit and tell the caller that we don't have any more items if it asks us to skip any. This is usual for sources with few items, as the front-end is unlikely to list items 2 by 2. If that's not the case, we fetch the page from the Xiph website and wait for the callback in fetch_cb.


      function grl_source_browse(media_id)
        if grl.get_options("skip") > 0 then
          grl.callback()
        else
          grl.fetch('http://dir.xiph.org/by_format/Ogg_Theora', 'fetch_cb')
        end
      end

      Here's the meat of the script, where we parse the web page into media items. Lua doesn't use regular expressions, but patterns. They're different, and I find them easier to grasp. Remember that the minus sign/dash is a reserved character, '%' is the escape character, and '()' enclose the match.

      We create a new media table for each one of the streams in the HTML we scraped, fill it with the metadata we said we'd provide in the source definition, and send it back to Grilo. The '-1' there is the number of items remaining in the list.

      Finally, we call grl.callback() without any arguments to tell it we're done fetching the items.


      function fetch_cb(results)
        -- Bail out early if the fetch failed.
        if not results then
          grl.callback()
          return
        end

        for stream in results:gmatch('<p class="stream%-name">(.-)</p>') do
          media = {}
          media.url = stream:match('href="(.-)" ')
          media.id = media.url
          media['type'] = 'video'
          media.title = stream:match('<a href.->(.-)</a>')

          grl.callback(media, -1)
        end

        grl.callback()
      end

      We're done! You just need to drop this file in ~/.local/share/grilo-plugins/grl-lua-factory, and you can launch Videos or the test application grilo-test-ui-0.2 to see your source in action.



      Why Lua?

      This screen scraping is what Lua is good at, with its powerful yet simple pattern matching. Lua is also easily embeddable, with very few built-in functions, which means we can have better control over the API plugins use, a low footprint, and all the benefits of an interpreted garbage-collected language.

      I hear heckles of "JavaScript" in the background, so I guess I had better address those as well. I think Lua's pattern matching is better than JavaScript regexes, and more importantly, Lua is more easily embeddable in big applications, because of its simplicity as a language and a VM. Basically, JavaScript (and the gjs implementation we'd likely have used in particular) is too powerful for our use case.

      Better sources

      It's obviously possible to avoid this screen scraping when the online source provides data in an easily parseable format (such as JSON, for which we have Lua bindings). That will be the case for the Guardian videos source (once we've figured out a minor niggle with the 50 items query limit) thanks to the Guardian's Open Data work.

      Hopefully it means that we'll have sources for the Wiki Commons Picture of the day (as requested by GNOME alumni Luis Villa) for use in the Background settings, or for Mediagoblin installations.

      Videos sidenote

      An aside, for those of you who have videos on a separate network storage, or not indexed by Tracker, there's a hidden configuration to show specific paths in the Channels tab in Videos.

      gsettings set org.gnome.totem filesystem-paths "['smb://myserver.local/videos/']"

      Epilogue

      I'm looking forward to seeing more Grilo sources. I feel that this Lua source lowers the barrier to entry, enabling the evening hacker to integrate their favourite media source into GNOME, which probably means that we'll need to think about parental controls soon! ;)

      Thanks to Victor Toso for his work on Lua sources, both during and after the Summer of Code, and Juan Suarez for his mentoring and patch reviewing.

      Fanart by Anastasia Majzhegisheva – 8


      Artwork by Anastasia Majzhegisheva, MyPaint.
      Get a full-size image.

      February 24, 2014

      Liberec

      The town I grew up in is even smaller when you look at it from above. Somehow I still love it.

      Watch on YouTube. Watch on Vimeo.

      GNOME 3.12 on Fedora 20

      I’ve finished building the packages for GNOME 3.11.90. I’ve done this as a Fedora 20 COPR. It’s probably a really good idea to test this in a VM rather than your production systems as it’s only had a small amount of testing.

      If it breaks, you get to keep all 132 pieces. It’s probably also not a good idea to be asking fedora-devel or fedoraforums for help when using these packages. If you don’t know how to install a yum repo these packages are not for you.

      Comments and suggestions welcome. Thanks.

      February 21, 2014

      2.5D Parallax Animated Photo Tutorial (using Free Software)

      I had been fiddling with creating these 2.5D parallax animated photos for quite a few years now, but there had recently been a neat post by Joe Fellows that brought it into the light again.

      The reason I had originally played with the idea is part of a long, sad story involving my wedding and an out-of-focus camcorder that resulted in my not having any usable video of my wedding (in 2008). I did have all of the photographs, though. So as a present to my wife, I was going to re-create the wedding with these animated photos (I’m 99% sure she doesn't ever read my blog - so if anyone knows her don’t say anything! I can still make it a surprise!).

      The rest of my GIMP tutorials can be found here:
      Getting Around in GIMP

      So I had been dabbling with creating these in my spare time over a few years, and it was really neat to see the work done by Joe Fellows for the World Wildlife Fund. Here is that video:


      He followed that up with a great video walking through how he does it:



      I'm writing here today to walk through the methods I had been using for a while to create the same effect, but entirely with Free/Open Source Software...




      Open Source Software Method

      Using nothing but Free/Open Source Software, I was able to produce the same effect here:



      Joe uses Adobe software to create his animations (Photoshop & After Effects). I neither have, nor want, Photoshop or After Effects.

      What I do have is GIMP and Blender!

      Blender Logo + GIMP Logo = Heart Icon

      What I also don’t have (but would like) is access to the World Wildlife Fund photo archive. Large, good photographs make a huge difference in the final results you’ll see.

      What I do have access to are some great old photographs of Ziegfeld Follies Girls. For the purposes of this tutorial we’ll use this one:

      Pat David Ziegfeld Follies Woman Reclining
      Click here to download the full size image.

      This is a long post.
      It’s long because I’ve written hopefully detailed enough steps that a completely new user of Blender can pick it up and get something working. For more experienced users, I'm sorry for the length.

      As a consolation prize, I’ve linked to my final .blend file just below if anyone wants to download it and see what I finally ended up with at the end of the tutorial. Enjoy!

      Here’s an outline of my steps if it helps...

      1. Pick a good image: find something with good fore/middleground and background separation (and clean edges).
      2. Think in planes: pay attention to how you can cut up the image into planes.
      3. Into GIMP
        1. Isolate woman as new layer: mask out everything except the subject you want.
        2. Rebuild background as separate layer (automatically or manually): rebuild the background to exclude your subject.
        3. Export isolated woman and background layer: export each layer as its own image (keep alpha transparency).
      4. Into Blender
        1. Enable “Import Images as Planes” Addon: enable this ridiculously helpful Addon.
        2. Import Images as Planes: import your image as separate planes using the Addon.
        3. Basic environment setup: some Blender basics, and set viewport shade mode to “Texture”.
        4. Add some depth: push the background image/plane away from the camera and scale to give depth.
        5. Animate woman image mesh: subdivide the subject plane a bunch, then add Shape Keys and modify.
        6. Animate camera: animate the camera position throughout the timeline as wanted.
        7. Animate mesh: set keyframes for the Shape Keys through the timeline.
        8. Render

      File downloads:
      Download the .blend file [Google Drive]
      These files are being made available under a Creative Commons Attribution, Non-Commercial, Share-Alike license (CC BY-NC-SA).
      You're free to use them, modify them, and share them as long as you attribute me, Pat David, as the originator of the file.

      Consider the Source Material

      What you probably want to look for if you are just starting with these are images with a good separation between a fore/middle ground subject and the background. This will make your first attempts a bit easier until you get the hang of what’s going on. Even better if there are mostly sharp edges differentiating your subject from the background (to help during masking/cutting).

      You’ll also want an image bigger than your rendering size (for instance, mine are usually rendered at 1280×720). This is because you want to avoid blowing up your pixels when rendering if possible. This will make more sense later, but for now just try to use source material that’s larger than your intended render size.

      Thinking in Planes

      The trick to pre-visualizing these is to consider slicing up your image into separate planes. For instance, with our working image, I can see immediately that it’s relatively simple. There is the background plane, and one with the woman/box:

      Pat David Ziegfeld Follies Woman Reclining Plane Example
      Simple 2-plane visualization of the image.

      This is actually all I did in my version of this in the video. This is nice because for the most part the edges are all relatively clean as well (making the job of masking an easier one).

      One of my previous tests had shown this idea of planes a little bit clearer:

      Yes, that’s my daughter at the grave of H.P. Lovecraft with a flower.

      So we’ve visualized a simple plan - isolate the woman and platform from the background. Great!

      Into GIMP!

      So I will simply open the base image in GIMP to do my masking and export each of the individual image planes. Remember, when we’re done, we want to have 2 images, the background and the woman/platform (with alpha transparency):

      Pat David Ziegfeld Follies Woman Reclining Background Clean
      What my final cleaned up backplate should look like.
      (click here to download the full size)

      Pat David Ziegfeld Follies Woman Reclining Clean Transparent
      My isolated woman/platform image.
      (click here to download the full size)

      Get Ready to Work

      Once in GIMP I will usually duplicate the base image layer a couple of times (this way I have the original untouched image at the bottom of layer stack in case I need it or screw up too badly). The top-most layer is the one I will be masking the woman from. The middle layer will become my new background plate.

      Isolating the Woman

      To isolate the woman, I’ll need to add a Layer Mask to the top-most layer (if you aren’t familiar with Layer Masks, then go back and read my previous post on them to brush up).

      I initialize my layer mask to White (full opacity). Now anywhere I paint black on my layer mask will become transparent on this layer. I also usually turn off the visibility of all the other layers when I am working (so I can see what I’m doing - otherwise the layers beneath would show through and I wouldn’t know where I was working). This is what my layer dialog looks like at this point:


      Masking the Woman

      Some of these headings are beginning to sound like book titles (possibly romance?) “Isolating the Woman”, “Masking the Woman”...

      There are a few different ways you can proceed at this point to isolate the woman. Really it depends on what you’re most comfortable with. One way is to use Paths to trace out a path around the subject. Another way is to paint directly on the Layer Mask.

      All of them suck.

      Sorry. There is no fast and easy method of doing this well. This is also one of the most important elements to getting a good result, so don’t cheap out now. Take your time and pull a nice clean mask, whatever method you choose.

      For this tutorial, we can just paint directly onto our Layer Mask. Check to make sure the Layer Mask is active (white border around it that you won't be able to see because the mask is white) in the Layer palette, and make sure your foreground color is set to Black. Then it’s just a matter of choosing a paintbrush you like, and start painting around your subject.

      I tend to use a simple round brush with a 75% hardness. I'll usually start painting, then take advantage of the Shift key modifier to draw straight lines along my edges. For finer details I'll drop down into a really small brush, and stay a bit larger for easier things.

      To illustrate, here’s a 3X speedrun of me pulling a quick mask of our image:



      To erase the regions that are left, I'll usually use the Fuzzy Select Tool, grow the selection by a few pixels, and then Bucket Fill that region with black to make it transparent (you can see me doing it at about 2:13 in the video).

      Now I have a layer with the woman isolated from the background. I can just select that layer and export it to a .PNG file to retain transparency.

      File → Export

      Name the file with a .png extension, and make sure that the “Save color values from transparent pixels” is checked to save the alpha transparency.

      Rebuilding the Background

      Now that you have the isolated woman as an image, it’s time to remove her and the platform from the background image to get a clean backplate. There are a few ways to go about this: the automated way, or the manual way.

      Automated Background Rebuild

      The automated way is to use an Inpainting algorithm to do the work for you. I had previously written about using the new G'MIC patch-based Inpainting algorithm, and it does a pretty decent job on this image. If you want to try this method you should first read up about using it here (and have G'MIC installed of course).

      To use it in this case was simple. I had already masked out the woman with a layer mask, so all I had to do was Right-Click on the layer mask, and choose “Mask to Selection” from the menu.


      Then just turn on the visibility of my “Background” layer (and toggle the visibility of the isolated woman layer off) and activate my “Background” layer by clicking on it.

      Then I would grow the selection by a few pixels:

      Select → Grow

      I grew it by about 4 pixels, then sharpened the selection to remove anti-aliasing:

      Select → Sharpen

      Finally, make sure my foreground color is pure red (255, 0, 0), and bucket fill that selection. Now I can just run the G'MIC Inpainting [patch-based] against it to Inpaint the region:

      Filters → G'MIC
      Repair → Inpaint [patch-based]

      Let it run for a while (it’s intensive), and in the end my layer now looks like this:


      Not bad at all, and certainly usable for our purposes!

      If I don’t want to use it as is, it’s certainly a better starting point for doing some selective healing with the Heal Tool to clean it up.

      Manual Background Rebuild

      Manually is exactly as it sounds. We basically want to erase the woman and platform from the image to produce a clean background plate. For this I would normally just use a large radius Clone Tool for mostly filling in areas, and then the Heal Tool for cleaning it up to look smoother and more natural.


      It doesn't have to be 100% perfect, remember. It only needs to look good just behind the edges of your foreground subjects (assuming the parallax isn’t too extreme). Not to mention one of the nice things about this workflow is that it’s relatively trivial later to make modifications and push them into Blender.

      Rinse & Repeat

      For this tutorial we are now done. We’ve got a clean backplate and an isolated subject that we will be animating. If you wanted to get a little more complex just continue the process starting with the next layer closer to the camera. An example of this is the last girl in my video, where I had separated her from the background, and then her forearm from her body. In that case I had to rebuild the image of her chest that was behind her forearm to account for the animation.


      Example of a three-layer separation (notice the rebuilt dress texture)

      Into Blender

      Now that we have our source material, it’s time to build some planes. Actually, this part is trivially easy thanks to the Import Images as Planes Blender addon.

      The key to this addon is that it will automatically import an image into Blender, and assign it to a plane with the correct aspect ratio.

      Enable Import Images as Planes

      This addon is not enabled by default (at least in my Blender), so we just need to enable it. You can access all of the addons by first going to User Preferences in Blender:


      Then into the Addons window:


      I find it faster to get what I need by searching in this window for “images”:


      To enable the Addon, just check the small box in the upper right corner of the Addon. Now you can go back into the 3D View.

      Back in the 3D View, you can also select the default cube and lamp (Shift - Right Click), and delete them (X key). (Selected objects will have an orange outline highlighting them).

      Import Images as Planes

      We can now bring in the images we exported from GIMP earlier. The import option is available in:

      File → Import → Images as Planes


      At this point you’ll be able to navigate to the location of your images and can select them for import (Shift-Click to select multiple):


      Before you do the import though, have a look at the options that are presented to you (bottom of the left panel). We need to turn on a couple of options to make things work how we want:


      For the Import Options we want to Un-Check the option to “Align Planes”. This will import all of the image planes already stacked with each other in the same location.

      Under Material Settings we want to Check both Shadeless and Use Alpha so our image planes will not be affected by lamps and will use the transparency that is already there. We also want to make sure that Z Transparency is pressed.

      Everything else can stay at their default settings.

      Go ahead and hit the “Import Images as Planes” button now.

      Some Basic Setup Stuff

      At this point things may look less than interesting. We’re getting there. First we need to cover just a few basic things about getting around in Blender for those that might be new to it.

      In the 3D window, your MouseWheel controls the zoom level, and your Middle Mouse button controls the orbit. Right-Click selects objects, and Left-Click will place the cursor. Shift-Middle Click will allow you to pan.

      At this point your image planes should already be located in the center of the view. Go ahead and roll your MouseWheel to zoom into the planes a bit more. You should notice that they just look like boring gray planes:


      I thought you said we were importing images?!

      To see what we’re doing in 3D View, we’ll need to get Blender to show the textures. This is easily accomplished in the toolbar for this view by changing the Viewport Shading:


      Now that’s more like it!

      At this point I personally like to get my camera to an initial setup as well, so zoom back out and Right-Click on your camera:


      We want to reset all of the default camera transformations and rotations by setting those values to 0 (zero). This will place your camera at the origin facing down.

      Now change your view to Camera View (looking through the actual camera) by hitting zero (0) on your keyboard numberpad (not 0 along the top of your alpha keys).


      Yes, this zero, not the other one!

      You’ll be staring at a blank gray viewport at this point. All we have to do now is move the camera back (along the Z-axis), until we can see our working area. I like to use the default Blender grid as a rough approximation of my working area.

      To pull the camera back, hit the G key (this will move the active object), and then press the Z key (this will constrain movement along the Z-axis). Slowly pull your mouse cursor away from the center of the screen, and you should see the camera view getting further away from your image planes. As I said, I like to use the default grid as a rough approximation, so I’ll zoom out until I am just viewing the width of the grid:


      I’ve also found that working at small scales is a little tough, so I like to scale my image planes up to roughly match my camera view/grid. So we can select all the image planes in the center of our view by pressing the B key and dragging a rectangle over the image planes.

      To scale them, press the S key and move the mouse cursor away from the center again. Adjust until the images just fill the camera view:


      Image planes scaled up to just fit the camera/grid

      This will make the adjustments a little easier to do. Now we’re ready to start fiddling with things!

      Adding Some Depth

      What we have now is all of our image planes in the exact same location. What we want to do is to offset the background image further away from the camera view (and the other planes).

      Right-click on your image planes. If you click multiple times you will cycle through each object under your cursor (in this case between the background/woman image planes). With your background image plane selected, hit the G key to move it, and the Z key again to constrain movement along the Z-axis. (If you find that you’ve accidentally selected the woman image plane, just hit the ESC key to escape out of the action).

      This time you’ll want to move the mouse cursor towards the center of the viewport to push the background further back in depth. Here’s where I moved mine to:


      We also need to scale that image plane back up so that its apparent size is similar to what it was before we pushed it back in depth. With the background image plane still selected, hit the S key and pull the mouse away from the center again to scale it up. Make it around the same size as it was before (a little bigger than the width of the camera viewport):


      Keep in mind that the further back the background plane is, the more pronounced the parallax effect will be. Use a relatively light touch here to maintain a realistic sense of depth.

      What’s neat at this point is that if we were not going to animate any of the image planes themselves, we would be about done. For example, if you select the camera again (Right-click on the camera viewport border) you can hit the G key and move the camera slightly. You should be able to clearly see the parallax effect of the background being behind the woman.




      Animating the Image Plane

      After Effects has a neat tool called “Puppet Tool” that allowed Joe to easily deform his image to appear animated. We don’t have such a tool exactly in Blender at the moment, but it’s trivial to emulate the effects on the image plane using Shape Keys.

      What Shape Keys does is simple. You will take a base mesh, add a Shape Key, and then deform the mesh in any way you’d like. Then you can animate the Shape Key deformation of the mesh over time. Multiple Shape Keys will blend together.

      We are going to use this function to animate our woman (as opposed to some much more complex animation abilities in Blender).

      Before we can deform the woman image plane, though, we need a good mesh to deform. At the moment the woman plane contains only 4 vertices in the mesh. We are going to make this much denser before we do anything else.

      We want to subdivide the image plane with the woman. So Right-click to select the woman image plane. Then hit the Tab key to change into edit mode. All of the vertices should already be active (selected), they will all be highlighted if they are (if not, hit the A key to toggle selection of all vertices until they are):


      What we want to do is to Subdivide the mesh until we get a desired level of density. With all of the vertices in the plane selected, hit the W key and choose Subdivide from the menu. Repeat until the mesh is sufficiently dense for you. In my case, I subdivided six times and the result looks like this:


      If you’ve got a touch of OCD in you, you might want to reduce the unused vertices in the mesh. This is not necessary, but might make things a bit cleaner to look at. To remove those vertices, first hit the A key to de-select all the vertices. Then hit the C key to circle-select. You should see a circle select region where your mouse is. You can increase/decrease the size of the circle using your MouseWheel. Just click now on areas that are NOT your image to select those vertices:


      Select all the vertices in a rough outline around your image, and press the X key to invoke the Delete menu. You can just choose Vertices from this menu. You should be left with a simpler mesh containing only your woman image. Hit the Tab key again to exit Edit mode.

      Here is what things look like at the moment:


      To clear a little space while I work, I am going to hide the Transform and Object Tools palettes from my view. They can be toggled on/off by pressing the N key and T key respectively.

      I am also going to increase the size of the Properties panel on the right. This can be done by clicking and dragging on its edge (the cursor will change to a resize cursor):


      We will want to change the Properties panel to show the Object Data for the woman image plane. Click on that icon to show the Object Data panel. You will see the entry for Shape Keys in this panel.

      We want to add a new Shape Key to this mesh, so press the + button two times to add two new keys to this mesh (one key will be the basis, or default position, while the other will be for the deformation we want). After doing this, you should see this in the Shape Keys panel:


      Now, the next time we are in Edit mode for this mesh, it will be assigned to this particular Shape Key. We can just start editing vertices by hand now if we want, but there’s a couple of things we can do to really make things much easier.

      Proportional Editing Mode

      We should turn on Proportional Editing Mode. This will make the deformation of our mesh a bit smoother by allowing our changes to affect nearby vertices as well. So in your 3D View press the Tab key again to enter edit mode.


      Once in Edit mode, there is a button for accessing Proportional Editing Mode. Once here, just click on Enable to turn it on.

      To test things out, you can Right-click to select a vertex in your mesh, and use the G key to move it around. You should see nearby vertices being pulled along with it. Rolling your MouseWheel up or down will increase/decrease the radius of the proportional pull. Remember, to get out of the current action you can just hit the ESC key on your keyboard to exit without making any changes.

      If you really screw up and accidentally make a mess of your mesh, it’s easy to get back to the base mesh again. Just hit Tab to get out of Edit mode, then in the Shape Keys panel you can hit the “−” button to remove that shape key. Just don’t forget to hit “+” again to add another key back when you want to try again.

      Pivot Point

      Blender lets you control where the current pivot point of any modifications you make to the mesh should be. By default it will be the median point of all selected objects, which is fine. You may occasionally want to specify where the point of rotation should be manually.


      The button for adjusting the pivot point is in the toolbar of the 3D View. I’ll usually only use Median Point or 3D Cursor when I'm doing these. Remember: Left-clicking the mouse in 3D View will set the cursor location. You can leave it at Median Point for now.

      To Animate!

      Ok, now we can actually get to the animating of the mesh. We need to decide what we’d like the mesh to look like it’s doing first, though. For this tutorial let’s do a couple of simple animations to get a feel for how the system works. I'm going to focus on changing two things.

      First we will rotate the woman's head slightly down from its base position, and second we will rotate her arm down slightly as well.

      Let’s start with rotating her head. I will use the circle-select in the 3D View again to select a bunch of vertices in the center of her head (no need to exactly select all the vertices all the way around):


      In the 3D View, press the R key to rotate those vertices. With Proportional Editing turned on you should see not only your selected vertices, but nearby vertices also rotating. While in this operation, the mousewheel will adjust the radius of the proportional editing influence (the circle around my rotation in my screenshot shows where my radius was set):


      Remember: hit the ESC key if you need to cancel out of any operation without applying anything. Go ahead and rotate the head down a bit until you like how it looks. When you get it where you’d like it, just Left-click the mouse to set the rotation. Subtle is the name of the game here. Try small movements at first!

      Now let’s move on to rotating the arm a bit. Hit the A key to de-select all the vertices, and choose a bunch of vertices along the arm (again, I use the circle-select C key to select a bunch at once easily):


      If you end up selecting a couple of vertices you don’t want, remember that you can Shift + Right-click to toggle adding/removing vertices to the selection set. For example, in my image above I didn't want to select any vertices that were too close to her face to avoid warping it too much. I also went ahead and made sure to select as many vertices around the arm as I could.

      I also Left-clicked at the location you see in my screenshot to place the cursor roughly at her shoulder. For the arm I also changed the Pivot Point to be the 3D Cursor because I want the arm to pivot at a natural location.

      Again, hit the R key to begin rotating the arm. If you find the rotation pulls vertices from too far away and modifies them, scroll your mousewheel to decrease the radius of the proportional editing. In my example I had the radius of influence down very low to avoid warping the woman's face too much.

      As before, rotate to where you like it, and Left-click the mouse when you’re happy.


      Finally, you can test how the overall mesh modifications will look with your Shape Key. Hit the Tab key to get out of Edit Mode and back into Object Mode. All of your mesh modifications should snap back to what they were before you changed anything.

      Don’t Panic.

      What has happened is that the mesh is currently set so that the Shape Key we were modifying has a zero influence value right now:


      The Value slider for the shape key is 0 right now. If you click and drag in this slider you can change the influence of this key from 0 - 1. As you change the value you should see your woman mesh deform from its base position at 0, up to its fully deformed state at 1. Neat!

      Once we’re happy with our mesh modifications, we can now move on to animating the sequence to see how things look!

      Animating

      So what we now want to do is to animate two different things over the course of time in the video. First we want to animate the mesh deformation we just created with Shape Keys, and second we want to animate the movement of the camera through our scene.

      If you have a look just below the 3D View window, you should be seeing the Timeline window:


      The Timeline window at the bottom

      What we are going to do is to set keyframes for our camera and mesh at the beginning and end of our animation timeline (1-250 by default).

      We should already be on the first frame by default, so let’s set the camera keyframe now. In the 3D View, Right-click on the camera border to select it (will highlight when selected). Once selected, hit the I key to bring up the keyframe menu.


      You’ll see all of the options that you can keyframe here. The one we are interested in is the first, Location. Click it in the menu. This tells Blender that at frame 1 in our animation, the camera should be located at this position.

      Now we can define where we’d like our camera to be at the end of the animation. So we should move the frame to 250 in the timeline window. The easiest way to do this is to hit the button to jump to the last frame in the range:


      This should move the current frame to 250. Now we can just move the location of our camera slightly, and set a new keyframe for this frame. I am going to just move the camera straight up slightly:


      Once positioned, hit the I key again and set a Location keyframe.

      At this point, if you wanted to preview what the animation would look like you can press Alt-A to preview the animation so far (hit ESC when you’re done).

      Now we want to do the same thing, but for the Shape Keys to deform over time from the base position to the deformed position we created earlier. In the Timeline window, get back to frame 1 by hitting the jump to first frame in range button:


      Once back at frame 1, take a look at the Shape Keys panel again:


      Make sure the value is 0, then Right-click on the slider and choose the first entry, Insert Keyframe:


      Just like with the camera, now jump back to the last frame in the range. Then set the value slider for the Shape Keys to 1.000. Then Right-click on the Value slider again, and insert another keyframe.

      This tells Blender to start the animation with no deformation on the mesh, and at the end to transition to full deformation according to the Shape Key. Conveniently, Blender will calculate all of the vertex locations between the two keyframes for us for a smooth transition.

      As before, now try hitting Alt-A to preview the full animation.

      Congratulations, you made it!

      Getting a Video Out

      If you’re happy with the results, then all that’s left now is to render out the video! There are a few settings we need to specify first, though. So switch over to the Render tab in Blender:


      The main settings you’ll want to adjust here are the resolution X & Y and the frame rate. I rendered out at 1280×720 at 100% and Frame Rate of 30 fps. Change your settings as appropriate.

      Finally, we just need to choose what format to render out to...


      If you scroll down the Render panel you’ll find the options for changing the Output. The first option allows you to choose where you’d like the output file to get rendered to (I normally just leave it as /tmp - it will be C:\tmp on Windows). I also change the output format to a movie rendering type. In my screenshot it shows “H.264”, by default it will probably show “PNG”. Change it to H.264.

      Once changed, you’ll see the Encoding panel become available just below it. For this test you can just click on the Presets spinner and choose H264 there as well.

      Scroll back up to the top of the Render panel, and hit the big Animation button in the top center (see the previous screenshot).

      Go get a cup of coffee. Take a walk. Get some fresh air. Depending on the speed of your machine it will take a while...

      Once it’s finished, in your tmp directory you’ll find a file called 0001-250.avi. Fire it up and marvel at your results (or wince). Here’s the result of mine directly from the above results:



      Holy crap, we made it to the end. That was really, really long.

      I promise, though, that it just reads long. If you’re comfortable moving around in Blender and understand the process, this takes about 10-15 minutes to do once you get your planes isolated.

      Well, that’s about it. I hope this has been helpful, and that I didn’t muck anything up too badly. As always, I’d love to see others results!

      [Update]
      Reader David notes in the comments that if the render results look a little ‘soft’ or ‘fuzzy’, increasing the Anti-Aliasing size can help sharpen things up a bit (it’s located on the render panel just below render dimensions). Thanks for the tip David!

      Help support the site! Or don’t!
      I’m not supporting my (growing) family or anything from this website. Seriously.
      There is only one reason I am writing these tutorials and posts:
      I love doing it.
      Technically there is a second reason: to give back to the community. Others before me were instrumental in helping me learn things when I first got started, and I’m hoping to pay it forward here.

      If you want to visit an ad, or make a donation, or even link/share my content, I would be absolutely grateful (and tickled pink). If you don’t it’s not going to affect me writing and posting here one bit.

      I’ll keep writing, and I’ll keep it free.
      If you get any use out of this site, I only ask that you do one thing:
      pay it forward.


      February 20, 2014

      Blender 2.70 test build

      The test build for the upcoming Blender 2.70 release is now available for download.

      This is a build for users to test and find issues before we make the final release. If you find bugs, please report them to our bug tracker.

      New features include initial support for volumetrics in Cycles, and faster rendering of hair and textures. The motion tracker now supports weighted tracks and has improved planar tracking. For mesh modeling there are new Laplacian deform and wireframe modifiers, along with more control in the bevel tool. The game engine now supports object levels of detail.

      The first results from the new user interface project are also in this release, with dozens of changes to make the interface more consistent and powerful. This is also the first release of the multithreaded dependency graph, which makes modifier and constraint evaluation faster in scenes with multiple objects.

      For more details, see the (work in progress) release notes for Blender 2.70.

      Playkot uses Krita, Blender and Gimp to create SuperCity

      Free and open graphics software is taking its place in more and more game and vfx studios. Over the past year, Playkot's Paul Geraskin and his colleagues have been using Krita, Blender and Gimp to work on the assets for their latest Facebook game, Super City ("Epic city builder with amazing visuals"), which was released yesterday!


      Playkot, based in St. Petersburg, has been working on Super City for the past two years. To quote Paul Geraskin, "It's been really nice to have tools like this! We used Blender with the internal render, then Cycles appeared and we moved to it. For texturing we first used Gimp but since December 2012 we fully moved to Krita as it's a more powerful tool for painting textures. We hope to see more Krita and Blender integration with sharing layers and easy painting!"

      He continues, "Super City first went live in Russian social networks like www.vk.com or www.odnoklassiki.ru, then in Korea and Japan. Over four million gamers have already enjoyed art created with Krita and Blender!"

      As for Krita, Paul has been a great member of our community, helping out with testing, feature requests and even a patch for the OpenGL canvas shaders! The awesome SuperCity art has become really familiar to the Krita development team over the past year!

      If you're on facebook, you can play SuperCity, too. Have fun!

      How can one catch OS X sandbox violations in a debugger?

      Dear lazyweb,

      Do you know of any way to catch OS X sandbox violations as they happen (in a huge program like LibreOffice where you don't have a clue what is going on in all places in the codebase) in a debugger (either gdb on the command line, or lldb under Xcode)? The trick using Dtrace in https://devforums.apple.com/message/874181#874181 does not seem to work in 10.9.1 at least.

      February 19, 2014

      Greenlit!

      Just twelve days ago, the Krita team at KO GmbH submitted a Greenlight application on Steam... And today we got the news that we're greenlit already! Now the real work is going to start, to make Krita ready for release on Steam!

      The Krita on Steam campaign got many thousands of votes from interested users, and more than two hundred comments, almost all of them very, very positive. And the interest Krita garnered during the Greenlight campaign has already brought us a huge number of new users, too! Thanks everyone for the support, for the comments, for the suggestions, for the downloads!

      Oh -- and as an aside: two new builds of Krita Desktop to test, which you can download from kritastudio.com.

      Fixes and enhancements:

      • Don't mess up the interface when reloading a document
      • Saving custom canvas shortcuts works again
      • Open brush presets as images to make the icon editable
      • Fix converting the color space of empty layers
      • Support the svg:exclusion blending mode when loading openraster images
      • Fix enabling the add/remove buttons for the favorite presets manager
      • Do not pick opacity when picking a color
      • Add a (hidden) option to invert the middle-click drag zoom
      • Make canvas-only mode work in Krita Gemini
      • Make “get common colors from an image” work on Windows
      • Make it possible to use multiple sensors with different curves in one brush preset option (like making opacity react to pressure and rotation of the stylus)
      • Fix the outline cursor for the experiment brush
      • Use a new and nicer icon for the delete layer button
      • Fix loading of XCF (GIMP) files on Windows
      • Fix merge with layer below
      • Do not crash when applying a filter on a mask
      • Fix random rotation of brushes

      February 18, 2014

      Libre Graphics magazine on Hacker Public Radio

      FOSDEMhackerPublicRadioInterview

      While we were at FOSDEM, we had a chat with Hacker Public Radio. Listen to the interview here: http://hackerpublicradio.org/eps.php?id=1447

      February 15, 2014

      Deploying Node.JS the modern way, everywhere

      In a shocking move, Ubuntu decided to follow the long-strung-out decision of Debian to adopt systemd as their init system. This is a good thing: everyone can now get together and work on one great solution. I applaud them for making this move.

      It’s also a good thing for those who depend on systemd for having a fantastic modern deployment environment: soon you’ll be able to depend on systemd everywhere, regardless of the distribution being used.

      In this light it seems like a good idea to shamelessly mention the write-up I wrote a while back: Deploying Node.js with systemd. Everything in there is still highly relevant and relying on systemd for deploying Node.JS is still (in my humble opinion) one of the best possible setups.
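      To give a flavour of why this is such a nice setup: the heart of it is a tiny unit file. This one is an illustrative sketch (the app path, service name and user are made up), see the write-up for the real details:

      [Unit]
      Description=My Node.JS application
      After=network.target

      [Service]
      ExecStart=/usr/bin/node /srv/myapp/server.js
      Restart=always
      User=myapp
      Environment=NODE_ENV=production

      [Install]
      WantedBy=multi-user.target

      Drop it in /etc/systemd/system/myapp.service, then enable and start it with systemctl, and you get supervision, automatic restarts and logging via the journal for free.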

      Good times for Node.JS developers that also need to administer infrastructure.

      February 12, 2014

      In the bag!

      Yesterday, the Muses DVD arrived from the printers. Today, all pre-orders got stuffed in jiffy bags and sent on to the post office! Irina Rempt did all the hard work of creating address labels, stuffing the bags, pasting on the stamps, and then we bagged the bags in Official Post Office bags! So, if you had ordered one, it's on its way!

      And if you haven't ordered one yet...

      Support Krita development by getting your very own copy!

      Abort a git commit –amend

      The situation

      You hack on a patch, add files to the index and with a knee-jerk reaction do:

      git commit --amend

      (In fact, I do this in my editor with the vim-fugitive plug-in, but it also happened in the terminal). For the commit message git places you in your text editor. If you quit, your changes are merged with the last commit. Being aware of your trapped situation, what do you do?

      The solution

      Simply delete the commit message (up to where the comments start with #). Git will see it as a commit with an empty message and abort the commit, and therefore the amend.


      February 11, 2014

      Learning to let go

      There are times in a long career when you have to turn your back on what’s gone before. In work, this is easy. People stop giving you a paycheck, you stop turning up to work. And yet, some of my best friends are people I worked with 10 years ago.

      In open source, it’s harder. You build relationships, you grow emotional attachments to projects you work on.

      When life moves you on from a project, you stay subscribed to mailing lists, you add mail filters to move them to a folder you read less and less frequently.

      When you hit a threshold where you no longer consider yourself a developer or contributor, you keep watching from afar, and when the project takes a direction you disagree with, that you know you would have argued against, you feel a little sadness.

      After a while, this build-up of guilt weighs you down. But letting go is hard to do.

      I’m learning. Trying to get better at letting go. The next generation needs to find their own way. It’s liberating and saddening in equal measure. Old friends: we will stay friends, but I need to trust you to make your own way.