July 27, 2015

3D printing Poe

I helped print this statue of Edgar Allan Poe, through “We the Builders”, who coordinate large-scale crowd-sourced 3D print jobs:

Poe's Face

You can see one of my parts here on top, with “-Kees” on the piece with the funky hair strand:

Poe's Hair

The MakerWare software I run on Ubuntu works well, but I wish they would sign their repositories correctly. Even if I fetch their key over non-SSL, as their Ubuntu/Debian instructions recommend, it still doesn’t match the packages:

W: GPG error: http://downloads.makerbot.com trusty Release: The following signatures were invalid: BADSIG 3D019B838FB1487F MakerBot Industries dev team <dev@makerbot.com>

And it’s not just my APT configuration:

$ wget http://downloads.makerbot.com/makerware/ubuntu/dists/trusty/Release.gpg
$ wget http://downloads.makerbot.com/makerware/ubuntu/dists/trusty/Release
$ gpg --verify Release.gpg Release
gpg: Signature made Wed 11 Mar 2015 12:43:07 PM PDT using RSA key ID 8FB1487F
gpg: requesting key 8FB1487F from hkp server pgp.mit.edu
gpg: key 8FB1487F: public key "MakerBot Industries LLC (Software development team) <dev@makerbot.com>" imported
gpg: Total number processed: 1
gpg:               imported: 1  (RSA: 1)
gpg: BAD signature from "MakerBot Industries LLC (Software development team) <dev@makerbot.com>"
$ grep ^Date Release
Date: Tue, 09 Jun 2015 19:41:02 UTC

Looks like they’re updating their Release file without updating the signature file. (The signature is from March, but the Release file is from June. Oops!)

© 2015, Kees Cook. This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 License.

Basic Color Curves

An introduction and simple color grading/toning

Color has this amazing ability to evoke emotional responses from us, from the warm glow of a sunny summer afternoon to a cool, refreshing early evening in fall. We associate colors with certain moods, places, feelings, and memories (consciously or not).

Volumes have been written on color, and I am in no way even remotely qualified to speak on it. So I won’t.

Instead, we are going to take a look at the use of the Curves tool in GIMP. Even though GIMP is used to demonstrate these ideas, the principles are generic to just about any RGB curve adjustments.

Your Pixels and You

First there’s something you need to consider if you haven’t before, and that’s what goes into representing a colored pixel on your screen.

PIXLS.US House Zoom Example Open up an image in GIMP.
PIXLS.US House Zoom Example Now zoom in.
PIXLS.US House Zoom Example Nope - don’t be shy now, zoom in more!
PIXLS.US House Zoom Example Aaand there’s your pixel. So let’s investigate what goes into making your pixel.

Remember, each pixel is represented by a combination of 3 colors: Red, Green, and Blue. In GIMP (currently at 8-bit), that means that each RGB color can have a value from 0 - 255, and combining these three colors with varying levels in each channel will result in all the colors you can see in your image.

If all three channels have a value of 255 - then the resulting color will be pure white. If all three channels have a value of 0 - then the resulting color will be pure black.

If all three channels have the same value, then you will get a shade of gray (128,128,128 would be a middle gray color for instance).

So now let’s see what goes into making up your pixel:

GIMP Color Picker Pixel View The RGB components that mix into your final blue pixel.

As you can see, there is more blue than anything else (it is a blue-ish pixel, after all), followed by green, then a dash of red. If we were to change the values of each channel but kept the ratio between Red, Green, and Blue the same, then we would keep the same color and just lighten or darken the pixel by some amount.
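A rough sketch of that idea in Python (the function name and sample values are my own, chosen just for illustration): scaling every channel by the same factor preserves the ratios, and hence the hue, while changing brightness.

```python
def scale_pixel(rgb, factor):
    """Lighten (factor > 1) or darken (factor < 1) an 8-bit pixel
    while keeping the ratio between R, G, and B the same."""
    return tuple(min(255, round(c * factor)) for c in rgb)

# A blue-ish pixel: a dash of red, some green, mostly blue.
pixel = (40, 80, 160)
print(scale_pixel(pixel, 1.5))  # lighter, same hue: (60, 120, 240)
print(scale_pixel(pixel, 0.5))  # darker, same hue: (20, 40, 80)
```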

Curves: Value

So let’s leave your pixel alone for the time being, and actually have a look at the Curves dialog. I’ll be using this wonderful image by Eric from Flickr.

Hollow Moon by Eric qsimple Flickr Hollow Moon by qsimple/Eric on Flickr. (cbna)

Opening up my Curves dialog shows me the following:

GIMP Base Curves Dialog

We can see that I start off with the curve for the Value of the pixels. I could also use the drop down for “Channel” to change to red, green or blue curves if I wanted to. For now let’s look at Value, though.

In the main area of the dialog I am presented with a linear curve, behind which I will see a histogram of the value data for the entire image (showing the amount of each value across my image). Notice a spike in the high values on the right, and a small gap at the brightest values.

GIMP Base Curves Dialog Input Output

What we can do right now is adjust the values of each pixel in the image using this curve. The best way to visualize it is to remember that the bottom range from black to white represents the current value of the pixels (the input), and the left range is the value they will be mapped to (the output).

So to show an example of how this curve will affect your image, suppose I wanted to remap all the values in the image that were in the midtones, and to make them all lighter. I can do this by clicking on the curve near the midtones, and dragging the curve higher in the Y direction:

GIMP Base Curves Dialog Push Midtones

What this curve does is take the values around the midtones and push them to be much lighter than they were. In this case, values around 128 were re-mapped to be closer to 192.

Because the curve type is set to Smooth, all the tones surrounding my point get pulled in the same direction (this makes for a smoother fall-off, as opposed to an abrupt change at one value). Because there is only a single point on the curve right now, all values will be pulled higher.

Hollow Moon Example Pushed Midtones The results of pushing the midtones of the value curve higher (click to compare to original).

Care should be taken when fiddling with these curves to not blow things out or destroy detail, of course. I only push the curves here to illustrate what they do.
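To make the remapping concrete, here is a toy sketch in Python. The function and control points are my own illustration; note that GIMP interpolates with smooth splines, while this uses straight lines between points.

```python
def apply_curve(value, points):
    """Map an input value (0-255) through a curve defined by control
    points [(input, output), ...]. Piecewise-linear interpolation is
    used here as a stand-in for GIMP's smooth spline."""
    points = sorted(points)
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        if x0 <= value <= x1:
            t = (value - x0) / (x1 - x0)
            return round(y0 + t * (y1 - y0))
    return value  # outside the defined range: leave unchanged

# Push the midtones: 0 -> 0, 128 -> 192, 255 -> 255.
curve = [(0, 0), (128, 192), (255, 255)]
print(apply_curve(128, curve))  # 192
print(apply_curve(64, curve))   # 96: surrounding tones get pulled up too
```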

A very common curve adjustment you may hear about is to apply a slight “S” curve to your values. The effect of this curve would be to darken the dark tones, and to lighten the light tones - in effect increasing global contrast on your image. For instance, if I click on another point in the curves, and adjust the points to form a shape like so:

GIMP Base Curves Dialog S shaped curve A slight “S” curve

This will now cause dark values to become even darker, while the light values get a small boost. The curve still passes through the midpoint, so middle tones will stay closer to what they were.

Hollow Moon Example S curve applied Slight “S” curve increases global contrast (click for original).
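As a sketch of the S-curve idea (pure Python, using my own toy smoothstep-based formula rather than GIMP's actual spline), tones below the midpoint get darker and tones above it get lighter, while black, white, and the middle stay roughly fixed:

```python
def s_curve(value, strength=0.5):
    """A simple "S" curve on an 8-bit value: darkens tones below the
    midpoint and lightens tones above it. Built from the smoothstep
    polynomial; strength blends between the identity curve (0.0)
    and the full S shape (1.0)."""
    x = value / 255
    s = x * x * (3 - 2 * x)  # smoothstep: an S through (0,0) and (1,1)
    y = (1 - strength) * x + strength * s
    return round(y * 255)

print(s_curve(64))   # darker than 64
print(s_curve(128))  # stays near the middle
print(s_curve(192))  # lighter than 192
```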

In general, I find it easiest to think in terms of which regions of the curve will affect different tones in your image. Here is a quick way to visualize it (this holds for the Value curve as well as the RGB curves):

GIMP Base Curves darks mids lights zones

If there is one thing you take away from reading this, let it be the image above.

Curves: Colors

So how does this apply to other channels? Let’s have a look.

The exact same theory applies in the RGB channels as it did with values. The relative positions of the darks, midtones, and lights are still the same in the curve dialog. The primary difference now is that you can control the contribution of color in specific tonal regions of your image.

Value, Red, Green, Blue channel picker.

You choose which channel you want to adjust from the “Channel” drop-down.

To begin demonstrating what happens here it helps to have an idea of generally what effect you would like to apply to your image. This is often the hardest part of adjusting the color tones if you don’t have a clear idea to start with.

For example, perhaps we wanted to “cool” down the shadows of our image. “Cool” shadows are commonly seen during the day in shadows out of direct sunlight. The light that does fall in shadows is mostly reflected light from a blue-ish sky, so the shadows will trend slightly more blue.

To try this, let’s adjust the Blue channel to be a little more prominent in the darker tones of our image, but to get back to normal around the midtones and lighter.

Boosting blues in darker tones
Pushing up blues in darker tones (click for original).

Now, here’s a question: If I wanted to “cool” the darker tones with more blue, what if I wanted to “warm” the lighter tones by adding a little yellow?

Well, there’s no “Yellow” curve to modify, so how to approach that? Have a look at this HSV color wheel below:

The thing to look out for here is that opposite your blue tones on this wheel, you’ll find yellow. In fact, for each of the Red, Green, and Blue channels, the opposite colors on the color wheel will show you what an absence of that color will do to your image. So remember:

Red ↔ Cyan
Green ↔ Magenta
Blue ↔ Yellow

What this means while manipulating curves is that if you drag the blue curve up, you will boost the blue in that region of your image. If instead you drag the blue curve down, you will remove blue (which boosts the yellows in that region of your image).
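A tiny sketch of that relationship (pure Python, function name is my own): nudging a single channel up or down on a neutral gray shows the cool/warm shift directly.

```python
def shift_channel(rgb, channel, delta):
    """Raise or lower one channel of an 8-bit pixel; lowering a
    channel pushes the color toward its complement (blue down means
    yellow up, green down means magenta up, red down means cyan up)."""
    values = list(rgb)
    idx = "rgb".index(channel)
    values[idx] = max(0, min(255, values[idx] + delta))
    return tuple(values)

gray = (128, 128, 128)
print(shift_channel(gray, "b", 40))   # (128, 128, 168): cooler, bluer
print(shift_channel(gray, "b", -40))  # (128, 128, 88): warmer, yellower
```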

So to boost the blues in the dark tones, but increase the yellow in the lighter tones, you could create a sort of “reverse” S-curve in the blue channel:

Boost blues in darks, boost yellow in high tones (click for original).

In the green channel for instance, you can begin to introduce more magenta into the tones by decreasing the curve. So dropping the green curve in the dark tones, and letting it settle back to normal towards the high tones will produce results like this:

Suppressing the green channel in darks/mids adds a bit of magenta
(click for original).

In isolation, these curves are fun to play with, but I think that perhaps walking through some actual examples of color toning/grading would help to illustrate what I’m talking about here. I’ll choose a couple of common toning examples to show what happens when you begin mixing all three channels up.

Color Toning/Grading

Orange and Teal Hell

I use the (cinema film) term color grading here because the first adjustment we will look at to illustrate curves is a horrible Hollywood trend that is best described by Todd Miro on his blog.

Grading is the film term for color toning, and Todd’s post is a funny look at the prevalence of orange and teal in modern film palettes. It’s worth a look just to see how silly this is (and hopefully to raise awareness of the obnoxiousness of the practice).

The general thought here is that Caucasian skin tones trend towards orange, and if you look at the complementary color on the color wheel, you’ll notice that directly opposite orange is teal.

Screenshot from Kuler borrowed from Todd.

If you don’t already know about it, Adobe has a fantastic online tool for color visualization and palette creation called Kuler (since renamed Adobe Color CC). It lets you build colors based on some classic rules, or even generate a color palette from images. Well worth a visit and a fantastic bookmark for fiddling with color.

So a quick look at the desired effect would be to keep/boost the skin tones into a sort of orange-y pinkish color, and to push the darker tones into a teal/cyan combination. (Colorists on films tend to use a Lift, Gamma, Gain model, but we’ll just try this out with our curves here).

Quick disclaimer - I am purposefully exaggerating these modifications to illustrate what they do. Like most things, moderation and restraint will go a long way towards not making your viewers’ eyeballs bleed. Remember - light touch!

So I know that I want to see my skin tones head into an orange-ish color. In my image the skin tones are in the upper mids/low highs range of values, so I will start around there.

What I’ve done is put a point around the low midtones to anchor the curve closer to normal for those tones. This lets me fiddle with the red channel while isolating the changes roughly to the mid and high tones. The skin tones in this image fall toward the upper end of the mids in the red channel, so I’ve boosted the reds there. Things may look a little weird at first:

If you look back at the color wheel again, you’ll notice that between red and green, there is a yellow, and if you go a bit closer towards red the yellow turns to more of an orange. What this means is that if we add some more green to those same tones, the overall colors will start to shift towards an orange.

So we can switch to the green channel now, put a point in the lower midtones again to hold things around normal, and slightly boost the green. Don’t boost it all the way to match the reds - about 2/3rds or so, to taste.

This puts a little more red/orange-y color into the tones around the skin. You could further adjust this by perhaps including a bit more yellow as well. To do this, I would again put an anchor point in the low mid tones on the blue channel, then slightly drop the blue curve in the upper tones to introduce a bit of yellow.

Remember, we’re experimenting here so feel free to try things out as we move along. I may consider the upper tones to be finished at the moment, and now I would want to look at introducing a more blue/teal color into the darker tones.

I can start by boosting a bit of blues in the dark tones. I’m going to use the anchor point I already created, and just push things up a bit.

Now I want to make the darker tones a bit more teal in color. Remember the color wheel - teal is the absence of red - so we will drop down the red channel in the lower tones as well.

And finally to push a very slight magenta into the dark tones as well, I’ll push down the green channel a bit.

If I wanted to go a step further, I could also put an anchor point up close to the highest values to keep the brightest parts of the image closer to a white instead of carrying over a color cast from our previous operations.

If your previous operations also darkened the image a bit, you could also now revisit the Value channel, and make modifications there as well. In my case I bumped the midtones of the image just a bit to brighten things up slightly.

Finally, we end up at something like this.

After fooling around a bit - disgusting, isn’t it?
(click for original).

I am exaggerating things here to illustrate a point. Please don’t do this to your photos. :)
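For the curious, the whole recipe above can be sketched as a set of per-channel control points. The numbers below are illustrative guesses rather than the exact values used in the screenshots, and the piecewise-linear interpolation only approximates GIMP's smooth curves:

```python
def apply_curve(value, points):
    """Piecewise-linear stand-in for GIMP's smooth curve tool."""
    points = sorted(points)
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        if x0 <= value <= x1:
            t = (value - x0) / (x1 - x0)
            return round(y0 + t * (y1 - y0))
    return value

# Illustrative orange-and-teal grade: teal (more blue, less red and
# green) in the darks, orange (more red and green, less blue) in the
# upper mids, with the endpoints anchored to avoid a cast at the
# extremes. Control-point values are made up for this sketch.
curves = {
    "r": [(0, 0), (64, 48), (128, 128), (192, 224), (255, 255)],
    "g": [(0, 0), (64, 56), (128, 128), (192, 208), (255, 255)],
    "b": [(0, 0), (64, 88), (128, 128), (192, 176), (255, 255)],
}

def grade(rgb):
    """Run one pixel through all three channel curves."""
    return tuple(apply_curve(v, curves[c]) for v, c in zip(rgb, "rgb"))

print(grade((64, 64, 64)))     # dark gray drifts toward teal
print(grade((192, 192, 192)))  # light gray drifts toward orange
```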

If you’d like to download the curves file of the results we reached above, get it here:
Orange Teal Hell Color Curves


Remember, think about what the color curves represent in your image to help you achieve your final results. Begin looking at the different tonalities in your image and how you’d like them to appear as part of your final vision.

For even more fun - realize that the colors in your images can help to evoke emotional responses in the viewer, and adjust things accordingly. I’ll leave it as an exercise for the reader to determine some of the associations between colors and different emotions.

Sun 2015/Jul/26

  • An inlaid GNOME logo, part 5

    Esta parte en español

    (Parts 1, 2, 3, 4)

    This is the shield right after it came out of the clamps. I had to pry it a bit from the clamped board with a spatula.

    Unclamped shield

    I cut out the shield shape by first sawing the straight sections, and then using a coping saw on the curved ones.

    Sawing straight edges

    Coping the curves

    All cut out

    I used a spokeshave to smooth the convex curves on the sides.

    Spokeshave for the curves

    The curves on the top are concave, and the spokeshave doesn't fit. I used a drawknife for those.

    Drawknife for the tight curves

    This gives us crisp corners and smooth curves throughout.

    Crisp corner

    On to planing the face flat! I sharpened my plane irons...

    Sharp plane iron

    ... and planed carefully. The cutoff from the top of the shield was useful as a support against the planing stop.

    Starting to plane the shield

    The foot shows through once the paper is planed away...

    Foot shows through the paper

    Check out the dual-color shavings!

    Dual-color shavings

    And we have a flat board once again. That smudge at the top of the sole is from my dirty fingers — dirty with metal dust from the sharpening step — so I washed my hands and planed the dirt away.

    Flat shield

    The mess after planing

    But it is too flat. So, I scribed a line all around the front and edges, and used the spokeshave and drawknife again to get a 45-degree bevel around the shield. The line is a bit hard to see in the first photo, but it's there.

    Scribed lines for bevel

    Beveling with a spokeshave

    Final bevel around the shield

    Here is the first coat of boiled linseed oil after sanding. When it dries I'll add some coats of shellac.

    First coat of linseed oil

Trackpad workarounds: using function keys as mouse buttons

I've had no end of trouble with my Asus 1015E's trackpad. A discussion of laptops on a mailing list -- in particular, someone's concerns that the nifty-looking Dell XPS 13, which is available preloaded with Linux, has had reviewers say that the trackpad doesn't work well -- reminded me that I'd never posted my final solution.

The Asus's trackpad has two problems. First, it's super sensitive to taps, so if any part of my hand gets anywhere near the trackpad while I'm typing, suddenly it sees a mouse click at some random point on the screen, and instead of typing into an emacs window suddenly I find I'm typing into a live IRC client. Or, worse, instead of typing my password into a password field, I'm typing it into IRC. That wouldn't have been so bad on the old style of trackpad, where I could just turn off taps altogether and use the hardware buttons; this is one of those new-style trackpads that doesn't have any actual buttons.

Second, two-finger taps don't work. Three-finger taps work just fine, but two-finger taps: well, I found when I wanted a right-click (which is what two-fingers was set up to do), I had to go TAP, TAP, TAP, TAP maybe ten or fifteen times before one of them would finally take. But by the time the menu came up, of course, I'd done another tap and that canceled the menu and I had to start over. Infuriating!

I struggled for many months with synclient's settings for tap sensitivity and right and left click emulation. I tried enabling syndaemon, which is supposed to disable clicks as long as you're typing then enable them again afterward, and spent months playing with its settings, but in order to get it to work at all, I had to set the timeout so long that there was an infuriating wait after I stopped typing before I could do anything.

I was on the verge of giving up on the Asus and going back to my Dell Latitude 2120, which had an excellent trackpad (with buttons) and the world's greatest 10" laptop keyboard. (What the Dell doesn't have is battery life, and I really hated to give up the Asus's light weight and 8-hour battery life.) As a final, desperate option, I decided to disable taps completely.

Disable taps? Then how do you do a mouse click?

I theorized that, with all Linux's flexibility, there must be some way to get function keys to work like mouse buttons. And indeed there is. The easiest way seemed to be xmodmap (strange to find xmodmap being the simplest anything, but there you go). It turns out that a simple line like

  xmodmap -e "keysym F1 = Pointer_Button1"
is most of what you need. But to make it work, you need to enable "mouse keys":
  xkbset m

But for reasons unknown, mouse keys will expire after some set timeout unless you explicitly tell it not to. Do that like this:

  xkbset exp =m

Once that's all set up, you can disable single-finger taps with synclient:

  synclient TapButton1=0
Of course, you can disable 2-finger and 3-finger taps by setting them to 0 as well. I don't generally find them a problem (they don't work reliably, but they don't fire on their own either), so I left them enabled.

I tried it and it worked beautifully for left click. Since I was still having trouble with that two-finger tap for right click, I put that on a function key too, and added middle click while I was at it. I don't use function keys much, so devoting three function keys to mouse buttons wasn't really a problem.

In fact, it worked so well that I decided it would be handy to have an additional set of mouse keys over on the other side of the keyboard, to make it easy to do mouse clicks with either hand. So I defined F1, F2 and F3 as one set of mouse buttons, and F10, F11 and F12 as another.

And yes, this all probably sounds nutty as heck. But it really is a nice laptop aside from the trackpad from hell; and although I thought Fn-key mouse buttons would be highly inconvenient, it took surprisingly little time to get used to them.

So this is what I ended up putting in .config/openbox/autostart file. I wrap it in a test for hostname, since I like to be able to use the same configuration file on multiple machines, but I don't need this hack on any machine but the Asus.

if [ $(hostname) == iridum ]; then
  synclient TapButton1=0 TapButton2=3 TapButton3=2 HorizEdgeScroll=1

  xmodmap -e "keysym F1 = Pointer_Button1"
  xmodmap -e "keysym F2 = Pointer_Button2"
  xmodmap -e "keysym F3 = Pointer_Button3"

  xmodmap -e "keysym F10 = Pointer_Button1"
  xmodmap -e "keysym F11 = Pointer_Button2"
  xmodmap -e "keysym F12 = Pointer_Button3"

  xkbset m
  xkbset exp =m
fi

July 24, 2015

Fri 2015/Jul/24

  • An inlaid GNOME logo, part 4

    Esta parte en español

    (Parts 1, 2, 3)

    In the last part, I glued the paper templates for the shield and foot onto the wood. Now comes the part that is hardest for me: excavating the foot pieces in the dark wood so the light-colored ones can fit in them. I'm not a woodcarver, just a lousy joiner, and I have a lot to learn!

    The first part is not a problem: use a coping saw to separate the foot pieces.

    Foot pieces, cut out

    Next, for each part of the foot, I started with a V-gouge to make an outline that will work as a stop cut. Inside this shape, I used a curved gouge to excavate the wood. The stop cut prevents the gouge from going past the outline. Finally, I used the curved gouge to get as close as possible to the final line.

    V channel as a stop cut Excavating inside the channel

    Each wall needs squaring up, as the curved gouge leaves a chamfered edge instead of a crisp angle. I used the V-gouge around each shape so that one of the edges of the gouge remains vertical. I cleaned up the bottom with a combination of chisels and a router plane where it fits.

    Square walls

    Then, each piece needs to be adjusted to fit. I sanded the edges to have a nice curve instead of the raw edges from the coping saw. Then I put a back bevel on each piece, using a carving knife, so the back part will be narrower than the front. I had to also tweak the walls in the dark wood in some places.

    Unadjusted piece Sanding the curves Beveling the edges

    After a lot of fiddling, the pieces fit — with a little persuasion — and they can be glued. When the glue dries I'll plane them down so that they are level to the dark wood.

    Gluing the pieces Glued pieces

    Finally, I clamped everything against another board to distribute the pressure. Let's hope for the best.


July 22, 2015

Welcome G'MIC

Welcome G'MIC

Moving G'MIC to a modern forum

Anyone who’s followed me for a while likely knows that I’m friends with G’MIC (GREYC’s Magic for Image Computing) creator David Tschumperlé. I was also able to release all of my film emulation presets on G’MIC for everyone to use with David’s help and we collaborated on a bunch of different fun processing filters for photographers in G’MIC (split details/wavelet decompose, freaky details, film emulation, mean/median averaging, and more).

David Tschumperle beauty dish GMIC David, by Me (at LGM2014)

It’s also David who helped me by writing a G’MIC script to mean-average images when I started making my amalgamations (thus moving me away from my previous method of using ImageMagick):

Mad Max Fury Road Trailer 2 - Amalgamation Mad Max Fury Road Trailer 2 - Amalgamation

So when the forums here on discuss.pixls.us were finally up and running, it only made sense to offer G’MIC its own part of the forums. They had previously been using a combination of Flickr groups and gimpchat.com. Those are great forums; they were just a little cumbersome to use.

You can find the new G’MIC category here. Stop in and say hello!

I’ll also be porting over the tutorials and articles on work we’ve collaborated on soon (freaky details, film emulation).



To the winners of the Open Source Photography Course Giveaway

I compiled the list of entries this afternoon across the various social networks and let random.org pick an integer in the domain of all of the entries…

So a big congratulations goes out to:

Denny Weinmann (Facebook, @dennyweinmann, Google+ )
Nathan Haines (@nhaines, Google+)

I’ll be contacting you shortly (assuming you don’t read this announcement here first…)! I will need a valid email address from you both in order to send your download links. You can reach me at pixlsus@pixls.us.

Thank you to everyone who shared the post to help raise awareness! The lessons are still on sale until August 1st for $35USD over on Riley’s site.

1.2.0-beta.0 released

Hello everyone. I’m really happy to announce the first proper pre-release of MyPaint 1.2.0 for those brave early-bird testers out there.

You can download it from https://github.com/mypaint/mypaint/releases/tag/v1.2.0-beta.0.

Windows and Ubuntu binaries are available, all signed, and the traditional signed canonical source tarball is there too for good measure. Sorry about the size of the Windows installers – we need to package all of GTK/GDK and Python on that platform too.

(Don’t forget: if you find the translations lacking for your languages, you can help fix mistakes before the next beta over at https://hosted.weblate.org/engage/mypaint/ )

July 20, 2015

Plugging in those darned USB cables

I'm sure I'm not the only one who's forever trying to plug in a USB cable only to find it upside down. And then I flip it and try it the other way, and that doesn't work either, so I go back to the first side, until I finally get it plugged in, because there's no easy way to tell visually which way the plug is supposed to go.

It's true of nearly all of the umpteen variants of USB plug: almost all of them differ only subtly from the top side to the bottom.

[USB trident] And to "fix" this, USB cables are built so that they have subtly raised indentations which, if you hold them to the light just right so you can see the shadows, say "USB" or have the little USB trident on the top side:

In an art store a few weeks ago, Dave had a good idea.

[USB cables painted for orientation] He bought a white paint marker, and we've used it to paint the logo side of all our USB cables.

Tape the cables down on the desk -- so they don't flop around while the paint is drying -- and apply a few dabs of white paint to the logo area of each connector. If you're careful you might be able to fill in the lowered part so the raised USB symbol stays black; or to paint only the raised USB part. I tried that on a few cables, but after the fifth or so cable I stopped worrying about whether I was ending up with a pretty USB symbol and just started dabbing paint wherever was handy.

The paint really does make a big difference. It's much easier now to plug in USB cables, especially micro USB, and I never go through that "flip it over several times" dance any more.

July 19, 2015

Windows porting

I’m hoping that MyPaint will be able to support Windows fully starting with the first v1.2.0-beta release. This is made possible by the efforts of our own Windows porters and testers, and the dedicated folks who keep MSYS2 working so well.

The Inno Setup installer we'll be using starting with the 1.2.0-beta releases. Releases will start happening shortly (date TBA) on Github, and you’ll be able to pull down installer binaries for 32 bit and 64 bit Windows as part of this.

If you’re interested in the workings of the installer build, and would like to test it and help improve it, it’s all documented and scripted in the current github master. Please be aware that SourceForge downloads are involved during the build procedure until MSYS2 fixes that. Our own binaries and installers will never be knowingly distributed – by us – through SourceForge or any similar crapware-bundling site.

Discussion thread on the forums.

July 15, 2015

The Open Source Photography Course

The Open Source Photography Course

A chance to win a free copy

Photographer Riley Brandt recently released his Open Source Photography Course. I managed to get a little bit of his time to answer some questions for us about his photography and the course itself. You can read the full interview right here:

A Q&A with Photographer Riley Brandt

As an added bonus just for PIXLS.US readers, he has gifted us a nice surprise!

Did Someone Say Free Stuff?

Riley went above and beyond for us. He has graciously offered us an opportunity for 2 readers to win a free copy of the course (one in an open format like WebM/VP8, and another in a popular format like MP4/H.264)!

For a chance to win, I’m asking you to share a link to this post on Twitter, Facebook, or Google+

with the hashtag #PIXLSGiveAway (you can click those links to share to those networks). Each social network counts as one entry, so you can triple your chances by posting across all three.

Next week (Wednesday, 2015-07-22, moved from Monday, 2015-07-20 to give folks a full week), I will search those networks for all the posts and compile a list of people, from which I’ll pick the winners (using random.org). Make sure you get that hashtag right! :)

Some Previews

Riley has released three nice preview videos to give a taste of what’s in the courses:


A Q&A with Photographer Riley Brandt

On creating a F/OSS photography course

Riley Brandt is a full-time photographer (and sometimes videographer) at the University of Calgary. He previously worked for the weekly (Calgary) local magazine Fast Forward Weekly (FFWD) as well as Sophia Models International, and his work has been published in many places from the Wall Street Journal to Der Spiegel (and more).

Riley Brandt Logo

He recently announced the availability of The Open Source Photography Course. It’s a full photographic workflow course using only free, open source software that he has spent the last ten months putting together.

Riley has graciously offered two free copies for us to give away!
For a chance to win, see this blog post.

Riley Brandt Photography Course Banner

I was lucky enough to get a few minutes of Riley’s time to ask him a few questions about his photography and this course.

A Chat with Riley Brandt

Tell us a bit about yourself!

Hello, my name is Riley Brandt and I am a professional photographer at the University of Calgary.

At work, I get to spend my days running around a university campus taking pictures of everything from a rooster with prosthetic legs made on a 3D printer, to wild students dressed in costumes jumping into freezing cold water for charity. It can be pretty awesome.

Outside of work, I am a supporter of Linux and open source software. I am also a bit of a film geek.

Univ. Calgary Prosthetic Rooster [ed. note: He’s not kidding - That’s a rooster with prosthetic legs…]

I see you were trained in photojournalism. Is this still your primary photographic focus?

Though I definitely enjoy portraits, fashion and lifestyle photography, my day to day work as a photographer at a university is very similar to my photojournalism days.

I have to work with whatever poor lighting conditions I am given, and I have to turn around those photos quickly to meet deadlines.

However, I recently became an uncle for the first time to a baby boy, so I imagine I will be expanding into newborn and toddler photography very soon :)

Riley Brandt Environment Portrait Sample Environmental Portrait by Riley Brandt

How long have you been a photographer?

Photography started as a hobby for me when I was living in the Czech Republic in the late 90s and early 2000s. My first SLR camera was the classic Canon AE1 (which I still have).

I didn’t start to work as a full time professional photographer until I graduated from the Journalism program at SAIT Polytechnic in 2008.

What type of photography do you enjoy doing the most?

In a nutshell, I enjoy photographing people. This includes both portraits and candid moments at events.

I love meeting someone with an interesting story, and then trying to capture some of their personality in an image.

At events, I’ve witnessed everything from the joy of someone meeting an astronaut they idolize, to the anguish of a parent at graduation collecting a degree instead of their child who was killed. Capturing genuine emotion at events is challenging, and overwhelming at times, but is also very gratifying.

It would be hard for me to choose between candids or portraits. I enjoy them both.

Riley Brandt Portraits Portraits by Riley Brandt

How would you describe your personal style?

I’ve been told several times that my images are very “clean”, which I think means I limit the image to only a few key elements and remove any major distractions.

If you had to choose your favorite image from your portfolio, what would it be?

I don’t have a favorite image in my collection.

However, at the end of a work week, I usually have at least one image that I am really happy with. A photo that I will look at again when I get home from work. An image that I look forward to seeing published. Those are my favorites.

Has free-software always been the foundation of your workflow?

Definitely not. I started with Adobe software, and still use it (and other non-free software) at work. Though hopefully that will change.

I switched to free software for all my personal work at home, because all my computers at home run Linux.

I also dislike a lot of Adobe’s actions as a company, e.g. horrible security and switching to a “cloud” version of their software which is really just a DRM scheme.

There are many significant reasons not to run non-free software, but what really motivated my switch initially was simply that Adobe never released a Linux version of their software.

What is your normal OS/platform?

I guess I am transitioning from Ubuntu to Fedora (both GNU/Linux). My main desktop is still running Ubuntu Gnome 14.04. But my laptop is running Fedora 21.

Ubuntu doesn’t offer an up-to-date version of the Gnome desktop environment. It also doesn’t use the Gnome Software Centre or many Gnome apps. Fedora does. So my desktop will be running Fedora in the near future as well.

Riley Brandt Summer Days Riley Brandt Summer Days Lifestyle by Riley Brandt

What drove you to consider creating a free-software centric course?

Because it was so difficult for me to transition from Adobe software to free software, I wanted to provide an easier option for others trying to do the same thing.

Instead of spending weeks or months searching through all the different manuals, tutorials and websites, someone can spend a weekend watching my course and be up and running quickly.

Also, it was just a great project to work on. I got to combine two of my passions, Linux and photography.

Is the course the same as your own approach?

Yes, it’s the same way I work.

I start with fundamentals like monitor calibration and file management, then move on to basics like correcting exposure, color, contrast and noise. After that, I cover less frequently used tools.

The course focuses heavily on darktable for RAW processing - have you also tried any of the other options such as RawTherapee?

I originally tried digiKam because it looked like it had most of the features I needed. However, KDE and I are like oil and water. The user interface felt impenetrable to me, so I moved on.

I also tried RawTherapee, but only briefly. I got some bad results in the beginning, but that was probably due to my lack of familiarity with the software. I might give it another go one day.

Once darktable added advanced selective editing with masks, I was all in. I like the photo management element as well.

Riley Brandt Portraits

Have you considered expanding your (course) offerings to include other aspects of photography?

Umm.. not just yet. I first need to rest :)

If you were to expand the current course, what would you like to focus on next?

It’s hard to say right now. Possibly a more in depth look at GIMP. Or a series where viewers watch me edit photos from start to finish.

It took 10 months to create this course; will you be taking a break or starting right away on the next installment? :)

A break for sure :) I spent most of my weekends preparing and recording a lesson for the past year. So yes, first a break.

Some parting words?

I would like to recommend the Desktop Publishing course created by GIMP Magazine editor Steve Czajka for anyone who is trying to transition from Adobe InDesign to Scribus.

I would also love to see someone create a similar course for Inkscape.

The Course

Riley Brandt Photography Course Banner

The Open Source Photography Course is available for order now at Riley’s website. The course is:

  • Over 5 hours of video material
  • DRM free
  • 10% of net profits donated back to FOSS projects
  • Available in an open format (WebM/VP8) or a popular one (H.264), all 1080p
  • $50 USD

He has also released some preview videos of the course:

From his site is a nice course outline to get a feel for what is covered:

Course Outline

Chapter 1. Getting Started

  1. Course Introduction
    Welcome to The Open Source Photography Course
  2. Calibrate Your Monitor
    Start your photography workflow the right way by calibrating your monitor with dispcalGUI
  3. File Management
    Make archiving and searching for photos easier by using naming conventions and folder organization
  4. Download and Rename
    Use Rapid Photo Downloader to rename all your photos during the download process

Chapter 2. Raw Editing in darktable

  1. Introduction to darktable, Part One
    Get to know darktable’s user interface
  2. Introduction to darktable, Part Two
    Take a quick look at the slideshow view in darktable
  3. Import and Tag
    Import photos into darktable and tag them with keywords, copyright information and descriptions
  4. Rating Images
    Learn an efficient way to cull, rate, add color labels and filter photos in lighttable
  5. Darkroom Overview
    Learn the basics of the darkroom view including basic module adjustments and creating favorites
  6. Correcting Exposure, Part 1
    Correct exposure with the base curves, levels, exposure, and curves modules
  7. Correcting Exposure, Part 2
    See several examples of combining modules to correct an image’s exposure
  8. Correct White Balance
    Use presets and make manual changes in the white balance module to color correct your images
  9. Crop and Rotate
    Navigate through the many crop and rotate options including guides and automatic cropping
  10. Highlights and Shadows
    Recover details lost in the shadows and highlights of your photos
  11. Adding Contrast
    Make your images stand out by adding contrast with the levels, tone curve and contrast modules
  12. Sharpening
    Fix those soft images with the sharpen, equalizer and local contrast modules
  13. Clarity
    Sharpen up your midtones by utilizing the local contrast and equalizer modules
  14. Lens Correction
    Learn how to fix lens distortion, vignetting and chromatic aberrations
  15. Noise Reduction
    Learn the fastest, easiest and best way to clean up grainy images taken in low light
  16. Masks, Part one
    Discover the possibilities of selective editing with the shape, gradient and path tools
  17. Masks, Part Two
    Take your knowledge of masks further in this lesson about parametric masks
  18. Color Zones
    Learn how to limit your adjustments to a specific color’s hue, saturation or brightness
  19. Spot Removal
    Save time by making simple corrections in darktable, instead of opening up GIMP
  20. Snapshots
    Quickly compare different points in your editing history with snapshots
  21. Presets and Styles
    Save your favorite adjustments for later with presets and styles
  22. Batch Editing
    Save time by editing one image, then quickly applying those same edits to hundreds of images
  23. Searching for Images
    Learn how to sort and search through a large collection of images in Lighttable
  24. Adding Effects
    Get creative in the effects group with vignetting, framing, split toning and more
  25. Exporting Photos
    Learn how to rename, resize and convert your RAW photos to JPEG, TIFF and other formats

Chapter 3. Touch Ups in GIMP

  1. Introduction to GIMP
    Install GIMP, then get to know your way around the user interface
  2. Setting Up GIMP, Part 1
    Customize the user interface, adjust a few tools and install color profiles
  3. Setting Up GIMP, Part 2
    Set keyboard shortcuts that mimic Photoshop’s and install a couple of plugins
  4. Touch Ups
    Use the heal tool and the clone tool to clean up your photos
  5. Layer Masks
    Learn how to make selective edits and non-destructive edits using layer masks
  6. Removing Distractions
    Combine layers, a helpful plugin and layer masks to remove distractions from your photos
  7. Preparing Images for the Web
    Reduce file size while retaining quality before you upload your photos to the web
  8. Getting Help and Finding the Community
    Find out which websites, mailing lists and forums to go to for help and friendly discussions

All the images in this post © Riley Brandt.

July 14, 2015

Hummingbird Quidditch!

[rufous hummingbird] After months of at most one hummingbird at the feeders every 15 minutes or so, yesterday afternoon the hummingbirds here all suddenly went crazy. Since then, my patio looks like a tiny Battle of Britain. There are at least four males involved in the fighting, plus a couple of females who sneak in to steal a sip whenever the principals retreat for a moment.

I posted that to the local birding list and someone came up with a better comparison: "it looks like a Quidditch game on the back porch". Perfect! And someone else compared the hummer guarding the feeder to "an avid fan at Wimbledon", referring to the way his head keeps flicking back and forth between the two feeders under his control.

Last year I never saw anything like this. There was a week or so at the very end of summer where I'd occasionally see three hummingbirds contending at the very end of the day for their bedtime snack, but no more than that. I think putting out more feeders has a lot to do with it.

All the dogfighting (or quidditch) is amazing to watch, and to listen to. But I have to wonder how these little guys manage to survive when they spend all their time helicoptering after each other and no time actually eating. Not to mention the way the males chase females away from the food when the females need to be taking care of chicks.

[calliope hummingbird]

I know there's a rufous hummingbird (shown above) and a broad-tailed hummingbird -- the broad-tailed makes a whistling sound with his wings as he dives in for the attack. I know there's a black-chinned hummer around because I saw his characteristic tail-waggle as he used the feeder outside the nook a few days before the real combat started. But I didn't realize until I checked my photos this morning that one of the combatants is a calliope hummingbird. They're usually the latest to arrive, and the rarest. I hadn't realized we had any calliopes yet this year, so I was very happy to see the male's throat streamers when I looked at the photo. So all four of the species we'd normally expect to see here in northern New Mexico are represented.

I've always envied places that have a row of feeders and dozens of hummingbirds all vying for position. But I would put out two feeders and never see them both occupied at once -- one male always keeps an eye on both feeders and drives away all competitors, including females -- so putting out a third feeder seemed pointless. But late last year I decided to try something new: put out more feeders, but make sure some of them are around the corner hidden from the main feeders. Then one tyrant can't watch them all, and other hummers can establish a beachhead.

It seems to be working: at least, we have a lot more activity so far than last year, even though I never seem to see any hummers at the fourth feeder, hidden up near the bedroom. Maybe I need to move that one; and I just bought a fifth, so I'll try putting that somewhere on the other side of the house and see how it affects the feeders on the patio.

I still don't have dozens of hummingbirds like some places have (the Sopaipilla Factory restaurant in Pojoaque is the best place I've seen around here to watch hummingbirds). But I'm making progress.

Building a better catalog file

Inside a Windows driver package you’ll probably see a few .dll files, a .inf file and a .cat file. If you’ve ever been curious in Windows you’ll have double-clicked it, and it would show some technical information about the driver and some information about who signed the file.

We want to use this file to avoid having to get vendors to manually sign the firmware file with a GPG detached signature, which also implies trusting the Microsoft WHQL certificate. These are my notes on my adventure so far.

There are not many resources on this stuff, and I’d like to thank dwmw2 and dhowells for all their help so far answering all my stupid questions. osslsigncode is also useful to see how other signing is implemented.

So, the basics. A .cat file is an SMIME PKCS#7 DER file. We can dump the file using:

openssl asn1parse -in ecfirmware.cat  -inform DER

and if we were signing just one file we should be able to verify the .cat file with something like this:

wget http://www.microsoft.com/pki/certs/MicRooCerAut_2010-06-23.crt
openssl x509 -in MicRooCerAut_2010-06-23.crt -inform DER -out ms/msroot.pem -outform PEM
cat ms/*.pem > ms/certs.pem
openssl smime -verify -CAfile ms/certs.pem -in ecfirmware.cat -inform DER -attime $(date +%s --date=2015-01-01) -content ECFirmware.
Verification failed

(Ignore the need to have the root certificate for now, that seems to be a bug in OpenSSL and they probably have bigger fires to fight at this point)

…but it’s not. We have a pkcs7-signed blob and we need to work out how to get the signature to actually *match*, and then we have to work out how to interpret the pkcs7-signed data blob and use the sha256sums therein to validate the actual data. OpenSSL doesn’t know how to interpret the MS content type OID, so it wimps out and doesn’t put any data into the digest at all.

We can get the blob using a simple:

dd if=ecfirmware.cat of=ecfirmware.cat.payload bs=1 skip=66 count=1340

…which now verifies:

openssl smime -verify -CAfile ms/certs.pem -in ecfirmware.cat -inform DER -attime $(date +%s --date=2015-01-01) -content ecfirmware.cat.payload
Verification successful
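The skip and count values in that dd invocation were found by hand. As a sketch (not the author's actual tooling), the outer DER tag/length headers can be walked programmatically to locate the embedded content; this hypothetical helper decodes one header at a time and is nowhere near a full ASN.1 parser:

```python
def read_tlv(buf, off):
    """Decode the DER tag/length header at buf[off:].

    Returns (tag, header_len, value_len): the value bytes start at
    off + header_len and run for value_len bytes.
    """
    tag = buf[off]
    first = buf[off + 1]
    if first < 0x80:                 # short-form length: one byte
        return tag, 2, first
    n = first & 0x7F                 # long form: n length bytes follow
    value_len = int.from_bytes(buf[off + 2:off + 2 + n], "big")
    return tag, 2 + n, value_len
```

Calling read_tlv repeatedly and descending into each SEQUENCE eventually yields the offset and length of the signed content, which is what the hard-coded skip/count values point at.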

The blob appears to be a few tables of UTF-16 filenames and SHA1/SHA256 checksummed data, encoded in ASN.1 notation. I’ve spent quite a few evenings decompiling the DER file into an ASN file without a whole lot of success (there are 14(!) layers of structures to contend with), and I’ve still not got an ASN file that can correctly parse my DER file for my simple unsigned v1 (ohh yes, v1 = SHA1, v2 = SHA256) test files. There is also a lot of junk data in the blob, and some questionable design choices that make it a huge pain to read. Even if I manage to write the code to read the .cat data blob, I’ve then got to populate the data (including the junk data…) so that Windows will accept my file, to avoid needing a Microsoft box to generate all firmware images. Add to the mix that the ASN.1 data is different on different Windows versions (with legacy versions overridden), which explains why you see garbled strings rather than translated titles in the catalog viewer on Windows XP when viewing .cat files created on Windows 7.

I’ve come to the conclusion that writing a library to reliably read and write all versions of .cat files is probably about 3 months work, and that’s 3 months I simply don’t have. Given there isn’t actually a specification (apart from a super small guide on how to use the MS crypto API) it would also be an uphill battle with every Windows release.

We could of course do something Linux-specific that does the same thing, although that obviously would not work on Windows and means we have to ask the vendors to do an extra step in release engineering. Using GPG would be easiest, but a lot of the hardware vendors seem wedded to the PKCS certificate mechanism, and I suppose it does mean you can layer certificates for root trust, vendors and contractors. GPG-signing only the firmware file doesn’t give us a file list with the digest values of the other metadata in the .cab file.

A naive solution would be to do something like this:

sha256sum firmware.inf firmware.metainfo.xml firmware.bin > firmware.digest
openssl dgst -sha256 -sign cert-private.pem -out firmware.sign firmware.digest
openssl dgst -sha256 -verify cert-pubkey.pem -signature firmware.sign firmware.digest

But to actually create the firmware.sign file we need the private key. We can check prepared data using the public key, but that means shipping firmware.digest and firmware.sign when we only really want one file (.cab files checksum the files internally, so we can be sure against data corruption).

Before I go crazy and invent yet another file format specification does anybody know of a signed digest format with an open specification? Better ideas certainly welcome, thanks.


July 13, 2015

darktable on Windows


Why don't you provide a Windows build?

Due to the heated debate lately, a short foreword:

We do not want to harass, insult or criticize anyone for his or her choice of operating system. Still, from time to time we encounter comments from people accusing us of ignorance or even disrespect towards Windows users. If any of our statements can be interpreted that way, we want to apologize for that – and once more give the full explanation of our lack of Windows support.

The darktable project

darktable is developed and maintained by a small group of people in their spare time, just for fun. We do not have any funds, do not provide travel reimbursements for conferences or meetings, and don’t even have a legal entity at the moment. In other words: None of the developers has ever seen (and most likely will ever see) a single $(INSERT YOUR CURRENCY) for the development of darktable, which is thus a project purely driven by enthusiasm and curiosity.

The development environment

The team is quite mixed: some have a professional background in computing, others don’t. But all love photography and like exploring the full information recorded by the camera themselves. Most new features are added to darktable when an expert in, let’s say, GPU computing steps up and is willing to provide and maintain code for the new feature.

Up till now there is one technical thing that unites all developers: none of them uses Windows as their operating system. Some use Mac OS X, Solaris, etc., but most run some Linux distribution. New flavors of operating system kept being added to our list as people willing to support their favorite system joined the team.

Also (since this stands out a bit as “commercial operating system”) Mac OS X support arrived in exactly this way. Someone (parafin!) popped up, said: “I like this software, and I want to run darktable on my Mac.”, compiled it on OS X and since then does testing and package building for the Mac OS X operating system. And this is not an easy job. Initially there were just snapshot builds from git, no official releases, not even release candidates – but already the first complaints about the quality arrived. Finally, there was a lot of time invested in working around specific peculiarities of this operating system to make it work and provide builds for every new version of darktable released.

This nicely shows one of the consequences of the project’s organizational (non-) structure and development approach: at first, every developer cares about darktable running on his personal system.

Code contributions and feature requests

Usually feature requests from users or from the community are treated like a brainstorming session. Someone proposes a new feature, people think and discuss about it – and if someone likes the idea and has time to code it, it might eventually come – if the team agrees on including the feature.

But life is not a picnic. You probably wouldn’t pass by your neighbor and demand from him to repair your broken car – just because you know he loves to tinker with his vintage car collection at home.
The same applies here. No one feels comfortable if requests are suddenly made that would require a non-negligible amount of work – with no return for the person carrying out the work, neither financial nor intellectual.

This is the feeling created every time someone just passes by, leaving only the statement: “Why isn’t there a Windows build (yet)?”.

Providing a Windows build for darktable

The answer has always been the same: because no one has stepped up to do it. None of the passers-by requesting a Windows build actually took the initiative, downloaded the source code and started the compilation. No one approached the development team with actual build errors or problems encountered while compiling with MinGW or anything else on Windows. The only things ever aired were requests for ready-made binaries.

As stated earlier here, the development of darktable is totally about one’s own initiative. This project (as many others) is not about ordering things and getting them delivered. It’s about starting things, participating and contributing. It’s about trying things out yourself. It’s FLOSS.

One argument that pops up from time to time is: “darktable’s user base would grow immensely with a Windows build!”. This might be true. But – what’s the benefit? Why should a developer care how many people are using the software if his or her sole motivation was producing a nice piece of software that he or she could process raw files with?

On the contrary: more users usually means more support, more bug tracker tickets, more work. And this work usually isn’t the pleasing sort; hunting rare bugs that occur with some obscure camera’s files on another operating system is not exactly what people love to spend their Saturday afternoons on.

This argumentation would totally make sense if darktable were sold, the developers paid, and the overall profit depended on the number of people using the software. No one can be blamed for sending such requests to a company selling their software or service (for your money or your data, whatever) – and it is up to them to make an economic decision on whether it makes sense to invest the time and manpower or not.

But this is different.

Not building darktable on Windows is not a technical issue after all. There certainly are problems of portability, and code changes would be necessary, but in the end it would probably work out. The real problem is (as has been pointed out by the darktable development team many times in the past) the maintenance of the build as well as all the dependencies that the package requires.

The darktable team is trying to deliver high-quality, reliable software. Photographers rely on being able to re-process their old developments with recent versions of darktable and obtain exactly the same result – and that on many platforms, be it CPUs or GPUs with OpenCL. Satisfying this objective requires quite some testing, thinking and maintenance work.

Spawning another build on a platform that not a single developer uses would mean lots and lots of testing – in unfamiliar terrain, and with no fun attached at all. Releasing a half-working, barely tested build for Windows would harm the project’s reputation and diminish confidence that the software treats your photographs carefully.

We hope that this reasoning is comprehensible and that no one feels disrespected due to the choice of operating system.



Translators needed for 1.2.0

MyPaint badly needs your language skills to make the 1.2.0 release a reality. Please help us out by translating the program into your language. We literally cannot make v1.2.0 a good release of MyPaint without your help, so we’ve made it as easy as we can for you to get involved by translating program texts.

Translation status: Graphical status badge for all mypaint project translations
Begin translating now: https://hosted.weblate.org/engage/mypaint/

[Rosetta Stone] The texts in the MyPaint application are in heavy need of updating for the 23 languages currently supported. If you’re fluent in a language other than English, and have a good working knowledge of MyPaint and the English language, then you can help our translation effort.

We’re using a really cool online translation service called WebLate, another Open Source project whose developers have very graciously offered us free hosting. It integrates with our Github development workflow very nicely indeed, so well in fact that I’m hoping to use it for continuous translation after 1.2.0 has been released.

To get involved, click on the "begin translating now" link above, and sign in with Github, Google, or Facebook. You can also create an account limited to just WebLate's hosted service. There are two parts to MyPaint: the main application and its brush-painting library. Both components need translating.

Maintaining language files can be a lot of work, so you should get credit for the work you do. The usual workflow isn’t anonymous: your email address and sign-in name will be recorded in the commit log on Github, and you can put your names in the about box by translating the marker string “translator-credits” when it comes up! If you’d prefer to work anonymously, you don’t have to sign in: you can just make suggestions via WebLate for other translators to review and integrate.

Even if your language is complete, you can help by sharing the link above among your colleagues and friends on social media.

Thank you to all of our current translators, and in advance to new translators, for all the wonderful work you’re doing. I put a lot of my time into MyPaint trying to make sure that it’s beautiful, responsive, and stable. I deeply appreciate all the work that others do on the project too and, from a monoglot like myself, some of the most inspiring work I see happening on the project by others is all the effort put into making MyPaint comprehensible and international. Many, many thank yous.

Frozen for 1.2.0

Quick note to say that MyPaint is now frozen for the upcoming 1.2.0 release. Expect announcements here about dates for specific betas, plus previews and screenshots of new features; however the most current project status can be seen on our Github milestones page.

July 10, 2015

Fri 2015/Jul/10

  • Package repositories FAIL

    Today I was asking around about something like this: mobile websites work without Flash, so how come non-mobile Twitter and YouTube want Flash on my desktop (where Flash is now disabled because of all the 0-day exploits)?

    Elad kindly told me that if I install the GStreamer codecs, and disable Flash, it should work. I didn't have those codecs on my not-for-watching-TV machine, so I set to it.

    openSUSE cannot distribute the codecs itself, so the community does it with an external, convenient one-click-install web button. When you click it, the packaging machinery churns and you get asked whether you want to trust the Packman repository — where all the good non-default packages are.

    Packman's help page

    It's plain HTTP. No HSTS or anything. It tells you the fingerprint of the repository's signing key... over plain HTTP. On the FAQ page, there is a link to download that public key over plain FTP.

    Packman's key over plain FTP

    Now, that key is the "PackMan Build Service" key, a key from 2007 with only 1024-bit DSA. The key is not signed by anybody.

    PackMan Build Service key

    However, the key that the one-click install wants to use is another one, the main PackMan Project key.

    PackMan Project key

    It has three signatures, and I went down the rabbit hole of fetching each of those keys to see if I knew those people — I have heard of two of them, but my little web of trust doesn't include them.

    So, YOLO, right? "Accept". "Trust". Because "Cancel" is the only other option.

    The installation churns some more, and it gives me this:

    libdvdcss repository is unsigned

    YOLO all the way.

    I'm just saying, that if you wanted to pwn people who install codecs, there are many awesome places here to do it.

    But anyway. After uninstalling flash-player, flash-player-gnome, freshplayerplugin, pullin-flash-player, the HTML5 video player works in Firefox and my fat desktop now feels as modern as my phone.

    Update: Hubert Figuière has an add-on for Firefox that will replace embedded Flash video players on other websites with HTML5: the No-flash add-on.

July 09, 2015

Krita 2.9.6 released!

After a month of bugfixing, we give you Krita 2.9.6! Bugfixes aren't the only thing in 2.9.6, though: we also have a few new features!

The biggest change is that we now have selection modifiers! They are configured as follows:

  • Shift+click: add to selection.
  • Alt+click: subtract from selection.
  • Shift+Alt+click: intersect with selection.
  • Ctrl+click: replace selection (for when you have set the selection mode to something other than replace).

These don’t work with the path tool yet, and aren’t configurable, but we’re going to work on that. Check out the manual page for the selection tools for more information on how this relates to ‘constrain’ and ‘from center’ for the rectangle and ellipse select.

Also new: Continuous transform and crop!

Now, when you apply a transform or crop and click on the canvas directly afterwards, Krita will recall the previous transform or crop and let you adjust that instead! If you press ‘Esc’ while in this ‘continuous mode’, Krita will forget the continuous transform and let you start a new one.

The last of the big new features is that the tool options can now be put into the toolbar:

tool options in the toolbar

By default it’s still a docker, but you can configure it in settings->configure Krita->general. You can also easily summon this menu with the ‘\’ key!

And Thorsten Zachmann has improved the speed of all the color adjustment filters, often by a factor of four or more.

Full list of features new to 2.9.6:

  • Add possibility to continue a Crop Tool action
  • Speed up of color balance, desaturate, dodge, hsv adjustment, index color per-channel and posterize filters.
  • Activate Cut/Copy Sharp actions in the menu
  • Implemented continuation of the transform with clicking on canvas
  • new default workspace
  • Add new shortcuts (‘\’ opens the tool options, f5 opens the brush editor, f7 opens the preset selector.)
  • Show the tool options in a popup (toggle this on or off in the general preferences, needs restarting Krita)
  • Add three new default shortcuts (Create group layer = Ctrl+G, Merge Selected layer = Ctrl+Alt+E, Scale image to new size = Alt+Ctrl+I )
  • Add a ‘hide pop-up on mouseclick’ option to the advanced color selector.
  • Make brush ‘speed’ sensor work properly
  • Allow preview for “Image Background Color and Transparency” dialog.
  • Selection modifier patch is finally in! (shift=add, alt=subtract, shift+alt=intersect, ctrl=replace. Path tool doesn’t work yet, and they can’t be configured yet)

Bugfixes new to 2.9.6

  • BUG:346932 Fix crash when saving a pattern to a *.kra
  • Make Group Layer return correct extent and exact bounds when in pass-through mode
  • Make fixes to pass-through mode.
  • Added an optional optimization to slider spin box
  • BUG:348599 Fix node activating on the wrong image
  • BUG:349792 Fix deleting a color in the palette docker
  • BUG:349823 Fix scale to image size while adding a file layer
  • Fixed wrapping issue for all dial widgets in Layer Styles dialog
  • Fix calculation of y-res when loading .kra files
  • BUG:349598 Prevent a divide by zero
  • BUG:347800 Reset cursor when canvas is extended to avoid cursor getting stuck in “pointing hand” mode
  • BUG:348730 Fix tool options visibility by default
  • BUG:349446 Fix issue where changing theme doesn’t update user config
  • BUG:348451 Fix internal brush name of LJF smoke.
  • BUG:349424 Set documents created from clipboard to modified
  • BUG:349451 Make more robust: check pointers before use
  • Use our own code to save the merged image for kra and ora (it’s faster)
  • BUG:313296 Fix Hairy brush not to paint black over transparent pixels in Soak Ink mode
  • Fix PVS warning in hairy brush
  • (gmic) Try to workaround the problem with busy cursor
  • BUG:348750 Don’t limit the allowed dock areas
  • BUG:348795 Fix uninitialized m_maxPresets
  • BUG:349346 (gmic) If there is selection, do not synchronize image size
  • BUG:348887 Disable autoscroll for the fill-tool as well.
  • BUG:348914 Rename the fill layers.



Taming annoyances in the new Google Maps

For a year or so, I've been appending "output=classic" to any Google Maps URL. But Google disabled Classic mode last month. (There have been a few other ways to get classic Google maps back, but Google is gradually disabling them one by one.)

I have basically three problems with the new maps:

  1. If you search for something, the screen is taken up by a huge box showing you what you searched for; if you click the "x" to dismiss the huge box so you can see the map underneath, the box disappears but so does the pin showing your search target.
  2. A big swath at the bottom of the screen is taken up by a filmstrip of photos from the location, and it's an extra click to dismiss that.
  3. Moving or zooming the map is very, very slow: it relies on OpenGL support in the browser, which doesn't work well on Linux in general, or on a lot of graphics cards on any platform.

Now that I don't have the "classic" option any more, I've had to find ways around the problems -- either that, or switch to Bing maps. Here's how to make the maps usable in Firefox.

First, for the slowness: the cure is to disable WebGL in Firefox. Go to about:config and search for webgl. Then double-click the webgl.disabled line to set it to true.

For the other two, you can add userContent lines to tell Firefox to hide those boxes.

Locate your Firefox profile. Inside it, edit chrome/userContent.css (create that file if it doesn't already exist), and add the following two lines:

div#cards { display: none !important; }
div#viewcard { display: none !important; }
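If you prefer, the whole change can be scripted from a shell. This is a sketch assuming a typical Linux Firefox install; the profile directory name below is a placeholder you must replace with your real profile:

```shell
# Placeholder path: your actual profile directory name will differ.
profile="$HOME/.mozilla/firefox/example.default"

# Create the chrome/ subdirectory if it doesn't exist yet,
# then append the two hiding rules to userContent.css.
mkdir -p "$profile/chrome"
cat >> "$profile/chrome/userContent.css" <<'EOF'
div#cards { display: none !important; }
div#viewcard { display: none !important; }
EOF
```

Restart Firefox afterwards so the new userContent.css rules are picked up.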

Voilà! The boxes that used to hide the map are now invisible. Of course, that also means you can't use anything inside them; but I never found them useful for anything anyway.

July 07, 2015

What's New, Some New Tutorials, and PIXLS!

What's been going on?! A bunch!

In case you've not noticed around here, I've been transitioning tutorials and photography related stuff over to PIXLS.US.

I built that site from scratch, so it's taken a bit of my time... I've also been slowly porting some of my older tutorials that I thought would still be useful over there. I've also been convincing all sorts of awesome folks from the community to help out by writing/recording tutorials for everyone, and we've already got quite a few nice ones over there:

A Blended Panorama with PhotoFlow

Basic Landscape Exposure Blending with GIMP and G'MIC

An Open Source Portrait (Mairi)

Skin Retouching with Wavelet Decompose

Luminosity Masking in darktable

Digital B&W Conversion (GIMP)

So just a gentle reminder that the tutorials have mostly moved to PIXLS.US. Head over there for the newest versions and brand-new material, like the latest post from the creator of PhotoFlow, Andrea Ferrero, on Panorama Exposure Blending with Hugin and PhotoFlow!

Also, don't forget to come by the forums and join the community at:


That's not to say I've abandoned this blog, just that I've been busy trying to kickstart a community over there! I'm also accepting submissions and/or ideas for new articles. Feel free to email me!

PhotoFlow Blended Panorama Tutorial

PhotoFlow Blended Panorama Tutorial

Andrea Ferrero has been busy!

After quite a bit of back and forth I am quite happy to be able to announce that the latest tutorial is up: A Blended Panorama with PhotoFlow! This contribution comes from Andrea Ferrero, the creator of a new project: PhotoFlow.

In it, he walks through a process of stitching a panorama together using Hugin and blending multiple exposure options through masking in PhotoFlow (see lede image). The results are quite nice and natural looking!

Local Contrast Enhancement: Gaussian vs. Bilateral

Andrea also runs through a quick video comparison of doing LCE using both a Gaussian and Bilateral blur, in case you ever wanted to see them compared side-by-side:

He started a topic post about it in the forums as well.

Thoughts on the Main Page

Over on discuss I started a thread to talk about some possible changes to the main page of the site.

Specifically I’m talking about the background lede image at the very top of the main page:

I had originally created that image as a placeholder in Blender. The site is intended as a photography-centric site, so the natural thought was why not use photos as a background instead?

The thought is to rotate through images provided by the community. I’ve also mocked up two versions of using an image as a background.

  1. Simple replacement of the image with photos from the community. This is currently the most popular option in the poll on the forum. The image will be rotated amongst images provided by community members. I just need to make sure that the text shown is legible over whatever the image may be…
  2. Full viewport splash version, where the image fills the viewport. This is not very popular judging from the feedback I received (thank you akk, ankh, muks, DrSlony, LebedevRI, and others on IRC!). I personally like the idea but I can understand why others may not.

If anyone wants to chime in (or vote in the poll) then head over to the forum topic and let us know your thoughts!

Also, a big thank you to Morgan Hardwood for allowing us to use that image as a background example. If you want a nice way to support F/OSS development, it just so happens that Morgan is a developer for RawTherapee, and a print of that image is available for purchase. Contact him for details.

July 06, 2015

The votes are in!

Here’s the definitive list of stretch goal votes. A whopping 94.1% of eligible voters (622 of 661) actually voted: 94.9% of Kickstarter backers and 84.01% of PayPal backers. Thank you again, everyone who pledged, donated and voted, for your support!

Rank  Votes  Share   Stretch goal (Phabricator task)
0     N/A    –       Extra Lazy Brush: interactive tool for coloring the image in a couple of strokes (T372)
1     120    19.29%  10. Animated file formats export: animated gif, animated png and spritemaps (T116)
2     56     9.00%   8. Rulers and guides: drag out guides from the rulers and generate, save and load common sets of guides. Save guides with the document. (T114)
3     51     8.20%   1. Multiple layer selection improvements (T105)
4     48     7.72%   19. Make it possible to edit brush tips in Krita (T125)
5     42     6.75%   21. Implement a Heads-Up-Display to manipulate the common brush settings: opacity, size, flow and others. (T127)
6     38     6.11%   2. Update the look & feel of the layer docker panel (1500 euro stretch goal) (T106)
7     37     5.95%   22. Fuzzy strokes: make the stroke consistent, but add randomness between strokes. (T166)
8     33     5.31%   5. Improve grids: add a grid docker, add new grid definitions, snap to grid (T109)
9     31     4.98%   6. Manage palettes and color swatches (T112)
10    28     4.50%   18. Stacked brushes: stack two or more brushes together and use them in one stroke (T124)

These didn’t make it, but we’re keeping them for next time:

Rank  Votes  Share   Stretch goal
11    23     3.70%   4. Select presets using keyboard shortcuts
12    19     3.05%   13. Scale from center pivot: right now, we transform from the corners, not the pivot point.
13    19     3.05%   9. Composition helps: vector objects that you can place and that help with creating rules of thirds, spiral, golden mean and other compositions.
14    18     2.89%   7. Implement a Heads-Up-Display for easy manipulation of the view
15    17     2.73%   20. Select textures on the fly to use in textured brushes
16    9      1.45%   15. HDR gradients
17    9      1.45%   11. Add precision to the layer move tool
18    8      1.29%   17. Gradient map filter
19    5      0.80%   16. On-canvas gradient previews
20    5      0.80%   12. Show a tooltip when hovering over a layer with content to show which one you’re going to move.

July 04, 2015

Create a signed app with Cordova

I wrote last week about developing apps with PhoneGap/Cordova, but there's one thing I didn't cover: when you type cordova build, you're building only a debug version of your app. If you want to release it, you have to sign it. Figuring out how turned out to be a little tricky.

Most pages on the web say you can sign your apps by creating platforms/android/ant.properties with the same keystore information you'd put in an ant build, then running cordova build android --release.

But Cordova completely ignored my ant.properties file and went on creating a debug .apk file and no signed one.

I found various other purported solutions on the web, like creating a build.json file in the app's top-level directory ... but that just made Cordova die with a syntax error inside one of its own files. This is the only method that worked for me:

Create a file called platforms/android/release-signing.properties, and put this in it:

// if you don't want to enter the password at every build, use this:
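Only the comment line of the original listing survived here; the keystore properties it accompanied typically look like the following for Cordova Android builds. All values are placeholders you must replace with your own keystore path, alias, and passwords:

```properties
storeFile=/path/to/your.keystore
storeType=jks
keyAlias=your_key_alias
// if you don't want to enter the password at every build, use these:
keyPassword=your_key_password
storePassword=your_store_password
```

If you omit the two password lines, the build will prompt for them instead.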

Then cordova build android --release finally works, and creates a file called platforms/android/build/outputs/apk/android-release.apk.

July 02, 2015

libmypaint is ready for translation

MyPaint is well on its way to feature and string freeze, but its brush library is stable enough to be translated now.

You can help! The developers of WebLate, a really nice online translation tool, have offered us hosting for translations.

Translation status: Graphical status badge for all mypaint project translations
Join: https://hosted.weblate.org/engage/mypaint/

If you’re fluent in a language other than English, or know a FOSS-friendly person who is, you can help with the translation effort. Please share the link above as widely as you can, or dive in yourself and start translating brush setting texts. It’s a surprisingly simple workflow: you translate program texts one at a time resolving any discrepancies and correcting problems the system has discovered. Each text has a link back to the source code too, if you want to see where it was set up. At the end of translating into your language you get a nice fully green progress bar, a glowing sense of satisfaction, and your email address in the commit log ☺

If you want to help out and have good language skills, we’d really appreciate your assistance. Helping to translate a project is a great way of learning how it works internally, and it’s one of the easiest and most effective ways of getting involved in the Free/Open Source culture and putting great software into people’s hands, worldwide.

July 01, 2015

Web Open Font Format (WOFF) for Web Documents

The Web Open Font Format (WOFF; shown here using the Aladin font) is several years old, yet it took some time to reach a point where WOFF is almost painless to use on the Linux desktop. WOFF is based on OpenType-style fonts and is in some ways similar to the better-known TrueType Font (.ttf) format. TTF fonts are widely known and used on the Windows platform; these feature-rich fonts provide high-quality text rendering for the system and for local office and design documents. WOFF aims to close the gap by making those features available on the web. With these fonts it becomes possible to show nice-looking fonts on paper and in web presentations in almost the same way.

To make WOFF a success, several open source projects joined forces, among them Pango and Qt, and contributed to HarfBuzz, an OpenType text shaping engine. Firefox and other web engines can handle WOFF inside SVG web graphics and HTML web documents using HarfBuzz. Inkscape, at least since version 0.91.1, uses HarfBuzz too for text inside SVG web graphics. Since Inkscape can produce PDFs, designing for both the web and the print world at the same time becomes easier on Linux.

Where to find and get WOFF fonts?
Open Font Library and Google host huge font collections, and there are more out on the web.

How to install WOFF?
To use them inside Inkscape, one needs to install the fonts locally. Just copy the fonts to your personal ~/.fonts/ directory and run

fc-cache -f -v

After that, the fonts are visible inside a newly started Inkscape.

How to deploy SVG and WOFF on the Web?
Thankfully, using WOFF in SVG documents is similar to HTML documents. However, simply uploading an Inkscape SVG to the web as-is will not be enough to show WOFF fonts. While viewing the document locally is fine, Firefox and friends need to find those fonts independently of the locally installed ones. Right now you need to manually edit your Inkscape SVG to point to the online location of your fonts. To do that, open the SVG file in a text editor and place a CSS font-face reference right after the <svg> element, like:

<style type="text/css">
@font-face {
  font-family: "Aladin";
  src: url("fonts/Aladin-Regular.woff") format("woff");
}
</style>
How to print an Inkscape SVG document containing WOFF?
Just convert to PDF from Inkscape’s file menu. Inkscape takes care of embedding the needed fonts and creates a portable PDF.

In case your preferred software is not yet WOFF-ready, try the woff2otf python script for converting to the old TTF format.

Hope this small post gets some of you on the font fun path.

Fedora Hubs Update!!!


The dream is real – we are cranking away, actively building this very cool, open source, socially-oriented collaboration platform for Fedora.

Meghan Richardson, the Fedora Engineering Team’s UX intern for this summer, and I have been cranking out UI mockups over the past month or so (Meghan way more than me at this point :) ).

Screenshot from 2015-06-23 09-24-44

We also had another brainstorming session. We ran the Fedora Hubs Hackfest, a prequel to the Fedora Release Engineering FAD a couple of weeks ago.

After a lot of issues with the video, the full video of the hackfest is now finally available (which is the reason for the delay in my posting this :) ).

Let’s talk about what went down during this hackfest and where we are today with Fedora Hubs:

What is Fedora Hubs, Exactly?

(Skip directly to this part of the video)

We talked about two elevator pitches for explaining it:

  • It’s an ‘intranet’ page for the Fedora Project. You work on all these different projects in Fedora, and it’s a single place you can get information on all of them as a contributor.
  • It’s a social network for Fedora contributors. One place to go to keep up with everything across the project in ways that aren’t currently possible. We have a lot of places where teams do things differently, and it’s a way to provide a consistent contributor experience across projects / teams.

Who are we building it for?

(Skip directly to this part of the video)

  • New Fedora Contributors – A big goal of this project is to enable more contributors and make bootstrapping yourself as a Fedora contributor less of a daunting task.
  • Existing Fedora Contributors – They already have a workflow, and already know what they’re doing. We need to accommodate them and not break their workflows.

The main philosophy here is to provide a compelling user experience for new users that can potentially enhance the experience for existing contributors but at the very least will never disrupt the current workflow of those existing contributors. Let’s look at this through the example of IRC, which Meghan has mocked up in the form of a web client built into Fedora Hubs aimed at new contributor use:

If you’re an experienced contributor, you’ve probably got an IRC client, you’re probably used to using IRC, and you wouldn’t want to use a web client. IRC, though, is a barrier to new contributors. It’s more technical than the types of chat systems they’re accustomed to. It becomes another hurdle on top of the 20 or so other hurdles they have to clear in the process of joining as a contributor – completely unrelated to the actual work they want to do (whatever it is – design, marketing, docs, ambassadors, etc.)

New contributors should be able to interact with the hubs IRC client without having to install anything else or really learn a whole lot about IRC. Existing contributors can opt into using it if they want, or they can simply disable the functionality in the hubs web interface and continue using their IRC clients as they have been.

Hackfest Attendee Introductions

(Skip directly to this part of the video)

Next, Paul suggested we go around the room and introduce ourselves for anybody interested in the project (and watching the video.)

  • Máirín Duffy (mizmo) – Fedora Engineering UX designer working on the UX design for the hubs project
  • Meghan Richardson (mrichard) – Fedora Engineering UX intern from MSU also working on the UX design for the hubs project
  • Remy Decausemaker (decause) – Fedora Community lead, Fedora Council member
  • Luke Macken (lmacken) – Works on Fedora Infrastructure, release engineering, tools, QA
  • Adam Miller (maxamillion) – Works on Release engineering for Fedora, working on build tooling and automation for composes and other things
  • Ralph Bean (threebean) – Software engineer on Fedora Engineering team, will be spending a lot of time working on hubs in the next year
  • Stephen Gallagher (sgallagh) – Architect at Red Hat working on the Server platform, on Fedora’s Server working group, interested in helping onboard as many people as possible
  • Aurélien Bompard (abompard) – Software developer, lead developer of Hyperkitty
  • David Gay (oddshocks) – Works on Fedora infrastructure team and cloud teams, hoping to work on Fedora Hubs in the next year
  • Paul Frields (stickster) – Fedora Engineering team manager
  • Pierre-Yves Chibon (pingou) – Fedora Infrastructure team member working mostly on web development
  • Patrick Uiterwijk (puiterwijk) – Member of Fedora’s system administration team
  • Xavier Lamien (SmootherFrOgZ) – Fedora Infrastructure team member working on Fedora cloud SIG
  • Atanas Beloborodov (nask0) – A very new contributor to Fedora, he is a web developer based in Bulgaria.
  • (Matthew Miller and Langdon White joined us after the intros)

Game to Explore Fedora Hub’s Target Users

(Skip directly to this part of the video)

We played a game called ‘Pain Gain’ to explore both of the types of users we are targeting: new contributors and experienced Fedora contributors. We started talking about Experienced Contributors. I opened up a shared Inkscape window and made two columns: “pain” and “gain:”

  • For the pain column, we came up with things that are a pain for experienced contributors the way our systems / processes currently work.
  • For the gain column, we listed out ways that Fedora Hubs could provide benefits for experienced contributors.

Then we rinsed and repeated for new contributors:


While we discussed the pains/gains, we also came up with a lot of sidebar ideas that we documented in an “Idea Bucket” area in the file:


I was worried that this wouldn’t work well in a video chat context, but I screen-shared my Inkscape window and wrote down suggestions as they were brought up and I think we came out with a useful list of ideas. I was actually surprised at the number of pains and gains on the experienced contributor side: I had assumed new contributors would have way more pains and gains and that the experienced contributors wouldn’t have that many.

Prototype Demo

(Skip directly to this part of the video)

Screenshot from 2015-06-23 12-57-27

Ralph gave us a demo of his Fedora Hubs prototype – first he walked us through how it’s built, then gave the demo.


The README contains a full explanation of how the prototype works, so I won’t reiterate it here. Some points that came up during this part of the meeting:

  • Would we support hubs running without Javascript? The current prototype completely relies on JS. Without JS, it would be hard to do widgets like the IRC widget. Some of the JS frameworks come with built-in fail modes. There are some accessibility issues with ways of doing things with JS, but a good design can ensure that won’t happen. For the most part, we are going to try to support what a default Fedora workstation install could support.
  • vi hotkeys for Hubs would be awesome. :) Fedora Tagger does this!
  • The way the widgets work now, each widget has to define a data function that gets called with a session object, and it has to return JSON-ifiable python code. That gets stored in memcached and is how the wsgi app and backend communicate. If you can write a data function to return JSON and write a template the data gets plugged into – that’s mainly what’s needed. Take a look at the stats widget – it’s pretty simple!
  • All widgets also need a ‘should_invalidate()’ function that lets the system know what kinds of information apply to which widgets. Every fedmsg has to go through every widget to see if it invalidates a given widget’s data – we were worried that this would result in a terrible performance issue, but by the end of the hackfest we had that figured out.
  • Right now the templates are Jinja2, but Ralph thinks we should move to client-side (JavaScript) templates. The reason is that when updated data gets pushed over websockets from the bus, it can involve wasteful communication any time new data changes come across – it’s simpler if the widget doesn’t have to request the templates because they are already there in the client.
  • Angular could be a nice client-side way of doing the templates, but Ralph had heard rumors that AngularJS 2 was going to support only Chrome, and that AngularJS 1.3 and 2 aren’t compatible. nask0 has a lot of experience with Angular, though, and does not think v2 is going to be Chrome-only.
  • TODO: Smoother transitions for when widgets pop into view as they load on an initial load.
  • Langdon wondered if there would be a way to consider individual widgets being able to function as stand-alones on desktops or mobile. The raw zeromq pipes could be hooked up to do this, but the current design uses EventSource which is web-specific and wouldn’t translate to say a desktop widget. Fedora Hubs will emit its own fedmsgs too, so you could build a desktop widget using that as well.
  • Cache invalidation issues were the main driver of the slowness in Fedora Packages, but now we have a cache that updates very quickly, so we get constant-time access when delivering those pages.
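The widget contract described in these notes can be sketched roughly as follows. Every name here is hypothetical and merely stands in for the actual Fedora Hubs code, which uses real plumbing (memcached, fedmsg, wsgi) elided here:

```python
# Hypothetical sketch of the Fedora Hubs widget contract; function names,
# message topics, and the dict-based cache are all illustrative stand-ins.

cache = {}  # stands in for memcached


def data(session, widget, user):
    """Return JSON-ifiable data that gets plugged into the widget's template."""
    # e.g. a stats widget might summarize a user's recent activity
    return {"username": user, "message_count": 42}


def should_invalidate(message, widget):
    """Report whether an incoming bus message invalidates this widget's cache."""
    return message.get("topic", "").startswith("org.fedoraproject.stats")


def render(widget_id, message, session=None, user="alice"):
    """Recompute the cached data only when a relevant message arrives."""
    if widget_id not in cache or should_invalidate(message, None):
        cache[widget_id] = data(session, None, user)
    return cache[widget_id]
```

The point of should_invalidate() is exactly the performance concern raised above: every fedmsg passes through it, but only matching topics trigger a recomputation, so irrelevant bus traffic leaves the cache untouched.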

Mockup Review

Screenshot from 2015-06-23 13-48-56

Next, Meghan walked us through the latest mockups for Fedora Hubs (the latest at the time, that is :) we have more now!), many based on suggestions and ideas from our May meetup (the 2nd hubs video chat).

Creating / Editing Hubs

(Skip directly to this part of the video)

First, she walked us through her mockups for creating and editing hubs – how a hub admin would be able to modify and set up their hub. (Mockup: download from ‘Raw’ and view in Inkscape to see all screens.) Things you can modify include the welcome message, the colors, which widgets get displayed, the configuration of those widgets (e.g. which IRC channel is associated with the hub?), and how to add widgets, among many other things.

Meghan also put together a blog post detailing these mockups.

One point that came up here: when users edit their own hubs, they can’t associate an IRC channel with it, but rather a nick and a network, to enable their profile viewers to PM them.

We talked about hub admins vs FAS group admins. Should they be different or exactly the same? We could make a new role in FAS – “hub admin” – and store it there if it’s another one. Ralph recommended keeping it simple by having FAS group admins and hub admins one and the same. Some groups are more strict about group admins in FAS, some are not. Would there be scenarios where we’d want people to be able to admin the FAS group for a team but not be able to modify the hub layout (or vice-versa?) Maybe nesting the roles – if you’re a FAS admin you can be FAS admin + hub admin, if you’re a hub admin you can just admin the hub but not the FAS group.

Another thing we talked about is theming hubs. Luke mentioned that Reddit allows admins free rein in modifying the CSS. Matthew mentioned having a set of backgrounds to choose from, like former Fedora wallpapers. David cautioned that we want to maintain some uniformity across the hubs to help enable new contributors – he gave the example of Facebook, where key navigational elements are not configurable. I suggested maybe they could only tweak certain CSS classes. Any customizations could be stored in the database.

Another point: members vs. subscribers on a hub. Subscribers ‘subscribe’ to a hub, members ‘join’ a hub. Subscribing to a hub adds it to your bookmarks in the main horizontal nav bar, and enables certain notifications for that hub to appear in your feed. We talked about different vocabulary for ‘subscribe’ vs ‘join’ – instead of ‘subscribe’ we talked about ‘following’ or ‘starring’ (as in GitHub) vs joining. (Breaking News :) Since then Meghan has mocked up the different modes for these buttons and added the “star” concept! See below.)


We had a bit of an extended discussion about the many different ways someone could be affiliated with a team/project that has a hub. Is following/subscribing too non-committal? Should we have a rank system so you could move your way up the ranks, or is that redundant gamification given the badge system we have in place? (Maybe we can assign ranks based on badges earned?) Part of the issue here is helping others identify the authority of the people they’re interacting with, but another part is helping people feel more a part of the community and feel like valued members. Subscribing is more like following a news feed; being a member is more like being part of the team.

Joining Hubs

(Skip directly to this part of the video)

The next set of mockups Meghan went through showed us the workflow of how a user requests membership in a given hub and how the admin receives the membership request and handles it.

We also tangented^Wtalked about the welcome message on hubs and how to dismiss or minimize it. I think we concluded that we would let people collapse or remove it, and if they remove it we’ll show a notification that they can view it again at any time by clicking on “Community Rules and Guidelines.”

Similarly, if an admin dismisses the notification that a user has requested access to something, intending to tend to it later, it will appear in the admin’s personal stream for later retrieval.

We talked about how to make action items in a user’s notification feed appear differently than informational notifications; some kind of different visual design for them. One idea that came up was having tabs at the top to filter between types of notifications (action, informational, etc.) I explained how we were thinking about having a contextual filter system in the top right of each ‘card’ or notification to let users show or hide content too. Meghan is working on mockups for this currently.

David had the idea of having action items assigned to people appear as actions within their personal stream… since then I have mocked this up:


Personal Profiles

(Skip directly to this part of the video)

Next, Meghan walked us through the mockups she worked on for personal profiles / personal streams. One widget she mocked up is a personal library. Other widgets included a display of badges earned, the hubs you’re a member of, IRC private messages, and a personal profile.

Meghan also talked about privacy with respect to profiles and we had a bit of a discussion about that. Maybe, for example, by default your library could be private, maybe your stream only shows your five most recent notifications and if someone is approved (using a handshake) as a follower of yours they can see the whole stream. Part of this is sort of a bike lock thing…. everything in a user’s profile is broadcast on fedmsg, but having it easily accessible in one place in a nice interface makes it a lot easier (like not having a lock on your bike.) One thing Langdon brought up is that we don’t want to give people a false sense of privacy. So we have to be careful about the messaging we do around it. We thought about whether or not we wanted to offer this intermediate ‘preview’ state for people’s profiles for those viewing them without the handshake. An alternative would be to let the user know who is following them when they first start following them and to maintain a roster of followers so it is clear who is reading their information.

Here’s the blog post Meghan wrote up on the joining hubs and personal profile mockups with each of the mockups and more details.

Bookmarks / main nav

(Skip directly to this part of the video)

The main horizontal navbar in Fedora Hubs is basically a bookmarks bar of the hubs you’re most interested in. Meghan walked us through the bookmarks mockups – she also covered these mockups in detail on her bookmarks blog post.


Yes. Yes, it is.

So you may be wondering when this is going to be available. Well, we’re working on it. We could always use more help….


Where’s stuff happening?

How does one help? Well, let me walk you through where things are taking place, so you can follow along more closely than my lazy blog posts if you so desire:

  • Chat with us: #fedora-hubs on irc.freenode.net is where most of the folks working on Fedora Hubs hang out, day in and day out. threebean’s hooked up a bot in there too that pushes notifications when folks check in code or mockup updates.
  • Mockups repo: Meghan and I have our mockups repo at https://github.com/fedoradesign/fedora-hubs, which we both have hooked up via Sparkleshare. (You are free to check it out without Sparkleshare and poke around as you like, of course.)
  • Code repo: The code is kept in a Pagure repo at https://pagure.io/fedora-hubs. You’ll want to check out the ‘develop’ branch and follow the README instructions to get all setup. (If I can do it, you can. :) )
  • Feature planning / Bug reporting: We are using Pagure’s issue tracker at https://pagure.io/fedora-hubs/issues to plan out features and track bugs. One way we are using this which I think is kind of interesting – it’s the first time I’ve used a ticketing system in exactly this way – is that for every widget in the mockups, we’ve opened up a ticket that serves as the design spec with mockups from our mockup repo embedded in the ticket.
  • Project tracking: This one is a bit experimental. But the Fedora infra and webdev guys set up http://taiga.fedoraproject.org – an open source kanban board – that Meghan and I started using to keep track of our todo list since we had been passing post-it notes back and forth and that gets a bit unwieldy. It’s just us designers using it so far, but you are more than welcome to join if you’d like. Log in with your Fedora staging password (you can reset it if it’s not working and it’ll only affect stg) and ping us in #fedora-hubs to have your account added to the kanban board.
  • Notification Inventory: This is an inventory that Meghan started of the notifications we’ve come up with for hubs in the mockups.
  • Nomenclature Diagram for Fedora Hubs: We’ve got a lot of neat little features and widgets and bits and bobs in Fedora Hubs, but it can be confusing talking about them without a consistent naming scheme. Meghan created this diagram to help sort out what things are called.

How can I help?

Well, I’m sure glad you asked. :) There are a few ways you can easily dive in and help right now, from development to design to coming up with cool ideas for features / notifications:

  1. Come up with ideas for notifications you would find useful in Fedora Hubs! Add your ideas to our notification inventory and hit us up in #fedora-hubs to discuss!
  2. Look through our mockups and come up with ideas for new widgets and/or features in Fedora Hubs! The easiest way to do this is probably to peruse the mini specs we have in the Pagure issue tracker for the project. But you’re free to look around our mockups repo as well! You can file your widget ideas in Pagure (start the issue name with “Idea:”) and we’ll review them and discuss!
  3. Help us develop the widgets we’ve planned! We’ve got little mini design specs for the widgets in the Fedora Hubs Pagure issue tracker. If a widget ticket is unassigned (and most are!), it’s open and free for you to start hacking on! Ask Meghan and me any questions in IRC about the spec / design as needed. Take a look at the stats widget that Ralph reviewed in explaining the architecture during the hackfest, and watch Ralph’s demo and explanation of how Hubs is built to see how the widgets are put together.
  4. There are many other ways to help (ask around in #fedora-hubs to learn more), but I think these have a pretty low barrier to entry depending on your skillset, and they are clearly documented enough that you can be confident you’re working on tasks that need to get done and aren’t duplicating efforts!

    Hope to see you in #fedora-hubs! :)

June 30, 2015

Parsing Option ROM Firmware

A few weeks ago an issue was opened on fwupd by pippin. He was basically asking for a command to return all the hashes of the firmware images installed on his hardware, which I initially didn’t really see the point of doing. However, after a few hours’ research into all the malware that can hide in the VBIOS of graphics cards, the option ROM of network cards, and keyboard matrix EC processors, I was suitably worried too. I figured fixing the issue was a good idea. Of course, malware could perhaps hide itself (e.g. by hiding in an unused padding segment and masking itself out on read), but this at least raises the bar from a security audit point of view, and is somewhat easier than opening the case and attaching a SPI programmer to the chip itself.

Fast forward a few nights. We can now verify ATI, NVIDIA, INTEL and ColorHug firmware. I’ve not got any other hardware with ROM that I can read from userspace, so this is where I need your help. I need willing volunteers to compile fwupd from git master (or rebuild my srpm) and then run:

cd fwupd/src
find /sys/devices -name rom -exec sudo ./fwupdmgr dump-rom {} \;

All being well you should see something like this:

/sys/devices/pci0000:00/0000:00:01.0/0000:01:00.0/rom -> f21e1d2c969dedbefcf5acfdab4fa0c5ff111a57 [Version:]

If you see something just like that, you’re not super helpful to me. If you see Error reading from file: Input/output error then you’re also not so helpful, as the kernel module for your hardware is exporting a rom file but not hooking up the read vfuncs. If you get an error like Failed to detect firmware header [8950] or Firmware version extractor not known then you’ve just become interesting. If that’s you, can you send the rom file to richard_at_hughsie.com as an attachment along with any details you know about the hardware. Thanks!
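Incidentally, the hash that dump-rom prints looks like a plain SHA-1 of the ROM contents, so you can cross-check a readable ROM by hand. A rough sketch; the assumption about the hash format is mine (check the fwupd sources before relying on it), though the sysfs enable/disable step is how the kernel exposes PCI expansion ROMs:

```shell
# Hypothetical cross-check, assuming the reported hash is a plain
# SHA-1 of the ROM bytes (verify against the fwupd sources).
rom_sha1() {
    # The kernel requires writing "1" to the sysfs rom file before
    # its contents can be read, and "0" to disable it again afterwards.
    echo 1 > "$1"
    sha1sum "$1" | cut -d' ' -f1
    echo 0 > "$1"
}
# Usage (as root):
#   rom_sha1 /sys/devices/pci0000:00/0000:00:01.0/0000:01:00.0/rom
```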


Interview with Livio Fania


Could you tell us something about yourself?

I’m Livio Fania. I’m an Italian illustrator living in France.

Do you paint professionally, as a hobby artist, or both?

I paint professionally.

What genre(s) do you work in?

I make illustrations for the press, posters and children’s books. My universe is made of geometrical shapes, stylized characters and flashy colors.

Whose work inspires you most — who are your role models as an artist?

I like the work of João Fazenda, Riccardo Guasco and Nick Iluzada among many others.

What makes you choose digital over traditional painting?

I haven’t made a definitive choice. Even though I work mainly digitally, I still have a lot of fun using traditional tools such as colored pencils, brush pens and watercolors. Besides, in 90% of cases I draw by hand, scan, and only at the end of the process pick up my graphic tablet stylus.

I do not think that working digitally means being faster. On the contrary, I can work more quickly by hand, especially in the first sketching phases. What digital art allows is CONTROL over the whole process. If you keep your layer stack well organized, you can always edit your art without losing the original version, which is very useful when your client asks for changes. If you work with traditional tools and you drop your ink in the wrong place, you can’t press Ctrl+Z.


How did you find out about Krita?

I discovered Krita through a video conference posted on David Revoy’s blog. Even if I don’t particularly like his universe, I think he is probably the most influential artist using FLOSS tools, and I’m very grateful to him for sharing his knowledge with the community. Previously, I used to work with MyPaint, mainly for its minimalist interface which was perfect for the small laptop I had. Then I discovered that Krita was more versatile and better developed, so I took some time to learn it and now I could not do without it.

What was your first impression?

At first I thought it was not the right tool for me. Most digital artists use Krita for its painting features, like blending modes and textured brushes, which allow you to obtain realistic light effects. Personally, I think that realism can be very boring, which is why I paint in a stylized way with uniform tints. Besides, I like to limit my range of possibilities to a small set of elements: palettes of 5-8 colors and 2-3 brushes. So at the beginning I felt like Krita had too many options for me. But little by little I adapted the GUI to my workflow. Now I really think everybody can find their own way to use Krita, no matter their painting style.

What do you love about Krita?

Two elements I really love:
1) The favourite presets docker, which pops up with a right click. It contains everything you need to keep painting, and it is a pleasure to control everything at a glance.
2) The Composition tab, which allows you to completely change the color palette or experiment with new effects without losing the original version of a drawing.

What do you think needs improvement in Krita? Is there anything that really annoys you?

I think that selections are not intuitive at all and could be improved. When dealing with complex selections, it is time-consuming to check the selection mode in the options tab (replace, intersect, subtract) and proceed accordingly. Especially considering that by default the selection mode is whatever you had when you last used the tool (but in the meantime you have probably forgotten it). I think it would be much better if, every time a selection tool is picked up, it were in “normal” mode by default, and one could then switch to a different mode by pressing Ctrl/Shift.

What sets Krita apart from the other tools that you use?

Krita is by far the most complete digital painting tool developed on Linux. It is widely customizable (interface, workspaces, shortcuts, tabs) and it offers a very powerful brush engine, even compared to proprietary applications. Also, a very important aspect is that the Krita Foundation has a solid organization and develops Krita continuously thanks to donations, Kickstarter campaigns, etcetera. This is particularly important in the open source community, where sometimes well-designed projects disappear because they are not supported properly.

If you had to pick one favourite of all your work done in Krita so far, what would it be, and why?

The musicians in the field.

What techniques and brushes did you use in it?

As I said, I like to have a limited set of presets. In this illustration I mostly used the “pastel_texture_thin” brush, which is part of the default set of brushes in Krita. I love its texture and the fact that it is pressure sensitive. Also, I applied a global bitmap texture on an overlay layer.

Where can people see more of your work?


Anything else you’d like to share?

Yes, I would like to add that I also release all my illustrations under a Creative Commons license, so you can Download my portfolio, copy it and use it for non-commercial purposes.

June 29, 2015

Approaching string freeze and beta release cycle

It’s time to start putting the next release together, folks!

I would like to announce a freeze of all translated strings on Sat 11th July 2015, and then begin working on the first beta release properly. New features raised as a Github Pull Request by the end of Sat 4th July stand a chance of getting in, but midnight on that day is the deadline for new code submissions if they touch text the user will see on screen.

The next release will be numbered 1.2.0; we are currently in the alpha phase of development for it, but that phase will end shortly.

Fixing the remaining alpha-cycle bugs is going well. We currently only have four bugs left in the cycle milestone, and that number will diminish further shortly. The main goal right now is to merge and test any pending small features that people want to get into 1.2.0, and to thoroughly spellcheck and review the English-language source strings so that things will be better for our translators.

Expect announcements about the translation effort, and dates for beta releases shortly.

Just Say It!

While I love typing on the small on-screen keyboard on my phone, it is much easier to just talk. When we did the HUD we added speech recognition there, processing the audio on the device, which gave the great experience of controlling your phone with your voice. That worked well with the limited command set exported by the application, but doing generic voice recognition today requires more processing power than a phone can reasonably provide. Which made me pretty excited to find out about HP's IDOL on Demand service.

I made a small application for Ubuntu Phone that records the audio you speak at it, and sends it up to the HP IDOL on Demand service. The HP service then does the speech recognition on it and returns the text back to us. Once I have the text (with help from Ken VanDine) I set it up to use Content Hub to export the text to any other application that can receive it. This way you can use speech recognition to write your Telegram notes, without Telegram having to know anything about speech at all.

The application is called Just Say It! and is in the Ubuntu App Store right now. It isn't beautiful, but definitely shows what can be done with this type of technology today. I hope to make it prettier and add additional features in the future. If you'd like to see how I did it you can look at the source.

As an aside: I can't get any of the non-English languages to work. This could be because I'm not a native speaker of those languages. If people could try them I'd love to know if they're useful.

Chollas in bloom, and other early summer treats

[Bee in cholla blossom] We have three or four cholla cacti on our property. Impressive, pretty cacti, but we were disappointed last year that they never bloomed. They looked like they were forming buds ... and then one day the buds were gone. We thought maybe some animal ate them before the flowers had a chance to open.

Not this year! All of our chollas have gone crazy, with the early rain followed by hot weather. Last week we thought they were spectacular, but they just kept getting better and better. In the heat of the day, it's a bee party: they're aswarm with at least three species of bees and wasps (I don't know enough about bees to identify them, but I can tell they're different from one another) plus some tiny gnat-like insects.

I wrote a few weeks ago about the piñons bursting with cones. What I didn't realize was that these little red-brown cones are all the male, pollen-bearing cones. The ones that bear the seeds, apparently, are the larger bright green cones, and we don't have many of those. But maybe they're just small now, and there will be more later. Keeping fingers crossed. The tall spikes of new growth are called "candles" and there are lots of those, so I guess the trees are happy.

[Desert willow in bloom] Other plants besides cacti are blooming. Last fall we planted a desert willow from a local native plant nursery. The desert willow isn't actually native to White Rock -- we're around the upper end of its elevation range -- but we missed the Mojave desert willow we'd planted back in San Jose, and wanted to try one of the Southwest varieties here. Apparently they're all the same species, Chilopsis linearis.

But we didn't expect the flowers to be so showy! A couple of blossoms just opened today for the first time, and they're as beautiful as any of the cultivated flowers in the garden. I think that means our willow is a 'Rio Salado' type.

Not all the growing plants are good. We've been keeping ourselves busy pulling up tumbleweed (Russian thistle) and stickseed while they're young, trying to prevent them from seeding. But more on that in a separate post.

As I write this, a bluebird is performing short aerobatic flights outside the window. Curiously, it's usually the female doing the showy flying; there's a male out there too, balancing himself on a piñon candle, but he doesn't seem to feel the need to show off. Is the female catching flies, showing off for the male, or just enjoying herself? I don't know, but I'm happy to have bluebirds around. Still no definite sign of whether anyone's nesting in our bluebird box. We have ash-throated flycatchers paired up nearby too, and I'm told they use bluebird boxes more than the bluebirds do. They're both beautiful birds, and welcome here.

Image gallery: Chollas in bloom (and other early summer flowers).

June 28, 2015


A couple of FreeCAD architecture/BIM related questions that I get often: Is FreeCAD ready enough to do serious BIM work? This is a very complex question, and the answer could be yes or no, depending on what's important to you. It of course also depends on what is BIM for you, because clearly enough, there isn't a universal...

June 26, 2015

A Blended Panorama with PhotoFlow

A Blended Panorama with PhotoFlow

Creating panoramas with Hugin and PhotoFlow

The goal of this tutorial is to show how to create a sort-of-HDR panoramic image using only Free and Open Source tools. To explain my workflow I will use the image below as an example.

This panorama was obtained from the combination of six views, each consisting of three bracketed shots at -1EV, 0EV and +1EV exposure. The three exposures are stitched together with the Hugin suite, and then exposure-blended with enfuse. The PhotoFlow RAW editor is used to prepare the initial images and to finalize the processing of the assembled panorama. The final result of the post-processing is below:

Final result Final result of the panorama editing (click to compare to simple +1EV exposure)

In this case I have used the brightest image for the foreground, the darkest one for the sky and clouds, and an exposure-fused one for a seamless transition between the two.

The rest of the post will show how to get there…

Before we continue, let me advise you that I’m not a pro, and that the tips and “recommendations” that I’ll be giving in this post are mostly derived from trial-and-error and common sense. Feel free to correct/add/suggest anything… we are all here to learn!

Taking the shots

Shooting a panorama requires a bit of preparation and planning to make sure that one can get the best out of Hugin when stitching the shots together. Here is my personal “checklist”:

  • Manual Focus - set the camera to manual focus, so that the focus plane is the same for all shots
  • Overlap Shots - make sure that each frame has sufficient overlap with the previous one (something between 1/2 and 1/3 of the total area), so that hugin can find enough control points to align the images and determine the lens correction parameters
  • Follow A Straight Line - when taking the shots, try to follow as much as possible a straight line (keeping for example the horizon at the same height in your viewfinder); if you have a tripod, use it!
  • Frame Appropriately - to maximize the angle of view, frame vertically for a horizontal panorama (and vice-versa for a vertical one)
  • Leave Some Room - frame the shots a bit wider than needed, to avoid bad surprises when cropping the stitched panorama
  • Fixed Exposure - take all shots with a fixed exposure (manual or locked) to avoid luminance variations that might not be fully compensated by hugin
  • Bracket if Needed - if you shoot during a sunny day, the brightness might vary significantly across the whole panorama; in this case, take three or more bracketed exposures for each view (we will see later how to blend them in the post-processing)

Processing the RAW files

If you plan to create the panorama starting from the in-camera Jpeg images, you can safely skip this section. On the other hand, if you are shooting RAW you will need to process and prepare all the input images for Hugin. In this case it is important to make sure that the RAW processing parameters are exactly the same for all the shots. The best is to adjust the parameters on one reference image, and then batch-process the rest of the images using those settings.

Using PhotoFlow

Loading and processing a RAW file is rather easy:

  1. Click the “Open” button and choose the appropriate RAW file from your hard disk; the image preview area will show at this point a grey and rather dark image

  2. Add a “RAW developer” layer; a configuration dialog will show up which allows you to access and modify all the typical RAW processing parameters (white balance, exposure, color conversion, etc… see screenshots below).

More details on the RAW processing in PhotoFlow can be found in this tutorial.

Once the result is OK, the RAW processing parameters need to be saved into a preset. This can be done in a couple of simple steps:

  1. Select the “RAW developer” layer and click on the “Save” button below the layers list widget (at the bottom-right of the PhotoFlow window)

  2. A file chooser dialog will pop up, where one chooses an appropriate file name and location for the preset and then clicks “Save”;
    the preset file name must have a “.pfp” extension

The saved preset then needs to be applied to all the RAW files in the set. Under Linux, PhotoFlow comes with a handy script that automates the process. The script is called pfconv and can be found here. It is a wrapper around the pfbatch and exiftool commands, and is used to process and convert a bunch of files to TIFF format. Save the script in one of the folders included in your PATH environment variable (for example /usr/local/bin) and make it executable:

sudo chmod a+x /usr/local/bin/pfconv

Processing all the RAW files of a given folder is quite easy. Assuming that the RAW processing preset is stored in the same folder under the name raw_params.pfp, run these commands in your preferred terminal application:

cd panorama_dir
pfconv -p raw_params.pfp *.NEF

Of course, you have to change panorama_dir to your actual folder and the .NEF extension to that of your RAW files.

Now go for a cup of coffee, and be patient… a panorama with three or five bracketed shots for each view can easily comprise more than 50 files, and the processing can take half an hour or more. Once the processing has completed, there will be one TIFF file for each RAW image, and the fun with Hugin can start!

Assembling the shots

Hugin is a powerful and free software suite for stitching multiple shots into a seamless panorama, and more. Under Linux, Hugin can be usually installed through the package manager of your distribution. In the case of Ubuntu-based distros it can be usually installed with:

sudo apt-get install hugin

If you are running Hugin for the first time, I suggest to switch the interface type to Advanced in order to have full control over the available parameters.

The first steps have to be done in the Photos tab:

  1. Click on Add images and load all the tiff files included in your panorama. Hugin should automatically determine the lens focal length and the exposure values from the EXIF data embedded in the tiff files.

  2. Click on Create control points to let hugin determine the anchor points that will be used to align the images and to determine the lens correction parameters so that all shots overlap perfectly. If the scene contains a large amount of clouds that have likely moved during the shooting, you can try setting the feature matching algorithm to cpfind+celeste to automatically exclude non-reliable control points in the clouds.

  3. Set the geometric parameters to Positions and Barrel Distortion and hit the Calculate button.

  4. Set the photometric parameters to High dynamic range, fixed exposure (since we are going to stitch bracketed shots that have been taken with fixed exposures), and hit the Calculate button again.
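Incidentally, the same Photos-tab steps can also be scripted with the command-line tools that ship with Hugin. A rough sketch rather than a tested recipe; the exact flags vary between Hugin versions, so check each tool’s --help:

```shell
# Rough CLI equivalent of the GUI steps above (flags from memory;
# verify with --help on your Hugin version).
if command -v pto_gen >/dev/null && ls *.tif >/dev/null 2>&1; then
    pto_gen -o pano.pto *.tif                       # create a project from the images
    cpfind --multirow -o pano.pto pano.pto          # find control points
    autooptimiser -a -m -l -s -o pano.pto pano.pto  # optimise geometry and photometrics
fi
```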

At this point we can have a first look at the assembled panorama. Hugin provides an OpenGL-based previewer that can be opened by clicking on the GL icon in the top toolbar (marked with the arrow in the above screenshot). This will open a window like this:

If the shots have been taken handheld and are not perfectly aligned, the panorama will probably look a bit “wavy” like in my example. This can be easily fixed by clicking on the Straighten button (at the top of the Move/Drag tab). Next, the image can be centered in the preview area with the Center and Fit buttons.

If the horizon is still not straight, you can further correct it by dragging the center of the image up or down:

At this point, one can switch to the Projection tab and play with the different options. I usually find the Cylindrical projection better than the Equirectangular that is proposed by default (the vertical dimension is less “compressed”). For architectural panoramas that are not too wide, the Rectilinear projection can be a good option since vertical lines are kept straight.

If the projection type is changed, one has to click once more on the Center and Fit buttons.

Finally, you can switch to the Crop tab and click on the HDR Autocrop button to determine the limits of the area containing only valid pixels.

We are now done with the preview window; it can be closed and we can go back to the main window, to the Stitcher tab. Here we have to set the options to produce the output images the way we want. The idea is to blend each bracketed exposure into a separate panorama, and then use enfuse to create the final exposure-blended version. The intermediate panoramas, which will be saved along with the enfuse output, are already aligned with respect to each other and can be combined using different types of masks (luminosity, gradients, freehand, etc…).

The Stitcher tab has to be configured as in the image below, selecting Exposure fused from any arrangement and Blended layers of similar exposure, without exposure correction. I usually set the output format to TIFF to avoid compression artifacts.

The final act starts by clicking on the Stitch! button. The input images will be distorted, corrected for the lens vignetting and blended into seamless panoramas. The whole process is likely to take quite long, so it is probably a good opportunity for taking a pause…

At the end of the processing, a few new images should appear in the output directory: one with a “blended_fused.tif” suffix containing the output of the final enfuse step, and a few with an “_exposure????.tif” suffix containing the intermediate panoramas for each exposure value.
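If you later want to redo the fusion step with different settings, those intermediate panoramas can be fed straight back to enfuse. A minimal sketch (the output file name here is my own choice):

```shell
# Re-fuse the intermediate exposure panoramas by hand; the glob
# matches the "_exposure????.tif" files that Hugin writes alongside
# the "blended_fused.tif" output.
if command -v enfuse >/dev/null && ls *_exposure????.tif >/dev/null 2>&1; then
    enfuse -o refused.tif *_exposure????.tif
fi
```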

Blending the exposures

Very often, photo editing is all about getting what your eyes have seen out of what your camera has captured.

The image that will be edited through this tutorial is no exception: the human vision system can “compensate” large luminosity variations and can “record” scenes with a wider dynamic range than your camera sensor. In the following I will attempt to restore such large dynamics by combining under- and over-exposed shots together, in a way that does not produce unpleasing halos or artifacts. Nevertheless, I have intentionally pushed the edit a bit “over the top” in order to better show how far one can go with such a technique.

This second part introduces a certain number of quite general editing ideas, mixed with details specific to their realization in PhotoFlow. Most of what is described here can be reproduced in GIMP with little extra effort, but without the ease of non-destructive editing.

The steps that I followed to go from one to the other can be more or less outlined as follows:

  1. take the foreground from the +1EV version and the clouds from the -1EV version; use the exposure-blended Hugin output to improve the transition between the two exposures;

  2. apply an S-shaped tonal curve to increase the overall brightness and add contrast;

  3. apply a combination of the a and b channels of the CIE-Lab colorspace in overlay blend mode to give more “pop” to the green and yellow regions in the foreground.

The image below shows side-by-side three of the output images produced with Hugin at the end of the first part. The left part contains the brightest panorama, obtained by blending the shots taken at +1EV. The right part contains the darkest version, obtained from the shots taken at -1EV. Finally, the central part shows the result of running the enfuse program to combine the -1EV, 0EV and +1EV panoramas.

Comparison between the +1EV exposure (left), the enfuse output (center) and the -1EV exposure (right)

Exposure blending in general

In scenes that exhibit strong brightness variations, one often needs to combine different exposures in order to compress the dynamic range so that the overall contrast can be further tweaked without the risk of losing details in the shadows or highlights.

In this case, the name of the game is “seamless blending”, i.e. combining the exposures in a way that looks natural, without visible transitions or halos. In our specific case, the easiest thing would be to simply combine the +1EV and -1EV images through some smooth transition, like in the example below.

Simple blending of the +1EV and -1EV exposures

The result is not too bad; however, it is very difficult to avoid some brightening of the bottom part of the clouds (or alternatively some darkening of the hills), something that will most likely look artificial even if the effect is subtle (our brain will recognize that something is wrong, even if one cannot clearly explain the reason…). We need something to “bridge” the two images, so that the transition looks more natural.

At this point it is good to recall that the last step performed by Hugin was to call the enfuse program to blend the three bracketed exposures. The enfuse output falls somewhere between the -1EV and +1EV versions; however, a side-by-side comparison with the 0EV image reveals the subtle and sophisticated work done by the program: the foreground hill is brighter and the clouds are darker than in the 0EV version. And even more importantly, this job is done without triggering any alarm in your brain! Hence, the enfuse output is a perfect candidate to improve the transition between the hill and the sky.

Final result Enfuse output (click to see 0EV version)

Exposure blending in PhotoFlow

It is time to put all the stuff together. First of all, we should open PhotoFlow and load the +1EV image. Next we need to add the enfuse output on top of it: for that you first need to add a new layer (1) and choose the Open image tool from the dialog that will open up (2)(see below).

Inserting an image from disk as a layer

After clicking the “OK” button, a new layer will be added and the corresponding configuration dialog will be shown. There you can choose the name of the file to be added; in this case, choose the one ending with “_blended_fused.tif” among those created by Hugin:

“Open image” tool dialog

Layer masks: theory (a bit) and practice (a lot)

For the moment, the new layer completely replaces the background image. This is not the desired result: instead, we want to keep the hills from the background layer and only take the clouds from the “_blended_fused.tif” version. In other words, we need a layer mask.

To access the mask associated with the “enfuse” layer, double-click on the small gradient icon next to the name of the layer itself. This will open a new tab with an initially empty stack, where we can start adding layers to generate the desired mask.

How to access the grayscale mask associated to a layer

In PhotoFlow, masks are edited the same way as the rest of the image: through a stack of layers that can be associated to most of the available tools. In this specific case, we are going to use a combination of gradients and curves to create a smooth transition that follows the shape of the edge between the hills and the clouds. The technique is explained in detail in this screencast.

To avoid the boring and lengthy procedure of creating all the necessary layers, you can download this preset file and load it as shown below:

The mask is initially a simple vertical linear gradient. At the bottom (where the mask is black) the associated layer is completely transparent and therefore hidden, while at the top (where the mask is white) the layer is completely opaque and therefore replaces anything below it. Everywhere in between, the layer has a degree of opacity equal to the shade of gray in the mask.
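In numeric terms, the mask acts as a per-pixel opacity for the layer it is attached to. A minimal sketch of that arithmetic, assuming image and mask values normalized to the 0–1 range (all the pixel values are made up):

```python
import numpy as np

def blend_with_mask(bottom, top, mask):
    """Composite `top` over `bottom`, using a grayscale mask as
    per-pixel opacity: white (1.0) shows `top`, black (0.0) shows
    `bottom`, intermediate grays mix proportionally."""
    return mask * top + (1.0 - mask) * bottom

# Three pixels along the vertical gradient:
bottom = np.array([0.2, 0.2, 0.2])   # background layer
top    = np.array([0.8, 0.8, 0.8])   # "enfuse" layer
mask   = np.array([0.0, 0.5, 1.0])   # black -> gray -> white
print(blend_with_mask(bottom, top, mask))  # [0.2 0.5 0.8]
```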

In order to show the mask, activate the “show active layer” radio button below the preview area, and then select the layer that has to be visualized. In the example above, I am showing the output of the topmost layer in the mask, the one called “transition”. Double-clicking on the name of the “transition” layer opens the corresponding configuration dialog, where the parameters of the layer (a curves adjustment in this case) can be modified. The curve is initially a simple diagonal: output values exactly match input ones.

If the rightmost point in the curve is moved to the left, and the leftmost to the right, it is possible to modify the vertical gradient and reduce the size of the transition between pure black and pure white, as shown below:

We are getting closer to our goal of revealing the hills from the background layer, by making the corresponding portion of the mask purely black. However, the transition we have obtained so far is straight, while the contour of the hills has a quite complex curvy shape… this is where the second curves adjustment, associated to the “modulation” layer, comes into play.

As one can see from the screenshot above, between the bottom gradient and the “transition” curve there is a group of three layers: a horizontal gradient, a modulation curve and an invert operation. Moreover, the group itself is combined with the bottom vertical gradient in grain merge blending mode.

Double-clicking on the “modulation” layer reveals a tone curve which is initially flat: output values are always 50%, independent of the input. Since the output of this “modulation” curve is combined with the bottom gradient in grain merge mode, nothing happens for the moment. However, something interesting happens when a new point is added and dragged in the curve: the shape of the mask exactly matches the curve, like in the example below.
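The grain merge arithmetic explains why a flat 50% curve is neutral. A small sketch, assuming the common base + blend − 50% definition of grain merge (values normalized to 0–1):

```python
import numpy as np

def grain_merge(base, blend):
    """Grain merge in the GIMP sense: base + blend - 0.5, clipped to
    [0, 1]. A flat 50% gray blend layer is therefore exactly neutral."""
    return np.clip(base + blend - 0.5, 0.0, 1.0)

gradient = np.linspace(0.0, 1.0, 5)  # the bottom vertical gradient
flat     = np.full(5, 0.5)           # untouched "modulation" curve
assert np.allclose(grain_merge(gradient, flat), gradient)  # no change

# Dragging the modulation curve above 50% shifts the gradient upward,
# bending the black/white transition of the mask locally:
raised = np.full(5, 0.7)
print(grain_merge(gradient, raised))  # gradient + 0.2, clipped at 1.0
```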

The sky/hills transition

The technique introduced above is used here to create a precise and smooth transition between the sky and the hills. As you can see, with a sufficiently large number of points in the modulation curve one can precisely follow the shape of the hills:

The result of the blending looks like this (click the image to see the initial +1EV version):

Final result Enfuse output blended with the +1EV image (click to see the initial +1EV version)

The sky already looks much denser and more saturated in this version, and the clouds have gained in volume and tonal variations. However, the -1EV image looks even better, therefore we are going to take the sky and clouds from it.

To include the -1EV image we are going to follow the same procedure already used for the enfuse output:

  1. add a new layer of type “Open image” and load the -1EV Hugin output (I’ve named this new layer “sky”)

  2. open the mask of the newly created layer and add a transition that reveals only the upper portion of the image

Fortunately we are not obliged to recreate the mask from scratch. PhotoFlow includes a feature called layer cloning, which allows one to dynamically copy the content of one layer into another. Dynamically in the sense that the pixel data gets copied on the fly, such that the destination always reflects the most recent state of the source layer.

After activating the mask of the “sky” layer, add a new layer inside it and choose the “clone layer” tool (see screenshot below).

Cloning a layer from one mask to another

In the tool configuration dialog that will pop up, one has to choose the desired source layer among those proposed in the list under the label “Layer name”. The generic naming scheme of the layers in the list is “[root group name]/root layer name/OMap/[mask group name]/[mask layer name]”, where the items inside square brackets are optional.

Choice of the clone source layer

In this specific case, I want to apply a smoother transition curve to the same base gradient already used in the mask of the “enfuse” layer. For that we need to choose “enfuse/OMap/gradient modulation (blended)” in order to clone the output of the “gradient modulation” group after the grain merge blend, and then add a new curves tool above the cloned layer:

The final transition mask between the hills and the sky

The result of all the efforts done up to now is shown below; it can be compared with the initial starting point by clicking on the image itself:

Final result Edited image after blending the upper portion of the -1EV version through a layer mask. Click to see the initial +1EV image.

Contrast and saturation

We are not quite done yet, as the image is still a bit too dark and flat; however, this version will “tolerate” a contrast and luminance boost much better than a single exposure. In this case I’ve added a curves adjustment at the top of the layers stack, and I’ve drawn an S-shaped RGB tone curve as shown below:

The effect of this tone curve is to increase the overall brightness of the image (the middle point is moved to the left) and to compress the shadows and highlights without modifying the black and white points (i.e. the extremes of the curve). This curve definitely gives “pop” to the image (click to see the version before the tone adjustment):

Final result Result of the S-shaped tonal adjustment (click the image to see the version before the adjustment).
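The shape of such a curve can be sketched numerically. Below is a piecewise-linear stand-in for the smooth S-curve described above; the control points are illustrative, not the exact ones used in the edit:

```python
import numpy as np

# Black (0) and white (1) points stay fixed, the midpoint is pushed
# left (brightening), shadows and highlights are compressed.
# Control points are made up for illustration.
xs = np.array([0.0, 0.20, 0.45, 0.80, 1.0])   # input values
ys = np.array([0.0, 0.15, 0.50, 0.90, 1.0])   # output values

def tone_curve(v):
    return np.interp(v, xs, ys)

assert tone_curve(0.0) == 0.0 and tone_curve(1.0) == 1.0  # endpoints fixed
assert tone_curve(0.45) == 0.5   # midtones brightened
assert tone_curve(0.10) < 0.10   # shadows compressed
```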

However, this comes at the expense of an overall increase in the color saturation, which is a typical side effect of RGB curves. While this saturation boost looks quite nice in the hills, the effect is rather disastrous in the sky. The blue has turned electric, and is far from what a nice, saturated blue sky should look like!

However, there is a simple fix to this problem: change the blend mode of the curves layer from Normal to Luminosity. The tone curve in this case only modifies the luminosity of the image, but preserves as much as possible the original colors. The difference between Normal and Luminosity blending is shown below (click to see the Normal blending). As one can see, the Luminosity blend tends to produce a duller image, therefore we will need to fix the overall saturation in the next step.

Luminosity blend S-shaped tonal adjustment with Luminosity blend mode (click the image to see the version with Normal blend mode).
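To make the idea concrete, here is a rough stand-in for the Luminosity blend built on Python’s HLS model; the sample colors are made up, and real editors transfer luminosity in a more perceptually accurate space, but the principle is the same:

```python
import colorsys

def luminosity_blend(original_rgb, adjusted_rgb):
    """Approximate Luminosity blend: take lightness from the
    tone-curved result, hue and saturation from the original, so
    colors do not over-saturate."""
    h, _, s = colorsys.rgb_to_hls(*original_rgb)
    _, l_curved, _ = colorsys.rgb_to_hls(*adjusted_rgb)
    return colorsys.hls_to_rgb(h, l_curved, s)

# Made-up sky colors: the S-curve brightened the blue but made it "electric"
sky_before = (0.20, 0.35, 0.60)
sky_curved = (0.25, 0.45, 0.80)
blended = luminosity_blend(sky_before, sky_curved)
# Lightness follows the curved version, hue/saturation stay original:
assert abs(colorsys.rgb_to_hls(*blended)[1] -
           colorsys.rgb_to_hls(*sky_curved)[1]) < 1e-6
```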

To adjust the overall saturation of the image, let’s now add a Hue/Saturation layer above the tone curve and set the saturation value to +50. The result is shown below (click to see the Luminosity blend output).

Saturation boost Saturation set to +50 (click the image to see the Luminosity blend output).

This definitely looks better on the hills; however, the sky is again “too blue”. The solution is to decrease the saturation of the top part through an opacity mask. In this case I have followed the same steps as for the mask of the sky blend, but I’ve changed the transition curve to the one shown here:

Saturation mask

In the bottom part the mask is perfectly white, and therefore the full +50 saturation boost is applied. At the top the mask is instead only about 30%, and therefore the saturation is increased by only about +15. This gives a better overall color balance to the whole image:

Saturation boost after mask Saturation set to +50 through a transition mask (click the image to see the Luminosity blend output).

Lab blending

The image is already quite ok, but I would still like to add some more tonal variations in the hills. This could be done with lots of different techniques, but in this case I will use one that is very simple and straightforward, and that does not require any complex curve or mask since it uses the image data itself. The basic idea is to take the a and/or b channels of the Lab colorspace, and combine them with the image itself in Overlay blend mode. This will introduce tonal variations depending on the color of the pixels (since the a and b channels only encode the color information). Here I will assume you are quite familiar with the Lab colorspace. Otherwise, here is the link to the Wikipedia page that should give you enough information to follow the rest of the tutorial.

Looking at the image, one can already guess that most of the areas in the hills have a yellow component, and will therefore be positive in the b channel, while the sky and clouds are neutral or strongly blue, and therefore have b values that are negative or close to zero. The grass is obviously green and therefore negative in the a channel, while the vineyards are brownish and therefore most likely have positive a values. In PhotoFlow the a and b values are re-mapped to a range between 0 and 100%, so that for example a=0 corresponds to 50%. You will see that this is very convenient for channel blending.
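The remapping itself is trivial. Assuming for illustration that a and b span roughly −128…+128 (PhotoFlow’s internal scaling may differ), the conversion to the percentage scale looks like this:

```python
def ab_to_percent(value, ab_range=128.0):
    """Map a Lab a/b value onto a 0..100% scale so that neutral (0)
    lands exactly at 50%. The +/-128 input range is an assumption
    made for illustration."""
    return 50.0 + 50.0 * (value / ab_range)

assert ab_to_percent(0) == 50.0      # neutral: exactly middle gray
assert ab_to_percent(-128) == 0.0    # strongly green (a) or blue (b)
assert ab_to_percent(128) == 100.0   # strongly magenta (a) or yellow (b)
```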

My goal is to lighten the green and the yellow tones, to create a better contrast around the vineyards and add some “volume” to the grass and trees. Let’s first of all inspect the a channel: for that, we’ll need to add a group layer on top of everything (I’ve called it “ab overlay”) and then add a clone layer inside this group. The source of the clone layer is set to the a channel of the “background” layer, as shown in this screenshot:

a channel clone Cloning of the Lab “a” channel of the background layer

A copy of the a channel is shown below, with the contrast enhanced to better see the tonal variations (click to see the original versions):

Saturation boost after mask The Lab a channel (boosted contrast)

As we have already seen, in the a channel the grass is negative and therefore looks dark in the image above. If we want to lighten the grass we therefore need to invert it, to obtain this:

Saturation boost after mask The inverted Lab a channel (boosted contrast)

Let’s now consider the b channel: as surprising as it might seem, the grass is actually more yellow than green, or at least the b channel values in the grass are higher than the inverted a values. In addition, the trees at the top of the hill stick nicely out of the clouds, much more than in the a channel. All in all, a combination of the two Lab channels seems to be the best for what we want to achieve.

With one exception: the blue sky is very dark in the b channel, while the goal is to leave the sky almost unchanged. The solution is to blend the b channel into the a channel in Lighten mode, so that only the b pixels that are lighter than the corresponding a ones end up in the blended image. The result is shown below (click on the image to see the b channel).

b channel lighten blend b channel blended in Lighten mode (boosted contrast, click the image to see the b channel itself).
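The Lighten blend is just a per-pixel maximum, which is exactly why the dark sky in the b channel cannot leak into the result. A tiny sketch with made-up channel values:

```python
import numpy as np

def lighten(base, blend):
    """Lighten blend mode: per pixel, keep whichever layer is lighter."""
    return np.maximum(base, blend)

# Made-up values for grass, vineyard and sky pixels:
a_inverted = np.array([0.70, 0.55, 0.50])   # inverted a: bright grass
b_channel  = np.array([0.80, 0.45, 0.15])   # b: yellower grass, dark sky
# Grass takes the lighter b values; the very dark sky in b cannot
# darken the near-neutral sky of the inverted a channel:
assert np.allclose(lighten(a_inverted, b_channel), [0.80, 0.55, 0.50])
```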

And these are the blended a and b channels with the original contrast:

b channel lighten blend The final a and b mask, without contrast correction

The last act is to change the blending mode of the “ab overlay” group to Overlay: the grass and trees get some nice “pop”, while the sky remains basically unchanged:

ab overlay Lab channels overlay (click to see the image after the saturation adjustment).
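The Overlay math makes it clear why the sky stays put: where the blended a/b mask sits near 50% (i.e. close to neutral color), the blend is essentially a no-op, while brighter mask values lighten the base. A sketch with values in the 0–1 range:

```python
import numpy as np

def overlay(base, blend):
    """Overlay blend: darkens where blend < 50%, lightens where
    blend > 50%, and is a no-op where blend == 50% exactly."""
    return np.where(blend < 0.5,
                    2.0 * base * blend,
                    1.0 - 2.0 * (1.0 - base) * (1.0 - blend))

hills = np.array([0.35])
sky   = np.array([0.60])
assert np.allclose(overlay(sky, np.array([0.5])), sky)  # neutral: no change
assert overlay(hills, np.array([0.75]))[0] > hills[0]   # bright a/b: lighter
assert overlay(hills, np.array([0.25]))[0] < hills[0]   # dark a/b: darker
```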

I’m now almost satisfied with the result, except for one thing: the Lab overlay makes the yellow area on the left of the image way too bright. The solution is a gradient mask (horizontal this time) associated with the “ab overlay” group, to exclude the left part of the image as shown below:

overlay blend mask

The final, masked image is shown here, to be compared with the initial starting point:

final result The image after the masked Lab overlay blend (click to see the initial +1EV version).

The Final Touch

Throughout the tutorial I have intentionally pushed the editing quite a bit beyond what I would personally find acceptable. The idea was to show how far one can go with the techniques I have described; fortunately, non-destructive editing allows us to retrace our steps and reduce the strength of the various effects until the result looks really ok.

In this specific case, I have lowered the opacity of the “contrast” layer to 90%, the one of the “saturation” layer to 80% and the one of the “ab overlay” group to 40%. Then, feeling that the “b channel” blend was still brightening the yellow areas too much, I have reduced the opacity of the “b channel” layer to 70%.

opacity adjustment Opacities adjusted for a “softer” edit (click on the image to see the previous version).

Another thing I still did not like in the image was the overall color balance: the grass in the foreground looked a bit too “emerald” instead of “yellowish green”, therefore I thought that the image could profit from a general warming up of the colors. For that I have added a curves layer at the top of the editing stack, and brought down the middle of the curve in both the green and blue channels. The move needs to be quite subtle: I brought the middle point down from 50% to 47% in the greens and 45% in the blues, and then I further reduced the opacity of the adjustment to 50%. Here comes the warmed-up version, compared with the image before:

opacity adjustment “Warmer” version (click to see the previous version)

At this point I was almost satisfied. However, I still found that the green stuff at the bottom-right of the image attracted too much attention and distracted the eye. Therefore I darkened the bottom of the image with a slightly curved gradient applied in “soft light” blend mode. The gradient was created with the same technique used for blending the various exposures. The transition curve is shown below: in this case, the top part was set to 50% gray (remember that we blend the gradient in “soft light” mode) and the bottom part was moved a bit below 50% to obtain a slight darkening effect:

vignetting gradient Gradient used for darkening the bottom of the image.
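Soft light’s neutrality at 50% gray is what makes the half-gray top of the gradient a no-op, while values just below 50% darken gently. A sketch using the W3C variant of the formula (blend engines differ slightly in their exact definition):

```python
import numpy as np

def soft_light(base, blend):
    """W3C soft-light blend (one common variant): a 50% gray blend
    layer is neutral; values below 50% darken the base gently."""
    d = np.where(base <= 0.25,
                 ((16.0 * base - 12.0) * base + 4.0) * base,
                 np.sqrt(base))
    return np.where(blend <= 0.5,
                    base - (1.0 - 2.0 * blend) * base * (1.0 - base),
                    base + (2.0 * blend - 1.0) * (d - base))

sky    = np.array([0.6])
ground = np.array([0.4])
assert np.allclose(soft_light(sky, np.array([0.5])), sky)  # 50% gray: no change
assert soft_light(ground, np.array([0.45]))[0] < 0.4       # below 50%: darkens
```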

It’s done! If you managed to follow me ‘till the end, you are now rewarded with the final image in all its glory, that you can again compare with the initial starting point.

final result The final image (click to see the initial +1EV version).

It has been a quite long journey to arrive here… and I hope not to have lost too many followers on the way!

June 24, 2015

Introducing the Linux Vendor Firmware Service

As some of you may know, I’ve spent the last couple of months talking with various Red Hat partners and other OpenHardware vendors that produce firmware updates. These include most of the laptop vendors that you know and love, along with a few more companies making very specialized hardware.

We’ve now got a process, fwupd, that is capable of taking a packaged update and applying it to the hardware using various forms of upload mechanism. We’ve got a specification, AppStream, which is used to describe the updates and provide metadata for what firmware updates are available to be installed. What we were missing was a way to “close the circle” and provide a web service for small and medium size vendors to use to upload new firmware and make it available to Linux users.

Microsoft already provides such a thing for vendors to use, and it’s part of the Microsoft Update service. From the vendors I’ve talked to, the majority don’t want to run any tools on their firmware to generate metadata. Most of them don’t even want to commit to hosting the metadata or firmware files in the same place forever, and with a couple of exceptions actually like the Microsoft Update model.

I’ve created a simple web service that’s being called Linux Vendor Firmware Service (perhaps not the final name). You can see the site in action here, although it’s not terribly useful or exciting if you’re not a hardware vendor.

If you are a vendor that produces firmware and want an access key for the beta site, please let me know. All firmware uploaded will be transferred to the final site, although I’m still waiting to hear back from Red Hat legal about a longer version of the redistribution agreement.

Anyway, comments very welcome, thanks.

designing openType‐features UI /intro

This blog post kicks off my involvement with bringing openType features to F/LOSS (typo‐)graphical software. I will explain why this is a special, complicated project, followed by the approach and structure I will apply to this design project and finish with what to expect as deliverables.

a bit of a situation

First things first. It is quite likely that when you are reading this, you know what openType features in fonts are. But just in case you don’t, here is a friendly, illustrated explanation of some of the features, without putting you straight into corporate specification hell. The reason I said ‘some’ will become clear below.

What is interesting is that there is a riot going on. The 800‑pound gorillas of (typo‐)graphical software—the adobe creative suite applications—have such bad and disparate UI for handling openType features that a grass‐roots protest movement started among typographers and font designers to do something about it. What followed was a petition and a hasty promise by adobe to do better—in the future.

meanwhile in Toronto…

These events prodded Nathan Willis into action, because ‘open‐source applications aren’t any better in this regard.’ He organised an openType workshop at this year’s LGM to get a process started to change that. I went there because this is smack in the middle of one of my fields of specialisation: interaction for creatives. As you can read in Nathan’s report, I got immediately drawn into the UI discussion and now we have a loose‐knit project.

The contents and vibe of the questions, and my answers, in the UI discussion all pointed in a certain direction, that I was only able to name a day later: harmonised openType features for all F/LOSS (typo‐)graphical applications definitely has an infrastructure component.

the untouchables

Pure infrastructure—e.g. tap water, electricity, telecoms—offers its designers some unique challenges:

everybody uses it
and everybody’s needs are equally important; there is no opportunity to optimise the design for the specific needs of user groups.
nobody cares
usage is ubiquitous, i.e. we all do not even register that we are using this stuff all the time—until it stops working, then we miss it a hundred times a day. This makes it very hard to research; no recollection, feelings or values are connected to infrastructure, just entitlement.
anyplace, anywhere, anytime
there is no specific contextual information to work with: why is it used; what is the goal; what does it mean in the overall scheme of things; how much is a little, and a lot; is it used sparsely, all the time, at regular intervals, in bursts? It all depends and it all happens. Just deal with it, all of it.
millions of use cases
(not that I consider use cases a method that contributes positively to any software project, but‐) in the case of infrastructure something funny and instructive happens: after a week or two of exploration and mapping, the number of use cases grows exponentially towards a million and… keeps on growing. I have seen this happen, it is like peeling an onion and for every layer you peel off, the number goes up by an order of magnitude. These millions of use cases are an expression of everybody using it anyplace, anywhere, anytime.
heterogeneous capabilities
this is not always the case, but what is available can vary, a lot. For instance public transport: how many connections (incl. zero) are available for a given trip—and how fast, frequent and comfortable these are—is set by the network routes and timetables. An asked‑for capability is on offer, or not. It all depends and it all happens. Just deal with it, all of it.

I have worked as design lead on two infrastructure projects. One was Nokia dual‑SIM, the other openPrinting, where we designed printing dialogs for all linux users (everyone), as used in 10.000 applications (anyplace, anywhere, anytime), connected to 10.000 different printer models (heterogeneous capabilities). I dubbed it the project with five million use cases.

Ah, and since both application and printer configure the available options of the print dialog, there are potentially 100 million configurations. Even if in reality the variability is far less (say, just 1% on both application and printer side; i.e. 100 significantly different printer models and 100 apps that add serious, vital printing options), then it is still an overwhelming 10.000 configurations.

drowning, not waving

In my experience, designing infrastructure is very demanding. All the time one switches between making the big, abstract plan for everything, and designing, minutely, one of many details in complete isolation. Mid‑level interaction design, the journeyman, run‑of‐the‑mill, lay‑out‐a‑screen level, is completely missing.

It is like landscaping a featureless desert, where every grain of sand is a detail that has to be dealt with. With no focus on particular users, no basis for research, no context, no just‑design‐the‐example, millions of use cases and highly variable capabilities, I have seen very capable design colleagues lose their bearings and give up.

back at the ranch

Enough war stories. How large is this infrastructure component of openType features in (typo‐)graphical software? Let’s check the list:

  • everybody uses it—nope. Whether the user groups turn out to be defined quite narrowly or quite widely—a matter of vision—they will have in common that all of them know their typesetting. That is a craft, not common knowledge.
  • nobody cares—well, soon they won’t. Right now there is upheaval because nothing is working. As soon as users get a working solution in the programs they use, it will become as interesting as the streetlights in your street.
  • anyplace, anywhere, anytime—right on! This has to work in (typo‐)graphical software; all of it—even the kind I have never heard of, or that will be invented in five years from now. All we know, is that serious typesetting is performed there by users, on any length of text selection.
  • millions of use cases—not quite. The limited user group provides the brakes here. But there is no such limit from the application side; on the contrary: most of these are (open‐ended) tools for creatives. Just thinking about how flexible a medium text is, for information or shapes, gives me the confidence to say that 10.000 use cases could be compiled, if someone would sit down and do it.
  • heterogeneous capabilities—hell yeah! OpenType‐features support in fonts is all over the place and not just because of negligence. First there is the kaleidoscopic diversity of scripts used around the world, most of which you and I have never heard of. Latin script is just the tip of the iceberg. Furthermore, what is supported, and how each supported feature is actually realised, is completely up to the font designer. The openType‐features standard is open‐ended and creates opportunities for adding sophistication. This is only limited by the combined imagination of the font design community.

Adding that up, we get a score of 3½ out of 5. By doing this exercise I have just found out that openType features in (typo‐)graphical software is 70% infrastructural. This is what I meant when I said this is a special, complicated project.

structure—the future

In projects like these structuring the design work is make‑or‐break; either we set off in the right direction, or never get to any destination—not even a wrong one. The structure I use is no secret. Here is my adaptation for this project:

A product vision is not that easy to formulate for pure infrastructure; it tends to shrink towards ‘because it’s there.’ For instance at openPrinting the vision was ‘printing that just works.’ I still regret not having twisted some arms to get a value statement added to that. There were times when this value void kept us from creating true next‐generation solutions.

Apart from ‘what’s the value?’ also ‘who is this for?’ needs to be clarified; as we saw earlier, openType features is not for everyone. The identity question, ‘what is it we are making?’ may be a lot less spectacular, but it needs to be agreed. I will take this to the Create mailing list first, mainly to find out who are the ‘fish that swim upfront’, i.e. the people with vision and drive. Step two is an online vision session, resulting in a defined vision.

The deliverable is a to‑the‐point vision statement. If you want to get a good picture of what that entails, then I recommend you read this super‐informative blog post. Bonus: it is completely font‐related.

we want the funk, the whole funk, nothing but the funk

A deep understanding of the functionality is the key to success in this project. I already got burned once with openType features in the Metapolator project. Several font designers told me: ‘it is only a bunch of substitution rules.’ Until it turned out it isn’t. Then at the LGM meeting another surprise complication surfaced. Later I briefly checked the specification and there was yet another.

This is what I meant before with that friendly page explaining some of the features. I do not trust it to be complete (and it is only Latin‐centric, anyway). As interaction architect I will have to be completely on top of the functionality, never having to rely on someone else to explain to me what ‘is in the box.’ This means knowing the openType standards.

Central to it is the feature tags specification and the feature definition syntax. This contains both the material for understanding of how complicated it all can get and the structures that I can use to formulate UI solutions. It is one of the few aspects that are firm and finite in this project.

The deliverable is a functionality overview, written up in the project wiki.

talking heads

I will do user research, say interview half a dozen users, to gain insight into the act of typesetting, the other aspect that is firm and finite in this project. Which users to recruit depends on what is defined in the product vision. Note that the focus is on the essence of typesetting, while ignoring its specific role in the different (typo‐)graphical applications, so as not to get swamped by the latter’s diversity.

The deliverable is notes of interest from the interviews, written up in the wiki.

I look forward to an exchange with F/LOSS (typo‐)graphical applications via the Create list. This is not intended to get some kind of inventory of all the apps and how different they are. In this project that is taken as abstract and infinite—the good old infrastructural way.

What I want to find out is in how many different ways openType features must, or can, be integrated in the UIs of (typo‐)graphical applications. In blunt terms: how much space is available for this stuff, what shape does it have and what is the duty cycle (permanently displayed, or a pop‑up, or…)? These diverse application needs are clustered into just enough UI models (say, six) and used below.

The deliverable is the UI models, written up in the wiki.

getting an eyeful

Then it is time to do an expert evaluation of existing openType‐features UI and all the UI ideas offered by users when the petition did its rounds. All of these get evaluated against—

  • the product vision: does it realise the goals? Is it appropriate for the defined user groups?
  • the functionality: can it cope with the heterogeneous capabilities?
  • the user research: how tuned is it for the essence of typesetting?
  • the UI models: how well does it fit with each model?

All of it gets analysed, then sorted into the good, the bad and the ugly. There will be a tiny amount of gold, mostly in the form of ideas and intentions—not really what one would call a design—and a large catalog of what exactly not to do.

The deliverable is notes of interest from the evaluation, written up in the wiki.

warp drive

Then comes the moment to stop looking backwards and start working forwards; to start creating the future. First a solutions model is made. This is a combination of a broad‐strokes solution that cuts the project down to manageable proportions and a defined approach for dealing with the rest, the more detailed design work.

The next stage is to design a generic solution, one that already deals with all of it, all the hairy stuff: text selections of any length, all the heterogeneous capabilities, the typesetting workflow, clear representation of all openType features available and their current state. This will be specified in a wiki, in the form of UI patterns.

With the generic solution in place, it will be really clear which new capabilities HarfBuzz, the central software library in this small universe, will need to offer to F/LOSS (typo‐)graphical software.

home straight

The final design phase is to work out the generic solution for each UI model. These will still be toolkit agnostic (not specific for KDE or gnome) and, btw, for desktop UI‐only (touch is a whole ’nother kettle of fish). This will also be specified in the wiki.

With this, every (typo‐)graphical software project can go to the wiki, pick a UI model that most matches their own UI structure and see a concrete UI design that, with a minimum of adaptations, they can implement in their own application. They will find that HarfBuzz fully supports their implementation.

While working on Metapolator in the last year I had good experience with sharing what I was doing almost every day I was working on it, through its community. There were ideas, discussions, petitions, corrections and encouragement—all useful. I think this can be replicated on the Create list.

June 23, 2015

Cross-Platform Android Development Toolkits: Kivy vs. PhoneGap / Cordova

Although Ant builds have made Android development much easier, I've long been curious about the cross-platform phone development apps: you write a simple app in some common language, like HTML or Python, then run something that can turn it into apps on multiple mobile platforms, like Android, iOS, Blackberry, Windows Phone, Ubuntu Touch, FirefoxOS or Tizen.

Last week I tried two of the many cross-platform mobile frameworks: Kivy and PhoneGap.

Kivy lets you develop in Python, which sounded like a big plus. I went to a Kivy talk at PyCon a year ago and it looked pretty interesting. PhoneGap takes web apps written in HTML, CSS and Javascript and packages them like native applications. PhoneGap seems much more popular, but I wanted to see how it and Kivy compared. Both projects are free, open source software.

If you want to skip the gory details, skip to the summary: how do Kivy and PhoneGap compare?


I tried PhoneGap first. It's based on Node.js, so the first step was installing that. Debian has packages for nodejs, so apt-get install nodejs npm nodejs-legacy did the trick. You need nodejs-legacy to get the "node" command, which you'll need for installing PhoneGap.

Now comes a confusing part. You'll be using npm to install ... something. But depending on which tutorial you're following, it may tell you to install and use either phonegap or cordova.

Cordova is an Apache project which is intertwined with PhoneGap. After reading all their FAQs on the subject, I'm as confused as ever about where PhoneGap ends and Cordova begins, which one is newer, which one is more open-source, whether I should say I'm developing in PhoneGap or Cordova, or even whether I should be asking questions on the #phonegap or #cordova channels on Freenode. (The one question I had, which came up later in the process, I asked on #phonegap and got a helpful answer very quickly.) Neither one is packaged in Debian.

After some searching for a good, comprehensive tutorial, I ended up on a Cordova tutorial rather than a PhoneGap one. So I typed:

sudo npm install -g cordova

Once it's installed, you can create a new app, add the android platform (assuming you already have android development tools installed) and build your new app:

cordova create hello com.example.hello HelloWorld
cordova platform add android
cordova build


Error: Please install Android target: "android-22"
Apparently Cordova/PhoneGap can only build with its own preferred version of Android, which currently is 22. Editing files to specify android-19 didn't work for me; it just gave errors at a different point.

So I fired up the Android SDK manager, selected android-22 for install, accepted the license ... and waited ... and waited. In the end it took over two hours to download the android-22 SDK; the system image is 13 GB! So that's a bit of a strike against PhoneGap.

While I was waiting for android-22 to download, I took a look at Kivy.


As a Python enthusiast, I wanted to like Kivy best. Plus, it's in the Debian repositories: I installed it with sudo apt-get install python-kivy python-kivy-examples

They have a nice quickstart tutorial for writing a Hello World app on their site. You write it, run it locally in python to bring up a window and see what the app will look like. But then the tutorial immediately jumps into more advanced programming without telling you how to build and deploy your Hello World. For Android, that information is in the Android Packaging Guide. They recommend an app called Buildozer (cute name), which you have to pull from git, build and install.

buildozer init
buildozer android debug deploy run
This got started on building ... but then I noticed that it was attempting to download and build its own version of Apache Ant (sort of a Java version of make). I already have ant -- I've been using it for weeks for building my own Java Android apps. Why did it want a different version?

The file buildozer.spec in your project's directory lets you uncomment and customize variables like:

# (int) Android SDK version to use
android.sdk = 21

# (str) Android NDK directory (if empty, it will be automatically downloaded.)
# android.ndk_path = 

# (str) Android SDK directory (if empty, it will be automatically downloaded.)
# android.sdk_path = 

Unlike a lot of Android build packages, buildozer will not inherit variables like ANDROID_SDK, ANDROID_NDK and ANDROID_HOME from your environment; you must edit buildozer.spec.

But that doesn't help with ant. Fortunately, when I inspected the Python code for buildozer itself, I discovered there was another variable that isn't mentioned in the default spec file. Just add this line:

android.ant_path = /usr/bin

Next, buildozer gave me a slew of compilation errors:

kivy/graphics/opengl.c: No such file or directory
 ... many many more lines of compilation interspersed with errors
kivy/graphics/vbo.c:1:2: error: #error Do not use this file, it is the result of a failed Cython compilation.

I had to ask on #kivy to solve that one. It turns out that the current version of cython, 0.22, doesn't work with kivy stable. My choices were to uninstall kivy and pull the development version from git, or to uninstall cython and install version 0.21.2 via pip. I opted for the latter. Either way, there's no "make clean", so removing the dist and build directories let me start over with the new cython.

sudo apt-get purge cython
sudo pip install Cython==0.21.2
rm -rf ./.buildozer/android/platform/python-for-android/dist
rm -rf ./.buildozer/android/platform/python-for-android/build

Buildozer was now happy, and proceeded to download and build Python-2.7.2, pygame and a large collection of other Python libraries for the ARM platform. Apparently each app packages the Python language and all libraries it needs into the Android .apk file.

Eventually I ran into trouble because I'd named my python file hello.py instead of main.py; apparently this is something you're not allowed to change and they don't mention it in the docs, but that was easily solved. Then I ran into trouble again:

Exception: Unable to find capture version in ./main.py (looking for `__version__ = ['"](.*)['"]`)
The buildozer.spec file offers two types of versioning: by default "method 1" is enabled, but I never figured out how to get past that error with "method 1", so I commented it out and uncommented "method 2". With that, I was finally able to build an Android package.
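For the record, the error message above spells out exactly the regular expression buildozer's "method 1" uses: it simply scans main.py for a `__version__` assignment. A quick sketch (the helper name is hypothetical, but the regex is taken verbatim from the error) shows what would have satisfied it:

```python
import re

# Regex copied verbatim from buildozer's error message: it looks for a
# line like __version__ = "0.1" somewhere in main.py.
VERSION_RE = re.compile(r'''__version__ = ['"](.*)['"]''')

def capture_version(source_text):
    """Return the captured version string, or None if absent (hypothetical helper)."""
    match = VERSION_RE.search(source_text)
    return match.group(1) if match else None

# A main.py containing this assignment satisfies "method 1":
print(capture_version('__version__ = "0.1"\napp_code_here = True\n'))  # 0.1
# ...while a file without it triggers the exception quoted above:
print(capture_version('print("hello world")\n'))                       # None
```

So adding a single `__version__ = "0.1"` line to main.py is presumably the "method 1" alternative to switching the spec file over to "method 2".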

The .apk file it created was quite large because of all the embedded Python libraries: for the little 77-line pong demo (/usr/share/kivy-examples/tutorials/pong in the Debian kivy-examples package), the apk came out at 7.3 MB. For comparison, my FeedViewer native Java app, roughly 2000 lines of Java plus a few XML files, produces a 44 kB apk.

The next step was to make a real mini app. But when I looked through the Kivy examples, they all seemed highly specialized, and I couldn't find any documentation that addressed issues like what widgets were available or how to lay them out. How do I add a basic text widget? How do I put a button next to it? How do I get the app to launch in portrait rather than landscape mode? Is there any way to speed up the very slow initialization?

I'd spent a few hours on Kivy and made a Hello World app, but I was having trouble figuring out how to do anything more. I needed a change of scenery.

PhoneGap, redux

By this time, android-22 had finally finished downloading. I was ready to try PhoneGap again.

This time,

cordova platforms add android
cordova build
worked fine. It took a long time, because it downloaded the huge gradle build system rather than using something simpler like ant. I already have a copy of gradle somewhere (I downloaded it for the OsmAnd build), but it's not in my path, and I was too beaten down by this point to figure out where it was and how to get cordova to point to it.

Cordova eventually produced a 1.8 MB "hello world" apk -- a quarter the size of the Kivy package, though roughly 40 times as big as a native Java app. Deployed on Android, it initialized much faster than the Kivy app, and came up in portrait mode but rotated correctly if I rotated the phone.

Editing the HTML, CSS and Javascript was fairly simple. You'll want to replace pretty much all of the default CSS if you don't want your app monopolized by the Cordova icon.

The only tricky part was file access: opening a file:// URL didn't work. I asked on #phonegap and someone helpfully told me I'd need the file plugin. That was easy to find in the documentation, and I added it like this:

cordova plugin search file
cordova plugin add org.apache.cordova.file

My final apk, for a small web app I use regularly on Android, was almost the same size as their hello world example: 1.8 MB. And it works great: PhoneGap had no problem playing an audio clip, something that was tricky when I was trying to do the same thing from a native Android Java WebView class.

Summary: How do Kivy and PhoneGap compare?

This has been a long article, I know. So how do Kivy and PhoneGap compare, and which one will I be using?

They both need a large amount of disk space for the development environment. I wish I had good numbers to give you, but I was working with both systems at the same time, and their packages are scattered all over the disk so I haven't found a good way of measuring their size. I suspect PhoneGap is quite a bit bigger, because it uses gradle rather than ant and because it insists on android-22.

On the other hand, PhoneGap wins big on packaged application size: its .apk files are a quarter the size of Kivy's.

PhoneGap definitely wins on documentation. Kivy seemingly has lots of documentation, but its tutorials jumped around rather than following a logical sequence, and I had trouble finding answers to basic questions like "How do I display a text field with a button?" PhoneGap doesn't need that, because the UI is basic HTML and CSS -- limited though they are, at least most people know how to use them.

Finally, PhoneGap wins on startup speed. For my very simple test app, startup was more or less immediate, while the Kivy Hello World app required several seconds of startup time on my Galaxy S4.

Kivy is an interesting project. I like the ant-based build, the straightforward .spec file, and of course the Python language. But it still has some catching up to do in performance and documentation. For throwing together a simple app and packaging it for Android, I have to give the win to PhoneGap.

Font Features Land in Inkscape Trunk

I’ve just landed basic font features support in the development version of Inkscape. What are font features and why should you be excited? (And maybe why should you not be too excited.)

The letter combination 'st' shown without a ligature and with a 'historical' ligature.

Font Features

Font features support allows one to enable (or disable) the OpenType tables within a given font, allowing you to select alternative glyphs for rendering text.

A series of examples showing the same text with and without applying various OpenType tables.

A sample of font features in action. The font is Linux Biolinum which has reasonable OpenType tables. Try the SVG (with WOFF).

The new CSS Fonts Module Level 3 adds a variety of CSS properties for defining which OpenType tables to enable/disable (as well as having nice examples of each property’s use — this is one of the more readable W3C specifications). Inkscape trunk supports the ‘font-variant-ligatures’, ‘font-variant-caps’, ‘font-variant-numeric’, ‘font-variant-position’, and ‘font-feature-settings’ properties. The properties can be set under the Variants tab in the Text and Font dialog.
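As an illustration of what these properties look like in a stylesheet (a sketch: the property names and values are from CSS Fonts Module Level 3, but whether any of them has a visible effect depends on which OpenType tables the font actually contains):

```css
/* High-level font-variant properties. */
text {
  font-variant-ligatures: discretionary-ligatures historical-ligatures;
  font-variant-caps: small-caps;
  font-variant-numeric: oldstyle-nums;
}

/* Low-level escape hatch: enable raw OpenType features by their tags. */
text.special {
  font-feature-settings: "hlig" 1, "ss01" 1;
}
```

In Inkscape trunk you don’t type these by hand: the Variants tab sets the same properties for you.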

The 'Variants' Tab in the 'Text and Fonts' dialog showing a series of buttons to select which font features are enabled.

The Variants tab in the Text and Font dialog.

Why you shouldn’t be too excited

Being able to enable various font features within a font is quite exciting but there are quite a few caveats at the moment:

  • One must use a trunk build of Inkscape linked with the latest unstable version of Pango (1.37.1 or greater).
  • Font feature support in fonts is usually minimal and often buggy. It’s hard to know what OpenType tables are available in which fonts.
  • Browser support is sparse. Firefox has rather good support. Chrome support seems limited to ligatures.
  • Correct display of alternative glyphs requires that the same font used in content creation is also used for rendering. On the Web the best way to do this is to use WOFF, but Inkscape has no support for user fonts (this is a future goal of Inkscape but will require considerable work).


I would like to thank Behdad Esfahbod, maintainer of Pango, for adding the code to Pango that makes accessing the OpenType tables possible. Thanks as well to Matthias Clasen and Akira Togoh, who are the source of the patch to Pango. Thanks also to all the people that supported the Inkscape Hackfest in Toronto, where I was able to meet and discuss Pango issues with Behdad in person and where the idea of adding font feature support to Inkscape germinated.

June 22, 2015

Penultimate Kickstarter voting results

Two weeks before voting closes we’re at a response rate of 91.38%: 604 of 661 possible votes. If you’re eligible to vote and haven’t done so yet, you have until 10am CEST on July 6 to make the response rate even higher! Note that no-award backers who have pledged 15 euros or more can also vote, though they haven’t received a survey. If this is you, please send mail to irina@krita.org, either with your vote or to ask for the list.

We collected enough pledges for nine whole stretch goals. Two 1500-euro backers each added a stretch goal of their own: one already in the list (“Update the look & feel of the layer docker”, which is at #6, meaning that #10, “Stacked brushes”, got in as well) and one off-list, “Lazy Brush” — you can see how it works here.

The table below shows the penultimate results, with in the last column the related phabricator task. To access phabricator you will need a KDE identity account, which can be made at identity.kde.org. If you have a forum account, you have already made a KDE identity account. Use this login information in the ‘LDAP’ login area. The phabricator tasks are where we discuss the requirements of each feature. This means that all considerations about the implementation are mentioned there. You can subscribe to a phabricator task to get e-mail updates on it.

Rank  Votes  Share  Stretch goal  Phabricator task
0 N/A Extra Lazy Brush: interactive tool for coloring the image in a couple of strokes T372
1 116 19.02% 10. Animated file formats export: animated gif, animated png and spritemaps T116
2 54 8.85% 8. Rulers and guides: drag out guides from the rulers and generate, save and load common sets of guides. Save guides with the document. T114
3 50 8.20% 1. Multiple layer selection improvements T105
4 47 7.70% 19. Make it possible to edit brush tips in Krita T125
5 41 6.72% 21. Implement a Heads-Up-Display to manipulate the common brush settings: opacity, size, flow and others. T127
6 38 6.23% 2. Update the look & feel of the layer docker panel (1500 euro stretch goal) T106
7 36 5.90% 22. Fuzzy strokes: make the stroke consistent, but add randomness between strokes. T166
8 33 5.41% 5. Improve grids: add a grid docker, add new grid definitions, snap to grid T109
9 31 5.08% 6. Manage palettes and color swatches T112
10 28 4.59% 18. Stacked brushes: stack two or more brushes together and use them in one stroke T124

And these didn’t make it, but we’re keeping them for next time:

  Rank  Votes  Share  Stretch goal
11 23 3.77% 4. Select presets using keyboard shortcuts
12 19 3.11% 13. Scale from center pivot: right now, we transform from the corners, not the pivot point.
13 18 2.95% 9. Composition helps: vector objects that you can place and that help with creating rules of thirds, spiral, golden mean and other compositions.
14 18 2.95% 7. Implement a Heads-Up-Display for easy manipulation of the view
15 16 2.62% 20. Select textures on the fly to use in textured brushes
16 9 1.48% 15. HDR gradients
17 9 1.48% 11. Add precision to the layer move tool
18 8 1.31% 17. Gradient map filter
19 5 0.82% 16. On-canvas gradient previews
20 5 0.82% 12. Show a tooltip when hovering over a layer with content to show which one you’re going to move.
21 3 0.49% 3. Improve feedback when using more than one color space in a single image
22 3 0.49% 14. Add a gradient editor for stop gradients

June 20, 2015

Call to testers

We are working on restoring multimedia support (audio and video playback) in Stellarium.

At the moment Stellarium can play back audio and video on Linux and (partially?) Windows. The development team has prepared binary packages for the three supported platforms for public testing of this feature.

Please check that the GZ_videotest_MP4.ssc and GZ_videotest_WMV.ssc scripts work within Stellarium on Windows/OS X (download page: https://launchpad.net/stellarium/+download)

Ubuntu users (14.10+) can use this PPA for testing: ppa:alexwolf/stellarium-media

Thank you!

June 19, 2015

Krita and Bug Week!

It’s been a while since we made a new build of Krita… So, here’s a new one! In all the hectic activity surrounding the Kickstarter campaign, we worked our tails off to add new features, improvements and fixes, and that caused considerable churn in the code. And that, in turn, meant that the last release was a bit, well, dot zero! So here’s a new build with the following improvements:


  • Implemented a composite RGB curve for the Curves filter
  • Added a Fish Eye Vanishing Point assistant
  • Added a concentric ellipse assistant
  • Made the Settings dialog’s default button only set the defaults for the currently selected settings page
  • Added memory configuration options, including the location of the temporary scratch files
  • Added a profiler option: https://userbase.kde.org/Krita/Manual/Preferences/Performance
  • Added the ability to create a copy of the currently open image (wish 348256)
  • Added a one-way pressure sensor (wish 344753)
  • Memory consumption is now shown in the statusbar

Fixed Bugs

  • Only set the resolution using TIFF tags if they exist; the old behaviour caused issues when Krita saved JPEG files to .kra
  • BUG:349078 Fix trimming an image under Filter Layers
  • BUG:324505,294122 Fix Adjustment layers composition
  • BUG:349185 Fix explicitly showing the cursor when the Stabilizer is active
  • Fix showing a floating message when switching MDI subwindows
  • BUG:348533 Fixed a bug when the tools became disabled after new document creation
  • BUG:331708,349108 Fix a crash when redoing actions
  • BUG:348737 Fix copy/pasto: fade isn’t speed
  • BUG:345762 Mirror View now correctly remembers which subwindow is mirrored.
  • BUG:349058 Fixed bug where rulers were only visible on the canvas that was active when the option was first toggled. Fixed similar bugs with Mirror View and Wrap Around Mode.
  • BUG:331708 Fix a crash when trying to redo after canceling a stroke
  • Fixes an issue where some config files may not be picked up by the config system.
  • BUG:299555 Change cursor to “forbidden” when active layer is locked or can’t be edited with the active tool.
  • BUG:345564 Don’t warn about image file being invalid after user cancels “Background Image (overrides color)” dialog while configuring Krita
  • BUG:348886 Don’t scroll up the list while adding or removing resources to the bundle
  • Fix default presets for the bristle engine, restoring the scale value to 1
  • Fixed a small bug in wdglayerproperties.ui that made the color profile not show up properly in the layer properties dialog. Patch by Amadiro, thanks!
  • BUG:348507 Fix an issue with the import PDF dialog resolution
  • BUG:347004 Make the state of the filter preview button easier to see
  • BUG:345754 Fix a perspective assistant lockup
  • Remember current meta-data author.
  • BUG:348726 Be more careful when ‘smart’ merging metadata
  • BUG:348652 Correctly initialize the temporary swap file
  • Fix loading PSD files saved in OpenCanvas



Now, that’s not to say that this release is perfect… And the increased interest in Krita has also led to an increase in reported bugs! We’ve got about 315 open bugs now, which is a record!

In fact, we need help. We need help with what’s called bug triage: checking which bugs are actually duplicates of each other and which bugs are actually reproducible and which bugs are more like wishes than bugs.

And then we need to do something about the bugs that are proper, valid and reproducible! So, we propose to have our first 2015 bug weekend. We’d like to invite everyone to install Krita and go through some bugs in the bug list and help us triage!

Here’s the list of bugs that need urgent triaging:

Unconfirmed, reopened, need-info Krita Bugs

Let’s get this list to zero!

And of course, there are also bugs that are already confirmed, but which might have duplicates in the list above:

Confirmed Krita Bugs

We’re not looking for new bugs — but if you find one, take a moment to read Dmitry’s guide on bug reporting.

Here’s the Bug Hunter Howto, too. Join us this weekend and help us get the bug count down and the bug list manageable! In the coming two weeks, the developers will be busy fixing bugs so we can have a really stable base for all the kickstarter work!

Mosquitoes-hunter by David Revoy

Bug hunter by David Revoy

June 18, 2015

rethinking text handling in GIMP /1

At the beginning of this month, most of the open source graphics applications community convened for the libre graphics meeting in Vienna, Austria. After a one‐year hiatus, the GIMP team was back in force, and so were two of its UI team members, my colleague Kate Price and yours truly. We delivered a lecture about our most recent GIMP project, which we will write up in three parts. Here is the first.

beyond the text tool

This project was the first one in our series of open internships. I had created these last year, combining mentoring, open working and getting serious interaction design work done for the GIMP project.

Dominique Schmidt worked with us on this project, whose goal is to rethink everything associated with text handling in GIMP. It would have been cool to have Dominique on stage in Vienna, telling the story himself. But he had this holiday booked; to a tropical destination; and surprisingly he insisted on going. Since projects mean teamwork at m+mi works, Kate and I were fully able to report from the trenches instead.

The text project is quite a wide‐ranging one and at the moment of writing it is in‑progress. So there are going to be no magic bullets, or detailed interaction design specs to be presented—yet. Certainly a wide‐ranging project demands a structured approach, else it goes nowhere. It is exactly this structure that we will use to present it here, in this and the follow‑up blogposts.


Step one: compiling a vision for the project. With text—and editing, styling it—being so ubiquitous in computers, it is very easy to get stuck in nuts‑and‐bolts discussions about it. The trick is to concentrate on the big issue: what is the meaning of text in GIMP work? What we needed was a vision: ‘what is it; who is it for and where is the value?’

The vision is compiled out of quite a few elements: of course it has to align with the overall product vision of GIMP; we interviewed the GIMP developers who have worked on text; it includes the GEGL future of non‑linear working; and we held an informal user survey on the developer mailing list—plenty of users there—about the essence of working with text.

building blocks

To show how the resulting vision worked out, let’s discuss it line by line:

  • ‘Text in GIMP is always part of the composition—unless it is an annotation.’

This puts text thoroughly in its proper place; it is never the end‑goal, by itself. Also defined is a separate annotation workflow: users adding notes for themselves or for collaboration purposes. This sets us up for a small side project: annotations in GIMP.

  • ‘The canvas is not a page; there is no such thing as paging in GIMP.’

I love this one. The first part was phrased by Dominique, the second by Kate. This puts clear limits on what text functionality GIMP needs: beyond paragraphs, but short of page‐related stuff. Note that ‘paging’ is a verb, it is about working with pages and managing pages.

  • ‘Text is both for reading and used as graphical shapes; meta data in text—mark‑up, semantics—are not supported.’

This puts on equal footing that text is for information transport and just shapes; an excellent example where the GIMP context makes a big difference. The second part excludes any meta data based processing: e.g. auto‐layouting or auto‐image manipulation.

And now, we get to the value section:

  • ‘GIMP users get: minute control over typography and the layout of text on the canvas.’

If there is one thing we learned from surveying users, it is the essence of typography: to control exactly, down to the (sub)pixel, the placement of every text glyph. This control is exerted via the typographical parameters: margins, leading, tracking, kerning, etc. GIMP needs to support the full spectrum of these and support top‑notch typographical workflows.

  • ‘GIMP users get: internationalisation of text handling, for all locales supported by unicode.’

This thoroughly expands our horizon; we have to look at the use of text world‐wide: many different writing systems, different writing directions. But it also sets clear limits: if it cannot be represented in unicode, it is not in scope.

  • ‘GIMP users get: text remains editable forever.’

This anchors the GEGL dream in the project: no matter how many heavy graphical treatments have been applied on top of a piece of text, one can always change it and see the treated result immediately. But also included here is a deep understanding of projects and workflows. E.g. Murphy’s law: a mistake in the text is always found at the last moment. Or the fact that clients always keep changing the text, even after the delivery date.

  • ‘GIMP users get: super‐fast workflow, when they are experienced.’

This reflects that GIMP is for graphics production work and the speed‑of‐use requirements that accompany that.

it’s a wrap

And there we have it. Here they are again, together as the vision statement:

  • Text in GIMP is always part of the composition—unless it is an annotation;
  • the canvas is not a page; there is no such thing as paging in GIMP;
  • text is both for reading and used as graphical shapes; meta data in text—mark‑up, semantics—are not supported.

GIMP users get:

  • minute control over typography and the layout of text on the canvas;
  • internationalisation of text handling, for all locales supported by unicode;
  • text remains editable forever;
  • super‐fast workflow, when they are experienced.

Nice and compact, so that it can be used as a tool. But these seven simple sentences pack a punch. Just formulating them has knocked this project into shape. The goals are clear from hereon.

And on that note, I hand over to Kate, who will continue our overview of the steps we took, in part two.

June 15, 2015

Interview with Graphos


Could you tell us something about yourself?

My name is Przemek Świszcz. I also publish as Graphos. I’m a drawer and a graphic artist. I do comic strips and illustrations and create in 3D as well.

Do you paint professionally, as a hobby artist, or both?

I draw professionally but it is also my hobby. And fortunately, I can bring it together.

What genre(s) do you work in?

I’m interested mostly in fantasy, science fiction and humorous topics.

Whose work inspires you most — who are your role models as an artist?

Among the many excellent artists, my favorite creators are Grzegorz Rosiński, Janusz Christa, Simon Bisley and Don Rosa. However, not only comics artists but also many others are inspiration for me.

How and when did you get to try digital painting for the first time?

This is connected with computer games. I’m a gamer myself so I combined these two hobbies. That’s the reason why I draw such forms as concept art. I decided to try my hand at digital.

What makes you choose digital over traditional painting?

I still use some traditional techniques like watercolor, acrylic and drawing ink. But digital gives endless possibilities and enables editing. Most of all, it is very handy. I have all works on the computer immediately, there is no need to scan and process. This is especially important with regard to comic books.

How did you find out about Krita?

I found out about Krita when I bought a graphic tablet and I was looking for an appropriate drawing tool. I read lots of positive opinions about this program on the Internet forum www.blender.pl, so I decided to try.

What was your first impression?

My first impression of this program was “Wow, it is really good, very handy and intuitive”.

What do you love about Krita?

I like the fact that Krita is free and being updated and added to all the time. It’s a great and professional tool to create comic strips.

What do you think needs improvement in Krita? Is there anything that really annoys you?

Krita is already a really good computer program. Some errors and crashes appear from time to time, but with every new version it’s getting better. I think that tools to create animation could be a great novelty. I’m looking forward to the next version of Krita.

What sets Krita apart from the other tools that you use?

I can already do most of my work with Krita. I use other programs occasionally because some of them have useful options for processing drawings and photos.

If you had to pick one favourite of all your work done in Krita so far, what would it be, and why?

That could be the drawing with the dragon and dwarf or fantasy illustrations in cartoon style with the brawny man. It gives me lots of fun, both drawing it and finding out new possibilities of Krita.

What techniques and brushes did you use in it?

I mainly used draft pencils and draft brushes which turn out to be my favourite tools in Krita.

Where can people see more of your work?

You can find the majority of my works here: http://drawcrowd.com/graphos/projects

Anything else you’d like to share?

Thank you for inviting me. I hope that Krita will acquire more and more users because it is worth it, keep it up :)

June 11, 2015

designing interaction for creative pros /1

Last week at LGM 2015 I did a lecture on one of my fields of specialisation: designing interaction for creatives. There were four sections and I will cover each of them in a separate blog post. Here is part one.

The lecture coincided with the launch of the demo of Metapolator, a project I have been working on since LGM 2014. All the practical examples will be from that project and my designs for it.

see what I mean?

‘So what’s Metapolator?’ you might ask. Well, there is a definition for that:

‘Metapolator is an open web tool for making many fonts. It supports working in a font design space, instead of one glyph, one face, at a time.

‘With Metapolator, “pro” font designers are able to create and edit fonts and font families much faster, with inherent consistency. They gain unique exploration possibilities and the tools to quickly adapt typefaces to different media and domains of use.

‘With Metapolator, typographers gain the possibility to change existing fonts—or even create new ones—to their needs.

‘Metapolator is extendible through plugins and custom specimens. It contains all the tools and fine control that designers need to finish a font.’

theme time

That is the product vision of Metapolator, which I helped to define the moment I got involved with the project. You can read all about that in the making‑of.

One of the key questions answered in a product vision is: who is this for? And with that, I have arrived at what this blog post is about:

Products need a clear, narrow definition of their target users groups. Software for creatives needs a clear definition whether it is for professionals, or not.

Checking the vision, we see that Metapolator user groups are well defined. They are ‘“pro” font designers’ and ‘typographers.’ The former are pro by definition and the latter come with their own full set of baggage; they are pro by implication.

define it like a pro

But what does pro actually mean? And why is it in quotes in the Metapolator vision? Well, the rather down‐to‐earth definition of professional—earning money with an occupation—is not helping us here. There are many making‐the‐rent professionals who are terrible hacks at what they do.

Instead it is useful to think of pros as those who have mastered a craft—a creative craft in our case. Examples of these are drawing, painting; photographing, filming, writing, animating, and editing these; sewing, the list goes on and on.

Making software for creative pros means making it for those who have worked at least 10,000 hours in that field, honing their craft. And also making it for the apprentices and journeymen who are working to get there. These two groups do not need special ‘training wheels’ modes; they just need to get their hands dirty with the real thing.

the point

The real world just called and left a message:

making it for pros comes at a price.

First of all, it is very demanding—I will cover this in the follow‑up posts. Second, it puts some real limits on who else you can make it for. Making it for…

pros
is perfectly focussed, to meet those demanding needs.
pros + enthusiasts
(the latter also known as prosumers.) This compromises how good one can make it for pros; better keep in check how sprawling that enthusiast faction is allowed to be.
pros + enthusiasts + casual users
forget it, because pros and casual have diametrically opposite needs. There is no room in the UI for both, and with room I mean screen real estate and communication bandwidth.
pros + casual users
for the same reasons one can royally forget about this one too. Enough said.

the fall‐out

You might think: ‘duh, that speaks for itself, just make the right choice and roll with it.’ If it was only that easy. My experience has been that projects really do not like to commit here, especially when they know the consequences outlined above. And when they did make a choice, I have seen the natural tendency to worm out of it later.

I guess that having clear goals is scary for quite a few folks. Having focussed user groups means saying ‘we don’t care about you’ to vast groups of people. Only the visionary think of that as positive.

Furthermore, clear goals are a fast and effective tool to weed out bad ideas, on an industrial scale. That’s good for the product, but upsets the people who came up with these ideas. So they renegotiate on the clear goals, attacking the root of the rejection.

no fudging!

In short: define it; is your software for creatives made for pros, or not? Then compile a set of coherent user groups. In the case of Metapolator the ‘pro’ font designers and typographers fit together beautifully. Once defined, stick with it.

That’s it for part one. Here is part two: a tale of cars.

[editor’s note: Gee Peter, this post contains a lot of talk about pros, but where is the creative angle?] True, the gist of this post is valid for all professionals. The upcoming parts will feature more ‘creative’ content, more Metapolator, and illustrations.

writing a product vision for Metapolator

A week ago I kicked off my involvement with the Metapolator project as I always do: with a product vision session. Metapolator is an open project and it was the first time I did the session online, so you have the chance to see the session recording (warning: two and a half hours long), which is a rare opportunity to witness such a highly strategic meeting; normally this is top‐secret stuff.

boom boom

For those not familiar with a product vision, it is a statement that we define as ‘the heartbeat of your product, it is what you are making, reduced down to its core essence.’ A clear vision helps a project to focus, to fight off distractions and to take tough design decisions.

To get a vision on the table I moderate a session with the people who drive the product development, who I simply ask ‘what is it we are making, who is it for, and where is the value?’ The session lasts until I am satisfied with the answers. I then write up the vision statement in a few short paragraphs and fine-tune it with the session participants.

To cut to the chase, here is the product vision statement for Metapolator:

‘Metapolator is an open web tool for making many fonts. It supports working in a font design space, instead of one glyph, one face, at a time.
‘With Metapolator, “pro” font designers are able to create and edit fonts and font families much faster, with inherent consistency. They gain unique exploration possibilities and the tools to quickly adapt typefaces to different media and domains of use.
‘With Metapolator, typographers gain the possibility to change existing fonts—or even create new ones—to their needs.
‘Metapolator is extendible through plugins and custom specimens. It contains all the tools and fine control that designers need to finish a font.’

mass deconstruction

I think that makes it already quite clear what Metapolator is. However, to demonstrate what goes into writing a product vision, and to serve as a more fleshed out vision briefing, I will now discuss it sentence by sentence.

‘Metapolator is an open web tool for making many fonts.’
  • There is no standard template for writing a product vision, the structure it needs is as varied as the projects I work with. But then again it has always worked for me to lead off with a statement of identity; to start answering the question ‘what is it we are making?’ And here we have it.
  • open or libre? This was discussed during the session. At the end Simon Egli, Metapolator founder and driving force, wanted to express that we aim beyond just libre (i.e. open source code) and that ‘open’ also applies to the vibe of the tool on the user side.
  • web‑based: this is not just a statement of the technology used, of the fact that it runs in the browser. It is also a solid commitment that it runs on all desktops—mac, win and linux. And it implies that starting to use Metapolator is as easy as clicking/typing the right URL; nothing more required.
  • tool or application? The former fits better with the fact that font design and typography are master crafts (I can just see the tool in the hand of the master).
  • making or designing fonts? I have learned in the last couple of weeks that there is a font design phase where a designer concentrates on shaping eight strategic characters (for latin fonts). This is followed by a production phase where the whole character set is fleshed out, the spacing between all character pairs is set, and different weights (e.g. thin and bold) are derived, maybe also narrow and extended variants. This phase is very laborious and often outsourced. ‘Making’ fonts captures both design and production phases.
  • many fonts: this is the heart of the matter. You can see from the previous point that making fonts is up to now a piecemeal activity. Metapolator is going to change that. It is dedicated to either making many different fonts in a row, or a large font family, even a collection of related families. The implication is that in the user interaction of Metapolator the focus is on making many fonts and the user needs for making many fonts take precedence in all design decisions.
‘It supports working in a font design space, instead of one glyph, one face, at a time.’
  • The first sentence said that Metapolator is going to change the world—by introducing a tool for making many fonts, something not seen before; this second one tells us how.
  • supports is not a word one uses lightly in a vision. ‘Supports XYZ’ does not mean it is just technically possible to do XYZ; it means here that this is going to be a world‐class product to do XYZ, which can only be realised with world‐class user interaction to do XYZ.
  • design space is one of these wonderful things that come up in a product vision session. Super‐user Wei Huang coined the phrase when describing working with the current version of Metapolator. It captures very nicely the working in a continuum that Metapolator supports, as contrasted with the traditional piecemeal approach, represented by ‘one glyph, one face, at a time.’ What is great for a vision is that ‘design space’ captures the vibe that working with Metapolator should have, but that it is not explicit on the realisation of it. This means there is room for innovation, through technological R&D and interaction design.
‘With Metapolator, “pro” font designers are able to create and edit fonts and font families much faster, with inherent consistency.’
  • With “pro” font designers we encounter the first user group, starting to answer ‘who is it for?’ “Pro” is in quotes because it is not the earning‑a‐living part that interests us, it is the fact that these people mastered a craft.
  • create and edit balances the two activities; it is not all about creating from scratch.
  • fonts and font families balances making very different fonts with making families; it is not all about the latter.
  • much faster is the first value statement, starting to answer ‘where is the value?’ Metapolator stands for an impressive speed increase in font design and production, by abolishing the piecemeal approach.
  • inherent consistency is the second value statement. Because the work is performed by users in the font design space, where everything is connected and continuous, the conventional user overhead of keeping everything consistent disappears.
‘They gain unique exploration possibilities and the tools to quickly adapt typefaces to different media and domains of use.’
  • exploration possibilities is part feature, part value statement, part field of use and part vibe. All these four are completely different things (e.g. there is inherently zero value in a feature), captured in two words.
  • quickly adapt is a continuation of the ‘much faster’ value statement above, highlighting complementary fields of use for it.
‘With Metapolator, typographers gain the possibility to change existing fonts—or even create new ones—to their needs.’
  • And with typographers we encounter the second user group. These are people who use fonts, with a whole set of typographical skills and expertise implied.
  • possibility to change is the value statement for this user group. This is a huge deal. Normally typographers have neither the skills, nor the time, to modify a font. Metapolator will open up this world to them, with that fast speed and inherent consistency that was mentioned before.
  • create new goes one step further than the previous point. Here we have now a commitment to enable more ambitious typographers (that is what ‘even’ stands for) to create new fonts.
  • to their needs is a context we should be aware of. These typographers will be designing something, anything with text, and that is their main goal. Changing or creating a font is for them a worthwhile way to get it done. But it is only part of their job, not the job. Note that the needs of typographers include applying some very heavy graphical treatments to fonts.
‘Metapolator is extendible through plugins and custom specimens.’
  • extendible through plugins is one realisation of the ‘open’ aspect mentioned in the first sentence. This makes Metapolator a platform and its extendability will have to be taken into account in every step of its design.
  • custom specimens is slightly borderline to mention in a vision; you could say it is just a feature. I included it because it programs the project to properly support working with type specimens.
‘It contains all the tools and fine control that designers need to finish a font.’
  • all the tools: this was the result of me probing during the vision session whether Metapolator is thought to be part of a tool chain, or independent. It is the latter, which means that it must be designed to work stand‑alone.
  • fine control: again the result of probing, this time whether Metapolator includes the finesse to take care of those important details, on a glyph level. Yes, it all needs to be there.
  • that designers need makes it clear by whose standards the tools and control needs to be made: that of the two user groups.

this space has intentionally been left blank

Just as important as what a product vision says is what it doesn’t say. Whatever it does not say Metapolator is, Metapolator is explicitly not. Not a vector drawing application, not a type layout program, not a system font manager, not a tablet or smartphone app.

The list goes on and on, and I am sure some users will come up with highly creative fields of use. That is up to them, maybe it works out or they are able to cover their needs with a plugin they write, or have written for them. For the Metapolator team that is charming to hear, but definitely out of scope.

User groups that are not mentioned, i.e. everybody who is not a “pro” font designer or a typographer, are welcome to check out Metapolator, it is free software. If their needs overlap partly with that of the defined user groups, then Metapolator will work out partly for them. But the needs of all these users are of no concern to the Metapolator team.

If that sounds harsh, then remember what a product vision is for: it helps a project to focus, to fight off distractions and to take tough design decisions. That part starts now.

designing interaction for creative pros /2

Part two of my LGM 2015 lecture (here is part one). It is a tale of cars. For many years I have had these images in my head and used them in my design practice. Let’s check them out.

freude am fahren

First up is the family car:

a catalog shot of a family car source: netcarshow.com

It stands for general software. It is comfortable, safe and general‐purpose. All you need to use it is a minimum of skills, familiarity and practice—in the case of cars this is covered by qualifying for a driving licence.

In the case of software, we are talking casual and enthusiast use. A good example is web browsers. One can start using them with a minimum of skills and practice. After gaining some experience one can comfortably use a browser on a daily basis. If a pro web browser exists, then it has escaped my radar.

(It would make a very interesting project, a pro web browser. But first a product maker would have to stand up with a solid vision of pro web browsing; its user groups; and some big innovation that is valuable for these users.)

When I think of creative pro interfaces, I think of this:

a rally car blasting around a corner on a rallystage in nature source: imgbuddy.com

The rally car. It is still a car, but… different. It is defined by performance. And from that, we can learn a couple of things.

speed, baby

First, creative pros work fast. They ‘wield the knife’ without doubt. A telltale sign of mastery is the speed of execution. I have this in mind all the time when designing for creative pros.

I vividly remember one of the earliest LGMs, where Andy Fitzsimon went on stage and demonstrated combining pixel and vector in one image. The pace was impressive: Andy was performing nearly two operations per second.

Bam bam bam bam. At a tempo of 120 beats per minute; the solid tempo of marching bands and disco. That is the rhythm I aim to support, when designing for creative pros.

command and control

Second, creative pros really know their material, the medium they work with. They can, and need to, work with this material as directly and intimately as possible, in order to fulfil creative or commercial goals. This all can be technology‐assisted, as it is with software, but the technology has to stay out of the way, so that it does not break the bond between master and material.

The material I am talking about is that of film, graphics, music, animation, garments, et cetera. These can be digital, yes. However data and code of the software‐in‐use are not part of a creative pro’s material. Developers are always shocked, angry, then sad to learn this.

Thus Metapolator has been designed for font designers and typographers who know what makes a font and what makes it tick. They know the role of the strokes, curves, points, the black and the white, and of the spacing. They are experienced in shaping these to get results. It is this material that—by design—Metapolator users access, just that it is organised such that they can work ten times faster.

dog eat dog

Third, it’s a competitive world. Creative pros are not just in business. Also in zero‐budget circles there are real fun and/or prestigious projects where exactly those with proven creative performance, and ability to deliver, get asked.

Tools and software are in constant competition, also in the world of F/LOSS. It is a constant tussle: which ones provide next‐generation workflows with more speed and/or more room for creativity? Only competitive tools make masters competitive.

the point

Now that we got the picture, here is the conflict. The rules—the law and industry mores—that make good family cars may be a bad idea to apply to rally cars. And what makes rally cars competitive, may simply be illegal for family cars.

Every serious software platform has its HIG (human interface guidelines). It is the law, a spiritual guide and a huge source of security for developers. That is, for general software. It is only partly authoritative for software for creative pros. Because truly sticking to the HIG, while done all in good faith, will render creative pro software non‐competitive.

vorsprung durch technik

Rally cars contain custom parts, handmade from high performance materials like aluminium, titanium, carbon, etc. This is expensive and done because nothing off‐the‐shelf is sufficient.

Similarly creative pro software contains custom widgets, handmade at great expense—in design and development. For a decade I have witnessed that it is a force of nature to end up in that situation. Not for the sake of being cool or different, but all in the name of performance.

tough cookie

So, with loose laws and a natural tendency for custom widgets, can you do just what you like when you make creative pro software? Well no. It is tough, you still have to do the right thing. If this situation makes you feel rather lost, without guidance, then reach out and find yourself an interaction designer who really knows this type of material. Make them your compass.

picture show

To illustrate all this, let’s look at some of my designs for Metapolator.

of a glyph—surrounded by two others—all the points that make up its skeleton are connected by outward radiating lines to big circular handles

Speed, baby! Big handles to select and move individual points on the skeleton of a glyph (i.e. direct control of the material). During a brainstorm session with Metapolator product visionary Simon Egli, he noticed how the points could be connected by rigid sticks to big handles.

I worked out the design with big (fast) handles available for furious working, but out of the way of the glyph, so it can be continuously evaluated (unbroken workflow).

four sliders for mixing fonts, one is reversed and has its thumb aligned with another slider

This is a custom slider set for freely mixing master fonts—metapolation—to make new fonts. In this case four fonts, but it has been designed to easily scale up to nine or more; a Metapolator strength (vis‑à‐vis the competition).

One of the sliders—‘Alternate’—is in an “illegal” configuration; it is reversed. This is done to implement the rule that the mix of fonts has to always add up to 100%. There is special coupled behaviour between the sliders to ensure that.

The design of this part included a generous amount of exploration and several major revisions. Standard widgets and following the HIG would not deliver that every slider setting maps to one unique font mix. Apart from a consistency goal, that is also about maximising input resolution. So I broke some rules and went custom.
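
To make that coupling concrete, here is a minimal sketch (in Python; the `SliderSet` class and `set_value` method are invented for illustration and are not Metapolator’s actual code or API) of sliders that renormalize each other so the mix always sums to 100%:

```python
# Hypothetical sketch of coupled metapolation sliders: moving one slider
# proportionally rescales the others so the mix always sums to 100%.
class SliderSet:
    def __init__(self, names):
        # start with an even mix over all master fonts
        self.values = {n: 100.0 / len(names) for n in names}

    def set_value(self, name, value):
        # clamp the moved slider, then distribute the remainder
        value = max(0.0, min(100.0, value))
        others = [n for n in self.values if n != name]
        remainder = 100.0 - value
        old_total = sum(self.values[n] for n in others)
        for n in others:
            if old_total > 0:
                # keep the relative proportions of the untouched sliders
                self.values[n] = self.values[n] / old_total * remainder
            else:
                self.values[n] = remainder / len(others)
        self.values[name] = value

sliders = SliderSet(["Regular", "Bold", "Italic", "Alternate"])
sliders.set_value("Bold", 70)
# the mix still sums to 100%
assert abs(sum(sliders.values.values()) - 100.0) < 1e-9
```

Because the remainder is redistributed proportionally, every slider configuration corresponds to exactly one normalized font mix, which is the consistency property mentioned above.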

a crossing 2-D axes system coupled to a single axis, with at least 3 fonts on each axis, with a font family and a single font instance placed on them

This is also a metapolation control. In this case a three‐dimensional one involving eight master fonts. Working with that many fonts is really a pro thing; you have to know what you are doing and have the experience to set up, identify and pick the ‘good font’ results.

The long blue arrow is a font family, with nine or so fonts as members. The whole family can be manipulated as one entity (i.e. placed and spanned in this 3D space) as can each member font individually.
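
As a toy illustration of what placing an instance in such a design space means, metapolation can be pictured as a weighted blend of corresponding glyph coordinates across the masters. This sketch reduces each master to a dict of glyph point lists; that is an assumption made for illustration, not Metapolator’s internal data model:

```python
# Toy model: an instance in the font design space as a weighted blend of
# master fonts, where each master is a dict mapping glyph name to a list
# of (x, y) control points. Not Metapolator's actual representation.
def metapolate(masters, weights):
    """Blend corresponding glyph points; weights must sum to 1."""
    assert abs(sum(weights) - 1.0) < 1e-9
    result = {}
    for glyph in masters[0]:
        pts = []
        for i in range(len(masters[0][glyph])):
            # weighted average of the i-th point across all masters
            x = sum(w * m[glyph][i][0] for m, w in zip(masters, weights))
            y = sum(w * m[glyph][i][1] for m, w in zip(masters, weights))
            pts.append((x, y))
        result[glyph] = pts
    return result

thin = {"a": [(0.0, 0.0), (10.0, 20.0)]}
bold = {"a": [(0.0, 0.0), (14.0, 24.0)]}
blend = metapolate([thin, bold], [0.5, 0.5])
# blend["a"][1] == (12.0, 22.0), halfway between thin and bold
```

With eight masters spanned on the crossed axes, the same blend simply carries eight weights instead of two; the design work is in choosing weights that yield a ‘good font.’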

glyphs a, b and c set in 3 different fonts, with point selections across them

Final example: complex selections. Across three different fonts and three different glyphs, several points have been selected. Now they can be manipulated at the same time. That is definitely not consumer‑grade.

If that looks easy, I say ‘you’re welcome.’ It takes serious planning ahead in the design to allow this interaction; for the three fonts to appear, editable, at the same time; for deep selections within several glyphs to be possible and manageable—the big handles‑on‐sticks help also here.

vroom, vroom

In short: if there is one thing that I want you to take away from this blog post, then it is that image of the rally car. How different its construction, deployment and handling are. Making software for creative pros means making a product that is definitely not consumer‑grade.

That’s it for part two. Go straight to part three: 50–50, equal opportunities.

design lessons with Daft Punk

I am sure you have noticed the Daft Punk marketing master plan that is taking over all media channels at the moment. And I admit that I am happy to consume—and inhale—anything (semi‐)intelligent that is being written about them.

Yesterday I read this Observer interview with the ‘notoriously shy French duo.’ Afterwards, intuition told me there was something vaguely familiar about what they had said. I checked again and sure enough, plenty of it applies to (interaction) design.

punk rules, OK?

Below are Daft Punk quotes I lifted from the article, followed by what I associate with each. There are also a couple of cameo appearances by hit‑production legend Nile Rodgers.

‘The music that’s being done today has lost its magic and its poetry because it’s rooted in everyday life and is highly technological.’

Wow, not the most hands‑on quote to start with. But I swore that I’d present them in the order they appear in the article. With the mentioned ‘magic and poetry’, I associate fantastic design work. This means sweeping solutions, for which there needs to be at least one designer on the project with a big‐picture view.

Being constantly ‘rooted in everyday life’—e.g. relying on testing (A/B or usability); or working piecemeal, or driven by user requests, or in firefighter mode—shortens the horizons and shrinks the goals. It surely programs the project for mediocrity, i.e. humdrum, incremental solutions.

Every user has to deal every day with software that ‘is highly technological.’ Everybody thinks this sucks. Making software is highly technological when one is staring at code; when thinking about code; when taking prototyping capabilities into account; when technology informs the interaction, verbatim. Designing great interaction means not making any of these mistakes.

‘In early interviews they came across as suspicious and aloof. “It’s because you’re 18 and you feel maybe guilty: why are we chosen to do these things?” says Thomas. “There’s definitely reasons to feel less uncomfortable now. It’s one thing to say you’re going to do it and another to have done it for 20 years.”’

Now that is the voice of experience talking. The first part of it is this early phase; fresh out of school and real (work) life is starting. This suspicion of one’s own talents, entering a company, scene or industry and expecting the folks around you to be like you, see things like you. And then they don’t. Very confusing, who is wrong here?

The second part is having ‘done it for 20 years.’ If that involved a portfolio of successful work; continuous self‐development; the discovery of what a difference ‘being experienced’ makes and getting to know a few peers, then it has become more comfortable to be a designer. Just don’t get too comfortable; make sure every new project you take on challenges and develops you.

‘The only secret to being in control is to have it in the beginning. Retaining control is still hard, but obtaining control is virtually impossible.’

The first level where this holds is getting a design implemented. Quite often developers like to first put in some temporary—and highly technological—interaction while they sort out the non‑UI code. The real design will be implemented later. Then time ticks away, the design lands in a drawer and the ‘temporary’ UI gets shipped.

I do not think this is a malicious trick, but it happens so often that I do not buy it anymore. The only secret to getting interaction design implemented is to do it in the beginning.

The second level is that of the overall collaboration; ‘obtaining control is virtually impossible,’ no matter how big a boost a designer has given the project. So one has to start out with control from the beginning, it has to be endowed by the project leadership. And then one has to work hard to retain it.

‘Guy‑Man, who designed the artwork, says that Thomas is the “hands‑on technician” while he is the “filter”: the man who stands back and says oui or non.’

Filter is the stuff designers are made of. In the case of interaction designers it means filtering, out of all the things users say, the things they actually need. It means saying non to many things that are simply technologically possible, but useless, and oui to exactly that which realises the product, addresses user needs and is, yes, technologically possible.

Being the filter does not always make you friends, having to say non to cool‐sounding initiatives that in the bigger scheme of things are incredibly unhelpful. But being a yes‑man makes an ineffective designer, with non‑designed results.

Making software is not a game with unlimited time and resources; user interaction is not one with unlimited screen space and communication bandwidth. A filter is crucial.

‘“The genius is never in the writing, it’s in the rewriting,” says Rodgers. “Whenever they put out records I can hear the amount of work that’s gone into them—those microscopically small decisions that other people won’t even think about. It’s cool, but they massage it so it’s not just cool—it’s amazing.”’

I learned some years ago that it is not only the BIG plans and sweeping solutions that make a master designer. It is also in the details. All the tiny details.

All these ‘microscopically small decisions’ have to be taken in the way that strengthen the overall design, or else it will crumble to dust. This creates tension with all the collaborators, who ‘won’t even think about’ these details. They cannot see the point, the crumbling. Masters do.

‘We wish people could be influenced by our approach as much as our output. It’s about breaking the rules and doing something different rather than taking some arrangements we did 10 years ago that have now become a formula.’

Design is not a formula, not a sauce you pour over software. Design is a process, performed by those who can. A designer cannot tell upfront what the design will be like, but knows where to start, what to tackle and when it is done. That sounds trivial, but for non‑designers these four points work exactly opposite.

Apply the design process to a unique (i.e. non‑copycat) project and you will get an appropriate and unique design. Blindly applying this design to another project is by definition inappropriate.

‘“Computers aren’t really music instruments,” he sniffs. “And the only way to listen to it is on a computer as well. Human creativity is the ultimate interface. It’s much more powerful than the mouse or the touch screen.”’

This quote hits the nail on the head by setting the flow of creativity between humans as the baseline and then noting how computer interfaces are completely humbled by it. It is too easy to forget about this when your everyday world is making software.

The truth about software for designers (of music, graphics and other media) is that not much of it is designed—the interaction I mean, although it may look cool. Being software for a niche market makes it armpit‑of‑usability material: developers talking directly to users, implementing their feature requests in a highly technological way.

To put an end to this sad state of affairs, a design process needs to be introduced that is rooted in a complete—but filtered—understanding of the activity called human creativity.

‘Enjoying the Hollywood analogy, Thomas says Daft Punk were the album’s screenwriters and directors while the guest performers were the actors, but actors who were given licence to write their own lines.’

I am also enjoying that analogy, and the delicate balance that is implied. On the one hand, interaction designers get to create the embodiment of the software ‘out of thin air’ and write it down in some form of specification, the screenplay. Being in the position of seeing how everything is connected, it also falls naturally to them to direct the further realisation, by developers and media designers.

If that sounds very bossy to you, it is balanced by the fact that these developers and media designers already have complete ‘licence to write their own lines.’ For developers, every line of code they write is literally theirs.

The delicate balance depends on developers and media designers being able to contribute to the interaction design process—in both meanings of that phrase. And it depends on all written lines fitting the screenplay.

‘“What I worked on was quite bare bones and everything else grew up around me,” says Nile Rodgers. “They just wanted me to be free to play. That’s the way we used to make records back in the day. It almost felt like we’d moved back in time.”’

This is what design processes are about: creating a space where one is free to play. This, in the dead‐serious context of solving the problem. Play is used to get around a wall or two that stand between the designer and the solution that makes it work.

It takes a ‘quite bare‐bones’ environment to be free: pencil and paper in the case of interaction design. That may ‘feel like moving back in time’ but it is actually really liberating; it offers a great interface for human creativity. Once you have got around those walls and hit upon the solution, every part of the design can grow up around what you played.

And on that note, I’ll finish today’s blog post.

12 things you can do to succeed, enterprise edition

Part three of the mini‐series I am running at the moment on the usual social channels—twitter, g+, linkedin and xing—called positive action ships successful products. There, for every wishful thought that persists in the (mobile) software industry, I supply a complementary positive action.

Today’s offering is enterprise grade; let’s turn water into wine. If you are a product maker, or manage a product‐shipping organisation, then you can initiate at least one of these today:

Better go for it, deploy user research and design to ensure that new is really better.
cf. ‘Better play it safe, because it has not been done before.’
Better go for it, ban meetings; get the makers to collaborate in (pairs of) pairs.
cf. ‘Better play it safe, so it won’t cause all these extra rounds of meetings.’
Better go for it, evangelise the new, listen carefully to any needs, ignore naysayers.
cf. ‘Better play it safe, because the first feedback was rather reserved.’
Better go for it, make it an offer that can’t be refused—if it gets nixed, go underground.
cf. ‘Better play it safe, so we get the OK.’
Better go for it, negotiate until you trust that the engineers can build the design.
cf. ‘Better play it safe, because the engineers say it cannot be done.’
Better go for it, it is faster to build a completely new core product from scratch.
cf. ‘Better play it safe, because that code base is spaghetti.’
Better go for it, and enjoy every minute; save time through structure, research + design.
cf. ‘Better play it safe, so we all can go home at five—and on 14:30 (D) / to the pub (GB) on friday.’
Better go for it, because the blame will fall on us anyway.
cf. ‘Better play it safe, because the blame will fall on us.’
Better go for it, once the core product blows away the competition, features can be added.
cf. ‘Better play it safe, so we have time for more features.’
Better go for it, use frequent user testing to debug the innovative design.
cf. ‘Better play it safe, to pass the usability test.’
Better go for it, define a new game, on your terms, and ditch them old millstones.
cf. ‘Better play it safe, to pass the regression test.’
Better go for it, model careers are built on delivering remarkable results.
cf. ‘Better play it safe, to not jeopardise my promotion.’

ask not what this blog can do for you…

Now, what else can you do? First of all, you can spread the word; share this blog post. Second, the series continues, so I invite you to connect via twitter, g+, linkedin, or xing, and get a fresh jolt of positive action every workday.

And third, if you are able and willing to take some positive action, then email or call us. We will be happy to help you ship successful products.

ps: you can check out part two if you missed it.

a half‑century of success

This is the final instalment of the mini‐series I ran on the usual social channels—twitter, g+, linkedin and xing—called positive action ships successful products. There, for every wishful thought that persists in the (mobile) software industry, I supplied a complementary positive action.

To complete the round number of fifty, I present the final dozen + two of these for your reference. If you are a product maker, or manage a product‐shipping organisation, then you can initiate at least one of these today:

Make the lead designers of your hard‐ and software work as a pair; make them inseparable.
cf. ‘The hardware specs are fixed, now we can start with the software design.’
Define your focus so tightly, it hurts (a bit); deploy it so you ship, instead of discuss.
cf. ‘We spent ages discussing this, trying to find a solution that pleased everyone.’
Make interaction design the backbone of your product realisation; or compete on low, low price.
cf. ‘We thought we could spend a couple of man‐days on the low‐hanging usability fruit.’
Deploy lightweight design and engineering documentation to keep everyone with the programme.
cf. ‘The source is the ultimate documentation.’
Ban hacks, at least from those who are supposed to shape your product for the long term.
cf. ‘There is no need to go for the gold‐taps solution.’
Set a ‘feature budget’ and set it way below bloat; be frugal, spend it on user value.
cf. ‘It does not hurt to have those features as well.’
Set the goal to be competitive on each platform you support—that starts with your interaction.
cf. ‘One code base; fully cross‐platform.’
Root out boilerplate thinking for any product aspect; your design process is your QA.
cf. ‘You have to pick your battles.’
Set up your designers for big impact on the internals of your software, instead of vice versa.
cf. ‘Once you get familiar with the internal workings of our software, it becomes easy to use.’
Define your target user group(s) so tightly, it hurts; focus on their needs, exclusively.
cf. ‘Our specific target user group is: everyone.’
Introduce this KPI: the more your developers think the UI is ‘on the wrong track,’ the better.
cf. ‘Our developers are very experienced; they make the UI of their modules as they see fit.’
Hire those who are able to take your interaction beyond the HIG, once you achieve compliance.
cf. ‘We religiously adhere to the HIG.’
Regularly analyse workarounds adopted by your users; distill from them additional user needs.
cf. ‘You can do that by [writing, running] a script.’
Make the connection: product–users–tech. Design is the process, the solution and realisation.
cf. ‘What do you mean “it’s all connected”? we just don’t have the time for those bits and pieces.’

ask not what this blog can do for you…

Now, what else can you do? First of all, you can spread the word; share this blog post. Second, I invite you to connect via twitter, g+, linkedin, or xing.

And third, if you are able and willing to take some positive action, then email or call us. We will be happy to help you ship successful products.

ps: you can check out part three if you missed it.

Krita Lime PPA: always fresh versions for Ubuntu users!

A great piece of news for Ubuntu Krita users is coming today! We have just opened a repository with regular builds of Krita git master!

Link: https://launchpad.net/~dimula73/+archive/krita

The main purpose of this PPA is to provide everyone with an always fresh version of Krita, without the need to update the whole system. Now one can get all the latest Krita features without a delay.

At the moment the git master version has at least three features which are absent in Krita 2.7 Beta1 (and cannot be merged there due to the code freeze):

  • New "New Image From Clipboard" dialog with a nice preview widget implemented by our new contributor Matjaž Rous
  • New "pseudo-infinite" canvas feature (read here) for dynamical image resizing
  • New "Overview Docker" which lets you see the whole image at a glance
To install the newest Krita you need to do a few steps:
  1. Check that you don't have any original calligra or krita packages provided by your distribution or project-neon (we currently don't check that automatically)
  2. Add the PPA to repositories list:
    sudo add-apt-repository ppa:dimula73/krita
  3. Update the cache: 
    sudo apt-get update 
  4. Install Krita: 
    sudo apt-get install krita-testing krita-testing-dbg
Update: (not needed anymore)
After installing this package you should restart X-server to get environment variables updated!

Of course, being based on git-master may sometimes result in a bit of instability, so make sure you report any problems so we can fix them! :)

Interim tally of Kickstarter votes

Only one week after we sent out our Kickstarter survey, 581 of the 661 15-euro-and-up backers (including the PayPal backers) have sent in their votes. This is a response rate of a whopping 87.90%! Here’s the current tally:

  #  Votes  Share   Stretch goal
1 113 19.45% 10. Animated file formats export: animated gif, animated png and spritemaps
2 53 9.12% 8. Rulers and guides: drag out guides from the rulers and generate, save and load common sets of guides. Save guides with the document.
3 46 7.92% 1. Multiple layer selection improvements
4 45 7.75% 19. Make it possible to edit brush tips in Krita
5 38 6.54% 21. Implement a Heads-Up-Display to manipulate the common brush settings: opacity, size, flow and others.
6 37 6.37% 2. Update the look & feel of the layer docker panel
7 35 6.02% 22. Fuzzy strokes: make the stroke consistent, but add randomness between strokes.
8 30 5.16% 5. Improve grids: add a grid docker, add new grid definitions, snap to grid
9 29 4.99% 6. Manage palettes and color swatches
10 26 4.48% 18. Stacked brushes: stack two or more brushes together and use them in one stroke
11 21 3.61% 4. Select presets using keyboard shortcuts
12 18 3.10% 13. Scale from center pivot: right now, we transform from the corners, not the pivot point.
13 17 2.93% 9. Composition helps: vector objects that you can place and that help with creating rules of thirds, spiral, golden mean and other compositions.
14 17 2.93% 7. Implement a Heads-Up-Display for easy manipulation of the view
15 15 2.58% 20. Select textures on the fly to use in textured brushes
16 9 1.55% 15. HDR gradients
17 9 1.55% 11. Add precision to the layer move tool
18 7 1.20% 17. Gradient map filter
19 5 0.86% 16. On-canvas gradient previews
20 5 0.86% 12. Show a tooltip when hovering over a layer with content to show which one you’re going to move.
21 3 0.52% 3. Improve feedback when using more than one color space in a single image
22 3 0.52% 14. Add a gradient editor for stop gradients

If you’re entitled to vote and haven’t done so yet, please do! Any vote received on or before July 6, a full month after sending out the survey, will count.
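The arithmetic above is easy to double‑check. A quick sketch (the counts are copied from the table; the variable names are my own):

```python
# Recompute the Kickstarter survey figures reported above.
eligible = 661   # backers at 15 euro and up, including PayPal
received = 581   # votes sent in after one week

response_rate = 100 * received / eligible
print(f"response rate: {response_rate:.2f}%")  # 87.90%

# Per-option share of the votes received (first entry: stretch goal 10).
votes = [113, 53, 46, 45, 38, 37, 35, 30, 29, 26, 21,
         18, 17, 17, 15, 9, 9, 7, 5, 5, 3, 3]
assert sum(votes) == received  # every vote accounted for

top_share = 100 * votes[0] / received
print(f"top option: {top_share:.2f}%")  # 19.45%
```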

June 10, 2015

Krita 2.9.5 Released

The Kickstarter was a success, but that didn’t keep us from adding new features and fixing bugs! We made quite a bit of progress including adding pass-through mode to group layers, allowing inherit alpha to be used on all layer types, better PSD support, and adding an on-canvas preview of the color being picked. We even added a new brush preset history docker! You can see the full release notes below.

Krita 2.9.5 also fixes a critical bug; please upgrade if you experience crashes after restarting Krita.

New Features:

  • Add a lightness curve to the per-channel filter (bug 324332)
  • Add a brush preset history docker (bug 322425)
  • Add an all-files option to the file-open dialog
  • Add global light to the layer styles functionality (bug 348178)
  • Allow the user to choose a profile for untagged PNG images (bug 345913, 348014)
  • Add a built-in performance logger
  • Added a default set of paintop preset tags (these are not deletable yet!)
  • Add support for author profiles (default, anonymous, custom) to .kra files
  • Add buttons and actions for layer styles to the Layer docker
  • Add ctrl-f shortcut for re-applying the previously used filter (bug 348119)
  • Warn Intel users that they might have to update their display driver
  • Implement loading/saving of layer styles to PSD files
  • Add support for loading/saving patterns used in layer styles
  • Allow inherit alpha on all types of layers
  • Add a pass-through switch for group layers (bug 347746, 185448)
  • Implement saving of group layers to PSD
  • Add support for WebP (on Linux)
  • Add a shortcut (Ctrl-Shift-N) for edit/paste into New Image (bug 344750)
  • Add on-canvas preview of the current color when picking colors (bug 338128)
  • Add a mypaint-style circle brush outline.
  • Split the cursor configuration into outline selection and cursor selection
  • Add loading and saving of transparency masks to PSD groups

Performance improvements:

  • Remove delay on stroke start when using Krita with a translation

Bug fixes:

  • Fix view rotation menu by adding rotation actions
  • Fix crash when duplicating a global selection mask (bug 348461)
  • Improve the GUI for the advanced color selector settings (wrench icon on Advanced color selector)
  • Fix resetting the number of favorite presets in the popup (bug 344610)
  • Set proper activation flags for the Clear action (bug 34838)
  • Fix several bugs handling multiple documents, views and windows (bug 348341, bug 348162)
  • Fix the limits for document resolution (bug 348339)
  • Fix saving multiple layers with layer styles to .kra files (bug 348178)
  • Fix display of 16 bit/channel RGB images (bug 343765)
  • Fix the P_Graphite_Pencil_grain.gih brush tip file
  • Fix updating the projection when undoing removing a layer (bug 345600)
  • Improve handling of command-line arguments
  • Fix the autosave recovery dialog on Windows
  • Fix creating templates from the current image (bug 348021)
  • Fix layer styles and inherit alpha (bug 347120)
  • Work around crash in the Oxygen widget style when animations are enabled (bug 347367)
  • When loading JPEG files saved by Photoshop, also check the metadata for resolution information (bug 347572)
  • Don’t crash when trying to isolate a transform mask (transform masks cannot be painted on) (bug 347622)
  • Correctly load Burn, Color Burn blending modes from PSD (bug 333454)
  • Allow select-opaque on group layers (bug 347500)
  • Fix clone brush to show the outline even if it’s globally hidden (bug 288194)
  • Fix saving of gradients to layer styles
  • Improve the layout of the sliders in the toolbar
  • Fix loading floating point TIFF files (bug 344334)



Role change: Now snappier

Happy to announce that I'm changing roles at Canonical, moving down the stack to join the Snappy team. It is in some ways a formalization of my most recent work, which has been more about application lifecycle and containment than higher-level stuff like indicators and other user services. I'll be working on the core snappy team to ensure that snappy works for a wide variety of use cases, from small sensors embedded in your world to phones to services running in the cloud. For me, Snappy formalizes a lot of trends that we're seeing all over computing today, so I'm excited to get more involved with it.

To kick things off I'll be working on making Snaps easier to build and maintain using the native dependency systems that exist already for most languages. The beautiful part about bundling is that we no longer have to force our dependency system on others, they can choose what works best for them. But, we still need to make integrating with it easy.

New adventures bringing new challenges are where I like to roam. I'll still be around though, and might even contribute a patch or two to some of my old haunts.

June 09, 2015

Basic Landscape Exposure Blending with GIMP and G'MIC

Basic Landscape Exposure Blending with GIMP and G'MIC

Exploring exposure blending entirely in GIMP

Photographer Ian Hex had previously explored the topic of exposure blending with us by using luminosity masks in darktable. For his first video tutorial he’s revisiting the subject entirely in GIMP and G’MIC.

Have a look and let him know what you think in the forum. He’s promised more if he gets a good response from people - so let’s give him some encouragement!

released darktable 1.6.7

We are happy to announce that darktable 1.6.7 has been released.

The release notes and relevant downloads can be found attached to this git tag:
Please only use our provided packages ("darktable-1.6.7.*" tar.xz and dmg), not the auto-created tarballs from github ("Source code", zip and tar.gz). The latter are just git snapshots and will not work! Here are the direct links to tar.xz and dmg:

this is another point release in the stable 1.6.x series.

sha256sum darktable-1.6.7.tar.xz
sha256sum darktable-1.6.7.dmg



  • improvements to facebook export
  • interpolation fixups
  • demosaic code cleanups
  • slideshow should handle very small images better
  • improve Olympus lens detection
  • various minor memory leak fixes
  • various other fixes
  • Pentax (K-x) DNG old embedded preview left over is now removed
  • modern OSX display profile handling

camera support

  • Nikon D7200 (both 12bit and 14bit compressed NEFs)
  • Nikon Coolpix P340
  • Canon EOS 750D
  • Canon EOS 760D
  • Canon EOS M2
  • Panasonic DMC-CM1
  • Panasonic DMC-GF7 (4:3 only)
  • Olympus XZ-10
  • Olympus SP570UZ
  • Samsung NX500
  • Fuji F600EXR

aspect ratios

  • Panasonic DMC-G5
  • Panasonic DMC-GM5
  • Panasonic FZ200

white balance presets

  • Nikon D7200
  • Nikon Coolpix P340
  • Panasonic DMC-GM1
  • Panasonic DMC-GM5
  • Olympus E-M10 (updated)
  • Olympus E-PL7
  • Olympus XZ-10

noise profiles

  • Canon Powershot G9
  • Sony A350


  • Nikon D7200
  • Nikon D7000
  • Nikon D750
  • Nikon D90


  • Catalan
  • German
  • Spanish
  • Swedish

June 08, 2015

Adventure Dental

[Adventure Dental] This sign, in Santa Fe, always makes me do a double-take.

Would you go to a dentist or eye doctor named "Adventure Dental"?

Personally, I prefer that my dental and vision visits are as un-adventurous as possible.

June 06, 2015

Blender at SIGGRAPH 2015

SIGGRAPH 2015 is in downtown Los Angeles, from 9–13 August. This is the highlight of the year for everyone who’s into 3D Computer Graphics. As usual you can find Blender users/developers all over, but especially here:

  • Sunday 3-5 PM: Birds of a Feather (free access)
    – Presentation of last year’s work and upcoming projects by chairman Ton Roosendaal. Includes time for everyone to speak up and share!
    – Viewing of the Cosmos Laundromat open movie (12 minutes)
    – Artists/developers demos and showcase, including several people of the BI crew.
  • Tuesday, Wednesday, Thursday: Tradeshow booth #1111
    A great place to meet, hang out, get demos, or share your feedback with everyone. You can always find plenty of Blender developers and users here.

Meeting Point: Lobby or pool bar of hotel Figueroa (to be confirmed). This is one of the first hotels you pass by if you walk to downtown from the Convention Center. The pool bar is a pleasant hangout, open for everyone – just walk into the hotel lobby and take the corridor to the left of the desk.

  • Free Tradeshow tickets.
    Use the coupon code EXHBI9507 and register here.
    Unfortunately the registration system was made by zealous marketeers – ignore all the additional conference things you can purchase, keep clicking and you’ll end up with a free ticket!

Interesting Usertest and Incoming

Interesting Usertest and Incoming

A view of someone using the site and contributing

I ran across a neat website the other day for getting actual user feedback when viewing your website: UserTesting. They have a free option called peek that records a short (~5 min.) screencast of a user visiting the site and narrating their impressions.

Peek Logo

You can imagine this to be quite interesting to someone building a site.

It appears the service asks its testers to answer three specific questions (I am assuming this is for the free service mainly):

  • What is your first impression of this web page? What is this page for?
  • What is the first thing you would like to do on this page? Please go ahead and try to do that now. Please describe your experience.
  • What stood out to you on this website? What, if anything, frustrated you about this site? Please summarize your thoughts regarding this website.

Here’s the actual video they sent me (can also be found on their website):

I don’t have much to say about the testing. It was very insightful and helpful to hear someone’s view coming to the site fresh. I’m glad that my focus on simplicity is appreciated!

It was interesting that the navigation drawer wasn’t used, or found, until the very end of the session. It was also interesting to hear the tester’s thoughts around scrolling down the main page (is it so rare these days for content to be longer than a single screen – above the fold?).

Exposure Blended Panorama Coming Soon

The creator of new processing project PhotoFlow, Andrea Ferrero, is being kind enough to take a break from coding to write a new tutorial for us: “Exposure Blended Panoramas with Hugin and Photoflow”!

I’ve been collaborating with him on getting things in order to publish and this looks like it’s going to be a fun tutorial!


We’ve been talking back and forth trying to find a good workflow for contributors to be able to provide submissions as easily as possible. At the moment I translate any submissions into Markdown/HTML as needed from whatever source the author decides to throw at me. This is less than ideal (but at least it’s nice and easy for authors - which is more important to me than having to port them manually).

Github Submissions

For those comfortable with Git and Github I have created a neat option to submit posts. You can fork my PIXLS.US repository from here:


Just follow the instructions on that page, and issue a pull request when you’re done. Simple! :) You may want to communicate with me to let me know the status of the submission, in case you’re still working on it, or it’s ready to be published.

Any Old Files

Of course, if you want to submit some content, please don’t feel you have to use Github if you’re not comfortable with it. Feel free to write it any way that works best for you (as I said, my native build files are usually simple Markdown). You can also reach out to me and let me know what you may be thinking ahead of time, as I might be able to help out.

June 05, 2015

a half‑century of product fail

This is the final instalment of the mini‐series I ran on the usual social channels—twitter, g+, linkedin and xing—called wishful thinking breeds failed products. It distilled what I have witnessed and heard during 20 years in the (mobile) software industry.

To complete the round number of fifty, I present the final dozen + two wishful thoughts for future reference. I am curious if you recognise some of these:

‘The hardware specs are fixed, now we can start with the software design.’
‘We spent ages discussing this, trying to find a solution that pleased everyone.’
‘We thought we could spend a couple of man‐days on the low‐hanging usability fruit.’
‘The source is the ultimate documentation.’
‘There is no need to go for the gold‐taps solution.’
‘It does not hurt to have those features as well.’
‘One code base; fully cross‐platform.’
‘You have to pick your battles.’
‘Once you get familiar with the internal workings of our software, it becomes easy to use.’
‘Our specific target user group is: everyone.’
‘Our developers are very experienced; they make the UI of their modules as they see fit.’
‘We religiously adhere to the HIG.’
‘You can do that by [writing, running] a script.’
‘What do you mean “it’s all connected”? we just don’t have the time for those bits and pieces.’

ask not what this blog can do for you…

Now, what can you do? First of all, you can spread the word; share this blog post. Second, I invite you to connect via twitter, g+, linkedin, or xing; there is a new series starting, again with a thought every workday.

And third, if you recognise that some of the wishful thinking is practiced at your software project and you can and want to do something about it, then email or call us. We will treat your case in total confidence.

ps: you can check out part three if you missed it.

Time for some Kiki Fanart!

Kiki fanart is always welcome — so here is Banajune’s take on Kiki!


June 04, 2015

designing interaction for creative pros /3

Part three of my LGM 2015 lecture (here is part one and two). It is about equal opportunities in creative‐pro interaction. To see what I mean, let’s make something: a square.

two‐way street

There are two ways for masters to get the job done. The first way is to start somewhere and to keep chipping away at it until it is right:

creating a square by starting with a rectangle, putting its bottom-left     corner into place, then sizing the top-right one to perfection heads up: animated gif

So let’s throw in some material, move and size it (bam, bam, bam)—right, done. That was quick and the result is perfect.

like putty

This is called free working; squeeze it until it feels right. It is always hands‑on and I always move both my hands in a moulding motion when I think of it, to remind me what it feels like.

Although done by feeling, it is still fast and furious. Don’t mistake this for ‘trying out’, ‘fiddling’ or ‘let’s see where we end up’; that is for dilettantes. When masters pick up their tools, it is with great confidence that the result they have in mind will be achieved in a predictable, and short, amount of time.

on the other hand…

The second way for masters to get the job done is to plan a bit and then create a precise, parametric, set‑up:

top, bottom, left and right guide lines that mark out the perfect square

This is called a jig. Now the master only has to ‘cut’ once and a perfect result is achieved:

top, bottom, left and right guide lines appear one by one, then the     perfect square appears between them another animated gif

measure twice, cut once

This is called measured working. It is an analytical approach and involves planning ahead. It delivers precise results, to the limits of the medium. You will find it in many places; everywhere where the hands‑on factor is zero, parameters are entered and—bam—the result is achieved in one stroke.

It might be tempting to think that setting up the jig always involves punching in numbers. However, making choices from discrete sets, e.g. picking a color from a set of swatches, is also part of it. Thus it is better to talk in general of entering parameters.
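The duality can be caricatured in code. This is my own illustration, not from the lecture; the square example comes from the text above, the step logic is invented. Free working converges on the result by repeated adjustment; measured working enters the parameter and ‘cuts’ once:

```python
def measured_square(size):
    """Measured working: set up the jig (the parameter), cut once."""
    return [(0, 0), (size, 0), (size, size), (0, size)]

def free_square(width, height, step=1):
    """Free working: start with some rectangle and keep chipping away,
    nudging the smaller dimension, until it is right."""
    while width != height:
        if width < height:
            width = min(width + step, height)
        else:
            height = min(height + step, width)
    return [(0, 0), (width, 0), (width, height), (0, height)]

# Both routes arrive at the same perfect square.
print(measured_square(4) == free_square(3, 4))  # True
```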


I did not make up all this by myself. I am indebted to this very cool book, which goes deep into the matter of free and measured working as practiced for centuries by masters. Luckily it is back in print:

the cover of the book the nature and art of workmanship, by david pye

Once familiar with this duality in how masters work, it can be used to analyse their workflows. For instance while reading this article about Brian Eno working with the band James.

In the sidebar (Eno’s Gear) it says ‘I don’t think he even saves the sounds that he gets. He just knocks them up from scratch every time’ about using one piece of gear, and ‘It’s stuffed full of his own presets’ about another. Reading that, I thought: that has, respectively, the vibe of free and measured working.

I have looped that insight back into my designs of creative‐pro software from then on. That is, giving equal importance to users building a collection of presets and to knocking‑it‑up‐from‐scratch, for tool set‑ups, configuring the work environment, and assets (brush shapes, patterns, gradients, et cetera).

(There are more nuggets of that’s‐how‐masters‐work in the Eno article; see if you can spot them.)

the point

And with that I have arrived at rule numero one of this blog post:

All masters work free and measured; the only thing predictable about it is that it occurs 50–50, with no patterns.

We cannot rely on a given master taking the same route—free or measured—for all the different tasks they perform. It’s a mix, and a different mix for every master. Thus design strategies based on ‘remember if this user tends to do things free or measured’ are flawed.

We cannot rely on a given task being predominantly performed via either route—free or measured—by masters. It’s a mix, a 50–50 mix. Thus design strategies based on ‘analyse the task; is it free or measured?’ are flawed.

same, not same

The same master doing the same task will pick a different route—free or measured—at different times, based on the context they are in. For instance how difficult the overall project is. And for sure their own mood plays a role; are they under stress, are they tired (that night shift meeting that deadline)?

Masters will guesstimate the shortest route to success under the circumstances—and then take it.

dig it

With this 50–50 mix and no patterns, software for creative pros has only one choice:

Equal opportunity: offer every operation that users can perform in—at least—two ways: one free, one measured.

If you now say either ‘man, this will double my software in size’ or ‘yeah, my software already does that’, then my reply is: experience says that once we really start checking, you will find that current creative‐pro software achieves only 60–80% equal opportunity.

how low can you go?

The question is not how we prevent this list of operations from ballooning. It is: are there any more innocent, boring, easy‑to‑overlook operations to go on our list? For instance: setting the document size. Yeah, boring, but often enough key to the creative result. A crop tool is the free way to do that operation.

From the Brian Eno episode above we have seen that it is not enough to filter the operations list by ‘does it change the creative end result?’ There we saw that meta‐operations (setting up tools, configuring the work environment and assets) are also fully in scope.

picture show

To illustrate all this, let’s look at some of my designs for Metapolator.

the parameters panel listing type parameters on both master and glyph level,     for each parameter values, modifications and effective values are listed.     a popup is used to add a math operator (+) to a parameter (tension) a final animated gif

This is measured central: the parameters panel. Literally, here parameters are entered and—bam—applied. With the popup action shown, the system is taken to the next level: expressions of change (e.g. new value = A × old + B) can be built, preferably for wide‐ranging selections.
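Such an expression of change is simple to sketch. The linear form new = A × old + B comes from the paragraph above; the function name and the values are my own illustration:

```python
def apply_expression(values, a=1.0, b=0.0):
    """Measured working: apply new = a * old + b to a whole
    selection of parameter values in one stroke."""
    return [a * v + b for v in values]

# e.g. scale a 'tension' parameter by 1.1 and add 0.5, across three masters
print(apply_expression([0.8, 1.0, 1.2], a=1.1, b=0.5))
```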

the curve of the blowl stroke of the b glyph is being edited with use of     some big handles

Most on‑canvas interaction is by nature of the free variety. The hands‑on factor is simply up for grabs. In Metapolator this interaction complements the parameter panel shown above to achieve equal opportunity.

a specimen is shown with a text generated out of all letter-pair     combinations out of the word adhesion

Specimens are a huge factor in the Metapolator design. They are the place to evaluate if the typefaces are right. That also makes them the logical place to squeeze it until it is right: free working.

All on‑canvas interaction is performed directly in the specimens for this reason. If that looks natural and normal to you, I say ‘you’re welcome.’ This is completely novel in the field of font design software.

four sliders for mixing fonts, above each slider blue markers, below     each a number equivalent to its setting

Here are these fellows again, the slider set for freely mixing master fonts to make new fonts. These new fonts are shown by the blue markers, so that users can feel the clustering and spread of these new fonts—clearly a component of free working.

The numbers you see are all editable, also quickly in a row; this supports measured working. The fact that number input is straightforward and gives predictable and repeatable results was a big factor for me in choosing the algorithm behind these sliders over the alternatives.

boom, boom

In short: software for creative pros has to offer every operation that users can perform in two ways: one free—squeeze it until it feels right—one measured—involving planning ahead, entering parameters and ‘cutting’ once.

That’s it for part three. Stay tuned for part four: how to be good.

June 03, 2015

We’ve done it!

We ended with €30,520 on Kickstarter and €3,108 through PayPal — making for a grand total of €33,628, and that means LOD, Animation and nine stretch goals. We’re so happy. It’s really an amazing result. So, thanks and hugs to all our supporters! And we promise to make Krita better and better — and better!

We’re already working on the surveys; if you backed through PayPal and didn’t get a survey by next week, please mail us. For PayPal, we have to do a bit of manual work! That’s all for now; we’re a bit tired after the most intense 30 days of the year :-)

June 02, 2015

Piñon cones!

[Baby piñon cones] I've been having fun wandering the yard looking at piñon cones. We went all last summer without seeing cones on any of our trees, which seemed very mysterious ... though the book I found on piñon pines said they follow a three-year cycle. This year, nearly all of our trees have little yellow-green cones developing.

[piñon spikes with no cones] A few of the trees look like most of our piñons last year: long spikes but no cones developing on any of them. I don't know if it's a difference in the weather this year, or that three-year cycle I read about in the book. I also see on the web that there's a 2-7 year interval between good piñon crops, so clearly there are other factors.

It's going to be fun to see them develop, and to monitor them over the next several years. Maybe we'll actually get some piñon nuts eventually (or piñon jays to steal the nuts). I don't know if baby cones now means nuts later this summer, or not until next summer. Time to check that book out of the library again ...

Fedora Design Team Update (Two for One!)

Fedora Design Team Logo

I have been very occupied in recent weeks with piggies of various shapes, sizes, and missions in life [1], so I missed posting the last design team meeting update. This is going to be a quick two-for-one with mostly links and not any summary at all. I’ve been trying hard to run the meetings so the auto-generated summaries are more usable, but I am always happy for tips on doing this even better from meetbot pros (like you? :) ?)


Fedora Design Team Meeting 19 May 2015

Fedora Design Team Meeting 2 June 2015

See you next time?

Our meetings are every 2 weeks; we send reminders to the design-team mailing list and you can also find out if there is a meeting by checking out the design team category on FedoCal.


[1] Expect some explanation in a few weeks, or look for me or Dan Walsh at the Red Hat Summit later this month. :)

Twenty-four hours to go…

Kickstarter months are much longer than ordinary months. At least, so it seems to us! It’s also a really exciting time. But we’re nearing the finish line now.

The current score is €2,675 donated through PayPal and €28,463 pledged on Kickstarter! That’s a total of €31,138. That’s seven-and-a-half stretch goals! Two, however, are already claimed by the choose-your-stretch-goal award.

Big thanks to everyone who has joined to help make Krita better and better!

In any case, time for a last sprint! This time tomorrow morning, the campaign is over!


May 30, 2015

Google Photos - Can I get out?

Google Photos

Google Photos came out a couple of days ago and well, it looks great.

But it raises the question: what happens to my photos once I hand them over? Should I want to move elsewhere, what are my options?

Question 1: Does it take good care of my photos?

Good news: if you choose to back up originals (the non-free version), everything you put in will come back out unmodified. I tested this with a couple of different file types: plain JPEGs, RAW files and movies.

Once uploaded, you can download each file one-by-one through the action buttons on the top-right of your screen:

Photo actions

Downloaded photos have matching checksums, so that’s positive. It does what it promises.

Update: not quite, see below

Question 2: Can I get my photos out?

As mentioned before there’s the download button. This gives you one photo at a time, which isn’t much of an option if you have a rather large library.

You can make a selection and download them as a zip file:

Bulk download

The only downside is that it doesn’t work. Once the selection is large enough, it silently fails.

There is another option, slightly more hidden:

Show in Google Drive

You can enable a magic “Google Photos” folder in the settings menu, which will then show up in Google Drive.

Combined with the desktop app, it allows you to sync back your collection to your machine.

I once again did my comparison test. See if you can spot the problem.

Original file:

$ ls -al _MG_1379.CR2 
-rwxr-xr-x@ 1 ruben  staff  16800206 Oct 10  2012 _MG_1379.CR2*
$ shasum -a 256 _MG_1379.CR2 
fbfb86dac6d24c6b25d931628d24b779f1bb95f9f93c99c5f8c95a8cd100e458  _MG_1379.CR2

File synced from Google Drive:

$ ls -al _MG_1379.CR2 
-rw-------  1 ruben  staff  1989894 May 30 18:38 _MG_1379.CR2
$ shasum -a 256 _MG_1379.CR2 
0769b7e68a092421c5b8176a9c098d4aa326dfae939518ad23d3d62d78d8979a  _MG_1379.CR2

My 16 MB RAW file has been compressed into something under 2 MB. That’s… bad.

Question 3: What about metadata?

Despite all the machine learning and computer vision technology, you’ll still want to label your events manually. There’s no way Google will know that “Trip to Thailand” should actually be labeled “Honeymoon”.

But once you do all that work, can you export the metadata?

As it stands, there doesn’t seem to be any way to do so. No API in sight (for now?).

Update: It’s supported in Google Takeout. But that’s still a manual (and painful) task. I’d love to be able to do continuous backups through an API.


The apps, the syncing, the sharing, it works really really well. But for now it seems to be a one-way story. If you use Google Photos, I highly recommend you keep a copy of your photos elsewhere. You might want them back one day.

What I’d really like to see:

  • A good API that allows access to all metadata. After all, it is my own data.
  • An explanation of why my RAW files were compressed. That’s exactly what you don’t want with RAW files.

Keeping an eye on it.

Comments | @rubenv on Twitter

May 29, 2015

Why aren't you using github?

Is a question we, Krita developers, get asked a lot. As in, many times a week. Some people are confused enough that they think that github is somehow the "official" place to put git repositories -- more official than projects.kde.org, phabricator.kde.org, git.gnome.org or wherever else. Github, after all, is so much more convenient: you only need a github account, or you can log in with your social media account. It's so much more social, it's so cosy, and no worries about licensing either! So refreshing and modern.

So much better than, say, SourceForge ever was! Evil SourceForge, having failed to make a business out of hosting commercial software development projects, is now descending to wrapping existing free software Windows installers in malware-distributing, ad-laden installer wrappers.

The thing is, though: Github might be the cool place to hack on code these days, the favourite place to host your projects, but that is exactly what SourceForge was, too, back in the day. And Github's business model is exactly what SourceForge's was. And if that isn't a warning against putting your first-born children in the hands of a big, faceless, profit-oriented, venture-capital-backed company, then I don't know what is!

And yes, I have heard the arguments. Github is so familiar, so convenient, you can always remove your project (until Github decides to resurrect it, of course), and it's git, so you're not losing your code revision history! But what about other artefacts: wiki, documents, bugs, tasks? Maybe you can export them now, I haven't checked, but what will you import them into?

I've spent over ten years of my life on Krita. I care about Krita. I don't want to run that sort of risk. One thing I've learned in the course of a mis-spent professional life is that you always should keep the core of your business in your own hands. You shouldn't outsource that!

So, one big reason for not moving Krita's development to github is that I simply do not trust them.

That's a negative reason, but there are also positive reasons. And they all have to do with KDE.

I know that a lot of people like to bitch about KDE -- they like to bitch about the layout of the forum, the performance of the repo browser, the size of the libraries, the releases of new versions of the Plasma Desktop, about fifteen-year-old conflicts with the FSF (which somehow proves to them that KDE isn't trustworthy...) The fact is that especially in the Linux world, a bunch of people decided ages ago they didn't like KDE, it wasn't their tribe, and they apparently find it enjoyable to kick like a mule every time we do something.

Well, shucks to them.

Then there are people for whom the free software world is a strange place. You don't see something like Corel Painter being hosted together with a bunch of other software on a bigger entity's website. It's confusing! But it's still strange, to many people, to see that Krita shares a bug tracker, a forum, a mailing list platform, a git repository platform with a bunch of other projects that they aren't interested in.

Well, I see that as a learning moment.

And not as a hint that we should separate out and... start using github? Which would also mean sharing infra with a bunch of other projects, but without any sense of community?

Because that is what make KDE valuable for Krita: the community. KDE is a big community of people who are making free software for end users. All kinds of free software, a wild variety. But KDE as a community is extremely open. Anyone can get a KDE identity, and it doesn't take a lot of effort to actually get commit access to all the source code, to all projects. Once in, you can work on everything.

All the pieces needed to develop software are here: websites, forums, wikis, bug trackers, repo hosting, mailing lists, continuous integration, file hosting, todo management, calendaring, collaborative editing. The system admin team does an incredible job keeping it all up and running, and the best thing is: we own it. We, the community, own our platforms and our data. We cannot be forced by a venture capitalist to monetize our projects by adding malware installers. We own our stuff, which means we can trust our stuff.

And we can improve our platform: try doing that with a closed-source, company-owned platform like github! So suggestions for improvement are welcome: we're now looking into phabricator, which is a very nice platform giving a lot of the advantages of github (but with some weird limitations: it very clearly wasn't made for hosting hundreds of git repos and hundreds of projects!), and we're looking into question-and-answer websites. Recently, the continuous integration system got improved a whole bunch. All awesome developments!

But moving development to github? Bad idea.

Interview with David Revoy


Could you tell us something about yourself?

I’m a 33-year-old French CG artist. I worked for many industries: traditional-painting, illustration, concept-art, teaching. Maybe you’ve already come across some of my artwork while browsing the web, for example my work on open movies (Sintel, Tears of Steel, Cosmos Laundromat) or on various board games  (Philip Jose Farmer’s ‘The maker of universes’, Lutinfernal, BobbySitter) or book series (Fedeylin, Club of Magic Horse) and artworks like Alice in Wonderland or Yin Yang of World Hunger. Something I think specific about me is that I rarely accept ready-made ideas, I work to build my own opinions. This process leads me to reject many things accepted as normal by my contemporaries: TV, proprietary software, politics, religion… I despair when I hear someone saying “I do this or this because everyone does it”. I like independence, cats and deep blue color.

Do you paint professionally, as a hobby artist, or both?

I’m a happy artist doing both. Nowadays I work mainly on my own web comic, Pepper&Carrot. An open web comic done with Krita and supported by the readers. Managing everything on this project is hard and challenging, but extremely rewarding on a personal level. Pepper&Carrot is the project of my dreams.

What genre(s) do you work in?

I’ve worked in many genres, but currently I’m sticking to a homemade fantasy world for a general audience.

Whose work inspires you most — who are your role models as an artist?

I do not really have a role model, but I’m deeply impressed by artists able to melt the limits between industries, as Yoshitaka Amano did between concept art, illustration and painting.

How did you get to try digital painting for the first time?

My first real digital-painting contact was with Deluxe Paint II on MS-DOS in 1992. As a kid in the nineties, I was very lucky to have a computer at home. Fortunately, my parents and siblings were afraid of the home computer and I had it all to myself. For the younger generation reading this, just imagine: no internet, Windows 3.1, VGA graphics (640x480px, 256 colors).

What makes you choose digital over traditional painting?

I left the school system and my parents’ home at 18 years old. I was too much of a rebel to follow any type of studies and eager to start my own life far from any influence. I first worked as a street portraitist in Avignon then. Outside the tourist season I started to do traditional painting. What I remember was the stock (the physical size of it): over 100 canvases take up a lot of room in a small apartment. I also had long drying times for commissions in oil, and when something wasn’t accepted by a client, I had to start over…

I discovered modern digital painting thanks to my first internet connection around 2000 and the first forums about it. I was amazed: brilliant colors, rich gradients, a lot of fantasy artworks. Before 2000, you had to pay for a book or go to exhibitions to see new artworks. And suddenly many artists were on the internet, and you could see thousands of artworks daily. Forums were starting to open everywhere and CG artists shared tips, tutorials and work-in-progress threads. The internet of CG artists was new, full of hope and full of humanity…

I bought a tablet to start to paint digitally during this period. I didn’t know many things about software, so my first years of digital painting were made with Photoshop Elements (bundled with the tablet). With digital painting, I could experiment with many themes I could never have sold on canvas. Then I met online publishers interested in my digital art and started to work more and more as a digital painter with an official Photoshop licence, Corel Painter, etcetera. In 2003 I ended my career as a traditional painter when a client decided to buy my whole stock of canvas.

How did you find out about Krita?

I first heard about Krita on forum news, around 2007. Krita was a Linux-only program at this time and I was still a Windows user then (I moved to using Gnu/Linux full-time in 2009). I remember I spent time to try to install it and didn’t succeed. I had a dual-boot with Linux-Mint 4.0 and I was already enthusiastic about open-source technologies, especially Blender.

My first contact with drawing in Krita was in 2009 when I was preparing my work as art director on the Sintel project and studied all the open source painting applications on Linux (Gimp, Gimp-painter fork, Mypaint, Qaquarelle, Gogh, Krita, Drawpiles, and even the web-based paint-chat ones). I really wanted to use only open source on GNU/Linux for the concept art. This crazy idea was a big turn in my career, and more when I decided to stick to it after the Sintel project.

I started my first years of 100% GNU/Linux using a mix of Gimp-painter 2.6, Mypaint and Alchemy. I published many tutorials and open DVDs about it: Chaos&Evolutions or Blend&Paint to document all the new tips I found. But Gimp-painter 2.6 was harder and harder to install because all GNU/Linux distributions were pushing Gimp 2.8 as default, and the two versions couldn’t live side by side. I wasn’t happy with Gimp 2.8. It was impossible to paint with it when they released it, and the Gimp-painter features I liked were not merged into the official release. Mypaint on the other side was transitioning in pain to newer technologies and the main developer left the project… I remember I felt stuck for a while, asking myself if my rebel move to only GNU/Linux was worth it. Nothing was really evolving positively about digital painting on GNU/Linux at this time.

Then I decided to start following Krita actively and invest as much time as I could in it. Krita wasn’t popular at all back in the day: 2.2/2.3 wasn’t ready, not for production, and the first years that I used it I started out by accepting the various regressions. I adapted, bug-reported, helped other artists build it, showed the features of new releases, communicated about it and the most important: kept painting with it. It was a good choice. I was convinced by three factors:

  1. the project vision, clearly set up to be a digital-painting application.
  2. the massive energy, passion and time put on it by Boudewijn Rempt, Dmitry Kazakov, Lukáš Tvrdý, Sven Langkamp and many other developers.
  3. the friendly community.

What was your first impression?

It was in 2009, and it was impossible to paint smoothly on a 1000x1000px canvas. Krita already had a lot of features: CMYK, rich brush engines, a single-window interface, selections, transform tool, etcetera… but most of those were half working or broken when you wanted to make real use of them. The project was missing users, beta-testers. I’m proud to have already reported over 200 bugs to the Krita bug tracker since then. Nowadays, I’m sort of part of the Krita team, I even made my first official commit last week.

What do you love about Krita?

This will sound geeky, but probably my favourite feature is the command-line exporter.

~$ krita in.kra --export-filename out.png

This feature is a central key to speeding up my workflow; I include this command in bash scripts to batch-transform Krita files to low-res JPGs, hi-res PNGs, and so on. It allows me to keep only a single source file in my work folder; all derived versions (internet version, publisher version) are auto-generated when the .kra source file changes. This way I’m never afraid of having to export everything again when I make a single change to a painting source file, or when one of the 16 languages of Pepper&Carrot gets an update. I just do it, save, and everything else (generation, watermarking, ftp upload and so on) is automatised.

Check the Source files of Pepper&Carrot if you are curious to see what automatised export output looks like. I wish the command-line export could do a bit more, for example adding the possibility to export top-level groups to multiple files.
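The batch idea described above can be sketched as a small shell script. This is a hedged sketch, not the interview's actual script: the work/ and export/ folder layout is invented for illustration, and the krita call follows the same form as the command shown above:

```shell
#!/bin/sh
# Print a krita export command for every .kra source that is newer than
# its derived PNG; pipe the output to sh to actually run the exports.
# The work/ and export/ folder names are assumptions for this sketch.
mkdir -p export
for src in work/*.kra; do
  [ -e "$src" ] || continue                  # no sources: nothing to do
  out="export/$(basename "$src" .kra).png"
  if [ ! -e "$out" ] || [ "$src" -nt "$out" ]; then
    echo krita "$src" --export-filename "$out"
  fi
done
```

Piping the output through `sh` performs the exports; adding watermarking or upload steps to the same loop gives the rest of the automation the interview mentions.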

What do you think needs improvement in Krita? Is there anything that really annoys you?

Stability needs improvement.

I invite all Krita users who want to help make Krita more stable to report their bugs, and not to expect that someone else will do it for them or that the developers will see the problem themselves.

But there is one big issue in this process: the bug-report website is not user-friendly at all, and not visual. Its features are very limited (formatting, inserting pictures or videos). If the Krita project wants to keep trusting the userbase alone to do volunteer beta-testing at a professional level, I think the project will need to make the life of the beta-testers easier.

It reminds me of how the Mypaint project was also affected by this with the old bug tracker. When the project moved the bug tracker to Github, the amount of new issues reported just went insane. Much discussion happens on it now; user avatars, formatting with titles/bold/italic, and inserted pictures make it way more friendly and human. Look at this type of bug report, with image and all: it’s a lot better adapted to how artists, the general audience or visually driven persons might want to report a bug. And I’m also pretty sure it helps developers to better see and solve the issues.

What sets Krita apart from the other tools that you use?

Krita (and digital-painting or digital-sculpting apps in general) is really a different thing from other software. Krita really needs to be realtime between me and the computer. Painting is a realtime experience and, when I paint, I can really feel when Krita doesn’t follow the rhythm. I think that’s why everyone is so happy to see the Krita team working on the performance topic in the kickstarter campaign.

If you had to pick one favourite of all your work done in Krita so far, what would it be, and why?

It would be the latest episode of Pepper&Carrot. As an artist constantly evolving and changing, the latest piece is probably the one that tells most about where I am right now. Older artworks like the portrait of Charles Darwin or Lecture tell different stories, closer to where I was in 2012.

What techniques and brushes did you use in it?

I used my brush kit on it, and tried to paint directly what I had in mind using almost no extra layers. I painted it flat, as I would do for a production concept-art speed painting. Then I refined the level of detail on top and constrained myself not to smooth the result too much.

Where can people see more of your work?

Probably on my portfolio www.davidrevoy.com.

Anything else you’d like to share?

I invite you to network with me on twitter, google+, or deviantArt if you want to chat about Krita or follow my new artwork, tutorials and resources. I also started a Youtube channel with video tutorials about Krita. Do not hesitate to comment and also share your tips or suggestions in the comments, I read them all and often reply. I’m also often connected to the IRC #krita channel on freenode. I’m using the nickname ‘deevad’. See you there!

Command-line builds for Android using ant

I recently needed to update an old Android app that I hadn't touched in years. My Eclipse setup is way out of date, and I've been hearing about more and more projects switching to using command-line builds. I wanted to ditch my fiddly, difficult to install Eclipse setup and switch to something easier to use.

Some of the big open-source packages, like OsmAnd, have switched to gradle for their Java builds. So I tried to install gradle -- and on Debian, apt-get install gradle wanted to pull in a total of 153 packages! Maybe gradle wasn't the best option to pursue.

But there's another option for command-line android builds: ant. When I tried apt-get install ant, since I already have Java installed (I think the relevant package is openjdk-7-jdk), it installed without needing a single additional package. For a small program, that's clearly a better way to go!

Then I needed to create a build directory and move my project into it. That turned out to be fairly easy, too -- certainly compared to the hours I spent setting up an Eclipse environment. Here's how to set up your ant Android build:

First install the Android "Stand-alone SDK Tools" from Installing the Android SDK. This requires a fair amount of clicking around, accepting licenses, and waiting for a long download.

Now install an SDK or two. Use android sdk to install new SDK versions, and android list targets to see what versions you have installed.

Create a new directory for your project, cd into it, and then:

android create project --name YourProject --path . --target android-19 --package tld.yourdomain.YourProject --activity YourProject
Adjust the Android target for the version you want to use.

When this is done, type ant with no arguments to make sure the directory structure was created properly. If it doesn't print errors, that's a good sign.

Check that local.properties has sdk.dir set correctly. It should have picked that up from your environment.

There will be a stub source file in src/tld/yourdomain/YourProject.java. Edit it as needed, or, if you're transferring a project from another build system such as eclipse, copy the existing .java files to that directory.

If you have custom icons for your project, or other resources like layout or menu files, put them in the appropriate directories under res. The directory structure is the same as in eclipse, but unlike an eclipse build, you can edit the files at any time without the build mysteriously breaking.

Signing your app

Now you'll need a key to sign your app. Eclipse generates a debugging key automatically, but ant doesn't. It's better to use a real key anyway, since debugging keys expire and need to be regenerated periodically.

If you don't already have a key, generate one with:

keytool -genkey -v -keystore my-key.keystore -alias mykey -keyalg RSA -sigalg SHA1withRSA -keysize 2048 -validity 10000
It will ask you for a password; be sure to use one you won't forget (or record it somewhere). You can use any filename you want instead of my-key.keystore, and any alias you want instead of mykey.

Now create a file called ant.properties containing these two lines (pointing at the keystore and alias you just created):

key.store=/path/to/my-key.keystore
key.alias=mykey

Some tutorials tell you to put this in build.properties, but that's outdated and no longer works.

If you forget your key alias, you can find out with this command and the password:

keytool -list -keystore /path/to/my-key.keystore

Optionally, you can also include your key's passwords with two more lines:

key.store.password=your-password
key.alias.password=your-password

If you don't, you'll be prompted twice for the password (which echoes on the terminal, so be aware of that if anyone is bored enough to watch over your shoulder as you build packages; I guess build-signing keys aren't considered particularly high security). Of course, you should make sure not to include both the private keystore file and the password in any public code repository.


Finally, you're ready to build!

ant release

If you get an error like:

AndroidManifest.xml:6: error: Error: No resource found that matches the given name (at 'icon' with value '@drawable/ic_launcher').
it's because older eclipse builds wanted icons named icon.png, while ant wants them named ic_launcher.png. You can fix this either by renaming your icons to res/drawable-hdpi/ic_launcher.png (and the same for res/drawable-ldpi and -mdpi), or by removing everything under bin (rm -rf bin/*) and then editing AndroidManifest.xml. If you don't clear bin before rebuilding, bin/AndroidManifest.xml will take precedence over the AndroidManifest.xml in the root, so you might have to edit both files.

After ant release, your binary will be in bin/YourProject-release.apk. If you have an adb connection, you can (re)install it with: adb install -r bin/YourProject-release.apk

Done! So much easier than eclipse, and you can use any editor you want, and check your files into any version control system.

That just leaves the coding part. If only Java development were as easy as Python or C ...

May 27, 2015

A New (Old) Tutorial

A New (Old) Tutorial

Revisiting an Open Source Portrait (Mairi)

A little while back I had attempted to document a shoot with my friend and model, Mairi. In particular I wanted to capture a start-to-finish workflow for processing a portrait using free software. There are often many tutorials for individual portions of a retouching process but rarely do they get seen in the context of a full workflow.

The results became a two-part post on my blog. For posterity (as well as for those who may have missed it the first time around) I am republishing the second part of the tutorial Postprocessing here.

Though the post was originally published in 2013 the process it describes is still quite current (and mostly still my same personal workflow). This tutorial covers the retouching in post while the original article about setting up and conducting the shoot is still over on my personal blog.

Mairi Portrait Final The finished result from the tutorial.
by Pat David (cba).

The tutorial may read a little long but the process is relatively quick once it’s been done a few times. Hopefully it proves to be helpful to others as a workflow to use or tweak for their own process!

Coming Soon

I am still working on getting some sample shots to demonstrate the previously mentioned noise free shadows idea using dual exposures. I just need to find some sample shots that will be instructive while still at least being something nice to look at…

Also, another guest post is coming down the pipes from the creator of PhotoFlow, Andrea Ferrero! He’ll be talking about creating blended panorama images using Hugin and PhotoFlow. Judging by the results on his sample image, this will be a fun tutorial to look out for!

SK1 Print Design adding support for Palettes (Colour Swatches)

SK1 Print Design is an interesting project. They found the vector graphics program Sketch was useful to their business, and maintained their own customized version, eventually becoming a project all of their own. I'm not involved with SK1 Print Design myself but I do follow their newsfeed on Facebook, where they regularly post information about their work.

They have added import and export support for a variety of Colour Palettes, including SOC (StarOffice Colours, i.e. the OpenDocument standard used by OpenOffice.org and LibreOffice) and CorelDraw XML Palettes and more. For users who already have CorelDraw this should allow them to reuse their existing Pantone palettes.

They are also continuing their work to merge their SK1 and PrintDesign branches. The next release seems very promising.

ColorHugALS and Sensor HID

As Bastien hinted in his last blog post, we now have some new test firmware for the ColorHugALS device. The ever-awesome Benjamin Tissoires has been hacking on an alternative device firmware, this time implementing the Sensor HID interface that Microsoft is suggesting vendors use for internal ambient light sensors on tablets and laptops for Windows 8.

Implementing this new interface has several advantages:

  • The sensor should “just work” with Windows 8 without a driver
  • The sensor now works with iio-sensor-proxy without writing any interface code
  • We can test the HID code in the kernel with a device we can hack to do strange things

    So, if you want to test the new GNOME ambient light sensor code, flash your ColorHugALS with this file using colorhug-cmd flash-firmware ColorHugALS-SensorHID.bin — the flash process will appear to fail right at the end, but this is just because we’ve not yet written the HID version of the SetFlashSuccess call that instructs the bootloader to start the firmware automatically when inserted. This isn’t actually such a bad thing for an experimental firmware, but means when you remove then insert your ALS device you’ll have to do colorhug-cmd boot-flash to switch from the flashing red LED bootloader mode into the new firmware mode.

    If it’s too broken for you right now, you can go back to the real firmware using colorhug-cmd when in bootloader mode.


    There are still 17 ColorHugALS devices in stock, if you want to buy one for testing. Once they’re gone, they’re gone; I don’t think I’ll build another batch unless there’s a lot more demand, as right now I’m building them at a loss.

    May 26, 2015

    Interview with Andrei Rudenko

    Could you tell us something about yourself?

    My name is Andrei Rudenko, I’m a freelance illustrator, graduated from the Academy of Fine Arts (as a painter) in Chisinau (Moldova). I have many hobbies, I like icon/UI design, photography, learned a few programming languages and make games in my spare time, and also have about 10 releases on musical labels as 2R. For now I’m trying to improve my skills in illustration and game development.

    Do you paint professionally, as a hobby artist, or both?

    Both, it is good when your hobby is your job.

    What genre(s) do you work in?

    I like surrealism, critical realism. I don’t care about genre much, I think the taste and culture in art is more important.

    Whose work inspires you most — who are your role models as an artist?

    I really like the Renaissance artists, Russian Wanderers, also Jacques-Louis David, Caravaggio, Anthony van Dyck, and Roberto Ferri.

    When did you try digital painting for the first time?

    I think around 2010, when I tried painting in Photoshop, but I didn’t like drawing with it, and I left it until I found Krita.

    What makes you choose digital over traditional painting?

    Digital painting has its advantages: speed, tools, Ctrl+Z. For me it is a place for experiments, which I can then use in traditional painting.

    How did you find out about Krita?

    When I became interested in Linux and open source, I found Krita; it had everything that I needed for digital painting. For me it is important to recapture that feeling of painting with traditional materials.

    What was your first impression?

    As soon as I discovered the powerful brush engine, I realized that this was what I had been looking for for a long time.

    What do you love about Krita?

    I like its tools, as I have already said the brush engine, the large variety of settings. I like the team who are developing Krita, very nice people. And of course it is free.

    What do you think needs improvement in Krita? Is there anything that really annoys you?

    I think better vector graphics tools, for designers. Also some fixes for pixel artists.

    What sets Krita apart from the other tools that you use?

    The possibility to customize it the way you like.

    If you had to pick one favourite of all your work done in Krita so far, what would it be, and why?

    The Monk, for the Diablo 3 contest; there is a lot of work in it and a lot that still needs to be done. But Krita gave me everything I needed to make this art.

    What techniques and brushes did you use in it?

    Most of the time I use the color smudge brush (with dulling), like in traditional oil painting. For details a simple circle brush, for leaves I made a special brush with scattering. Almost all brushes I made myself, and also patterns for brushes I made from my texture photos.

    Where can people see more of your work?


    Anything else you’d like to share?

    Thank you for inviting me to this interview. And thank you, Krita team, for Krita. ;)

    Fedora 22 and missing applications

    Quite a few people are going to be installing Fedora 22 in the coming days, searching for things in the software center and not finding what they want. This is because some applications still don’t ship AppData files, which have become compulsory for this release. So far, over 53% of the applications shipped in Fedora include the required software center metadata, up from the original 12% in Fedora 21. If you don’t like this, you can either use dnf to install the package on the command line, or run gsettings set org.gnome.software require-appdata false. If you want to see your application in the software center in the future, please file a bug either upstream or downstream (I’ve already filed a lot of upstream bugs), or even better write the metadata and get it installed, either in the upstream tarball or downstream in the Fedora package. Most upstream and downstream maintainers have shipped the extra software center information, but some others might need a little reminder about it from users.
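For maintainers who need to write the metadata, a minimal AppData file looks roughly like this. This is a hedged sketch, not taken from the post: the ID, licenses, and URL are placeholders, and the exact required tags are defined by the AppStream/AppData specification, so check that before shipping:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- Installed as e.g. /usr/share/appdata/example.appdata.xml;
     all values below are placeholders. -->
<component type="desktop">
  <id>example.desktop</id>
  <metadata_license>CC0-1.0</metadata_license>
  <project_license>GPL-2.0+</project_license>
  <name>Example</name>
  <summary>A one-line summary shown in the software center</summary>
  <description>
    <p>A longer description of what the application actually does.</p>
  </description>
  <url type="homepage">http://example.org/</url>
</component>
```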

    May 25, 2015

    Interview with Griatch

    Surprise of Trolls, by Griatch

    Could you tell us something about yourself?

    I, Griatch, am from Sweden. When not doing artwork I am an astrophysicist, mainly doing computer modeling of astronomical objects. I also spend time writing fiction, creating my own music and being the lead developer of Evennia, an open-source, professional-quality library for creating multiplayer text games (muds). I also try to squeeze in a roleplaying game or two now and then, as well as a beer at the local pub.

    Do you paint professionally, as a hobby artist, or both?

    A little bit of both. I’m mainly a hobby painter, but lately I’ve also taken on professional work and I’m currently commissioned to do all artwork and technical illustration for an upcoming book on black holes (to be published in Swedish). Great fun!

    What genre(s) do you work in?

    I try to be pretty broad in my genres and have dabbled in anything from fantasy and horror to sci-fi, comics and still life. I mostly do fantasy, sci-fi and other fantastical imagery but I often go for the mundane aspects of those genres, portraying scenes and characters doing non-epic things. I try to experiment a lot but like to convey or hint at some sort of story in my artwork.

    Whose work inspires you most — who are your role models as an artist?

    There are too many to list, including many involved in the Krita project! One thing you quickly learn as an artist (and in any field, I’ve found) is that no matter how well you think you are doing for yourself, there are always others who are way better at it. Which is great since it means you can learn from them!

    How did you get to try digital painting for the first time?

    I did my first digital drawing with a mouse on an Amiga 500 back in the mid-nineties. I used the classical program Deluxe Paint. You worked in glorious 32 colours (64 with the “halfbrite” hardware hack) on a whopping 320×240 pixel canvas. I made fantasy pictures and a 100+ frame animation in that program, inspired by the old Amiga game Syndicate.

    But even though I used the computer quite a bit for drawing, digital art was at the time something very different from analogue art – pixel art is cool but it is a completely separate style. So I kept doing most of my artwork in traditional media until much later.

    What made you choose digital over traditional painting?

    I painted in oils since I was seven and kept doing so up until my university years. I dropped the oils when moving to a small student apartment – I didn’t have the space for the equipment nor the willingness to sleep in the smell. So I drew in charcoal and pencils for many years. I eventually got a Linux machine in the early 2000’s and whereas my first tries with GIMP were abysmal (it was not really useful for me until version 2+), I eventually made my first GIMP images, based on scanned originals. When I got myself a Wacom Graphire tablet I quickly transitioned to using the computer exclusively. With the pen I felt I could do pretty much anything I could on paper, with the added benefits of undo’s and perfect erasing. I’ve not looked back since.

    How did you find out about Krita?

    I’ve known about Krita for a long time, I might have first heard about it around the time I started to complement my GIMP work with MyPaint for painting. Since I exclusively draw in Linux, the open-source painting world is something I try to keep in touch with.

    What was your first impression?

    My first try of Krita was with an early version, before the developers stated their intention of focusing on digital painting. That impression was not very good, to be honest. The program had a very experimental feel to it and felt slow, bloated and unstable. The kind of program you made a mental note of for the future but couldn’t actually use yet. Krita has come a long way since then and today I have no stability or performance issues.

    What do you love about Krita?

    Being a digital painter, I guess I should list the brush engines and nice painting features first here. And these are indeed good. But the feature I find myself most endeared with is the transform tool. After all my years of using GIMP, where applying scale/rotate/flip/morph etc is done by separate tools or even separate filters, Krita’s unified transform tool is refreshing and a joy to use.

    What do you think needs improvement in Krita? Is there anything that really annoys you?

    I do wish more GUI toolkits would support the GTK2 direct-assignment of keyboard shortcuts: Hover over the option in the menu, then click the keyboard shortcut you want to that menu item. Fast and easy, no scrolling/searching through lists of functions deep in the keyboard shortcut settings. I also would like to see keyboard shortcuts assigned to all the favourite brushes so you can swap mid-stroke rather than having to move the pen around on the pop-up menu.

    Apart from this, with the latest releases, most of my previous reservations with the program have melted away actually. Apart from stability concerns, one of the reasons I was slow to adopt Krita in the past was otherwise that Krita seems to want to do it all. Krita has brushes, filters, even vector tools under the same umbrella. I did (and still often do) my painting in MyPaint, my image manipulation in GIMP and my vector graphics in Inkscape – each doing one aspect very well, in traditional Unix/Linux fashion. For the longest time Krita’s role in this workflow was … unclear. However, the latest versions of Krita have improved the integration between its parts a lot, making it actually viable for me to stay in Krita for the entire workflow when creating a raster image.

    The KDE forum and bug reporting infrastructure it relies on hides Krita effectively from view as one of many KDE projects. Compared to the pretty and modern Krita main website, the KDE web pages you reach once you dive deeper are bland and frankly off-putting, a generic place to which I have no particular urge to contribute. That the Krita KDE-forum can’t even downscale an image for you, but requires you to first yourself rescale the image before uploading, is so old-fashioned that it’s clear the place was never originally intended to be hosting art. So yes, this part is an annoyance, unrelated to the program itself as it is.

    What sets Krita apart from the other tools that you use?

    The transform tool mentioned above and the sketch-brush engine which is great fun. The perspective tool is also a very cool addition, just to name a few things. Krita seems to have the development push, support and ambition to create a professional and polished experience. So it will be very interesting to follow its development in the future.

    If you had to pick one favourite of all your work done in Krita so far, what would it be, and why?

    “The Curious Look”. This is a fun image of a recurring character of mine. Whereas I had done images in Krita before, this one was the first I decided to make in Krita from beginning to end.

    What techniques and brushes did you use in it?

    This is completely hand-painted only using one of Krita’s sketch brushes, which I was having great fun with!

    Where can people see more of your work?

    You can find my artwork on DeviantArt here: http://griatch-art.deviantart.com/
    I have made many tutorials for making art in OSS programs: http://griatch-art.deviantart.com/journal/Tutorials-237116359
    I also have a Youtube channel with amply commented timelapse painting videos: https://www.youtube.com/user/griatch/videos

    Anything else you’d like to share?

    Nothing more than wishing the Krita devs good luck with the future development of the program!

    May 23, 2015

    dupefinder - Removing duplicate files on different machines

    Imagine you have an old and a new computer. You want to get rid of that old computer, but it still contains loads of files. Some of them are already on the new one, some aren’t. You want to get the ones that aren’t: those are the ones you want to copy before tossing the old machine out.

    That was the problem I was faced with. Not willing to do the tedious task of comparing and merging files manually, I decided to write a small tool for it. Since it might be useful to others, I’ve made it open-source.

    Introducing dupefinder

    Here’s how it works:

    1. Use dupefinder to generate a catalog of all files on your new machine.
    2. Transfer this catalog to the old machine
    3. Use dupefinder to detect and delete any known duplicates.
    4. Anything that remains on the old machine is unique and needs to be transferred to the new machine.

    You can get it in two ways: there are pre-built binaries on GitHub, or you may use go get:

    go get github.com/rubenv/dupefinder/...

    Usage should be pretty self-explanatory:

    Usage: dupefinder -generate filename folder...
        Generates a catalog file at filename based on one or more folders
    Usage: dupefinder -detect [-dryrun / -rm] filename folder...
        Detects duplicates using a catalog file on one or more folders
      -detect=false: Detect duplicate files using a catalog
      -dryrun=false: Print what would be deleted
      -generate=false: Generate a catalog file
      -rm=false: Delete detected duplicates (at your own risk!)

    Full source code on Github

    Technical details

    Dupefinder was written using Go, which is my default choice of language nowadays for these kind of tools.

    There’s no doubt that you could use any language to solve this problem, but Go really shines here. The combination of lightweight-threads (goroutines) and message-passing (channels) make it possible to have clean and simple code that is extremely fast.

    Internally, dupefinder looks like this:

    Each of these boxes is a goroutine. There is one hashing routine per CPU core. The arrows indicate channels.

    The beauty of this design is that it’s simple and efficient: the file crawler ensures that there is always work to do for the hashers, the hashers just do one small task (read a file and hash it) and there’s one small task that takes care of processing the results.

    The end-result?

    A multi-threaded design, with no locking misery (the channels take care of that), in what is basically one small source file.

    Any language can be used to get this design, but Go makes it so simple to quickly write this in a correct and (dare I say it?) beautiful way.

    And let’s not forget the simple fact that this trivially compiles to a native binary on pretty much any operating system that exists. Highly performant cross-platform code with no headaches, in no time.

    The distinct lack of bells and whistles makes Go a bit of an odd duck among modern programming languages. But that’s a good thing. It takes some time to wrap your head around the language, but it’s a truly refreshing experience once you do. If you haven’t done so, I highly recommend playing around with Go.


    Interview with Mary Winkler

    Could you tell us something about yourself?

    My name is Mary Winkler and I work under the brand Acrylicana. I love coffee, cats, pastels, neons, sunshine, and sparkles.

    Do you paint professionally, as a hobby artist, or both?

    Professionally mostly, but also just because I love creating. If I can make a mark, painting, drawing, crafting, etcetera, I will.

    What genre(s) do you work in?

    Realism, kawaii, stylized, pop art… there’s a lot of terms that define my art, and it has changed and continues to change over time.

    Whose work inspires you most — who are your role models as an artist?

    I adore the work of Peter Max, Macoto, Junko Mizuno, Lisa Frank, Bouguereau, and Erte, as well as artist friends Miss Kika, Anneli Olander, Zambicandy, and Brittany Ngo.

    How did you get to try digital painting for the first time?

    In high school my oldest brother bought me an off-brand graphics tablet, well over a decade ago. I’ve been creating art digitally ever since.

    What makes you choose digital over traditional painting?

    I love both mediums, actually. If I’m pressed for time, working with a client, or just don’t want a mess, digital is the way to be. Most of my work is done digitally. I do love to be able to paint up wood, canvas, or paper with acrylics or watercolors for gallery shows or small pieces to put in my shop.

    How did you find out about Krita?

    I was writing an article for Tuts+ covering drawing and design programs that weren’t made by Adobe. Some Twitter followers mentioned Krita, and when the article ran a few people commented that I had missed it. I rectified that mistake by painting for three days straight, and I haven’t shut up about Krita since.

    What was your first impression?

    The program immediately detected my tablet (a Wacom Cintiq), and while larger file sizes can produce a little bit of lag on my machine, the program doesn’t freeze on me, crash unexpectedly, or cause weird jagged lines where they should be smooth. Krita is smooth and lighter than Photoshop, and has such good painting tools!

    What do you love about Krita?

    LOVE the blending tools. I’m used to those of Paint Tool SAI, and finding a program whose brushes are far more customizable and can do more is digital art heaven. Especially an open source one!

    What do you think needs improvement in Krita? Is there anything that really annoys you?

    I know it’s small, but I’d love a zoom tool in the toolbar. I’m happy to push plus and minus, but not seeing the little magnifying glass first thing was something I missed from a new user standpoint.

    What sets Krita apart from the other tools that you use?

    It’s not hogging all of my RAM like Painter or Adobe products can. While it’s lighter than those programs, it’s also packed with more features than something like FireAlpaca or Paint Tool SAI. I have the ability to customize brushes and tools fantastically, and have barely needed to so far thanks to the kickass default tools.

    If you had to pick one favourite of all your work done in Krita so far, what would it be, and why?

    So far I’ve only done two: the tart piece and a poster design for an upcoming gallery show. I love them both and cannot choose. I do plan on adding hundreds of doodles done in Krita to my harddrive.

    What techniques and brushes did you use in it?

    So far I love the watercolor-style brushes, sparkle brushes, and the blending ones. I’ve been playing with default ones mostly to get the hang of what Krita has to offer. Simply love anything that is intuitive in its use. Immediately I could apply my painting techniques to the program without having to learn new ways to use layers or complex blending or painting styles. It’s like working with acrylics, and I love that.

    Where can people see more of your work?

    You can follow me on behance, instagram, facebook, twitter, and deviantart.

    Anything else you’d like to share?

    I write a lot of tutorials for Tuts+ (http://tutsplus.com/authors/mary-winkler) and add videos occasionally on youtube (https://www.youtube.com/user/acrylicana). I hope to add Krita to my roster of tutorials/courses/process videos soon. :)

    May 22, 2015

    iio-sensor-proxy 1.0 is out!

    Modern (and some less modern) laptops and tablets have a lot of builtin sensors: accelerometer for screen positioning, ambient light sensors to adjust the screen brightness, compass for navigation, proximity sensors to turn off the screen when next to your ear, etc.


    We've supported accelerometers in GNOME/Linux for a number of years, following work on the WeTab. The accelerometer appeared as an input device, and sent kernel events when the orientation of the screen changed.

    Recent devices, especially Windows 8 compatible devices, instead export a HID device, which, under Linux, is handled through the IIO subsystem. So the first version of iio-sensor-proxy took readings from the IIO sub-system and emulated the WeTab's accelerometer: a few too many levels of indirection.

    The 1.0 version of the daemon implements a D-Bus interface, which means we can support more than accelerometers. The D-Bus API, this time, is modelled after the Android and iOS APIs.
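For a rough idea of what that interface looks like from a client's point of view, the daemon exposes something along these lines over D-Bus. This is an illustrative sketch from memory of the net.hadess.SensorProxy API, not copied from the post; verify property and method names against the iio-sensor-proxy documentation before relying on them:

```xml
<!-- Illustrative sketch of the SensorProxy D-Bus interface;
     names may differ from the shipped API. -->
<node>
  <interface name="net.hadess.SensorProxy">
    <!-- Claim/release pairs let the daemon power sensors
         down when nobody is listening. -->
    <method name="ClaimAccelerometer"/>
    <method name="ReleaseAccelerometer"/>
    <method name="ClaimLight"/>
    <method name="ReleaseLight"/>
    <property name="HasAccelerometer" type="b" access="read"/>
    <property name="AccelerometerOrientation" type="s" access="read"/>
    <property name="HasAmbientLight" type="b" access="read"/>
    <property name="LightLevelUnit" type="s" access="read"/>
    <property name="LightLevel" type="d" access="read"/>
  </interface>
</node>
```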


    Accelerometers will work in GNOME 3.18 as well as they used to, once a few bugs have been merged[1]. If you need support for older versions of GNOME, you can try using version 0.1 of the proxy.

    Orientation lock in action

    Now that we've added ambient light sensor support in the 1.0 release, it's time to put into practice the best practices mentioned in Owen's post about battery usage. We already had code like that in gnome-power-manager nearly 10 years ago, but it really didn't work very well.

    The major problem at the time was that ambient light sensor readings weren't in any particular unit (values had different meanings for different vendors), and users felt that they were fighting the computer for control of the backlight.

    Richard fixed that though, adapting work he did on the ColorHug ALS sensor, and the brightness is now completely in the user's control, and adapts to the user's tastes. This means that we can implement the simplest of UIs for its configuration.

    Power saving in action

    This will be available in the upcoming GNOME 3.17.2 development release.

    Looking ahead

    For future versions, we'll want to export the raw accelerometer readings, so that applications, including games, can make use of them, which might bring up security issues. SDL, Firefox, WebKit could all do with being adapted, in the near future.

    We're also looking at adding compass support (thanks Elad!), which Geoclue will then export to applications, so that location and heading data is collected through a single API.

    Richard and Benjamin Tissoires, of fixing input devices fame, are currently working on making the ColorHug-ALS compatible with Windows 8, meaning it would work out of the box with iio-sensor-proxy.


    We're currently using GitHub for bug and code tracking. Releases are mirrored on freedesktop.org, as GitHub is known to mangle filenames. API documentation is available on developer.gnome.org.

    [1]: gnome-settings-daemon, gnome-shell, and systemd will need patches

    Google taking the SMIL out of SVG.

    Google has recently announced their intention to drop SMIL support in Blink, the rendering engine for Chrome. SMIL is a way to animate SVG’s in a declarative way. Google’s argument is that SMIL animation has not become hugely popular and that Web Animations will provide the same functionality. As a result of this announcement, the SVG working group decided to move SMIL from SVG 2 and into its own specification. One could say that SMIL is on life support at the moment.

    SMIL’s lack of use is most likely due to its lack of support in IE. Microsoft has declared they will not implement SMIL in IE but they have hinted in the past that they are open to a native JS implementation built on top of Web Animations.

    So why would losing SMIL be a great loss?

    1. SMIL declarative animations are easier to write compared to JavaScript or CSS/Web Animations.
    2. SMIL animations are in general more performant.
    3. With SMIL animations one can independently animate different attributes and properties.
    4. JavaScript is not allowed to run inside SVGs in many situations due to security issues so it is not a viable alternative in many cases.
    5. Web Animations don’t replace all the functionality of SMIL. For example, one cannot animate attributes such as path data. In particular, you won’t be able to do this:

    A variety of Batman logos, animated with SMIL.
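To give a flavour of what declarative SMIL morphing looks like, here is a minimal self-contained example (an illustrative diamond, not the Batman animation above): the animate element interpolates the path’s d attribute directly, with no JavaScript involved. Note that each entry in values must share the same command structure for smooth interpolation:

```xml
<svg xmlns="http://www.w3.org/2000/svg" width="120" height="120">
  <path fill="black">
    <!-- SMIL morphs the path data itself; every 'values' entry
         uses the same commands (M L L L Z) so the shapes can
         interpolate smoothly. -->
    <animate attributeName="d" dur="2s" repeatCount="indefinite"
             values="M10,60 L60,10 L110,60 L60,110 Z;
                     M30,60 L60,30 L90,60 L60,90 Z;
                     M10,60 L60,10 L110,60 L60,110 Z"/>
  </path>
</svg>
```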

    Ironically, YouTube is planning on using SMIL to animate buttons.

    As usual, if you are reading this in a blog aggregator and the images don’t display correctly, try viewing on my blog website. Aggregators don’t play well with SVG.

    (For more on animating paths, see my blog post on path animations.)

    You can read about Google’s intention and the debate that is going at the chromium.org Google group. If you use SMIL or plan to, let Google know that it is important to you.

    A figure just to have a nice image in Google+ (which doesn’t do SVG… another reason to frown):

    Frown face.

    Second stretchgoal reached and new builds!

    We’ve got our second stretchgoal through both Kickstarter and the Paypal donations! We hope we can get many more so that you, our users, get to choose more ways for us to improve Krita. And we have got half a third stretch goal actually implemented: modifier keys for selections!

    Oh — and check out Wolthera’s updated brush packs! There are brush packs for inking, painting, filters (with a new heal brush!), washes, flow-normal maps, doodle brushes, experimental brushes and the awesome lace brush in the SFX brush pack!

    We’ve had a really busy week. We already gave you an idea of our latest test-build on Monday, but we had to hold back because of the revived crash file recovery wizard on windows… that liked to crash. But it’s fixed now, and we’ve got new builds for you!

    So what is exactly new in this build? Especially interesting are all the improvements to PSD import/export support. Yesterday we learned that Katarzyna uses PSD as her working format when working with Krita – we still don’t recommend that, but it’s easier now!

    Check the pass-through switch in the group layer entry in the layerbox!

    • Dmitry implemented Pass-Through mode for group layers. Note: filter, transform and transparency masks and pass-through mode don’t work together yet, but loading and saving groups from and to PSD now does! Pass-through is not a fake blending mode as in Photoshop: it is a switch on the group layer. See the screenshot!
    • We now can load and save layerstyles, with patterns from PSD files! Get out your dusty PSDs for testing!
    • Use the right Krita blending mode when a PSD image contains Color Burn.
    • Add Lighter Color and Darker Color blending modes and load them from PSD.
    • When using Krita with a translation active on windows, the delay on starting a stroke is a bit less, but we’re still working on eliminating that delay completely.
    • The color picker cursor now shows the currently picked and previous color.
    • Layer styles can now be used with inherit-alpha
    • Fix some issues with finding templates.
    • Work around an issue in the oxygen widget style on Linux that would crash the OpenGL-based canvas due to double initialization
    • Don’t toggle the layer options when right-clicking on a layer icon to get the context menu (patch by Victor Wåhlström)
    • Update the Window menu when a subwindow closes
    • Load newer Photoshop-generated JPG files correctly by reading the resolution information from the TIFF tags as well. (Yes, JPG resolution is marked in the EXIF metadata using TIFF tags if you save from Photoshop…)
    • Show the image name in the window menu if it hasn’t been saved yet.
    • Don’t crash when trying to apply isolate-layer on a transform mask
    • Add webp support (at least on Linux, untested on Windows)
    • Add a shortcut to edit/paste into a new image. Patch by Tiffany!
    • Fix the autosave recovery dialog on Windows for unnamed autosaves!
    • Added a warning for intel users who may still be dealing with the broken driver. If Krita works fine for you, just click okay. If not, update your drivers!

    New builds for Linux are being created at the moment and will be available through the usual channels.



    The Windows builds work on Vista and up; Windows 7 and up is recommended. There is no Windows XP build. If you have a 64-bit version of Windows, don’t use the 32-bit build! The zip files do not need installing, just unpacking, but they do not come with the Visual Studio C runtime that is included in the msi installer.


    (Please keep in mind that these builds are unstable and experimental. Stuff is expected not to work. We make them so we know we’re not introducing build problems and to invite hackers to help us with Krita on OSX.)

    May 21, 2015

    Krita comes to Discworld!

    We found out that the German Discworld covers were made with Krita, and had the privilege to ask the artist to talk about her work!

    (Don’t forget to check out our 2015 Kickstarter campaign as well!)
    Color of Magic cover (with lettering), by Katarzyna Oleska

    Hi. My name is Katarzyna Oleska and I am an Illustrator working for publishers, magazines and private clients. A couple of months ago, I came across a free program for painters called Krita. My experience of free programs in the past wasn’t great, but to my surprise, Krita was different. At first I was overwhelmed with the number of dockers and settings, but soon found that they were there for a reason. I fell in love with Krita so much that I left my old Corel Painter and started using Krita to paint my commissions. Most of my recent Terry Pratchett covers were painted in Krita.

    How did you get into illustration/book cover painting in the first place?

    I started painting covers back in 2003 when I was still studying architecture. I’d always liked to draw and paint and wanted to see my works in print. So one day I took a chance and e-mailed one of the publishers I wanted to work for. I attached a couple of samples of my works and I got my first job straight away. Pretty lucky. Back then I was still working traditionally but as time went by I bought a tablet and started working digitally.

    Pyramids cover (with lettering), by Katarzyna Oleska

    How do you find jobs?

    It really depends. Some of the commissions come to me and some I have to chase. If the commission comes to me it’s usually through word of mouth or because the client saw my works online. But I also approach new publishers, send them my work samples, my portfolio etcetera.

    Can you choose which books you illustrate, or do you just do what a publisher throws you?

    Unfortunately I don’t have the comfort of choosing what I want to illustrate. I can refuse politely if I think I can’t deliver a good illustration, for example when I feel my style wouldn’t fit the story. But publishers usually know what I am good at, they know my portfolio and I have never really refused any cover yet.

    How do you determine which scene/character(s) to put on the cover?

    The best decision of which scene or characters to put on the cover can only be made if I know the story so whenever I have a chance to read a book I take it. Being a fan of reading myself, I know how important it is for the cover to reflect the story inside; especially with a series like the Terry Pratchett Discworld novels. I was already a huge Terry Pratchett fan, so that wasn’t a problem.

    When choosing a scene to paint I usually try to analyse where the main focus of the story is. Very often I am tempted to paint a scene that would look amazing on the cover but I catch myself in time and remind myself that this particular scene, though amazing, wouldn’t really sell the story. So I choose the one that will do it better and will also resonate with the title. For example with “Guards, Guards” the only reasonable choice was to paint the Guards running away from a dragon they were trying to track down. Nothing else would really fit.

    Guards Guards cover (with lettering), by Katarzyna Oleska

    Sometimes, however, it’s impossible to read a book because of a tight deadline or the language it was written in. When that happens I try to make sure I find out as much as possible about the book from the publisher.

    What sets Krita apart from other tools you’ve used?

    The first and most obvious thing is that it’s free. I love that young artists will now have access to such a great tool without spending lots of money. But I would never recommend a program based solely on the price. I have used some free programs and never liked them; they would last a very short time on my computer. With Krita it’s different – I think it’s already a strong competitor to the best-known programs on the market.

    For me, Krita feels very natural to use. I have worked both in Photoshop and Painter before and although I like them, I’ve always been hoping to find a program that sits somewhere in between those two. As an illustrator I am mostly interested in paint tools. Photoshop has always seemed too technical and not so intuitive. Painter, while trying to deliver the painterly feel, wasn’t really delivering it. With Krita I feel almost like I’m painting. The number of settings for brushes can be overwhelming at first, but it helps to create brushes that are customized specifically for me. I especially like how Krita manages patterns in brushes.

    Sorcery z napisami Katarzyna Oleska

    What does Krita already do better, and what could make it better still?

    As well as the brushes, I also love the vector tools in Krita. I have never before seen a program where tools would change their characteristics depending on what kind of layer we use them on (paint/vector).

    I also love that I can pick a color with ctrl and dynamically change the size of the brush by holding shift and dragging my pen. I often only have to use my pinky to control these two.

    Rotating the canvas is easy (space-shift) and I am addicted to the mirror tool as I use it to verify the proportions in my paintings (mirroring the image helps spot mistakes). I love that when I’m using two windows for one file the mirror tool only affects one of the windows. The warping tool is also great. I don’t use it much, but I tried it out and I love the way it works. Multiple Brushes and Wrap Around Mode are great too, they make creating patterns so easy. But one of my favourite things is that I can choose my own Advanced Color Setting Shape and Type and that there are so many options that come with it.

    Things that could be improved: when I overwrite a brush preset I cannot keep the old icon I created. Perhaps an option to keep the old icon could be added. Seems like a small problem but when using many brushes I get used to the icon and when it’s gone I have to search for my brush. The other improvement would be the ability to merge multiple layers together.

    Can you give a quick overview of your workflow?

    Sure. I actually prepared a short video that shows how I work. It’s a sketch for Terry Pratchett’s “Wyrd Sisters”. I used the older version of Krita back then but the workflow remains the same.

    Do you work closely with the publisher for a book cover, or do you only deliver a painting so you don’t see the result until it’s published?

    Very often, before I even start sketching, the publisher will send me a draft of the cover’s layout so that I am aware of how much space I have to work with. Sometimes however, when the publisher doesn’t know the final layout, they give me some directions and let me decide how much space I want to leave for the lettering. Usually after I’ve handed them the initial sketch they can correct me, and ask to change the composition a bit. When it comes to the finished illustration I have full control over it until I’ve e-mailed it to the publisher. Once they have approved it, how it looks when it is published is out of my hands. Sometimes they will send me the final version of the cover, so that I know what it will look like in print and I can make some last minute suggestions but I don’t have real control over the cover itself.

    What are the special requirements (colour, resolution, file format) and challenges when you work for print?

    I like to work with bigger formats. I think a painting looks better when painted big and then shrunk to the size of the cover rather than when it’s painted with only the small size in mind. A big size forces me to be more precise in details, so in the end the image looks more crisp and the quality is better. Besides, the client may in the future want to use the painting for a poster, and then I know the painting will look great.

    I usually work with psd files. I use many layers and this is the best file type for me. When I send out the final image I flatten the image and save it as a tiff file. It may be heavier than jpg but there is no loss in quality. Also, I work in RGB mode, but I always switch to CMYK in the end to see if I like how it’s going to look in print (CMYK has fewer colors). If necessary I correct any mistakes I see.

    To see more of Katarzyna’s work, visit her site: www.katarzynaoleska.com

    Publishing House: Piper – www.piper.de
    Lettering: Guter Punkt – www.guter-punkt.de

    Twenty years of Qt!

    I first encountered Qt in Linux Journal in 1996. Back then, I wasn't much of a programmer: I had written a GPL'ed mail and Usenet client in Visual Basic and was playing around with Linux. I wanted to write a word processor, because that was really missing back then.

    I tried to use xforms, but that wasn't open source (it was binary-only, can you believe it?), and besides, it was horrible. Since I didn't particularly care about having a GUI, I tried to use curses, which was worse. I had taken a look at Motif years before, so I didn't look again. Sun's OPEN LOOK had a toolkit that looked nice, but wasn't. For work, I had used Delphi and MFC, and had had to make sense of Oracle 2000. None of those were useful for writing a word processor for Linux.

    And then, suddenly, out of the blue (though I remembered some of the names involved with Qt as being involved with my favorite ZX Spectrum emulator), appeared Qt. Qt made sense: the classes were helpfully named, the APIs were clear and sensible, the documentation was good, the look and feel worked fine. It had everything you'd need to write a real application. Awesome!

    So I got started and discovered that, in the first place, I didn't know what makes a word processor tick, and in the second place, I didn't know C++... So my project foundered, the way projects tend to do, if you're trying to do stuff all by your lonesome.

    Then I changed jobs, stopped working on a broken-by-design Visual Basic + Oracle laboratory automation system for Touw in Deventer, and started working with Tryllian, building Java-based virtual agent systems. Fun! I learned a lot at that job; it's basically where I was taught programming properly. I discovered Python, too, and loved it! Pity about that weird Tkinter toolkit! And I started using KDE as soon as I had a computer with more than 4 megabytes of RAM, and KDE used Qt.

    Qt still made sense to me, but I still didn't know C++, though it looked to me that Qt made C++ almost as easy as Java, maybe easier, because there were seriously dumb bits to Java.

    Then PyQt arrived. I cannot figure out anymore when that was: Wikipedia doesn't even tell me when it was first released! But I threw myself into it! Started writing my first tutorials in 1999 and followed up writing a whole book on PyQt. My main project back then was Kura, an alternative to SIL's Shoebox, a linguistic database application that later got handed over to the Ludwig-Maximilians University in Munich.

    I never could make sense out of Java's GUI toolkit: Swing didn't swing it for me. But that was work stuff, and from 2003, I worked on Krita. I started doing my own painting application in PyQt, about the time Martin Renold started MyPaint. I quickly decided that I couldn't do a painting application on my own, just like I couldn't do a word processor on my own. By that time I had taken a good hard look at GTK as well, and concluded that anyone who'd propose to a customer to base a project on GTK should be sued for malpractice. Qt just made so much more sense to me...

    So I found Krita, learned bits of C++, and since then there haven't been many days that I haven't written Qt-based code. And without Qt, I probably would have started a second-hand bookshop or something. Qt doesn't just let me code with pleasure and profit; it is what keeps me sane, coding!

    May 18, 2015

    An Open Source Portrait (Mairi)

    Processing a portrait session

    This is an article I had written long ago (originally published in 2013). The material is still quite relevant and the workflow hasn’t really changed, so I am republishing it here for posterity and those that may have missed it the first time around.

    The previous post for this article went over the shoot that led to this image.

    If you’d like to follow along with the image of Mairi, you can download the files from the links below.

    Download the .ORF RAW file [Google Drive]
    Download the full resolution .JPG output from RawTherapee.
    Download the Full Resolution .XCF file [.7zip - 265MB]
    If you want to use the .XCF file just to see what I did, I recommend the ½ resolution file, as it’s smaller: Download the ½ Resolution .XCF file [.7zip - 60MB]
    These files are being made available under a Creative Commons Attribution, Non-Commercial, Share Alike license (CC BY-NC-SA).

    To whet your appetite, here is the final result of all of the postprocessing done in this tutorial (click to compare it to no retouching):

    Mairi Final Result The final result I’m aiming for.
    Click to compare to original.

    Picking Your Image

    This is a hard thing to quantify, as each of us is driven by our own vision and style. In my case, I wanted something a little more somber looking with a focus on her eyes (they are the window to the soul, right?). There’s just something I like about big, bright eyes in a portrait, particularly in women.

    I also personally liked the grey sweater against the grey background as well. I felt that it put more focus on the colors of her skin, hair, and eyes. So that pretty much narrowed me down to this contact sheet:

    Mairi contact sheet Narrowing it down to this set.

    Looking over the shots, I decided I liked the images with the hood up, but her hair down and flowing around her. This puts me in the top two rows, with only a few left to decide upon. At this point I narrowed it down to one that I liked best - grey sweater, hood up but not pulled back against her head, hair flowing out of it, and big eyes.

    This is pretty common, I’d imagine. You can grab several frames, but in the end hopefully just the right amount of small details will come together and you’ll find something that you really like. In my case it was this one:

    Mairi Raw I finally decided on this shot based on the color, hair, eyes, and slight smile.

    Now hold on a minute. The image above is the JPG straight out of the camera. As you can see, I’ve underexposed this one a little bit, and the colors are not anywhere near where I’d like them to be. If you’re following along don’t download this version of the image. I’ll have a much better starting JPG after we run it through some RAW development first!

    If you’re impatient, jump to that section and get the image there.

    Raw Processing

    There are a few RAW conversion options out there in the land of F/OSS.

    One of the reasons I love using F/OSS is the availability (usually) of the software across my OSes. In my case I went with RawTherapee a while back and liked it, so I’ve stuck with it so far (even though I had to build my own OS X versions).

    So, my workflow includes RawTherapee at this point. You should be able to follow along in other converters, but I’m going to focus on RT because that’s what I’m using. If you shoot only in JPG (seriously, use RAW if you can), you can skip this section and head directly down to GIMP Retouching.

    Load it up

    After starting up RawTherapee, you’ll be in the File Browser interface, waiting for you to select a folder of images. You can navigate to your folder of images through the file browser on the left side of the window. It may take a bit while RawTherapee generates thumbnails of all the images in your directory.

    RawTherapee File Browser RawTherapee file browser view.
    (Navigate folders on the left pane)

    Once you’ve located your image, double clicking it in the main window will open it up for editing. If you’re using a default install/options on RT, chances are a “Default” profile will be applied to your image that has Auto Levels turned on.

    Mairi RawTherapee Default The base image with “Default” profile applied (auto levels).

    Chances are that Auto Levels will not look very good. My Default processing profile usually does not look so hot (no noise reduction, auto levels, etc.). That’s ok, because we are going to fix this right up in the next few sections.

    Adjust Exposure

    I like to control the exposure and processing on my RAW images. Auto Levels may work for some, but once you get used to some basic corrections and how to use them, it’s relatively quick and painless to dial in something you like.

    Again - much of what I’m going to describe is subjective, and will depend on personal taste and vision. This just happens to be how I work; adjust as needed for your own workflow. :)

    To give me a good starting point I will usually remove all adjustments to the image, and reset everything back to zero. This is easy to do as my Default profile has nothing done to it other than Auto Levels.

    RawTherapee Default Exposure Values Auto Levels values on the Exposure panel.

    A quick and easy way to reset the Exposure values on the Exposure panel is to use the Neutral button on that panel (I’ve outlined it in green above). You can also hit the small “undo” arrows next to each slider to set that slider back to zero as well.

    At this point the image exposure is set to a baseline we can begin working on. For reference, here is my image after zeroing out all of the exposure sliders and the saturation:

    Mairi RawTherapee Zero Values With all exposure adjustments (and saturation) set to zero.

    Exposure Compensation

    The first thing I’ll begin adjusting is the Exposure Compensation for the image. You want to be paying careful attention to the histogram for the image to know what your adjustments to Exposure Compensation are doing, and to keep from blowing things out.

    I personally begin pushing the Exposure Compensation until one of the RGB channels just begins butting up against the right side of the histogram. Here is what the histogram looks like for the neutral exposure:
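
    To make the idea concrete: exposure compensation is essentially a multiply in linear space, where +1 EV doubles every channel value. The sketch below is only an illustration of why pushing EV slides the histogram to the right until a channel hits the wall; RawTherapee's real pipeline works on linear raw data with highlight reconstruction and tone curves on top.

```python
def apply_exposure_compensation(value, ev, white=255):
    """Scale a channel value by 2**ev, clipping at the white point.

    Illustrative model only: +1 EV doubles a value, +2.3 EV multiplies
    it by roughly 4.9, and anything pushed past `white` is clipped
    (i.e. blown out).
    """
    return min(white, value * 2 ** ev)

# A mid-tone of 40 lands just shy of clipping at +2.3 EV:
midtone = apply_exposure_compensation(40, 2.3)

# A brighter value is pushed past the wall and clips to 255:
bright = apply_exposure_compensation(128, 2.3)
```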

    RawTherapee Neutral Histogram Neutral exposure histogram.

    After adjusting Exposure Compensation I get the Red channel snug up against the right side of the histogram:

    RawTherapee Histogram Exposure Compensation Exposure Compensation until the values just touch the right side.

    If you go a little too far, you’ll notice one of the channels will spike against the side, and if you really go too far, you’ll get a small colored box in the upper right corner indicating that channel has gone out of range (is blown out).

    So here is what my image looks like now with only the Exposure Compensation adjusted to a better range:

    Mairi RawTherapee Exposure Compensation Exposure Compensation adjusted to 2.40.

    The Exposure panel in RT now looks like this (only the Exposure Compensation has been adjusted):

    RawTherapee Exposure Compensation Panel Exposure Compensation set to 2.40 for this image.

    If the highlights in your image begin to get slightly out of range, you may need to make adjustments to the Highlight recovery amount/threshold, but in my case the image was slightly under-exposed, so I kept it zero.

    There is also a great visual method of seeing where your exposures for each channel are at, and to avoid hightlight/shadow clipping. Along the top of your main image window, to the right, there are some icons that look like this:

    RawTherapee Clipping Channels Channel previews, Highlight & Shadow clipping indicators

    The Channel previews let you individually toggle each of the R, G, B, and Luminosity previews for the image. You can use these with the Highlight and Shadow clipping indicators to see which channels are clipping and where.

    Highlight and Shadow clipping indicators will visually show you on your image where the values go beyond the threshold for each. For highlights, it’s any values that are greater than 253, and for shadows it’s any values that are lower than 8.
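
    Those thresholds can be stated directly in code. This is a hypothetical helper, not RawTherapee's implementation; it just restates the rule above (greater than 253 counts as a blown highlight, lower than 8 as a crushed shadow) for a single pixel:

```python
HIGHLIGHT_CLIP = 253  # values above this are flagged as blown highlights
SHADOW_CLIP = 8       # values below this are flagged as crushed shadows

def clipped_channels(r, g, b):
    """Return which channels of one RGB pixel the indicators would flag."""
    flags = {}
    for name, value in (("R", r), ("G", g), ("B", b)):
        if value > HIGHLIGHT_CLIP:
            flags[name] = "highlight"
        elif value < SHADOW_CLIP:
            flags[name] = "shadow"
    return flags
```

    The real indicators simply run this test over every pixel and paint the flagged regions onto the preview.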

    To illustrate, here is what my image looks like in RT with the Exposure Compensation set to 2.40 from above:

    Mairi RawTherapee Clipping Channels With Highlight & Shadow clipping turned on.

    I don’t mind the shadows clipping in the dark regions of the image, though I can make adjustments to the Black Point (below) to modify that. The highlight clipping on her face is of more concern to me. I certainly don’t want that!

    At this point I can dial in my Exposure Compensation for the highlights by backing it down slightly. As I ease off it I should be seeing the dark patch for Highlight Clipping growing smaller. I’ll stop when it’s either all gone, or just about all gone.

    I wasn’t too far off in my initial adjustment, and only had to back the Exposure Compensation off to 2.30 to remove most of the highlight clipping.

    Settings so far (everything else zero)…

    Exposure Compensation 2.30

    Black Point

    At this point I will usually zoom a bit into a shadow area of my image that might include dark/black tones. The blacks feel a little flat to me, and I’m going to increase the black level just a bit to darken them up.

    I want to be zoomed in a bit so I can determine the point at which the black point crushes any details that I still want visible. You want your blacks to be dark, but you also want to keep details in the shadows if possible (where exactly this point falls is really, really subjective, but I’ll err on the conservative side since I am still going to process colors a little bit in GIMP later).
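
    Numerically, raising the black point maps everything at or below it to pure black and stretches what remains. Below is a simplified 0-255 model of that behavior; note that RawTherapee's Black slider uses its own internal scale, so the value 150 used in this article does not map one-to-one onto this sketch.

```python
def apply_black_point(value, black, white=255):
    """Crush values at or below `black` to 0 and rescale the rest.

    Simplified model of a black-point control: detail in tones below
    `black` is destroyed, which is why you zoom into the shadows and
    watch them while pushing the slider.
    """
    if value <= black:
        return 0
    return (value - black) * white / (white - black)
```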

    Starting with a Black point of zero:

    Mairi Detail Black 0

    I will increase the Black point while keeping an eye on those shadow details, increasing it until I like how the blacks look and I haven’t destroyed detail in the dark tones. I finally settled on a Black value of 150 as seen here:

    Mairi Detail Black 150 Black value set at 150 (still keeping sweater details in the shadows).
    Click to compare to previous.

    Watch out for Shadow Recovery when you first start adjusting the Black Point. Its default might be a value other than zero (mine is at 50), and the Neutral button won’t set it back to zero (resetting it will give it back its default value of 50). You may want to push it manually to zero, and if you feel you want to bump shadow details a bit, then start pushing it up.

    I know things look noisy at the moment, but we’ll deal with that in the next section (there is no noise reduction being applied at this point).

    Settings so far (everything else zero)…

    Exposure Compensation 2.30
    Black 150

    Brightness, Contrast, and Saturation

    For this image I didn’t feel the need to modify these values, but this is purely subjective (again). If you do modify these values, keep an eye on the histogram and what it’s doing to keep things from getting out of range/whack again.

    White Balance

    Hopefully you had the right White Balance set during your shoot in camera. If not, it’s ok - we’re shooting in RAW so we can just set it as needed now.

    I happen to have had my in-camera WB set to Flash, so the embedded WB settings in my RAW file metadata are pretty close. In my shot, however, you’ll notice that there is a bit of a white window visible in the left of the frame. I happen to know that the window is quite white, and should be rendered as such in my image.

    As a side note, what I really should have done was to get myself a good reference for setting the white balance, and to shoot it as part of my setup. Something like the X-Rite MSCCC ColorChecker Classic, or even a WhiBal G7 Certified Neutral White Balance Card. These are a little pricey, but any good 18% grey card will do, really. I just happen to know that my window borders are a pure white, so I’m cheating a bit here…

    So here is what our image looks like at the moment:

    Mairi White Balance Camera Image so far, with White Balance set to Camera (Default).

    The White Balance for your image can be adjusted from the Color panel:

    RawTherapee Default Color Default Color panel showing Camera white balance.

    You can try out some of the presets in the Method drop-down - there are the typical settings there for Sunny, Shade, Flashes, etc… In my case I am going to use the Spot WB option. Clicking that button will let me pick a section of my image that should be color neutral.

    In my case, I know that the window border should be white (and color neutral), so I will pick from that area on my image. Doing so will shift my WB, and will produce a result that looks like this:
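
    In essence, Spot WB samples a patch that should be neutral and computes per-channel gains that force its red and blue to match green (RawTherapee then expresses the result as temperature and tint rather than raw gains). A rough sketch of that idea, with illustrative pixel values:

```python
def spot_wb_gains(neutral_pixel):
    """Gains that make a known-neutral (grey/white) pixel truly neutral."""
    r, g, b = neutral_pixel
    return (g / r, 1.0, g / b)

def apply_gains(pixel, gains):
    """Apply the white-balance gains to any pixel, clipping at 255."""
    return tuple(min(255.0, channel * gain) for channel, gain in zip(pixel, gains))

# Sample the "white" window border (slightly warm in this example),
# then balance the whole image with the resulting gains:
gains = spot_wb_gains((220, 210, 200))
```

    The same gains are then applied to every pixel in the frame, which is why one good neutral patch fixes the cast everywhere.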

    Mairi Camera White Balance WB based on white window border.
    Click to compare Camera based

    I also happen to know that the grey colored walls in the background are close to neutral, but with the slightest hint of blue in them. If I used the grey wall instead of the white window, I would introduce the slightest warm cast to the image. I tried it (choosing a section of the grey wall on the right side of the background), and actually prefer the slightly warmer color, personally:

    Mairi White Balance Wall WB based on the grey wall background (right side of image).
    Click to compare to window WB.

    The difference is ever so slight, but it is there. In my original final image, I went with the balance pulled from the wall, so I will continue with that version here. If you’re curious, here is what my WB values look like:

    RawTherapee Spot White Balance Window After setting Spot WB to the window.

    Seriously, though, don’t rely on luck. Get a grey/color card to correct color casts if you can…

    Settings so far (everything else zero)…

    Exposure Compensation 2.30
    Black 150
    WB Temperature 7300
    WB Tint 0.545

    Noise Reduction & Sharpening

    Chances are the RAW image is going to look pretty noisy zoomed in a bit. This isn’t unusual since we are dealing with RAW data. There are two noise reduction (NR) options in RT, and we are going to want to use both.

    Impulse Noise Reduction

    This NR will remove pixels that have a high impulse deviation from surrounding pixels. Basically the “salt and pepper” noise you may notice in your images where individual pixels are oddly brighter/darker than the surrounding pixels.
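
    A common way to implement this kind of filter is to compare each sample against the median of its neighbourhood and replace it only when it deviates too far. The 1-D sketch below illustrates the principle; RawTherapee's actual implementation works in 2-D, and its 0-100 slider controls sensitivity rather than a literal pixel threshold.

```python
def impulse_filter(samples, threshold):
    """Replace "salt and pepper" outliers with the local median.

    A sample is only touched when it deviates from the median of its
    3-sample neighbourhood by more than `threshold`, so genuine edges
    and detail survive.
    """
    out = list(samples)
    for i in range(1, len(samples) - 1):
        neighbourhood = sorted(samples[i - 1:i + 2])
        median = neighbourhood[1]
        if abs(samples[i] - median) > threshold:
            out[i] = median
    return out

# The lone 250 spike is an impulse and gets removed,
# while a genuine step edge is left untouched:
denoised = impulse_filter([100, 101, 250, 99, 100], threshold=50)
edge_kept = impulse_filter([0, 0, 200, 200, 200], threshold=50)
```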

    If I zoom into a portion of my image (not far from where I was looking at shadows for setting a black point), I’ll see this:

    Noise Reduction Crop None Closeup crop with no Impulse Noise Reduction.

    I’ll normally play a bit with the Impulse NR to alleviate the specks while still retaining details. As with most NR methods - going a bit too far will obliterate some details with the noise. The trick is to find a happy medium between the two. In my case, I settled on a value of 55 (the default is 50):

    Impulse Noise Reduction 55 Impulse NR set to a value of 55.
    Click to compare to no NR.

    I could have gone a bit further (and have in others from this series), and pushed it up to the 60-70 range, but it’s a matter of taste and weighing the tradeoffs.

    Luminance/Chrominance Noise Reduction

    These two NR methods will suppress noise in the luminance channel (brightness), and the blue/red chrominances.

    I will use a light hand with these NR values. The defaults are 5 for each, and it should make a noticeable difference just with the default values. If you push the Luminance NR too far, you’ll smear fine details right off your image. If you push the Chrominance NR too far, you’ll suck the life out of the colors in your image.
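
    The reason the two controls behave so differently is the luma/chroma split itself. The sketch below uses standard BT.601 weights to make that separation concrete; RawTherapee's own pipeline does not literally use this conversion, but the point stands: smoothing Y erodes detail, while smoothing Cb/Cr erodes colour.

```python
def rgb_to_ycbcr(r, g, b):
    """Split an RGB pixel into luminance (Y) and chrominance (Cb, Cr).

    Full-range ITU-R BT.601 weights.  Luminance NR would smooth Y;
    chrominance NR smooths Cb and Cr, which the eye is less sensitive
    to, so chroma can usually take more smoothing before it shows.
    """
    y = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 128 + (b - y) * 0.564
    cr = 128 + (r - y) * 0.713
    return y, cb, cr
```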

    Not surprisingly, it’s another trade off. In my case, I pushed the L/C NR just a tiny bit past the default to 6 and 6 respectively.

    You’ll be able to see the effect of chrominance NR by looking at the flat colored grey wall in the background. Just don’t forget to check other areas of your image with the settings you choose. For me it was a close look at her iris, where pushing the chrominance NR too far lost some of the beautiful colors in her eye.

    Compare the same crop from above with and without Luminance/Chrominance noise reduction applied:

    Noise Reduction Luminance Chrominance 6 6 With Luminance & Chrominance NR set to 6.
    Click to compare without.

    If you’ve read my previous article on B&W conversion, you’ll know that I don’t mind a little noise/grain in my images at all, so this level doesn’t bother me in the least. I could chase the noise even further if I really wanted to, but always remember that doing so is going to be at the expense of detail/color in your final result. As with most things in life, moderation is key!


    If you are going to sharpen your image a bit, this is probably the best time to do so. The problem is that sharpening is usually the last bit of post-processing you should do to your image, due to its destructive nature. Plus, lately I’ve grown accustomed to sharpening by using an extra wavelet scale during my skin retouching in GIMP (you’ll see below in a bit).

    So, I’ll avoid sharpening at this stage. If I was going to use it here at all, it would be just very, very light. Also, if you do any sharpening at this stage, try to make sure that it happens after any noise reduction in the pipeline.

    Settings so far (everything else zero)…

    Exposure Compensation 2.30
    Black 150
    WB Temperature 7300
    WB Tint 0.545
    Impulse NR 55
    Luminance NR 6
    Chrominance NR 6

    Lens Correction

    This is actually a section that deserves its own post, detailing methods for correcting lens barrel distortion with Hugin. RawTherapee actually has an “Automatic Distortion Correction” that will correct pincushion and barrel distortion in your images.

    In my case, I was shooting at the long end of the lens at 50mm, and the distortion is minimal. So I didn’t bother with correcting this (it might have been needed at a shorter focal length, and being closer to the subject, though).

    In Summary

    That about wraps up the RAW “development” I’m going to do on this image. I try to keep things minimal where possible, though I could have gone further and made color tone and Lab adjustments here as well. In fact, with the exception of Wavelet Decompose for skin retouching, and some other masking/painting operations, I could do most of what I want for this portrait entirely in RawTherapee.

    I know that this reads really long, but the truth is that once I am accustomed to a workflow, this takes less than 5 minutes from start to finish (faster if I’ve already fiddled with other images from the same set). All I really modified here was Exposure, White Balance, and Noise Reduction.

    Finally, as I hinted at earlier, here is the final version after doing all of these RAW edits, as we get ready to bring the image into GIMP for further processing:

    Mairi Final Version from RawTherapee This is the one to download if you want to follow along in GIMP below.
    Just click the image to open in a new window, then save it from there.

    GIMP Retouching

    Well, here we are. Finally. It’s the home stretch now, so don’t give up just yet!

    If you didn’t follow along with the RAW processing earlier, you can download the full resolution JPG output from RawTherapee by clicking here:

    Download the full resolution JPG output from RawTherapee

    Armed with our final results from RawTherapee, we’re now ready to do a little retouching to the image.

    The overall workflow, and the order in which I approach the steps, depends mostly on my mood. Most times, I enjoy doing skin retouching, so I’ll often jump right in with Wavelet Decompose and play around. Really, though, I should start shifting Wavelet Decompose to a later part of my workflow, and fix other things like removing objects from the background and fixing flyaway hairs first.

    This way, I can directly re-use wavelet scales for a slight wavelet sharpening while I have them.

    Looking at this image so far, I can spot a few broad things that I want to correct, and I’m going to address them in this order:

    1. Touchup flyaway hairs
    2. Crop & remove distracting background elements
    3. Skin retouching with Wavelet Decompose
    4. Contour paint highlights
    5. Apply some color curves

    Touchup Flyaway Hairs

    If you can have the model bring a hairbrush with them to a shoot - DO IT. Seriously. Your eyes and carpal tunnel will thank me later.

    Even with a brush or a hairstylist/make-up artist, the occasional hair will decide to rebel and do its own thing. This will require us to get down to the details and fix those hairs up.

    Luckily for me, Mairi’s hair mostly cooperated with us during the shoot (and where it didn’t I kind of liked it). To illustrate this step, though, I’m going to clean up some of the stray hairs on the left side of the image (the right side of her face).

    Luckily, the background is a consistent color/texture. This means cloning out these hairs shouldn’t be too much of a problem, but there are still some things you should keep in mind while doing this.

    Here is the area that I’d like to clean up a little bit:

    Mairi Hair Left Original Sometimes you just have to work one strand of hair at a time…
    GIMP Clone Tool Hair

    I will usually use a hard-edged brush because a soft-edge will smear details on its edges, and can often be spotted pretty easily by the eye. This works because the background is relatively constant in grain and color.

    I’ll sample from an area near the hair I want to remove, and set the brush to be “Aligned”. I also try to keep the brush size as small as I can and still remove the hair.
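
    "Aligned" here means the source point follows the brush at the fixed offset established on the first click, instead of snapping back to the original sample point on each new stroke. A sketch of that behaviour (not GIMP's actual code, just an illustration of the geometry):

```python
def aligned_source(first_sample, first_target, target):
    """Where an aligned clone brush reads from for a given brush position.

    The offset between the first sampled point and the first painted
    point is fixed; every later dab reads from the brush position plus
    that same offset, keeping the background grain coherent.
    """
    dx = first_sample[0] - first_target[0]
    dy = first_sample[1] - first_target[1]
    return (target[0] + dx, target[1] + dy)

# Sampled at (10, 10), first painted at (50, 50); painting at (60, 55)
# then reads from (20, 15).
```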

    The thing to keep in mind is how the hair is actually flowing, and to follow that. I will often follow outlying strands of hair back to where they start from the head, and begin cloning them out from there.

    I also try not to get too ambitious (some stray hairs are sometimes fine). Removing too many at once can lead to unrealistic results, so I try to be conservative, and to constantly zoom out and check my work visually.

    Try not to leave hairs prematurely cut off in space if possible, it tends to look a bit distracting. If you want to remove a hair that crosses over another strand that you may want to keep, make sure to adjust the source of the clone brush so you can do it without leaving a gap in the leftover strand.

    Here is a quick 5 minute touchup of some of the stray hairs (click to compare to the original):

    GIMP Hair Clean Clone Click to compare.

    Occasionally, you’ll need to fix hairs that are crossing over other hair (sort of like a virtual “brushing” of the hair). In these cases, you really have to pay careful attention to how the hair flows and to use that as a guide when choosing a sample point with either the clone or heal brush.

    If this sounds like a lot of work - it is. Thankfully, once you’ve become accustomed to doing it, and doing it well, you’ll find yourself picking up a lot of speed. It’s one of those things that’s worth learning to do right, and to let practice speed it up for you.

    I actually like the cascading hair around her face opening up to a pretty color, so that’s about as far as I’m going to go with stray hairs on this image.

    Fixing the Background & Cropping

    With the limited space I had to shoot this portrait, it’s no surprise that I had gotten some undesirable background elements, like the window edges.

    There are a couple of ways I could go about fixing these: I could fix the background in place, or I could crop out the elements I don’t want.

    In my final version shown in the previous post, I wanted to crop tighter, so it worked out well to remove the window on the left. To illustrate how we can remove the window, I’m going to leave the aspect ratio as it is, and walk through removing the distracting background elements.

    Removing Background Elements

    Because most of the background is already a (relatively) solid color, this isn’t too hard. There are just a couple of simple things to keep in mind.

    The way I’m going to approach this is to make a duplicate of my current layer, and to move the duplicate into place such that the background will cover up parts of the window I want to remove. Then I’ll mask the duplicate layer to hide the window.

    I start by choosing an area of the background that’s similar in color/tone:

    GIMP Mairi Background Fix Start Thankfully the background is relatively consistent.

    I’ll then move the duplicate layer so that the green area covers up the window to the left:

    GIMP Mairi Background Fix End Position the duplicate layer so the green area now covers up the window.

    Here is what this looks like in GIMP, with the duplicate layer set to 90% opacity over the base layer (so you can see where the window edge is):

    GIMP Mairi Background Shift Moving the duplicate layer over to cover the window.

    Now I’ll add a black (fully transparent) layer mask to the duplicate layer, and paint white on the mask with a soft-edged brush to cover up the window edge. This gives me results that look like this:

    Mairi GIMP background shift masked After applying a transparent mask, and painting white over the window edge.

    The problem is that the background area from the duplicate is a bit darker than the base layer background, and the seam is visible where they are masked. To fix this, I can just adjust the lightness of the duplicate layer until I get a good match.

    I used Hue-Saturation to adjust the lightness (because I wasn’t sure if I would need to adjust the hue slightly as well - turns out I didn’t). I found that increasing the Lightness value to 3 got me reasonably close:

    GIMP Mairi Background lightened After increasing duplicate layer Lightness to 3.

    To fix the lower part of the window, I repeated all the steps above with another duplicate of the base layer, this time shifted to cover the lower part of the window. I had to mask along her sweater. Here is the result:

    GIMP Mairi background masked finished After repeating above steps for the lower left corner.

    The results are ok, but could be just a little bit better. Visually, the falloff of light on the background doesn’t match what’s happening on her body, so I added a small gradient to the lower left corner to give it a more natural looking light falloff:

    GIMP Mairi background masked gradient Adding a gradient to the lower left background helps it look more natural.

    Fixing the slight window/shadow on the right is easily done with a clone/heal tool combination. The final result of quickly cleaning up the background is this:

    GIMP Mairi background final fix Finished cleaning up the background.

    I could have spent a little more time on this, but I’m happy with the results for the purposes of this post. If your cloning efforts leave obvious transitions between tones, the Heal tool can help alleviate this (especially when used with large brush radii - just be prepared to wait a bit).

    With the background squared away, we can move on to one of my favorite things to play with, skin retouching!

    Skin Retouching with Wavelet Decompose

    I had previously written about using Wavelet Decompose as a means for touching up skin. As I said in that post, and will repeat here:

    The best way to utilize this tool is with a light touch.

    Re-read that sentence and keep it in mind as we move forward.

    Don’t make mannequins.

    Ok, with a layer that contains all of the changes we’ve made so far rolled up, we can now decompose the image to wavelet scales. In my case I almost always use the default of 5 scales unless there’s a good reason to increase/decrease that number.

    For anyone new to this method, the basic idea of Wavelet Decompose is that it breaks your image down into multiple layers, each containing a specific set of details based on their relative size, plus a residual layer with the color/tonal information. For instance, Wavelet scale 1 will contain only the finest details in your image, while each successive scale will contain larger and larger details.

    The benefit to us is that these details are isolated on each layer, meaning we can modify details on one layer without affecting other details from other layers (or adjust the colors/tones on the residual layer without modifying the details).
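    Conceptually, this kind of decomposition can be sketched as repeated blurring: each scale is the difference between two successively blurred copies, and the residual is the final blur. Here is a minimal 1D sketch in plain Python (a toy stand-in for the plugin, using a simple box blur instead of the plugin's actual kernels) showing that the scales plus the residual always sum back to the original:

```python
def box_blur(signal, radius):
    """Simple box blur with edge clamping (a stand-in for a Gaussian)."""
    n = len(signal)
    out = []
    for i in range(n):
        window = [signal[min(max(j, 0), n - 1)]
                  for j in range(i - radius, i + radius + 1)]
        out.append(sum(window) / len(window))
    return out

def wavelet_decompose(signal, num_scales=5):
    """Split a signal into detail scales (finest first) plus a residual."""
    scales = []
    current = signal
    for s in range(num_scales):
        blurred = box_blur(current, 2 ** s)   # wider blur at each scale
        scales.append([c - b for c, b in zip(current, blurred)])
        current = blurred
    return scales, current                    # residual = the last blur

signal = [10.0, 12.0, 50.0, 13.0, 11.0, 10.5, 30.0, 12.0]
scales, residual = wavelet_decompose(signal)

# Summing every detail scale back onto the residual reconstructs the
# original - which is why editing one scale leaves the others untouched.
reconstructed = residual[:]
for scale in scales:
    reconstructed = [r + d for r, d in zip(reconstructed, scale)]
print(all(abs(a - b) < 1e-9 for a, b in zip(signal, reconstructed)))  # -> True
```

    The same telescoping structure holds in 2D; each layer you see in GIMP is one of these difference images.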

    Here is an example of the resulting layers we get when running Wavelet Decompose:

    GIMP Wavelet Separation Example Wavelet scales from 1 (finest) to the Residual

    After running Wavelet Decompose, we’ll find ourselves with 6 new layers: Residual + 5 Wavelet scales. I am going to start on Wavelet scale 5.

    If you hold down Shift and click on a layer visibility icon, you’ll isolate just that single layer as visible. Do this now to Wavelet scale 5, and let’s have a look at what we’re dealing with.

    I usually work on skin retouching in sections. Usually I’ll consider the forehead, nose, cheeks to smile lines, chin, and upper lip all as separate sections (trying to follow normal facial contours). Something like this:

    GIMP Wavelet Decompose Region Breakdown Rough breakdown of each area I’ll work on separately

    I’m going to start with the forehead. I’ll work with detail scales first, and follow up with touchups on the residual scale if needed to even out color tones. Here is what Wavelet scale 5 looks like isolated:

    GIMP Wavelet Scale 5 forehead Forehead, Wavelet scale 5

    It may not seem obvious, especially if you don’t use wavelet scales much, but there are a lot of large-scale tonal imperfections here. Look at the same image, but with the levels normalized:

    GIMP Wavelet Scale 5 forehead These are the tones we want to smooth out

    Normalizing the wavelet scale lets you see the tones that we want to smooth out.

    My normal workflow is to have all of the wavelet scales and residual visible (each of the wavelet scales has a layer blending mode of Grain Merge). This way I’m visually seeing the overall image results. Then I will select each wavelet scale as I work on it.

    I’ll normally use the Free Select Tool to select the forehead, usually with the Feather edges option turned on and a large radius (roughly 1% of the smallest image dimension - so ~35 pixels here). Remember to select the layer you want to work on.

    With my area selected, I’ll often run a Gaussian Blur (IIR) over the skin to smooth out those imperfections. The radius you use is dependent on how strong you want to smooth the tones out. Too much, and you’ll obliterate the details on that scale, so start small.

    Here is my selection I’ll work with (remember - my active layer is Wavelet scale 5):

    GIMP Wavelet Scale selection Forehead with selection (feather turned on to 35px)

    Now I’ll experiment with different Gaussian Blur radii to get a feel for how each will affect the entire image. I settled on a high-ish value of 35px radius, which gave me this result (click to compare to original):

    GIMP Wavelet Scale selection Forehead, Wavelet scale 5 after Gaussian Blur (IIR) 35px radius.
    Click to compare.

    Just with this small change to a single wavelet scale, we can already see a remarkable improvement to the underlying skin tones, and we haven’t hurt any of the fine details in the skin!

    In some cases, this may be all that is required for a particular area of skin. I could push things just a tiny bit further if I wanted by working globally again on a finer wavelet scale, but I’ve learned the hard way to back off early if possible.

    Instead, I’ll look at specific areas of the skin that I may want to touch up. For instance, the two frown lines in the center of the forehead. I may not want to remove them completely, but I may want to downplay how visible they are. Wavelet scales are perfect for this.

    GIMP Wavelet Scale selection Small frown lines I want to reduce

    Because each of the wavelet scales is set to a layer blend mode of Grain Merge, any area that is a flat medium grey will not affect the final image. That means you can paint with medium grey, RGB(128,128,128), to completely remove a detail from a layer.
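    The Grain Merge arithmetic makes the grey-painting trick easy to verify: the mode computes base + layer − 128 (clamped to the channel range), so a layer value of exactly 128 contributes nothing. A tiny sketch:

```python
def grain_merge(base, layer):
    """Grain Merge blend for 8-bit channels: base + layer - 128, clamped."""
    return max(0, min(255, base + layer - 128))

# A pixel where the wavelet scale carries some detail (+12):
print(grain_merge(100, 140))  # -> 112

# Painting the scale with medium grey removes the detail entirely:
print(grain_merge(100, 128))  # -> 100, the base is unchanged
```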

    You can also use the Blur/Sharpen brush to selectively blur an area of the image. (I’ve found that the Blur tool works best at smaller wavelet scales - it doesn’t appear to make a big difference on larger scales.)

    So, if we look at Wavelet scale 5 where the frown lines are, we’ll see there’s not much there - it was already smoothed earlier. If we look at Wavelet scale 4 though, we’ll see them prominently.

    I’ll use the Heal Tool to sample from the same wavelet scale in a different location, and paint over just the frown lines. I’ll work on Wavelet scale 4 first. If needed, I can also move down to Wavelet scale 3 and repeat the same procedure there.

    A couple of quick passes just over the frown lines, and the results look like this:

    GIMP Wavelet Scale selection Cloning over frown line on scale 4 & 3.
    Click to compare.

    I could continue over any other blemishes I want to correct, but small individual blemishes can usually be fixed quickly with a little spot healing.

    Moving on to the nose, the tones have different requirements. Overall, the tones on Wavelet scale 5 are similar to the forehead. In this case, a similar amount of blurring as the forehead on scale 5 will nicely smooth out the tones. Here is the nose after a slight blurring (click to see original):

    GIMP mairi wavelet decompose nose Nose with 35px Gaussian blur on Wavelet scale 5.
    Click to compare.

    There is a bit of color in the nose that is slightly uneven that I’d like to fix. This is relatively easy to do with wavelet scales, because I can modify the underlying color tones of the nose without destroying the details on the other scale layers.

    In this case, I’ll work on the Wavelet residual layer.

    I’ll use a Heal Tool with a large, soft brush. I’ll sample from about the middle of the nose, and clean up the slightly redder skin by healing new tones into that area. I’ll follow the contours of the nose and the way that the light is hitting it in order to match the underlying tones to what is already there.

    After a little work these are the results (click to compare to original):

    GIMP Wavelet Scale selection nose Healing on the Wavelet residual scale to even tones.
    Click to compare.

    Next I’ll take a look at the eyes and cheek on the brighter side of her face.

    GIMP Mairi wavelet decompose cheek original Overall tones are good here, just some slight retouching required

    The tones here are not bad, particularly on scale 5. After making my selection, I’ve applied a blur at 25px just to smooth things a bit.

    GIMP Mairi wavelet decompose cheek A slight 25px blur to smooth overall tones.
    Click to compare.

    The dark tones under/around the eyes are a bit different to deal with. As before, I’ll work on the Wavelet residual layer to brighten up the color tones under the eyes.

    I use the Heal Tool to sample from a brighter area of skin near the eye. Then I’ll carefully paint into the dark tones to brighten them up, and to even the colors out with the surrounding skin.

    GIMP Mairi wavelet residual eyes Carefully cloning/healing brighter skin tones under the eyes.
    Click to compare to original.

    Wavelets are amazing for this type of adjustment, because I can brighten up/change the skin tones under the eyes without affecting the fine skin details like small wrinkles and pores. The textural character remains unchanged, but the underlying skin tones can be modified easily.

    I did the same for the slightly red tones on the cheek and at the edge of her jaw.

    I’m purposefully not going to modify the fine wrinkles under the eyes, either. These small imperfections will often bring great character to a face, and unless they are very distracting or bad, I find it best to leave them be.

    A good tip is that even though these small imperfections may seem large when you’re pixel peeping, get into the habit of zooming out to a sane zoom level and evaluate the image then. Sometimes you’ll find you’ve gone too far, and things begin to creep into mannequin territory.

    Don’t make mannequins!

    In Summary Again

    This entire post is getting a little long, so I’m going to stop here with the skin retouching breakdown.

    That’s honestly about it as far as the process goes: just apply the same steps described above to the areas that are left (right cheek, chin, and upper lip).

    To summarize, here are the tools/steps I’ll use with Wavelet Decompose to retouch skin:

    • Area selection with Gaussian blur to even out overall tones at a particular scale
    • Paint with grey, Clone, Heal on wavelet scales to modify specific details
    • Clone/Heal on wavelet residual scale to modify underlying skin tones/colors (but leave details intact)

    Here are the final results after using only Wavelet Decompose (click to compare to original):

    Mairi GIMP Wavelet face final retouching After retouching in Wavelet Scales only.
    Click to compare to original.

    Spot Touchups

    There may be a few things that still need a little spot touchup that I didn’t bother to mess with in Wavelet scales.

    In my case, I’ll clone/heal out some small hairs along the jaw line, and touch up some small spots of skin individually. This is really just a light cleaning, and I usually do this at the pixel level (obnoxiously zoomed in, and small brush sizes).

    I also use a method for checking the skin for areas that I may want to touchup, but might not be immediately visible or noticeable. It uses the fact that the Blue channel of an image can show you just how scary skin can look (seriously, color decompose any image of skin, and look at the blue channel).

    Contour Painting Highlights

    One of the downsides of using Wavelet scales for modifying skin is that if you’re blurring on some of the scales, you’ll sometimes decrease the local contrast in your image. This isn’t so bad, but you may want to bring back some of the contrast in areas you’ve touched up.

    What I’m going to do is basically add some transparent layers over my image, and set their layer blend modes to “Overlay”.

    Then I’ll paint white over contours I want to enhance, and adjust the opacity of the layer to taste. (This is highly subjective, so I’m going to just show a quick idea of how I might approach it - you can get as nuts with this as you like…).
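    Numerically, the conventional Overlay formula multiplies tones below the base's midpoint and screens tones above it, so painting white pushes midtones up strongly (note: GIMP's legacy Overlay mode historically behaved more like Soft Light, but the intent of painting white - brightening - is the same). A small sketch of the textbook math:

```python
def overlay(base, blend):
    """Textbook Overlay blend for 8-bit channels (0-255)."""
    if base < 128:
        return base * blend * 2 // 255
    return 255 - (255 - base) * (255 - blend) * 2 // 255

print(overlay(100, 128))  # -> 100: mid-grey is (nearly) neutral
print(overlay(100, 255))  # -> 200: white strongly brightens a midtone
print(overlay(180, 255))  # -> 255: at full opacity white is very strong
```

    The last line is exactly why the layer opacity gets dialed way down afterwards.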

    Here I’ve added a new transparent layer on top of my image, and set the Layer Blend Mode to Overlay. Then I painted white onto contours that I want to highlight:

    Mairi GIMP Contour dodge burn highlight Painting on the Overlay layer along contours to highlight

    It looks strange right now, but I’ll add a large radius Gaussian Blur to smooth these tones out. I used a blur radius of 111 pixels. Here is what it looks like after the blur:

    Mairi GIMP Contour dodge burn highlight gaussian blur Blurring the Overlay layer with Gaussian Blur (111 pixel radius)

    Finally, I’ll adjust the opacity of the Overlay layer to taste. I’ll usually dial this way, way down so that it’s not so obvious. Here, I’ve dialed the opacity back to about 20%, which leaves us with this (click to compare):

    Mairi GIMP Contour dodge burn highlight final One After setting the Overlay layer to 20% opacity (still a little high for me, but it’s good for illustration).
    Click to compare.

    I will sometimes add a few more of these layers to enhance other parts of the image as well. I’ll use it (very lightly!!!) to enhance the eyes a bit, and in this case, I used an even larger layer to add some volume and highlights to her hair as well.

    Here are the results after adding some eye and hair highlight layers (click to compare to the version without highlights):

    mairi gimp contour dodge burn final Face, eyes, and hair contour painting result.
    Click to compare.

    Color Curves

    Finally, I like to apply some color curves that I keep around and use often. I’ve been heavily favoring a Portra emulation curve from Petteri Sulonen that he calls Portra-esque, especially for skin. It has a very pretty rolloff in the highlights that renders colors beautifully.

    If I feel it’s too much, I can always apply it on a duplicate of my image so far, and adjust opacity to suit. Here is the same image with only the Portra-esque curve applied:

    mairi gimp color tone curve portra Image so far, with a Portra-esque color curve applied.
    Click to compare.

    If you’re curious, I had written up a much more in-depth look at color curves for skin here: Getting Around in GIMP - More Color Curves (Skin). You can actually download the curves for Portra, Velvia, Provia emulation on that page.

    Final Sharpening

    Finally. The last step before saving out our image!

    For sharpening, I actually like to use one of the Wavelet scales that I generated earlier. I’ll just duplicate a low scale, like 2 or 3, and drag it on top of my layer stack to sharpen the details from that scale.

    In this case, I liked the details from Wavelet scale 2, so I duplicated that layer, and dragged it on top of my layer stack. The blend mode is already set to Grain Merge, so I don’t have to do anything else:

    mairi gimp sharpen wavelet scale Wavelet scale 2 copied to the top of the layer stack for sharpening.
    Click to compare.

    Finally at the End

    If you’re still with me - you really deserve a medal. I’m sorry this has run as long as it has, but I wanted to try to be as complete as I could.

    So, for a final comparison, here is the image we finished with (click to compare to what we started with before retouching in GIMP):

    mairi gimp final sharpen wavelet Our final result.
    Click to compare.

    Not too bad for a little bit of fiddling, I think! I know that this tutorial reads really, really long, but I promise that once you’ve understood the processes being used, it’s actually very quick in practice.

    I hope that this has been helpful to you in some way! If you happen to use anything from this tutorial please share it. I’d love to see what others do with these techniques.

    Software and Noise


    Wonderful response from everyone

    I want to take a moment to thank everyone for all of the kind words and support over the past week. A positive response can be a great motivator to help keep the momentum rolling (and everyone really has been super positive)!


    The Software page is live with a decent start at a list.

    I posted an announcement of the site launch over on reddit and one of the comments (from /u/cb900crdr) was that it might be helpful to have a list of links to programs. I had originally planned on having a page to list the various projects but removed it just before launch (until I could find some time to gather all the links).

    This was as good a reason as any to take a shot at putting a page together. I brought the topic up on the forums to get input from everyone as well. If you see that I’ve missed anything, please consider adding it to the list on the forum.

    I think it may be helpful to add at least a sentence or two description to identify what each project does for those not familiar with them. For instance, if you didn’t know what Hugin was before, the name by itself is not very helpful (or GIMP, or G’MIC, etc…). The problem is how to do it without cluttering up the page too much.


    I had also mentioned in this post on the forums a neat method for basically replacing the shadow tones in one image with those from a second, overexposed image. The approach is similar in theory to tonemapping an HDR image, and was originally described by Guillermo Luijk (back in 2007).

    The process basically exploits the fact that digital sensors have a linear response (also the basis for the ETTR advice - “Expose to the Right”). His suggested workflow is to shoot a second exposure of the scene at +4EV, adjust the exposure of that second image back down by -4EV, and then replace the shadow tones in the base image with the adjusted (noise-reduced) ones.
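    To make the arithmetic concrete, here is a toy sketch (the sensor values below are hypothetical, invented for illustration, and assume a perfectly linear sensor): a +4EV frame records 2⁴ = 16× the light, so dividing it back down by 16 matches the base exposure, while its shadow values were captured well above the noise floor:

```python
EV_SHIFT = 4
GAIN = 2 ** EV_SHIFT  # +4EV means 2**4 = 16x the light on a linear sensor

# Hypothetical linear sensor values for one row of the scene (0.0-1.0):
base = [0.004, 0.010, 0.300, 0.850]        # shadows sit near the noise floor
over = [min(v * GAIN, 1.0) for v in base]  # the +4EV frame (highlights clip)

# Scale the overexposed frame back down by -4EV to match the base exposure:
matched = [v / GAIN for v in over]

# Replace only the shadow tones; highlights must come from the base frame,
# since they clipped in the +4EV shot. In a real raw file the matched shadow
# values carry far less noise, because they were recorded much higher up
# the sensor's range.
THRESHOLD = 0.05
merged = [m if b < THRESHOLD else b for b, m in zip(base, matched)]
print(merged)
```

    In this noise-free toy the merged values equal the base values exactly; the payoff in a real file is that the replaced shadows come from much cleaner data.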

    I will write an article soon describing the workflow in a bit more detail. Stay tuned!

    Lede image: Unnecessary Noise Prohibited by Jens Schott Knudsen cbn

    First Target Reached!

    On Sunday, we made the base target for our Kickstarter! Unless too many backers decide to cancel their pledges, funding for making Krita really, really fast and for animation support is secure! Now, of course, is not the time to fold our hands and lean back: it would be a pity if we didn’t manage to reach a handful or even more stretch goals!

    But having reached this milestone, it’s time to make it easy to back the project through paypal:

    You can choose your reward level in the comment, and from 15 euros you’ll get your stretch goal voting rights, of course!

    Talking about stretch goals… Michael Abrahams surprised us all by submitting a patch on reviewboard that implemented most of the selection tools improvement stretch goal! Shift, alt, shift all, ctrl have been implemented for the polygonal, elliptical and rectangular selection tools. The rest is still todo, so it’s not ready for a build yet.

    We’ve been busy working on fixing other issues as well:

    • Dmitry implemented Pass-Through mode for group layers (note: filter, transform and transparency masks and pass-through mode don’t work together yet, but loading and saving from and to PSD does!)
    • When using Krita with a translation active on windows, the delay on starting a stroke is a bit less, but we’re still working on eliminating that delay completely
    • The color picker cursor now shows the currently picked and previous color.
    • We now can load layerstyles (with some limitations) from PSD files. Saving is coming next!
    • Layer styles can now be used with inherit-alpha
    • Fix some issues with finding templates
    • Work around an issue in the oxygen widget style that would crash the OpenGL-based canvas due to double initialization
    • Don’t toggle the layer options when right-clicking on a layer icon to get the context menu (patch by Victor Wåhlström)
    • Update the Window menu when a subwindow closes
    • Load newer PSD-generated JPG files correctly by reading the resolution information from the TIFF tags as well. (Yes, JPG resolution is marked in the exif metadata using TIFF tags…)
    • Show the image name in the window menu if it hasn’t been saved yet.
    • Don’t crash when trying to apply isolate-layer on a transform mask
    • Add webp support (at least on Linux, untested on Windows)
    • Use the right Krita blending mode when a PSD image contains Color Burn.
    • Add Lighter Color and Darker Color blending modes and load them from PSD.
    • Add a shortcut to edit/paste into a new image. Patch by Tiffany!
    • Fix the autosave recovery dialog on Windows for unnamed autosaves

    Unfortunately, all this has left our codebase in a slightly unstable state… We tried to make new builds, but they just aren’t good enough yet to share! We’re working on that, though, and hopefully we’ll get there by Wednesday!


    Interview with Evgeniy Krivoshekov


    Could you tell us something about yourself?

    Hi! My name is Evgeniy Krivoshekov, 27 years old, I’m from the Far East of Russia, Khabarovsk. I’m an engineer but have worked as sales manager, storekeeper and web programmer. Now I’m a 3d-modeller! I like to draw, read books, comics and manga, to watch fantastic movies and cartoons and to ride my bicycle.

    Do you paint professionally, as a hobby artist, or both?

    I’m not a pro-artist yet. Drawing is my hobby now but I really want to become a full-time professional artist. I take commissions for drawings occasionally, but not all the time.

    What genre(s) do you work in?

    Fantasy, still life.

    Whose work inspires you most — who are your role models as an artist?

    Wah! So many artists who inspire me!

    I think that I love not the artists but their works. For example: Peter Han’s drawings in traditional technique; Ilya Kuvshinov’s work in Photoshop and with anime style; Dave Rapoza, an awesome artist who draws in traditional and digital techniques with his own very detailed style; Pascal Campion – his work is full of mood and motion and life! And many other artists inspire me a little as well. I like many kinds of art: movies, cartoons, anime, manga and comics, music – all kinds of art inspire me.

    How and when did you get to try digital painting for the first time?

    Hmmmm… I’m not sure but I think that was in 2007 when my father bought our (my family’s) first computer for learning and studying. I was a student, my sister too, and we needed a computer. My first digital tablet was Genius, and the software was Adobe Photoshop CS2.

    What makes you choose digital over traditional painting?

    I don’t choose between digital and traditional drawing – I draw with digital and traditional techniques. I’ve been doing traditional drawing since childhood but digital drawing I think I’m just starting to learn.

    How did you find out about Krita?

    I think it was when I started using Linux about 3-4 years ago. Or when I found out about the artist David Revoy and read about Krita on his website.

    What was your first impression?

    Ow – it was really cool! Krita’s GUI is like Photoshop but the brushes are like brushes in Sai, wonderful smudge brushes! It was a very fast program and it was made for Linux. I was so happy!

    What do you love about Krita?

    Surprisingly freely configurable interface. I used to draw in MyPaint or GIMP, but it was not so easy and comfortable as in Krita. Awesome smudge brushes, dark theme, Russian support by programmer Dmitriy Kazakov. The wheel with brushes and the color wheel on right-click of the mouse – what a nice idea! The system of dockers.

    What do you think needs improvement in Krita? Is there anything that really annoys you?

    Managing very high resolution files, the stability and especially ANIMATION! I want to do cartoons, that’s why I want an animation toolkit in Krita. It will be so cool to draw cartoons in Krita as in TV Paint. But Krita is so powerful and free.

    What sets Krita apart from the other tools that you use?

    I use Blender, MyPaint, GIMP and Krita but I rarely mix them. MyPaint and GIMP I rarely use, only when I really need them. Blender and Krita are my favourite software. I think that I will soon start to combine them for mix-art: 3d-art+hand-drawing.

    If you had to pick one favourite of all your work done in Krita so far, what would it be, and why?

    I think Frog-rider, Sunny, detailed work with an interesting plot about the merchant on the frog. Funny and simple – everything I like.

    What techniques and brushes did you use in it?

    I used airbrush and circle standard brushes, basic wet brush, fill block and fill circle brushes, ink brush for sketching, my own texture brush and the move tool. That’s all I need for drawing. As regards techniques… sometimes I draw by value, sometimes from a sketch with lines, sometimes black and white with colors underneath (layer blending mode) or with colors without shading – it depends on my mood of the moment.

    Where can people see more of your work?

    My daily traditional and digital pieces on Instagram. Some photos, but many more drawings. More art at DeviantArt and Artstation.

    Anything else you’d like to share?

    I just want to say that anyone can draw, it’s all a matter of practice!

    May 15, 2015

    Of file modes, umasks and fmasks, and mounting FAT devices

    I have a bunch of devices that use VFAT filesystems. MP3 players, camera SD cards, SD cards in my Android tablet. I mount them through /etc/fstab, and the files always look executable, so when I ls -F them, they all have asterisks after their names. I don't generally execute files on these devices; I'd prefer the files to have a mode that doesn't make them look executable.

    I'd like the files to be mode 644 (or 0644 in most programming languages, since it's an octal, or base 8, number). 644 in binary is 110 100 100, or as the Unix ls command puts it, rw-r--r--.

    There's a directive, fmask, that you can put in fstab entries to control the mode of files when the device is mounted. (Here's Wikipedia's long umask article.) But how do you get from the mode you want the files to be, 644, to the mask?

    The mask (which corresponds to the umask command) represents the bits you don't want to have set. So, for instance, if you don't want the world-execute bit (1) set, you'd put 1 in the mask. If you don't want the world-write bit (2) set, as you likely don't, put 2 in the mask. So that's already a clue that I'm going to want the rightmost digit to be 3: I don't want files mounted from my MP3 player to be either world writable or executable.

    But I also don't want to have to puzzle out the details of all nine bits every time I set an fmask. Isn't there some way I can take the mode I want the files to be -- 644 -- and turn it into the mask I'd need to put in /etc/fstab or set as a umask?

    Fortunately, there is. It seemed like it ought to be straightforward, but it took a little fiddling to get it into a one-line command I can type. I made it a shell function in my .zshrc:

    # What's the complement of a number, e.g. the fmask in fstab to get
    # a given file mode for vfat files? Sample usage: invertmask 755
    invertmask() {
        python -c "print '0%o' % (~(0777 & 0$1) & 0777)"
    }

    This takes whatever argument I give it -- $1 -- and keeps only the three rightmost octal digits, (0777 & 0$1). It takes the bitwise NOT of that, ~. But the result of that is a negative number, and we only want the three rightmost octal digits of the result, (result) & 0777, expressed as an octal number -- which we can do in python by printing it with %o. Whew!

    Here's a shorter, cleaner-looking function that does the same thing, though it's not as clear about what it's doing:

    invertmask1() {
        python -c "print '0%o' % (0777 - 0$1)"
    }

    So now, for my MP3 player I can put this in /etc/fstab:

    UUID=0000-009E /mp3 vfat user,noauto,exec,fmask=133,shortname=lower 0 0
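    As a quick sanity check that the fmask of 133 used here really corresponds to file mode 644:

```python
# fmask/umask bits are the permissions to *clear*, so masking them out of
# the nine permission bits (rwxrwxrwx) recovers the resulting mode:
fmask = 0o133
mode = 0o777 & ~fmask
print(oct(mode))  # -> 0o644
```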

    How to open .pdn files? or: Things I wish I'd known earlier.

    Paint.net is a graphics program that uses its own binary file format, .pdn, which almost no other program can open. Paint.net has a large community and many plugins are available, including a third-party plugin that adds support for OpenRaster. Paint.net is written in C# and requires the Microsoft .NET runtime, meaning current versions work only on Windows Vista or later.

    If you need to open PDN files without using Paint.net, there is an answer! LazPaint can open .pdn files and also natively supports OpenRaster. LazPaint is available on Windows, Linux and Mac OS X.

    In hindsight, using LazPaint would have been easier than taking a flat image and editing it to recreate the layer information I wanted. Although I respect the work done on Paint.net, it is yet another example of time wasted and hassle caused by proprietary file formats and vendor lock-in.

    May 14, 2015

    Result of the work on GCompris is ready..

    As I finished the work for the time allocated by this campaign, here is a video showing the result:

    Of course, being only 15% funded, I couldn’t complete the new look for everything. But at least I could update all the core components, the main menu with all the activity icons, and a good number of activities.

    Thanks again to everyone who helped make this possible; more updates about it later…

    May 12, 2015

    git: Moving partial changes between commits

    Now and then I face the fact that I’ve added changes to a commit I’d like to have moved into a different commit. Here is what you do:

    What’s there

    We have two commits. For illustration purposes I’ve trimmed the log output down:

    $ git log --stat
    commit 19c698a9ee91a5f03f1c3240fc957e6b328931f5
        WIP: adding tests
     parts/tests/functional/conftest.py       |  4 ++--
     parts/tests/functional/test_frobfrob.py  | 43 ++++++++++
     frobfrob.py                              | 14 +++++++++++++-
    commit c7ef6c3014ca9d049dea46fbed44010acf53ae79
        prepare frob frob schemas
     parts/tests/functional/conftest.py           | 31 +++++++++++++
     frobfrob/models.py                           | 32 +++++++++++++
    commit 5b30d351f51fda40d37d2f7dc25d2367bd37845a

    Now I want to move the changes made to conftest.py from commit c7ef6c3014ca9d049dea46fbed44010acf53ae79 into commit 19c698a9ee91a5f03f1c3240fc957e6b328931f5 (or HEAD).

    Pluck out the commit

    In order to pluck out the changes to conftest.py, we’ll reset the file against the previous commit 5b30d351f51fda40d37d2f7dc25d2367bd37845a (you could also use HEAD~3).

    $ git reset 5b30d351f51fda40d37d2f7dc25d2367bd37845a parts/tests/functional/conftest.py
    Unstaged changes after reset:
    M       parts/tests/functional/conftest.py
    $ git status -s
    MM parts/tests/functional/conftest.py

    As you can see, we now have both staged and unstaged changes. The staged changes remove the additions to the conftest.py file, and the unstaged changes add our code to conftest.py.

    Remove and Add

    We now create two commits:

    1. Use the staged changes for a new commit which we’ll squash with c7ef6c3014ca9d049dea46fbed44010acf53ae79.
    2. Stage the unstaged changes and create another commit which we’ll squash with 19c698a9ee91a5f03f1c3240fc957e6b328931f5 or HEAD.

    # 1. commit; message is something like: squash: removes changes to conftest.py
    $ git commit
    # 2. commit
    # stage changes
    $ git add -p
    # commit, message will be something like: squash: adds changes to conftest.py
    $ git commit
    # we end up with two additional commits
    $ git log --oneline
    492ff22 Adds changes to conftest
    8485946 removes conftest files
    19c698a WIP: adding tests
    c7ef6c3 prepare frob frob schemas

    Interactive rebase puts it all together

    Now use an interactive rebase to squash the changes with the right commits:

    $ git rebase -i HEAD~5
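
    The rebase todo list that opens can then be reordered so each squash commit sits under its target, marked as fixup. A rough sketch, using the abbreviated hashes and subjects from the log above (fixup folds a commit into the one above it and discards its message; use squash instead if you want to edit the combined message):

```
pick c7ef6c3 prepare frob frob schemas
fixup 8485946 removes conftest files
pick 19c698a WIP: adding tests
fixup 492ff22 Adds changes to conftest
```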

    May 10, 2015

    Sat 2015/May/09

    • Decay

      A beautiful, large, old house in downtown Xalapa, where patina is turning into ruin.

      Decayed door Decayed roof Floor tiles Door latch Decayed door

    May 09, 2015

    Interview with Amelia Hamrick

    duelling with a demon 800px

    Could you tell us something about yourself?

    My name is Amelia Hamrick, and I’m a junior music and fine arts double major at Oklahoma Christian University. I’m actually working towards a master’s in library science, but I’d like to get into illustration, concept art and webcomics on the side!

    Do you paint professionally, as a hobby artist, or both?

    I’m still an art student but I hope to work on a professional level. I definitely draw a lot for fun outside of classwork though, so both!

    What genre(s) do you work in?

    Fantasy and sci-fi mostly, though I dabble in other genres!

    Whose work inspires you most — who are your role models as an artist?

    Hmm… In no particular order, Hiromu Arakawa, Mike Mignola, Hayao Miyazaki, Maurice Noble, Bill Watterson, Roy Lichtenstein, Alphonse Mucha… There’s too many to list!

    How and when did you get to try digital painting for the first time?

    I started out trying to color ink drawings in GIMP with a mouse! That never turned out very well, haha. I talked my parents into getting me a little wacom bamboo tablet for Christmas when I was in… 9th grade, I think?

    What makes you choose digital over traditional painting?

    Layers, ctrl-Z, and transform tools, hahaha! I still do a lot of traditional work (I’ve really enjoyed working with gouache) but most of my not-schoolwork art is done digitally now.

    How did you find out about Krita?

    I used to go on occasional Google search sprees for all the latest drawing applications, and I found Krita during one of these about 3 or 4 years ago. My old computer couldn’t handle the Windows build, though, and I hadn’t really gotten into Linux yet… I run Ubuntu GNOME on my school-provided MacBook now, so I tried Krita again last year when it was just entering version 2.8, and I’ve used it ever since!

    What was your first impression?

    When I tried it for the very first time on an old computer it looked really impressive, but it was just too much for that poor old box, haha. Krita’s performance has improved tremendously!

    What do you love about Krita?

    It combines just about all the features of Photoshop that I’d use with a more streamlined interface, a MUCH better brush engine (loving the color blending and pressing E for erasing with any tool), stroke smoothing and canvas mirroring like Paint Tool SAI… The feature I take most for granted, though, is the right-click preset palette!

    What do you think needs improvement in Krita? Is there anything that really annoys you?

    I would really love to see a more stable OS X build, as I often try to convince my fellow art students to give Krita a shot but they mostly use OS X and are wary of trying experimental builds. Other than that, the software in its current state is fantastic and I’m really excited for the new features the Kickstarter will bring (especially the animation tools and gradient map!)

    What sets Krita apart from the other tools that you use?

    I guess I’ve already answered this, but it combines features from several different painting programs I’ve tried into one killer app! It’s the only program where I haven’t felt like I’m missing anything from my workflow. And being open-source is the cherry on top!

    If you had to pick one favourite of all your work done in Krita so far, what would it be, and why?

    This poster that I drew for a friend of a scene from her Pathfinder character’s backstory is the most ambitious project I’ve completed in Krita! She won her magical flying-V guitar in a shredding duel with a punk rock demon… how can I pass that mental image up?

    What techniques and brushes did you use in it?

    I used primarily the ink pen with the tilt function (ink_tilt_20) and the hairy brush (bristles_hairy). I sketched and inked first of course, then painted everything in greyscale on one layer and colored on top of that with an overlay layer! I also used the perspective guides quite liberally (though I hadn’t yet figured out how to properly use the ruler assistants, haha)

    Where can people see more of your work?

    I have an art blog on tumblr, and I’m working on getting a proper portfolio website set up!

    Anything else you’d like to share?

    Thank you so much to everyone involved in developing Krita! I wish I could help code… maybe I can volunteer with updating the user manual!

    I’m also currently doing sketch commissions to help fund the Krita kickstarter!

    May 08, 2015

    PIXLS.US Now Live!

    I checked the first post I had made on PIXLS.US while I was building it, and it appears it was around the end of August, 2014. I had probably been working on it for at least a few weeks before that. Basically, it's been about 10 months since I started this crazy idea.

    Finally, we are "officially" launched and live. Phew!


    I don't normally ask for things from folks who read what I write here. I'm going to make an exception this time. I spent a lot of time building the infrastructure for what I hope will be an awesome community for free-software photographers.

    So naturally, I want to see it succeed. If you have a moment and don't mind, please consider sharing news of the launch! The more people that know about it, the better for everyone! We can't build a community if folks don't know it's there! :) (Of course, come by and join us yourselves as well!).

    I'll be porting more of my old tutorials over as well as writing new material there (and hopefully getting other talented folks to write as well).

    Thank You!

    Also, I want to take a moment to recognize and thank all of you who either donated or clicked on an ad. Those funds are what helped me pay for the server space to host the site as well as the forums, and will keep ads off the site. I'm basically just rolling any donations back into hosting the site and hopefully finding a way to pay folks for writing in the future. Thank you all!

    May 07, 2015

    It's Alive!


    Time to finally launch...

    Well, here we are. I just checked the first blog post and it was dated August 24th, 2014. I had probably been working on the back end of the site getting things running for the basic blog setup a few weeks prior to that. It’s almost been a full year since I started working on this idea.

    So it is with great pleasure that I can finally say…

    Welcome to PIXLS.US!

    If you’re just now joining us, let me reiterate the mission statement for this website.

    PIXLS.US Mission Statement

    To provide tutorials, workflows and a showcase for high-quality photography using Free/Open Source Software.

    I started this site because the world of F/OSS photography is fractured across different places. There’s no good single place for photographers to collaborate around free software workflows, as well as a lack of good tutorials aimed at high-quality processing with free software.


    I have personally been writing tutorials on my blog for a few years now (holy crap). I primarily started doing it because, while there are many tutorials for photo editing, they almost always stop short of working towards high-quality results. The few tutorials that did aim for high-quality results were all quite a few years old (and often in need of updating).

    With your help, I’m hoping to change that here.


    Workflows are another thing that rarely gets described: specifically, what a workflow looks like with free software. For instance, some thoughts off the top of my head:

    • Creating a panorama image from start to finish.
    • Shooting and editing fashion images.
    • Taking great portrait images, and how to retouch them.
    • What to watch out for when shooting macro.
    • Planning and shooting great astrophotography.
    • How to approach landscape editing.
    • Creating a composite dream image.

    These are just some of the ideas around workflows. It also doesn’t have to be only software-focused. There is a wealth of knowledge about practical techniques that we can all share as well.


    Quick: name five photographers whose work you love that use free software. Did you have trouble reaching five? That’s another of the things I would like to focus on here: showcasing amazing work from talented photographers who happen to use free software (and in some cases may be willing to share with us).

    I even started a thread on the forum to try and note some amazing photographers. I will try to work through that list and get them to open up and speak with us a bit about their work and process.

    By Us, For Us

    I am floored by how awesome the community has been. As I mentioned on my blog, the main reason for me to write was to give something back to the community. I learned so much for so long from others before me and the least I could do is try to help others as well.

    This community will be what we make it. Come help make it something awesome that we can all be proud of.

    Go sign up on the forum and let your voice be heard.

    Have an idea for an article? Let me know (in the forums or by email)!

    Make Some Noise!

    Finally, we are just starting out and are a small community at the moment. If you’re feeling up to it, please consider letting your social circles know that we’re here and what we’re trying to do. The only way for the community to grow is for people to know it’s here in the first place!

    Time to kick the tires on the new Fedora websites in staging!

    So a couple of weeks ago I mentioned the work robyduck and the Fedora websites team have been putting in on the new websites for Fedora, primarily, spins.fedoraproject.org and labs.fedoraproject.org. Here’s the handy little diagram I put together back then to explain:

    diagram showing four different fedora sites

    This week, robyduck got the new site designs into staging, which means you can try out the new work-in-progress sites right now and provide us your helpful feedback, suggestions (and dare I suggest it) content contributions to make the sites even better. :)


    Click below to visit the staging site:
    Screenshot from 2015-05-07 17:02:39


    Click below to visit the staging site:
    Screenshot from 2015-05-07 17:02:29

    You may notice, as you peruse the Fedora Labs staging site and the Fedora Spins staging site, that you’re going to see some bogus stuff. For example, the Robotics Suite page highlights GIMP and Inkscape as included applications. :) This is because a lot of the content is filler, and we need help from the users of these spins and experts in the individual technologies to decide what we should be featuring and how we should describe these spins.

    So this is sort of a continuation of our earlier call for help, but this one is really mostly focused on content – we really need your help.


    With the staging sites for spins.fedoraproject.org and labs.fedoraproject.org up and running, we are hoping this will make it easier for folks to understand where we are lacking content and could use some help figuring out what to say about each spin. It helps to see it all in context for every spin.

    This is a good way to contribute to an open source project if you enjoy writing or documentation – we will handle all the details of getting the content into the pages, you would simply need to email us or blog comment (or whatever is easiest for you) the content you are contributing.

    If you are interested in helping out, or have a particular interest in one of the following spins that most needs help, please get in touch and we’ll help you get started:

    • Robotics Suite – needs list of featured applications with short descriptions.
    • Fedora Jam – needs list of featured applications with short descriptions. Could use an updated top-level description (the 2 paragraphs up top) as well.
    • Security Lab – needs list of featured applications with short descriptions.
    • Sugar on a Stick – needs list of featured applications with short descriptions.

    We’d appreciate any help you can provide. Get in touch in the comments to this post!

    Using the Transform Masks



    (This video has subtitles in English.)


    Okay, so I’ve wanted to do a tutorial for transform masks for a while now, and this is sorta ending up being a flower-drawing tutorial. Do note that this tutorial requires Krita 2.9.4 at MINIMUM. It has a certain speed-up that allows you to work with transform masks reliably!

    I like drawing flowers because they are a bit of an unappreciated subject, yet allow for a lot of practice in terms of rendering. Also, you can explore cool tricks in Krita with them.

    Today’s flower is the azalea. These flowers are usually pink to red and appear in clusters, and the clusters allow me to exercise with transform masks!

    I got an image from Wikipedia for reference, mostly because it’s public domain, and as an artist I find it important to respect other artists. You can copy it and, if you already have a canvas, edit->paste as new image or new->from clipboard.

    Then, if you didn’t have a new canvas, make one. I made an A5 300dpi canvas. This is not very big, but we’re only practicing. I also have the background colour set to a yellow-greyish colour (#CAC5B3), partly because it reminds me of paper, and partly because bright screen white can strain the eyes and make it difficult to focus on values and colours while painting. Also, due to the lack of strain on the eyes, you’ll find yourself soothed a bit. Other artists use #c0c0c0, or other values entirely.

    So, if you go to window->tile, you will find that now your reference image and your working canvas are side by side. The reason I am using this instead of the docker is because I am lazy and don’t feel like saving the wikipedia image. We’re not going to touch the image much.

    Let’s get to drawing!


    First we make a bunch of branches. I picked a slightly darker colour here than usual, because I know that I’ll be painting over these branches with the lighter colours later on. Look at the reference how branches are formed.

    azelea_02_drawing flowers

    Then we make an approximation of a single flower on a layer. We make a few of these, all on separate layers.
    We also do not colour pick the red, but we guess at it. This is good practice, so we can learn to analyse a colour as well as how to use our colour selector. If we’d only pick colours, it would be difficult to understand the relationship between them, so it’s best to attempt matching them by eye.


    azelea_03_filling flowers

    I chose to make the flower shape opaque quickly by using the ‘behind’ blending mode. This’ll mean Krita is painting the new pixels behind the old ones. Very useful for quickly filling up shapes, just don’t forget to go back to ‘normal’ once you’re done.

    azelea_04_finished setup

    Now, we’ll put the flowers in the upper left corner, and group them. You can group by making a group layer, and selecting the flower layers in your docker with ctrl+click and dragging them into the group.
    The reason why we’re putting them in the upper left corner is because we’ll be selecting them a lot, and Krita allows you to select layers with ‘R’+Click on the canvas quickly. Just hold ‘R’ and click the pixels belonging to the layer you want, and Krita will select the layer in the layer docker.


    Clone Layers

    Now, we will make clusters.
    What we’ll be doing is that we select a given flower and then make a new clone layer. A clone layer is a layer that is literally a clone of the original. They can’t be edited themselves, but edit the original and the clone layer will follow suit. Clone Layers, and File layers, are our greatest friends when it comes to transform masks, and you’ll see why in a moment.


    You’ll quickly notice that our flowers are not good enough for a cluster: we need far more angles on the profile, for example. If only there were a way to transform them… but we can’t do that with clone layers. Or can we?

    Enter Transform Masks!

    Transform Masks are a really powerful feature introduced in 2.9. They are in fact so powerful, that when you first use them, you can’t even begin to grasp where to use them.

    Transform masks allow us to do a transform operation onto a layer, any given layer, and have it be completely dynamic! This includes our clone layer flowers!

    How to use them:

    Right click the layer you want to do the transform on, and add a ‘transform mask’.

    A transform mask should now have been added. You can recognise them by the little ‘scissor’ icon.

    Now, with the transform mask selected, select the transform tool, and rotate our clone layer. Apply the transform.
    You know you’re successful when you can hide the transform mask, and the layer goes back to its original state!

    You can even go and edit your transform! Just activate the transform tool again while on a transform mask, and you will see the original transform so you can edit it. If you go to a different transform operation however, you will reset the transform completely, so watch out.



    We’ll be only using affine transformations in this tutorial (which are the regular and perspective transform), but this can also be done with warp, cage and liquify, which’ll have a bit of a delay (3 seconds to be precise). This is to prevent your computer from being over-occupied with these more complex transforms, so you can keep on painting.

    We continue on making our clusters till we have a nice arrangement.



    Now do the same thing for the leaves.


    Now, if you select the original paint layers and draw on them, you can see that all clone masks are immediately updated!

    Above you can see there’s been a new view added so we can focus on painting the flower and at the same time see how it’ll look. You can make a new view by going window->new view and selecting the name of your current canvas (save first!). Views can be rotated and mirrored differently.

    Now continue painting the original flowers and leaves, and we’ll move over to adding extra shadow to make it seem more lifelike!


    Alpha Inheritance

    We’re now going to use alpha inheritance. Alpha inheritance is an ill-understood concept, because a lot of programs use ‘clipping masks’ instead, which clip the layer’s alpha using only the alpha of the layer directly below.

    Alpha inheritance, however, uses all layers in a stack, so all the layers in the group that haven’t got alpha inheritance active themselves, or all the layers in the stack when the layer isn’t in a group. Because most people have an opaque layer at the bottom of their layer stack, alpha inheritance doesn’t seem to do much.

    But for us, alpha inheritance is useful, because we can use all clone-layers in a cluster (if you grouped them), transformed or not, for clipping. Just draw a light blue square over all the flowers in a given cluster.


    Then press the last icon in the layer stack, the alpha-inherit button, to activate alpha-inheritance.


    Set the layer to multiply then, so it’ll look like everything’s darker blue.


    Then, with multiply and alpha inheritance on, use an eraser to remove the areas where there should be no shadow.


    For the highlights use exactly the same method, AND exactly the same colour, but instead set the layer to ‘Divide’ (you can find this amongst the ‘Arithmetic’ blending modes). Using Divide has exactly the opposite effect to using Multiply with the same colour. The benefit of this is that you can easily set up a complementary harmony in your shadows and highlights using these two.
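
    To see why the two are opposites, here is a rough sketch on normalised 0..1 channel values (my own illustration of the general Multiply/Divide blending formulas, not Krita's exact pipeline):

```python
def multiply(base, blend):
    # Multiply blending darkens: result <= base for blend in [0, 1].
    return base * blend

def divide(base, blend):
    # Divide blending lightens; real applications clamp the result to 1.0.
    return min(base / blend, 1.0)

base, colour = 0.6, 0.8            # channel values in [0, 1]
shadow = multiply(base, colour)    # 0.48 -- darker than base
highlight = divide(base, colour)   # 0.75 -- lighter than base
# Dividing the Multiply result by the same colour recovers the base,
# which is why the same colour gives mirror-image shadows and highlights:
assert abs(divide(shadow, colour) - base) < 1e-9
```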


    Do this with all clusters and leaves, and maybe on the whole plant (you will first need to stick it into a group layer given the background is opaque) and you’re done!

    Transform masks can be used on paint layers, vector layers, group layers, clone layers and even file layers. I hope this tutorial has given you a nice idea on how to use them, and hope to see much more use of the transform masks in the future!

    You can get the file I made here to examine it further! (Caution: It will freeze up Krita if your version is below 2.9.4. The speed-ups in 2.9.4 are due to this file.)

    May 06, 2015

    Tips for passing Google's "Mobile Friendly" tests

    I saw on Slashdot that Google is going to start down-rating sites that don't meet its criteria of "mobile-friendly": Are you ready for Google's 'Mobilegeddon' on Tuesday?. And from the Slashdot discussion, it was pretty clear that Google's definition included some arbitrary hoops to jump through.

    So I headed over to Google's Mobile-friendly test to check out some of my pages.

    Now, most of my website seemed to me like it ought to be pretty mobile friendly. It's size agnostic: I don't specify any arbitrary page widths in pixels, so most of my pages can resize down as far as necessary (I was under the impression that was what "responsive design" meant for websites, though I've been doing it for many years and it seems now that "responsive design" includes a whole lot of phone-specific tweaks and elaborate CSS for moving things around based on size.) I also don't set font sizes that might make the page less accessible to someone with vision problems -- or to someone on a small screen with high pixel density. So I was pretty confident.

    [Google's mobile-friendly test page] I shouldn't have been. Basically all of my pages failed. And in chasing down some of the problems I've learned a bit about Google's mobile rules, as well as about some weird quirks in how current mobile browsers render websites.

    Basically, all of my pages failed with the same three errors:

    • Text too small to read
    • Links too close together
    • Mobile viewport not set

    What? I wasn't specifying text size at all -- if the text is too small to read with the default font, surely that's a bug in the mobile browser, not a bug in my website. Same with links too close together, when I'm using the browser's default line spacing.

    But it turned out that the first two points were meaningless. They were just a side effect of that third error: the mobile viewport.

    The mandatory meta viewport tag

    It turns out that any page that doesn't add a new meta tag, called "viewport", will automatically fail Google's mobile friendly test and be downranked accordingly. What's that all about?

    Apparently it's originally Apple's fault. iPhones, by default, pretend their screen is 980 pixels wide instead of the actual 320 or 640, and render content accordingly, and so they shrink everything down by a factor of 3 (980/320). They do this assuming that most website designers will set a hard limit of 980 pixels (which I've always considered to be bad design) ... and further assuming that their users care more about seeing the beautiful layout of a website than about reading the website's text.

    And Google apparently felt, at some point during the Android development process, that they should copy Apple in this silly behavior. I'm not sure when Android started doing this; my Android 2.3 Samsung doesn't do it, so it must have happened later than that.

    Anyway, after implementing this, Apple then introduced a meta tag you can add to an HTML file to tell iPhone browsers not to do this scaling, and to display the text at normal text size. There are various forms for this tag, but the most common is:

    <meta name="viewport" content="width=device-width, initial-scale=1">
    (A lot of examples I found on the web at first suggested this: <meta name="viewport" content="width=device-width, initial-scale=1, maximum-scale=1"> but don't do that -- it prevents people from zooming in to see more detail, and hurts the accessibility of the page, since people who need to zoom in won't be able to. Here's more on that: Stop using the viewport meta tag (until you know how to use it).)

    Just to be clear, Google is telling us that in order not to have our pages downgraded, we have to add a new tag to every page on the web to tell mobile browsers not to do something silly that they shouldn't have been doing in the first place, and which Google implemented to copy a crazy thing Apple was doing.

    How width and initial-scale relate

    Documentation on how width and initial-scale relate to each other, and which takes precedence, is scant. Apple's documentation on the meta viewport tag says that setting initial-scale=1 automatically sets width=device-width. That implies that the two are basically equivalent: they're only different if you want to do something else, like set a page width in pixels (use width=) or set the width to some ratio of the device width other than 1 (use initial-scale=).

    That means that using initial-scale=1 should imply width=device-width -- yet nearly everyone on the web seems to use both. So I'm doing that, too. Apparently there was once a point to it: some older iPhones had a bug involving switching orientation to landscape mode, and specifying both initial-scale=1 and width=device-width helped, but supposedly that's long since been fixed.

    initial-scale=2, by the way, sets the viewport to half what it would have been otherwise; so if the width would have been 320, it sets it to 160, so you'll see half as much. Why you'd want to set initial-scale to anything besides 1 in a web page, I don't know.

    If the width specified by initial-scale conflicts with that specified by width, supposedly iOS browsers will take the larger of the two, while Android won't accept a width directive less than 320, according to Quirks mode: testing Meta viewport.

    It would be lovely to be able to test this stuff; but my only Android device is running Android 2.3, which doesn't do all this silly zooming out. It does what a sensible small-screen device should do: it shows text at normal, readable size by default, and lets you zoom in or out if you need to.

    (Only marginally related, but interesting if you're doing elaborate stylesheets that take device resolution into account, is A List Apart's discussion, A Pixel Identity Crisis.)

    Control width of images

    [Image with max-width 100%] Once I added meta viewport tags, most of my pages passed the test. But I was seeing something else on some of my photo pages, as well as blog pages where I have inline images:

    • Content wider than screen
    • Links too close together

    Image pages are all about showing an image. Many of my images are wider than 320 pixels ... and thus get flagged as too wide for the screen. Note the scrollbars, and how you can only see a fraction of the image.

    There's a simple way to fix this, and unlike the meta viewport thing, it actually makes sense. The solution is to force images to be no wider than the screen with this little piece of CSS:

    <style type="text/css">
      img { max-width: 100%; height: auto; }
    </style>

    [Image with max-width 100%] I've been using similar CSS in my RSS reader for several months, and I know how much better it made the web, on news sites that insist on using 1600 pixel wide images inline in stories. So I'm happy to add it to my photo pages. If someone on a mobile browser wants to view every hair in a squirrel's tail, they can still zoom in to the page, or long-press on the image to view it at full resolution. Or rotate to landscape mode.

    The CSS rule works for those wide page banners too. Or you can use overflow: hidden if the right side of your banner isn't all that important.
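
    Putting both fixes together, a minimal page skeleton might look something like this (a sketch; squirrel.jpg is just a placeholder image name):

```html
<!DOCTYPE html>
<html>
<head>
  <!-- Tell mobile browsers not to zoom out to a fake 980px viewport: -->
  <meta name="viewport" content="width=device-width, initial-scale=1">
  <style type="text/css">
    /* Keep images no wider than the screen: */
    img { max-width: 100%; height: auto; }
  </style>
</head>
<body>
  <img src="squirrel.jpg" alt="Squirrel with a bushy tail">
</body>
</html>
```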

    Anyway, that takes care of the "page too wide" problem. As for the "Links too close together" warning that remained even after I added the meta viewport tag: that was just plain bad HTML and CSS, showing that I don't do enough testing at different window sizes. I fixed it so the buttons lay out better and don't draw on top of each other on super narrow screens, which I should have done long ago. Likewise for some layout problems I found on my blog.

    So despite my annoyance with the whole viewport thing, Google's mandate did make me re-examine some pages that really needed fixing, and should have improved my website quite a bit for anyone looking at it on a small screen. I'm glad of that.

    It'll be a while before I have all my pages converted, especially that business of adding the meta tag to all of them. But readers, if you see usability problems with my site, whether on mobile devices or otherwise, please tell me about them!

    Krita 2.9.4 released!

    We’re not just keeping an eye on the Kickstarter campaign (three days in and almost at 50%! But go ahead and support us by all means, we’re not there yet!), we’re also working hard on Krita itself. Dmitry is busy improving the performance of clone layers, adding PSD file support to the Layer Styles feature and fixing loading and saving masks to PSD files (we implemented that in October, but broke it subsequently…), and we’ve got a brand new release for you today.

    Well, I made packages for Windows available already on Sunday, but here’s the scoop on what’s in and what’s not: layer styles, startup speed improvements, memory consumption improvements, and bug fixes!

    Big New Things

    And we mean big. This is the first release with the layer styles feature sponsored by last year’s kickstarter!

    • Implement Photoshop layer styles. Note: this is a first version; some features are not implemented yet, and we load and save only to Krita’s native file format and ASL style library files (not PSD files yet). There is also still a bug with masks and layer styles
    • Make startup faster by not waiting for the presets to be loaded (startup times are now 30-50% faster)
    • Big speed improvement when using transform masks and filters; the move tool is about 20% faster
    • Reduce the download size of Krita for Windows by 33% (145MB to 97MB), the result of cleaning up unused files and fixing translations

    And then there are the bug fixes…

    • Fix the patch count of the color history
    • Lots of fixes to the layout of docker panels, dialogs and other parts of Krita
    • Lots of fixes for special widgets when using the Plastique style
    • Fix issues with resizing the icon size in resource selectors
    • Fix usability issues in the crop tool (reset size settings after doing cropping)
    • Add a function to hide docker titlebars
    • Fix issues with the default settings button
    • Save memory by not loading or saving texture information for brush presets that don’t use textures
    • Automatically add a tag based on the filename for all brush tips from Adobe ABR brush collections
    • Make Export and Save as default to the folder the original file came from
    • Make it possible to switch off compression for layers in kra files (bigger files, but faster saving)
    • Disable opening 32 bit float grayscale TIFF files: we don’t support that yet
    • Fix memory leak when using gradients
    • Fix color serialization from user interface to GMIC (bug 345639)
    • Fix crash when toggling GMIC preview checkbox (bug 344058)
    • Make it possible to re-enable the splash screen
    • Show the label for the sliders inside the slider, to save space
    • Fix HSV options for the grid and spray brush
    • Don’t show the zoom on-canvas notification while loading an image
    • Fix many memory leaks
    • Fix the specific color selector docker so it doesn’t grow too big
    • Allow the breeze theme to be used on platforms other than KDE
    • Don’t crash when creating a pattern-filled shape if no pattern is selected (bug 346990)
    • Fix loading floating point TIFF files (bug 344334)
    • Fix loading tags for resources from installed bundles
    • Make it possible to ship default tags for our default resources (bug 338134 — needs more work to create a good default definition)
    • Remember the last chosen tag in the resource selectors (bug 346703)
    • Fix bug 346355: don’t say “All presets” in the brush tip selector’s filter
    (Please keep in mind that these builds are unstable and experimental. Stuff is expected not to work. We make them so we know we’re not introducing build problems and to invite hackers to help us with Krita on OSX.)