October 04, 2015

Aligning images to make an animation (or an image stack)

For the animations I made from the lunar eclipse last week, the hard part was aligning all the images so the moon (or, in the case of the moonrise image, the hillside) was in the same position in every frame.

This is a problem that comes up a lot with astrophotography, where multiple images are stacked for a variety of reasons: to increase contrast, to increase detail, or to take an average of a series of images, as well as animations like I was making this time. And of course animations can be fun in any context, not just astrophotography.

In the tutorial that follows, clicking on the images will show a full sized screenshot with more detail.

Load all the images as layers in a single GIMP image

The first thing I did was load up all the images as layers in a single image: File->Open as Layers..., then navigate to where the images are and use shift-click to select all the filenames I wanted.

[Upper layer 50% opaque to align two layers]

Work on two layers at once

By clicking on the "eyeball" icon in the Layers dialog, I could adjust which layers were visible. For each pair of layers, I made the top layer about 50% opaque by dragging the opacity slider (it's not important that it be exactly at 50%, as long as you can see both images).

Then use the Move tool to drag the top image on top of the bottom image.

But it's hard to tell when they're exactly aligned

"Drag the top image on top of the bottom image": easy to say, hard to do. When the images are dim and red like that, and half of the image is nearly invisible, it's very hard to tell when they're exactly aligned.


Use a Contrast display filter

What helped was a Contrast filter. View->Display Filters... and in the dialog that pops up, click on Contrast, and click on the right arrow to move it to Active Filters.

The Contrast filter changes the colors so that the dim red moon is fully visible, and it's much easier to tell when the layers are approximately on top of each other.


Use Difference mode for the final fine-tuning

Even with the Contrast filter, though, it's hard to see when the images are exactly on top of each other. When you have them within a few pixels, get rid of the contrast filter (you can keep the dialog up but disable the filter by un-checking its checkbox in Active Filters). Then, in the Layers dialog, slide the top layer's Opacity back to 100%, go to the Mode selector and set the layer's mode to Difference.

In Difference mode, you only see differences between the two layers. So if your alignment is off by a few pixels, it'll be much easier to see. Even in a case like an eclipse where the moon's appearance is changing from frame to frame as the earth's shadow moves across it, you can still get the best alignment by making the Difference between the two layers as small as you can.

Use the Move tool and the keyboard: left, right, up and down arrows move your layer by one pixel at a time. Pick a direction, hit the arrow key a couple of times and see how the difference changes. If it got bigger, use the opposite arrow key to go back the other way.

When you get to where there's almost no difference between the two layers, you're done. Change Mode back to Normal, make sure Opacity is at 100%, then move on to the next layer in the stack.

It's still a lot of work. I'd love to find a program that looks for circular or partially-circular shapes in successive images and does the alignment automatically. Someone suggested I might be able to write something using OpenCV, which has circle-finding primitives (I've written briefly before about SimpleCV, a wrapper that makes OpenCV easy to use from Python). But doing the alignment by hand in GIMP, while somewhat tedious, didn't take as long as I expected once I got the hang of using the Contrast display filter along with Opacity and Difference mode.
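
In the absence of such a tool, the core of what the arrow-key procedure does can be sketched in plain Python: treat two frames as grids of brightness values and search nearby offsets for the one that minimizes their summed difference, the same quantity Difference mode shows visually. This is just an illustrative sketch (pure Python, brute force, no OpenCV), not part of the workflow above:

```python
def sad(a, b, dx, dy):
    """Sum of absolute differences between image b shifted by
       (dx, dy) and image a, over the region where they overlap."""
    h, w = len(a), len(a[0])
    total = 0
    for y in range(h):
        for x in range(w):
            ys, xs = y + dy, x + dx
            if 0 <= ys < h and 0 <= xs < w:
                total += abs(a[y][x] - b[ys][xs])
    return total

def best_offset(a, b, radius=3):
    """Try every offset within +/- radius pixels and return the
       (dx, dy) that minimizes the difference between the frames."""
    return min(((dx, dy)
                for dy in range(-radius, radius + 1)
                for dx in range(-radius, radius + 1)),
               key=lambda off: sad(a, b, off[0], off[1]))
```

Against a frame that is a copy of the other shifted by a pixel or two, best_offset recovers that shift; on real photos you would feed it the pixel data of two layers and move the top layer by the returned amount.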

Creating the animation

Once you have your layers, how do you turn them into an animation?

The obvious solution, which I originally intended to use, is to save as GIF and check the "animated" box. I tried that -- and discovered that the color errors you get when converting an image to indexed make a beautiful red lunar eclipse look absolutely awful.

So I threw together a Javascript script to animate images by loading a series of JPEGs. That meant that I needed to export all the layers from my GIMP image to separate JPG files.

GIMP doesn't have a built-in way to export all of an image's layers to separate new images. But that's an easy plug-in to write, and a web search found lots of plug-ins already written to do that job.

The one I ended up using was Lie Ryan's Python script in How to save different layers of a design in separate files; though a couple of others looked promising (I didn't try them), such as gimp-plugin-export-layers and save_all_layers.scm.

You can see the final animation here: Lunar eclipse of September 27, 2015: Animations.

October 03, 2015

The Art of Language Invention

"Careful now. I must say this right. Upe, I have killed your husband. There's a gold hairpin in his chest. If you need more soulprice, ask me. Upe, bing keng ... No, that doesn't start right. Wait, I should say, 'Upe, bing wisyeye keng birikyisde... And then say sorry. What's sorry in this stupid language? Don't know. Bing biyititba. I had to... She can have the gold hairpin, and the other one, that should be enough. I hope she didn't really love him."

This is a tiny fragment from the novel I was writing when I started hacking on Krita... I finished the last chapter last year, and added a new last chapter this year. The context? Yidenir, one of the protagonists, an apprentice sorcerer, is left alone by her master in a Barushlani camp, where she lives among the women, in the inner courtyard. When she learns she has been abandoned, she goes to the men's side of the tent, argues with the warlord and, to make sure he understands she's a sorcerer, kills his right-hand man by ramming one of her hairpins into his chest. Then she goes back and tries to figure out how to tell that henchman's wife that she has killed her husband. A couple of weeks isn't long enough to learn Den Barush, as Barushlani is called in Denden (where 'barush' is a form of the word for 'mountain').

Together with the novel, I wrote parts of a grammar of Barushlani. I had written a special application to collect language data, called Kura, and a system that used DocBook, FOP and Python to combine language data and descriptive text into a single grammar. I was a serious conlanger. Heck, I was a serious linguist, having had an article published in Linguistics of the Tibeto-Burman Area.

But conlanging is how I started. I hadn't read Tolkien (much: the local library only had Volume II of Lord of the Rings, in a Dutch translation), so I didn't know it was possible to invent a language. But around 1981 I started learning French, English and German, and with French came a grammar: a book that put down the rules of language in an orderly way, very attractively too, I thought. And my mind was fizzing with this invented world, full of semi-hemi-demi-somewhat humans that I was sculpting in wax. And drawing. And trying to figure out the music of. My people needed a language!

So I started working on Denden. It's no coincidence that Denden has pretty much no logical phonology. Over the years, I found I had gotten sentimentally attached to words I invented early on, so while grammar was easy to rewrite and make more interesting, the words had to stay. More or less.

Then I started studying Chinese, found some like-minded people, like Irina, founded the Society for Linguafiction (conlang wasn't a word back then), and got into a row with Leyden Esperantist Marc van Oostendorp, who felt that languages should only be invented from idealistic motives, not aesthetic ones. I got into a memorable discussion in a second-hand bookshop when a philosopher told me smugly that I might have imagined I had invented a language, but that I was wrong because a) you cannot invent a language and b) an invented language is not a language.

I got into the community centered around the CONLANG mailing list. I did a couple of relays, a couple of translations, and then I started getting ambitious about my world: I started working on the first two novels. And then, of course, I got side-tracked a little, first by the rec.arts.sf.composition usenet group, where people could discuss their writing, and later on by Krita.

These days, when we need words and names for our long-running RPG campaign, we use Nepali for Aumen Sith, Persian for Iss-Peran. Only Valdyas and Velihas have proper native languages. The shame!!

And apart from RPG and now and then writing a bit of fiction, I had more or less forgotten about my conlanging. The source code for Kura seems to be lost; I need to check some old CD-Rs, but I'm not very hopeful. The setup I used to build the grammars is pretty much unreconstructable, and the word-processor documents that hold my oldest data don't load correctly anymore. (I did some very weird hacks back then, including using a hex editor to make a Denden translation of WordPerfect 4.2.)

Until today, when young whipper-snapper David J. Peterson's book arrived, entitled "The art of language invention". Everything came back... The attempt to make sense of Yaguello's Les Fous du Langage (crap, but there wasn't much else..) Trying to convince other people that no, I wasn't crazy, trying to explain to auxlangers that, yes, doing this for fun was a valid use of my time. The Tolkienian sensation of having sixteen drafts of a dictionary and no longer knowing which version is correct. What's not in David's book, but... Telling your lover in her or your own language that you love her, and writing erotic poetry in that language, too. Marrying at the town hall wearing t-shirts printed with indecent texts in different conlangs, each white front with black letters shouting defiance at the frock-coated marriage registrar. (I don't believe in civil marriage.)

Reading the book made me realize that, of course, the internet has changed what it means to be a conlanger. We started out with literally stenciled fanzines, swapping fanzine for fanzine, moving on to actual copiers. Quietly not telling my Nepali/Hayu/Dumi/Limbu/comparative linguistics teacher what I was actually assembling the library of Cambridge books on Language (the red and green series!) for.

Linguistically, David's book doesn't have much to offer me, of course. I adapted Mark Rosenfelder's Perl scripts to create a diachronically logical system of sound changes so I could generate the Barushlani vocabulary. I know, or maybe, knew, about phonology, morphology, syntax and semantics. I made my first fonts with Corel Draw in the early nineties. I had to hack around to get IPA into Word 2. But it was a fun read, and brought back some good memories.

And also some pet peeves... Dothraki! I'm not a Game of Thrones fan; I long for a nice, fun, cosy fantasy series where not everyone wants to kill, rape and enslave everyone else. I found the books unreadable and the television series unwatchable. And... Dothraki. David explains how he used the words and names the author had sprinkled around the text to base the language on. Good job on his side. But those words! Martin's concept of "exotic language" basically boils down to "India is pretty exotic!" It reads like the random gleanings from the Linguistic Survey of India, or rather, those stories from the Boy's Own Library that deal with Hindoostan. Which is, no doubt, where the 'double' vowels come from. Kaheera's ee is the same ee as in Victorian spellings of baksheesh and so on. Harumph.

BUT if the connection with television series helps sell this book and get more people having fun conlanging, then it's all worth it! I'm going to see if I can revive that perl script, and maybe do some nice language for the people living in the lowlands west of the mountain range that shelters Broi, the capital of Emperor Rordal, or maybe finally do something about Vustlani, the language of his wife, Chazalla.

Let's go back to Yidenir, doing the laundry with poor disfigured Tsoy... Tsoy wants to sing!

"Yidenir, ngaimyibge?" Another fierce scowl.

"What did you say? -- do I sing? Er..." Yidenir was silent for a moment. Was this girl making fun of her? Or was she just trying to be friendly?

"Sadrabam aimyibgyi ingyot. Aimyibgyi ruysing ho," Tsoy explained patiently.

"Er, singing, is good, er allowed? when doing laundry? Oh, yes, I can sing... Denden only, is that all right? Er, aimyipkyi denden?"


"All right, then... Teach you a bit of Denden, too? Ngsahe Denden bingyop?" Yidenir offered.

Call to translators and testers

We plan to release Stellarium 0.14.0 in the final week of October.

There are many new strings to translate in this release because we have many changes in sky cultures, landscapes and in the application. If you can assist with translation to any of the 134 languages which Stellarium supports, please go to Launchpad Translations and help us out: https://translations.launchpad.net/stellarium

Testing of the new features will also be very helpful in preparing the release.

Thank you!

October 01, 2015

Lunar eclipse animations

[Eclipsed moon rising] The lunar eclipse on Sunday was gorgeous. The moon rose already in eclipse, and was high in the sky by the time totality turned the moon a nice satisfying deep red.

I took my usual slipshod approach to astrophotography. I had my 90mm f/5.6 Maksutov lens set up on the patio with the camera attached, and I made a shot whenever it seemed like things had changed significantly, adjusting the exposure if the review image looked like it might be under- or overexposed, occasionally attempting to refocus. The rest of the time I spent socializing with friends, trading views through other telescopes and binoculars, and enjoying an apple tart a la mode.

So the images I ended up with aren't all they could be -- not as sharply focused as I'd like (I never have figured out a good way of focusing the Rebel on astronomy images) and rather grainy.

Still, I took enough images to be able to put together a couple of animations: one of the lovely moonrise over the mountains, and one of the sequence of the eclipse through totality.

Since the 90mm Mak was on a fixed tripod, the moon drifted through the field and I had to adjust it periodically as it drifted out. So the main trick to making animations was aligning all the moon images. I haven't found an automated way of doing that, alas, but I did come up with some useful GIMP techniques, which I'm in the process of writing up as a tutorial.

Once I got the images all aligned as layers in a GIMP image, I saved them as an animated GIF -- and immediately discovered that the color error you get when converting to an indexed GIF image loses all the beauty of those red colors. Ick!

So instead, I wrote a little Javascript animation function that loads images one by one at fixed intervals. That worked a lot better than the GIF animation, plus it lets me add a Start/Stop button.

You can view the animations (or the source for the javascript animation function) here: Lunar eclipse animations

Secrets of Krita: the Third Krita Training DVD

Comics with Krita author Timothée Giet is back with his second training DVD: Secrets of Krita, a collection of videos containing 100 lessons about the most important things to know when using Krita. In 10 chapters, you will discover, with clear examples, all the essential and hidden features that make Krita so powerful and awesome! The data DVD is English-spoken with English subtitles.

Secrets of Krita – DVD (€29,95)

Secrets of Krita – Download (€29,95)

Table of Contents

04-Display Colors
06-Canvas-Only Mode
07-Canvas Input
08-Other Shortcuts
10-Advanced Color Selector

2-Generic Brush Settings
01-Popup Palette
02-Toolbar Shortcuts
03-Dirty Presets
05-Precision Setting
06-Soft Brush
07-Build-Up And Wash
08-Dynamic Settings
09-Lock Setting
10-Smoothing Mode

3-Specific Brush Settings
01-Pixel Brush: Color Dynamics
02-Pixel Brush: Pixel Art Presets
03-Color Smudge Brush: Overlay Mode
04-Sketch Brush: How It Works
05-Bristle Brush: Ink Depletion
06-Shape Brush: Speed And Displace
07-Spray Brush: Shapes And Dynamics
08-Hatching Brush: Hatching Options
09-Clone Brush: Shortcuts And Modes
10-Deform Brush: Deformation Modes

01-Background Modes
02-Layer Groups
03-Inherit Alpha
04-Erase Mode
05-Filter Layer And Mask
06-Layer Conversion
07-Split Alpha
08-Split Layer
09-File Layer
10-Layer Color Space

01-Selection Operations
03-Selection View
04-Global Selection Mask
05-Local Selection Mask
06-Selection Painting
07-Select Opaque
08-Contiguous Selection
09-Vector Selection
10-Convert Selection

01-Crop Tool
02-Pseudo-Infinite Canvas
03-Transform Tool
04-Transform Tool – Free Transform
05-Transform Tool – Perspective
06-Transform Tool – Warp
07-Transform Tool – Cage
08-Transform Tool – Liquify
09-Transform A Group
10-Recursive Transform

01-Assistant Magnetism
02-Vanishing Point
06-Concentric Ellipse
07-Parallel Ruler
09-Infinite Ruler
10-Fish Eye Point

01-Filter Presets
02-Dodge And Burn
04-Index Colors
05-Color To Alpha
06-Alpha Curve
07-Color Transfer
09-Layer Styles

9-Vector Tools
01-Vector Drawing
02-Vector Editing
03-Stacked Shapes
05-Stroke Shapes
07-Artistic Text
08-Multiline Text
09-Pattern Editing
10-Gradient Editing

01-Mirror View
02-Wrap Around Mode
03-Mirror Painting
05-Save Incremental
06-Save Group Layers
08-Task Sets
09-Color Selectors
10-Command Line

September 28, 2015

Interview with Anusha Bhanded


Could you tell us something about yourself?

My name is Ana, I live in India and I love doing digital art, at least as a beginner. I’m 13 years old.

Do you paint professionally, as a hobby artist, or both?

Well, I’m just a hobby artist, can’t say “artist” but I like art.

Whose work inspires you most — who are your role models as an  artist?

Actually my role model, not totally as an artist, is Scott Cawthon and team, the creator of FNAF. I like the art he did in the game and the effects. Also Markus Persson and team, the pixel artist.

What makes you choose digital over traditional painting?

As per my thinking I am kind of good at painting and art stuff, even my parents and friends say so, and I loved to do everything I could do on any gadget. I had this bored feeling with a pencil and a paper, so I started digital painting! It's fun to use Krita!


What do you love about Krita?

As I loved to do digital painting I surfed the internet for good apps. All of them were great but they were not free… well, I ended up with Krita! My first favourite thing about Krita is that it’s free! That’s good because there are so many young artists out there who deserve to use any free available programs as good as Krita. Krita has TONS of awesome brushes and you can use a variety of them!

How did you find out about Krita?

One day I was surfing on the internet for a good painting tool. Most people said paint tool SAI was the best. I even tried to download the cracked version but that did not work, and I ended up using an awesome program called Krita! =D

September 27, 2015

Make a series of contrasting colors with Python

[PyTopo with contrasting color track logs] Every now and then I need to create a series of contrasting colors. For instance, in my mapping app PyTopo, when displaying several track logs at once, I want them to be different colors so it's easy to tell which track is which.

Of course, I could make a list of five or ten different colors and cycle through the list. But I hate doing work that a computer could do for me.

Choosing random RGB (red, green and blue) values for the colors, though, doesn't work so well. Sometimes you end up getting two similar colors together. Other times, you get colors that just don't work well, because they're so light they look white, or so dark they look black, or so unsaturated they look like shades of grey.

What does work well is converting to the HSV color space: hue, saturation and value. Hue is a measure of the color -- that it's red, or blue, or yellow green, or orangeish, or a reddish purple. Saturation measures how intense the color is: is it a bright, vivid red or a washed-out red? Value tells you how light or dark it is: is it so pale it's almost white, so dark it's almost black, or somewhere in between? (A related model, called HSL, substitutes Lightness for Value, but the concept is similar.)

[GIMP color chooser] If you're not familiar with HSV, you can get a good feel for it by playing with GIMP's color chooser (which pops up when you click the black Foreground or white Background color swatch in GIMP's toolbox). The vertical rainbow bar selects Hue. Once you have a hue, dragging up or down in the square changes Saturation; dragging right or left changes Value. You can also change one at a time by dragging the H, S or V sliders at the upper right of the dialog.

Why does this matter? Because once you've chosen a saturation and value, or at least ensured that saturation is fairly high and value is somewhere in the middle of its range, you can cycle through hues and be assured that you'll get colors that are fairly different each time. If you had a red last time, this time it'll be a green, or yellow, or blue, depending on how much you change the hue.

How does this work programmatically?

PyTopo uses Python-GTK, so I need a function that takes a gtk.gdk.Color and chooses a new, contrasting Color. Fortunately, gtk.gdk.Color already has hue, saturation and value built in. Color.hue is a floating-point number between 0 and 1, so I just have to choose how much to jump. Like this:

def contrasting_color(color):
    '''Returns a gtk.gdk.Color of similar saturation and value
       to the color passed in, but a contrasting hue.
       gtk.gdk.Color objects have a hue between 0 and 1.
    '''
    if not color:
        return self.first_track_color

    # How much to jump in hue:
    jump = .37

    return gtk.gdk.color_from_hsv(color.hue + jump,
                                  color.saturation,
                                  color.value)

What if you're not using Python-GTK?

No problem. The first time I used this technique, I was generating Javascript code for a company's analytics web page. Python's colorsys module works fine for converting red, green, blue triples to HSV (or a variety of other colorspaces) which you can then use in whatever graphics package you prefer.
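
As a sketch of how that looks with nothing but the standard library (the function name and the 0.37 hue jump here are my own choices, mirroring the GTK version above):

```python
import colorsys

def contrasting_rgb(r, g, b, jump=0.37):
    """Given an RGB color (components between 0 and 1), return an
       RGB color with the same saturation and value but a
       contrasting hue."""
    h, s, v = colorsys.rgb_to_hsv(r, g, b)
    # Jump the hue and wrap around the color wheel:
    return colorsys.hsv_to_rgb((h + jump) % 1.0, s, v)
```

Starting from pure red and calling this repeatedly walks around the color wheel in 0.37-sized steps, giving a vivid green, then a blue, and so on, with saturation and value untouched.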

September 25, 2015

Philips Wireless, modernised

I've wanted a stand-alone radio in my office for a long time. I've been using a small portable radio, but it ate batteries quickly (probably a 4-pack of AAs for a bit less than a work week's worth of listening), changing stations was cumbersome (hello, FM dials) and the speaker was a bit teeny.

A couple of years back, I had a Raspberry Pi-based computer on pre-order (the Kano, highly recommended for kids, and beginners) through a crowd-funding site. So I scoured « brocantes » (imagine a mix of car boot sale and antiques fair, in France, with people emptying their attics) in search of a shell for my small computer. A whole lot of nothing until my wife came back from a week-end at a friend's with this:

Photo from Radio Historia

A Philips Octode Super 522A, from 1934, when SKUs were as superlative-laden and impenetrable as they are today.

Let's DIY

I started by removing the internal parts of the radio, without actually turning it on. When you get such old electronics, they need to be checked thoroughly before being plugged in, and as I know nothing about tube radios, I preferred not to. And FM didn't exist when this came out, so I'm not sure what I would have been able to do with it anyway.

Roomy, and dirty. The original speaker was removed, the front buttons didn't have anything holding them any more, and the nice backlit screen went away as well.

To replace the speaker, I went through quite a lot of research, looking for speakers that were meant to be embedded, rather than getting a speaker in a box that I would need to extricate from its container. Visaton makes speakers that can be integrated into ceilings, vehicles, etc. That also allowed me to choose one that had a good enough range, and would fit into the one hole in my case.

To replace the screen, I settled on an OLED screen that I knew would work without too much effort with the Raspberry Pi, a small AdaFruit SSD1306. A small amount of soldering that was up to my level of skill.

It worked, it worked!

Hey, soldering is easy. So because of the size of the speaker I selected, and the output power of the RPi, I needed an amp. The Velleman MK190 kit was cheap (€10), and should just about work with the 5V USB power supply I planned to use. Except that the schematics are really not good enough for an electronics beginner. I spent a couple of afternoons verifying, checking the Internet for alternate instructions, and re-doing the solder points, to no avail.

'Sup Tiga!

So much wasted time, and I ended up getting a cheap car amp with a power supply. You can probably find cheaper.

Finally, I got another Raspberry Pi, and SD card, so that the Kano, with its super wireless keyboard, could find a better home (it went to my godson, who seemed to enjoy the early game of Pong, and being a wizard).

Putting it all together

We'll need to hold everything together. I got a bit of help from somebody with a Dremel tool for the piece of wood that holds the speaker, and another piece that sticks three stove bolts out of the front, to hold the original tuning, mode and volume buttons.

A real joiner

I fast-forwarded the machine by a couple of years with a « Philips » figure-of-8 plug at the back, so the machine's electrics would be well separated from the outside.

Screws into the side panel for the amp, blu-tack to hold the OLED screen for now, RPi on a few leftover bits of wood.


My first attempt at getting something that I could control on this small computer was lcdgrilo. Unfortunately, I would have had to write a Web UI for it (remember, my buttons are just stuck on, for now at least), and probably port the SSD1306 OLED screen's driver from Python, so not a good fit.

There's no proper Fedora support for Raspberry Pis, and while one can use a nearly stock Debian with a few additional firmware files on Raspberry Pis, Fedora chose not to support that slightly older SoC at all, which is obviously disappointing for somebody working on Fedora as a day job.

Looking at other radio retrofits (there are plenty of quality ones on the Internet) and at various connected-speaker backends, I found PiMusicBox. It's a Debian variant with Mopidy built in, and a very easy initial setup: edit a settings file on the SD card image, boot, and access the interface via a browser. Tada!

Once I had tested playback, I lowered the amp's volume to nearly zero, raised the web UI's volume to the maximum, and then raised the amp's volume to the maximum bearable for the speaker. As I won't be able to access the amp's dial, volume control will be software-only.

Wrapping up

I probably spent a longer time looking for software and hardware than actually making my connected radio, but it was an enjoyable couple of afternoons of work, and the software side isn't quite finished.

First, in terms of hardware support, I'll need to make this OLED screen work, how lazy of me. The audio setup currently uses just the right speaker, as I'd like both the radio and AirPlay streams to be downmixed.

Secondly, Mopidy supports plugins to extend its sources and uses GStreamer, so it would be a good fit for Grilo, making it easier for Mopidy users to extend it through Lua.

Do note that the Raspberry Pi I used is a B+ model. For B models, it's recommended to use a separate DAC because of the bad audio quality, even if the B+ isn't that much better. Testing out the HDMI output with an HDMI-to-VGA+jack adapter might be a way to cut costs as well.

Possible improvements could include making the front-facing dials work (that's going to be a tough one), or adding RFID support, so I can wave items in front of it to turn it off, or play a particular radio.

In all, this radio cost me:
- 10 € for the radio case itself
- 36.50 € for the Raspberry Pi and SD card (I already had spare power supplies, and a supported Wi-Fi dongle)
- 26.50 € for the OLED screen plus various cables
- 20 € for the speaker
- 18 € for the amp
- 21 € for various cables, bolts, planks of wood, etc.

I might also count the 14 € for the soldering iron, the 10 € for the Velleman amp, and about 10 € for adapters, cables, and supplies I didn't end up using.

So between 130 and 150 €, and a number of afternoons, but at the end, a very flexible piece of hardware that didn't really stretch my miniaturisation skills, and a completely unique piece of furniture.

In the future, I plan on playing with making my own 3-button keyboard, and making a remote speaker to plug in the living room's 5.1 amp with a C.H.I.P computer.

Happy hacking!

September 24, 2015

Done Porting!

Technically, we're done porting Krita to Qt5 and KDE Frameworks 5. That is to say, everything builds, links and Krita runs, and there are no dependencies on deprecated libraries or classes anymore. In the process, the majority of Calligra's libraries and plugins were also ported. It was not an easy process, and if there hadn't been sponsorship available for the porting work, it would not have happened. Not yet, in any case. It's something I've heard from other KDE project maintainers, too: without sponsorship to work on the port full-time, projects might have died.

Krita wouldn't have died, but looking back at the previous month's work, I wonder how I didn't go crazy, in a loud way. I spent four, five days a week on porting, and fixing the porting documentation, and then one or two days on trying to keep the bug count down for the 2.9 branch. As Kalle noted, porting isn't hard: it's very mechanical work that, despite all the scripts, still needs to be done by a human, one who can make judgement calls -- and one who isn't afraid of making mistakes. Lots of mistakes: it's unavoidable. Most of them seem to be fixed now, though. It's like running a racecourse in blinkers.

So, what were the hardest problems?

The winners, ex aequo, are KStandardDirs to QStandardPaths and KUrl to QUrl.

The latter is weird because, actually, we shouldn't be using QUrl at all. The reason KUrl was used in KOffice, now Calligra, is for handling network-transparent file access. That's something I do use in Kate or KWrite when writing blogs (my blog system is a bunch of nineties Perl scripts), but which I am sure not a single Krita user is actually using. It's too slow and dangerous with big files to depend on, and it has been superseded by Dropbox, OneDrive, Google Drive, ownCloud and the rise of the cheap NAS. Not to mention that only Plasma Desktop users have access to it, because on all other platforms we use native file dialogs, which don't give access to remote locations. All the QUrls we use get created from local files and end up being translated to local filenames.

KStandardDirs is more interesting. KStandardDirs actually was two things in one: a way to figure out the paths where the system and the application can store stuff like configuration files, and a way to build a kind of resources database. You'd define a resource type, say "brush", and add a bunch of locations where brushes can be found. For instance, Krita looks for brushes in its own brushes folder, but also in the shared 'create project' brushes folder, and could even look in GIMP's brushes folder.

The resources part isn't part of QStandardPaths, but is used really heavily in Calligra. The central place where we load resources, KoResourceServer, just couldn't be ported to QStandardPaths: we'd have had to duplicate the code for every resource type. But there's no problem that cannot be solved with another layer of indirection and a lump of putty, so I created a KoResourcePaths class that can handle the resource aliases. I'm not totally convinced I've ironed out all the bugs, but Krita starts and all resources are being loaded.
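
The shape of that indirection layer is easy to illustrate. Here is a toy Python sketch of the resource-alias idea (the class and method names are made up for illustration; the real KoResourcePaths is C++ and does much more):

```python
class ResourcePaths:
    """Toy resource-alias registry: map a resource type such as
       'brush' to any number of search directories."""

    def __init__(self):
        self.aliases = {}

    def add_resource_dir(self, rtype, path):
        # Register one more directory for a resource type.
        self.aliases.setdefault(rtype, []).append(path)

    def resource_dirs(self, rtype):
        """All directories registered for a resource type, in the
           order they were added."""
        return list(self.aliases.get(rtype, []))
```

Krita would register its own brushes folder, then the shared 'create project' one, then GIMP's; every loader then asks for the 'brush' directories instead of hard-coding paths, which is what lets one loading routine serve every resource type.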

There were a few more classes that were deprecated, the foremost being KDialog. There are only a hundred or so places in Calligra where that class was used, and here the best solution seemed to be to fork KDialog into KoDialog. Problem solved -- and honestly, I don't see why the class had to be deprecated in the first place.

Now that all the basic porting has been done, it's time to figure out what is broken, and why. Here's a short list:

  • Loading icons. Right now, I need to patch out the use of the icon cache to load icons. But in any case I am still considering putting the icons in the executable as resources, because that makes packaging on Windows much easier.
  • Qt5's SVG loader had trouble with our svg icons; that was fixed by cleaning up the icons.
  • OpenGL was a huge task, needing a nearly complete rewrite -- it works now on my development system, but I'm not sure about others.
  • Qt5's tablet support is much better, but now that we use that instead of our own tablet support, we've lost some performance (work is ongoing) and some things have changed meaning, which means that the scratchpad and the popup palette are broken for tablet users.
  • In general, the user interface feels sluggish: even things like the preferences dialog's widgets are slow to react.

And when all that is fixed, there is more to do: make new build environments for Windows (hopefully we can start using MSVC 2015 now) and OSX, see about dropping superfluous dependencies on things like DBus, and then...

Testing, testing and testing!

But I am getting confident that Krita 3.0 could be something we can let people try and test this year. And here is, for your delectation, a delectable screenshot:

Vote in the Kiki Drawing Challenge Contest

This month, we ran a special edition of the monthly drawing contest: draw Kiki, and get your work on a Kickstarter t-shirt! The contest has now drawn to a close, and it’s time to vote.

So… Vote for your favorite Kiki!

Here, as a teaser, is Tyson Tan’s entry, entered hors concours:


(Which, incidentally, also is going to be the splash screen for the next release.)

September 23, 2015

GNOME 3.18, here we go

As I'm known to do, here's a focus on the little things I worked on during the just-released GNOME 3.18 development cycle.

Hardware support

The accelerometer support in GNOME now uses iio-sensor-proxy. This daemon also now supports ambient light sensors, which Richard used to implement the automatic brightness adjustment, and compasses, which are used in GeoClue and gnome-maps.

In kernel-land, I've fixed the detection of some Bosch accelerometers, and added support for another Kionix one, as used in some tablets.

I've also added quirks for out-of-the-box touchscreen support on some cheaper tablets using the goodix driver, and started reviewing a number of patches for that same touchscreen.

With Larry Finger, of Realtek kernel drivers fame, we've carried on cleaning up the Realtek 8723BS driver used in the majority of Windows-compatible tablets, in the Endless computer, and even in the $9 C.H.I.P. Linux computer.

Bluetooth UI changes

The Bluetooth panel now has better « empty states », explaining how to get Bluetooth working again when a hardware killswitch is used, or it's been turned off by hand. We've also made receiving files through OBEX Push easier, and builtin to the Bluetooth panel, so that you won't forget to turn it off when done, and won't have trouble finding it, as is the case for settings that aren't used often.


GNOME Videos has seen some work, mostly in the stabilisation and bug-fixing department; most of those fixes also landed in the 3.16 version.

We've also been laying the groundwork in grilo for writing ever less code in C for plugin sources. Grilo Lua plugins can now use gnome-online-accounts to access keys for specific accounts, which we've used to re-implement the Pocket videos plugin, as well as the Last.fm cover art plugin.

All those changes should allow implementing OwnCloud support in gnome-music in GNOME 3.20.

My favourite GNOME 3.18 features

You can call them features, or bug fixes, but the overall improvements in the Wayland and touchpad/touchscreen support are pretty exciting. Do try it out when you get a GNOME 3.18 installation, and file bugs: it's coming soon!

Talking of bug fixes, this one means that I don't need to put in my password by hand when I want to access work related resources. Connect to the VPN, and I'm authenticated to Kerberos.

I've also got a particular attachment to the GeoClue GPS support through phones. This allows us to have more accurate geolocation support than any other desktop environment around.

A few for later

The LibreOfficeKit support that will be coming to gnome-documents will help us get support for EPUBs in gnome-books, as it will make it easier to plug in previewers other than the Evince widget.

Victor Toso has also been working through my Grilo bugs to allow us to implement a preview page when opening videos. Work has already started on that, so fingers crossed for GNOME 3.20!

Pirituba services center

This is a project we did for a small services center in São Paulo. The ground floor hosts three big stores, with different ceiling heights depending on their position on the site and their entrance level, and there are two upper floors of open-plan office space, dividable according to future needs. This project is...

September 22, 2015

Average Book Covers and a New (official) GIMP Website (maybe)

A little while back I had a big streak of averaging anything I could get my hands on. I am still working on a couple of larger averaging projects (here's a small sneak peek - guess the movie?):

I'm trying out visualizing a movie by mean-averaging all of its cuts. Turns out movies have way more cuts than I thought - so it might be a while until I finish this one... :)

On the other hand, here's something neat that was recently finished...

JungleBook: Simple Kindle Ebook Cover Analysis

Jason van Gumster just posted this morning about a neat project he'd been toying with that is along similar lines to the Netflix Top 50 Covers by Genre, but takes it to a deeper level. He's written code to average the top 50 ebook covers on Amazon by genre:

Top 50 Kindle Covers by Jason van Gumster

By itself this is really pretty (to me - not sure if anyone else likes these things as much as I do), but Jason takes it further by providing some analysis and commentary on the resulting images in the context of ebook sales and visual popularity.

I highly recommend you visit Jason's post and read the whole thing (it's not too long). It's really neat!

The GIMP Website

I had this note on my to-do list for ages to tinker with the GIMP website. I finally got off my butt and started a couple of weeks ago. I did a quick mockup to get a feel for the overall direction I wanted to head:

I've been hacking at it for a couple of weeks now and I kind of like how it's turning out. I'm still in the process of migrating old site content and making sure that legacy URIs aren't going to change. It may end up being a new site for GIMP. It also may not, so please don't hold your breath... :)

Here's where I am at the moment for a front page:

static GIMP page

Yes, that image is a link. The link will lead you to the page as I build it: http://static.gimp.org. See? It's like a prize for people who bother to read to the end! Feel free to hit me up with ideas or if you want to donate any artwork for the new page while I build it. I can't promise that I'll use anything anyone sends me, but if I do I will be sure to properly attribute! (Please consider a permissive license if you decide to send me something).

September 21, 2015

The meaning of "fetid"; Albireo; and musings on variations in sensory perception

[Fetid marigold, which actually smells wonderfully minty] The street for a substantial radius around my mailbox has a wonderful, strong minty smell. The smell is coming from a clump of modest little yellow flowers.

They're apparently Dyssodia papposa, whose common name is "fetid marigold". It's in the sunflower family, Asteraceae, not related to Lamiaceae, the mints.

"Fetid", of course, means "Having an offensive smell; stinking". When I google for fetid marigold, I find quotes like "This plant is so abundant, and exhales an odor so unpleasant as to sicken the traveler over the western prairies of Illinois, in autumn." And nobody says it smells like mint -- at least, googling for the plant and "mint" or "minty" gets nothing.

But Dave and I both find the smell very minty and pleasant, and so do most of the other local people I queried. What's going on?

[Fetid goosefoot] Another local plant which turns strikingly red in autumn has an even worse name: fetid goosefoot. On a recent hike, several of us made a point of smelling it. Sure enough: everybody except one found it minty and pleasant. But one person on the hike said "Eeeeew!"

It's amazing how people's sensory perception can vary. Everybody knows how people's taste varies: some people perceive broccoli and cabbage as bitter while others love the taste. Some people can't taste lobster and crab at all and find Parmesan cheese unpleasant.

And then there's color vision. Every amateur astronomer who's worked public star parties knows about Albireo. Also known as beta Cygni, Albireo is the head of the constellation of the swan, or the foot of the Northern Cross. In a telescope, it's a double star, and a special type of double: what's known as a "color double", two stars which are very different colors from each other.

Most non-astronomers probably don't think of stars having colors. Mostly, color isn't obvious when you're looking at things at night: you're using your rods, the cells in your retina that are sensitive to dim light, not your cones, which provide color vision but need a fair amount of light to work right.

But when you have two things right next to each other that are different colors, the contrast becomes more obvious. Sort of.

[Albireo, from Jefffisher10 on Wikimedia Commons] Point a telescope at Albireo at a public star party and ask the next ten people what two colors they see. You'll get at least six, more likely eight, different answers. I've heard blue and red, blue and gold, red and gold, red and white, pink and blue ... and white and white (some people can't see the colors at all).

Officially, the bright component is actually a close binary, too close to resolve as separate stars. The components are Aa (magnitude 3.18, spectral type K2II) and Ac (magnitude 5.82, spectral type B8). (There doesn't seem to be an Albireo Ab.) Officially that makes Albireo A's combined color yellow or amber. The dimmer component, Albireo B, is magnitude 5.09 and spectral type B8Ve: officially it's blue.

But that doesn't make the rest of the observers wrong. Color vision is a funny thing, and it's a lot more individual than most people think. Especially in dim light, at the limits of perception. I'm sure I'll continue to ask that question when I show Albireo in my telescope, fascinated with the range of answers.

In case you're wondering, I see Albireo's components as salmon-pink and pale blue. I enjoy broccoli and lobster but find bell peppers bitter. And I love the minty smell of plants that a few people, apparently, find "fetid".

WebKitGTK+ 2.10

HTTP Disk Cache

WebKitGTK+ already had an HTTP disk cache implementation, simply using SoupCache, but Apple introduced a new cross-platform implementation to WebKit (just a few bits needed a platform specific implementation), so we decided to switch to it. This new cache has a lot of advantages over the SoupCache approach:

  • It’s fully integrated in the WebKit loading process, sharing some logic with the memory cache too.
  • It’s more efficient in terms of speed (the cache is in the Network Process, but only the file descriptor is sent to the Web Process, which mmaps the file) and disk usage (resource body and headers are stored in separate files on disk, using hard links for the body so that different resources with exactly the same contents are only stored once).
  • It’s also more robust thanks to the lack of an index. The synchronization between the index and the actual contents has always been a headache in SoupCache, with many resources leaked on disk, resources that are cached twice, etc.
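The hard-link deduplication mentioned above can be illustrated with a toy C++17 sketch (my own illustration using std::filesystem, not WebKit's actual cache code; the file names and function are made up): two cache entries with identical bodies end up pointing at the same on-disk data, so the bytes are stored only once.

```cpp
#include <filesystem>
#include <fstream>
#include <string>

namespace fs = std::filesystem;

// Toy sketch: store a resource body on disk; if an entry with the same body
// already exists, create a hard link instead of writing the bytes again.
void storeBody(const fs::path& entry, const std::string& body,
               const fs::path& existingWithSameBody = {}) {
    if (!existingWithSameBody.empty()) {
        // Identical contents already stored: link to them, no extra disk use.
        fs::create_hard_link(existingWithSameBody, entry);
        return;
    }
    std::ofstream out(entry, std::ios::binary);
    out << body;  // first occurrence: actually write the bytes
}
```

After linking, both directory entries share one inode, which is why the body files can be deduplicated without any index tracking which entries are copies of which.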

The new disk cache is only used by the Network Process, so when using the shared secondary process model, SoupCache will still be used in the Web Process.

New inspector UI

The Web Inspector UI has been redesigned; you can see some of the differences in this screenshot:


For more details, see this post on the Safari blog.


IndexedDB support

This was one of the few regressions we still had compared to WebKit1. When we switched to WebKit2 we lost IndexedDB support, but it’s now back in 2.10. It uses its own new process, the DatabaseProcess, to perform all database operations.


Performance

WebKitGTK+ 2.8 improved the overall performance thanks to the use of the bmalloc memory allocator. In 2.10 the overall performance has improved again, this time thanks to a new implementation of the locking primitives. All uses of mutex/condition have been replaced by a new implementation. You can see more details in the email Filip sent to webkit-dev, or in the very detailed commit messages.

Screen Saver inhibitor

It’s more and more common to use the web browser to watch large videos in fullscreen mode, and quite annoying when the screen saver decides to “save” your screen every x minutes during the whole video. WebKitGTK+ 2.10 uses the Freedesktop.org ScreenSaver DBus service to inhibit the screen saver while a video is playing in fullscreen mode.

Font matching for strong aliases

WebKit’s font matching algorithm has improved, and now allows replacing fonts with metric-compatible equivalents. For example, sites that specify Arial will now get Liberation Sans, rather than your system’s default sans font (usually DejaVu). This makes text appear better on many pages, since some fonts require more space than others. The new algorithm is based on code from Skia that we expect will be used by Chrome in the future.

Improved image quality when using newer versions of cairo/pixman

The poor downscaling quality of cairo/pixman is a well-known issue that was finally fixed in Cairo 1.14; however, we were not taking advantage of it in WebKit, even when using a recent enough version of cairo. The reason is that we were using the CAIRO_FILTER_BILINEAR filter, which was not affected by the cairo changes. So we switched to CAIRO_FILTER_GOOD, which uses the BILINEAR filter in previous versions of Cairo (keeping backwards compatibility), and a box filter for downscaling in newer versions. This drastically improves the image quality of downscaled images with minimal impact on performance.


Editor API

The lack of editor capabilities from the API point of view was blocking the migration to WebKit2 for some applications like Evolution. In 2.10 we have started to add the required API to ensure not only that the migration is possible for any application using a WebView in editable mode, but also that it will be more convenient to use.

So, for example, to monitor the state of the editor associated with a WebView, 2.10 provides a new class, WebKitEditorState, which for now allows you to monitor the typing attributes. With WebKit1 you had to connect to the selection-changed signal and use the DOM bindings API to manually query the typing attributes. This is quite useful for updating the state of the editing buttons in an editor toolbar, for example: you just need to connect to WebKitEditorState::notify::typing-attributes and update the UI accordingly. For now the typing attributes are the only thing you can monitor from the UI process API, but we will add more information when needed, like the current cursor position, for example.

Having WebKitEditorState doesn’t mean we don’t need a selection-changed signal that we can monitor to query the DOM ourselves. But since in WebKit2 the DOM lives in the Web Process, the selection-changed signal has been added to the Web Extensions API. A new class, WebKitWebEditor, has been added to represent the web editor associated with a WebKitWebPage, and can be obtained with webkit_web_page_get_editor(). It is this new class that provides the selection-changed signal, so you can connect to the signal and use the DOM API the same way it was done in WebKit1.

Some of the editor commands require an argument; for example, the command to insert an image requires the image source URL. But both the WebKit1 and WebKit2 APIs only provided methods to run editor commands without any argument. This means that, once again, to implement something like insert-image or insert-link, you had to use the DOM bindings to create and insert the new elements in the correct place. WebKitGTK+ 2.10 provides webkit_web_view_execute_editing_command_with_argument() to make this a lot more convenient.

You can test all these features using the new editor mode of MiniBrowser: simply run it with the -e command-line option and no arguments.


Website data

When browsing the web, websites are allowed to store data on the client side. It could be a cache, like the HTTP disk cache, or data required by web features like offline applications, local storage, IndexedDB, WebSQL, etc. All that data is currently stored in different directories, and not all of those could previously be configured by the user. The new WebKitWebsiteDataManager class in 2.10 allows you to configure all those directories, either using a common base cache/data directory or providing a specific directory for every kind of data stored. It’s not mandatory to use it, though; the default values are compatible with the ones previously used.

This gives the user more control over the browsing data stored on the client side, but in future versions we plan to add support for actually handling the data, so that you will be able to query and delete the data stored by a particular security domain.

Web Processes limit

WebKitGTK+ currently supports two process models: the single shared secondary process, and multiple secondary processes. When using the latter, a new web process is created for every new web view. When there are a lot of web views created at the same time, the resources required to create all those processes could be too much for some systems. To improve that a bit, 2.10 adds webkit_web_context_set_web_process_count_limit(), to set the maximum number of web processes that can be created at the same time.

This new API can also be used to implement a slightly different version of the shared single process model. By using the multiple secondary process model with a limit of 1 web process, you still have a single shared web process, but using the multi-process mechanism, which means that networking will happen in the Network Process, among other things. So, if you use the shared secondary process model in your application, unless your application only loads local resources, we recommend you switch to the multiple process model and use the limit, to benefit from all the Network Process features, like the new disk cache. Epiphany already does this for the secondary process model and web apps.

Missing media plugins installation permission request

When you try to play media, and the media backend doesn’t find the plugins/codecs required to play it, the missing-plugin installation mechanism starts the package installer to allow the user to find and install the required plugins/codecs. This used to happen in the Web Process, without any way for the user to avoid it. WebKitGTK+ 2.10 provides a new WebKitPermissionRequest implementation that allows the user to block the request and prevent the installer from being invoked.

September 19, 2015

Danit Peleg – 3D Printing a Fashion Collection



By: Danit Peleg

In September 2014 I started working on my graduate collection for my Fashion Design degree at Shenkar.  This year, I decided to work with 3D printing, which I barely knew anything about. I wanted to check if it’d be possible to create an entire garment using technology accessible to anyone. So I embarked on my 3D printing journey, without really knowing what the end result would be.

The first piece I focused on was the "LIBERTE" jacket. I modeled the jacket using software called Blender and produced 3D files; I could now start to experiment with different materials and printers.

Together with the amazing teams at TechFactoryPlus and XLN, we experimented with different printers (Makerbot, Prusa, and finally Witbox) and materials (e.g. PLA, soft PLA).

The breakthrough came when I was introduced to FilaFlex, which is a new kind of filament; it's strong, yet very flexible. Using FilaFlex and the Witbox printer, I was finally able to print my red jacket.

Once I figured out how to print textiles, I was on my way to creating a full collection. It would take more than 2,000 hours to print (every A4-sized sheet of textile took at least 20 hours to print), so I had to step up my printer game to a full-fledged “3D-printing farm” at home.
I would like to thank Yaniv Gershony, a 3D designer who volunteered to help me throughout the past 9 months. He took my designs and transformed them into 3D models. He's extremely talented and an expert Blender user. Here's his website: https://yanivg.carbonmade.com/

And we used the following Blender add-ons during the process: Export paper model, Sverchok, Mesh lint, Booltool.



September 17, 2015

Portrait Lighting Cheat Sheets

Blender to the Rescue!

Many moons ago I had written about acquiring a YN-560 speedlight for playing around with off-camera lighting. At the time I wanted to experiment with how different modifiers might be used in a portrait setting. Unfortunately, these were lighting modifiers that I didn’t own yet.

I wasn’t going to let that slow me down, though!

If you want to skip the how and why to get straight to the cheat sheets, click here.

Infinite Realities had released a full 3D scan by Lee Perry-Smith of his head that was graciously licensed under a Creative Commons Attribution 3.0 Unported License. For reference, here is a link to the object file and textures (80MB) and the displacement maps (65MB) from the Infinite Realities website.

What I did was bring the high-resolution scan and displacement maps into Blender and manually create my lights with modifiers in a virtual space. Then I could simply render what a particular light/modifier combination would look like with a realistic person, lit any way I wanted.

Blender View Lighting Setup

This allows all sorts of neat freedom to experiment with things to see how they might come out. Here's another look at the lede image:

Blender Lighting Samples: various lighting setups tested in Blender.

I had originally intended to make a nice bundled application that would allow someone to try all sorts of different lighting setups, but my skills in Blender only go so far. My skills at convincing others to help me didn't go very far either. :)

So, if you're OK with navigating around Blender already, feel free to check out my original blog post to download the .blend file and give it a try! Jimmy Gunawan even took it further and modified the .blend to work with Blender's Cycles renderer as well.

With the power to create a lighting visualization of any scenario, I then had to see if there was something cool I could make for others to use…

The Lighting Cheat Sheets

I couldn’t help but generate some lighting cheat sheets to help others use as a reference. I’ve seen some different ones around but I took advantage of having the most patient model in the world to do this with. :)

These were generated by rotating a 20” (virtual) softbox in a circle around the subject at 3 different elevations (0°, 30°, and 60°).
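If you want to reproduce the setup, the softbox positions are just points on a sphere around the subject; here is a small sketch of that geometry (my own illustration, not code extracted from the .blend file):

```cpp
#include <cmath>

struct Vec3 { double x, y, z; };

// Position of a light orbiting a subject at the origin: azimuthDeg is the
// angle around the subject (stepped through 0-360 for the cheat sheets),
// elevationDeg is the height angle (0, 30, or 60 degrees here), and radius
// is the distance from the light to the subject.
Vec3 lightPosition(double azimuthDeg, double elevationDeg, double radius) {
    const double kPi = 3.14159265358979323846;
    const double az = azimuthDeg * kPi / 180.0;
    const double el = elevationDeg * kPi / 180.0;
    return { radius * std::cos(el) * std::cos(az),   // toward/away from camera axis
             radius * std::cos(el) * std::sin(az),   // left/right of subject
             radius * std::sin(el) };                // height above subject
}
```

Stepping the azimuth around the full circle at each of the three elevations gives exactly the grid of positions the sheets below were rendered from.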

Click the caption title for a link to the full resolution files:

Softbox 0° Portrait Lighting Cheat Sheet Reference
by Pat David (CC BY-SA)
Softbox 30° Portrait Lighting Cheat Sheet Reference
by Pat David (CC BY-SA)
Softbox 60° Portrait Lighting Cheat Sheet Reference
by Pat David (CC BY-SA)

Hopefully these might prove useful as a reference for some folks. Share them, print them out, tape them to your lighting setups! :) I wonder if we could get some cool folks from the community to make something neat with them?

Image processing made easier with a powerful math expression evaluator.

Warning: This post contains personal thoughts about my research work in image processing. I'll discuss some of the issues I'm facing as an active developer of two open-source image processing frameworks (namely CImg and G'MIC), so keep in mind this will be a bit self-centered. There's a high chance you'll find all this really boring if you are not a developer of image processing software yourself (and even if so). Anyhow, feel free to give your impressions after reading!

1. Context and issues

In imaging science, image processing is processing of images using mathematical operations by using any form of signal processing for which the input is an image, such as a photograph or video frame.

That's what Wikipedia says about image processing. Selecting and ordering those mathematical operations is what actually defines algorithms, and implementing ready-to-use and interesting image processing algorithms is actually one of my goals, as well as making them available to interested users afterwards.

After all those years (>10) as a researcher in the image processing field (and like most of my colleagues), I can say I've already implemented a lot of these different algorithms, mostly in C++ as far as I'm concerned. To be more precise, an important part of my work is even to design (and hopefully publish) my own image processing methods. Most of the time, of course, my trials end up as clunky, ineffective and slow operators that teach me nothing more than that the approach is not good enough to be followed. Someone who says everything he tries works right the first time is a liar. Step by step, I try to refine/optimize my prototypes, or sometimes even take a completely different direction. Quickly, you realize that it is crucial in this job not to waste time when doing algorithm prototyping, because the rate of success is in fact very low.


Don’t waste your time, in any occasion! (photo by (OvO), under CC-by-nc-sa.)

That's actually one of the reasons why I started the G'MIC project. It was primarily designed as a helper to create and run custom image processing pipelines quickly (from the shell, basically). It saves me time, every day. But the world of image processing algorithms is broad, and sometimes you need to experiment with very low-level routines working at a pixel scale, trying such weird and unexpected stuff that none of the "usual" image processing algorithms you already have in your toolbox can be used as-is. Or an existing algorithm gets used in such a diverted way that it becomes hard to even think about applying it adequately. In a word, your pixel-level algorithm won't be expressed as a simple pipeline (or graph, if you want to call it so) of macro-scale image processing operators. That's the case, for instance, with most of the well-known patch-based image processing algorithms (e.g. Non-Local Means, or PatchMatch and its many variants), where each pixel value of the resulting image is computed from (a lot of) other pixel values whose spatial locations are sometimes not evenly distributed (but not random either!).

Until now, when I was trying to implement this kind of algorithm, I was resigned to going back to coding in C++: it is one language I feel comfortable with, and I'm sure it will run fast enough most of the time. Indeed, computation time is often a bottleneck in image processing. Some of my colleagues are using scripting languages such as Matlab or Python for algorithm prototyping. But they often need some tricks to avoid writing explicit code loops, or need to write at least some fast C/C++ modules that will be compiled and run from those higher-level interfaces, to ensure they get something fast enough (even for prototyping; I'm definitely not talking about optimized production code here!).


But I'm not really satisfied with my C++ solution: generally, I end up with several small pieces of C++ source I need to compile and maintain. I can hardly re-use them in a bigger pipeline, or redistribute them as clean packages, without a lot of extra work, because they are just intended to be prototypes: they often have only basic command-line interfaces and thus cannot be directly integrated into bigger, user-friendly image processing frameworks. Making a prototype algorithm really usable by others requires at least wrapping it as a plug-in/module for [..copy the name of your favorite image processing tool or language here..]. This generally represents a lot of boring coding work that may even require more time and effort than writing the algorithm itself! And I don't even talk about maintenance. If you've ever tried to maintain a 10-year-old C++ prototype code, lost in one of your sub-sub-sub-sub-folders in your $HOME, you know what I mean. I'd definitely prefer a simpler solution that lets me spend more time on writing the algorithm itself than on packaging it or making it usable. After all, the primary purpose of my work is to create cool algorithms, not really to code user interfaces for them. On the other hand, I am a scientist and I'm also happy to share my discoveries with users (and possibly get feedback from them!). How to make those prototyped algorithms finally usable, without spending too much time on making them usable? :)

Ideally, I'd like something that could nicely integrate into G'MIC (my favorite framework for doing image processing stuff, of course :) ), even if, in the end, those algorithms run a bit slower than they would in C++. One could suggest making them Octave or Scilab scripts/modules. But I'm the developer of G'MIC, so of course I'd prefer a solution that helps extend my own project.

So finally, how could I code prototypes for new algorithms working at a pixel level and make them readily available in G'MIC? This question has worried me for a long time.

2. Algorithm code viewed as a complex math expression

In G’MIC, the closest thing to what I was looking for, is the command

-fill 'expression'

This command fills each pixel of a given image with the value evaluated from a “mathematical expression”. A mathematical expression being a quite vague concept, it appears you can already write some complex formulas. For instance, typing this on the command line:

$ gmic 400,400,1,3 -fill "X=x-w/2; Y=y-h/2; R=sqrt(X^2+Y^2); a=atan2(Y,X); if(R<=180,255*abs(cos(c+200*(x/w-0.5)*(y/h-0.5))),850*(a%(0.1*(c+1))))"

creates this weird-looking 400×400 color image (I advise you to put sunglasses):


Fig.1. One synthetic color image obtained by one application of the G’MIC command -fill.

Of course, the specified expression can refer to pixels of an existing input image. And so, it can modify the pixels of an image as well, as in the following example:

$ gmic leno.png -fill "(abs(i(x+1,y)-i(x-1,y)))^0.25"

which computes gamma-corrected differences of neighboring pixels along the X-axis, as shown below:


Fig.2.1. Original image leno.png


Fig.2.2. Result of the -fill command described above.
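For readers who don't use G'MIC, here is roughly what that one-liner computes per pixel, as a self-contained C++ sketch (my own illustration, not CImg's actual implementation; border pixels are simply skipped here, while the real evaluator has its own boundary policy):

```cpp
#include <cmath>
#include <vector>

// For each pixel, take the absolute difference of its left and right
// neighbours along X, then apply a gamma correction with exponent 0.25 --
// the same thing "(abs(i(x+1,y)-i(x-1,y)))^0.25" expresses per pixel.
std::vector<double> fillGradient(const std::vector<double>& img, int w, int h) {
    std::vector<double> out(img.size(), 0.0);
    for (int y = 0; y < h; ++y)
        for (int x = 1; x < w - 1; ++x) {  // skip borders for simplicity
            double diff = std::abs(img[y * w + (x + 1)] - img[y * w + (x - 1)]);
            out[y * w + x] = std::pow(diff, 0.25);
        }
    return out;
}
```

The point of -fill is precisely that this explicit double loop never has to be written: the expression is evaluated for every pixel by the framework itself.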

(As an aside, let me tell you I've recently received e-mails and messages from people who claim that using the image of our beloved Lena to illustrate an article or a blog post is "sexist" (someone even used the term "pornographic"...). I invite you to read the Lena story page if you don't know why we commonly use this image. As I don't want to hurt the over-sensitivity of these people, I'll be using a slight variation I've made by mixing a photograph of the blessed Jay Leno with the usual image of Lena. Let me call this the Leno image, and everyone will be happy (but seriously, get a life!)).

So, as you can imagine, the command -fill already allows me to do a lot of complex and exotic things on images at the pixel level. Technically speaking, it uses the embedded math parser I’ve written for the CImg Library, a C++ open-source image processing library I’ve been developing since 1999 (and on which G’MIC is based). This math parser is quite small (around 1500 lines of C++ code) and quite fast as well, when it is applied to a whole image. That’s mainly because:

1. It uses parallelization (thanks to the use of OpenMP directives) to evaluate expressions on blocks of image pixels in a multi-threaded way.

2. Before being evaluated, the given math expression is pre-compiled on the fly by CImg into a sequence of bytecodes. Then, the evaluation procedure (which is done for the whole image, pixel by pixel) only requires that bytecode sequence to be interpreted, which is way faster than re-parsing the input mathematical expression itself.
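This pre-compilation point is the key one. As a loose analogy — using Python's compile()/eval() as a stand-in, since CImg's actual bytecode machinery works quite differently — parsing once and evaluating many times looks like this:

```python
import time

# Hypothetical illustration: Python's compile() plays the role of the one-time
# pre-compilation step, and eval() of the per-pixel evaluation.
expr = "(x - w/2)**2 + (y - h/2)**2"
w, h = 64, 64

# Naive approach: re-parse the expression string for every pixel.
t0 = time.perf_counter()
naive = [eval(expr, {"x": x, "y": y, "w": w, "h": h})
         for y in range(h) for x in range(w)]
t_naive = time.perf_counter() - t0

# Pre-compiled approach: parse once, evaluate the compiled form per pixel.
code = compile(expr, "<expr>", "eval")
t0 = time.perf_counter()
fast = [eval(code, {"x": x, "y": y, "w": w, "h": h})
        for y in range(h) for x in range(w)]
t_fast = time.perf_counter() - t0

assert naive == fast                 # same results for every pixel...
print(round(t_naive / t_fast, 1))    # ...with parsing paid only once
```

The exact speed-up depends on the expression, but the principle is the same as in CImg: the parsing cost is paid once instead of once per pixel.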

Anyway, I thought the complexity of the pixel-level algorithms I’d like to implement was much higher than just the evaluation of a mathematical formula. But wait… what is actually missing? Not much more than loops and updatable variables… I already had variables (though non-updatable) and conditionals. Only loops were really missing. That looked like something I could try adding during my summer holidays, didn’t it? 😉 So that is where my efforts were focused during these last weeks: I’ve added new functions to the CImg math parser that allow users to write their own loops in mathematical expressions, namely the functions dowhile(expr,_cond), whiledo(cond,expr) and for(init,cond,expr_end,_expr_body). Of course, this also made me review and re-implement large parts of the math parser code, and I took the opportunity to optimize the whole thing. A new version of the math parser was made available with the release of G’MIC at the end of August. I’m still working on this expression evaluator in CImg, and new improvements and optimizations are ready for the upcoming version of G’MIC (soon to be released).

3. A toy example: Julia fractals

So, what can we do now with these new math features in G’MIC? Let me illustrate with a toy example. The following custom G’MIC command renders a Julia fractal. To test it, just copy/paste the following lines into a regular text file user.gmic:

julia_expr :
  -fill "
    zr = -1.2 + 2.4*x/w;
    zi = -1.2 + 2.4*y/h;
    for (iter = 0, zr^2+zi^2<=4 && iter<256, ++iter,
      t = zr^2 - zi^2 + 0.4;
      (zi *= 2*zr) += 0.2;
      zr = t
    );
    iter"
  -map 7

and invoke the new command -julia_expr it defines by typing this in a terminal:

$ gmic user.gmic -julia_expr

Then, you’ll get this 1024×1024 color image:


Fig.3. Rendering of a Julia fractal only by filling an image with a complex math expression, containing an iteration loop.
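For readers unfamiliar with the G’MIC expression syntax, the per-pixel loop above can be transcribed in plain Python (a minimal sketch for a single pixel; the real command evaluates this for every pixel of the 1024×1024 image):

```python
# Python transcription of the -julia_expr per-pixel loop above,
# with c = 0.4 + 0.2i as in the G'MIC code.
def julia_iterations(x, y, w=1024, h=1024, max_iter=256):
    zr = -1.2 + 2.4 * x / w
    zi = -1.2 + 2.4 * y / h
    it = 0
    while zr * zr + zi * zi <= 4 and it < max_iter:
        t = zr * zr - zi * zi + 0.4   # real part: zr^2 - zi^2 + 0.4
        zi = 2 * zr * zi + 0.2        # imaginary part: 2*zr*zi + 0.2 (old zr)
        zr = t
        it += 1
    return it  # the value -fill stores in the pixel, before -map 7

# The top-left corner starts at z = -1.2 - 1.2i and escapes immediately:
print(julia_iterations(0, 0))  # → 1
```

The returned iteration count is what -map 7 then turns into a color.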

As you can see, this custom user command -julia_expr is very short and is mainly based on the invocation of the G’MIC -fill command. But the coolest thing of all appears when we look at the rendering time of that function. The timing below has been measured on an ASUS laptop with a dual-core HT i7 at 2 GHz. This is what I get:

Edit: this post was edited on 09/20/2015 to reflect the new timings due to math parser optimizations done after its initial publication.

$ gmic user.gmic -tic -julia_expr -toc
[gmic]-0./ Start G'MIC interpreter.
[gmic]-0./ Input custom command file 'user.gmic' (added 1 command, total 6195).
[gmic]-0./ Initialize timer.
[gmic]-0./julia_expr/ Input black image at position 0 (1 image 1024x1024x1x1).
[gmic]-1./julia_expr/ Fill image [0] with expression ' zr = -1.2 + 2.4*x/w; zi = -1.2 + 2.4*(...)( zi *= 2*zr) += 0.2; zr = t ); iter '.
[gmic]-1./julia_expr/ Map cube color LUT on image [0], with dirichlet boundary conditions.
[gmic]-1./ Elapsed time : 0.631 s.

Less than 0.7 seconds to fill a 1024×1024 image where each of the 1,048,576 pixels may require up to 256 iterations of a computation loop? Definitely not bad for prototype code written in 5 minutes that does not require compilation to run! Note that all my CPU cores were active during the computation. Running the same G’MIC code on my machine at work (a powerful 4x 3-core HT Xeon at 2.6 GHz) renders the same image in only 0.176 seconds!

But of course, one could say:

Why not use the native G’MIC command -mandelbrot instead (here, native means hard-coded as a C++ function)? It is probably way faster!

Let me compare my previous code with the following G’MIC invocation (which renders exactly the same image):

$ gmic 1024,1024 -tic -mandelbrot -1.2,-1.2,1.2,1.2,256,1,0.4,0.2 -map 7 -toc
[gmic]-0./ Start G'MIC interpreter.
[gmic]-0./ Input black image at position 0 (1 image 1024x1024x1x1).
[gmic]-1./ Initialize timer.
[gmic]-1./ Draw julia fractal on image [0], from complex area (-1.2,-1.2)-(1.2,1.2) with c0 = (0.4,0.2) and 256 iterations.
[gmic]-1./ Map cube color LUT on image [0], with dirichlet boundary conditions.
[gmic]-1./ Elapsed time : 0.055 s.

That’s indeed about 12 times faster than the previous command -julia_expr run on my laptop! A bit reassuring to know that C++ compiled into assembly code is faster than CImg home-made bytecode compiled on the fly 😉

But here is the point: suppose now I want to slightly modify the rendering of the fractal, i.e. for each pixel I no longer want to display the maximum iteration (variable iter), but the last value of the variable zi before the divergence test occurs. Look how simple it is to create a slightly modified command -julia_expr2 that does exactly what I want. I have full control over what the function does at the pixel level:

julia_expr2 :
  -fill "
    zr = -1.2 + 2.4*x/w;
    zi = -1.2 + 2.4*y/h;
    for (iter = 0, zr^2+zi^2<=4 && iter<256, ++iter,
      t = zr^2 - zi^2 + 0.4;
      (zi *= 2*zr) += 0.2;
      zr = t
    );
    zi"
  -normalize 0,255 -map 7

and this modified algorithm renders the image below (still in about 0.7 seconds, of course):

Fig.4. Slightly modified version of the Julia fractal, displaying another variable (zi) of the rendering algorithm.

Without these new loop features in the math parser, I would have been forced to do one of two things in G’MIC to get the same result:

  1. Either add new options to the native command -mandelbrot to allow this new type of visualization. This basically means writing new pieces of C++ code, compiling a new version of G’MIC with these added features, and packaging and releasing it to make it available to everyone. Even though I already have some decent release scripts, this implies a lot of extra work and packaging time; the user cannot get the new feature within a few minutes (if you’ve already used the filter update mechanism in the G’MIC plug-in for GIMP, you know what I mean). And that is without counting all the possibilities I couldn’t think of (but that a user will obviously need one day :) ) when adding such new display options to a native command like -mandelbrot.
  2. Or write a new G’MIC custom script able to compute the same kind of result. This would indeed be the best way to make it quickly available to other people. But here, as the algorithm is very specific and works at the pixel level, writing it as a pipeline of macro-operators is quite a pain. It means I would have to use 3 nested -repeat...-done loops (which are basically the loop commands used in G’MIC pipelines), and it would probably take ages to render, as a G’MIC pipeline is always purely interpreted, without any pre-compilation step. Even with multi-threading, it would have been a nightmare to compute here.

In fact, the quite long math expression we use in command -julia_expr2 defines one complex algorithm as a whole, and we know it will be pre-compiled into a sequence of bytecodes by CImg before being evaluated for each of the 1024×1024 = 1,048,576 pixels that compose the image. Of course, we are not as fast as a native C++ implementation of the same command, but at the same time we gain so much flexibility and genericity in what we can do that this disadvantage is easily forgiven. And the processing time stays reasonable. For fast algorithm prototyping, this feature is incredibly nice! I won’t be forced to unsheathe my C++ compiler every time I want to experiment with a very specific image processing algorithm working at the pixel level.

4. A more serious example: the “Non-Local Means”

The Non-Local Means is a quite famous patch-based denoising/smoothing algorithm in image processing, introduced in 2005 by A. Buades (beware, his home page contains images of Lena, please do not click if you are too sensitive!). I won’t go into all the implementation details, as several different methods have been proposed in the literature just for implementing it. But one of the simplest (and slowest) techniques requires 4 nested loops per image pixel. What a good opportunity to try writing this “slow” algorithm with the G’MIC -fill command! It took me less than 10 minutes, to be honest:

nlmeans_expr : -check "${1=10}>0 && isint(${2=3}) && $2>0 && isint(${3=1}) && $3>0"
  sigma=$1  # Denoising strength.
  hl=$2     # Lookup half-size.
  hp=$3     # Patch half-size.
  -fill "
    value = 0;
    sum_weights = 0;
    for (q = -"$hl", q<="$hl", ++q,
      for (p = -"$hl", p<="$hl", ++p,
        diff = 0;
        for (s = -"$hp", s<="$hp", ++s,
          for (r = -"$hp", r<="$hp", ++r,
            diff += (i(x+p+r,y+q+s) - i(x+r,y+s))^2
          )
        );
        weight = exp(-diff/(2*"$sigma")^2);
        value += weight*i(x+p,y+q);
        sum_weights += weight
      )
    );
    value/(1e-5 + sum_weights)"

Now, let’s test it on a noisy version of the Leno image:

$ gmic user.gmic leno.png -noise 20 -c 0,255 -tic --nlmeans_expr 35,3,1 -toc
[gmic]-0./ Start G'MIC interpreter.
[gmic]-0./ Input custom command file 'user.gmic' (added 1 command, total 6195).
[gmic]-0./ Input file 'leno.png' at position 0 (1 image 512x512x1x3).
[gmic]-1./ Add gaussian noise to image [0], with standard deviation 20.
[gmic]-1./ Cut image [0] in range [0,255].
[gmic]-1./ Initialize timer.
[gmic]-1./nlmeans_expr/ Set local variable sigma='35'.
[gmic]-1./nlmeans_expr/ Set local variable hl='3'.
[gmic]-1./nlmeans_expr/ Set local variable hp='1'.
[gmic]-1./nlmeans_expr/ Fill image [0] with expression ' value=0; sum_weights=0; for(q = -3,q<(...)ight ) ); value/(1e-5 + sum_weights) '.
[gmic]-2./ Elapsed time : 3.156 s.

which results in these two images displayed on screen: the noisy version (left), and the denoised one using the Non-Local Means algorithm (right). Of course, the timing may differ from one machine to another. I guess my 3-second run here is decent (tested on my powerful PC at the lab); it still takes less than 20 seconds on my laptop. A crop of the results is presented below. The initial Leno image is a 512×512 RGB image, and the timing has been measured for processing the whole image, of course.


Fig.5.1. Crop of a noisy version of the Leno image, degraded with gaussian noise, std=20.


Fig.5.2. Denoised version using the NL-means algorithm (custom command -nlmeans_expr).
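To make the structure of the 4 nested loops explicit, here is a direct (unoptimized, purely illustrative) Python transcription of the expression above for a single grayscale pixel, with clamped borders standing in as an approximation of G’MIC’s boundary handling:

```python
import math

# Naive NL-means, same 4 nested loops as the -fill expression above,
# for one pixel of a grayscale image stored as a list of rows.
def nlmeans_pixel(img, x, y, sigma=35.0, hl=3, hp=1):
    h, w = len(img), len(img[0])
    pix = lambda xx, yy: img[min(max(yy, 0), h - 1)][min(max(xx, 0), w - 1)]
    value = 0.0
    sum_weights = 0.0
    for q in range(-hl, hl + 1):          # scan the lookup window...
        for p in range(-hl, hl + 1):
            diff = 0.0
            for s in range(-hp, hp + 1):  # ...comparing patches point-wise
                for r in range(-hp, hp + 1):
                    diff += (pix(x + p + r, y + q + s) - pix(x + r, y + s)) ** 2
            weight = math.exp(-diff / (2 * sigma) ** 2)
            value += weight * pix(x + p, y + q)
            sum_weights += weight
    return value / (1e-5 + sum_weights)

# On a constant image every patch matches, so the pixel is (almost) unchanged:
flat = [[100.0] * 16 for _ in range(16)]
print(round(nlmeans_pixel(flat, 8, 8), 2))  # → 100.0
```

Each candidate pixel in the lookup window is weighted by how similar its surrounding patch is to the patch around the pixel being denoised; the final value is the weighted average.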

Here again, you could argue that the native G’MIC command -denoise does the same thing and runs faster. It does, definitely. Jérôme Boulanger (a very active G’MIC contributor) has written a nice custom command -nlmeans that implements the NL-means with a smarter algorithm (avoiding the need for 4 nested loops per pixel), which runs even faster (and it is already available in the plug-in for GIMP). But that’s not the point. What I show here is that I’m now able to do some (relatively fast) prototyping of algorithms working at the pixel level in G’MIC, without having to write and compile C++ code. And the best part is the integration: if the algorithm turns out to be interesting/effective enough, I can add it to the G’MIC standard library in a few minutes, and quickly create a filter for the GIMP plug-in as well. Let’s say it right away: probably 5 minutes after I’ve finished writing the first version of the algorithm, plug-in users are able to get it and use it on their own images (and give positive/negative feedback to help with future improvements). That’s what I call smooth, quick and painless integration! And that is exactly the kind of algorithm I couldn’t implement before as a custom G’MIC command running at a decent speed.

To me, it clearly opens exciting perspectives to quickly prototype and integrate new custom image processing algorithms into G’MIC in the future!

5. The Vector Painting filter

In fact, this happened earlier than expected: I’ve recently been able to add one of my latest image filters (named Vector painting) to the G’MIC plug-in for GIMP. It was somewhat unexpected, because I was just doing some debugging to improve the CImg math expression evaluator. Briefly, suppose you want to determine, for each pixel of an image, the discrete spatial orientation of the maximal value variation, with an angular precision of 45°: for each pixel centered in a 3×3 neighborhood, I want to estimate which pixel of the neighborhood has the highest difference with the center pixel (measured as the squared difference between the two pixel values). To make things simpler, I’ve considered doing this on the image luminance only, instead of using all the RGB color channels. At the end, I transform each pixel value into a label (an integer in range [1,8]) that represents one of the possible 45°-orientations of the plane. That is typically the kind of problem that requires custom loops working at the pixel level, so something I couldn’t do easily before the loop feature was introduced in my math parser (or I would have done the prototype in C++).

The solution to this problem was surprisingly easy to write. Here again, it didn’t take much more than 5 minutes of work:

foo :
  -fill "dmax = -1; nmax = 0;
         for (n = 0, ++n<=8,
           p = arg(n,-1,0,1,-1,1,-1,0,1);
           q = arg(n,-1,-1,-1,0,0,1,1,1);
           d = (j(p,q,0,0,0,1)-i)^2;
           if (d>dmax, dmax = d; nmax = n, nmax)
         );
         nmax"

And if we apply this new custom command -foo to our Leno image,

$ gmic user.gmic leno.png -foo

we get this result (after re-normalization of the label image to the range [0,255]). Keep in mind that each pixel of the resulting image is an integer label originally in range [1,8]. And by the way, the computation time is ridiculously low here (178 ms for this 512×512 image).


Fig.6. Each pixel of the Leno image is replaced by a label indicating which of its 3×3 neighbors is the most different from the central pixel.
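The labeling logic of -foo can be sketched in plain Python for a single pixel (a minimal illustration; border neighbors are simply clamped here, which only approximates the boundary handling of G’MIC’s j() accessor):

```python
# 3x3 argmax-difference labeling, mirroring the -foo expression above.
# img is a grayscale image as a list of rows.
def label_3x3(img, x, y):
    h, w = len(img), len(img[0])
    # Offsets of the 8 neighbors, in the same order as the two arg() calls:
    P = [-1, 0, 1, -1, 1, -1, 0, 1]
    Q = [-1, -1, -1, 0, 0, 1, 1, 1]
    center = img[y][x]
    dmax, nmax = -1.0, 0
    for n in range(1, 9):
        xx = min(max(x + P[n - 1], 0), w - 1)   # clamp at the image borders
        yy = min(max(y + Q[n - 1], 0), h - 1)
        d = (img[yy][xx] - center) ** 2          # squared difference to center
        if d > dmax:
            dmax, nmax = d, n
    return nmax  # label in [1,8]

img = [[0, 0, 0],
       [0, 5, 0],
       [0, 0, 20]]
print(label_3x3(img, 1, 1))  # the down-right neighbor differs most → 8
```

Running this for every pixel gives exactly the kind of label map shown in Fig.6.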

It actually looks a bit ugly. But that’s not surprising: the original image contains noise, so you get a lot of small random variations in flat regions, and the labels computed there are noisy as well. Now, what happens if we blur the image before computing the labels? That should regularize the resulting image of labels. Indeed:

$ gmic user.gmic leno.png --blur 1% -foo

returns this:


Fig.7. Each pixel of the blurred Leno image is replaced by a label indicating which of its 3×3 neighbors is the most different.

That’s interesting! Blurring the input image creates larger regions of constant labels, i.e. regions where the orientation of the maximal pixel variation is the same. And the original image contours still appear as natural frontiers of these labelled regions. A natural idea, then, is to replace each connected region by the average color it overlays in the original color image. In G’MIC, this can be done easily with the command -blend shapeaverage:

$ gmic user.gmic leno.png --blur 1% -foo[-1] -blend shapeaverage

And what we get at the end is a nice piecewise-constant abstraction of our initial image. Looks like a “vector painting”, no? 😉


Fig.8. Result of the “shape average” blending between the original Leno image, and its map of labels, as obtained with command -foo.

As you may imagine, changing the amplitude of the blur makes the result more or less abstract. Having this, it didn’t take much time to create a filter that can be run directly from the G’MIC plug-in interface for GIMP. Here is the exact code I wrote to integrate my initial algorithm prototype into G’MIC and make it usable by everyone. It was done in less than 5 minutes, really:

#@gimp Vector painting : gimp_vector_painting, gimp_vector_painting_preview(1)
#@gimp : Details = float(9,0,10)
#@gimp : sep = separator(), Preview type = choice("Full","Forward horizontal","Forward vertical","Backward horizontal","Backward vertical","Duplicate horizontal","Duplicate vertical")
#@gimp : sep = separator(), note = note("<small>Author: <i>David Tschumperl&#233;</i>.\nLatest update: <i>08/25/2015</i>.</small>")
gimp_vector_painting :
  -repeat $! -l[$>]
    --luminance -b[-1] {10-$1}%,1,1
    -f[-1] "dmax = -1; nmax = 0;
            for (n = 0, ++n<=8,
              p = arg(n,-1,0,1,-1,1,-1,0,1);
              q = arg(n,-1,-1,-1,0,0,1,1,1);
              d = (j(p,q,0,0,0,1)-i)^2;
              if (d>dmax, dmax = d; nmax = n, nmax)
            );
            nmax"
    -blend shapeaverage
  -endl -done

gimp_vector_painting_preview :
  -gimp_split_preview "-gimp_vector_painting $*",$-1

Here is the resulting filter, as it can be seen in the G’MIC plug-in for GIMP, just after I pushed it to the G’MIC standard library:


Fig.9. The G’MIC plug-in for GIMP, running the “Vector Painting” filter.

Here again, this is how I conceive things should be done: 1. I create a quick algorithm prototype to transform an image into something else. 2. I decide that the algorithm is cool enough to be shared. 3. I add a few lines to make it immediately available in the G’MIC image processing framework. What a gain of time compared to doing all this in C++!

6. Comparison with ImageMagick’s -fx operator

While working on the improvements to my math expression evaluator in CImg, I wondered whether what I was doing didn’t already exist in ImageMagick. Indeed, ImageMagick is one of the most well-established open-source image processing frameworks, and I was almost sure they had already coped with the kind of questions I had for G’MIC. And of course, they had :)

So, they have a special operator -fx expression in convert that seems to be equivalent to what the G’MIC command -fill expression does. And yes, they have probably had it for years, long before G’MIC even existed. But I admit I almost completely stopped using the ImageMagick tools when I started developing my own C++ image processing library CImg, years ago. All the information you need to use this -fx operator in convert can be found on this documentation page, with even more examples on this page. Reading these pages was very instructive: I noticed some interesting functions and notations in their expression parser that I didn’t have in mine (so I’ve added some of them to my latest version of CImg!). I was also particularly interested in this quote from their pages:

As people developed new types of image operations, they usually prototype it using a slow “-fx” operator first. When they have it worked out that ‘method’ is then converted into a new fast built-in operator in the ImageMagick Core library. Users are welcome to contribute their own “-fx” expressions (or other defined functions) that they feel would be a useful addition to IM, but which are not yet covered by other image operators, if they can be handled by one of the above generalized operators, it should be reasonably easy to add it.(…). What is really needed at this time is a FX expression compiler, that will pre-interpret the expression into a tighter and faster executable form. Someone was going to look into this but has since disappeared.

So it seems their -fx operator is quite slow, as it re-parses the specified math expression for each image pixel. And when someone writes an interesting operator with -fx, they are willing to convert it into C code and integrate it as a new built-in operator directly in the core ImageMagick library. It seems they don’t really mind adding new native hard-coded operators to IM, maybe even for very specific/unusual operators (at least they don’t mention it). That’s interesting, because that is precisely what I’m trying to avoid in G’MIC. My impression is that it’s often acceptable to be less efficient if the code we have to write for adding one feature is smaller, easier to maintain/upgrade, and does not require releasing a new version to make this particular feature available. Personally, I’d always prefer to write a G’MIC custom command (a script that I can directly put in the G’MIC standard library) when possible, instead of adding the same feature as a new “native” built-in command (in C++). But maybe their -fx operator was so slow that it was cumbersome to use in practice? I had to try!

And I’m a bit sorry to say this, but yes, it’s quite slow (and I have tested this on my pretty fast machine with 12 HT cores at 2.6 GHz). The ImageMagick -fx operator is able to use multiple cores, which is clearly a good thing, but even then it is cumbersome to use on reasonably big images with complex math expressions. In a sense, that reassures me about the usefulness of having developed my own math expression compiler in CImg: the pre-compilation step of the math expression into a shorter bytecode sequence seems to be almost mandatory. I’ve done a quick timing comparison for some simple image effects that can be achieved similarly with both expression evaluators of G’MIC and ImageMagick. Most of the examples below have actually been taken from the -fx documentation pages. I divide and multiply my image values by 255 in the G’MIC examples below because the ImageMagick formulas assume that the RGB values of the pixels are defined in range [0,1]. These tests have been done with a high-resolution RGB input image (of a motorbike) with size 3072×2048. I’ve checked that the ImageMagick and G’MIC invocations render the same images.

# Test1: Apply a sigmoid contrast function on the image colors.

$ time convert motorbike.jpg -fx "(1.0/(1.0+exp(10.0*(0.5-u)))-0.006693)*1.0092503" im_sigmo.jpg

real	0m9.033s
user	3m18.527s
sys	0m2.604s

$ time gmic -verbose - motorbike.jpg -/ 255 -fill "(1.0/(1.0+exp(10.0*(0.5-i)))-0.006693)*1.0092503" -* 255 -o gmic_sigmo.jpg,75

real    0m0.474s
user    0m3.183s
sys     0m0.111s
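As a sanity check (not part of either toolchain), the sigmoid contrast curve used in Test1 can be verified in a few lines of Python; the two constants re-normalize the raw sigmoid so that an input of 0 maps back to 0:

```python
import math

# The sigmoid contrast curve from Test1, for channel values u in [0,1]
# (ImageMagick's u; in the G'MIC version the image is divided by 255 first).
def sigmoid_contrast(u):
    return (1.0 / (1.0 + math.exp(10.0 * (0.5 - u))) - 0.006693) * 1.0092503

print(round(sigmoid_contrast(0.5), 2))  # midtones stay near the middle → 0.5
```

The curve is steepest around u = 0.5, which is what boosts the contrast of the midtones.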

# Test2: Create a radial gradient from scratch.
$ time convert -size 3072x2048 canvas: -fx "Xi=i-w/2; Yj=j-h/2; 1.2*(0.5-hypot(Xi,Yj)/70.0)+0.5" im_radial.jpg

real	0m29.895s
user	8m11.320s
sys	2m59.184s

$ time gmic -verbose - 3072,2048 -fill "Xi=x-w/2; Yj=y-h/2; 1.2*(0.5-hypot(Xi,Yj)/70.0)+0.5" -cut 0,1 -* 255 -o gmic_radial.jpg

real    0m0.234s
user    0m0.990s
sys     0m0.045s

# Test3: Create a keftales pattern gradient from scratch.
$ time convert -size 3072x2048 xc: -channel G -fx  'sin((i-w/2)*(j-h/2)/w)/2+.5' im_gradient.jpg

real	0m2.951s
user	1m2.310s
sys	0m0.853s

$ time gmic -verbose - 3072,2048 -fill "sin((x-w/2)*(y-h/2)/w)/2+.5" -* 255 -o gmic_gradient.jpg

real    0m0.302s
user    0m1.164s
sys     0m0.061s

# Test4: Compute mirrored image along the X-axis.
$ time convert motorbike.jpg -fx 'p{w-i-1,j}' im_mirror.jpg 2>&1

real	0m4.409s
user	1m33.702s
sys	0m1.254s

$ time gmic -verbose - motorbike.jpg -fill "i(w-x-1,y)" -o gmic_mirror.jpg

real    0m0.495s
user    0m1.367s
sys     0m0.106s

The pre-compilation of the math expressions clearly makes a difference!

I would be really interested in comparing the expression evaluators on more complex expressions, such as the one I’ve used to compute Julia fractals with G’MIC. I don’t have a deep knowledge of the ImageMagick syntax, so I don’t know what the equivalent command line would be. If you have any idea how to do that, please let me know! I’d also be interested in getting an idea of how MATLAB performs on the same kind of expressions.

7. Conclusion and perspectives

What do I conclude from all of this? Well, I’m actually pretty excited by what the latest version of my expression evaluator integrated in G’MIC / CImg can finally do. It runs at a decent speed, at least compared to the one used in ImageMagick (which is definitely a reference project for image processing). I also had the idea of comparing it with GraphicsMagick, but I must admit I didn’t find the same -fx operator in it, nor something similar (maybe you could help teach me how it works in GraphicsMagick?).

I’ve already been able to propose one (simple) artistic filter that I find interesting (Vector painting), and I’m very confident that these improvements to the math expression evaluator will open a lot of new possibilities for G’MIC: for the design of new filters for everyone, of course, but also for making my algorithm prototyping work easier and faster.

Could it be the beginning of a new boost for G’MIC? What do you think?

September 16, 2015

An Inkscape SVG Filter Tutorial — Part 2

Part 1 introduced SVG filter primitives and demonstrated the creation of a Fabric filter effect. Part 2 shows various ways to colorize the fabric. It ends with an example of using the techniques learned here to draw part of a bag of coffee beans.

Dyeing the Fabric

Our fabric at this point is white. We can give it color in a variety of ways. We could have started off with a colorized pattern, but that would not allow us to change the color so easily. And as this is a tutorial on using filters, let’s look at ways the color can be changed using filter primitives.

Coloring with the Flood, Blend, and Composite Primitives

We can use the Flood filter primitive to create a sheet of solid color and then use the Blend filter primitive to combine it with the fabric. The resulting image bleeds into the background. We’ll use the Composite filter primitive to auto-clip the background.

The Flood Filter Primitive

Add the Flood filter primitive to the filter chain by selecting Flood and clicking on the Add Effect button. The fabric will turn a solid black. Like the Turbulence filter primitive, the Flood filter primitive takes no inputs but simply fills the filter region with a solid color. Black is the default flood color. You can change the color by clicking on the color sample next to Flood Color: in the dialog. Change the color however you wish. Leave the Opacity at one.

The Blend Filter Primitive

Next add the Blend filter primitive. The drawing will be unchanged. Connect the Blend input to the last Displacement Map. The fabric should appear on top of the flood fill. This is expected, as the default blending mode is Normal, which simply draws the second image over the first. Use the drop-down menu to change the Mode to Multiply. This results in the lighter areas of the fabric taking on the flood color.

The output of the filter chain after blending.

Try experimenting with the other blending modes.
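The per-channel arithmetic behind the two modes used above can be sketched as follows (a simplified illustration for opaque pixels, with channel values in [0,1]):

```python
# Normal keeps the top (second) image; Multiply darkens using the product,
# so the white areas of the fabric take on the flood color underneath.
def blend(mode, bottom, top):
    if mode == "normal":
        return top
    if mode == "multiply":
        return bottom * top
    raise ValueError(mode)

flood, fabric = 0.8, 1.0  # one flood channel under a white fabric pixel
print(blend("multiply", flood, fabric))  # white fabric → flood value: 0.8
```

With Multiply, black fabric areas stay black (anything times 0 is 0), which is why the weave pattern survives the coloring.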

The Composite Filter Primitive

The flood fill leaks into the background. This can be removed by clipping the image to the fabric area using the Composite filter primitive. Add the Composite filter primitive to the filter chain. The resulting image is again unchanged. Connect the second input of the Composite filter primitive to the last Displacement Map filter primitive. Still the image remains unchanged. Now change the Operator type to In. This dictates that the image should be clipped to the area that is “in” the image created by the second Displacement Map filter primitive.

Filter Dialog image.

The Filter Effect dialog after adding and adjusting the Flood, Blend, and Composite filter primitives.

The output of the filter after compositing.
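A rough sketch of the Porter-Duff "In" operator used here, on premultiplied RGBA values in [0,1]: the first input survives only where the second input is opaque (a simplified illustration, not Inkscape's actual implementation):

```python
# "In" compositing: scale the source pixel by the destination's alpha,
# so the source is clipped to the destination's coverage.
def composite_in(src, dest):
    sr, sg, sb, sa = src
    da = dest[3]
    return (sr * da, sg * da, sb * da, sa * da)

colored = (0.8, 0.7, 0.4, 1.0)         # flooded/blended fabric, fully opaque
outside = (0.0, 0.0, 0.0, 0.0)         # transparent area outside the fabric
print(composite_in(colored, outside))  # clipped away → (0.0, 0.0, 0.0, 0.0)
```

This is why connecting the second input to the Displacement Map output removes the flood fill that leaked past the fabric's edges.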

Coloring the Fabric with the Component Transfer Filter Primitive

The Component Transfer filter primitive maps, pixel by pixel, the colors from an input image to different colors in an output image. Each “component” (Red, Green, Blue, and Alpha) is mapped independently. The method for mapping is determined by the Type; each Type has its own attributes. We’ll use the Linear and Identity mappings.

Identity: The output component has the same value as the input component.
Linear: The output component is equal to: intercept + input × slope. This is identical to the Identity type if the intercept is zero and the slope is one.

Replace the Flood, Blend, and Composite filter primitives in the above filter chain by the Component Transfer filter primitive. (To delete a filter primitive, right-click on the filter primitive name and select Delete in the menu that appears.) The just-removed three-primitive filter chain mapped black to black and white to the flood color. We can duplicate this by setting the Red, Green, and Blue component transfer types to Linear (keeping the Alpha component type set to Identity). The condition that black maps to black requires that the Intercept values all be set to zero. The condition that white maps to the flood color dictates the slopes. The RGB values for the flood color used above are 205, 185, 107 on a scale where 255 is the maximum value. These translate to 0.80, 0.73, 0.42 on a scale where the maximum value is one. Since an input value of 1.0 for the red component must result in an output of 0.80 (and similarly for green and blue), these values are the required slopes.
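The slope computation just described is a one-liner to verify:

```python
# Rescaling the flood color (205, 185, 107) from a 0-255 scale to [0,1]
# directly gives the Linear-type slopes: with zero intercepts, an input
# of 1.0 must map straight to the flood value.
flood_rgb = (205, 185, 107)
slopes = [round(v / 255, 2) for v in flood_rgb]
print(slopes)  # → [0.8, 0.73, 0.42]
```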

Graph of input vs. output for the red, green, and blue channels.

Graph of the transfer functions.

Filter Dialog image.

The Filter Effect dialog after adding and adjusting the Component Transfer filter primitive.

The output of the filter after adding and adjusting the Component Transfer filter primitive.

Now suppose we want the fabric to be more subtle. We can change the mapping so that for each component, zero is mapped to half the maximum value. In this case we have the following values (RGB): Intercepts: 0.40, 0.36, 0.21 and Slopes: 0.40, 0.37, 0.21. See the following figure:

Graph of input vs. output for the red, green, and blue channels.

Graph of the transfer functions where the darkest value is half the lightest value.

Filter Dialog image.

The Filter Effect dialog after adding and adjusting the Component Transfer filter primitive.

The output of the filter after adjusting the Component Transfer filter primitive so the darkest areas have half the component values of the lightest.

Coloring the Fabric with the Color Matrix Filter Primitive

This filter primitive, unlike the Component Transfer, can intermix the color components. It does not, however, have the fine control over the transfer curves that the Component Transfer filter primitive has. There are several Types in this filter primitive. The Saturate, Hue Rotate, and Luminance to Alpha types are shortcuts for the more generic Matrix type. We need to use the Matrix type to match the results of the previous filters.

First replace the Component Transfer filter primitive by the Color Matrix filter primitive. After adding the new primitive, the fabric may disappear; that is a bug in Inkscape. Click on the matrix in the Filter Dialog and the fabric should reappear. The initial matrix is the Identity matrix (consisting of ones on the diagonal) which does not change the image.

The rows in the matrix control the output of, from top to bottom, the Red, Green, Blue, and Alpha channels. The columns correspond to the input, again in the same Red, Green, Blue, and Alpha order. The last column allows one to enter a constant offset for the row. For example, one can make a green object red by changing the top row to “0 1 0 0 0”, which means that the Red channel output is 0×R + 1×G + 0×B + 0×A + 0, where R, G, B, and A are the input values of the Red, Green, Blue, and Alpha channels respectively (on a scale of zero to one).
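The row arithmetic just described can be checked directly:

```python
# One matrix row applied to an RGBA pixel: a weighted sum of the input
# channels plus the constant offset in the last column (values in [0,1]).
def matrix_row(row, pixel):
    r, g, b, a = pixel
    return row[0] * r + row[1] * g + row[2] * b + row[3] * a + row[4]

# The "0 1 0 0 0" top row from the text: the Red output copies the input Green.
print(matrix_row([0, 1, 0, 0, 0], (0.0, 1.0, 0.0, 1.0)))  # pure green → 1.0
```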

To change the values in the matrix, click first on a row of numbers to select the row and then click on a numeric entry in the row. The following figures show the values needed to match the fabric samples above.

Filter Dialog image.

The Filter Effect dialog after adding and adjusting the Color Matrix filter primitive to match the first (high contrast) fabric sample above.

Filter Dialog image.

The Filter Effect dialog after adding and adjusting the Color Matrix filter primitive to match the second (lower contrast) fabric sample above.

Coloring the Fabric Using the Fill Color and the Tile Filter Primitive

In an ideal world, a fabric filter would just take as input the color of an object and use that to blend with a pattern. SVG filters do have the ability to do this. One would read in a pattern tile using the Image filter primitive and then tile the pattern using the Tile filter primitive. But the Tile filter primitive is the one filter primitive that Inkscape hasn’t implemented. While more convenient, this method would still lack the fine control over color that the above methods have.

The output of a filter using the Tile primitive. The two rectangles differ only in Fill color. Renders correctly in Chrome, incorrectly in Firefox and Inkscape.

Putting it All Together

Let’s do something with the fabric! We could stencil some text on the fabric to make it look like part of a bag of coffee beans. The best way to do this is to break the filter up into two separate filters. The first will distort the weave (using the first Turbulence and Displacement Map pair) and color the fabric, while the second will add a gentle wave to both the fabric and text (using the second Turbulence and Displacement Map pair). The text is given its own filter to take away the sharp edges and to also give it a bit of irregularity independent of the weave. The text could be blended on top of the fabric by giving it an opacity of less than one. A better effect can be achieved, however, by using the new mix-blend-mode property. Inkscape can render this property but does not yet have a GUI to set it. Firefox supports this property and Chrome should soon (if it doesn’t already). I’ve used the mix-blend-mode value of multiply by adding the property to the text style attribute with the XML editor. The fabric and text are then grouped together before applying the “wave” filter to the group.
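A sketch of the resulting structure (the element geometry, filter ids, and pattern name here are hypothetical; the real values come from your own document):

```xml
<g filter="url(#Wave)">                            <!-- gentle wave applied to the whole group -->
  <rect width="400" height="200"
        fill="url(#FabricPattern)"
        filter="url(#Fabric)"/>                    <!-- weave distortion and color -->
  <text x="60" y="120" filter="url(#TextRough)"
        style="mix-blend-mode:multiply">COFFEE</text>  <!-- blurred, irregular, blended text -->
</g>
```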

Part of a bag of coffee beans. Three filters are used. The first to distort the weave and give color to the fabric, the second to slightly blur and distort the text, and the third to take the blended together fabric and text and give them both a gentle wave.

Note, it is possible to put the text in the “defs” section and use the Image filter primitive to import the text into a filter so that the blending can be done with the Blend filter primitive. This isn’t easy to do in Inkscape and Firefox seems to have problems rendering it.

I hope you enjoyed this tutorial. Please leave comments and questions!

A section of a bag of coffee beans.

A PNG image just for Google+ which doesn’t support SVG images.

Hacking / Customizing a Kobo Touch ebook reader: Part II, Python

I wrote last week about tweaking a Kobo e-reader's sqlite database by hand.

But who wants to remember all the table names and type out those queries? I sure don't. So I wrote a Python wrapper that makes it much easier to interact with the Kobo databases.

Happily, Python already has a module called sqlite3. So all I had to do was come up with an API that included the calls I typically wanted -- list all the books, list all the shelves, figure out which books are on which shelves, and so forth.

The result was kobo_utils.py, which includes a main function that can list books, shelves, or shelf contents.
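The basic idea is straightforward with Python's sqlite3 module. Here's a minimal sketch of such a wrapper -- not the actual kobo_utils code, just an illustration of the pattern; only the Shelf table and Name column are taken from the real Kobo database:

```python
import sqlite3

class KoboDB:
    """Minimal sketch of a wrapper around a Kobo reader's sqlite database."""

    def __init__(self, mountpath):
        self.mountpath = mountpath
        self.conn = None

    def connect(self, dbpath):
        # The real database lives somewhere under the mount point;
        # here the caller passes its path explicitly.
        self.conn = sqlite3.connect(dbpath)
        self.conn.row_factory = sqlite3.Row

    def get_dlist(self, tablename, selectors=None):
        """Return the rows of a table as a list of dicts."""
        cols = ", ".join(selectors) if selectors else "*"
        cur = self.conn.execute("SELECT %s FROM %s" % (cols, tablename))
        return [dict(row) for row in cur.fetchall()]
```

With a wrapper like this, listing shelves becomes a couple of lines instead of hand-typed SQL.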

You can initialize kobo_utils like this:

import kobo_utils

koboDB = kobo_utils.KoboDB("/path/where/your/kobo/is/mounted")
koboDB.connect()

connect() throws an exception if it can't find the .sqlite file.

Then you can list shelf names thusly:

shelves = koboDB.get_dlist("Shelf", selectors=[ "Name" ])
for shelf in shelves:
    print shelf["Name"]

There are similar calls for listing books, and print_shelf shows which books are on which shelves.

What I really wanted, though, was a way to organize my library, taking the tags in each of my epub books and assigning them to an appropriate shelf on the Kobo, creating new shelves as needed. Using kobo_utils.py plus the Python epub library I'd already written, that ended up being quite straightforward: shelves_by_tag.

September 15, 2015

Fedora Atomic Logo Idea

The Fedora Cloud Working Group recently decided that, in Fedora 24 (or perhaps a bit further out, depending on how the tooling/process can support it), the Atomic version of Fedora is going to be the primary focus of the working group. (Background discussion on their list is available too.)

This has an effect on the Fedora website as the Fedora Cloud edition shifts from a buffet of more or less equally-positioned cloud- and container-related images to a more focused set of images optimized for container hosting (using Atomic), plus a set of more clearly ancillary images that are also useful for cloud/container deployment of Fedora but aren’t based on the Atomic platform. We need to position these images accordingly on the website to meet the new model.

Matthew Miller and I discussed how the Cloud WG decision might affect the website and ideas for how we could update the website to suit for Fedora 24. One idea for how we could do this:

  • Consider replacing the “Cloud” edition slot on the front of getfedora.org with a Fedora “Atomic” edition brand.
  • Convert getfedora.org/cloud to focus instead solely on Atomic (maybe redoing the URL to getfedora.org/atomic).
  • Build out a separate cloud image resource site (similar to arm.fedoraproject.org, which is focused on all ARM-related builds across Fedora) with the full set of Cloud Base images (and maybe Fedora Docker containers too?) Then, pull these in to the edition site via a link in the “Other Downloads” section.

Anyhow, this is just an idea. The Atomic brand is already a pretty strong one, so I think trying to force something like “Fedora Atomic” under the current cloud logomark might miss the opportunity for Fedora to work with the brand recognition Atomic already has upstream. The question is – is that even possible? Luckily, I think the answer might be yes :)

The current Fedora Cloud logo.

The current Fedora Cloud logo.

The Atomic upstream logo.

The Atomic upstream logo.

I poked around a little bit with the Atomic logo (I believe tigert created the original awesome logo!), thickened it up and rounded out the lines a little bit so I could use it as a mask on the purple Fedora triangle texture in the same way the original cloud mark is used in the current cloud logo. I think it looks pretty cool; here it is in the context of the other Fedora Edition logos:

Fedora Atomic logo idea

I was kind of worried they wouldn’t hang together as a set, especially since the three logomarks here had been so close (Cloud’s mark was a 90-degree-rotated Server mark, and Workstation is Server with the top two bars merged to make a display), but in practice it looks like this is really not a concern.

On the to-do list next are mockups for how a potential new cloud.fpo site might look as well as an updated getfedora.org/cloud (or getfedora.org/atomic as the case might be.) I started poking at mocking up a cloud.fpo site for the base cloud images and other cloud goodies but will probably need to iterate that on the Cloud WG list to get it right.

Ideas? Feedback? Comments are open of course :)

Fedora Developer Website Design

Fedora Developer Logo

For the past few weeks I have been working on mockups and the HTML/CSS for a new Fedora website, the Fedora Developer portal (likely to eventually live at developers.fedoraproject.org.) The goal of the site is to provide resources and information to developers building things on Fedora (not primarily developers contributing to Fedora itself.)

A bunch of folks have been contributing content to the site, and Adam Šamalík and Petr Hracek set up the initial first-cut prototype of the site, configuring jekyll to generate the site and building out the basic framework of the site. The prototype was shared with the Fedora Environment and Stacks Working Group mailing list, and after some feedback and iteration on the initial prototype, Petr asked me to take a look at the overall UX / design of the site. So that’s how I came to be involved here. :)

Competitive Analysis and Sitemap

First, to better understand the space this site is in, I took a look at various developer sites for all sorts of OS platforms and took in the sorts of information they provided and how they organized it. I looked at:

  • Red Hat Developers – main nav is streamlined – solutions, products, downloads. Also has community, events, blogs. Large banner for features. Has a membership / join process. Front page directory of technologies under the solutions nav item.
  • Microsoft Windows Dev Center – main nav is a bit cluttered; core features seem to be developer docs, downloadable tools, code samples, community. Has a “get started” beginners’ guide. Features blog posts, videos, feature highlights. Has a log in.
  • Android Developers – main nav is “design | develop | distribute.” Features SDK, code sample library, and videos. Has case studies. Also has blog and support links.
  • Apple Developer – main nav is “platforms | Resources | Programs | Support | Member Center.” Has a membership program of some sort with log in. Front page is not so useful – solely promo banners for random things; full footer that extrapolates more on what’s under each of the main nav items. Resources page has a nice directory with categorized breakdown of all the resources on the site (seems like it’d make a better front page, honestly.) Includes forums, docs, and videos.
  • Ubuntu Developer – cluttered main nav, nav items contain Ubuntu-specific jargon – e.g. not sure what ‘scopes’ or ‘core’ actually are or why I’d care, has a community, has a log in. Has blog and latest events highlights but they are identical. Similar to Apple, actually useful information is pushed down to the full header at the very bottom of the page – including SDK, tutorials, get started guide, how to publish apps, community.

One thing that was common to all of these sites was the developer.*.com URL. (developer.windows.com does redirect to the dev.windows.com URL.) I think because of this, developer.fedoraproject.org seems like it would match the broader platform developer URL pattern out there.

Another thing they seemed to all have in common were directories of technologies – frameworks, platforms, languages, tools, etc. – affiliated with their own platform. Many focused on deployment too – mostly to app stores, but deployment nonetheless.

Looking at the main structure of the site in the initial prototype, I felt it honestly was a pretty good organizational structure given the other developer sites out there. I wanted to tweak some of the wording of the headers (to make them action-oriented,) and had some suggestions as to additional content pieces that could be developed, but for the most part the prototype had a solid structure. I drew up a sitemap in Inkscape to help visualize it:

Suggested sitemap for developer.fedoraproject.org. Click on image above to view SVG source in git.

Suggested sitemap for developer.fedoraproject.org. Click on image above to view SVG source in git.


With confidence in the site information architecture / basic structure of the content, I then started mocking up what it could look like. Some things I considered while drawing this out:

  • The visual challenge here is to give the site its own feel, but also make sure it feels like a part of the Fedora ‘family,’ and has a visual design that really feels related to the other new Fedora sites we have like getfedora.org, spins.fedoraproject.org, and arm.fedoraproject.org.
  • There should probably be a rotating banner feature area where features and events could be called out on the front page and maybe even on subpages. I don’t like the page full of promos that is the Apple Developer front page – it comes off as a bit disorganized IMHO – so rotating banners seemed preferable to avoid banners taking over the whole front page.
  • The main content of the website is mostly a series of simple reference guides about the platforms, frameworks, and languages in Fedora, which I understand will be kept updated and added to regularly. I think reference material can appear as rather static and perhaps stale, but I think the site should definitely come across as being updated and “living,” so featuring regularly updated content like blog posts could help with that.

So here’s what I came up with, taking some article content from developerblog.redhat.com to fill in the blog post areas –

Mockup for the front page of developer.fedoraproject.org. Click on image to view mockup source and other mockups in git.

Mockup for the front page of developer.fedoraproject.org. Click on image to view mockup source and other mockups in git.

A few notes about the design here:

  • To keep this looking like it’s part of the Fedora family, I employed a number of elements. I stuck with a white background horizontal branding/nav bar along the top and a fat horizontal banner below that. It also has the common Fedora websites footer (which may need some additions / edits) and a mostly white, blue, and gray color palette. The base font is Open Sans, as it is for the other sites we’ve released in the past year. The site still has its own feel, though; there are some little tweaks here and there to do this. For example, I ended up modifying the front page design such that the top banner runs all the way to the very top margin, and recedes only on sub pages to show the white horizontal background on the top header.
  • There is, as planned, a rotating banner feature area. The example in the mockups features DevAssistant, and the team already has other feature banners planned.
  • Blog content in the form of two featured posts as well as a recent blog headlines listing hopefully will inject a more active / current sense to the overall site.
  • I made mockups for each of the major sections of the site and picked out some CC-BY licensed photography roughly related to each corresponding section in its title banner up top.


Petr had also asked if I’d be able to provide the CSS, images, and icons for the site once the mockups were done. So I decided why not? The framework he and Adam used to set up the site was a static framework I was not familiar with – Ruby-based Jekyll, also used by GitHub Pages – and I thought it might be fun to learn more about it.


If you check out the tree for the website implementation, you’ll see a bunch of basic HTML files as well as markdown (*.md) files (the latter mostly in the content repo, which gets set up as a subdirectory under the website tree when you check the project out.) Jekyll lets you break down pages of the site into reusable chunks (e.g., header, footer, etc.), and it also lets you design different layouts that you can link to different pieces of content.

Whether any given page / chunk of content you’re working on is a *.md file or a *.html file, Jekyll has this thing at the top of each file called ‘front matter’ where you can configure the page (e.g., set which layout gets applied to it,) or even define variables. This is where I set the longer titles for each page/section as well as placed the descriptions for the sections that get placed in the title banner area.
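For example, the front matter of a section page might look something like this (the layout name, title, and description here are hypothetical; the real values live in the site repo):

```yaml
---
layout: default
title: "Languages & Databases"
description: "Find the language or database you need for your project."
---
```

The layout template can then reference these values as {{ page.title }} and {{ page.description }} when rendering the title banner.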

Bizarre Jekyll Issue

Insert random interlude here :)

So I ran into a crazy, probably obscure issue with Jekyll during this process – due to my being a newbie and misunderstanding how it worked, yes, but Jekyll did happily build and spew out the site without complaint or pointing out the issue, so perhaps this little warning might help someone else. (Red Hatters Alex Wood and Jason Rist were really helpful in trying to help me debug this crazy issue. The intarwebs just totally failed on this one.)

I was trying to use page variables when trying to implement the title banners at the top of every page – I needed to know which page I was on in order to display the correct page title and description at the top. The variables were just spitting out blank nothingness in the built page. It turns out the issue was that I was using *.md files that had the same name as *.html files, with some variables set in one file and some in the other, and Jekyll seemed to want to just blank them all out when it encountered more than one file with the same base file name. I was able to fix the problem by merging the files into one file (I stuck with HTML and deleted the *.mds.)

Here it is

So the implemented site design is in place in the repo now, and I’ve handed the work back off to the team to tweak, hook up functionality to, and finish up content development. There is a development build of the website at developer-phracek.rhcloud.com, but I’m pretty sure that’s behind the work I finished last week, since I see a lot of issues I know I fixed in the code. It’s something to poke around, though, and use to get a feel for the site.

Want to Help?

we need your help!

If you poked around the development version of the site, you might have noticed there’s quite a bit more content needed. This is something you can help with!

The team has put together a contribution guide, and the core site content is basically formatted in Markdown syntax (Here’s a neat markdown tutorial.) More information about how to contribute is on the front page of the content repo in GitHub.

Thoughts? Ideas? Need help contributing? Hit me up in the comments :)

Mixed use, Itu, Brazil

This project gathers on a single site three different uses: a residential building, a hotel, and a convention center. The client being a real-estate investor, it was required, as usual in that context, to build as much as possible; in other words, to use the maximum construction area permitted by law. This...

September 14, 2015

Interview with Lucas Ribeiro


Could you tell us something about yourself?

Hi, I am a 24-year-old Brazilian artist, who lives in Sao Paulo. Married and eldest of three brothers. Watching my mother making a lot of pencil portraits when I was a child inspired me to do the same years later, since I saw it was not impossible to learn how to draw. I started to draw with pencils when I was 13, but nothing serious until I reached the age of 20. I began to learn digital painting, watercolor and improving my drawing skill (self-taught). Now I have worked on book covers, character design, a mascot for the government of Sao Paulo and recently even graphic design. I use mainly Krita, but previously used GIMP, MyPaint, ArtRage, Sketchbook Pro, SAI… but Krita fits everything that I need better.

Do you paint professionally, as a hobby artist, or both?

I’m starting to do more freelance jobs. So I’m combining my hobby with my profession, which is a blessing. So, it is both.

What genre(s) do you work in?

I’m very eclectic, but I have to say that fantasy art and the cartoon style with a more realistic approach, like the concept art of Pixar and Dreamworks, are my favourites, and I plan to dedicate myself more to these styles.

Whose work inspires you most — who are your role models as an artist?

Well, this list is very, very long. I need to say that movies and books inspire me a lot: Lord of the Rings, Star Wars and the Disney animated movies. Inspiration can come from anywhere at any time. A song, a trip. But speaking about artists, I can’t fail to mention David Revoy and Ramon Miranda for doing excellent work with open source tools.

How and when did you get to try digital painting for the first time?

Well, I think that was with MS Paint Brush in the 90’s. Even though I was using a mouse, I was a happy child doing some ugly stuff. But when I started to draw seriously, I heard of Gimp Paint Studio and gave it a try. After that I started to try different tools.

What makes you choose digital over traditional painting?

Actually I draw a lot with pencils, pen, ink and watercolor. But digital painting gives you endless possibilities for combinations and experiments without any cost (both in money and in time).

How did you find out about Krita?

I was looking for tips and resources for painting with GIMP, until I found out that David Revoy was using Krita to do the free “Pepper & Carrot” webcomic. When I looked at the pictures, I was impressed. It’s awesome.

What was your first impression?

The brushes feel very natural, almost like the real world. The way the colour blends is very unique. There was no comparison with Photoshop in that, for example. The experience of painting with Krita was really natural and smooth, even though my old laptop lagged a little bit with previous versions of Krita.

What do you love about Krita?

In the first place: the brush engines and transform tools. I think they are the best on the market at the moment. The brush editor is very intuitive and powerful too.

What do you think needs improvement in Krita? Is there anything that really annoys you?

Maybe some speed improvements. When I’m using more layers in high resolution I feel that.

What sets Krita apart from the other tools that you use?

The way the brushes feel. There is no comparison with other painting tools. It’s very natural; in that way I feel I am really painting and not just using a digital tool.

If you had to pick one favourite of all your work done in Krita so far, what would it be, and why?

Every day I make studies or a new illustration. But I think I would choose the “Gangster Pug”. I used a lot of wet brushes, which is very similar to painting with watercolor in the real world. It’s basically the same workflow.

What techniques and brushes did you use in it?

Wet brushes, and airbrush with blending modes like Multiply and Overlay. The Muses and David Revoy’s V6 Brushpack is what I use most.

Where can people see more of your work?

Soon I’ll have a new website and portfolio. But right now, people can see it at behance and facebook. I invite everyone to visit me at these links, especially because 90% of my work is done in Krita now. For stuff like graphic design I use Inkscape or Blender.

Portfolio: behance.com/lucasdreams
My page: facebook.com/lucasvisualart

Anything else you’d like to share?

You can add me on facebook (fb.com/lucasdreams) or send me an email (lucas.visualart@gmail.com) and share your thoughts. If you have not used Krita yet, try it. I think it’s the best tool on the market at the moment, and it’s really a production machine, whether you’re interested in VFX painting, illustration, comics and concept art, or just in painting and sketching.

September 11, 2015

An Inkscape SVG Filter Tutorial — Part 1

Part 1 introduces SVG filter primitives and demonstrates the creation of a Fabric filter effect. Part 2 shows various ways to colorize the fabric.


SVG filters allow bitmap-type manipulations inside a vector format. Scalability is preserved by pushing the bitmap processing to the SVG renderer at the point when the final screen resolution is known. SVG filters are very powerful, so powerful in fact that they have been moved out of SVG and into a separate CSS specification so that they can also be applied to HTML content. This power comes with a price: SVG filters can be difficult to construct. For example, a simple drop shadow filter consists of three connected filter primitives as shown in this SVG code:
<filter id="DropShadow">
  <feOffset in="SourceAlpha" dx="2" dy="2" result="offset"/>  ❶
  <feGaussianBlur in="offset" stdDeviation="2" result="blur"/>  ❷
  <feBlend in="SourceGraphic" in2="blur" mode="normal"/>  ❸
</filter>

  1. Offset filter primitive: Creates an image using the text alpha (SourceAlpha) and shifts it down and right two pixels. Results in shifted black text.
  2. Gaussian Blur filter primitive: Blurs the result of the previous step (“offset”).
  3. Blend filter primitive: Renders the original image (SourceGraphic) over the result of the previous step (“blur”).
Some sample text!

A drop shadow applied to text.

Inkscape contains a Filter Dialog that can be used to construct filters. Here is the dialog showing the above drop-shadow filter effect:

Filter dialog showing the three filter primitives and how they are connected.

The Inkscape Filter Dialog showing a drop-shadow filter effect. The dialog shows the filter primitives and how their inputs (left-pointing triangles) are connected (black lines). It also contains controls for setting the various filter primitive attributes.

There can be more than one way to construct the same filter effect. For example, the order of the offset and blur primitives can be swapped without changing the result:

Some more text!

An alternative drop-shadow filter applied to text.

Inkscape contains over 200 canned filter effects, many of which have adjustable parameters. But sometimes none of them will do exactly what you want. In that case you can construct your own filter effect. It’s not as hard as it first seems once you understand some of the basic filter primitives.

A Fabric Filter

This tutorial creates a basic filter that can be applied to a pattern to create realistic fabric. It will introduce several very useful filter primitives that are fundamental to most of Inkscape’s canned filter effects.

Creating a Pattern

To begin with, we need a pattern that is the basis of the weave of the fabric. I’ve constructed a simple pattern consisting of four rectangles, two for the horizontal threads and two for the vertical threads. I’ve applied a linear gradient to give them a 3D look. One can certainly do better, but as the pattern tile is quite small, one need not go overboard. Once you have drawn all the pattern parts, select them and then use Object->Pattern->Objects to Pattern to convert them to a pattern. The new pattern will then be available in the Pattern drop-down menu that appears when the Pattern icon is highlighted on the Fill tab of the Fill and Stroke dialog.

The pattern consisting of four rectangles with linear gradients simulating a small section of the fabric weave.

The pattern (shown scaled up).

Next, apply the fabric pattern to an object to create simple fabric.

The basic weave pattern applied to a large rectangle.

The pattern applied to a large rectangle.

Adding Blur

The pattern looks like a brick wall. It’s too harsh for fabric. We can soften the edges by applying a little blur. This is done through the Gaussian Blur filter primitive. Open the Filter Editor dialog (Filters->Filter Editor). Click on the New button to create a new, empty filter. A new filter with the name “filter1” should be created. You can double-click on the name to give the filter a custom name. Apply the filter to the fabric piece by selecting the piece and then checking the box next to the filter name. Your piece of fabric will disappear; don’t worry, we just need to add a filter primitive to get it to show back up. To add a blur filter primitive, select Gaussian Blur in the drop-down menu next to Add Effect and then click the Add Effect button. The fabric should now be visible with the blur effect applied. You can change the amount of blur with the slider next to Standard Deviation; a value of 0.5 seems to be about right.

Filter Dialog image.

The Filter Effect dialog after applying a small amount of blur.

Note how the input to the Gaussian Blur primitive (triangle next to “Gaussian Blur”) is linked (under Connections) to the Source Graphic.
The basic weave pattern applied to a large rectangle.

A small amount of blur applied to the fabric.
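In the generated SVG, the filter at this stage amounts to a single primitive (the id is whatever Inkscape assigned):

```xml
<filter id="filter1">
  <!-- As the first primitive in the chain, the input defaults to SourceGraphic -->
  <feGaussianBlur stdDeviation="0.5"/>
</filter>
```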

Distorting the Threads

The pattern is still too rigid. The threads in real fabric are not so regular looking. We need to add some random distortions. To do so, we’ll link up two different filter primitives. The first filter primitive, Turbulence, will generate random noise. This noise will be used as an input to a Displacement Map filter primitive where pixels are shifted based on the value of the input.

The Turbulence Filter Primitive

Add a Turbulence filter primitive to the filter chain by selecting Turbulence from the drop-down menu next to the Add Effect button, then click on the button. You should see a rectangular region filled with small random dots. There are a couple of things to note. The first is that the rectangle will be bigger than your initial object. This is normal. The filter region is enlarged by 10% on each side and the Turbulence filter fills this region. This is done on purpose as some filter primitives draw outside the object (e.g. the Gaussian Blur and Offset primitives). You can set the boundary of the filter region under the Filter General Settings tab. The default 10% works for most filters. You don’t want the region to be too large as it affects the time to render the filter. The second thing to note is that the Turbulence filter primitive has no inputs, despite what is shown in the Filter Editor dialog.

There are a number of parameters to control the generation of the noise:

Type
There are two values: Turbulence and Fractal Noise. The difference between the two is somewhat technical so I won’t go into it here. (See the Turbulence Filter Primitive section in my guide book.)
Base Frequency
This parameter controls the granularity of the noise. The value roughly corresponds to the inverse of the length in pixels of the fluctuations. (Note that the default value of ‘0’ is a special case and doesn’t follow this rule.)
Octaves
The number of octaves used in creating the turbulence. For each additional octave, a new contribution is added to the turbulence with the frequency doubled and the contribution halved compared to the preceding octave. It is usually not useful to use a value above three or four.
Seed
The seed for the pseudo-random number generator used to create the turbulence. Normally one doesn’t need to change this value.

One can guess that variations in the threads are about on the order of the distance between adjacent threads. For the pattern used here, the vertical threads are 6 pixels apart. This gives a base frequency of about 0.17 (i.e. 1/6). The value of Type should be changed to Fractal Noise. (Both Type values give good visual results but the Turbulence value leads to a shift of the image down and to the right for technical reasons.) Here is the resulting dialog:

Filter Dialog image.

The Filter Effect dialog after adding the Turbulence filter primitive.

And here is the resulting image:

The output of the first turbulence filter primitive.

The output of the filter chain which is at this point the output of the Turbulence filter primitive.
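In SVG terms, the primitive just added corresponds to something like the following (the result name is arbitrary):

```xml
<!-- Fine-grained fractal noise; base frequency ~1/6 matches the 6-pixel thread spacing -->
<feTurbulence type="fractalNoise" baseFrequency="0.17" result="noise"/>
```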

The Displacement Map Filter Primitive

Now we need to add the Displacement Map filter primitive, which will take the outputs of both the Gaussian Blur and the Turbulence filter primitives as inputs. Select Displacement Map from the drop-down menu and then click on the Add Effect button. Note that both inputs to the Displacement Map filter primitive are set to the last filter primitive in the filter chain. We’ll need to drag the top one to the Gaussian Blur filter primitive. (Start the drag in the little triangle at the right of the filter primitive in the list.) Again, the image doesn’t change. We’ll need to make one more change, but first here are the parameters for the Displacement Map filter primitive:

Scale
The scale factor is used to determine how far pixels should be shifted. The magnitude of the shift is the value of the displacement map (on a scale of 0 to 1) multiplied by this value.
X displacement
Determines which component (red, green, blue, alpha) should be used from the input map to control the x displacement.
Y displacement
Determines which component (red, green, blue, alpha) should be used from the input map to control the y displacement.

For our purpose, any values of X displacement and Y displacement are equally valid as all channels contain the same type of pseudo-random noise. To actually see a shift, one must set a non-zero scale factor. A value of about six seems to give a good effect.

Filter Dialog image.

The Filter Effect dialog after adding and adjusting the Displacement Map filter primitive.

And here is the resulting image:

The output of the Displacement Map filter primitive.

The output of the filter chain after adding and adjusting the Displacement Map filter primitive.

Distorting the Fabric

Fabric rarely lies flat unless stretched, and even then it is hard to make the threads lie straight and parallel. We can add a random wave to the fabric by adding another Turbulence and Displacement Map pair, this time using a lower Base Frequency. Repeat the instructions above to add the two filter primitives, but this time connect the top input of the new Displacement Map to the previous Displacement Map. Set the Base Frequency to a value of 0.01. Set the Type to Fractal Noise. Set the Scale to ten.
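Written out as raw SVG, the filter chain we have built might look something like the following sketch. (The Gaussian Blur's stdDeviation comes from the earlier setup step and is assumed here; the channel selectors are arbitrary since, as noted above, all of the turbulence channels carry the same kind of noise.)

```xml
<filter id="fabric">
  <!-- Soften the raw thread pattern (parameter assumed from the earlier step) -->
  <feGaussianBlur in="SourceGraphic" stdDeviation="0.5" result="blur"/>

  <!-- Fine noise: one bump per thread spacing (6 px apart -> baseFrequency ~0.17) -->
  <feTurbulence type="fractalNoise" baseFrequency="0.17" result="fine"/>
  <feDisplacementMap in="blur" in2="fine" scale="6"
                     xChannelSelector="R" yChannelSelector="G" result="threads"/>

  <!-- Coarse noise: a slow random wave across the whole piece of cloth -->
  <feTurbulence type="fractalNoise" baseFrequency="0.01" result="coarse"/>
  <feDisplacementMap in="threads" in2="coarse" scale="10"
                     xChannelSelector="R" yChannelSelector="G"/>
</filter>
```

This is exactly the structure the Filter Effect dialog builds for you; editing the XML directly is just another way to see how the primitives chain together.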

Filter Dialog image.

The Filter Effect dialog after adding and adjusting the second Turbulence and Displacement Map filter primitives.

And here is the resulting image:

The final fabric image.

The output of the filter chain after distorting the fabric.

Of course, the pattern and filter can be applied to an arbitrary shape:

The pattern and filter applied to a blob.

The pattern and filter applied to a cloth patch.


We have constructed a basic Fabric filter but there is plenty of room for improvement. In the next part we’ll look at ways to add color to the fabric.

A section of a bag of coffee beans.

A PNG image just for Google+ which doesn’t support SVG images.

The blooms of summer, and weeds that aren't weeds

[Wildflowers on the Quemazon trail] One of the adjustments we've had to make in moving to New Mexico is getting used to the backward (compared to California) weather. Like, rain in summer!

Not only is rain much more pleasant in summer, when a dramatic thundershower cools you off on a hot day instead of a constant cold drizzle in winter (yes, I know that by now Californians need a lot more of that cold drizzle! But it's still not very pleasant being out in it); summer rain also has another unexpected effect: flowers all summer, a constantly changing series of them.

Right now the purple asters are just starting up, while skyrocket gilia and the last of the red penstemons add a note of scarlet to a huge array of yellow flowers of all shapes and sizes. Here's the vista that greeted us on a hike last weekend on the Quemazon trail.

Down in the piñon-juniper where we live, things aren't usually quite so colorful; we lack many red blooms, though we have just as many purple asters as they do up on the hill, plus lots of pale trumpets (a lovely pale violet gilia) and Cowpen daisy, a type of yellow sunflower.

But the real surprise is a plant with a modest name: snakeweed. It has other names, but they're no better: matchbrush, broomweed. It grows everywhere, and most of the year it just looks like a clump of bunchgrass.

[Snakeweed in bloom] Then, come September, especially in a rainy year like this one, all that snakeweed suddenly bursts into a glorious carpet of gold.

We have plenty of other weeds -- learning how to identify Russian thistle (tumbleweed), kochia and amaranth when they're young, so we can pull them up before they go to seed and spread farther, has launched me on a project of an Invasive Plants page for the nature center (we should be ready to make that public soon).

But snakeweed, despite the name, is a welcome guest in our yard, and it lifts my spirits to walk through it on a September evening.

By the way, if anyone in Los Alamos reads this blog, Dave and I are giving our first planetarium show at the nature center tomorrow (that's Friday) afternoon. Unlike most PEEC planetarium shows, it's free! Which is probably just as well since it's our debut. If you want to come see us, the info is here: Night Sky Fiesta Planetarium Show.

September 08, 2015

Softness and Superresolution


Experimenting and Clarifying

A small update on how things are progressing (hint: well!) and some neat things the community is playing with.

I have been quiet these past few weeks because, apparently, I decided I didn’t have enough to do and thought a rebuild/redesign of the GIMP website would be fun. Well, it is fun, and it’s something that couldn’t hurt to do. So I stepped up to help out.

A Question of Softness

There was a thread recently on a certain large social network in a group dedicated to off-camera flash. The thread was started by someone with the comment:

The most important thing you can do with your speed light is to put some rib [sic] stop sail cloth over the speed light to soften the light.

Which just about gave me an aneurysm (those that know me and lighting can probably understand why). Despite some sound explanations about why this won’t work to “soften” the light, there was a bit of back and forth about it. To make matters worse, even after over 100 comments, nobody bothered to just go out and shoot some sample images to see it for themselves.

So I finally went out and shot some to illustrate and I figured they would be more fun if they were shared (I did actually post these on our forum).

I quickly set a light stand up with a YN560 on it pointed at my garden statue. I then took a shot with bare flash, one with diffusion material pulled over the flash head, and one with a 20” DIY softbox attached.

Here’s what the setup looked like with the softbox in place:

Soft Light Test - Softbox Setup Simple light test setup (with a DIY softbox in place).

Remember, this was done to demonstrate that simply placing some diffusion fabric over the head of a speedlight does nothing to “soften” the resulting light:

Softness test image bare flash Bare flash result. Click to compare with diffusion material.

This shows clearly that diffusion material over the flash head does nothing to affect the “softness” of the resulting light.

For a comparison, here is the same shot with the softbox being used:

Softness test image softbox Same image with the softbox in place. Click to compare with diffusion material.

I also created some crops to help illustrate the difference up close:

Softness test crop #1 Click to compare: Bare Flash With Diffusion With Softbox
Softness test crop #2 Click to compare: Bare Flash With Diffusion With Softbox

Hopefully this demonstration can help put to rest any notion of softening a light through close-set diffusion material (at not-close flash-to-subject distances). At the end of the day, the “softness” quality of a light is a function of the apparent size of the light source relative to the subject. (The sun is the biggest light source I know of, but it’s so far away that its quality is quite harsh.)
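That last point is easy to quantify: what matters is the angular size of the source as seen from the subject. A small sketch (the flash-head and softbox dimensions below are made-up but typical; the sun figures are real):

```python
import math

def apparent_size_deg(diameter_m, distance_m):
    """Angular diameter of a light source, as seen from the subject."""
    return math.degrees(2 * math.atan(diameter_m / (2 * distance_m)))

flash_head = apparent_size_deg(0.05, 2.0)         # bare flash (or flash + sock) at 2 m
softbox    = apparent_size_deg(0.50, 2.0)         # ~20" softbox at the same distance
sun        = apparent_size_deg(1.39e9, 1.496e11)  # enormous, but very far away (~0.53 deg)
```

Draping fabric over the flash head leaves the first number unchanged, which is exactly what the test shots show; the softbox makes the source roughly ten times larger in angular terms, and the sun, despite its size, subtends about half a degree.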

A Question of Scaling

On discuss, member Mica asked an awesome question about what our workflows are for adding resolution (upsizing) to an image. There were a bunch of great suggestions from the community.

One I wanted to talk about briefly I thought was interesting from a technical perspective.

Both Hasselblad and Olympus announced not too long ago the ability to drastically increase the resolution of images in their cameras using a “sensor-shift” technology: the sensor is shifted by a pixel or so while shooting multiple frames, and the results are then combined into a much larger megapixel image (200MP in the case of the Hasselblad, and 40MP in the Olympus).

It turns out we can do the same thing manually by burst shooting a series of images while handholding the camera (the subtle movement of our hand while shooting provides the requisite “shift” to the sensor). Then we simply combine the images, upscale, and average the results to get a higher resolution result.

The basic workflow uses Hugin’s align_image_stack, ImageMagick’s mogrify, and a G’MIC mean-blend script to achieve the results.

  1. Shoot a bunch of handheld images in burst mode (if available).
  2. Develop raw files if that’s what you shot.
  3. Scale images up to 4x resolution (200% in width and height). Straight nearest-neighbor type of upscale is fine.
    • In your directory of images, create a new sub-directory called resized.
    • In your directory of images, run mogrify -scale 200% -format tif -path ./resized *.jpg if you use jpg’s, otherwise change as needed. This will create a directory full of upscaled images.
  4. Align the images using Hugin’s align_image_stack script.
    • In the resized directory, run /path/to/align_image_stack -a OUT file1.tif file2.tif ... fileX.tif The -a OUT option will prefix all your new images with OUT.
    • I move all of the OUT* files to a new sub-directory called aligned.
  5. In the aligned directory, you now only need to mean average all of the images together.
    • Using Imagemagick: convert OUTfile*.tif -evaluate-sequence mean output.bmp
    • Using G’MIC: gmic video-avg.gmic -avg "*.tif" -o output.bmp
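The arithmetic at the heart of the trick (upscale each frame, then mean-average the aligned stack) can be sketched in plain Python. This is only a toy illustration with tiny pixel grids, not a replacement for the mogrify/align_image_stack/G’MIC pipeline above:

```python
def upscale_nn(img, factor=2):
    """Nearest-neighbor upscale of a 2-D list of pixel values (step 3 above)."""
    out = []
    for row in img:
        wide = [v for v in row for _ in range(factor)]  # repeat each column
        out.extend(list(wide) for _ in range(factor))   # repeat each row
    return out

def mean_stack(frames):
    """Pixel-wise mean of equally sized 2-D frames (step 5 above)."""
    return [[sum(px) / len(px) for px in zip(*rows)]
            for rows in zip(*frames)]

# Two "handheld" frames of the same scene; the noise differs between them,
# so averaging recovers a cleaner, higher-resolution estimate.
frame_a = [[100, 200], [200, 100]]
frame_b = [[104, 196], [204, 96]]
result = mean_stack([upscale_nn(frame_a), upscale_nn(frame_b)])
```

The real workflow does the same three things, just on full-size TIFFs and with sub-pixel alignment handled by align_image_stack.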

I used 7 burst capture images from an iPhone 6+ (default resolution 3264x2448). This is the test image:

Superresolution test image Sample image, red boxes show 100% crop areas.

Here is a 100% crop of the first area:

100% crop of the base image, straight upscale.
100% crop, super resolution process result.

The second area crop:

100% crop of the base image, straight upscale.
100% crop, super resolution process result.

Obviously this doesn’t replace the ability to have that many raw pixels available in a single exposure, but if the subject is relatively static this method can do quite well to help increase the resolution. As with any mean/median blending technique, a nice side-effect of the process is great noise reduction as well…

Not sure if this warrants a full article post, but may consider it for later.

September 07, 2015

GUADEC Gothenburg

The GUADEC posts have settled by now, which is why it’s time for me to post another one. I hope those of you lucky enough to have been able to visit the beautiful, but expensive, city of Gothenburg will enjoy this little 4K edit of the moments I captured on my pocket camera.

GUADEC Gothenburg at 4K

And if you did, check out some of the photos too. I've stopped counting how many I’ve attended, but it’s always great to meet up with all the creative minds and the new student blood that makes GNOME happen. Thanks to all of you, and especially to this year’s organizers! They did a stellar job.

DSC01363 DSC01140 DSC01233

September 04, 2015

Kickstarter Drawing Challenge!

Every month, we’ve got a drawing challenge on the Krita forum, where you can paint and draw a set subject and discuss your work with others. But this month’s challenge is special. The subject is our mascot Kiki, and the winner’s drawing will be on the t-shirts sent out as rewards for the kickstarter backers! And the winner will get a t-shirt as well, of course. So, connect your drawing tablet, fire up the latest Krita, get drawing and share the results on the forum!


Hacking / Customizing a Kobo Touch ebook reader: Part I, sqlite

I've been enjoying reading my new Kobo Touch quite a lot. The screen is crisp, clear and quite a bit whiter than my old Nook; the form factor is great, it's reasonably responsive (though there are a few places on the screen where I have to tap harder than other places to get it to turn the page), and I'm happy with the choice of fonts.

But as I mentioned in my previous Kobo article, there were a few tweaks I wanted to make; and I was very happy with how easy it was to tweak, compared to the Nook. Here's how.

Mount the Kobo

When you plug the Kobo in to USB, it automatically shows up as a USB-Storage device once you tap "Connect" on the Kobo -- or as two storage devices, if you have an SD card inserted.

Like the Nook, the Kobo's storage devices show up without partitions. For instance, on Linux, they might be /dev/sdb and /dev/sdc, rather than /dev/sdb1 and /dev/sdc1. That means they also don't present UUIDs until after they're already mounted, so it's hard to make an entry for them in /etc/fstab if you're the sort of dinosaur (like I am) who prefers that to automounters.

Instead, you can use the entry in /dev/disk/by-id. So fstab entries, if you're inclined to make them, might look like:

/dev/disk/by-id/usb-Kobo_eReader-3.16.0_N905K138254971:0 /kobo   vfat user,noauto,exec,fmask=133,shortname=lower 0 0
/dev/disk/by-id/usb-Kobo_eReader-3.16.0_N905K138254971:1 /kobosd vfat user,noauto,exec,fmask=133,shortname=lower 0 0

One other complication, for me, was that the Kobo is one of a few devices that don't work through my USB2 powered hub. Initially I thought the Kobo wasn't working, until I tried a cable plugged directly into my computer. I have no idea what controls which devices work through the hub and which ones don't. (The Kobo also doesn't give any indication when it's plugged in to a wall charger.)

The sqlite database

Once the Kobo is mounted, ls -a will show a directory named .kobo. That's where all the good stuff is: in particular, KoboReader.sqlite, the device's database, and Kobo/Kobo eReader.conf, a human-readable configuration file.

Browse through Kobo/Kobo eReader.conf for your own amusement, but the remainder of this article will be about KoboReader.sqlite.

I hadn't used sqlite before, and I'm certainly no SQL expert. But a little web searching and experimentation taught me what I needed to know.

First, make a local copy of KoboReader.sqlite, so you don't risk overwriting something important during your experimentation. The Kobo is apparently good at regenerating data it needs, but you might lose information on books you're reading.

To explore the database manually, run: sqlite3 KoboReader.sqlite

Some useful queries

Here are some useful sqlite commands, which you can generalize to whatever you want to search for on your own Kobo. Every query (though not the .tables command) must end with a semicolon.

Show all tables in the database:

.tables
The most important ones, at least to me, are content (all your books), Shelf (a list of your shelves/collections), and ShelfContent (the table that assigns books to shelves).

Show all column names in a table:

PRAGMA table_info(content);
There are a lot of columns in content, so try PRAGMA table_info(Shelf); to see a much simpler table.

Show the names of all your shelves/collections:

SELECT Name FROM Shelf;

Show everything in a table:

SELECT * FROM Shelf;

Show all books assigned to shelves, and which shelves they're on:

SELECT ShelfName,ContentId FROM ShelfContent;
ContentId can be a URL to a sideloaded book, like file:///mnt/sd/TheWitchesOfKarres.epub, or a UUID like de98dbf6-e798-4de2-91fc-4be2723d952f for books from the Kobo store.

Show all books you have installed:

SELECT Title,Attribution,ContentID FROM content WHERE BookTitle is null ORDER BY Title;
One peculiarity of Kobo's database: each book has lots of entries, apparently one for each chapter. The entries for chapters have the chapter name as Title, and the book title as BookTitle. The entry for the book as a whole has BookTitle empty, and the book title as Title. For example, I have file:///mnt/sd/hamlet.epub sideloaded:
sqlite> SELECT Title,BookTitle from content WHERE ContentID LIKE "%hamlet%";
ACT I.|Hamlet
Scene II. Elsinore. A room of state in the Castle.|Hamlet
Scene III. A room in Polonius's house.|Hamlet
Scene IV. The platform.|Hamlet
Scene V. A more remote part of the Castle.|Hamlet
Act II.|Hamlet
  [ ... and so on ... ]
ACT V.|Hamlet
Scene II. A hall in the Castle.|Hamlet
Each of these entries has Title set to the name of the chapter (an act in the play) and BookTitle set to Hamlet, except for the final entry, which has Title set to Hamlet and BookTitle set to nothing. That's why you need that query WHERE BookTitle is null if you just want a list of your books.

Show all books by an author:

SELECT Title,Attribution,ContentID FROM content WHERE BookTitle is null
AND Attribution LIKE "%twain%" ORDER BY Title;
Attribution is where the author's name goes. LIKE %% searches are case insensitive.

Of course, it's a lot handier to have a program that knows these queries so you don't have to type them in every time (especially since the sqlite3 app has no history or proper command-line editing). But this has gotten long enough, so I'll write about that separately.
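In the meantime, here is roughly what such a program looks like using Python's built-in sqlite3 module. The helper names are my own; the table and column names match the Kobo database described above, but run it against a local copy of KoboReader.sqlite, not the one on the device:

```python
import sqlite3

def list_books(db_path):
    """Whole-book rows only: chapter rows have BookTitle set, books have it null."""
    conn = sqlite3.connect(db_path)
    try:
        cur = conn.execute(
            "SELECT Title, Attribution FROM content "
            "WHERE BookTitle IS NULL ORDER BY Title")
        return cur.fetchall()
    finally:
        conn.close()

def books_by_author(db_path, author):
    """Case-insensitive substring match on Attribution (the author column)."""
    conn = sqlite3.connect(db_path)
    try:
        cur = conn.execute(
            "SELECT Title, Attribution FROM content "
            "WHERE BookTitle IS NULL AND Attribution LIKE ? ORDER BY Title",
            ("%" + author + "%",))
        return cur.fetchall()
    finally:
        conn.close()
```

Parameterized queries (the ? placeholder) also spare you from quoting headaches that bite quickly in the interactive sqlite3 shell.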

September 03, 2015

About FreeCAD, architecture and workflows

It's been quite some time since I posted about FreeCAD, but that doesn't mean things have been dead there. One of the important things we've been busy with in the past weeks is the transfer of the FreeCAD code and release files to GitHub. Sourceforge, where all this was hosted before, is unfortunately giving worrying signs of...

Updating the Shop!

Today we finally updated the offerings in the Krita webshop. The Comics with Krita training DVD by Timothée Giet is available as a free download using bittorrent, but if you want to support the Krita development, you can now download it directly for just €9,95. It’s still a really valuable resource, discussing not just Krita’s user interface, but also the technical details of creating comic book panels and even going to print.

We’ve also now got the USB sticks that were rewards in last year’s kickstarter for sale. By default, you get Comics with Krita, the Muses DVD and Krita 2.9.2 for Windows and OSX, as well as some brushes and other resources. That’s €34,95. For five euros more, I’ll put the latest Krita builds and the latest brush packs on it before sending it out. That’s a manual process at the moment since we release so often that it’s impossible to order the USB sticks from the supplier with the right version pre-loaded!

Because you can now get the Muses DVD, Comics with Krita and Krita itself on a USB Stick, we’ve reduced the price of the Muses DVD to €24,95! You can select either the download or the physical DVD, the price is the same.

And check out the very nice black tote bags and cool mugs as well!

All prices include shipping; V.A.T. is added only in the Netherlands.


September 02, 2015

Krita 2.9.7 Released!

Two months of bug fixing, feature implementing, Google-Summer-of-Code-sweating, it’s time for a new release! Krita 2.9.7 is special, because it’s the last 2.9 release that will have new features. We’ll be releasing regular bug fix releases, but from now on, all feature development focuses on Krita 3.0. But 2.9.7 is packed! There are new features, a host of bug fixes, the Windows builds have been updated with OpenEXR 2.2. New icons give Krita a fresh new look, updated brushes improve performance, memory handling is improved… Let’s first look at some highlights:

New Features:

Tangent Normal Brush Engine

As is traditional, in September, we release the first Google Summer of Code results. Wolthera’s Tangent Normal Brush engine has already been merged!


It’s a specialized feature, for drawing normal maps, as used in 3d engines and games. Check out the introduction video:

There were four parts to the project:

  • The Tangent Normal Brush Engine. (You need a tilt-enabled tablet stylus to use it!)
  • The bumpmap filter now accepts normal map input
  • A whole new Normalize filter
  • And a new cursor option: the Tilt Cursor

Fresh New Icons

We’ve got a whole new, carefully assembled icon set. All icons are tuned so they work equally well with light and dark themes. And it’s now also possible to choose the size of the icons in the toolbox.


If you’ve got a high-dpi screen, make them big, if you’re on a netbook, make them small! All it takes is a right-click on the toolbox.


And to round out the improvements to the toolbox, the tooltips now show the keyboard shortcuts you can use to activate a tool and you can show and hide the toolbox from the docker menu.

Improvements to the Wrap-around mode

Everyone who does seamless textures loves Krita’s unique wraparound mode. And now we’ve fixed two limitations: you can pick colors from anywhere, not just the original central image, and you can also fill from anywhere!


New Color Space Selector

Wolthera also added a new dialog for picking the color profile: the Color Profile browser. If you just want to draw without worries, Krita’s default will work for you, of course. But if you are curious, or want to go deeper into color management, or have advanced needs, then this browser dialog gives you all the details you need to make an informed choice!

Krita ships with a large set of carefully tuned ICC profiles created by Elle Stone. Her extensive notes on when one might prefer to use one or the other are included in the new color profile browser.

Compatibility with the rest of the world

We improved compatibility with Gimp: Krita can now load group layers, load XCF v3 files and finally load XCF files on Windows, too. Photoshop PSD support always gets attention. We made it possible to load bit/channel CMYK and Grayscale images, ZIP compressed PSD files and improved saving images with a single layer that has transparency to PSD.

Right-click to undo last path point

You can now right-click in the middle of creating a path to undo the last point.

More things…

  • The freehand tools’ Stabilizer mode has a new ‘Scalable smoothness’ feature.
  • You can now merge down Selection Masks
  • We already had shortcuts to fill your layer or selection with the foreground or background color or the current pattern at 100% opacity. If you press Shift in addition to the shortcut, the currently set painting opacity will be used.
  • We improved the assistants. You can now use the Shift key to add horizontal snapping to the handles of the straight-line assistants. The same shortcut will snap the third handle of the ellipse assistant to provide perfect circles.
  • Another assistant improvement: there is now a checkbox for assistant snapping that makes snapping happen only to the first snapped-to assistant. This removes snapping issues on infinite assistants while keeping the ability to snap to chained assistants while the checkbox is unticked.
  • Several brushes were replaced with optimized versions: Basic_tip_default, Basic_tip_soft, Basic_wet, Block_basic, Block_bristles, Block_tilt, Ink_brush_25, Ink_gpen_10, Ink_gpen_25 now are much more responsive.
  • There is a new and mathematically robust normal map combination blending mode.
  • Slow down cursor outline updates for randomized brushes: when painting with a brush with fuzzy rotation, the outline looked really noisy before, now it’s smoother and easier to look at.
  • You can now convert any selection into a vector shape!
  • We already had a trim image to layer size option, but we added the converse: Trim to Image Size, for when your layers are bigger than your image (which easily happens with moving, rotating and so on).
  • The dodge and burn filter got optimized
  • Fixes to the Merge Layer functionality: you can use Ctrl+E to merge multiple selected layers, you can merge multiple selected layers with layer styles and merging of clone layers together with their sources will no longer break Krita.
  • The Color to Alpha filter now works correctly  with 16 bits floating point per channel color models.
  • We added a few more new shortcuts: scale image to new size using CTRL+ALT+I, resize canvas with CTRL+ALT+C, create group layer with CTRL+G, and feather selection with SHIFT+F6.

Fixed Bugs:

We resolved more than 150 bugs for this release. Here’s a highlight of the most important bug fixes! Some important fixes have to do with loading bundles. This is now more robust, but you might have problems with older bundle files. We also redesigned the Clone and Stamp brush creation dialogs. Look for the buttons in the predefined brush-tip tab of the brush editor. There are also performance optimizations, memory leak fixes and more:

  1. BUG:351599 Fix abr (photoshop) brush loading
  2. BUG:343615 Remember the toolbar visibility state when switching to canvas-only
  3. BUG:338839 Do not let the wheel zoom if there are modifiers pressed
  4. BUG:347500 Fix active layer activation mask
  5. Remove misleading error message after saving fails
  6. BUG:350289 Prevent Krita from loading incomplete assistants.
  7. BUG:350960 Add ctrl-shift-s as default shortcut for “Save As” on Windows.
  8. Fix the Bristle brush presets
  9. Fix use normal map checkbox in the bumpmap filter UI.
  10. Fix loading the system-set monitor profile when using colord
  11. When converting between linear light sRGB and gamma corrected sRGB, automatically uncheck the “Optimize” checkbox in the colorspace conversion dialog.
  12. BUG:351488 Do not share textures when that’s not possible. This fixes showing the same image in two windows on two differently profiled monitors.
  13. BUG:351488 Update the display profile when moving screens. Now Krita will check whether you moved your window to another monitor, and if it detects you did that, recalculate the color correction if needed.
  14. Update the display profile after changing the settings — you no longer need to restart Krita after changing the color management settings.
  15. BUG:351664 Disable the layerbox if there is no open image, fixing a crash that could happen if you right-clicked on the layerbox before opening an image.
  16. BUG:351548 Make the transform tool work with Pass Through group layers
  17. BUG:351560 Make sure a default KoColor is black and transparent (fixes the default color settings for color fill layers)
  18. Lots of memory leak fixes
  19. BUG:351497 Blacklist “photoshop”:DateCreated” when saving. Photoshop adds a broken metadata line to JPG images that gave trouble when saving an image that contained a JPG created in Photoshop as a layer to Krita’s native file format.
  20. Ask for a profile when loading 16 bits PNG images, since Krita assumes linear light is default for 16 bits per channel RGB images.
  21. Improve the performance of most color correction filters
  22. BUG:350498 Work around encoding issues in kzip: images with a Japanese name now load correctly again.
  23. BUG:348099 Better error messages when exporting to PNG.
  24. BUG:349571 Disable the opacity setting for the shape brush. It hasn’t worked for about six years now.
  25. Improve the Image Recovery dialog by added some explanations.
  26. BUG:321361 Load resources from nested directories
  27. Do not use a huge amount of memory to save the pre-rendered image to OpenRaster or KRA files.
  28. BUG:351298 Fix saving CMYK JPEG’s correctly and do not crash saving 16 bit CMYK to JPEG
  29. BUG:351195 Fix slowdown when activating “Isolate Layer” mode
  30. Fix loading of selection masks
  31. BUG:345560 Don’t add the files you select when creating a File Layer  to the recent files list.
  32. BUG:351224 Fix crash when activating Pass-through mode for a group with transparency mask
  33. BUG:347798 Don’t truncate fractional brush sizes on eraser switch
  34. Don’t add new layers to a locked group layer
  35. Transform invisible layers if they are part of the group
  36. BUG:345619 Allow Drag & Drop of masks
  37. Fix the Fill Layer dialog to show the correct options
  38. BUG:344490 Make the luma options in the color selector settings translatable.
  39. BUG:351193 Don’t hang when isolating a layer during a stroke
  40. BUG:349621 Palette docker: Avoid showing a horizontal scrollbar
  41. Many fixes and a UI redesign for the Stamp and Clipboard brush creation dialogs
  42. BUG:351185 Make it possible to select layers in a pass-through group using the R shortcut.
  43. Don’t stop loading a bundle when a wrong manifest entry is found
  44. BUG:349333 fix inherit alpha on fill layers
  45. BUG:351005 Don’t crash on closing krita if the filter manager is open
  46. BUG:347285: Open the Krita Manual on F1 on all platforms
  47. BUG:341899 Workaround for Surface Pro 3 Eraser
  48. BUG:350588 Fix a crash when the PSD file type is not recognized by the system
  49. BUG:350280 Fix a hangup when pressing ‘v’ and ‘b’ in the brush tool simultaneously
  50. BUG:350280 Fix crash in the line tool.
  51. BUG:350507 Fix crash when loading a transform mask with a non-affine transform


August 31, 2015

Freaky Details (Calvin Hollywood)

Freaky Details (Calvin Hollywood)

Replicating Calvin Hollywood's Freaky Details in GIMP

German photographer/digital artist/photoshop trainer Calvin Hollywood has a rather unique style to his photography. It’s a sort of edgy, gritty, hyper-realistic result, almost a blend between illustration and photography.

Calvin Hollywood Examples

As part of one of his courses, he talks about a technique for accentuating details in an image that he calls “Freaky Details”.

Here is Calvin describing this technique using Photoshop:

In my meandering around different retouching tutorials I came across it a while ago, and wanted to replicate the results in GIMP if possible. There were a couple of problems that I ran into for replicating the exact same workflow:

  1. Lack of a “Vivid Light” layer blend mode in GIMP
  2. Lack of a “Surface Blur” in GIMP

Those problems have been rectified (and I have more patience these days to figure out what exactly was going on), so let’s see what it takes to replicate this effect in GIMP!

Replicating Freaky Details


The only extra thing you’ll need to be able to replicate this effect is G’MIC for GIMP.

You don’t technically need G’MIC to make this work, but the process of manually creating a Vivid Light layer is tedious and error-prone in GIMP right now. Also, you won’t have access to G’MIC’s Bilateral Blur for smoothing. And, seriously, it’s G’MIC - you should have it anyway for all the other cool stuff it does!
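For the curious, manually creating a Vivid Light result means applying, per channel, a burn/dodge hybrid. Here is a sketch of the commonly documented formula (my own illustration for reference, not code taken from G'MIC or Photoshop), with channel values normalized to 0..1:

```python
def vivid_light(base, blend):
    """Vivid Light blend of one channel pair, both values in 0..1."""
    if blend <= 0.5:
        # Color-burn half: darker blend values darken the base aggressively.
        if blend == 0.0:
            return 0.0
        return max(0.0, 1.0 - (1.0 - base) / (2.0 * blend))
    else:
        # Color-dodge half: lighter blend values brighten the base aggressively.
        if blend == 1.0:
            return 1.0
        return min(1.0, base / (2.0 * (1.0 - blend)))
```

Doing this by hand in GIMP means building the burn and dodge halves on separate layers and combining them, which is exactly the tedious, error-prone process the G'MIC blend mode saves you from.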

Summary of Steps

Here’s the summary of steps we are about to walk through to create this effect in GIMP:

  1. Duplicate the background layer.
  2. Invert the colors of the top layer.
  3. Apply “Surface Blur” to top layer.
  4. Set top layer blend mode to “Vivid Light”.
  5. New layer from visible.
  6. Set layer blend mode of new layer to “Overlay”, hide intermediate layer.

There are just a couple of small things to point out though, so keep reading to be aware of them!

Detailed Steps

I’m going to walk through each step to make sure it’s clear, but first we need an image to work with!

As usual, I’m off to Flickr Creative Commons to search for a CC licensed image to illustrate this with. I found an awesome portrait taken by the U.S. National Guard/Staff Sergeant Christopher Muncy:

New York National Guard, on Flickr New York National Guard by U.S. National Guard/Staff Sergeant Christopher Muncy on Flickr (cb).
Airman First Class Anthony Pisano, a firefighter with the New York National Guard’s 106th Civil Engineering Squadron, 106th Rescue Wing conducts a daily equipment test during a major snowstorm on February 17, 2015.
(New York Air National Guard / Staff Sergeant Christopher S Muncy / released)

This is a great image to test the effect, and to hopefully bring out the details and gritty-ness of the portrait.

1./2. Duplicate background layer, and invert colors

So, duplicate your base image layer (Background in my example).

Layer → Duplicate

I will usually name the duplicate layer something descriptive, like “Temp” ;).

Next we’ll just invert the colors on this “Temp” layer.

Colors → Invert

So right now, we should be looking at this on our canvas:

GIMP Freaky Details Inverted Image The inverted duplicate of the base layer.
GIMP Freaky Details Inverted Image Layers What the Layers dialog should look like.

Now that we’ve got our inverted “Temp” layer, we just need to apply a little blur.

3. Apply “Surface Blur” to Temp Layer

There are a couple of different ways you could approach this. Calvin Hollywood’s tutorial explicitly calls for a Photoshop Surface Blur. I think part of the reason to use a Surface Blur vs. a Gaussian Blur is to cut down on halos that would occur along edges of high contrast.

There are three main methods of blurring this layer that you could use:

  1. Straight Gaussian Blur (easiest/fastest, but may halo - worst results)

    Filters → Blur → Gaussian Blur

  2. Selective Gaussian Blur (closer to true “Surface Blur”)

    Filters → Blur → Selective Gaussian Blur

  3. G’MIC’s Smooth [bilateral] (closest to true “Surface Blur”)

    Filters → G’MIC → Repair → Smooth [bilateral]

I’ll leave it as an exercise for the reader to try some different methods and choose one they like. (At this point I personally pretty much just always use G’MIC’s Smooth [bilateral] - this produces the best results by far).

For the Gaussian Blurs, I’ve had good luck with radius values around 20% - 30% of an image dimension. As the blur radius increases, you’ll be acting more on larger local contrasts (as opposed to smaller details) and run the risk of halos. So just keep an eye on that.

So, let’s try applying some G’MIC Bilateral Smoothing to the “Temp” layer and see how it looks!

Run the command:

Filters → G’MIC → Repair → Smooth [bilateral]

GIMP Freaky Details G'MIC Bilateral Filter The values I used in this example for Spatial/Value Variance.

The values you want to fiddle with are the Spatial Variance and Value Variance (25 and 20 respectively in my example). You can see the values I tried for this walkthrough, but I encourage you to experiment a bit on your own as well!

Now we should see our canvas look like this:

GIMP Freaky Details G'MIC Bilateral Filter Result Our “Temp” layer after applying G’MIC Smoothing [bilateral]
GIMP Freaky Details Inverted Image Layers Layers should still look like this.

Now we just need to blend the “Temp” layer with the base background layer using a “Vivid Light” blending mode…

4./5. Set Temp Layer Blend Mode to Vivid Light & New Layer

Now we need to blend the “Temp” layer with the Background layer using a “Vivid Light” blending mode. Lucky for me, I’m friendly with the G’MIC devs, so I asked nicely, and David Tschumperlé added this blend mode for me.

So, again we start up G’MIC:

Filters → G’MIC → Layers → Blend [standard] - Mode: Vivid Light

GIMP Freaky Details Vivid Light Blending G’MIC Vivid Light blending mode, pay attention to Input/Output!

Pay careful attention to the Input/Output portion of the dialog. You’ll want to set the Input Layers to All visibles so it picks up the Temp and Background layers. You’ll also probably want to set the Output to New layer(s).

When it’s done, you’re going to be staring at a very strange looking layer, for sure:

GIMP Freaky Details Vivid Light Blend Mode Well, sure it looks weird out of context…
GIMP Freaky Details Vivid Light Blend Mode Layers The layers should now look like this.

Now all that’s left is to hide the “Temp” layer, and set the new Vivid Light result layer to Overlay layer blending mode…

6. Set Vivid Light Result to Overlay, Hide Temp Layer

We’re just about done. Go ahead and hide the “Temp” layer from view (we won’t need it anymore - you could delete it as well if you wanted to).

Finally, set the G’MIC Vivid Light layer output to Overlay layer blend mode:

GIMP Freaky Details Final Blend Mode Layers Set the resulting G’MIC output layer to Overlay blend mode.

The results we should be seeing will have enhanced details and contrasts, and should look like this (mouseover to compare the original image):

GIMP Freaky Details Final Our final results (whew!)
(click to compare to original)

This technique will emphasize any noise in an image so there may be some masking and selective application required for a good final effect.
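If you’d like to experiment with the pipeline outside of GIMP, the whole thing is easy to sketch in code. Below is a rough NumPy approximation of the steps above — my own stand-ins, not GIMP’s or G’MIC’s actual implementations: a crude box blur takes the place of the Surface Blur, and the Vivid Light and Overlay formulas are the commonly cited ones, which may differ slightly from what GIMP and G’MIC do internally.

```python
import numpy as np

def box_blur(img, passes=8):
    # Crude repeated box blur as a stand-in for a Surface Blur;
    # Selective Gaussian Blur or G'MIC's Smooth [bilateral] are
    # edge-aware and will halo less.
    out = img.astype(float)
    for _ in range(passes):
        out = (out + np.roll(out, 1, 0) + np.roll(out, -1, 0) +
               np.roll(out, 1, 1) + np.roll(out, -1, 1)) / 5.0
    return out

def vivid_light(base, blend):
    # Vivid Light: colour burn where the blend layer is dark,
    # colour dodge where it is light.
    eps = 1e-6
    burn = 1.0 - (1.0 - base) / np.maximum(2.0 * blend, eps)
    dodge = base / np.maximum(2.0 * (1.0 - blend), eps)
    return np.clip(np.where(blend < 0.5, burn, dodge), 0.0, 1.0)

def overlay(base, blend):
    # Overlay: multiply in the shadows, screen in the highlights.
    return np.where(base < 0.5,
                    2.0 * base * blend,
                    1.0 - 2.0 * (1.0 - base) * (1.0 - blend))

def freaky_details(img):
    # img: greyscale or per-channel float image with values in [0, 1].
    inverted = 1.0 - img               # steps 1/2: duplicate and invert
    blurred = box_blur(inverted)       # step 3: "surface blur" stand-in
    vivid = vivid_light(img, blurred)  # steps 4/5: Vivid Light blend
    return overlay(img, vivid)         # step 6: Overlay onto the base
```

Note that on a flat mid-grey region the blurred inverted layer is also mid-grey, so the blends leave the image untouched — all the action happens around local contrast, which is exactly what makes the effect work.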


This is not an effect for everyone. I can’t stress that enough. It’s also not an effect for every image. But if you find an image it works well on, I think it can really do some interesting things. It can definitely bring out a very dramatic, gritty effect (it works well with nice hard rim lighting and textures).

The original image used for this article is another great example of one that works well with this technique:

GIMP Freaky Details Alternate Final After a Call by Mark Shaiken on Flickr. (cbna)

I had muted the colors in this image before applying some Portra-esque color curves to the final result.

Finally, a BIG THANK YOU to David Tschumperlé for taking the time to add a Vivid Light blend mode in G’MIC.

Try the method out and let me know what you think or how it works out for you! And as always, if you found this useful in any way, please share it, pin it, like it, or whatever you kids do these days…

This tutorial was originally published here.

Interview with Brian Delano

big small and me 4 and 5 sample

Could you tell us something about yourself?

My name is Brian Delano. I’m a musician, writer, futurist, entrepreneur and artist living in Austin, Texas. I don’t feel I’m necessarily phenomenal at any of these things, but I’m sort of taking an approach of throwing titles at my ego and seeing which ones stick and sprout.

Do you paint professionally, as a hobby artist, or both?

I’m more or less a hobby artist. I’ve made a few sales of watercolors here and there and have had my pieces in a few shows around town, but, so far, the vast majority of my art career exists as optimistic speculation between my ears.

What genre(s) do you work in?

I mostly create abstract art. I’ve been messing around with web comic ideas a bit, but that’s pretty young on my “stuff I wanna do” list. Recently, I’ve been working diligently on illustrating a children’s book series that I’ve been conceptualizing for a few years.

Whose work inspires you most — who are your role models as an artist?

Ann Druyan & Carl Sagan, Craig & Connie Minowa, Darren Waterston, Cy Twombly, Theodor Seuss Geisel, Pendelton Ward, Shel Silverstein and many others.

How and when did you get to try digital painting for the first time?

My first exposure to creating digital art was through the mid-nineties art program Kid Pix. It was in most every school’s computer lab and I thought it was mind-blowingly fun. I just recently got a printout of one of my first digital paintings from this era (I think I was around 8 or so when I made it) and I still like it. It was a UFO destroying a beach house by shooting lightning at it.

What makes you choose digital over traditional painting?

Don’t get me wrong, traditional (I call it analog :-P) art is first and foremost in my heart, but when investment in materials and time is compared between the two mediums, there’s no competition. If I’m trying to make something where I’m prototyping and moving elements around within an image while testing different color schemes and textures, digital is absolutely the way to go.

How did you find out about Krita?

I was looking for an open source alternative to some of the big name software that’s currently out for digital art. I had already been using GiMP and was fairly happy with what it offered in competition with Photoshop, but I needed something that was more friendly towards digital painting, with less emphasis on imaging. Every combination of words in searches and numerous scans through message boards all pointed me to Krita.

What was your first impression?

To be honest, I was a little overwhelmed with the vast set of options Krita has to offer in default brushes and customization. After a few experimental sessions, some video tutorials, and a healthy amount of reading through the manual, I felt much more confident in my approach to creating with Krita.

What do you love about Krita?

If I have a concept or a direction I want to take a piece, even if it seems wildly unorthodox, there’s a way to do it in Krita. I was recently trying to make some unique looking trees and thought to myself, “I wish I could make the leafy part look like rainbow tinfoil…” I messed around with the textures, found a default one that looked great for tinfoil, made a bunch of texture circles with primary colored brush outlines, selected all opaque on the layer, added a layer below it, filled in the selected space with a rainbow gradient, lowered the opacity a bit on the original tinfoil circle layer, and bam! What I had imagined was suddenly a (digital) reality!

What do you think needs improvement in Krita? Is there anything that really annoys you?

Once in a while, if I’m really pushing the program and my computer, Krita will seem to get lost for a few seconds and become non responsive. Every new release seems to lessen this issue, though, and I’m pretty confident that it won’t even be an issue as development continues.

What sets Krita apart from the other tools that you use?

Krita feels like an artist’s program, created by artists who program. Too many other tools feel like they were created by programmers relying on misinterpreted focus group data, catering to artists’ needs that they don’t fully understand. I know that’s a little vague, but once you’ve tried enough different programs and then come to Krita, you’ll more than likely see what I mean.

If you had to pick one favourite of all your work done in Krita so far, what would it be, and why?

I’m currently illustrating a children’s book series that I’ve written which addresses the size and scope of everything and how it relates to the human experience. I’m calling the series “BIG, small & Me”, I’m hoping to independently publish the first book in the fall and see where it goes. I’m not going to cure cancer or invent faster than lightspeed propulsion, but if I can inspire the child that one day will do something this great, or even greater, then I will consider my life a success.

big small and me cover sample

What techniques and brushes did you use in it?

I’ve been starting out sketching scenes with the pencil brushes, then creating separate layers for inking. Once I have the elements of the image divided up by ink, I turn off my sketch layers, select the shapes made by the ink layers and then fill blocks on third layers. When I have the basic colors of the different elements completed in this manner, I turn off my ink lines and create fourth and fifth layers for texturing and detailing each element. There are different tweaks and experimental patches in each page I’ve done, but this is my basic mode of operation in Krita.

Where can people see more of your work?

I have a few images and a blog up at artarys.com, and will hopefully be doing much more with that site pretty soon. I’m still in the youngish phase of most of the projects I’m working on so self promotion will most likely be ramping up over the next few months. I’m hoping to set up a kickstarter towards the end of the year for a first pressing of BIG, small & Me, but until then most of my finished work will end up on either artarys.com or my facebook page.

Anything else you’d like to share?

It’s ventures like Krita that give me hope for the future of creativity. I am so thankful that there are craftspeople in the world so dedicated to creating such a superior tool for digital art.

August 26, 2015

Switching to a Kobo e-reader

For several years I've kept a rooted Nook Touch for reading ebooks. But recently it's become tough to use. Newer epub books no longer work on any version of FBReader still available for the Nook's ancient Android 2.1, and the Nook's built-in reader has some fatal flaws: most notably that there's no way to browse books by subject tag, and it's painfully slow to navigate a library of 250 books when you have to start at the As and page slowly forward, six books at a time, to get to the Ts.

The Kobo Touch

But with my Nook unusable, I borrowed Dave's Kobo Touch to see how it compared. I like the hardware: same screen size as the Nook, but a little brighter and sharper, with a smaller bezel around it, and a spring-loaded power button in a place where it won't get pressed accidentally when it's packed in a suitcase -- the Nook was always coming on while in its case, and I didn't find out until I pulled it out to read before bed and discovered the battery was too low.

The Kobo worked quite nicely as a reader, though it had a few of the same problems as the Nook. They both insist on justifying both left and right margins (Kobo has a preference for that, but it doesn't work in any book I tried). More important is the lack of subject tags. The Kobo has a "Shelves" option, called "Collections" in some versions, but adding books to shelves manually is tedious if you have a lot of books. (But see below.)

It also shared another Nook problem: it shows overall progress in the book, but not how far you are from the next chapter break. There's a choice to show either book progress or chapter progress, but not both; and chapter progress only works for books in Kobo's special "kepub" format (I'll write separately about that). I miss FBReader's progress bar that shows both book and chapter progress, and I can't fathom why that's not considered a necessary feature for any e-reader.

But mostly, Kobo's reader was better than the Nook's. Bookmarks weren't perfect, but they basically worked, and I didn't even have to spend half an hour reading the manual to use them (like I did with the Nook). The font selection was great, and the library navigation had one great advantage over the Nook: a slider so you could go from A to T quickly.

I liked the Kobo a lot, and promptly ordered one of my own.

It's not all perfect

There were a few disadvantages. Although the Kobo had a lot more granularity in its line spacing and margin settings, the smallest settings were still a lot less tight than I wanted. The Nook only offered a few settings but the smallest setting was pretty good.

Also, the Kobo can only see books at the top level of its microSD card. No subdirectories, which means that I can't use a program like rsync to keep the Kobo in sync with my ebooks directory on my computer. Not that big a deal, just a minor annoyance.

More important was the subject tagging, which is really needed in a big library. It was pretty clear Shelves/Collections were what I needed; but how could I get all my books into shelves without laboriously adding them all one by one on a slow e-ink screen?

It turns out Kobo's architecture makes it pretty easy to fix these problems.

Customizing Kobo

While the rooted Nook community has been stagnant for years -- it was a cute proof of concept that, in the end, no one cared about enough to try to maintain it -- Kobo readers are a lot easier to hack, and there's a thriving Kobo community on MobileReads which has been trading tips and patches over the years -- apparently with Kobo's blessing.

The biggest key to Kobo's customizability is that you can mount it as a USB storage device, and one of the files it exposes is the device's database (an sqlite file). That means that well supported programs like Calibre can update shelves/collections on a Kobo, access its book list, and do other nifty tricks; and if you want more, you can write your own scripts, or even access the database by hand.

I'll write separately about some Python scripts I've written to display the database and add books to shelves, and I'll just say here that the process was remarkably straightforward and much easier than I usually expect when learning to access a new device.
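As a taste of how simple that access is, here's a minimal sketch that reads shelves and their books straight out of the device database with Python's built-in sqlite3 module. The Shelf and ShelfContent table and column names here are what I found in the Kobo's KoboReader.sqlite, and could well differ between firmware versions.

```python
import sqlite3

def list_shelves(db_path):
    """Return a dict mapping shelf name -> list of book ContentIds.

    Table/column names (Shelf, ShelfContent, ShelfName, ContentId) are
    from one firmware's KoboReader.sqlite and may vary across versions.
    """
    conn = sqlite3.connect(db_path)
    try:
        rows = conn.execute(
            """SELECT s.Name, sc.ContentId
                 FROM Shelf s
                 LEFT JOIN ShelfContent sc ON sc.ShelfName = s.Name
                ORDER BY s.Name""").fetchall()
    finally:
        conn.close()
    shelves = {}
    for name, content_id in rows:
        shelves.setdefault(name, [])
        if content_id:  # LEFT JOIN gives NULL for empty shelves
            shelves[name].append(content_id)
    return shelves
```

Adding a book to a shelf is just an INSERT into ShelfContent along the same lines — which is exactly what makes bulk-tagging a 250-book library feasible.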

There's lots of other customizing you can do. There are ways of installing alternative readers on the Kobo, or installing Python so you can write your own reader. I expected to want that, but so far the built-in reader seems good enough.

You can also patch the OS. Kobo updates are distributed as tarballs of binaries, and there's a very well designed, documented and supported (by users, not by Kobo) patching script distributed on MobileReads for each new Kobo release. I applied a few patches and was impressed by how easy it was. And now I have tight line spacing and margins, a slightly changed page number display at the bottom of the screen (still only chapter or book, not both), and a search that defaults to my local book collection rather than the Kobo store.

Stores and DRM

Oh, about the Kobo store. I haven't tried it yet, so I can't report on that. From what I read, it's pretty good as e-bookstores go, and a lot of Nook and Sony users apparently prefer to buy from Kobo. But like most e-bookstores, the Kobo store uses DRM, which makes it a pain (and is why I probably won't be using it much).

They use Adobe's DRM, and at least Adobe's Digital Editions app works in Wine under Linux. Amazon's app no longer does, and in case you're wondering why I didn't consider a Kindle, that's part of it. Amazon has a bad reputation for removing rights to previously purchased ebooks (as well as for spying on their customers' reading habits), and I've experienced it personally more than once.

Not only can I no longer use the Kindle app under Wine, but Amazon no longer lets me re-download the few Kindle books I've purchased in the past. I remember when my mother used to use the Kindle app on Android regularly; every few weeks all her books would disappear and she'd have to get on the phone again to Amazon to beg to have them back. It just isn't worth the hassle. Besides, Kindles can't read public library books (those are mostly EPUBs with Adobe DRM); and a Kindle would require converting my whole EPUB library to MOBI. I don't see any up side, and a lot of down side.

The Adobe scheme used by Kobo and Nook is better, but I still plan to avoid books with DRM as much as possible. It's not the stores' fault, and I hope Kobo does well, because they look like a good company. It's the publishers who insist on DRM. We can only hope that some day they come to their senses, like music publishers finally did with MP3 versus DRMed music. A few publishers have dropped DRM already, and if we readers avoid buying DRMed ebooks, maybe the message will eventually get through.

August 25, 2015

Funding Krita

Even Free software needs to be funded. Apart from being very collectible, money is really useful: it can buy transportation so contributors can meet, accommodation so they can sleep, time so they can code, write documentation, create icons and other graphics, hardware to test and develop the software on.

With that in mind, KDE is running a fund raiser to fund developer sprints, Synfig is running a fund raiser to fund a full-time developer and Krita… We’re actually trying to make funded development sustainable. Blender is already doing that, of course.

Funding development is a delicate balancing act, though. When we started doing sponsorship for full-time development on Krita, there were some people concerned that paying some community members for development would disenchant others, the ones who didn’t get any of the money. Even Google Summer of Code already raised that question. And there are examples of companies hiring away all community members, killing the project in the process.

Right now, our experience shows that it hasn’t been a problem. That’s partly because we have always been very clear about why we were doing the funding: Lukas had the choice between working on Krita and doing some boring web development work, and his goal was fixing bugs and performance issues, things nobody had time for, back then. Dmitry was going to leave university and needed a job, and we definitely didn’t want to lose him for the project.

In the end, people need food, and every line of code that’s written for Krita is one line more. And those lines translate to increased development speed, which leads to a more interesting project, which leads to more contributors. It’s a virtuous circle. And there’s still so much we can do to make Krita better!

So, what are we currently doing to fund Krita development, and what are our goals, and what would be the associated budget?

Right now, we are:

  • Selling merchandise: this doesn’t work. We’ve tried dedicated webshops, selling tote bags and mugs and things, but total sales are under a hundred euros, which makes it not worth the hassle.
  • Selling training DVD’s: Ramon Miranda’s Muses DVD is still a big success. Physical copies and downloads are priced the same. There’ll be a new DVD, called “Secrets of Krita”, by Timothée Giet this year, and this week, we’ll start selling USB sticks (credit-card shaped) with the training DVD’s and a portable version of Krita for Windows and OSX and maybe even Linux.
  • The Krita Development Fund. It comes in two flavors. For big fans of Krita, there’s the development fund for individual users. You decide how much a month you can spare for Krita, and set up an automatic payment profile with Paypal or a direct bank transfer. The business development fund has a minimum amount of 50 euros/month and gives access to the CentOS builds we make.
  • Individual donations. This depends a lot on how much we do publicity-wise, and there are really big donations now and then which makes it hard to figure out what to count on, from month to month, but the amounts are significant. Every individual donor gets a hand-written email saying thank-you.
  • We are also selling Krita on Steam. We’ve got a problem here: the Gemini variant of Krita, with the switchable tablet/desktop GUI, got broken with the 2.9 release. But Steam users also get regular new builds of the 2.9 desktop version. Stuart is helping us here, but we need to work harder to interact with our community on Steam!
  • And we do one or two big crowd-funding campaigns. Our yearly kickstarters. They take about two full-time months to prepare, and you can’t skimp on preparation because then you’ll lose out in the end, and they take significant work to fulfil all the rewards. Reward fulfilment is actually something we pay someone a volunteer gratification to do the work for. We are considering doing a second kickstarter this year, to give me an income, with the goal of producing a finished, polished OSX port of Krita. The 2015 kickstarter campaign brought in 27,471.78 euros, but we still need to buy and send out the rewards, which are estimated at an approximate cost of 5,000 euros.
  • Patreon. I’ve started a patreon, but I’m not sure what to offer prospective patrons, so it isn’t up and running yet.
  • Bug bounties. The problem here is that the amount of money people think is reasonable for fixing a bug is wildly unrealistic, even for a project that is as cheap to develop as Krita. You have to count on 250 euros for a day of work, to be realistic. I’ve sent out a couple of quotations, but… If you realize that adding support for loading group layers from XCF files already takes three days, it’s clear that most people simply cannot bear the price of a bug fix individually.

So, let’s do sums for the first 8 months of 2015:

  • Paypal (merchandise, training materials, development fund, kickstarter-through-paypal and smaller individual donations): 8,902.04
  • Bank transfers (the big individual donations usually arrive directly at our bank account, including a one-time donation to sponsor the port of Krita to Qt5): 15,589.00
  • Steam: 5,150.97
  • Kickstarter: 27,471.78
  • Total: 57,113.79

So, the Krita Foundation’s current yearly budget is roughly 65,000 euros, which is enough to employ Dmitry full-time and me part-time. The first goal really is to make sure I can work on Krita full-time again. Since KO broke down, that’s been hard, and I’ve spent five months on the really exciting Plasma Phone project for Blue Systems. That was a wonderful experience, but it had a direct influence on the speed of Krita development, both code-wise and in terms of growing the userbase and keeping people involved.

What we also have tried is approaching VFX and game studios, selling support and custom development. This isn’t a big success yet, and that’s puzzling me some. All these studios are on Linux. All their software, except for their 2D painting application, is on Linux. They want to use Krita, on Linux. And every time we are in contact with some studio, they tell us they want Krita. Except, there’s some feature missing, something that needs improving… And we make a very modest quote, one that doesn’t come near what custom development should cost, and silence is the result.

Developing Krita is actually really cheap. We don’t have any overhead: no management, no office, modest hardware needs. With 5,000 euros we can fund one full-time developer for one month, with something to spare for hardware, sprints and other costs, like the license for the administration software, stamps and envelopes. The first goal would be to double our budget, so we can have two full-time developers, but in the end, I would like to be able to fund four to five full-time developers, including me, and that means we’re looking at a year budget of roughly 300,000 euros. With that budget, we’d surpass every existing 2D painting application, and it’s about what Adobe or Corel would need to budget for one developer per year!

Taking it from here, what are the next steps? I still think that without direct involvement of people and organizations who want to use Krita in a commercial, professional setting, we cannot reach the target budget. I’m too much a tech geek — there’s a reason KO failed, and that is that we were horrible at sales — to figure out how to reach out and convince people that supporting Krita would be a winning proposition! Answers on a post-card, please!

August 24, 2015

Self-generated metadata with LVFS

This weekend I finished the penultimate feature for the LVFS. Before today, when uploading firmware there was up to a 24h delay before the new firmware would appear in the metadata. This was because there was a cronjob on my home server downloading files every night from the LVFS site, running appstream-builder on them locally and then uploading the metadata back to the site. Not awesome at all.

Actually generating the metadata in the OpenShift instance was impossible, until today. Due to libgcab and libappstream-glib not being available on the RHEL 6.2 instance I’m using, I had to re-implement two things in Python:

  • Reading and writing Microsoft cabinet archives
  • Reading MetaInfo files and writing compressed AppStream XML

The two helper libraries (only really implementing the parts required, but patches welcome) are python-cabarchive and python-appstream. I’m not awesome at Python, so feedback (in the form of pull requests) welcome.
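For the compressed-XML half, the core of it needs nothing beyond the Python standard library. This is just an illustrative sketch of the idea — not the actual python-appstream API, and the component fields are pared down to a bare minimum:

```python
import gzip
import xml.etree.ElementTree as ET

def write_appstream(components, path):
    """Write a gzip-compressed AppStream-style XML file.

    components: list of dicts with 'id', 'name' and 'summary' keys.
    This is a simplified sketch; real AppStream metadata carries many
    more fields (releases, provides, checksums, ...).
    """
    root = ET.Element('components', version='0.9')
    for c in components:
        comp = ET.SubElement(root, 'component', type='firmware')
        ET.SubElement(comp, 'id').text = c['id']
        ET.SubElement(comp, 'name').text = c['name']
        ET.SubElement(comp, 'summary').text = c['summary']
    data = ET.tostring(root, encoding='utf-8')
    with gzip.open(path, 'wb') as f:
        f.write(data)
```

The cabinet-archive side is the fiddlier half, since the MS Cabinet format has to be parsed byte by byte — hence a dedicated helper library.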

This means I’m nearly okay to be hit by a bus. Well, nearly; the final feature is to collect statistics about how many people are downloading each firmware file, and possibly collecting data on how many failures and successes there have been when actually applying the firmware. This is actually quite tricky to do without causing privacy issues or double counting. I’ll do some more thinking and then write up a proposal; ideas welcome.

August 22, 2015

2015 KDE Sprints Fundraiser

Krita is a part of the KDE Community. Without KDE, Krita wouldn’t exist, and KDE still supports Krita in many different ways. KDE is a world-wide community for people and projects who create free software. That ranges from applications like Krita, Digikam, Kdenlive to education software like GCompris to desktop and mobile phone software.

Krita not only uses the foundations developed by KDE and its developers all around the world, KDE hosts the website, the forums, everything needed for development. And the people working on all this need to meet from time to time to discuss future directions, to make decisions about technology and to work together on all the software that KDE communities create. As with Krita, most of the work on KDE is done by volunteers!

KDE wants to support those volunteers with travel grants and accommodation support, and for that, KDE is raising funds right now. Getting developers, artists, documentation writers, users all together in one place to work on creating awesome free software! And there is a big sprint coming up soon: the Randa Meetings. From the 6th to the 13th of September more than 50 people will meet in Randa, Switzerland to work, discuss, decide, document, write, eat and sleep under one and the same roof.

It’s a very effective meeting: in 2011 the KDE Frameworks 5 project was started, rejuvenating and modernizing the KDE development platform. Krita is currently being ported to Frameworks. Last year, Kdenlive received special attention, reinvigorating the project as part of the KDE community. Krita artist Timothée Giet worked on GCompris, another new KDE project. This year, the focus is on bringing KDE software to touch devices: tablets, phones, and laptops with touch screens.

Let’s help KDE bring people together!

August 21, 2015

Embargoed firmware updates in LVFS

For the last couple of days I’ve been working with a large vendor adding new functionality to the LVFS to support their specific workflow.

Screenshot from 2015-08-21 13-06-02

The new embargo target allows vendors to test the automatic update functionality using a secret vendor-specific URL set in /etc/fwupd.conf without releasing it to the general public until the hardware has been announced.

Updates normally go through these stages: Private → Embargoed → Testing → Stable, although LVFS users with the QA capability can skip these as required. The screenshot also shows that we’re unpacking the .cab file and parsing the metainfo file server-side (in Python), which gives us much richer detail about the firmware.
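The stage flow itself is easy to model. Here's a hypothetical sketch (not the actual LVFS code) of the promotion rules: normal users move one stage forward at a time, while users with the QA capability can jump ahead:

```python
from enum import IntEnum

class Stage(IntEnum):
    # Ordered lifecycle of a firmware update on the LVFS.
    PRIVATE = 0
    EMBARGOED = 1
    TESTING = 2
    STABLE = 3

def promote(current, target, qa=False):
    """Validate a stage promotion and return the new stage.

    Hypothetical helper for illustration: non-QA users may only move
    one stage forward; QA users may skip ahead as required.
    """
    if target <= current:
        raise ValueError('can only promote forwards')
    if not qa and target != current + 1:
        raise ValueError('QA capability required to skip stages')
    return target
```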

war and peace—the 5th column

Building out the war and peace series, this second instalment focusses on acts of sabotage performed by users against… users. For those in a hurry, there is a short, sharp summary at the end.


Last week my friend Jan‑C. Borchardt tweeted:

‘Stop saying »the user« and referring to them as »he«. Thanks!’

To which I replied:

‘“They”; it is always a group, a diverse group.
‘But I agree, someone talking about “the user” is a tell‐tale sign that they are part of the problem, not the solution.’

All that points at a dichotomy that I meant to blog about for a long time. You see, part one of the war and peace series looked at all four parties involved—product makers, users, developers and designers—as separate silos. The logical follow‑up is to look at hybrid actors: the user‐developer, the product maker‐developer, et cetera.

While working to provide a theoretical underpinning to 20+ years of observation of these hybrid actors’ performance in practice, I noticed that for all user‐XYZ hybrids, the user component does not have much in common with the users of said product. In general, individual users are in conflict with the user group they are part of.

That merits its own blog post, and here we are.

spy vs. spy

First we have to get away from calling this ‘user versus users.’ The naming of the two needs to be much more dissimilar, both to get the point across and so that you don’t have to read the rest of this post with a magnifying glass, just to be sure which one I am talking about.

During the last months I have been working with the concept of inhabitants vs. their city. The inhabitants stand for individual users, each pursuing their personal interests. All of them are united in the city they have chosen to live in—think photoshop city, gmail city, etc. Each city stands for a bustling, diverse user group.

inner city blues

With this inhabitants & city model, it becomes easier to illustrate the mechanisms of conflict. Let’s start off with a real‐life example.

Over the next years, Berlin needs to build tens of thousands of units of affordable housing to alleviate a rising shortage. Because the tens of thousands that annually move to Berlin are (predominantly) looking for its bubbly, relaxed, urban lifestyle, building tower blocks on the edge of town (the modernist dream), or rows of cookie‐cutter houses in suburbia (the anglo‐saxon dream), won’t solve anything.

What is needed is development of affordable housing in the large urban area of Berlin. A solid majority of the population of Berlin agrees with this. Under one condition: no new buildings in their backyard. And thus almost nothing affordable gets built.

What can we learn from this? Naturally reactionary inhabitants take angry action to hinder the development that their city really needs. I am sure this rings a bell for many a product maker.


The second example is completely fictional. A small posse of inhabitants forms and petitions the city to build something for their (quite) special interest: a line‐dancing facility. At first they demand that the city pays for everything and performs all the work. When this falls on deaf ears, the posse organises a kickstarter where about 150 backers chip in to secure the financing of the building work.

Being petitioned again, the city asks where this line‐dancing facility could be located. The posse responds that in one of the nicest neighbourhoods there is an empty piece of grassland. The most accessible part of it would be a good location for the facility and its parking lot.

The city responds that this empty grassland hosts events and community activities on about 100 days a year, many of which are visited by folks from all over the city. On all other days of the year the grassland serves the neighbourhood simply by being empty and semi‐wild nature; providing a breather between all the dense and busy urbanisation, and a playground for kids.

repeat after me: yee‑haw!

This angers the line‐dancing posse. They belittle the events and activities, and don’t know anyone who partakes in them. The events can be moved elsewhere. The land just sits there, being empty, and is up for grabs. Here they are with a sack of money and a great idea, so let’s get a move on.

The city then mentions one of its satellite towns, reachable by an expressway. At its edge, there is plenty of open space for expansion. It would be good to talk to the mayor of that town. The posse is now furious. Their line‐dancing facility, which would make such a fine feature for the heart of the city, being relegated to being an appendix of a peripheral module? Impossible!

What can we learn from this? Inhabitants will loudly push their special‐interest ideas, oblivious to the negative impact on their city. Again this must also ring a bell for many product makers.

leaving the city

Now that the inhabitants & city model has helped us to find these mechanisms, I must admit there are also some disadvantages to it. First, cities do not scale up or down like user groups do. I have worked on projects ranging from a handful of users (hamlet) to 100 million (large country). And yes, the mechanisms hold over this range.

Second, I suspect that many, especially those of a technological persuasion, will take the difference between inhabitants and their city to be that the former are the people, and the latter the physical manifestation, especially buildings and infrastructure.

no escape

Thus we move on for a second time, picking a more generic route, but one with attitude. For individual users I have picked the term punter. If that smacks to you of clientèle that is highly opinionated, with a rather parochial view of their environs, then you’ve got my point.

Now you may think ‘is he picking on certain people?’ No, on the contrary: we all are punters, for everything we use: software, devices, websites, services—hell, all products. You, me, everyone. We are all just waffling in a self‐centred way.

There is no single exception to the punter principle, even for people whose job is centred on getting beyond the punter talk and to the heart of the matter. It is simply a force of nature. The moment we touch, or talk about, a product we use, we are a punter.

greater good

For the user group I have picked the term society. This works both as a bustling, diverse populace and as a club of people with common interests (the photoshop society, gmail society, product‑XYZ society). Some of you, especially when active in F/LOSS, will say ‘is not community the perfect term here?’ It would be, if it wasn’t already squatted.

After almost a decade in F/LOSS I can say that in practice, ‘community’ boils down to a pub full of punters (i.e. chats, mailing lists and forums). In those pubs you will hear punters yapping about their pet feature (line dancing) and loudly resisting structural change in their backyard. What you won’t hear is a civilised, big‐picture discourse about how to advance their society.

it differs

One thing that last week’s exchange with Jan‑C., and also a follow up elsewhere, brought me is the focus on the diversity of a (product) society. This word wonderfully captures how in dozens and dozens of dimensions people of a society are different, have different needs and different approaches to get stuff done.

I can now review my work over the last decade and see that diversity is a big factor in making designing for users so demanding. The challenge is to create a compact system that is flexible enough to serve a diverse society (hint: use the common interests to avoid creating a sprawling mess).

I can also think of the hundreds of collaborators I worked with and now see what they saw: diversity was either ‘invisible,’ in their mono‐cultural environment, or it was such an overwhelming problem that they did not dare tackle it (‘let’s see what the user asks for’). Talking about ‘the user’ is the tell‐tale sign of a diversity problem.

The big picture emerges—

If you want to know why technology serves society so badly, look no further than the tech industry’s failure to acknowledge, and adapt to, the diversity of society.

Yes, I can see how the tendency of the tech sector to make products that only its engineers understand has the same roots as the now widely publicised problem that this sector has a hard time being inclusive to anyone who is not a male, WASP engineer; it is a diversity problem.

but why?

Back to the punters. That we all act like one was described ages ago by the tragedy of the commons. This is—

‘[…] individuals acting independently and rationally according to each’s self‐interest behave contrary to the best interests of the whole group by depleting some common resource.’

If you think about it for a minute, you can come up with many, many examples of individuals acting like that, at the cost of society. The media are filled with it, every day.

what a waste

What common resources do punters strive to deplete in (software) products? From my position that is easy to answer—

  1. makers’ time; both in proprietary and F/LOSS, the available time of the product maker, the designer and developers is scarce; it is not a question of money, nor can you simply throw more people at it—larger teams simply take longer to deliver more mediocre results. To make better products, all makers need to be focussed on what can advance their society; any time spent on punter talk, or acts (e.g. a punter’s pull request), is wasted time.
  2. interaction bandwidth; this is, loosely, a combination of UI output (screen space, sound and tactile output over time) and input (events from buttons, wheels, gestures), throttled by the limit of what humans can process at any given time. Features need interaction, and this eats the available bandwidth, fast. In good products, the interaction bandwidth is allocated to serve the whole society, instead of a smattering of punters.

The tragedy of (software) products is that it’s completely normal that, in reaction to punters’ disinformation and acts of sabotage, a majority of makers’ time and a majority of interaction bandwidth gets wasted.

Acts of sabotage? SME software makers of specialised tools know all about fat contracts coming with a list of punter wishes. Even in F/LOSS, adoption by a large organisation can come with strings attached. Modern methods are trojan horses: punter‐initiated bounties, crowdfunding, or code contributions of their wishes.

the point

This makes punters the fifth column in the UI version of war and peace. Up to now we had four players in our saga—product maker, users (i.e. the society), developers and the designer—and here is a fifth: a trait in all of us within society to say and do exactly what makes (software) products bloated, useless, collapse under their own weight and burn to the ground.

It is easy to see that punters are the enemy of society and of product makers (i.e. those who aim to make something really valuable for society). Punters have an ally in developers, who love listening to punters and then building their wishes. It makes both of them feel really warm and fuzzy. (I am still not decided on whether this is deliberate on the part of developers, or whether they are expertly duped by punters offering warmth and fuzziness.)

That leaves designers; do they fight punters like dragon slayers? No, not at all. Read on.

the dragon whisperer

Remember that punters and society are one and the same thing. The trick is to attenuate the influence of punters to zero; and to tune into the diversity and needs of society and act upon them. Problem is, you only get to talk to punters. Every member of society acts like a punter when they open their mouth.

There is a craft that delivers insight into a society, from working with punters. It is called user research. There are specialist practitioners (user researchers) and furthermore any designer worth their salt practices this. There is a variety of user research methods, starting with interviewing and surveying, followed up by continuous analysis by the designer of all punter/society input (e.g. of all that ‘community’ pub talk).

the billion‐neuron connection

What designers do is maintain a model of the diversity and needs of the society they are designing for, from the first to the last minute they are on a project. They use this model while solving the product‐users‐tech puzzle, i.e. while designing.

When the designer is separated from the project (tech folks tend towards that, it’s a diversity thing) then the model is lost. And so is the project.

(Obligatory health & safety notice: market research has nothing to do with user research, it is not even a little bit useful in this context.)

brake, brake

At this point I would like to review the conflicts and relationships that we saw in part one of war and peace, using the insights we won today. But this blog post is already long enough, so that will have to wait for another day.

Instead, here is the short, sharp summary of this post:

  • User groups can be looked at in two ways: as a congregation of punters and as a society.
  • We all are punters, talking in a self‐centred way and acting in our self‐interest.
  • We are also all members of (product) societies; bustling, diverse populaces and clubs of people with common interests (the photoshop society, gmail society, product‑XYZ society).
  • Naturally reactionary punters take angry action to hinder structural product development.
  • Punters will loudly push their special‐interest ideas, oblivious to the negative impact on their society.
  • The diversity of societies poses one of the main challenges in designing for users.
  • The inability of the tech sector to acknowledge, and adapt to, the diversity of society explains why it tends to produce horrible, tech‐centric products.
  • In a fine example of ‘the tragedy of the commons,’ punters behave contrary to the best interests of their society by depleting makers’ time and interaction bandwidth.
  • Punters act like a fifth column in the tri‑party conflict between product makers, society and developers.
  • You only get to talk to punters, but pros use user research methods to gain insight into the diversity and needs of a society.
  • Everyone gets bamboozled by punters, but not designers. They use user research and maintain a model of diversity and needs, to design for society.

Interested, or irritated? Then (re)read the whole post before commenting. Meanwhile you can look forward to part three of war and peace, the UI version.

Python module for reading EPUB e-book metadata

Three years ago I wanted a way to manage tags on e-books in a lightweight way, without having to maintain a Calibre database and fire up the Calibre GUI app every time I wanted to check a book's tags. I couldn't find anything, nor did I find any relevant Python libraries, so I reverse engineered the (simple, XML-based) EPUB format and wrote a Python script to show or modify epub tags.

I've been using that script ever since. It's great for Project Gutenberg books, which tend to be overloaded with tags that I don't find very useful for categorizing books ("United States -- Social life and customs -- 20th century -- Fiction") but lacking in tags that I would find useful ("History", "Science Fiction", "Mystery").

But it wasn't easy to include it in other programs. For the last week or so I've been fiddling with a Kobo ebook reader, and I wanted to write programs that could read epub and also speak Kobo-ese. (I'll write separately about the joys of Kobo hacking. It's really a neat little e-reader.)

So I've refactored my epubtag script into a usable Python module: as well as being a standalone program for viewing epub book data, it's now easy to use from other programs. It's available on GitHub: epubtag.py: parse EPUB metadata and view or change subject tags.
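The format really is that simple: an EPUB is a zip archive whose META-INF/container.xml points at an OPF package file, and the OPF carries Dublin Core metadata such as dc:subject tags. Here is a minimal sketch of the idea; note this illustrates the format itself and is not the actual epubtag.py API:

```python
import zipfile
import xml.etree.ElementTree as ET

# XML namespaces used by the EPUB container and the OPF metadata.
DC = "{http://purl.org/dc/elements/1.1/}"
OCF = "{urn:oasis:names:tc:opendocument:xmlns:container}"

def epub_subjects(path):
    """Return the dc:subject tags of an EPUB file."""
    with zipfile.ZipFile(path) as z:
        # META-INF/container.xml names the OPF package file.
        container = ET.fromstring(z.read("META-INF/container.xml"))
        opf_path = container.find(".//" + OCF + "rootfile").get("full-path")
        # The OPF metadata block holds the Dublin Core elements.
        opf = ET.fromstring(z.read(opf_path))
        return [el.text for el in opf.iter(DC + "subject")]
```

Writing tags back is the fiddlier half of the job, since a zip member can't be rewritten in place; the whole archive has to be rebuilt.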

August 18, 2015

Different User Types in LVFS

I’ve been working with two large (still un-named) vendors on their required features for the Linux Vendor Firmware Service. One of the new features I’ve landed this week in the test instance is support for different user modes.

[Screenshot from 2015-08-18 21-45-31]

There are currently three classes of user:

  • The admin user that can do anything
  • Unprivileged users that can just upload files to the testing target
  • QA users that can upload files to the testing or stable target, and can tag files from testing to stable

This allows a firmware engineer to upload files before the new hardware has launched; someone else from the QA or management team can then test the firmware and push it to the stable target so it can be flashed on real hardware by users.
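The access rules are simple enough to sketch. The following is just an illustrative model of the three user classes and the promotion flow, not the actual LVFS code:

```python
# Illustrative model of the LVFS user classes; not the actual LVFS code.
PERMISSIONS = {
    "admin":        {"upload_testing", "upload_stable", "promote", "manage_users"},
    "qa":           {"upload_testing", "upload_stable", "promote"},
    "unprivileged": {"upload_testing"},
}

def can(role, action):
    """Return True if a user with the given role may perform the action."""
    return action in PERMISSIONS.get(role, set())

def promote_to_stable(role, firmware):
    """Move a firmware file from the testing target to stable."""
    if not can(role, "promote"):
        raise PermissionError("only QA or admin users can promote firmware")
    firmware["target"] = "stable"
    return firmware
```

So an unprivileged engineer can stage files on testing, but the tag from testing to stable stays gated behind the QA and admin roles.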

I’ve also added functionality so that users can change their own password (moving away from vendor keys), and a simple test suite to test all the new rules.

August 17, 2015

BIM software prices in Brazil

Below is a table of prices for BIM software for architecture. I have included only software that supports the IFC format (both reading and writing); without that, I think we can agree that we cannot call something BIM. I have also left out solutions restricted to a certain type of construction (such as DDS-CAD). The applications below are the...

August 16, 2015

Why I Stopped Reading Books Written by Judith Tarr

Not about Krita or KDE, so main article behind the fold... Instead it's about my reaction to a blog article or two by an author whose work I used to buy.

Read more ...

August 14, 2015

Notes from the dark(table) Side

A review of the Open Source Photography Course

We recently posted about the Open Source Photography Course from photographer Riley Brandt. We now have a review of the course as well.

This review is actually by one of the darktable developers, houz! He had originally posted it on discuss as a topic but I think it deserves a blog post instead. (When a developer from a favorite project speaks up, it’s usually worth listening…)

Here is houz’s review:

The Open Source Photography Course Review

by houz

[Author houz headshot]

It seems that there is no topic to discuss The Open Source Photography Course yet so let’s get started.


First of all, as a darktable developer I am biased so take everything I write with a grain of salt. Second, I didn’t pay for my copy of the videos but Riley was kind enough to provide a free copy for me to review. So add another pinch of salt. I will therefore not tell you if I would encourage you to buy the course. You can have my impressions nevertheless.


I won’t say anything about the GIMP part: not because I wouldn’t know how to use that software, but because it’s relatively short and I just didn’t notice anything to comment on. It covers the solid basics of how to use GIMP, and the emphasis on layer masks is really important in real‑world usage.

Now for the darktable part, I have to say that I liked it a lot. It showcases a viable workflow and is relatively complete – not by explaining every module and becoming the audio book of the user manual but by showing at least one tool for every task. And as we all know, in darktable there are many ways to skin a cat, so concentrating on your favourites is a good thing.

What I also appreciate is that Riley managed to cut the individual topics into manageable chunks of around 10 minutes or less, so you can easily watch them in your lunch break, and have no problem coming back to a topic later and finding what you are looking for.

Before this starts to sound like an advertisement I will just point out some small nitpicking things I noticed while watching the videos. Most of these were not errors in the videos but are just extra bits of information that might make your workflow even smoother, so it’s more of an addendum than an erratum.

  • When going through your images on lighttable you can either zoom in till you only see a single image (alt-1 is a shortcut for that) or hold the z key pressed. Both are shown in the videos. The latter can quickly become tedious since releasing z just once brings you back to where you were. There are however two more keyboard shortcuts that are not assigned by default under views>lighttable: ‘sticky preview’ and ‘sticky preview with focus detection’. Both work just like normal z and ctrl-z, just without the need to keep the key pressed. You can assign a key to these, for example by reusing z and ctrl-z.
  • Color labels can be set with F1 .. F5, similar to rating.
  • Basecurve and tonecurve allow very fine up/down movement of points with the mouse wheel. Hover over a node and scroll.
  • Gaussian in shadows&highlights tends to give stronger halos than bilateral in normal use, see the darktable blog for an example.
  • For profiled denoising better use ‘HSV color’ instead of ‘color’ and ‘HSV lightness’ instead of ‘lightness’, see the user manual for details.
  • When using the mouse wheel to zoom the image you can hold ctrl to get it smaller than fitting to the screen. That’s handy to draw masks over the image border.
  • When moving the triangles in color zones apart you actually widen the scope of affected values since the curve gets moved off the center line on a wider range.
  • Also color zones: You can also change reds and greens in the same instance, no need for multiple instances. Riley knows that and used two instances to be able to control the two changes separately.
  • When loading sidecar files from lighttable, you can even treat a JPEG that was exported from darktable like an XMP file and manually select that since the JPEGs get the processing data embedded. It’s like a backup of the XMP with a preview. Caveat: When using LOTS of mask nodes (mostly with the brush mask) the XMP data might get too big so it’s no longer possible to embed in the JPEG, but in general it works.
  • The collect module allows you to store presets so you can quickly access often‑used search rules. And since presets only store the module settings and not the resulting image set, these will be updated when new images are imported.
  • In neutral density you can draw a line with the right mouse button, similar to rotating images.
  • Styles can also be created from darkroom, there is a small button next to the history compression button.

So, that’s it from me. Did you watch the videos, too? What was your impression? Do you have any remarks?


Today Stellarium has been published for public testing. We have refactored the GUI for deep-sky objects and the catalog of those objects, and introduced a few new options for DSO. Other big features in this beta: Iridium flares, accurate calculation of the ecliptic obliquity, calculation of nutation, and cross-index data for DSO and stars.

Please help us test it.

List of changes between version 0.13.3 and HEAD:
- Added accurate calculations of the ecliptic obliquity. Finally we have good precession! (LP: #512086, #1126981, #1240070, #1282558, #1444323)
- Added calculations of the nutation. Now we optionally have IAU-2000B nutation. (Applied between 1500..2500 only.)
- Added new DSO catalog and related features (LP: #1453731, #1432243, #1285175, #1237165, #1189983, #1153804, #1127633, #1106765, #1106761, #957083)
- Added Iridium flares to Satellites plugin (LP: #1106782)
- Added list of interesting double stars in Search Tool
- Added list of interesting variable stars in Search Tool
- Added cross-identification data for HIP stars (SAO and HD numbers)
- Added sinusoidal projection
- Added drawing of DSO symbols in different colors
- Added labels to the landscapes (gazetteer support)
- Added a new behaviour for drawing the orbit of the selected planet: drawing the orbits of the 'hierarchical group' (parent and all its children) of the selected celestial body (disabled by default).
- Added an option to lower the horizon, e.g. for a widescreen cylindrical view with mostly sky and ground on lower screen border (LP: #1299063).
- Added various timeshift-by-year commands (LP: #1478670)
- Added Moon's phases info
- Rewrote the Meteor module and the Meteor Showers plugin (LP: #1471143)
- Updated skycultures
- Updated Hungarian and Brazilian Portuguese translations of landscapes and skycultures
- Updated data for Solar System Observer
- Updated color scheme for constellations
- Updated documentation
- Updated QZip stuff
- Updated TUI plugin
- Updated data for Satellites plugin
- Updated Clang support
- Updated Pluto texture
- Using a better formula for CCD FOV calculation in Oculars plugin (LP: #1440330)
- Fixed shortkeys conflict (New shortkeys for Scenery 3D plugin - LP: #1449767)
- Fixed aliasing issue with GCC in star data unpacking
- Fixed documentation for core.wait() function (LP: #1450870)
- Fixed perspective projection issue (LP: #724868)
- Fixed crash on activation of AngleMeasure in debug mode (LP: #1455839)
- Fixed issue with updating the bottom bar position when toggling the GUI (LP: #1409023)
- Fixed weird view of the Moon in Oculars plugin when Moon's scaling is enabled
- Fixed non-realistic mode drawing of artificial satellites
- Fixed availability of plugins for the scripting engine (LP: #1468986)
- Fixed potential memory bug in the Meteor class
- Fixed wrong spectral types for some stars (LP: #1429530)
- Change visibleSkyArea limitation to allow landscapes with "holes" in the ground. (LP: #1469407)
- Improved Delta T feature (LP: #1380242)
- Improved the ad-hoc visibility formula for LDN and LBN
- Improved sighting opportunities for artificial satellite passes (LP: #907318)
- Changed behaviour of coloring of orbits of artificial satellites: we use a gray color for the parts of the orbit where the satellite will be invisible (LP: #914751)
- Improved coloring and rendering of artificial satellites (including their labels) in the Earth's shadow for both modes (LP: #1436954)
- Improved landscape and lightscape brightness with/without atmosphere. Now even considers landscape opacity for no-atmosphere conditions.
- Reduce Milky Way brightness in moonlight
- Enhancement of features in Telescope Control plugin (LP: #1469450)

Download link (Windows 32 and 64 bit, OS X): https://launchpad.net/stellarium/trunk/trunk

Custom attributes in angular-gettext

Kristiyan Kostadinov recently submitted a very neat new feature for angular-gettext, which was just merged: support for custom attributes.

This feature allows you to mark additional attributes for extraction. This is very handy if you’re always adding translations for the same attributes over and over again.

For example, if you’re always doing this:

<input placeholder="{{ 'Input something here' | translate }}">

You can now mark placeholder as a translatable attribute. You’ll need to define your own directive to do the actual translation (an example is given in the documentation), but it’s now a one-line change in the options to make sure that placeholder gets recognized and hooked into the whole translation string cycle.

Your markup will then become:

<input placeholder="Input something here">

And it’ll still internationalize nicely. Sweet!

You can get this feature by updating your grunt-angular-gettext dependency to at least 2.1.3.

Full usage instructions can be found in the developer guide.

Comments | More on rocketeer.be | @rubenv on Twitter

August 13, 2015

Linux Vendor Firmware Service: We Need Your Help

I spend a lot of my day working on framework software for other programs to use. I enjoy this plumbing, and Red Hat gives me all the time I need to properly design and build these tricky infrastructure-type projects. Sometimes, just one person isn’t enough.

For the LVFS project, I need vendors making hardware to submit firmware files with carefully written metadata so that they can be downloaded by hundreds of thousands of Linux users securely and automatically. I also need those vendors to either use a standardized flashing protocol (e.g. DFU or UEFI) or to open the device specifications enough to allow flashing firmware without signing an NDA.

Over the last couple of months I’ve been emailing various tech companies trying to get hold of the right people to implement this. So far the reaction from companies has been enthusiastic and apathetic in equal measure. I’ve had a few vendors testing the process, but I can’t share those names just yet as most companies have been testing with unreleased hardware.

This is where you come in. On your Linux computer right now, think about what hardware you own that works in Linux and that you know has user-flashable firmware. What about your BIOS, your mouse, or your USB3 hub? Your network card, your RAID card, or your video card?

Things I want you to do:

  • Find the vendor on the internet, and either raise a support case or send an email. Try and find a technical contact, not just some sales or marketing person
  • Tell the vendor that you would like firmware updates when using Linux, and that you’re not able to update the firmware by booting to Windows or OS X
  • Tell the vendor that you’re more likely to buy from them again if firmware updates work on Linux
  • Inform the vendor about the LVFS project : https://beta-lvfs.rhcloud.com/

At all times I need you to be polite and courteous; after all, we’re asking the vendor to spend time (money) on doing something extra for a small fraction of their userbase. Ignoring one email from me is easy, but getting tens or hundreds of support tickets about the same issue is a great way to get an issue escalated up to the people who can actually make changes.

So please, spend 15 minutes opening a support ticket or sending an email to a vendor now.

August 12, 2015

August Update

Let’s ramble a bit… It’s August, which means vacation time. Well, vacation is a big word! Krita was represented at Akademy 2015 with three team members. We gave presentations and talks and in general had a good time meeting up with other KDE hackers. And we added a couple of vacation days in wonderful Galicia (great food, awesome wine, interesting language).

But Akademy and vacation followed on me breaking my arm, which followed on a sprint in Los Angeles where the topic of the day was not Krita but Plasma Mobile. All in all, about a month without me doing any work on Krita.

In the meantime, Wolthera and Jouni’s Google Summer of Code, Dmitry’s Levels of Detail kickstarter project, Stefano’s work on fixing resource bundles, Michael’s work on fixing the OpenGL canvas in the Qt5 branch and lots of other things have been going on. Lots of excitement! And let’s not forget to mention the new icons that Timothee Giet is working on. It sometimes gets hard to push changes to the code repository because everyone is doing that at the same time.

Anyway, on returning from Galicia, there were over 350 bugs in the bug tracker… So we decided to skip the August release of Krita 2.9, and spend a month on fixing bugs, fixing bugs and fixing more bugs! There are unstable builds on files.kde.org, but they are unstable. Be warned!

Update: we had to pull the builds… They were way too unstable! Despite that, another eight bugs bit the dust today and we’re close to correctly loading and saving 16 bit cmyk to and from PSD.

Oh, and we’ve also started on sourcing the Kickstarter rewards. It’s a huge amount of work this year: 610 postcards, 402 sticker sheets, 402 shortcut sheets, 86 t-shirts, 16 tote bags, 26 Secrets of Krita DVD’s (Timothee has already started on recording), 48 usb sticks, 17 mugs, 53 pencil cases, 3 sketch books and 1 tablet holder!

August 11, 2015

war and peace, the abridged UI version

Not to worry, this is not going to be as lengthy as Tolstoy’s tome. Actually, I intend this blog post to be shorter than usual. But yes, my topic today is war and peace in interaction design.


I like to think that my work as an interaction architect makes the world a better place.

I realise product makers’ dreams, designing elegant and captivating embodiments they can ship. I save terminally ill software and websites from usability implosion, and in the meantime get their makers a lot closer to what they always intended.

On top of that, I provide larger organisations with instruments to rein in the wishful thinking that naturally accumulates there, and to institute QA every step along the way.

All of this is accomplished by harmonising the needs of product makers, developers and users. You would think that because I deliver success; solutions to fiendishly complex problems; certainty of what needs to be done; and (finally!) usable software, working with these three groups is a harmonious, enthusiastic and warm experience.

Well, so would I, but this is not always the case.


The animosity has always baffled me. I also took it personally: there I was, showing them the light at the end of the tunnel of their own misery, and they get all antsy, hostile and hurt about it.

After talking it through with some trusted friends, I have now a whole new perspective on the matter. Sure enough, as an interaction architect I am always working in a conflict zone, but it is not my conflict. Instead, it is the tri‑party conflict between product makers, developers and users.

The main conflict between product makers and users
Each individual user expects software and websites really to be made for me‐me‐me, while product makers try to make it for as many users as possible. Both are a form of greed.
There is also a secondary conflict, when users ‘pay’ the product maker in the form of personal, behavioural data, and/or by eyeballing advertisements—’nuff said.
The main conflict between product makers and developers
Product makers want the whole thing perfectly finished by tomorrow, while reserving the right to change their mind, any given minute, on what this thing is. Developers like to know exactly up front what all the modules are that they have to build—but not too exactly, as that robs them of the chance to splice in a cheaper substitute—while knowing that it takes time, four times more than one would think, to build software.
That this is a fundamental conflict is proven by the current fad for agile development, where it is conveniently forgotten that there is such a thing as coherence and wholeness to a fine piece of software.
The main conflict between developers and users
This one is very simple: who is going to do the work? Developers think it is enough to get the technology running on the processing platform—with not too many crashing bugs—and users are free to do the rest. Users will have no truck with technology; they want refined solutions that ‘just work™,’ i.e. the developers do the work.

All of this makes me Jimmy Carter at Camp David. The interaction design process, the resulting interaction design and its implementation are geared towards bringing peace and prosperity to the three parties involved. This implies compromises from each party. And for me to tell them things they do not like to hear.

Product makers need to be told
  • to make real hard choices and define their target user groups narrowly—it is not for everyone;
  • that they cannot play fast and loose with users’ data;
  • to take the long, strategic view on the product level, instead of trying to micro‐manage every detail of its realisation;
  • to concentrate on the features that really make the product, instead of a pile‑up of everybody’s wish lists;
  • to accept that they cannot have it all, and certainly not with little effort and investment.
Users need to be told that
  • each of them is just part of a (very large) group and that in general the needs of this group are addressed by the product;
  • using software and websites takes a certain amount of investment from them: time, money, participation and/or privacy;
  • software cannot read their minds; to use it they will need to input rather exactly what they are trying to achieve;
  • quite a few of them are outside the target user group and their needs will not be taken into consideration.
Developers need to be told
  • that we are here to make products, not just code modules;
  • no substitutes please, the qualities of what needs to be built determine success;
  • users cannot be directly exposed to technology; an opaque—user—interface is needed between the two;
  • if it isn’t usable, it does not work.

No wonder everybody gets upset.


How do I get myself some peace? Well, the only way is to obtain a bird’s‐eye view of the situation and to accept it.

First, I must accept that this war is inherent to any design process and all designers are confronted by it. Nobody ever really told us, but we are there to bring peace and success.

Second, I have to accept that product makers, developers and users get upset with my interaction design solutions for the simple reason that they are now confronted with the needs of the other two parties. They had it ‘all figured out’ and now this turns up. (Yes, I do also check if they have a point and discovered a bug in my design.)

Third, I have to see my role as translator in a different light. We all know that product makers, developers and users have a hard time talking to each other, and it is the role of the interaction architect to translate between them.

It is now clear to me that when I talk to one of the parties involved, I do not only fit the conversation to their frame of reference and speak their language, but also represent the other two parties. There is some anger potential there: for my audience because I speak in such familiar terms, about unfamiliar things that bring unwelcome complexity; for me because tailored vocabulary and examples do not increase acceptance from their side.

Accepting the war and peace situation is step one, doing something about it is next. I think it will take some kind of aikido move; a master blend of force and invisibility.

Force, because I am still implicitly engaged to bring peace and success, and to solve a myriad of interaction problems nobody wants to touch. This must be met head‑on, without fear.

Invisibility, because during the whole process it must be clear to all three parties that they are not negotiating with me, but with each other.


That is it for today, I promised to keep it short. There are some interesting kinks and complications in the framework I set up today, but dealing with them will have to wait for another blog post.

August 10, 2015

Blender Institute releases pilot of Cosmos Laundromat – a free and open source episodic animation film.

The 10-minute opening sequence of the movie has now been released on the web. Successive episodes will be made as additional funding comes in.


The pilot tells the story of Franck, a suicidal sheep who lives on a desolate island. He meets his fate in a quirky salesman, who offers him the gift of a lifetime. Little does he know that he can only handle so much lifetime…

The “Cosmos Laundromat” project started in 2014 as an experimental feature film in which an adventurous and absurdist love story gets told by multiple studios – each working in their own unique style. The project was initiated by Blender Foundation to improve animation production pipelines with the free and open source 3D software Blender. Based on the results of a crowd-funding campaign in Spring 2014, the Blender Institute in the Netherlands decided to first work on a pilot.

The opening of Cosmos Laundromat, the 10-minute pilot called “First Cycle”, has now been released to the public on the web. In the past weeks it had successful preview screenings: in the EyeFilm cinema in Amsterdam, at the SIGGRAPH convention in Los Angeles, and in the Presto Theatre on the Pixar campus in Emeryville.
The official theatrical premiere will be at the Netherlands Film Festival in September. The film has been nominated for the prestigious Jury Prize of the Animago festival in Berlin.

New episodes will depend on audience feedback and additional funding. Recurring revenues are expected to be generated via the Blender Institute’s subscription system “Blender Cloud”, which gives access to all of the source data that was used to make the film. New episodes are also meant to be made using free and open source software only, sharing the entire works with the audience: free to use, free to remix and free to learn from.

Ton Roosendaal –  producer and director of Blender Institute – spent a week in Los Angeles presenting Blender and the film project. “The reception we had was fabulous, especially from artists who work in the animation industry. They totally dig the sophisticated story build up, the high quality character animation and the amazing visuals. And most of all they root for us to become a success – because we are proving that there’s independent animation production possible outside of the film business with its restrictive distribution and licensing channels.”

More information:

Cosmos Laundromat can be watched via the production blog:

Film screenshots, Poster and promotion images:

Or contact producer Ton Roosendaal, ton@blender.org


Bat Ballet above the Amaranths

This evening Dave and I spent quite a while clearing out amaranth (pigweed) that's been growing up near the house.

[Palmer's amaranth, pigweed] We'd been wondering about it for quite some time. It's quite an attractive plant when small, with pretty patterns on its leaves that remind me of some of the decorative houseplants we used to try to grow when I was a kid.

I've been working on an Invasive Plants page for the nature center, partly as a way to figure out myself which plants we need to pull and which are okay. For instance, Russian thistle (tumbleweed) -- everybody knows what it looks like when it's a dried-up tumbleweed, but by then it's too late, scattering its seeds all over. Besides, it's covered with spikes by then. The trick is to recognize and pull it when it's young, and the same is true of a lot of invasives, especially the ones with spiky seeds that stick to you, like stickseed and caltrops (goatheads).

A couple of the nature center experts have been sending me lists of invasive plants I should be sure to include, and one of them was a plant called redroot pigweed. I'd never heard of it, so I looked it up -- and it looked an awful lot like our mystery plant. A little more web searching on Amaranthus images eventually led me to Palmer's amaranth, which turns out to be aggressive and highly competitive, with sticky seeds.

Unfortunately the pretty little plants had had a month to grow by the time we realized the problem, and some of them had trunks an inch and a half across, so we had to go after them with a machete and a hand axe. But we got most of them cleared.

As we returned from dumping the last load of pigweed, a little after 8 pm, the light was fading, and we were greeted by a bat making rounds between our patio and the area outside the den. I stopped what I was doing and watched, entranced, as the bat darted into the dark den area then back out, followed a slalom course through the junipers, buzzed past my head and then out to make a sweep across the patio ... then back, around the tight corner and back to the den, over and over.

I stood watching for twenty minutes, with the bat sometimes passing within a foot of my head. (yay, bat -- eat some of these little gnats that keep whining by my ears and eyes!) It flew with spectacular maneuverability and grace, unsurpassed by anything save perhaps a hummingbird, changing direction constantly but always smoothly. I was reminded of the way a sea lion darts around underwater while it's hunting, except the bat is so much smaller, able to turn in so little space ... and of course maneuvering in the air, and in the dark, makes it all the more impressive.

I couldn't hear the bat's calls at all. Years ago, waiting for dusk at star parties on Fremont Peak, I used to hear the bats clearly. Are the bats here higher pitched than those California bats? Or am I just losing high frequencies as I get older? Maybe a combination of both.

Finally, a second bat, a little smaller than the first, appeared over the patio and both bats disappeared into the junipers. Of course I couldn't see either one well enough to tell whether the second bat was smaller because it was a different species, or a different gender of the same species. In Myotis bats, apparently the females are significantly larger than the males, so perhaps my first bat was a female Myotis and the male came to join her.

The two bats didn't reappear, and I reluctantly came inside.

Where are they roosting? In the trees? Or is it possible that one of them is using my bluebird house? I'm not going to check and risk disturbing anyone who might be roosting there.

I don't know if it's the same little brown bat I saw last week on the front porch, but it seems like a reasonable guess.

I've wondered how many bats there are flying around here, and how late they fly. I see them at dusk, but of course there's no reason to think they stop at dusk just because we're no longer able to see them. Perhaps I'll find out: I ordered parts for an Arduino-driven bat detector a few weeks ago, and they've been sitting on my desk waiting for me to find time to solder them together. I hope I find the time before summer ends and the bats fly off wherever they go in winter.

August 07, 2015

wishful thinking; ignite the shirts

A week ago I presented about my wishful thinking and act to succeed series at Ignite Berlin. That led to some unforeseen developments, with the result that you can look forward to some real cool t‑shirts.


The Ignite format is pretty demanding. I better let them explain it themselves:

‘Each speaker gets 5 minutes on stage. 20 slides, which auto‐forward every 15 seconds, no going back. So it’s pretty brutal, although nothing that a rehearsal can’t fix.’
the Ignite format, from their about page

Yes, this is really different from presenting 20 to 45 minutes at your own rhythm, which is what I am used to. A strategy, careful planning of the 20 slides and a generous helping of rehearsal are called for. What I regularly see at conferences—some (recycled) slides banged together the night before and winging it during showtime—is bound to have a 99.99% fail rate at Ignite.

bang, you win

The upside is that the audience wins. All the speakers are an order of magnitude more prepared than they normally would be. There is no time for waffling and even single‐issue talks are engaging for five minutes.

At this event there were fourteen talks, two runs of seven each, which sounds like a looong marathon to sit through. In practice, one run of seven talks takes 35 minutes of pure talk time, plus some for applause and changeover (everything is pre‐sequenced on a single laptop). Thus in 38–40 minutes, seven engaging topics have passed and then it is time for a break, to digest and discuss.

Since my talk was scheduled almost at the end of the event, I expected to be too preoccupied to enjoy all these talks before mine. On the night, all of the talks engaged and entertained me, which put me in a good mood for mine. (When is the last time you could say that about a conference?)

show and tell

In my Ignite talk I showed a selection of wishful‐thinking issues, together with the positive action that must be taken to remedy them. Meanwhile, I told the back‐story, for instance, that—

  • I have seen all of this wishful thinking in practice;
  • I wanted to expose a destructive streak that runs through the IT industry;
  • it was more work to make issues and remedies fit a single tweet, than to come up with them;
  • I felt that I could go on ‘forever’, but called it quits at fifty;
  • being in interaction design—which is essentially product realisation and involves seeing all dimensions (product, users, tech)—makes it easy to see the damage from wishful thinking;
  • it is a real shame to see the right people, with the right intentions, run projects into the ground through wishful thinking;
  • this is not valid only in IT, but in any industry;
  • please, it is difficult, but resist the wishful thinking when you believe in what you are working on;
  • what is needed is process change, which is also difficult, introducing a design process that from the first to the last minute of the project shapes and runs all product realisation, including manufacturing or fixing that final bug.


I had plenty of interesting discussions after the talks were through, but one really took me by surprise: fellow speaker Onika Simon of Spokehub said something along the lines of ‘why don’t you put this wishful thinking on t‑shirts? There are plenty of people who deserve to get one.’

During my talk I had admitted that I am not a product maker and that never in my life have I had a good product idea. Thus it did not surprise me that I had never thought of wishful‐thinking t‑shirts. But now that the genie was out of the bottle, how difficult could it be?

snakes and ladders

Some parts were really straightforward. The content was already there. Deciding what should go on front and back, and picking some free‐as‐in‐speech fonts (right, no pirated components in my products) was no big deal. Neither was typesetting the texts.

Making EPS files already involved jumping through one hoop (why not accept pdf? It is just about the same tech). Dealing with spreadshirt was a three‐ring circus. Spreadshirt is supposed to make it easy to open your own merchandising outlet, but forget about the easy part.

I could go on and on, about requiring flash <spit>, crashes, usability disasters, the pervasive ‘how do I get that done?’ and ‘how do I know it did it?’ anxiety, and only finding out what you will get when you get there. But let’s say that unless you are a spreadshirt executive, I won’t bother (you with it).


Against these odds, I did manage to put up a t‑shirt shop in less than a week. There is one MVP: a limited‐edition t‑shirt (available one month only) in female and male cuts, and two variants, dark and bright:

the bright female, dark female, dark male and bright male wishful thinking shirts

I found out at the very end, when I got to check it out (typical, eh), that you can change the shirt colour in the shop. Suits me fine; a simple ‘menu’ to choose from and then freedom to customise, a bit.

When I checked the wishful thinking topic page, I noticed how hard‐hitting these are by themselves, so it was clear that these go, solo, on the front:

the text on the front of the shirt: the hardware specs are fixed, now     we can start with the software design

This is the wishful thought for August ’15 and you can see that I plumped for the first one I saw. Each month I will pick a different one (no, not in the order on that page) and change the ‘bright’ colour scheme.

On the back we ensure that everyone gets the point…

the text on the back: wishful thinking breeds failed products

…just in case the beholder wishfully thinks the statement on the front is best‐practice.


And out of the blue m+mi works offers a hardware product. It will be fun offering these and I hope spreadshirt cooperates a bit more to keep it that way. I look forward to seeing one of these t‑shirts being worn in the wild.

PyCon Australia 2015

If anyone is interested in the talk I gave at PyCon Australia in Brisbane, here is the YouTube link:

Slides can be found here: http://redhat.slides.com/rjoost/deck-3/
YouTube link: How your Python program behaves: a story on how to build a program slicer

The conference was a blast. Thanks to the organisers for this wonderful conference.

August 06, 2015

Creating a QML snap with Snapcraft

We want to enable all kinds of developers to quickly make applications and devices using Snappy as their basis. A quick way to make compelling user interfaces is by using QML, so it seemed like a natural fit to get QML working in snapcraft, to eliminate complex setups and just get things working. There is an Introduction to Snapcraft; I'm going to assume you've already read that.

To get started with an interesting demo I went and stole the Qt photoviewer demo and pulled it into its own repository, then added a couple of simple configuration files. This is a great demo because it is graphical and fun, but also shows pulling data from the network, as all the photos are based on Flickr tags.

parts:
  qml:
    plugin: qml
  copy:
    plugin: copy
    files:
      main.qml: main.qml
      PhotoViewerCore: PhotoViewerCore
snappy-metadata: meta

The snapcraft.yaml file includes two parts. The first is the qml plugin, which pulls in all the pieces needed to run QML programs from the Ubuntu archive. The second is the copy plugin, which copies our QML files into the snap. We don't have a build system in this example, so copy is all we need; more complex examples could use the cmake or autotools plugins instead.

The last item in the snapcraft.yaml tells Snapcraft where to find the packaging information for Snappy. In the meta directory we have a package.yaml, which is a standard Snappy package file.

name: photoviewer
version: 0.1
vendor: Ted Gould <ted@canonical.com>
frameworks: [mir]
binaries:
  - name: photoviewer
    exec: qmlscene main.qml --
    caps:
      - mir_client
      - network-client

It configures a binary that will be set up by Snappy, which is simply a call to qmlscene with our base QML file. This will then get wrapped up into a single binary in /apps/bin that we can execute.

We need to now turn this directory into a snap. You should follow the instructions to install snapcraft, and then you can just call it in that directory:

$ snapcraft

There are a few ways to set up a Snappy system; the one I've used here is QEMU on my development system. That makes it easy to develop and test with, and currently the Mir snap is only available for amd64. After getting Snappy set up, you'll need to grab the Mir framework from the store and install the snap we just built.

$ sudo snappy install mir
$ sudo snappy install --allow-unauthenticated photoviewer_0.1_amd64.snap

You can then run the photoviewer:

$ photoviewer.photoviewer

And you should have something like this on your display:

While this is a simple demo of what can be done with QML, it can be expanded to enable all kinds of devices, from displaying information from a network service to providing a UI for a small IoT device.

August 05, 2015

Report from Akademy 2015

A week has passed since I’ve been back from Akademy, so it’s more than time to make a little report.

I greatly enjoyed meeting old and new friends from KDE. Lots of good times shared :)

akademy2015-people (photo by Alex Merry; you can find lots of other cool photos at this link)

This year I gave a quick little talk presenting the result of my work on GCompris. You can find it, along with all the other recorded talks, on this page, if you haven’t watched them already.


I also got to discuss some ideas for things to come, so stay tuned ;)

Thanks a lot to KDE e.V. for the support; that was another awesome experience.

August 04, 2015

Color Curves Matching


Sample points and matching tones

In my previous post on Color Curves for Toning/Grading, I looked at the basics of what the Curves dialog lets you do in GIMP. I had been meaning to revisit the subject with a little more restraint (the color curve in that post was a little rough and gross, but it was for illustration so I hope it served its purpose).

This time I want to look at the use of curves a little more carefully. You’d be amazed at the subtlety that gentle curves can produce in toning your images. Even small changes in your curves can have quite the impact on your final result. For instance, have a look at the four film emulation curves created by Petteri Sulonen (if you haven’t read his page yet on creating these curves, it’s well worth your time):

Original
Portraesque (Kodak Portra NC400 Film)
Proviaesque (Fujichrome Provia)
Velviaesque (Fujichrome Velvia)
Crossprocess (E6 slide film in C-41 neg. processing)

I can’t thank Petteri enough for releasing these curves for everyone to use (for us GIMP users, there is a .zip file at the bottom of his post that contains these curves packaged up). Personally I am a huge fan of the Portraesque curve that he has created. If there is a person in my images, it’s usually my go-to curve as a starting point. It really does generate some wonderful skin tones overall.

The problem in generating these curves is that one has to be very, very familiar with the characteristics of the film stocks you are trying to emulate. I never shot Velvia personally, so it is hard for me to have a reference point to start from when attempting to emulate this type of film.

What we can do, however, is to use our personal vision or sense of aesthetic to begin toning our images to something that we like. GIMP has some great tools for helping us to become more aware of color and the effects of each channel on our final image. That is what we are going to explore…

Disclaimer I cannot stress enough that what we are approaching here is an entirely subjective interpretation of what is pleasing to our own eyes. Color is a very complex subject and deserves study to really understand. Hopefully some of the things I talk about here will help pique your interest to push further and experiment!
There is no right or wrong, but rather what you find pleasing to your own eye.

Approximating Tones

What we will be doing is using Sample Points and the Curves dialog to modify the color curves in my image above to emulate something else. It could be another photograph, or even a painting.

I’ll be focusing on the skin tones, but the method can certainly be used for other things as well.

My wonderful model.

With an image you have, begin considering what you might like to approximate the tones on. For instance, in my image above I want to work on the skin tones to see where it leads me.

Now find an image that you like and would like to approximate the tones from. It helps if the image you are targeting already has tones somewhat similar to what you are starting with (for instance, I would look for another image of Caucasian skin with similar tones, as opposed to, say, Asian skin). Keeping tones at least similar will reduce the violence you’ll do to your final image.

So for my first example, perhaps I would like to use the knowledge that the Old Masters already had in regards to color, and emulate the skin tones from Vermeer’s Girl with a Pearl Earring.

Johannes Vermeer - Girl with a Pearl Earring (1665)

In GIMP I will have my original image already opened, and will then open my target image as a new layer. I’ll pull this layer to one side of my image to give me a view of the areas I am interested in (faces and skin).

Vermeer setup GIMP

I will be using Sample Points extensively as I proceed. Read up on them if you haven’t used them before. They are basically a means of giving you real-time feedback of the values of a pixel in your image (you can track up to four points at one time).

I will put a first sample point somewhere on the higher skin tones of my base image. In this case, I will put one on my model's forehead (we’ll be moving it around shortly, so somewhere in the neighborhood is fine).

GIMP first sample point

Ctrl + Left Click in the ruler area of your main window (shown in green above), and drag out into your image. There should be crosshairs across your entire image screen showing you where you are dragging.

When you release the mouse button, you’ve dropped a Sample Point onto your image. You can see it in my image above as a small crosshair with the number 1 next to it.

GIMP should open the sample points dialog for you when you create the first point, but if not you can access it from the image menu under:

Windows → Dockable Dialogs → Sample Points

Sample points dialog

This is what the dialog looks like. You can see the RGB pixel data for the first sample point that I have already placed. As you place more sample points, they will each be reflecting their data on this dialog.

You can go ahead and place more sample points on your image now. I’ll place another sample point, but this time I will put it on my target image where the tones seem similar in brightness.

Sample point placed

What I’ll then do is change the data being shown in the Sample Points dialog to show HSV data instead of Pixel data.

Sample points dialog with 2 points

Now, I will shoot for around 85% value on my source image, and try to find a similar value level in similar tones from my target image as well. Once you’ve placed a sample point, you can continue to move it around and see what types of values it gives you. (If you use another tool in the meantime, and can no longer move just the sample point - you can just select the Color Picker Tool to be able to move them again).

Move the points around your skin tones until you get about the same Value for both points.
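As an aside, the Value that GIMP's HSV readout reports is simply the pixel's largest RGB channel expressed as a percentage of 255. A quick sanity check in Python (using the channel values reported for sample point #1 later in this walkthrough):

```python
import colorsys

# Channel values of the tutorial's first sampled pixel
r, g, b = 218, 188, 171

# colorsys works on 0.0-1.0 floats; V comes out as max(r, g, b) / 255
h, s, v = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)

print(round(v * 100))  # 85 -- the "Value" percentage the dialog reports
```

This is why two pixels with very different hues can still report the same Value; matching Value first only pins down brightness, and the curves do the rest.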

Once you have them, make sure your original image layer is active, then start up the curves dialog.

Colors → Curves…

Now here is something really handy to know while using the Curves dialog: if you hover your mouse over your image, you’ll notice that the cursor is a dropper - you can click and drag on an area of your image, and the corresponding value will show up in your curves dialog for that pixel (or averaged area of pixels if you turn that on).

So click and drag to about the same pixel you chose in your original image for the sample point.

Curve base Curves dialog with a value point (217) for my sampled pixel.

Here is what my working area currently looks like:

GIMP workspace for sample point color matching

I have my curves dialog open, and an area around my sample point chosen so that the values will be visible in the dialog, my images with their associated sample points, and the sample points dialog showing me the values of those points.

The basic idea now is to adjust my RGB channels to get my original image sample point (#1) to match my target image sample point (#2).

Because I selected an area around my sample point with the curves dialog open, I will know roughly where those values need to be adjusted. Let’s start with the Red channel.

First, set the Sample Points dialog back to Pixel to see the RGBA data for that pixel.

GIMP Sample point Red Green Blue matching

We can now see that to match the pixel colors we will need to make some adjustments to each channel. Specifically,

the Red channel will have to come down a bit (218 → 216),

the Green down some as well (188 → 178),

and Blue much more (171 → 155).
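The arithmetic behind those three adjustments can be sketched as simple per-channel transfer curves. This is only an illustration: GIMP fits smooth splines through its control points, whereas the sketch below interpolates linearly, and the locked (0, 0) and (255, 255) endpoints are my assumption:

```python
def channel_curve(points):
    """Map a 0-255 channel value through straight segments
    between the given (input, output) control points."""
    pts = sorted(points)

    def apply(v):
        for (x0, y0), (x1, y1) in zip(pts, pts[1:]):
            if x0 <= v <= x1:
                t = (v - x0) / (x1 - x0) if x1 != x0 else 0.0
                return round(y0 + t * (y1 - y0))
        return v

    return apply

# The middle control point of each curve is the source -> target
# adjustment for sample point #1; the endpoints are assumed fixed.
red   = channel_curve([(0, 0), (218, 216), (255, 255)])
green = channel_curve([(0, 0), (188, 178), (255, 255)])
blue  = channel_curve([(0, 0), (171, 155), (255, 255)])

print(red(218), green(188), blue(171))  # 216 178 155: the pixel now matches
```

Matching a second, darker sample pair (as we do below) just means adding one more control point per channel to the same lists.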

You may want to resize your Curves dialog window larger to be able to more finely control the curves. If we look at the Red channel in my example, we would want to adjust the curve down slightly at the vertical line that shows us where our pixel values are:

Color Curve Adjustment Red

We can adjust the red channel curve along this vertical axis (marked x:217) until our pixel red value matches the target (216).

Then just change over to the green channel and do the same:

Color Curve Adjustment Green

Here we are adjusting the green curve vertically along the axis marked x:190 until our pixel green value matches the target (178).

Finally, follow the same procedure for the blue channel:

Color Curve Adjustment Blue

As before, we adjust along the vertical axis x:173 until our blue channel matches the target (155).

At this point, our first sample point pixel should be the same color as from our target.

The important thing to take away from this exercise is to watch your image as you adjust these channels and see what types of effects they produce. Dropping the green channel should have added a slight magenta cast to your image, and dropping the blue channel should have shifted it toward yellow.

Watch your image as you make these changes.

Don’t hit OK on your curves dialog yet!

You’ll want to repeat this procedure, but using some sample points that are darker than the previous ones. Our first sample points had values of about 85%, so now let’s see if we can match pixels down below 50% as well.

Without closing your curves dialog, you should be able to click and drag your sample points around still. So I would set your Sample Points dialog to show you HSV values again, and now drag your first point around on your image until you find some skin that’s in a darker value, maybe around 40-45%.

Once you do, try to find a corresponding value in your target image (or something close at least).

I managed to find skin tones with values around 45% in both of my images:

[Darker skin-tone sample points and their RGB values]

In these darker tones, I can see that the adjustments I will have to make are for:

Red down a bit (116 → 114),

Green bumped up some (60 → 73),

Blue slightly down (55 → 53).

With the curves dialog still active, I then click and drag on my original image until I am in the same area as my sample point again. This gives me my vertical line showing the value location in my curves dialog, just as before:

Dark tones red Red down to 114.
Dark tones green Green up to 73.
Dark tones blue Blue down to 53.

At this point you should have something similar to the tones of your target image. Here is my image after these adjustments so far:

Results so far GIMP Matching Effects of the curves so far (click to compare to original).

Once you’ve got things in a state that you like, it would be a good idea to save your progress. At the top of the Curves dialog there is a “+” symbol, which lets you add the current settings to your favorites so you can recall them later and continue working on them.

However, your results might not look quite right at the moment. Why not?

Well, the first problem is that Sample Points will only allow you to sample a single pixel value. There’s a chance that the pixels you pick are not truly representative of the correct skin tones in that range (for instance you may have inadvertently clicked a pixel that represents the oil paint cracks in the image). It would be nice if there were an option for Sample Points to allow an adjustable sample radius (if there is an option I haven’t found it yet).

The second issue is that similar value points might be very different colors overall. Hopefully your sources will be nice for you to pick in areas that you know are relatively consistent and representative of the tones you want, but it’s not always a guarantee.

If the results are not quite what you want at the moment, you can do what I will sometimes do and go back to the beginning…

While still keeping the curves dialog open, you can pull your sample points to another location and match the target again. Try choosing another sample point with a similar value as the first one. This time, instead of adding new points to the curve as you make adjustments, just drag the existing points you previously placed.

It’s an Iterative Process

Depending on how interested you are in tweaking your resulting curve, you may find yourself going around a couple of times. That’s ok.

Iterative flowchart

I would recommend keeping your curves to two control points at first. You want your curves to be smooth across the range (any abrupt changes will do strange things to your final image).

If you are doing a couple of iterations, try modifying existing points on your curves instead of adding new ones. It may not be an exact match, but it doesn’t have to be. It only needs to look nice to your eyes.

There won’t be a perfect solution for a perfect color matching between images, but we can produce pleasing curves that emulate the results we are looking for.

In Conclusion

I personally have found the process of doing this with different images to be quite instructive in how the curves will affect my image. If you try this out and pay careful attention to what is happening while you do it, I’m hopeful you will come away with a similar appreciation of what these curves will do.

Most importantly, don’t be constrained by what you are targeting, but rather use it as a stepping off point for inspiration and experimentation for your own expression!

I’ll finish with a couple of other examples…

Sandro Botticelli - The Birth of Venus (click to compare to original)
Fa Presto - St. Michael (click to compare to original)

And finally, as promised, here’s the video tutorial that steps through everything I’ve explained above:

From a request, I’ve packaged up some of the curves from this tutorial (Pearl Earring, St. Michael, the previous Orange/Teal Hell, and another I was playing with from a Norman Rockwell painting): Download the Curves (7zip .7z)

FPV Addicts

I’ve started doing longer edits of the 15 second clips I usually put on Instagram. I’ve been really creative with the naming so far.

FPV Addicts

FPV Addict

August 03, 2015

Monthly Drawing Challenge August 2015

(by jmf)

The 6th iteration of the Monthly Drawing Challenge is taking place on the Krita Forums!

This month’s topic is… Ancient

To enter, post your picture on the August drawing challenge thread. The deadline is August 24, 2015. You are free to interpret the topic in any way. Let your imagination run free. The winner is decided through a poll running 7 days after the deadline. The winner gets the privilege to choose next month’s topic. You can use the hashtag #kritachallenge to talk about this challenge on social media.

I started the challenge in February 2015 with two goals: to give people motivation to draw, and to give them a way to get rid of the “blank canvas syndrome”. The challenge is not about winning! It is about making art, trying something new, and getting inspired.


Last month’s winner: “Love at First Flight” by scottyp.

love at first flight

July 31, 2015

Fri 2015/Jul/31

  • I've been making a little map of things in Göteborg, for GUADEC. The red markers are for the main venue (Folkets Hus) and the site of the BOFs (IT University). There's a ferry line to get from near Folkets Hus to the IT University. The orange marker is the Liseberg amusement park where the roller coasters are. The blue markers are some hotels.

    Go here if you cannot see the map.

July 30, 2015

New Discuss Categories and Logging In


Software, Showcase, and Critiques. Oh My!

Hot on the heels of our last post about welcoming G’MIC to the forums at discuss.pixls.us, I thought I should speak briefly about some other additions I’ve recently made.

These were tough decisions to finally make. I want to be careful not to go crazy with over-categorization, while still making logical breakdowns that are intuitive for people.

Here is what the current category breakdown looks like for discuss:

    The comment/posts from articles/blogposts here on the main site.
  • Processing
    Processing and managing images after they’ve been captured.
  • Capturing
    Capturing an image and the ways we go about doing it.
  • Showcase
  • Critique
  • Meta
    Discussions related to the website or the forum itself.
    • Help!
      Help with the website or forums.
  • Software
    Discussions about various software in general.

Along with the addition of the Software category (and the G’MIC subcategory), I decided that the Help! category would make more sense under the Meta category. That is, the Help! section is for website/forum help, which is more of a Meta topic (hence moving it).


As we’ve already seen, there is now a Software category for all discussions about the various software we use. The first sub-category to this is of course, the G’MIC subcategory.

F/OSS Project Logos

If there is enough interest in it, I am open to creating more sub-categories as needed to support particular software projects (GIMP, darktable, RawTherapee, etc…). I will wait until there is some interest before adding more categories here.


This category had some interest from members and I agree that it’s a good idea. It’s intended as a place for members to showcase the works they’re proud of and to hopefully serve as a nice example of what we’re capable of producing using F/OSS tools.

A couple of examples from the Showcase category so far:

New Life, Filmulator Output Sample, by CarVac
Mairi Troisième, by Pat David (cbna)

There may be a use of this category later for storing submissions for a rotating lede image on the main page of the site.


This is intended as a place for members to solicit advice and critiques on their works from others. It took me a little work to come up with an initial take on the overall description for the category.

I can promise that I will do my best to give honest and constructive feedback to anyone that asks in this category. I also promise to do my best to make sure that no post goes un-answered here (I know how beneficial feedback has been to me in the past, so it’s the least I could do to help others out in return).

Discuss Login Options

I also bit the bullet this week and finally caved and signed up for a Facebook account. The only reason is that I needed a personal account to get an API key that allows people to log in using their FB account (with OAuth).

discuss.pixls.us login options We can now use Google, Facebook, Twitter, and Yahoo! to log in.

On the other hand, we now accept four different methods of logging in automatically along with signing up for a normal account. I have been trying to make it as frictionless as possible to join the conversation and hopefully this most recent addition (FB) will help in some small way.

Oh, and if you want to add me on Facebook, my profile can be found here. I also took the time to create a page for the site here: PIXLS.US on Facebook.

released darktable 1.6.8

We are happy to announce that darktable 1.6.8 has been released.

The release notes and relevant downloads can be found attached to this git tag:
Please only use our provided packages ("darktable-1.6.8.*" tar.xz and dmg), not the auto-created tarballs from github ("Source code", zip and tar.gz). The latter are just git snapshots and will not work! Here are the direct links to tar.xz and dmg:

this is a point release in the stable series. the sha256sum is

sha256sum darktable-1.6.8.tar.xz
sha256sum darktable-1.6.8.dmg

and as always, please don't use the tarballs provided by github (marked as "Source code").


  • clipping, sanity check for custom aspect ratios
  • read lensmodel from xmp
  • handle canon lens recognition special case
  • general cleanups


camera support

  • Canon EOS M3
  • Canon EOS 5Ds (R)
  • Nikon 1 J5
  • Panasonic DMC-G7 (4:3 aspect ratio only)
  • Fujifilm X-T10
  • Pentax K-S2
  • Panasonic TZ71
  • Olympus TG-4
  • Leica VLUX1 4:3 aspect ratio mode

standard color matrices

  • Canon EOS M3
  • Canon EOS 5Ds (R)
  • Nikon 1 J5
  • Panasonic DMC-G7
  • Fujifilm X-T10
  • Pentax K-S2
  • Olympus TG-4

white balance presets

  • Samsung NX500
  • Panasonic TZ71

noise profiles

  • Sony ILCE-5100
  • Fujifilm HS50EXR
  • Canon EOS 5Ds R

So now go out, enjoy the summer and take a lot of photos!

A good week for critters

It's been a good week for unusual wildlife.

[Myotis bat hanging just outside the front door] We got a surprise a few nights ago when flipping the porch light on to take the trash out: a bat was clinging to the wall just outside the front door.

It was tiny, and very calm -- so motionless we feared it was dead. (I took advantage of this to run inside and grab the camera.) It didn't move at all while we were there. The trash mission accomplished, we turned out the light and left the bat alone. Happily, it wasn't ill or dead: it was gone a few hours later.

We see bats fairly regularly flying back and forth across the patio early on summer evenings -- insects are apparently attracted to the light visible through the windows from inside, and the bats follow the insects. But this was the first close look I'd had at a stationary bat, and my first chance to photograph one.

I'm not completely sure what sort of bat it is: almost certainly some species of Myotis (mouse-eared bats), and most likely M. yumanensis, the "little brown bat". It's hard to be sure, though, as there are at least six species of Myotis known in the area.

[Woodrat released from trap] We've had several woodrats recently try to set up house near the house or the engine compartment of our Rav4, so we've been setting traps regularly. Though woodrats are usually nocturnal, we caught one in broad daylight as it explored the area around our garden pond.

But the small patio outside the den seems to be a particular draw for them, maybe because it has a wooden deck with a nice dark space under it for a rat to hide. We have one who's been leaving offerings -- pine cones, twigs, leaves -- just outside the door (and less charming rat droppings nearby), so one night Dave set three traps all on that deck. I heard one trap clank shut in the middle of the night, but when I checked in the morning, two traps were sprung without any occupants and the third was still open.

But later that morning, I heard rattling from outside the door. Sure enough, the third trap was occupied and the occupant was darting between one end and the other, trying to get out. I told Dave we'd caught the rat, and we prepared to drive it out to the parkland where we've been releasing them.

[chipmunk caught in our rat trap] And then I picked up the trap, looked in -- and discovered it was a pretty funny looking woodrat. With a furry tail and stripes. A chipmunk! We've been so envious of the folks who live out on the canyon rim and are overloaded with chipmunks ... this is only the second time we've seen here, and now it's probably too spooked to stick around.

We released it near the woodpile, but it ran off away from the house. Our only hope for its return is that it remembers the nice peanut butter snack it got here.

[Baby Great Plains skink] Later that day, we were on our way out the door, late for a meeting, when I spotted a small lizard in the den. (How did it get in?) Fast and lithe and purple-tailed, it skittered under the sofa as soon as it saw us heading its way.

But the den is a small room and the lizard had nowhere to go. After upending the sofa and moving a couple of tables, we cornered it by the door, and I was able to trap it in my hands without any damage to its tail.

When I let it go on the rocks outside, it calmed down immediately, giving me time to run for the camera. Its gorgeous purple tail doesn't show very well, but at least the photo was good enough to identify it as a juvenile Great Plains skink. The adults look more like Jabba the Hutt, nothing like the lovely little juvenile we saw. We actually saw an adult this spring (outside), when we were clearing out a thick weed patch and disturbed a skink from its hibernation. And how did this poor lizard get saddled with a scientific name like Eumeces obsoletus?

July 27, 2015

3D printing Poe

I helped print this statue of Edgar Allan Poe, through “We the Builders“, who coordinate large-scale crowd-sourced 3D print jobs:

Poe's Face

You can see one of my parts here on top, with “-Kees” on the piece with the funky hair strand:

Poe's Hair

The MakerWare I run on Ubuntu works well. I wish they were correctly signing their repositories. Even if I use non-SSL to fetch their key, as their Ubuntu/Debian instructions recommend, it still doesn’t match the packages:

W: GPG error: http://downloads.makerbot.com trusty Release: The following signatures were invalid: BADSIG 3D019B838FB1487F MakerBot Industries dev team <dev@makerbot.com>

And it’s not just my APT configuration:

$ wget http://downloads.makerbot.com/makerware/ubuntu/dists/trusty/Release.gpg
$ wget http://downloads.makerbot.com/makerware/ubuntu/dists/trusty/Release
$ gpg --verify Release.gpg Release
gpg: Signature made Wed 11 Mar 2015 12:43:07 PM PDT using RSA key ID 8FB1487F
gpg: requesting key 8FB1487F from hkp server pgp.mit.edu
gpg: key 8FB1487F: public key "MakerBot Industries LLC (Software development team) <dev@makerbot.com>" imported
gpg: Total number processed: 1
gpg:               imported: 1  (RSA: 1)
gpg: BAD signature from "MakerBot Industries LLC (Software development team) <dev@makerbot.com>"
$ grep ^Date Release
Date: Tue, 09 Jun 2015 19:41:02 UTC

Looks like they’re updating their Release file without updating the signature file. (The signature is from March, but the Release file is from June. Oops!)

© 2015, Kees Cook. This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 License.
Creative Commons License

Basic Color Curves

Basic Color Curves

An introduction and simple color grading/toning

Color has this amazing ability to evoke emotional responses from us. From the warm glow of a sunny summer afternoon to a cool refreshing early evening in fall. We associate colors with certain moods, places, feelings, and memories (consciously or not).

Volumes have been written on color and I am in no way even remotely qualified to speak on it. So I won’t.

Instead, we are going to take a look at the use of the Curves tool in GIMP. Even though GIMP is used to demonstrate these ideas, the principles are generic to just about any RGB curve adjustments.

Your Pixels and You

First there’s something you need to consider if you haven’t before, and that’s what goes into representing a colored pixel on your screen.

PIXLS.US House Zoom Example Open up an image in GIMP.
PIXLS.US House Zoom Example Now zoom in.
PIXLS.US House Zoom Example Nope - don’t be shy now, zoom in more!
PIXLS.US House Zoom Example Aaand there’s your pixel. So let’s investigate what goes into making your pixel.

Remember, each pixel is represented by a combination of 3 colors: Red, Green, and Blue. In GIMP (currently at 8-bit), that means that each RGB color can have a value from 0 - 255, and combining these three colors with varying levels in each channel will result in all the colors you can see in your image.

If all three channels have a value of 255 - then the resulting color will be pure white. If all three channels have a value of 0 - then the resulting color will be pure black.

If all three channels have the same value, then you will get a shade of gray (128,128,128 would be a middle gray color for instance).

So now let’s see what goes into making up your pixel:

GIMP Color Picker Pixel View The RGB components that mix into your final blue pixel.

As you can see, there is more blue than anything else (it is a blue-ish pixel after all), followed by green, then a dash of red. If we were to change the values of each channel, but kept the ratio the same between Red, Green, and Blue, then we would keep the same color and just lighten or darken the pixel by some amount.
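The arithmetic above can be sketched in a few lines of Python. The `lighten` helper is hypothetical, purely for illustration; it is not anything GIMP does internally:

```python
# A sketch of the pixel arithmetic described above, with 8-bit RGB values
# (0-255). Scaling all three channels by the same factor keeps their
# ratios, so the hue stays put and only the lightness changes.
def lighten(rgb, factor):
    """Scale each channel by `factor`, clamping to the 0-255 range."""
    return tuple(min(255, round(c * factor)) for c in rgb)

white = (255, 255, 255)    # all channels at maximum -> pure white
black = (0, 0, 0)          # all channels at zero    -> pure black
gray  = (128, 128, 128)    # equal channels          -> a middle gray

bluish = (60, 90, 180)     # mostly blue, some green, a dash of red
print(lighten(bluish, 1.25))   # same color family, just lighter
```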

Curves: Value

So let’s leave your pixel alone for the time being, and actually have a look at the Curves dialog. I’ll be using this wonderful image by Eric from Flickr.

Hollow Moon by qsimple/Eric on Flickr. (cbna)

Opening up my Curves dialog shows me the following:

GIMP Base Curves Dialog

We can see that I start off with the curve for the Value of the pixels. I could also use the drop down for “Channel” to change to red, green or blue curves if I wanted to. For now let’s look at Value, though.

In the main area of the dialog I am presented with a linear curve, behind which I will see a histogram of the value data for the entire image (showing the amount of each value across my image). Notice a spike in the high values on the right, and a small gap at the brightest values.

GIMP Base Curves Dialog Input Output

What we can do right now is to adjust the values of each pixel in the image using this curve. The best way to visualize it is to remember that the bottom range from black to white represents the current value of the pixels, and the left range is the value to be mapped to.

So to show an example of how this curve will affect your image, suppose I wanted to remap all the values in the image that were in the midtones, and to make them all lighter. I can do this by clicking on the curve near the midtones, and dragging the curve higher in the Y direction:

GIMP Base Curves Dialog Push Midtones

What this curve does is takes the values around the midtones, and pushes their values to be much lighter than they were. In this case, values around 128 were re-mapped to now be closer to 192.

Because the curve is set to Smooth, there will be a gradual transition for all the tones surrounding my point to be pulled in the same direction (this makes for a smoother fall-off as opposed to an abrupt change at one value). Because there is only a single point on the curve right now, this means that all values will be pulled higher.

Hollow Moon Example Pushed Midtones The results of pushing the midtones of the value curve higher (click to compare to original).

Care should be taken when fiddling with these curves to not blow things out or destroy detail, of course. I only push the curves here to illustrate what they do.
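Conceptually, a curve like this is just a lookup table from old value to new value. Here is a rough Python sketch of the idea; note that GIMP interpolates the control points with a smooth spline, while this illustration uses simple piecewise-linear interpolation to keep it short:

```python
# A rough sketch of what a Curves adjustment does under the hood: build a
# 256-entry lookup table from the control points, then map every pixel
# value through it.
def curve_lut(points):
    """Build a value -> value lookup table from (input, output) points."""
    points = sorted(points)
    lut = []
    for x in range(256):
        for (x0, y0), (x1, y1) in zip(points, points[1:]):
            if x0 <= x <= x1:   # find the segment containing x
                t = (x - x0) / (x1 - x0)
                lut.append(round(y0 + t * (y1 - y0)))
                break
    return lut

# Push the midtones up, as in the example above: 128 now maps to 192.
push_mids = curve_lut([(0, 0), (128, 192), (255, 255)])
print(push_mids[128])   # -> 192
```

The endpoints stay anchored at pure black and pure white; everything in between is pulled upward, most strongly near the midtones.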

A very common curve adjustment you may hear about is to apply a slight “S” curve to your values. The effect of this curve would be to darken the dark tones, and to lighten the light tones - in effect increasing global contrast on your image. For instance, if I click on another point in the curves, and adjust the points to form a shape like so:

GIMP Base Curves Dialog S shaped curve A slight “S” curve

This will now cause dark values to become even darker, while the light values get a small boost. The curve still passes through the midpoint, so middle tones will stay closer to what they were.

Hollow Moon Example S curve applied Slight “S” curve increases global contrast (click for original).
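The same numeric sketch works for the “S” curve. The control points below are made up for illustration; the point is only that values below the midpoint map lower and values above it map higher:

```python
def s_curve(v):
    """Piecewise-linear S-shaped mapping of an 8-bit value (illustrative
    control points; GIMP would interpolate these smoothly)."""
    pts = [(0, 0), (64, 48), (128, 128), (192, 208), (255, 255)]
    for (x0, y0), (x1, y1) in zip(pts, pts[1:]):
        if x0 <= v <= x1:
            return round(y0 + (v - x0) * (y1 - y0) / (x1 - x0))

print(s_curve(64))    # darks get darker:   64 -> 48
print(s_curve(128))   # midpoint unchanged: 128 -> 128
print(s_curve(192))   # lights get lighter: 192 -> 208
```

Spreading the darks and lights apart like this is exactly what "increasing global contrast" means numerically.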

In general, I find it easiest to visualize which regions of the curve will affect different tones in your image. Here is a quick way to visualize it (this is true for value as well as RGB curves):

GIMP Base Curves darks mids lights zones

If there is one thing you take away from reading this, let it be the image above.

Curves: Colors

So how does this apply to other channels? Let’s have a look.

The exact same theory applies in the RGB channels as it did with values. The relative positions of the darks, midtones, and lights are still the same in the curve dialog. The primary difference now is that you can control the contribution of color in specific tonal regions of your image.

Value, Red, Green, Blue channel picker.

You choose which channel you want to adjust from the “Channel” drop-down.

To begin demonstrating what happens here it helps to have an idea of generally what effect you would like to apply to your image. This is often the hardest part of adjusting the color tones if you don’t have a clear idea to start with.

For example, perhaps we wanted to “cool” down the shadows of our image. “Cool” shadows are commonly seen during the day in shadows out of direct sunlight. The light that does fall in shadows is mostly reflected light from a blue-ish sky, so the shadows will trend slightly more blue.

To try this, let’s adjust the Blue channel to be a little more prominent in the darker tones of our image, but to get back to normal around the midtones and lighter.

Boosting blues in darker tones
Pushing up blues in darker tones (click for original).

Now, here’s a question: If I wanted to “cool” the darker tones with more blue, what if I wanted to “warm” the lighter tones by adding a little yellow?

Well, there’s no “Yellow” curve to modify, so how to approach that? Have a look at this HSV color wheel below:

The thing to look out for here is that opposite your blue tones on this wheel, you’ll find yellow. In fact, for each of the Red, Green, and Blue channels, the opposite colors on the color wheel will show you what an absence of that color will do to your image. So remember:

Red ↔ Cyan
Green ↔ Magenta
Blue ↔ Yellow

What this means to you while manipulating curves is that if you drag a curve for blue up, you will boost the blue in that region of your image. If instead you drag the curve for blue down, you will be removing blues (or boosting the Yellows in that region of your image).
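You can check the complement rule numerically with Python’s standard `colorsys` module: starting from a neutral gray, pulling the blue channel down lands on a yellow hue (about 60°), while pushing it up lands on blue (about 240°):

```python
import colorsys

def hue_degrees(r, g, b):
    """Hue of an RGB color (components in 0-1), in degrees."""
    return colorsys.rgb_to_hsv(r, g, b)[0] * 360

# A neutral gray with the blue curve dragged down trends yellow...
print(round(hue_degrees(0.8, 0.8, 0.6)))   # -> 60
# ...and with the blue curve dragged up, it trends blue.
print(round(hue_degrees(0.6, 0.6, 0.8)))   # -> 240
```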

So to boost the blues in the dark tones, but increase the yellow in the lighter tones, you could create a sort of “reverse” S-curve in the blue channel:

Boost blues in darks, boost yellow in high tones (click for original).

In the green channel for instance, you can begin to introduce more magenta into the tones by decreasing the curve. So dropping the green curve in the dark tones, and letting it settle back to normal towards the high tones will produce results like this:

Suppressing the green channel in darks/mids adds a bit of magenta
(click for original).

In isolation, these curves are fun to play with, but I think that perhaps walking through some actual examples of color toning/grading would help to illustrate what I’m talking about here. I’ll choose a couple of common toning examples to show what happens when you begin mixing all three channels up.

Color Toning/Grading

Orange and Teal Hell

I use the (cinema film) term color grading here because the first adjustment we will look at to illustrate curves is a horrible Hollywood trend that is best described by Todd Miro on his blog.

Grading is a term for color toning on film, and Todd’s post is a funny look at the prevalence of orange and teal in modern film palettes. So it’s worth a look just to see how silly this is (and hopefully to raise awareness of the obnoxiousness of this practice).

The general thought here is that caucasian skin tones trend towards orange, and if you have a look at a complementary color on the color wheel, you’ll notice that directly opposite orange is a teal color.

Screenshot from Kuler borrowed from Todd.

If you don’t already know about it, Adobe has a fantastic online tool for color visualization and palette creation called Kuler (now Adobe Color CC). It lets you work on colors based on some classic rules, or even generate a color palette from images. Well worth a visit and a fantastic bookmark for fiddling with color.

So a quick look at the desired effect would be to keep/boost the skin tones into a sort of orange-y pinkish color, and to push the darker tones into a teal/cyan combination. (Colorists on films tend to use a Lift, Gamma, Gain model, but we’ll just try this out with our curves here).

Quick disclaimer - I am purposefully exaggerating these modifications to illustrate what they do. Like most things, moderation and restraint will go a long way towards not causing your viewers’ eyeballs to bleed. Remember - light touch!

So I know that I want to see my skin tones head into an orange-ish color. In my image the skin tones are in the upper mids/low highs range of values, so I will start around there.

What I’ve done is put a point around the low midtones to anchor the curve closer to normal for those tones. This lets me fiddle with the red channel and isolate the changes roughly to the mid and high tones only. The skin tones in this image fall toward the upper end of the mids in the red channel, so I’ve boosted the reds there. Things may look a little weird at first:

If you look back at the color wheel again, you’ll notice that between red and green, there is a yellow, and if you go a bit closer towards red the yellow turns to more of an orange. What this means is that if we add some more green to those same tones, the overall colors will start to shift towards an orange.

So we can switch to the green channel now, put a point in the lower midtones again to hold things around normal, and slightly boost the green. Don’t boost it all the way to the reds, but about 2/3rds or so to taste.

This puts a little more red/orange-y color into the tones around the skin. You could further adjust this by perhaps including a bit more yellow as well. To do this, I would again put an anchor point in the low mid tones on the blue channel, then slightly drop the blue curve in the upper tones to introduce a bit of yellow.

Remember, we’re experimenting here so feel free to try things out as we move along. I may consider the upper tones to be finished at the moment, and now I would want to look at introducing a more blue/teal color into the darker tones.

I can start by boosting a bit of blues in the dark tones. I’m going to use the anchor point I already created, and just push things up a bit.

Now I want to make the darker tones a bit more teal in color. Remember the color wheel - teal is the absence of red - so we will drop down the red channel in the lower tones as well.

And finally to push a very slight magenta into the dark tones as well, I’ll push down the green channel a bit.

If I wanted to go a step further, I could also put an anchor point up close to the highest values to keep the brightest parts of the image closer to a white instead of carrying over a color cast from our previous operations.

If your previous operations also darkened the image a bit, you could also now revisit the Value channel, and make modifications there as well. In my case I bumped the midtones of the image just a bit to brighten things up slightly.

Finally to end up at something like this.

After fooling around a bit - disgusting, isn’t it?
(click for original).

I am exaggerating things here to illustrate a point. Please don’t do this to your photos. :)

If you’d like to download the curves file of the results we reached above, get it here:
Orange Teal Hell Color Curves


Remember, think about what the color curves represent in your image to help you achieve your final results. Begin looking at the different tonalities in your image and how you’d like them to appear as part of your final vision.

For even more fun - realize that the colors in your images can help to evoke emotional responses in the viewer, and adjust things accordingly. I’ll leave it as an exercise for the reader to determine some of the associations between colors and different emotions.

Sun 2015/Jul/26

  • An inlaid GNOME logo, part 5

    Esta parte en español

    (Parts 1, 2, 3, 4)

    This is the shield right after it came out of the clamps. I had to pry it a bit from the clamped board with a spatula.

    Unclamped shield

    I cut out the shield shape by first sawing the straight sections, and then using a coping saw on the curved ones.

    Sawing straight edges

    Coping the curves

    All cut out

    I used a spokeshave to smooth the convex curves on the sides.

    Spokeshave for the curves

    The curves on the top are concave, and the spokeshave doesn't fit. I used a drawknife for those.

    Drawknife for the tight curves

    This gives us crisp corners and smooth curves throughout.

    Crisp corner

    On to planing the face flat! I sharpened my plane irons...

    Sharp plane iron

    ... and planed carefully. The cutoff from the top of the shield was useful as a support against the planing stop.

    Starting to plane the shield

    The foot shows through once the paper is planed away...

    Foot shows through the paper

    Check out the dual-color shavings!

    Dual-color shavings

    And we have a flat board once again. That smudge at the top of the sole is from my dirty fingers — dirty with metal dust from the sharpening step — so I washed my hands and planed the dirt away.

    Flat shield

    The mess after planing

    But it is too flat. So, I scribed a line all around the front and edges, and used the spokeshave and drawknife again to get a 45-degree bevel around the shield. The line is a bit hard to see in the first photo, but it's there.

    Scribed lines for bevel

    Beveling with a spokeshave

    Final bevel around the shield

    Here is the first coat of boiled linseed oil after sanding. When it dries I'll add some coats of shellac.

    First coat of linseed oil

Trackpad workarounds: using function keys as mouse buttons

I've had no end of trouble with my Asus 1015E's trackpad. A discussion of laptops on a mailing list -- in particular, someone's concerns that the nifty-looking Dell XPS 13, which is available preloaded with Linux, has had reviewers say that the trackpad doesn't work well -- reminded me that I'd never posted my final solution.

The Asus's trackpad has two problems. First, it's super sensitive to taps, so if any part of my hand gets anywhere near the trackpad while I'm typing, suddenly it sees a mouse click at some random point on the screen, and instead of typing into an emacs window suddenly I find I'm typing into a live IRC client. Or, worse, instead of typing my password into a password field, I'm typing it into IRC. That wouldn't have been so bad on the old style of trackpad, where I could just turn off taps altogether and use the hardware buttons; this is one of those new-style trackpads that doesn't have any actual buttons.

Second, two-finger taps don't work. Three-finger taps work just fine, but two-finger taps: well, I found when I wanted a right-click (which is what two-fingers was set up to do), I had to go TAP, TAP, TAP, TAP maybe ten or fifteen times before one of them would finally take. But by the time the menu came up, of course, I'd done another tap and that canceled the menu and I had to start over. Infuriating!

I struggled for many months with synclient's settings for tap sensitivity and right and left click emulation. I tried enabling syndaemon, which is supposed to disable clicks as long as you're typing then enable them again afterward, and spent months playing with its settings, but in order to get it to work at all, I had to set the timeout so long that there was an infuriating wait after I stopped typing before I could do anything.

I was on the verge of giving up on the Asus and going back to my Dell Latitude 2120, which had an excellent trackpad (with buttons) and the world's greatest 10" laptop keyboard. (What the Dell doesn't have is battery life, and I really hated to give up the Asus's light weight and 8-hour battery life.) As a final, desperate option, I decided to disable taps completely.

Disable taps? Then how do you do a mouse click?

I theorized that, with all of Linux's flexibility, there must be some way to get function keys to work like mouse buttons. And indeed there is. The easiest way seemed to be to use xmodmap (strange to find xmodmap being the simplest anything, but there you go). It turns out that a simple line like

  xmodmap -e "keysym F1 = Pointer_Button1"
is most of what you need. But to make it work, you need to enable "mouse keys":
  xkbset m

But for reasons unknown, mouse keys will expire after some set timeout unless you explicitly tell it not to. Do that like this:

  xkbset exp =m

Once that's all set up, you can disable single-finger taps with synclient:

  synclient TapButton1=0
Of course, you can disable 2-finger and 3-finger taps by setting them to 0 as well. I don't generally find them a problem (they don't work reliably, but they don't fire on their own either), so I left them enabled.

I tried it and it worked beautifully for left click. Since I was still having trouble with that two-finger tap for right click, I put that on a function key too, and added middle click while I was at it. I don't use function keys much, so devoting three function keys to mouse buttons wasn't really a problem.

In fact, it worked so well that I decided it would be handy to have an additional set of mouse keys over on the other side of the keyboard, to make it easy to do mouse clicks with either hand. So I defined F1, F2 and F3 as one set of mouse buttons, and F10, F11 and F12 as another.

And yes, this all probably sounds nutty as heck. But it really is a nice laptop aside from the trackpad from hell; and although I thought Fn-key mouse buttons would be highly inconvenient, it took surprisingly little time to get used to them.

So this is what I ended up putting in my .config/openbox/autostart file. I wrap it in a test for the hostname, since I like to be able to use the same configuration file on multiple machines, but I don't need this hack on any machine but the Asus.

if [ $(hostname) == iridum ]; then
  synclient TapButton1=0 TapButton2=3 TapButton3=2 HorizEdgeScroll=1

  xmodmap -e "keysym F1 = Pointer_Button1"
  xmodmap -e "keysym F2 = Pointer_Button2"
  xmodmap -e "keysym F3 = Pointer_Button3"

  xmodmap -e "keysym F10 = Pointer_Button1"
  xmodmap -e "keysym F11 = Pointer_Button2"
  xmodmap -e "keysym F12 = Pointer_Button3"

  xkbset m
  xkbset exp =m
fi

July 24, 2015

Fri 2015/Jul/24

  • An inlaid GNOME logo, part 4

    This part in Spanish

    (Parts 1, 2, 3)

    In the last part, I glued the paper templates for the shield and foot onto the wood. Now comes the part that is hardest for me: excavating the foot pieces in the dark wood so the light-colored ones can fit in them. I'm not a woodcarver, just a lousy joiner, and I have a lot to learn!

    The first part is not a problem: use a coping saw to separate the foot pieces.

    Foot pieces, cut out

    Next, for each part of the foot, I started with a V-gouge to make an outline that will work as a stop cut. Inside this shape, I used a curved gouge to excavate the wood. The stop cut prevents the gouge from going past the outline. Finally, I used the curved gouge to get as close as possible to the final line.

    V channel as a stop cut Excavating inside the channel

    Each wall needs squaring up, as the curved gouge leaves a chamfered edge instead of a crisp angle. I used the V-gouge around each shape so that one of the edges of the gouge remains vertical. I cleaned up the bottom with a combination of chisels and a router plane where it fits.

    Square walls

    Then, each piece needs to be adjusted to fit. I sanded the edges to have a nice curve instead of the raw edges from the coping saw. Then I put a back bevel on each piece, using a carving knife, so the back part will be narrower than the front. I had to also tweak the walls in the dark wood in some places.

    Unadjusted piece Sanding the curves Beveling the edges

    After a lot of fiddling, the pieces fit — with a little persuasion — and they can be glued. When the glue dries I'll plane them down so that they are level to the dark wood.

    Gluing the pieces Glued pieces

    Finally, I clamped everything against another board to distribute the pressure. Let's hope for the best.


July 22, 2015

Welcome G'MIC


Moving G'MIC to a modern forum

Anyone who’s followed me for a while likely knows that I’m friends with G’MIC (GREYC’s Magic for Image Computing) creator David Tschumperlé. I was also able to release all of my film emulation presets on G’MIC for everyone to use with David’s help and we collaborated on a bunch of different fun processing filters for photographers in G’MIC (split details/wavelet decompose, freaky details, film emulation, mean/median averaging, and more).

[David Tschumperlé with a beauty dish] David, by Me (at LGM2014)

It’s also David who helped me by writing a G’MIC script to mean-average images for me when I started making my amalgamations (thus moving me away from my previous method of using ImageMagick):

Mad Max Fury Road Trailer 2 - Amalgamation
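Mean averaging itself is straightforward: every output pixel is the per-channel average of that pixel across all the frames. Here's a minimal NumPy sketch of the idea (the function name and the toy frames are mine for illustration, not David's actual script):

```python
import numpy as np

def mean_average(frames):
    """Mean-average a list of equally sized frames (H x W x 3 uint8 arrays)."""
    # Promote to float before averaging to avoid uint8 overflow.
    stack = np.stack([f.astype(np.float64) for f in frames])
    return np.rint(stack.mean(axis=0)).astype(np.uint8)

# Two toy 1x2-pixel "frames": one black/white, one all-white.
a = np.array([[[0, 0, 0], [255, 255, 255]]], dtype=np.uint8)
b = np.array([[[255, 255, 255], [255, 255, 255]]], dtype=np.uint8)
avg = mean_average([a, b])  # first pixel lands at mid-gray (128)
```

In practice you'd load each frame with an image library and feed the arrays straight in; G'MIC (and ImageMagick before it) wraps the same arithmetic in a single command.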

So when the forums here on discuss.pixls.us were finally up and running, it only made sense to offer G’MIC its own part of the forums. They had previously been using a combination of Flickr groups and gimpchat.com. These are great forums, they were just a little cumbersome to use.

You can find the new G’MIC category here. Stop in and say hello!

I’ll also be porting over the tutorials and articles on work we’ve collaborated on soon (freaky details, film emulation).



To the winners of the Open Source Photography Course Giveaway

I compiled the list of entries this afternoon across the various social networks and let random.org pick an integer in the domain of all of the entries…

So a big congratulations goes out to:

Denny Weinmann (Facebook, @dennyweinmann, Google+ )
Nathan Haines (@nhaines, Google+)

I’ll be contacting you shortly (assuming you don’t read this announcement here first…)! I will need a valid email address from you both in order to send your download links. You can reach me at pixlsus@pixls.us.

Thank you to everyone who shared the post to help raise awareness! The lessons are still on sale until August 1st for $35USD over on Riley’s site.

1.2.0-beta.0 released

Hello everyone. I’m really happy to announce the first proper pre-release of MyPaint 1.2.0 for those brave early-bird testers out there.

You can download it from https://github.com/mypaint/mypaint/releases/tag/v1.2.0-beta.0.

Windows and Ubuntu binaries are available, all signed, and the traditional signed canonical source tarball is there too for good measure. Sorry about the size of the Windows installers – we need to package all of GTK/GDK and Python on that platform too.

(Don’t forget: if you find the translations lacking for your languages, you can help fix mistakes before the next beta over at https://hosted.weblate.org/engage/mypaint/ )

July 20, 2015

Plugging in those darned USB cables

I'm sure I'm not the only one who's forever trying to plug in a USB cable only to find it upside down. And then I flip it and try it the other way, and that doesn't work either, so I go back to the first side, until I finally get it plugged in, because there's no easy way to tell visually which way the plug is supposed to go.

It's true of nearly all of the umpteen variants of USB plug: almost all of them differ only subtly from the top side to the bottom.

[USB trident] And to "fix" this, USB cables are built so that they have subtly raised indentations which, if you hold them to the light just right so you can see the shadows, say "USB" or have the little USB trident on the top side:

In an art store a few weeks ago, Dave had a good idea.

[USB cables painted for orientation] He bought a white paint marker, and we've used it to paint the logo side of all our USB cables.

Tape the cables down on the desk -- so they don't flop around while the paint is drying -- and apply a few dabs of white paint to the logo area of each connector. If you're careful you might be able to fill in the lowered part so the raised USB symbol stays black; or to paint only the raised USB part. I tried that on a few cables, but after the fifth or so cable I stopped worrying about whether I was ending up with a pretty USB symbol and just started dabbing paint wherever was handy.

The paint really does make a big difference. It's much easier now to plug in USB cables, especially micro USB, and I never go through that "flip it over several times" dance any more.

July 19, 2015

Windows porting

I’m hoping that MyPaint will be able to support Windows fully starting with the first v1.2.0-beta release. This is made possible by the efforts of our own Windows porters and testers, and the dedicated folks who keep MSYS2 working so well.

We'll be using an Inno Setup installer starting with the 1.2.0-beta releases. Releases will start happening shortly (date TBA) on Github, and you’ll be able to pull down installer binaries for 32-bit and 64-bit Windows as part of this.

If you’re interested in the workings of the installer build, and would like to test it and help it improve, it’s all documented and scripted in the current github master. Please be aware that SourceForge downloads are involved during the build procedure until MSYS2 fixes that. Our own binaries and installers will never be knowingly distributed – by us – through SourceForge or any similar crapware bundling site.

Discussion thread on the forums.

July 15, 2015

The Open Source Photography Course


A chance to win a free copy

Photographer Riley Brandt recently released his Open Source Photography Course. I managed to get a little bit of his time to answer some questions for us about his photography and the course itself. You can read the full interview right here:

A Q&A with Photographer Riley Brandt

As an added bonus just for PIXLS.US readers, he has gifted us a nice surprise!

Did Someone Say Free Stuff?

Riley went above and beyond for us. He has graciously offered us an opportunity for 2 readers to win a free copy of the course (one in an open format like WebM/VP8, and another in a popular format like MP4/H.264)!

For a chance to win, I’m asking you to share a link to this post on Twitter, Google+, or Facebook with the hashtag #PIXLSGiveAway (you can click those links to share to those networks). Each social network counts as one entry, so you can triple your chances by posting across all three.

Next week (Wednesday, 2015-07-22, to give folks a full week), I will search those networks for all the posts and compile a list of people, from which I’ll pick the winners (using random.org). Make sure you get that hashtag right! :)

Some Previews

Riley has released three nice preview videos to give a taste of what’s in the courses:


A Q&A with Photographer Riley Brandt

On creating a F/OSS photography course

Riley Brandt is a full-time photographer (and sometimes videographer) at the University of Calgary. He previously worked for the Calgary weekly magazine Fast Forward Weekly (FFWD) as well as Sophia Models International, and his work has been published in many places, from the Wall Street Journal to Der Spiegel (and more).

Riley Brandt Logo

He recently announced the availability of The Open Source Photography Course. It’s a full photographic workflow course using only free, open source software that he has spent the last ten months putting together.

Riley has graciously offered two free copies for us to give away!
For a chance to win, see this blog post.

Riley Brandt Photography Course Banner

I was lucky enough to get a few minutes of Riley’s time to ask him a few questions about his photography and this course.

A Chat with Riley Brandt

Tell us a bit about yourself!

Hello, my name is Riley Brandt and I am a professional photographer at the University of Calgary.

At work, I get to spend my days running around a university campus taking pictures of everything from a rooster with prosthetic legs made in a 3D printer, to wild students dressed in costumes jumping into freezing cold water for charity. It can be pretty awesome.

Outside of work, I am a supporter of Linux and open source software. I am also a bit of a film geek.

Univ. Calgary Prosthetic Rooster [ed. note: He’s not kidding - That’s a rooster with prosthetic legs…]

I see you were trained in photojournalism. Is this still your primary photographic focus?

Though I definitely enjoy portraits, fashion and lifestyle photography, my day to day work as a photographer at a university is very similar to my photojournalism days.

I have to work with whatever poor lighting conditions I am given, and I have to turn around those photos quickly to meet deadlines.

However, I recently became an uncle for the first time to a baby boy, so I imagine I will be expanding into newborn and toddler photography very soon :)

Riley Brandt Environment Portrait Sample Environmental Portrait by Riley Brandt

How long have you been a photographer?

Photography started as a hobby for me when I was living in the Czech Republic in the late 90s and early 2000s. My first SLR camera was the classic Canon AE-1 (which I still have).

I didn’t start to work as a full time professional photographer until I graduated from the Journalism program at SAIT Polytechnic in 2008.

What type of photography do you enjoy doing the most?

In a nutshell, I enjoy photographing people. This includes both portraits and candid moments at events.

I love meeting someone with an interesting story, and then trying to capture some of their personality in an image.

At events, I’ve witnessed everything from the joy of someone meeting an astronaut they idolize, to the anguish of a parent at graduation collecting a degree instead of their child who was killed. Capturing genuine emotion at events is challenging, and overwhelming at times, but is also very gratifying.

It would be hard for me to choose between candids or portraits. I enjoy them both.

Riley Brandt Portraits Portraits by Riley Brandt

How would you describe your personal style?

I’ve been told several times that my images are very “clean”, which I think means I limit the image to only a few key elements and remove any major distractions.

If you had to choose your favorite image from your portfolio, what would it be?

I don’t have a favorite image in my collection.

However, at the end of a work week, I usually have at least one image that I am really happy with. A photo that I will look at again when I get home from work. An image that I look forward to seeing published. Those are my favorites.

Has free-software always been the foundation of your workflow?

Definitely not. I started with Adobe software, and still use it (and other non-free software) at work. Though hopefully that will change.

I switched to free software for all my personal work at home, because all my computers at home run Linux.

I also dislike a lot of Adobe’s actions as a company, e.g. horrible security and switching to a “cloud” version of their software, which is really just a DRM scheme.

There are many significant reasons not to run non-free software, but what really motivated my switch initially was simply that Adobe never released a Linux version of their software.

What is your normal OS/platform?

I guess I am transitioning from Ubuntu to Fedora (both GNU/Linux). My main desktop is still running Ubuntu Gnome 14.04. But my laptop is running Fedora 21.

Ubuntu doesn’t offer an up to date version of the Gnome desktop environment. It also doesn’t use the Gnome Software Centre or many Gnome apps. Fedora does. So my desktop will be running Fedora in the near future as well.

Riley Brandt Summer Days Riley Brandt Summer Days Lifestyle by Riley Brandt

What drove you to consider creating a free-software centric course?

Because it was so difficult for me to transition from Adobe software to free software, I wanted to provide an easier option for others trying to do the same thing.

Instead of spending weeks or months searching through all the different manuals, tutorials and websites, someone can spend a weekend watching my course and be up and running quickly.

Also, it was just a great project to work on. I got to combine two of my passions, Linux and photography.

Is the course the same as your own approach?

Yes, it’s the same way I work.

I start with fundamentals like monitor calibration and file management. Then I move on to basics like correcting exposure, color, contrast and noise. After that, I cover less frequently used tools.

The course focuses heavily on darktable for RAW processing - have you also tried any of the other options such as RawTherapee?

I originally tried digiKam because it looked like it had most of the features I needed. However, KDE and I are like oil and water. The user interface felt impenetrable to me, so I moved on.

I also tried RawTherapee, but only briefly. I got some bad results in the beginning, but that was probably due to my lack of familiarity with the software. I might give it another go one day.

Once darktable added advanced selective editing with masks, I was all in. I like the photo management element as well.

Riley Brandt Portraits

Have you considered expanding your (course) offerings to include other aspects of photography?

Umm.. not just yet. I first need to rest :)

If you were to expand the current course, what would you like to focus on next?

It’s hard to say right now. Possibly a more in depth look at GIMP. Or a series where viewers watch me edit photos from start to finish.

It took 10 months to create this course, will you be taking a break or starting right away on the next installment? :)

A break for sure :) I spent most of my weekends preparing and recording a lesson for the past year. So yes, first a break.

Some parting words?

I would like to recommend the Desktop Publishing course created by GIMP Magazine editor Steve Czajka for anyone who is trying to transition from Adobe InDesign to Scribus.

I would also love to see someone create a similar course for Inkscape.

The Course

Riley Brandt Photography Course Banner

The Open Source Photography Course is available for order now at Riley’s website. The course is:

  • Over 5 hours of video material
  • DRM free
  • 10% of net profits donated back to FOSS projects
  • Available in an open format (WebM/VP8) or a popular one (MP4/H.264), all 1080p
  • $50USD

He has also released some preview videos of the course:

His site has a nice course outline to get a feel for what is covered:

Course Outline

Chapter 1. Getting Started

  1. Course Introduction
    Welcome to The Open Source Photography Course
  2. Calibrate Your Monitor
    Start your photography workflow the right way by calibrating your monitor with dispcalGUI
  3. File Management
    Make archiving and searching for photos easier by using naming conventions and folder organization
  4. Download and Rename
    Use Rapid Photo Downloader to rename all your photos during the download process

Chapter 2. Raw Editing in darktable

  1. Introduction to darktable, Part One
    Get to know darktable’s user interface
  2. Introduction to darktable, Part Two
    Take a quick look at the slideshow view in darktable
  3. Import and Tag
    Import photos into darktable and tag them with keywords, copyright information and descriptions
  4. Rating Images
    Learn an efficient way to cull, rate, add color labels and filter photos in lighttable
  5. Darkroom Overview
    Learn the basics of the darkroom view including basic module adjustments and creating favorites
  6. Correcting Exposure, Part 1
    Correct exposure with the base curves, levels, exposure, and curves modules
  7. Correcting Exposure, Part 2
    See several examples of combining modules to correct an image’s exposure
  8. Correct White Balance
    Use presets and make manual changes in the white balance module to color correct your images
  9. Crop and Rotate
    Navigate through the many crop and rotate options including guides and automatic cropping
  10. Highlights and Shadows
    Recover details lost in the shadows and highlights of your photos
  11. Adding Contrast
    Make your images stand out by adding contrast with the levels, tone curve and contrast modules
  12. Sharpening
    Fix those soft images with the sharpen, equalizer and local contrast modules
  13. Clarity
    Sharpen up your midtones by utilizing the local contrast and equalizer modules
  14. Lens Correction
    Learn how to fix lens distortion, vignetting and chromatic aberrations
  15. Noise Reduction
    Learn the fastest, easiest and best way to clean up grainy images taken in low light
  16. Masks, Part one
    Discover the possibilities of selective editing with the shape, gradient and path tools
  17. Masks, Part Two
    Take your knowledge of masks further in this lesson about parametric masks
  18. Color Zones
    Learn how to limit your adjustments to a specific color’s hue, saturation or brightness
  19. Spot Removal
    Save time by making simple corrections in darktable, instead of opening up GIMP
  20. Snapshots
    Quickly compare different points in your editing history with snapshots
  21. Presets and Styles
    Save your favorite adjustments for later with presets and styles
  22. Batch Editing
    Save time by editing one image, then quickly applying those same edits to hundreds of images
  23. Searching for Images
    Learn how to sort and search through a large collection of images in Lighttable
  24. Adding Effects
    Get creative in the effects group with vignetting, framing, split toning and more
  25. Exporting Photos
    Learn how to rename, resize and convert your RAW photos to JPEG, TIFF and other formats

Chapter 3. Touch Ups in GIMP

  1. Introduction to GIMP
    Install GIMP, then get to know your way around the user interface
  2. Setting Up GIMP, Part 1
    Customize the user interface, adjust a few tools and install color profiles
  3. Setting Up GIMP, Part 2
    Set keyboard shortcuts that mimic Photoshop’s and install a couple of plugins
  4. Touch Ups
    Use the heal tool and the clone tool to clean up your photos
  5. Layer Masks
    Learn how to make selective edits and non-destructive edits using layer masks
  6. Removing Distractions
    Combine layers, a helpful plugin and layer masks to remove distractions from your photos
  7. Preparing Images for the Web
    Reduce file size while retaining quality before you upload your photos to the web
  8. Getting Help and Finding the Community
    Find out which websites, mailing lists and forums to go to for help and friendly discussions

All the images in this post © Riley Brandt.

July 14, 2015

Hummingbird Quidditch!

[rufous hummingbird] After months of at most one hummingbird at the feeders every 15 minutes or so, yesterday afternoon the hummingbirds here all suddenly went crazy. Since then, my patio has looked like a tiny Battle of Britain. There are at least four males involved in the fighting, plus a couple of females who sneak in to steal a sip whenever the principals retreat for a moment.

I posted that to the local birding list and someone came up with a better comparison: "it looks like a Quidditch game on the back porch". Perfect! And someone else compared the hummer guarding the feeder to "an avid fan at Wimbledon", referring to the way his head keeps flicking back and forth between the two feeders under his control.

Last year I never saw anything like this. There was a week or so at the very end of summer where I'd occasionally see three hummingbirds contending at the very end of the day for their bedtime snack, but no more than that. I think putting out more feeders has a lot to do with it.

All the dogfighting (or quidditch) is amazing to watch, and to listen to. But I have to wonder how these little guys manage to survive when they spend all their time helicoptering after each other and no time actually eating. Not to mention the way the males chase females away from the food when the females need to be taking care of chicks.

[calliope hummingbird]

I know there's a rufous hummingbird (shown above) and a broad-tailed hummingbird -- the broad-tailed makes a whistling sound with his wings as he dives in for the attack. I know there's a black-chinned hummer around because I saw his characteristic tail-waggle as he used the feeder outside the nook a few days before the real combat started. But I didn't realize until I checked my photos this morning that one of the combatants is a calliope hummingbird. They're usually the latest to arrive, and the rarest. I hadn't realized we had any calliopes yet this year, so I was very happy to see the male's throat streamers when I looked at the photo. So all four of the species we'd normally expect to see here in northern New Mexico are represented.

I've always envied places that have a row of feeders and dozens of hummingbirds all vying for position. But I would put out two feeders and never see them both occupied at once -- one male always keeps an eye on both feeders and drives away all competitors, including females -- so putting out a third feeder seemed pointless. But late last year I decided to try something new: put out more feeders, but make sure some of them are around the corner hidden from the main feeders. Then one tyrant can't watch them all, and other hummers can establish a beachhead.

It seems to be working: at least, we have a lot more activity so far than last year, even though I never seem to see any hummers at the fourth feeder, hidden up near the bedroom. Maybe I need to move that one; and I just bought a fifth, so I'll try putting that somewhere on the other side of the house and see how it affects the feeders on the patio.

I still don't have dozens of hummingbirds like some places have (the Sopaipilla Factory restaurant in Pojoaque is the best place I've seen around here to watch hummingbirds). But I'm making progress.

Building a better catalog file

Inside a Windows driver package you’ll probably see a few .dlls, a .inf file and a .cat file. If you’ve ever been curious, in Windows you’ll have double-clicked the .cat file and it would show some technical information about the driver and some information about who signed the file.

We want to use this file to avoid having to get vendors to manually sign the firmware file with a GPG detached signature, which also implies trusting the Microsoft WHQL certificate. These are my notes on my adventure so far.

There are not many resources on this stuff, and I’d like to thank dwmw2 and dhowells for all their help so far answering all my stupid questions. osslsigncode is also useful to see how other signing is implemented.

So, the basics. A .cat file is a SMIME PKCS DER file. We can dump the file using:

openssl asn1parse -in ecfirmware.cat  -inform DER

and if we were signing just one file we should be able to verify the .cat file with something like this:

wget http://www.microsoft.com/pki/certs/MicRooCerAut_2010-06-23.crt
openssl x509 -in MicRooCerAut_2010-06-23.crt -inform DER -out ms/msroot.pem -outform PEM
cat ms/*.pem > ms/certs.pem
openssl smime -verify -CAfile ms/certs.pem -in ecfirmware.cat -inform DER -attime $(date +%s --date=2015-01-01) -content ECFirmware.
Verification failed

(Ignore the need to have the root certificate for now, that seems to be a bug in OpenSSL and they probably have bigger fires to fight at this point)

…but it’s not. We have a pkcs7-signed blob and we need to work out how to get the signature to actually *match*, then we have to work out how to interpret the pkcs7-signed data blob and use the sha256sums therein to validate the actual data. OpenSSL doesn’t know how to interpret the MS content type OID, so it wimps out and doesn’t put any data into the digest at all.

We can get the blob using a simple:

dd if=ecfirmware.cat of=ecfirmware.cat.payload bs=1 skip=66 count=1340

…which now verifies:

openssl smime -verify -CAfile ms/certs.pem -in ecfirmware.cat -inform DER -attime $(date +%s --date=2015-01-01) -content ecfirmware.cat.payload
Verification successful
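Where do skip=66 and count=1340 come from? From walking the outer ASN.1 tag/length headers that asn1parse prints. As a rough sketch (definite lengths only; a simplification of mine, not production DER parsing), decoding one header looks like this:

```python
def der_header(buf, off=0):
    """Decode the tag, content length and header size of the DER TLV at `off`.

    Handles only definite-length encodings -- enough to locate the
    payload offsets that dd carves out.
    """
    tag = buf[off]
    first = buf[off + 1]
    if first < 0x80:            # short form: length fits in one byte
        return tag, first, 2
    nbytes = first & 0x7F       # long form: next `nbytes` bytes hold the length
    length = int.from_bytes(buf[off + 2:off + 2 + nbytes], "big")
    return tag, length, 2 + nbytes

# A tiny SEQUENCE { INTEGER 5 } encoded by hand:
blob = bytes([0x30, 0x03, 0x02, 0x01, 0x05])
tag, length, hlen = der_header(blob)  # -> (0x30, 3, 2)
```

Applying this repeatedly, descending into each constructed TLV from offset 0, walks you down to the content octets.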

The blob appears to be a few tables of UTF-16 filenames and SHA1/SHA256-checksummed data, encoded in ASN.1 notation. I’ve spent quite a few evenings decompiling the DER file into an ASN file without a whole lot of success (there are 14(!) layers of structures to contend with), and I’ve still not got an ASN file that can correctly parse my DER file for my simple unsigned v1 (ohh yes, v1 = SHA1, v2 = SHA256) test files. There is also a lot of junk data in the blob, and some questionable design choices which mean it’s a huge pain to read. Even if I manage to write the code to read the .cat data blob, I’ve then got to populate the data (including the junk data…) so that Windows will accept my file, to avoid needing a Microsoft box to generate all firmware images. Also add to the mix that the ASN.1 data is different on different Windows versions (with legacy versions overridden), which explains why you see things like garbled strings rather than translated titles in the catalog viewer in Windows XP when trying to view .cat files created on Windows 7.

I’ve come to the conclusion that writing a library to reliably read and write all versions of .cat files is probably about 3 months work, and that’s 3 months I simply don’t have. Given there isn’t actually a specification (apart from a super small guide on how to use the MS crypto API) it would also be an uphill battle with every Windows release.

We could of course do something Linux-specific that does the same thing, although that obviously would not work on Windows and means we have to ask the vendors to do an extra step in release engineering. Using GPG would be easiest, but a lot of the hardware vendors seem wed to the PKCS certificate mechanism, and I suppose it does mean you can layer certificates for root trust, vendors and contractors. GPG-signing only the firmware file doesn’t actually give us a file-list with the digest values of the other metadata in the .cab file.

A naive solution would be to do something like this:

sha256sum firmware.inf firmware.metainfo.xml firmware.bin > firmware.digest
openssl dgst -sha256 -sign cert-private.pem -out firmware.sign firmware.digest
openssl dgst -sha256 -verify cert-pubkey.pem -signature firmware.sign firmware.digest

But to actually extract the firmware.digest file we need the private key. We can check prepared data using the public key, but that means shipping firmware.digest and firmware.sign when we only really want one file (.cab files checksum the files internally, so we can be sure against data corruption).
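To make the naive scheme concrete: the digest file is nothing more than sha256sum-style lines over each file's contents. A hedged Python sketch (the helper name and file names are hypothetical):

```python
import hashlib

def digest_list(paths):
    """Build a sha256sum-style digest list for the given files."""
    lines = []
    for path in paths:
        with open(path, "rb") as f:
            digest = hashlib.sha256(f.read()).hexdigest()
        lines.append(f"{digest}  {path}")  # two spaces, like sha256sum output
    return "\n".join(lines) + "\n"
```

That text blob is what would be signed with the private key, and both it and the detached signature would have to ship in the .cab -- which is exactly the two-files-instead-of-one problem described above.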

Before I go crazy and invent yet another file format specification does anybody know of a signed digest format with an open specification? Better ideas certainly welcome, thanks.


July 13, 2015

darktable on Windows

darktable on Windows

Why don't you provide a Windows build?

Due to the heated debate lately, a short foreword:

We do not want to harass, insult or criticize anyone due to his or her choice of operating system. Still, from time to time we encounter comments from people accusing us of ignorance or even disrespect towards Windows users. If any of our statements can be interpreted as such, we want to apologize for that – and once more give the full explanation of our lacking Windows support.

The darktable project

darktable is developed and maintained by a small group of people in their spare time, just for fun. We do not have any funds, do not provide travel reimbursements for conferences or meetings, and don’t even have a legal entity at the moment. In other words: None of the developers has ever seen (and most likely will ever see) a single $(INSERT YOUR CURRENCY) for the development of darktable, which is thus a project purely driven by enthusiasm and curiosity.

The development environment

The team is quite mixed: some have a professional background in computing, others don’t. But all love photography and like exploring the full information recorded by the camera themselves. Most new features are added to darktable when an expert in, let’s say, GPU computing steps up and is willing to provide and maintain code for the new feature.

Up till now there is one technical thing that unites all developers: none of them uses Windows as their operating system. Some are using Mac OS X, Solaris, etc., but most run some Linux distribution. New flavors of operating systems kept being added to our list as people willing to support their favorite system joined the team.

Also (since this stands out a bit as “commercial operating system”) Mac OS X support arrived in exactly this way. Someone (parafin!) popped up, said: “I like this software, and I want to run darktable on my Mac.”, compiled it on OS X and since then does testing and package building for the Mac OS X operating system. And this is not an easy job. Initially there were just snapshot builds from git, no official releases, not even release candidates – but already the first complaints about the quality arrived. Finally, there was a lot of time invested in working around specific peculiarities of this operating system to make it work and provide builds for every new version of darktable released.

This nicely shows one of the consequences of the project’s organizational (non-) structure and development approach: at first, every developer cares about darktable running on his personal system.

Code contributions and feature requests

Usually feature requests from users or from the community are treated like a brainstorming session. Someone proposes a new feature, people think and discuss about it – and if someone likes the idea and has time to code it, it might eventually come – if the team agrees on including the feature.

But life is not a picnic. You probably wouldn’t pass by your neighbor and demand from him to repair your broken car – just because you know he loves to tinker with his vintage car collection at home.
Same applies here. No one feels comfortable if suddenly request are being made that would require a non-negligible amount of work – but with no return for the person carrying out the work, neither moneywise nor intellectually.

This is the feeling created every time someone just passes by leaving as only statement: “Why isn’t there a Windows build (yet)?”.

Providing a Windows build for darktable

The answer has always been the same: because no one stepped up doing it. None of the passers-by requesting a Windows build actually took the initiative, just downloaded the source code and started the compilation. No one approached the development team with actual build errors and problems encountered during a compilation using MinGW or else on Windows. The only thing ever aired were requests for ready-made binaries.

As stated earlier here, the development of darktable is totally about one’s own initiative. This project (as many others) is not about ordering things and getting them delivered. It’s about starting things, participating and contributing. It’s about trying things out yourself. It’s FLOSS.

One argument that pops up from time to time is: “darktable’s user base would grow immensely with a Windows build!”. This might be true. But – what’s the benefit from this? Why should a developer care how many people are using the software if his or her sole motivation was producing a nice software that he/she could process raw files with?

On the contrary: more users usually means more support, more bug tracker tickets, more work. And this work usually isn’t the pleasing sort, hunting seldom bugs occurring with some rare camera’s files on some other operating system is usually not exactly what people love to spent their Saturday afternoon on.

This argumentation would totally make sense if darktable would be sold, the developers paid and the overall profit would depend on the number of people using the software. No one can be blamed for sending such requests to a company selling their software or service (for your money or your data, whatever) – and it is up to them to make an economical decision on whether it makes sense to invest the time and manpower or not.

But this is different.

Not building darktable on Windows is not a technical issue after all. There certainly are problems of portability, and code changes would be necessary, but in the end it would probably work out. The real problem is (as has been pointed out by the darktable development team many times in the past) the maintenance of the build as well as all the dependencies that the package requires.

The darktable team is trying to deliver a high-quality reliable software. Photographers rely on being able to re-process their old developments with recent versions of darktable obtaining exactly the same result – and that on many platforms, being it CPUs or GPUs with OpenCL. Satisfying this objective requires quite some testing, thinking and maintenance work.

Spawning another build on a platform that not a single developer is using would mean lots and lots of testing – in unfamiliar terrain, and with no fun attached at all. Releasing a half-way working, barely tested build for Windows would harm the project’s reputation and diminish the confidence in the software treating your photographs carefully.

We hope that this reasoning is comprehensible and that no one feels disrespected due to the choice of operating system.


That other OS

Why don't you provide a Windows build?

Due to the heated debate lately, a short foreword:

We do not want to harass, insult or criticize anyone due to his or her choice of operating system. Still, from time to time we encounter comments from people accusing us of ignorance or even disrespect towards Windows users. If any of our statements can be interpreted such, we want to apologize for that – and once more give the full explanation of our lacking Windows support.

The darktable project

darktable is developed and maintained by a small group of people in their spare time, just for fun. We do not have any funds, do not provide travel reimbursements for conferences or meetings, and don't even have a legal entity at the moment. In other words: none of the developers has ever seen (and most likely never will see) a single $(INSERT YOUR CURRENCY) for the development of darktable, which is thus a project purely driven by enthusiasm and curiosity.

The development environment

The team is quite mixed: some have a professional background in computing, others don't. But all love photography and like exploring for themselves the full information recorded by the camera. Most new features are added to darktable when an expert in, let's say, GPU computing steps up and is willing to provide and maintain code for the new feature.

Up to now, one technical thing unites all developers: none of them uses Windows as their operating system. Some use Mac OS X, Solaris, etc., but most run some Linux distribution. New flavors of operating systems kept being added to our list as people willing to support their favorite system joined the team.

Also (since it stands out a bit as a “commercial operating system”), Mac OS X support arrived in exactly this way. Someone (parafin!) popped up, said: “I like this software, and I want to run darktable on my Mac.”, compiled it on OS X, and has been doing testing and package building for Mac OS X ever since. And this is not an easy job. Initially there were just snapshot builds from git, no official releases, not even release candidates – but already the first complaints about the quality arrived. In the end, a lot of time was invested in working around the specific peculiarities of this operating system to make it work and to provide builds for every new version of darktable released.

This nicely shows one of the consequences of the project's organizational (non-) structure and development approach: at first, every developer cares about darktable running on his personal system.

Code contributions and feature requests

Usually, feature requests from users or from the community are treated like a brainstorming session. Someone proposes a new feature, people think about it and discuss it – and if someone likes the idea and has time to code it, it might eventually arrive – provided the team agrees on including the feature.

But life is not a picnic. You probably wouldn't drop by your neighbor's and demand that he repair your broken car – just because you know he loves to tinker with his vintage car collection at home.
The same applies here. No one feels comfortable if requests are suddenly being made that would require a non-negligible amount of work – with no return for the person carrying out the work, neither monetarily nor intellectually.

This is the feeling created every time someone just passes by, leaving only the statement: “Why isn't there a Windows build (yet)?”.

Providing a Windows build for darktable

The answer has always been the same: because no one stepped up to do it. None of the passers-by requesting a Windows build ever took the initiative, downloaded the source code and started a compilation. No one approached the development team with actual build errors or problems encountered while compiling with MinGW or anything else on Windows. The only thing ever aired were requests for ready-made binaries.

As stated earlier here, the development of darktable is totally about one's own initiative. This project (as many others) is not about ordering things and getting them delivered. It's about starting things, participating and contributing. It's about trying things out yourself. It's FLOSS.

One argument that pops up from time to time is: “darktable's user base would grow immensely with a Windows build!”. This might be true. But what's the benefit of that? Why should a developer care how many people are using the software if his or her sole motivation was producing nice software to process raw files with?

On the contrary: more users usually means more support, more bug tracker tickets, more work. And this work usually isn't the pleasing sort: hunting rare bugs that occur with some uncommon camera's files on some other operating system is not exactly what people love to spend their Saturday afternoons on.

This argumentation would make perfect sense if darktable were sold, the developers were paid, and the overall profit depended on the number of people using the software. No one can be blamed for sending such requests to a company selling their software or service (for your money or your data, whatever) – and it is up to them to make an economic decision on whether it makes sense to invest the time and manpower or not.

But this is different.

Not building darktable on Windows is, after all, not a technical issue. There certainly are portability problems, and code changes would be necessary, but in the end it would probably work out. The real problem is (as the darktable development team has pointed out many times in the past) the maintenance of the build, as well as of all the dependencies that the package requires.

The darktable team is trying to deliver high-quality, reliable software. Photographers rely on being able to re-process their old developments with recent versions of darktable and obtain exactly the same result – and that on many platforms, be it CPUs or GPUs with OpenCL. Satisfying this objective requires quite some testing, thinking and maintenance work.

Spawning another build on a platform that not a single developer uses would mean lots and lots of testing – in unfamiliar terrain, and with no fun attached at all. Releasing a halfway-working, barely tested build for Windows would harm the project's reputation and diminish confidence that the software treats your photographs carefully.

We hope that this reasoning is comprehensible and that no one feels disrespected due to the choice of operating system.



Translators needed for 1.2.0

MyPaint badly needs your language skills to make the 1.2.0 release a reality. Please help us out by translating the program into your language. We literally cannot make v1.2.0 a good release of MyPaint without your help, so we've made it as easy as we can for you to get involved by translating program texts.

Translation status: Graphical status badge for all mypaint project translations
Begin translating now: https://hosted.weblate.org/engage/mypaint/

The texts in the MyPaint application are in heavy need of updating for the 23 languages currently supported. If you're fluent in a language other than English, and have a good working knowledge of MyPaint and the English language, then you can help our translation effort.

We’re using a really cool online translation service called Weblate, another Open Source project whose developers have very graciously offered us free hosting. It integrates with our Github development workflow very nicely indeed, so well in fact that I’m hoping to use it for continuous translation after 1.2.0 has been released.

To get involved, click on the begin translating now link above, and sign in with Github, Google, or Facebook. You can create an account limited to just the Weblate developers’ hosted service too. There are two parts to MyPaint: the main application, and its brush-painting library. Both components need translating.

Maintaining language files can be a lot of work, so you should get credit for the work you do. The usual workflow isn’t anonymous: your email address and sign-in name will be recorded in the commit log on Github, and you can put your names in the about box by translating the marker string “translator-credits” when it comes up! If you’d prefer to work anonymously, you don’t have to sign in: you can just make suggestions via Weblate for other translators to review and integrate.

Even if your language is complete, you can help by sharing the link above among your colleagues and friends on social media.

Thank you to all of our current translators, and in advance to new translators, for all the wonderful work you’re doing. I put a lot of my time into MyPaint trying to make sure that it’s beautiful, responsive, and stable. I deeply appreciate all the work that others do on the project too and, from a monoglot like myself, some of the most inspiring work I see happening on the project by others is all the effort put into making MyPaint comprehensible and international. Many, many thank yous.

Frozen for 1.2.0

Quick note to say that MyPaint is now frozen for the upcoming 1.2.0 release. Expect announcements here about dates for specific betas, plus previews and screenshots of new features; however the most current project status can be seen on our Github milestones page.

July 10, 2015

Fri 2015/Jul/10

  • Package repositories FAIL

    Today I was asking around about something like this: mobile websites work without Flash, so how come non-mobile Twitter and YouTube want Flash on my desktop (where Flash is now disabled because of all the 0-day exploits)?

    Elad kindly told me that if I install the GStreamer codecs, and disable Flash, it should work. I didn't have those codecs on my not-for-watching-TV machine, so I set to it.

    openSUSE cannot distribute the codecs themselves, so the community does it with an external, convenient one-click-install web-button. When you click it, the packaging machinery churns and you get asked if you want to trust the Packman repository — where all the good non-default packages are.

    Packman's help page

    It's plain HTTP. No HSTS or anything. It tells you the fingerprint of the repository's signing key... over plain HTTP. On the FAQ page, there is a link to download that public key over plain FTP.

    Packman's key over plain FTP

    Now, that key is the "PackMan Build Service" key, a key from 2007 with only 1024-bit DSA. The key is not signed by anybody.

    PackMan Build Service key

    However, the key that the one-click install wants to use is another one, the main PackMan Project key.

    PackMan Project key

    It has three signatures, but when I went down the rabbit hole of fetching each of those keys to see if I knew those people — I have heard of two of them, but my little web of trust doesn't have them.

    So, YOLO, right? "Accept". "Trust". Because "Cancel" is the only other option.

    The installation churns some more, and it gives me this:

    libdvdcss repository is unsigned

    YOLO all the way.

    I'm just saying, that if you wanted to pwn people who install codecs, there are many awesome places here to do it.

    But anyway. After uninstalling flash-player, flash-player-gnome, freshplayerplugin, pullin-flash-player, the HTML5 video player works in Firefox and my fat desktop now feels as modern as my phone.

    Update: Hubert Figuière has an add-on for Firefox that will replace embedded Flash video players in other websites with HTML5, the No-flash add-on.

July 09, 2015

Krita 2.9.6 released!

After a month of bugfixing, we give you Krita 2.9.6! Bugfixes aren’t the only thing in 2.9.6, though: we also have a few new features!

The biggest change is that we now have selection modifiers! They are configured as follows:

  • Shift+click: add to selection.
  • Alt+click: subtract from selection.
  • Shift+Alt+click: intersect selection.
  • Ctrl+click: replace selection (for when you have set the selection mode to something other than replace).

These don’t work with the path tool yet, and aren’t configurable, but we’re going to work on that. Check out the manual page for the selection tools for more information on how this relates to the constrain and from-center options for the rectangle and ellipse select.

Also new: Continuous transform and crop!

Now, when you apply a transform or crop and then click directly on the canvas, Krita will recall the previous transform or crop and let you adjust that instead! If you press ‘esc’ while in this ‘continuous mode’, Krita will forget the continuous transform and let you start a new one.

The last of the big new features is that the tool options can now be put into the toolbar:

tool options in the toolbar

By default it’s still a docker, but you can configure it in settings->configure Krita->general. You can also easily summon this menu with the ‘\’ key!

And Thorsten Zachmann has improved the speed of all the color adjustment filters, often by a factor of four or more.

Full list of features new to 2.9.6:

  • Add possibility to continue a Crop Tool action
  • Speed up of color balance, desaturate, dodge, hsv adjustment, index color per-channel and posterize filters.
  • Activate Cut/Copy Sharp actions in the menu
  • Implemented continuation of the transform with clicking on canvas
  • new default workspace
  • Add new shortcuts (‘\’ opens the tool options, f5 opens the brush editor, f7 opens the preset selector.)
  • Show the tool options in a popup (toggle this on or off in the general preferences, needs restarting Krita)
  • Add three new default shortcuts (Create group layer = Ctrl+G, Merge Selected layer = Ctrl+Alt+E, Scale image to new size = Alt+Ctrl+I )
  • Add a ‘hide pop-up on mouseclick’ option to the advanced color selector.
  • Make brush ‘speed’ sensor work properly
  • Allow preview for “Image Background Color and Transparency” dialog.
  • Selection modifier patch is finally in! (shift=add, alt=subtract, shift+alt=intersect, ctrl=replace. Path tool doesn’t work yet, and they can’t be configured yet)

Bugfixes new to 2.9.6

  • BUG:346932 Fix crash when saving a pattern to a *.kra
  • Make Group Layer return correct extent and exact bounds when in pass-through mode
  • Make fixes to pass-through mode.
  • Added an optional optimization to slider spin box
  • BUG:348599 Fix node activating on the wrong image
  • BUG:349792 Fix deleting a color in the palette docker
  • BUG:349823 Fix scale to image size while adding a file layer
  • Fixed wrapping issue for all dial widgets in Layer Styles dialog
  • Fix calculation of y-res when loading .kra files
  • BUG:349598 Prevent a divide by zero
  • BUG:347800 Reset cursor when canvas is extended to avoid cursor getting stuck in “pointing hand” mode
  • BUG:348730 Fix tool options visibility by default
  • BUG:349446 Fix issue where changing theme doesn’t update user config
  • BUG:348451 Fix internal brush name of LJF smoke.
  • BUG:349424 Set documents created from clipboard to modified
  • BUG:349451 Make more robust: check pointers before use
  • Use our own code to save the merged image for kra and ora (it’s faster)
  • BUG:313296 Fix Hairy brush not to paint black over transparent pixels in Soak Ink mode
  • Fix PVS warning in hairy brush
  • (gmic) Try to workaround the problem with busy cursor
  • BUG:348750 Don’t limit the allowed dock areas
  • BUG:348795 Fix uninitialized m_maxPresets
  • BUG:349346 (gmic) If there is selection, do not synchronize image size
  • BUG:348887 Disable autoscroll for the fill-tool as well.
  • BUG:348914 Rename the fill layers.



Taming annoyances in the new Google Maps

For a year or so, I've been appending "output=classic" to any Google Maps URL. But Google disabled Classic mode last month. (There have been a few other ways to get classic Google maps back, but Google is gradually disabling them one by one.)

I have basically three problems with the new maps:

  1. If you search for something, the screen is taken up by a huge box showing you what you searched for; if you click the "x" to dismiss the huge box so you can see the map underneath, the box disappears but so does the pin showing your search target.
  2. A big swath at the bottom of the screen is taken up by a filmstrip of photos from the location, and it's an extra click to dismiss that.
  3. Moving or zooming the map is very, very slow: it relies on OpenGL support in the browser, which doesn't work well on Linux in general, or on a lot of graphics cards on any platform.

Now that I don't have the "classic" option any more, I've had to find ways around the problems -- either that, or switch to Bing maps. Here's how to make the maps usable in Firefox.

First, for the slowness: the cure is to disable WebGL in Firefox. Go to about:config and search for webgl, then double-click on the line for webgl.disabled to set it to true.
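As a side note, the same preference can be pinned in a user.js file in the Firefox profile directory, so it survives being reset; this is standard Firefox behavior, not something from the article:

```js
// user.js, placed in the root of your Firefox profile directory.
// Firefox re-applies every user_pref() line here at startup.
user_pref("webgl.disabled", true);
```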

For the other two, you can add userContent lines to tell Firefox to hide those boxes.

Locate your Firefox profile. Inside it, edit chrome/userContent.css (create that file if it doesn't already exist), and add the following two lines:

div#cards { display: none !important; }
div#viewcard { display: none !important; }

Voilà! The boxes that used to hide the map are now invisible. Of course, that also means you can't use anything inside them; but I never found them useful for anything anyway.

July 07, 2015

What's New, Some New Tutorials, and PIXLS!

What's been going on?! A bunch!

In case you've not noticed around here, I've been transitioning tutorials and photography related stuff over to PIXLS.US.

I built that site from scratch, so it's taken a bit of my time... I've also been slowly porting some of my older tutorials that I thought would still be useful over there. I've also been convincing all sorts of awesome folks from the community to help out by writing/recording tutorials for everyone, and we've already got quite a few nice ones over there:

A Blended Panorama with PhotoFlow

Basic Landscape Exposure Blending with GIMP and G'MIC

An Open Source Portrait (Mairi)

Skin Retouching with Wavelet Decompose

Luminosity Masking in darktable

Digital B&W Conversion (GIMP)

So just a gentle reminder that the tutorials have mostly moved to PIXLS.US. Head over there for the newest versions and brand-new material, like the latest post from the creator of PhotoFlow, Andrea Ferrero, on Panorama Exposure Blending with Hugin and PhotoFlow!

Also, don't forget to come by the forums and join the community at:


That's not to say I've abandoned this blog, just that I've been busy trying to kickstart a community over there! I'm also accepting submissions and/or ideas for new articles. Feel free to email me!

PhotoFlow Blended Panorama Tutorial

PhotoFlow Blended Panorama Tutorial

Andrea Ferrero has been busy!

After quite a bit of back and forth I am quite happy to be able to announce that the latest tutorial is up: A Blended Panorama with PhotoFlow! This contribution comes from Andrea Ferrero, the creator of a new project: PhotoFlow.

In it, he walks through a process of stitching a panorama together using Hugin and blending multiple exposure options through masking in PhotoFlow (see lede image). The results are quite nice and natural looking!

Local Contrast Enhancement: Gaussian vs. Bilateral

Andrea also runs through a quick video comparison of doing LCE using both a Gaussian and Bilateral blur, in case you ever wanted to see them compared side-by-side:

He started a topic post about it in the forums as well.

Thoughts on the Main Page

Over on discuss I started a thread to talk about some possible changes to the main page of the site.

Specifically I’m talking about the background lede image at the very top of the main page:

I had originally created that image as a placeholder in Blender. The site is intended as a photography-centric site, so the natural thought was why not use photos as a background instead?

The thought is to rotate through images provided by the community. I’ve also mocked up two versions of using an image as a background.

Simple replacement of the image with photos from the community. This is the most popular in the poll on the forum at the moment. The image will be rotated amongst images provided by community members. I just need to make sure that the text shown is legible over whatever the image may be…

Full viewport splash version, where the image fills the viewport. This is not very popular from the feedback I received (thank you akk, ankh, muks, DrSlony, LebedevRI, and others on irc!). I personally like the idea but I can understand why others may not like it.

If anyone wants to chime in (or vote in the poll) then head over to the forum topic and let us know your thoughts!

Also, a big thank you to Morgan Hardwood for allowing us to use that image as a background example. If you want a nice way to support F/OSS development, it just so happens that Morgan is a developer for RawTherapee, and a print of that image is available for purchase. Contact him for details.

July 06, 2015

The votes are in!

Here’s the definitive list of stretch goal votes. A whopping 94.1% of eligible voters (622 of 661) actually voted: 94.9% of Kickstarter backers and 84.01% of PayPal backers. Thank you again, everyone who pledged, donated and voted, for your support!

Rank – votes (share) – stretch goal (Phabricator task):

0. N/A – Extra: Lazy Brush: interactive tool for coloring the image in a couple of strokes (T372)
1. 120 votes (19.29%) – 10. Animated file formats export: animated gif, animated png and spritemaps (T116)
2. 56 votes (9.00%) – 8. Rulers and guides: drag out guides from the rulers and generate, save and load common sets of guides. Save guides with the document. (T114)
3. 51 votes (8.20%) – 1. Multiple layer selection improvements (T105)
4. 48 votes (7.72%) – 19. Make it possible to edit brush tips in Krita (T125)
5. 42 votes (6.75%) – 21. Implement a Heads-Up-Display to manipulate the common brush settings: opacity, size, flow and others. (T127)
6. 38 votes (6.11%) – 2. Update the look & feel of the layer docker panel (1500 euro stretch goal) (T106)
7. 37 votes (5.95%) – 22. Fuzzy strokes: make the stroke consistent, but add randomness between strokes. (T166)
8. 33 votes (5.31%) – 5. Improve grids: add a grid docker, add new grid definitions, snap to grid (T109)
9. 31 votes (4.98%) – 6. Manage palettes and color swatches (T112)
10. 28 votes (4.50%) – 18. Stacked brushes: stack two or more brushes together and use them in one stroke (T124)

These didn’t make it, but we’re keeping them for next time:

Rank – votes (share) – stretch goal:

11. 23 votes (3.70%) – 4. Select presets using keyboard shortcuts
12. 19 votes (3.05%) – 13. Scale from center pivot: right now, we transform from the corners, not the pivot point.
13. 19 votes (3.05%) – 9. Composition helps: vector objects that you can place and that help with creating rules of thirds, spiral, golden mean and other compositions.
14. 18 votes (2.89%) – 7. Implement a Heads-Up-Display for easy manipulation of the view
15. 17 votes (2.73%) – 20. Select textures on the fly to use in textured brushes
16. 9 votes (1.45%) – 15. HDR gradients
17. 9 votes (1.45%) – 11. Add precision to the layer move tool
18. 8 votes (1.29%) – 17. Gradient map filter
19. 5 votes (0.80%) – 16. On-canvas gradient previews
20. 5 votes (0.80%) – 12. Show a tooltip when hovering over a layer with content to show which one you’re going to move.
21. 3 votes (0.48%) – 3. Improve feedback when using more than one color space in a single image
22. 3 votes (0.48%) – 14. Add a gradient editor for stop gradients

July 04, 2015

Create a signed app with Cordova

I wrote last week about developing apps with PhoneGap/Cordova. But there's one thing I didn't cover: when you type cordova build, you're building only a debug version of your app. If you want to release it, you have to sign it. Figuring out how turned out to be a little tricky.

Most pages on the web say you can sign your apps by creating platforms/android/ant.properties with the same keystore information in it that you'd put in an ant build, then running cordova build android --release.

But Cordova completely ignored my ant.properties file and went on creating a debug .apk file and no signed one.

I found various other purported solutions on the web, like creating a build.json file in the app's top-level directory ... but that just made Cordova die with a syntax error inside one of its own files. This is the only method that worked for me:

Create a file called platforms/android/release-signing.properties, and put this in it:

// if you don't want to enter the password at every build, use this:
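The snippet above shows only the comment line; the remaining entries follow the standard Android Gradle signing convention. A sketch with placeholder values (the keystore path, alias, and passwords below are illustrative, not from this article):

```ini
# platforms/android/release-signing.properties
# Keystore location; typically resolved relative to platforms/android/ (assumption)
storeFile=../../my-release-key.keystore
keyAlias=my_key_alias
# if you don't want to enter the password at every build, use this:
storePassword=my_store_password
keyPassword=my_key_password
```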

Then cordova build android --release finally works, and creates a file called platforms/android/build/outputs/apk/android-release.apk.

July 02, 2015

libmypaint is ready for translation

MyPaint is well on its way to feature and string freeze, but its brush library is stable enough to be translated now.

You can help! The developers of Weblate, a really nice online translation tool, have offered us hosting for translations.

Translation status: Graphical status badge for all mypaint project translations
Join: https://hosted.weblate.org/engage/mypaint/

If you’re fluent in a language other than English, or know a FOSS-friendly person who is, you can help with the translation effort. Please share the link above as widely as you can, or dive in yourself and start translating brush setting texts. It’s a surprisingly simple workflow: you translate program texts one at a time resolving any discrepancies and correcting problems the system has discovered. Each text has a link back to the source code too, if you want to see where it was set up. At the end of translating into your language you get a nice fully green progress bar, a glowing sense of satisfaction, and your email address in the commit log ☺

If you want to help out and have good language skills, we’d really appreciate your assistance. Helping to translate a project is a great way of learning about how it works internally, and it’s one of the easiest and most effective ways of getting involved in the Free/Open Source culture and putting great software into people’s hands, worldwide.

July 01, 2015

Web Open Font Format (WOFF) for Web Documents

The Web Open Font Format (WOFF for short; here using the Aladin font) is several years old. Still, it took some time to reach a point where WOFF is almost painless to use on the Linux desktop. WOFF is based on OpenType-style fonts and is in some ways similar to the better-known TrueType Font (.ttf) format. TTF fonts are widely known and used on the Windows platform; these feature-rich fonts are used for high-quality font display in the system and in local office and design documents. WOFF aims at closing the gap by making those features available on the web. With these fonts it becomes possible to show nice-looking type on paper and in web presentations in almost the same way.

In order to make WOFF a success, several open source projects joined forces, among them Pango and Qt, and contributed to harfbuzz, an OpenType text shaping engine. Firefox and other web engines can handle WOFF inside SVG web graphics and HTML web documents using harfbuzz. Inkscape uses harfbuzz too, at least since version 0.91.1, for text inside SVG web graphics. As Inkscape is able to produce PDFs, designing for both the web and the print world at the same time becomes easier on Linux.

Where to find and get WOFF fonts?
Open Font Library and Google host huge font collections, and there are more out on the web.

How to install WOFF?
To use them inside Inkscape you need to install the fonts locally. Just copy the fonts to your personal ~/.fonts/ path and run

fc-cache -f -v

After that procedure the fonts are visible inside a newly started Inkscape.

How to deploy SVG and WOFF on the Web?
Thankfully, using WOFF in SVG documents is similar to HTML documents. However, simply uploading an Inkscape SVG to the web as-is will not be enough to show WOFF fonts. While viewing the document locally is fine, Firefox and friends need to find those fonts independently of the locally installed ones. Right now you need to manually edit your Inkscape SVG to point to the online location of your fonts. To do so, open the SVG file in a text editor and place a CSS font-face reference right after the <svg> element, like:

<style type="text/css">
@font-face {
  font-family: "Aladin";
  src: url("fonts/Aladin-Regular.woff") format("woff");
}
</style>
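If you publish many SVGs, the manual edit can be automated. Here is a small sketch (my own helper, not part of Inkscape) that inserts such a style element as the first child of the root `<svg>` element using Python's standard `xml.etree.ElementTree`:

```python
import xml.etree.ElementTree as ET

SVG_NS = "http://www.w3.org/2000/svg"

def add_font_face(svg_path, family, woff_url, out_path):
    """Insert a <style> element with an @font-face rule right after the
    opening <svg> tag, pointing at the online location of the font."""
    ET.register_namespace("", SVG_NS)  # keep the default SVG namespace
    tree = ET.parse(svg_path)
    root = tree.getroot()
    css = (
        '@font-face {\n'
        '  font-family: "%s";\n'
        '  src: url("%s") format("woff");\n'
        '}'
    ) % (family, woff_url)
    style = ET.Element("{%s}style" % SVG_NS, {"type": "text/css"})
    style.text = css
    root.insert(0, style)  # first child of <svg>
    tree.write(out_path, encoding="UTF-8", xml_declaration=True)
```

For example, `add_font_face("drawing.svg", "Aladin", "fonts/Aladin-Regular.woff", "drawing-web.svg")` produces a copy ready for upload alongside a fonts/ directory.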

How to print an Inkscape SVG document containing WOFF?
Just convert to PDF from Inkscape’s file menu. Inkscape takes care of embedding the needed fonts and creates a portable PDF.

In case your preferred software is not yet WOFF-ready, try the woff2otf Python script for converting to the older TTF format.
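Under the hood, a WOFF file is just an sfnt (OpenType/TrueType) font whose tables have been zlib-compressed behind a fixed 44-byte header, which is why a conversion back is possible at all. As a minimal illustration (this only inspects the header; it is not a replacement for woff2otf), the header layout from the W3C WOFF 1.0 spec can be read like so:

```python
import struct

def parse_woff_header(data):
    """Read the first fields of the fixed 44-byte WOFF header:
    signature, wrapped sfnt flavor, total length, and table count."""
    signature, flavor, length, num_tables = struct.unpack(">4sIIH", data[:14])
    if signature != b"wOFF":
        raise ValueError("not a WOFF file")
    return {"flavor": flavor, "length": length, "num_tables": num_tables}
```

A full converter would go on to read the table directory and zlib-decompress each table back into a plain sfnt stream, which is essentially what woff2otf does.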

Hope this small post gets some of you on the font fun path.

Fedora Hubs Update!!!


The dream is real – we are cranking away, actively building this very cool, open source, socially-oriented collaboration platform for Fedora.

Meghan Richardson, the Fedora Engineering Team’s UX intern for this summer, and I have been cranking out UI mockups over the past month or so (Meghan way more than me at this point. :) )

Screenshot from 2015-06-23 09-24-44

We also had another brainstorming session. We ran the Fedora Hubs Hackfest, a prequel to the Fedora Release Engineering FAD a couple of weeks ago.

After a lot of issues with the video, full video of the hackfest is now finally available (the reason for the delay in my posting this :) ).

Let’s talk about what went down during this hackfest and where we are today with Fedora Hubs:

What is Fedora Hubs, Exactly?

(Skip directly to this part of the video)

We talked about two elevator pitches for explaining it:

  • It’s an ‘intranet’ page for the Fedora Project. You work on all these different projects in Fedora, and it’s a single place you can get information on all of them as a contributor.
  • It’s a social network for Fedora contributors. One place to go to keep up with everything across the project in ways that aren’t currently possible. We have a lot of places where teams do things differently, and it’s a way to provide a consistent contributor experience across projects / teams.

Who are we building it for?

(Skip directly to this part of the video)

  • New Fedora Contributors – A big goal of this project is to enable more contributors and make bootstrapping yourself as a Fedora contributor less of a daunting task.
  • Existing Fedora Contributors – They already have a workflow, and already know what they’re doing. We need to accommodate them and not break their workflows.

The main philosophy here is to provide a compelling user experience for new users that can potentially enhance the experience for existing contributors but at the very least will never disrupt the current workflow of those existing contributors. Let’s look at this through the example of IRC, which Meghan has mocked up in the form of a web client built into Fedora Hubs aimed at new contributor use:

If you’re an experienced contributor, you’ve probably got an IRC client, and you’re probably used to using IRC and wouldn’t want to use a web client. IRC, though, is a barrier to new contributors. It’s more technical than the types of chat systems they’re accustomed to. It becomes another hurdle on top of 20 or so other hurdles they have to clear in the process of joining as a contributor – completely unrelated to the actual work they want to do (whatever it is – design, marketing, docs, ambassadors, etc.)

New contributors should be able to interact with the hubs IRC client without having to install anything else or really learn a whole lot about IRC. Existing contributors can opt into using it if they want, or they can simply disable the functionality in the hubs web interface and continue using their IRC clients as they have been.

Hackfest Attendee Introductions

(Skip directly to this part of the video)

Next, Paul suggested we go around the room and introduce ourselves for anybody interested in the project (and watching the video.)

  • Máirín Duffy (mizmo) – Fedora Engineering UX designer working on the UX design for the hubs project
  • Meghan Richardson (mrichard) – Fedora Engineering UX intern from MSU also working on the UX design for the hubs project
  • Remy Decausemaker (decause) – Fedora Community lead, Fedora Council member
  • Luke Macken (lmacken) – Works on Fedora Infrastructure, release engineering, tools, QA
  • Adam Miller (maxamillion) – Works on Release engineering for Fedora, working on build tooling and automation for composes and other things
  • Ralph Bean (threebean) – Software engineer on Fedora Engineering team, will be spending a lot of time working on hubs in the next year
  • Stephen Gallagher (sgallagh) – Architect at Red Hat working on the Server platform, on Fedora’s Server working group, interested in helping onboard as many people as possible
  • Aurélien Bompard (abompard) – Software developer, lead developer of Hyperkitty
  • David Gay (oddshocks) – Works on Fedora infrastructure team and cloud teams, hoping to work on Fedora Hubs in the next year
  • Paul Frields (stickster) – Fedora Engineering team manager
  • Pierre-Yves Chibon (pingou) – Fedora Infrastructure team member working mostly on web development
  • Patrick Uiterwijk (puiterwijk) – Member of Fedora’s system administration team
  • Xavier Lamien (SmootherFrOgZ) – Fedora Infrastructure team member working on Fedora cloud SIG
  • Atanas Beloborodov (nask0) – A very new contributor to Fedora, he is a web developer based in Bulgaria.
  • (Matthew Miller and Langdon White joined us after the intros)

Game to Explore Fedora Hub’s Target Users

(Skip directly to this part of the video)

We played a game called ‘Pain Gain’ to explore both of the types of users we are targeting: new contributors and experienced Fedora contributors. We started talking about Experienced Contributors. I opened up a shared Inkscape window and made two columns: “pain” and “gain:”

  • For the pain column, we came up with things that are a pain for experienced contributors the way our systems / processes currently work.
  • For the gain column, we listed out ways that Fedora Hubs could provide benefits for experienced contributors.

Then we rinsed and repeated for new contributors:


While we discussed the pains/gains, we also came up with a lot of sidebar ideas that we documented in an “Idea Bucket” area in the file:


I was worried that this wouldn’t work well in a video chat context, but I screen-shared my Inkscape window and wrote down suggestions as they were brought up and I think we came out with a useful list of ideas. I was actually surprised at the number of pains and gains on the experienced contributor side: I had assumed new contributors would have way more pains and gains and that the experienced contributors wouldn’t have that many.

Prototype Demo

(Skip directly to this part of the video)

Screenshot from 2015-06-23 12-57-27

Ralph gave us a demo of his Fedora Hubs prototype – first he walked us through how it’s built, then gave the demo.


In the README there is full explanation of how the prototype works so I won’t reiterate everything there. Some points that came up during this part of the meeting:

  • Would we support hubs running without Javascript? The current prototype completely relies on JS. Without JS, it would be hard to do widgets like the IRC widget. Some of the JS frameworks come with built-in fail modes. There are some accessibility issues with ways of doing things with JS, but a good design can ensure that won’t happen. For the most part, we are going to try to support what a default Fedora workstation install could support.
  • vi hotkeys for Hubs would be awesome. :) Fedora Tagger does this!
  • The way the widgets work now, each widget has to define a data function that gets called with a session object, and it has to return JSON-ifiable python code. That gets stored in memcached and is how the wsgi app and backend communicate. If you can write a data function to return JSON and write a template the data gets plugged into – that’s mainly what’s needed. Take a look at the stats widget – it’s pretty simple!
  • All widgets also need a ‘should_invalidate()’ function that lets the system know what kinds of information apply to which widgets. Every fedmsg has to go through every widget to see if it invalidates a given widget’s data – we were worried that this would result in a terrible performance issue, but by the end of the hackfest we had that figured out.
  • Right now the templates are Jinja2, but Ralph thinks we should move to client-side (JavaScript) templates. The reason is that updated data gets pushed over websockets from the bus whenever changes come across; it’s simpler if the widget doesn’t have to request the templates and instead the templates are already there in the client.
  • Angular could be a nice client-side way of doing the templates, but Ralph had heard some rumors that AngularJS 2 was going to support only Chrome, and AngularJS 1.3 and 2 aren’t compatible. nask0 has a lot of experience with Angular though and does not think v2 is going to be Chrome-only.
  • TODO: Smoother transitions for when widgets pop into view as they load on an initial load.
  • Langdon wondered if there would be a way to consider individual widgets being able to function as stand-alones on desktops or mobile. The raw zeromq pipes could be hooked up to do this, but the current design uses EventSource which is web-specific and wouldn’t translate to say a desktop widget. Fedora Hubs will emit its own fedmsgs too, so you could build a desktop widget using that as well.
  • Cache invalidation issues were the main driver of the slowness in Fedora Packages, but now we have a cache that updates very quickly so we get constant-time access to delivering those pages.
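The widget contract described above – a `data()` function returning JSON-ifiable results plus a `should_invalidate()` gate for incoming fedmsgs – can be sketched as a hypothetical widget. The names, message topics, and fields here are illustrative only, not the actual Fedora Hubs API:

```python
# Hypothetical sketch of a hubs widget, following the contract described
# in the hackfest notes; field names and topics are made up for illustration.

def data(session, widget):
    """Gather JSON-serializable data for this widget. In hubs the result
    would be cached (e.g. in memcached) and plugged into the template."""
    # A real widget would query the database via `session` here.
    return {"username": widget["username"], "badge_count": 42}

def should_invalidate(message, widget):
    """Decide whether an incoming fedmsg invalidates this widget's cached
    data; every message is offered to every widget."""
    topic_matches = message.get("topic", "").endswith("badges.badge.award")
    return topic_matches and message.get("username") == widget["username"]
```

When `should_invalidate()` returns True, the cached output of `data()` would be recomputed, which is why keeping these checks cheap matters for performance.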

Mockup Review

Screenshot from 2015-06-23 13-48-56

Next, Meghan walked us through the latest (at the time :) we have more now!) mockups for Fedora Hubs, many based on suggestions and ideas from our May meetup (the 2nd hubs video chat.)

Creating / Editing Hubs

(Skip directly to this part of the video)

First, she walked us through her mockups for creating/editing hubs – how a hub admin would be able to modify / set up their hub. (Mockup (download from ‘Raw’ and view in Inkscape to see all screens.)) Things you can modify are the welcome message, colors, what widgets get displayed, the configuration for widgets (e.g. what IRC channel is associated with the hub?), and how to add widgets, among many other things.

Meghan also put together a blog post detailing these mockups.

One point that came up here – a difference is that when users edit their own hubs, they can’t associate an IRC channel with them, but rather a nick and a network, to enable their profile viewers to PM them.

We talked about hub admins vs FAS group admins. Should they be different or exactly the same? We could make a new role in FAS – “hub admin” – and store it there if it’s another one. Ralph recommended keeping it simple by having FAS group admins and hub admins one and the same. Some groups are more strict about group admins in FAS, some are not. Would there be scenarios where we’d want people to be able to admin the FAS group for a team but not be able to modify the hub layout (or vice-versa?) Maybe nesting the roles – if you’re a FAS admin you can be FAS admin + hub admin, if you’re a hub admin you can just admin the hub but not the FAS group.

Another thing we talked about is theming hubs. Luke mentioned that Reddit allows admins to have free reign in terms of modifying the CSS. Matthew mentioned having a set of backgrounds to choose from, like former Fedora wallpapers. David cautioned that we want to maintain some uniformity across the hubs to help enable new contributors – he gave the example of Facebook, where key navigational elements are not configurable. I suggested maybe they could only tweak certain CSS classes. Any customizations could be stored in the database.

Another point: members vs subscribers on a hub. Subscribers ‘subscribe’ to a hub; members ‘join’ a hub. Subscribing to a hub adds it to your bookmarks in the main horizontal nav bar, and enables certain notifications for that hub to appear in your feed. We talked about different vocabulary for ‘subscribe’ vs ‘join’ – instead of ‘subscribe’ we talked about ‘following’ or ‘starring’ (as in GitHub) vs joining. (Breaking news :) Since then Meghan has mocked up the different modes for these buttons and added the “star” concept! See below.)


We had a bit of an extended discussion about a lot of the different ways someone could be affiliated with a team/project that has a hub. Is following/subscribing too non-committal? Should we have a rank system so you could move your way up ranks, or is it a redundant gameification given the badge system we have in place? (Maybe we can assign ranks based on badges earned?) Part of the issue here is for others to identify the authority of the other people they’re interacting with, but another part is for helping people feel more a part of the community and feel like valued members. Subscribing is more like following a news feed, being a member is more being part of the team.

Joining Hubs

(Skip directly to this part of the video)

The next set of mockups Meghan went through showed us the workflow of how a user requests membership in a given hub and how the admin receives the membership request and handles it.

We also tangented^Wtalked about the welcome message on hubs and how to dismiss or minimize them. I think we concluded that we would let people collapse them and remove them, and if they remove them we’ll give them a notification that if they want to view them at any time they can click on “Community Rules and Guidelines.”

Similarly, if an admin dismisses the notification that a user has requested access to something, intending to tend to it later, it will appear in the admin’s personal stream as well for later retrieval.

We talked about how to make action items in a user’s notification feed appear differently than informational notifications; some kind of different visual design for them. One idea that came up was having tabs at the top to filter between types of notifications (action, informational, etc.) I explained how we were thinking about having a contextual filter system in the top right of each ‘card’ or notification to let users show or hide content too. Meghan is working on mockups for this currently.

David had the idea of having action items assigned to people appear as actions within their personal stream… since then I have mocked this up:


Personal Profiles

(Skip directly to this part of the video)

Next, Meghan walked us through the mockups she worked on for personal profiles and personal streams. Among the widgets she mocked up were a personal library, a display of badges earned, the hubs you’re a member of, IRC private messages, and the personal profile itself.

Meghan also talked about privacy with respect to profiles and we had a bit of a discussion about that. Maybe, for example, by default your library could be private, maybe your stream only shows your five most recent notifications and if someone is approved (using a handshake) as a follower of yours they can see the whole stream. Part of this is sort of a bike lock thing…. everything in a user’s profile is broadcast on fedmsg, but having it easily accessible in one place in a nice interface makes it a lot easier (like not having a lock on your bike.) One thing Langdon brought up is that we don’t want to give people a false sense of privacy. So we have to be careful about the messaging we do around it. We thought about whether or not we wanted to offer this intermediate ‘preview’ state for people’s profiles for those viewing them without the handshake. An alternative would be to let the user know who is following them when they first start following them and to maintain a roster of followers so it is clear who is reading their information.

Here’s the blog post Meghan wrote up on the joining hubs and personal profile mockups with each of the mockups and more details.

Bookmarks / main nav

(Skip directly to this part of the video)

The main horizontal navbar in Fedora Hubs is basically a bookmarks bar of the hubs you’re most interested in. Meghan walked us through the bookmarks mockups – she also covered these mockups in detail on her bookmarks blog post.


Yes. Yes, it is.

So you may be wondering when this is going to be available. Well, we’re working on it. We could always use more help….


Where’s stuff happening?

How does one help? Well, let me walk you through where things are taking place, so you can follow along more closely than my lazy blog posts if you so desire:

  • Chat with us: #fedora-hubs on irc.freenode.net is where most of the folks working on Fedora Hubs hang out, day in and day out. threebean’s hooked up a bot in there too that pushes notifications when folks check in code or mockup updates.
  • Mockups repo: Meghan and I have our mockups repo at https://github.com/fedoradesign/fedora-hubs, which we both have hooked up via Sparkleshare. (You are free to check it out without Sparkleshare and poke around as you like, of course.)
  • Code repo: The code is kept in a Pagure repo at https://pagure.io/fedora-hubs. You’ll want to check out the ‘develop’ branch and follow the README instructions to get all setup. (If I can do it, you can. :) )
  • Feature planning / Bug reporting: We are using Pagure’s issue tracker at https://pagure.io/fedora-hubs/issues to plan out features and track bugs. One way we are using this which I think is kind of interesting – it’s the first time I’ve used a ticketing system in exactly this way – is that for every widget in the mockups, we’ve opened up a ticket that serves as the design spec with mockups from our mockup repo embedded in the ticket.
  • Project tracking: This one is a bit experimental. But the Fedora infra and webdev guys set up http://taiga.fedoraproject.org – an open source kanban board – that Meghan and I started using to keep track of our todo list since we had been passing post-it notes back and forth and that gets a bit unwieldy. It’s just us designers using it so far, but you are more than welcome to join if you’d like. Log in with your Fedora staging password (you can reset it if it’s not working and it’ll only affect stg) and ping us in #fedora-hubs to have your account added to the kanban board.
  • Notification Inventory: This is an inventory that Meghan started of the notifications we’ve come up with for hubs in the mockups.
  • Nomenclature Diagram for Fedora Hubs: We’ve got a lot of neat little features and widgets and bits and bobs in Fedora Hubs, but it can be confusing talking about them without a consistent naming scheme. Meghan created this diagram to help sort out what things are called.

How can I help?

Well, I’m sure glad you asked. :) There’s a few ways you can easily dive in and help right now, from development to design to coming up with cool ideas for features / notifications:

  1. Come up with ideas for notifications you would find useful in Fedora Hubs! Add your ideas to our notification inventory and hit us up in #fedora-hubs to discuss!
  2. Look through our mockups and come up with ideas for new widgets and/or features in Fedora Hubs! The easiest way to do this is probably to peruse the mini specs we have in the pagure issue tracker for the project. But you’re free to look around our mockups repo as well! You can file your widget ideas in Pagure (start the issue name with “Idea:”) and we’ll review them and discuss!
  3. Help us develop the widgets we’ve planned! We’ve got little mini design specs for the widgets in the Fedora Hubs pagure issue tracker. If a widget ticket is unassigned (and most are!), it’s open and free for you to start hacking on! Ask Meghan and me any questions in IRC about the spec / design as needed. Take a look at the stats widget that Ralph reviewed in explaining the architecture during the hackfest, and watch Ralph’s demo and explanation of how Hubs is built to see how the widgets are put together.
  4. There are many other ways to help (ask around in #fedora-hubs to learn more,) but I think these have a pretty low barrier for starting up depending on your skillset and I think they are pretty clearly documented so you can be confident you’re working on tasks that need to get done and aren’t duplicating efforts!

    Hope to see you in #fedora-hubs! :)

June 30, 2015

Parsing Option ROM Firmware

A few weeks ago an issue was opened on fwupd by pippin. He was basically asking for a command to return all the hashes of the firmwares installed on his hardware, which I initially didn’t really see the point of doing. However, after doing a few hours research about all the malware that can hide in VBIOS for graphics cards, option ROM in network cards, and keyboard matrix EC processors I was suitably worried also. I figured fixing the issue was a good idea. Of course, malware could perhaps hide itself (i.e. hiding in an unused padding segment and masking itself out on read) but this at least raises the bar from a security audit point of view, and is somewhat easier than opening the case and attaching a SPI programmer to the chip itself.

Fast forward a few nights. We can now verify ATI, NVIDIA, INTEL and ColorHug firmware. I’ve not got any other hardware with ROM that I can read from userspace, so this is where I need your help. I need willing volunteers to compile fwupd from git master (or rebuild my srpm) and then run:

cd fwupd/src
find /sys/devices -name rom -exec sudo ./fwupdmgr dump-rom {} \;

All being well you should see something like this:

/sys/devices/pci0000:00/0000:00:01.0/0000:01:00.0/rom -> f21e1d2c969dedbefcf5acfdab4fa0c5ff111a57 [Version:]

If you see something just like that, you’re not super helpful to me. If you see Error reading from file: Input/output error then you’re also not so helpful, as the kernel module for your hardware is exporting a rom file but not hooking up the read vfuncs. If you get an error like Failed to detect firmware header [8950] or Firmware version extractor not known then you’ve just become interesting. If that’s you, can you send the rom file to richard_at_hughsie.com as an attachment along with any details you know about the hardware. Thanks!
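The 40-character hash in that output has the shape of a SHA-1 digest. As a rough sketch of what the hashing side amounts to (an assumption on my part – fwupd’s exact input to the hash may differ, and it also extracts version strings), you could checksum a ROM image yourself:

```python
import hashlib

def rom_checksum(path):
    """Return the SHA-1 hex digest of a ROM image file, comparable in
    shape to the hash fwupdmgr prints for each device."""
    # Note: sysfs `rom` files usually need enabling first
    # (echo 1 > .../rom) and root access before they can be read.
    with open(path, "rb") as f:
        return hashlib.sha1(f.read()).hexdigest()
```

Recording such hashes over time at least lets you notice if a device’s option ROM changes unexpectedly, which is the audit value discussed above.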


Interview with Livio Fania


Could you tell us something about yourself?

I’m Livio Fania, an Italian illustrator living in France.

Do you paint professionally, as a hobby artist, or both?

I paint professionally.

What genre(s) do you work in?

I make illustrations for press, posters and children’s books. My universe is made of geometric shapes, stylized characters and flashy colors.

Whose work inspires you most — who are your role models as an artist?

I like the work of João Fazenda, Riccardo Guasco and Nick Iluzada among many others.

What makes you choose digital over traditional painting?

I have not made a definitive choice. Even if I work mainly digitally, I still have a lot of fun using traditional tools such as colored pencils, brush pens and watercolors. Besides, in 90% of cases I draw by hand, then scan, and only at the end of the process do I grab my graphics tablet stylus.

I do not think that working digitally means being faster. On the contrary, I can work more quickly by hand, especially in the first sketching phases. What digital art allows is CONTROL over the whole process. If you keep your layer stack well organized, you can always edit your art without losing the original version, and this is very useful when your client asks for changes. If you work with traditional tools and you drop your ink in the wrong place, you can’t press Ctrl+Z.


How did you find out about Krita?

I discovered Krita through a video conference posted on David Revoy’s blog. Even if I don’t particularly like his universe, I think he is probably the most influential artist using FLOSS tools, and I’m very grateful to him for sharing his knowledge with the community. Previously, I used to work with MyPaint, mainly for its minimalist interface which was perfect for the small laptop I had. Then I discovered that Krita was more versatile and better developed, so I took some time to learn it and now I could not do without it.

What was your first impression?

At first I thought it was not the right tool for me. Actually, most digital artists use Krita for its painting features, like blending modes and textured brushes, which make it possible to obtain realistic light effects. Personally, I think that realism can be very boring, and that is why I paint in a stylized way with uniform tints. Besides, I like to confine my range of possibilities to a limited set of elements: palettes of 5-8 colors and 2-3 brushes. So at the beginning I felt like Krita had too many options for me. But little by little I adapted the GUI to my workflow. Now I really think everybody can find their own way to use Krita, no matter what painting style they have.

What do you love about Krita?

Two elements I really love:
1) The favourite presets docker, which pops up with a right click. It contains everything you need to keep painting, and it is a pleasure to control everything at a glance.
2) The Composition tab, which allows you to completely change the color palette or experiment with new effects without losing the original version of a drawing.

What do you think needs improvement in Krita? Is there anything that really annoys you?

I think that selections are not intuitive at all and could be improved. When dealing with complex selections, it is time-consuming to check the selection mode in the tool options tab (replace, intersect, subtract) and proceed accordingly, especially considering that by default the selection mode is whatever you used the last time (which in the meantime you have probably forgotten). I think it would be much better if every selection tool started in “normal” mode by default, and then one could switch to a different mode by pressing Ctrl/Shift.

What sets Krita apart from the other tools that you use?

Krita is by far the most complete digital painting tool developed on Linux. It is widely customizable (interface, workspaces, shortcuts, tabs) and it offers a very powerful brush engine, even compared to proprietary applications. Also, a very important aspect is that the Krita Foundation has a solid organization and develops Krita continuously thanks to donations, Kickstarter campaigns, etcetera. This is particularly important in the open source community, where well-designed projects sometimes disappear because they are not supported properly.

If you had to pick one favourite of all your work done in Krita so far, what would it be, and why?

The musicians in the field.

What techniques and brushes did you use in it?

As I said, I like to have a limited set of presets. In this illustration I mostly used the “pastel_texture_thin” brush, which is part of the default set of brushes in Krita. I love its texture and the fact that it is pressure-sensitive. Also, I applied a global bitmap texture on an overlay layer.

Where can people see more of your work?


Anything else you’d like to share?

Yes, I would like to add that I also release all my illustrations under a Creative Commons license, so you can Download my portfolio, copy it and use it for non-commercial purposes.

June 29, 2015

Approaching string freeze and beta release cycle

It’s time to start putting the next release together, folks!

I would like to announce a freeze of all translated strings on Sat 11th July 2015, and then begin working on the first beta release properly. New features raised as a Github Pull Request by the end of Sat 4th July stand a chance of getting in, but midnight on that day is the deadline for new code submissions if they touch text the user will see on screen.

The next release will be numbered 1.2.0; we are currently in the alpha phase of development for it, but that phase will end shortly.

Fixing the remaining alpha-cycle bugs is going well. We currently only have four bugs left in the cycle milestone, and that number will diminish further shortly. The main goal right now is to merge and test any pending small features that people want to get into 1.2.0, and to thoroughly spellcheck and review the English-language source strings so that things will be better for our translators.

Expect announcements about the translation effort, and dates for beta releases shortly.

Just Say It!

While I love typing on the small on-screen keyboard on my phone, it is much easier to just talk. When we did the HUD we added speech recognition there, and it processed the audio on the device, giving the great experience of controlling your phone with your voice. That worked well with the limited command set exported by the application, but doing generic voice recognition today requires more processing power than a phone can reasonably provide. Which made me pretty excited to find out about HP's IDOL on Demand service.

I made a small application for Ubuntu Phone that records the audio you speak at it, and sends it up to the HP IDOL on Demand service. The HP service then does the speech recognition on it and returns the text back to us. Once I have the text (with help from Ken VanDine) I set it up to use Content Hub to export the text to any other application that can receive it. This way you can use speech recognition to write your Telegram notes, without Telegram having to know anything about speech at all.

The application is called Just Say It! and is in the Ubuntu App Store right now. It isn't beautiful, but definitely shows what can be done with this type of technology today. I hope to make it prettier and add additional features in the future. If you'd like to see how I did it you can look at the source.

As an aside: I can't get any of the non-English languages to work. This could be because I'm not a native speaker of those languages. If people could try them I'd love to know if they're useful.

Chollas in bloom, and other early summer treats

[Bee in cholla blossom] We have three or four cholla cacti on our property. Impressive, pretty cacti, but we were disappointed last year that they never bloomed. They looked like they were forming buds ... and then one day the buds were gone. We thought maybe some animal ate them before the flowers had a chance to open.

Not this year! All of our chollas have gone crazy, with the early rain followed by hot weather. Last week we thought they were spectacular, but they just kept getting better and better. In the heat of the day, it's a bee party: they're aswarm with at least three species of bees and wasps (I don't know enough about bees to identify them, but I can tell they're different from one another) plus some tiny gnat-like insects.

I wrote a few weeks ago about the piñons bursting with cones. What I didn't realize was that these little red-brown cones are all the male, pollen-bearing cones. The ones that bear the seeds, apparently, are the larger bright green cones, and we don't have many of those. But maybe they're just small now, and there will be more later. Keeping fingers crossed. The tall spikes of new growth are called "candles" and there are lots of those, so I guess the trees are happy.

[Desert willow in bloom] Other plants besides cacti are blooming. Last fall we planted a desert willow from a local native plant nursery. The desert willow isn't actually native to White Rock -- we're around the upper end of its elevation range -- but we missed the Mojave desert willow we'd planted back in San Jose, and wanted to try one of the Southwest varieties here. Apparently they're all the same species, Chilopsis linearis.

But we didn't expect the flowers to be so showy! A couple of blossoms just opened today for the first time, and they're as beautiful as any of the cultivated flowers in the garden. I think that means our willow is a 'Rio Salado' type.

Not all the growing plants are good. We've been keeping ourselves busy pulling up tumbleweed (Russian thistle) and stickseed while they're young, trying to prevent them from seeding. But more on that in a separate post.

As I write this, a bluebird is performing short aerobatic flights outside the window. Curiously, it's usually the female doing the showy flying; there's a male out there too, balancing himself on a piñon candle, but he doesn't seem to feel the need to show off. Is the female catching flies, showing off for the male, or just enjoying herself? I don't know, but I'm happy to have bluebirds around. Still no definite sign of whether anyone's nesting in our bluebird box. We have ash-throated flycatchers paired up nearby too, and I'm told they use bluebird boxes more than the bluebirds do. They're both beautiful birds, and welcome here.

Image gallery: Chollas in bloom (and other early summer flowers).

June 28, 2015


A couple of FreeCAD architecture/BIM related questions that I get often: Is FreeCAD ready enough to do serious BIM work? This is a very complex question, and the answer could be yes or no, depending on what's important to you. It of course also depends on what BIM means for you, because clearly enough, there isn't a universal...

June 26, 2015

A Blended Panorama with PhotoFlow

Creating panoramas with Hugin and PhotoFlow

The goal of this tutorial is to show how to create a sort-of-HDR panoramic image using only Free and Open Source tools. To explain my workflow I will use the image below as an example.

This panorama was obtained from the combination of six views, each consisting of three bracketed shots at -1EV, 0EV and +1EV exposure. The three exposures are stitched together with the Hugin suite, and then exposure-blended with enfuse. The PhotoFlow RAW editor is used to prepare the initial images and to finalize the processing of the assembled panorama. The final result of the post-processing is below:

Final result Final result of the panorama editing (click to compare to simple +1EV exposure)

In this case I have used the brightest image for the foreground, the darkest one for the sky and clouds, and an exposure-fused one for a seamless transition between the two.

The rest of the post will show how to get there…

Before we continue, let me advise you that I’m not a pro, and that the tips and “recommendations” that I’ll be giving in this post are mostly derived from trial-and-error and common sense. Feel free to correct/add/suggest anything… we are all here to learn!

Taking the shots

Shooting a panorama requires a bit of preparation and planning to make sure that one can get the best out of Hugin when stitching the shots together. Here is my personal “checklist”:

  • Manual Focus - set the camera to manual focus, so that the focus plane is the same for all shots
  • Overlap Shots - make sure that each frame has sufficient overlap with the previous one (something between 1/2 and 1/3 of the total area), so that hugin can find enough control points to align the images and determine the lens correction parameters
  • Follow A Straight Line - when taking the shots, try to follow as much as possible a straight line (keeping for example the horizon at the same height in your viewfinder); if you have a tripod, use it!
  • Frame Appropriately - to maximize the angle of view, frame vertically for a horizontal panorama (and vice-versa for a vertical one)
  • Leave Some Room - frame the shots a bit wider than needed, to avoid bad surprises when cropping the stitched panorama
  • Fixed Exposure - take all shots with a fixed exposure (manual or locked) to avoid luminance variations that might not be fully compensated by hugin
  • Bracket if Needed - if you shoot during a sunny day, the brightness might vary significantly across the whole panorama; in this case, take three or more bracketed exposures for each view (we will see later how to blend them in the post-processing)
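The bracketing arithmetic behind the last point is simple: at fixed aperture and ISO, each +1EV step doubles the shutter time. A small Python sketch (the base exposure time is illustrative, not from the article):

```python
def bracket_times(base_time, evs=(-1, 0, 1)):
    """Shutter times (in seconds) for each EV offset around a base exposure:
    each +1 EV doubles the exposure time at fixed aperture and ISO."""
    return [base_time * (2 ** ev) for ev in evs]

# A -1/0/+1 EV bracket around a hypothetical 1/100 s base exposure.
print(bracket_times(0.01))
```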

Processing the RAW files

If you plan to create the panorama starting from the in-camera Jpeg images, you can safely skip this section. On the other hand, if you are shooting RAW you will need to process and prepare all the input images for Hugin. In this case it is important to make sure that the RAW processing parameters are exactly the same for all the shots. The best is to adjust the parameters on one reference image, and then batch-process the rest of the images using those settings.

Using PhotoFlow

Loading and processing a RAW file is rather easy:

  1. Click the “Open” button and choose the appropriate RAW file from your hard disk; the image preview area will show at this point a grey and rather dark image

  2. Add a “RAW developer” layer; a configuration dialog will show up which lets you access and modify all the typical RAW processing parameters (white balance, exposure, color conversion, etc… see screenshots below).

More details on the RAW processing in PhotoFlow can be found in this tutorial.

Once the result looks good, the RAW processing parameters need to be saved into a preset. This can be done in a couple of simple steps:

  1. Select the “RAW developer” layer and click on the “Save” button below the layers list widget (at the bottom-right of the photoflow’s window)

  2. A file chooser dialog will pop up, where one has to choose an appropriate file name and location for the preset and then click “Save”;
    the preset file name must have a “.pfp” extension

The saved preset then needs to be applied to all the RAW files in the set. Under Linux, PhotoFlow comes with a handy script that automates the process. The script is called pfconv and can be found here. It is a wrapper around the pfbatch and exiftool commands, and is used to process and convert a batch of files to TIFF format. Save the script in one of the folders included in your PATH environment variable (for example /usr/local/bin) and make it executable:

sudo chmod a+x /usr/local/bin/pfconv

Processing all RAW files of a given folder is quite easy. Assuming that the RAW processing preset is stored in the same folder under the name raw_params.pfp, run these commands in your preferred terminal application:

cd panorama_dir
pfconv -p raw_params.pfp *.NEF

Of course, you have to change panorama_dir to your actual folder and the .NEF extension to that of your RAW files.

Now go for a cup of coffee, and be patient… a panorama with three or five bracketed shots for each view can easily comprise more than 50 files, and the processing can take half an hour or more. Once the processing has completed, there will be one TIFF file for each RAW image, and the fun with Hugin can start!

Assembling the shots

Hugin is a powerful and free software suite for stitching multiple shots into a seamless panorama, and more. Under Linux, Hugin can be usually installed through the package manager of your distribution. In the case of Ubuntu-based distros it can be usually installed with:

sudo apt-get install hugin

If you are running Hugin for the first time, I suggest switching the interface type to Advanced in order to have full control over the available parameters.

The first steps have to be done in the Photos tab:

  1. Click on Add images and load all the tiff files included in your panorama. Hugin should automatically determine the lens focal length and the exposure values from the EXIF data embedded in the tiff files.

  2. Click on Create control points to let hugin determine the anchor points that will be used to align the images and to determine the lens correction parameters so that all shots overlap perfectly. If the scene contains a large amount of clouds that have likely moved during the shooting, you can try setting the feature matching algorithm to cpfind+celeste to automatically exclude non-reliable control points in the clouds.

  3. Set the geometric parameters to Positions and Barrel Distortion and hit the Calculate button.

  4. Set the photometric parameters to High dynamic range, fixed exposure (since we are going to stitch bracketed shots that have been taken with fixed exposures), and hit the Calculate button again.

At this point we can have a first look at the assembled panorama. Hugin provides an OpenGL-based previewer that can be opened by clicking on the GL icon in the top toolbar (marked with the arrow in the above screenshot). This will open a window like this:

If the shots have been taken handheld and are not perfectly aligned, the panorama will probably look a bit “wavy” like in my example. This can be easily fixed by clicking on the Straighten button (at the top of the Move/Drag tab). Next, the image can be centered in the preview area with the Center and Fit buttons.

If the horizon is still not straight, you can further correct it by dragging the center of the image up or down:

At this point, one can switch to the Projection tab and play with the different options. I usually find the Cylindrical projection better than the Equirectangular that is proposed by default (the vertical dimension is less “compressed”). For architectural panoramas that are not too wide, the Rectilinear projection can be a good option since vertical lines are kept straight.

If the projection type is changed, one has to click once more on the Center and Fit buttons.

Finally, you can switch to the Crop tab and click on the HDR Autocrop button to determine the limits of the area containing only valid pixels.

We are now done with the preview window; it can be closed and we can go back to the main window, in the Stitcher tab. Here we have to set the options to produce the output images the way we want. The idea is to blend each bracketed exposure into a separate panorama, and then use enfuse to create the final exposure-blended version. The intermediate panoramas, which will be saved along with the enfuse output, are already aligned with respect to each other and can be combined using different type of masks (luminosity, gradients, freehand, etc…).

The Stitcher tab has to be configured as in the image below, selecting Exposure fused from any arrangement and Blended layers of similar exposure, without exposure correction. I usually set the output format to TIFF to avoid compression artifacts.

The final act starts by clicking on the Stitch! button. The input images will be distorted, corrected for the lens vignetting and blended into seamless panoramas. The whole process is likely to take quite a while, so it is probably a good opportunity to take a break…

At the end of the processing, a few new images should appear in the output directory: one with a “blended_fused.tif” suffix containing the output of the final enfuse step, and a few with an “_exposure????.tif” suffix that contain the intermediate panoramas for each exposure value.

Blending the exposures

Very often, photo editing is all about getting what your eyes have seen out of what your camera has captured.

The image that will be edited through this tutorial is no exception: the human visual system can “compensate” for large luminosity variations and can “record” scenes with a wider dynamic range than your camera sensor. In the following I will attempt to restore such large dynamics by combining under- and over-exposed shots together, in a way that does not produce unpleasant halos or artifacts. Nevertheless, I have intentionally pushed the edit a bit “over the top” in order to better show how far one can go with such a technique.

This second part introduces a certain number of quite general editing ideas, mixed with details specific to their realization in PhotoFlow. Most of what is described here can be reproduced in GIMP with little extra effort, but without the ease of non-destructive editing.

The steps that I followed to go from one to the other can be outlined more or less as follows:

  1. take the foreground from the +1EV version and the clouds from the -1EV version; use the exposure-blended Hugin output to improve the transition between the two exposures

  2. apply an S-shaped tonal curve to increase the overall brightness and add contrast.

  3. apply a combination of the a and b channels of the CIE-Lab colorspace in overlay blend mode to give more “pop” to the green and yellow regions in the foreground

The image below shows side-by-side three of the output images produced with Hugin at the end of the first part. The left part contains the brightest panorama, obtained by blending the shots taken at +1EV. The right part contains the darkest version, obtained from the shots taken at -1EV. Finally, the central part shows the result of running the enfuse program to combine the -1EV, 0EV and +1EV panoramas.

Comparison between the +1EV exposure (left), the enfuse output (center) and the -1EV exposure (right)

Exposure blending in general

In scenes that exhibit strong brightness variations, one often needs to combine different exposures in order to compress the dynamic range so that the overall contrast can be further tweaked without the risk of losing details in the shadows or highlights.

In this case, the name of the game is “seamless blending”, i.e. combining the exposures in a way that looks natural, without visible transitions or halos. In our specific case, the easiest thing would be to simply combine the +1EV and -1EV images through some smooth transition, like in the example below.

Simple blending of the +1EV and -1EV exposures

The result is not too bad, however it is very difficult to avoid some brightening of the bottom part of the clouds (or alternatively some darkening of the hills), something that will most likely look artificial even if the effect is subtle (our brain will recognize that something is wrong, even if one cannot clearly explain the reason…). We need something to “bridge” the two images, so that the transition looks more natural.

At this point it is good to recall that the last step performed by Hugin was to call the enfuse program to blend the three bracketed exposures. The enfuse output is somehow intermediate between the -1EV and +1EV versions, however a side-by-side comparison with the 0EV image reveals the subtle and sophisticated work done by the program: the foreground hill is brighter and the clouds are darker than in the 0EV version. And even more importantly, this job is done without triggering any alarm in your brain! Hence, the enfuse output is a perfect candidate to improve the transition between the hill and the sky.

Final result Enfuse output (click to see 0EV version)

Exposure blending in PhotoFlow

It is time to put all the pieces together. First of all, we should open PhotoFlow and load the +1EV image. Next we need to add the enfuse output on top of it: for that you first need to add a new layer (1) and choose the Open image tool from the dialog that will open up (2) (see below).

Inserting an image from disk as a layer

After clicking the “OK” button, a new layer will be added and the corresponding configuration dialog will be shown. There you can choose the name of the file to be added; in this case, choose the one ending with “_blended_fused.tif” among those created by Hugin:

“Open image” tool dialog

Layer masks: theory (a bit) and practice (a lot)

For the moment, the new layer completely replaces the background image. This is not the desired result: instead, we want to keep the hills from the background layer and only take the clouds from the “_blended_fused.tif” version. In other words, we need a layer mask.

To access the mask associated to the “enfuse” layer, double-click on the small gradient icon next to the name of the layer itself. This will open a new tab with an initially empty stack, where we can start adding layers to generate the desired mask.

How to access the grayscale mask associated to a layer

In PhotoFlow, masks are edited the same way as the rest of the image: through a stack of layers that can be associated to most of the available tools. In this specific case, we are going to use a combination of gradients and curves to create a smooth transition that follows the shape of the edge between the hills and the clouds. The technique is explained in detail in this screencast.

To avoid the boring and lengthy procedure of creating all the necessary layers, you can download this preset file and load it as shown below:

The mask is initially a simple vertical linear gradient. At the bottom (where the mask is black) the associated layer is completely transparent and therefore hidden, while at the top (where the mask is white) the layer is completely opaque and therefore replaces anything below it. Everywhere in between, the layer has a degree of transparency equal to the shade of gray in the mask.
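The mechanics just described can be modeled in a few lines of Python. This is only a sketch of the general idea (not PhotoFlow code): each mask value acts as the per-pixel opacity of the layer it is attached to.

```python
def blend_with_mask(bottom, top, mask):
    """Normal blend through a grayscale mask: a mask value of 0.0 hides the
    top layer completely, 1.0 shows it fully, and anything in between mixes
    the two proportionally."""
    return [m * t + (1.0 - m) * b for b, t, m in zip(bottom, top, mask)]

# A 5-pixel vertical strip: the mask is a linear gradient, white at the top.
mask = [1.0, 0.75, 0.5, 0.25, 0.0]
print(blend_with_mask([0.2] * 5, [0.8] * 5, mask))
```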

In order to show the mask, activate the “show active layer” radio button below the preview area, and then select the layer that has to be visualized. In the example above, I am showing the output of the topmost layer in the mask, the one called “transition”. Double-clicking on the name of the “transition” layer opens the corresponding configuration dialog, where the parameters of the layer (a curves adjustment in this case) can be modified. The curve is initially a simple diagonal: output values exactly match input ones.

If the rightmost point in the curve is moved to the left, and the leftmost to the right, it is possible to modify the vertical gradient and reduce the width of the transition between pure black and pure white, as shown below:
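Numerically, moving the two endpoints toward each other turns the identity curve into a clipped ramp. A Python sketch of the idea (the `black` and `white` thresholds are illustrative, not the values used in the tutorial):

```python
def transition_curve(x, black=0.4, white=0.6):
    """Curves adjustment that narrows a linear gradient's transition:
    inputs below `black` map to 0, inputs above `white` map to 1,
    with a linear ramp in between."""
    if x <= black:
        return 0.0
    if x >= white:
        return 1.0
    return (x - black) / (white - black)
```

Applied to a 0-to-1 vertical gradient, this produces a mask that is pure black over the bottom 40% of the image and pure white over the top 40%, with a narrow transition in the middle.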

We are getting closer to our goal of revealing the hills from the background layer, by making the corresponding portion of the mask purely black. However, the transition we have obtained so far is straight, while the contour of the hills has a quite complex curvy shape… this is where the second curves adjustment, associated to the “modulation” layer, comes into play.

As one can see from the screenshot above, between the bottom gradient and the “transition” curve there is a group of three layers: a horizontal gradient, a modulation curve and an invert operation. Moreover, the group itself is combined with the bottom vertical gradient in grain merge blending mode.

Double-clicking on the “modulation” layer reveals a tone curve which is initially flat: output values are always 50% independently of the input. Since the output of this “modulation” curve is combined with the bottom gradient in grain merge mode, nothing happens for the moment. However, something interesting happens when a new point is added and dragged in the curve: the shape of the mask matches exactly the curve, like in the example below.
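Grain merge itself is a very simple operation; this Python sketch (not PhotoFlow's actual implementation) shows why a flat 50% modulation curve leaves the gradient untouched:

```python
def grain_merge(base, blend):
    """Grain-merge blend of two values in [0, 1]: out = base + blend - 0.5,
    clamped. A 50% gray blend layer is therefore neutral; values above or
    below 50% shift the base up or down by the same amount."""
    return min(1.0, max(0.0, base + blend - 0.5))
```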

The sky/hills transition

The technique introduced above is used here to create a precise and smooth transition between the sky and the hills. As you can see, with a sufficiently large number of points in the modulation curve one can precisely follow the shape of the hills:

The result of the blending looks like this (click the image to see the initial +1EV version):

Final result Enfuse output blended with the +1EV image (click to see the initial +1EV version)

The sky already looks much denser and more saturated in this version, and the clouds have gained in volume and tonal variations. However, the -1EV image looks even better, therefore we are going to take the sky and clouds from it.

To include the -1EV image we are going to follow the same procedure we already used for the enfuse output:

  1. add a new layer of type “Open image” and load the -1EV Hugin output (I’ve named this new layer “sky”)

  2. open the mask of the newly created layer and add a transition that reveals only the upper portion of the image

Fortunately we are not obliged to recreate the mask from scratch. PhotoFlow includes a feature called layer cloning, which lets you dynamically copy the content of one layer into another one. Dynamically in the sense that the pixel data gets copied on the fly, so that the destination always reflects the most recent state of the source layer.

After activating the mask of the “sky” layer, add a new layer inside it and choose the “clone layer” tool (see screenshot below).

Cloning a layer from one mask to another

In the tool configuration dialog that will pop-up, one has to choose the desired source layer among those proposed in the list under the label “Layer name”. The generic naming scheme of the layers in the list is “[root group name]/root layer name/OMap/[mask group name]/[mask layer name]”, where the items inside square brackets are optional.

Choice of the clone source layer

In this specific case, I want to apply a smoother transition curve to the same base gradient already used in the mask of the “enfuse” layer. For that we need to choose “enfuse/OMap/gradient modulation (blended)” in order to clone the output of the “gradient modulation” group after the grain merge blend, and then add a new curves tool above the cloned layer:

The final transition mask between the hills and the sky

The result of all the efforts done up to now is shown below; it can be compared with the initial starting point by clicking on the image itself:

Final result Edited image after blending the upper portion of the -1EV version through a layer mask. Click to see the initial +1EV image.

Contrast and saturation

We are not quite done yet, as the image is still a bit too dark and flat; however, this version will “tolerate” a contrast and luminance boost much better than a single exposure. In this case I’ve added a curves adjustment at the top of the layer stack, and I’ve drawn an S-shaped RGB tone curve as shown below:

The effect of this tone curve is to increase the overall brightness of the image (the middle point is moved to the left) and to compress the shadows and highlights without modifying the black and white points (i.e. the extremes of the curve). This curve definitely gives “pop” to the image (click to see the version before the tone adjustment):

Final result Result of the S-shaped tonal adjustment (click the image to see the version before the adjustment).
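Conceptually, a tone curve is just a function interpolated through a few control points and applied to every pixel. Here is a minimal piecewise-linear Python sketch; the control-point values are illustrative, not the ones used in this edit:

```python
def apply_curve(x, points):
    """Evaluate a piecewise-linear tone curve defined by sorted
    (input, output) control points; values outside the range pass through."""
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        if x0 <= x <= x1:
            return y0 + (y1 - y0) * (x - x0) / (x1 - x0)
    return x

# An S-shape: black and white points pinned, midtones lifted,
# shadows and highlights compressed (hypothetical values).
s_curve = [(0.0, 0.0), (0.25, 0.2), (0.5, 0.6), (0.75, 0.85), (1.0, 1.0)]
print([round(apply_curve(v, s_curve), 3) for v in (0.25, 0.5, 0.75)])
```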

However, this comes at the expense of an overall increase in the color saturation, which is a typical side effect of RGB curves. While this saturation boost looks quite nice in the hills, the effect is rather disastrous in the sky. The blue has turned electric, and is far from what a nice, saturated blue sky should look like!

However, there is a simple fix to this problem: change the blend mode of the curves layer from Normal to Luminosity. The tone curve in this case only modifies the luminosity of the image, but preserves as much as possible the original colors. The difference between normal and luminosity blending is shown below (click to see the Normal blending). As one can see, the Luminosity blend tends to produce a duller image, therefore we will need to fix the overall saturation in the next step.

Luminosity blend S-shaped tonal adjustment with Luminosity blend mode (click the image to see the version with Normal blend mode).
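One simple way to model a Luminosity blend is to rescale the original pixel so that its luminance matches the tone-curved result, keeping the channel ratios (and hence the color) intact. The sketch below uses Rec. 709 luminance weights and is only an approximation of what PhotoFlow actually does:

```python
def luma(rgb):
    """Rec. 709 relative luminance of an RGB triple in [0, 1]."""
    r, g, b = rgb
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def luminosity_blend(original, curved):
    """Take the brightness from `curved` but the color from `original` by
    scaling the original pixel to the curved pixel's luminance."""
    y_orig, y_curved = luma(original), luma(curved)
    if y_orig == 0.0:
        return curved
    scale = y_curved / y_orig
    return tuple(min(1.0, c * scale) for c in original)
```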

To adjust the overall saturation of the image, let’s now add a Hue/Saturation layer above the tone curve and set the saturation value to +50. The result is shown below (click to see the Luminosity blend output).

Saturation boost Saturation set to +50 (click the image to see the Luminosity blend output).

This definitely looks better on the hills, however the sky is again “too blue”. The solution is to decrease the saturation of the top part through an opacity mask. In this case I have followed the same steps as for the mask of the sky blend, but I’ve changed the transition curve to the one shown here:

Saturation mask

In the bottom part the mask is perfectly white, and therefore the full +50 saturation boost is applied. At the top the mask is instead only about 30%, and therefore the saturation is increased by only about +15. This gives a better overall color balance to the whole image:

Saturation boost after mask Saturation set to +50 through a transition mask (click the image to see the Luminosity blend output).
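The arithmetic behind the masked boost is straightforward: the mask value scales the layer's opacity, so the effective saturation change at each pixel is the nominal boost times the local mask value. A trivial Python sketch:

```python
def effective_saturation_boost(boost, mask_value):
    """Saturation adjustment seen through an opacity mask: the nominal
    boost is scaled by the local mask value (0.0 = black, 1.0 = white)."""
    return boost * mask_value

# +50 through a white mask vs. through a ~30% gray mask.
print(effective_saturation_boost(50, 1.0), effective_saturation_boost(50, 0.3))
```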

Lab blending

The image is already quite ok, but I would still like to add some more tonal variations in the hills. This could be done with lots of different techniques, but in this case I will use one that is very simple and straightforward, and that does not require any complex curve or mask since it uses the image data itself. The basic idea is to take the a and/or b channels of the Lab colorspace, and combine them with the image itself in Overlay blend mode. This will introduce tonal variations depending on the color of the pixels (since the a and b channels only encode the color information). Here I will assume you are quite familiar with the Lab colorspace. Otherwise, here is the link to the Wikipedia page that should give you enough information to follow the rest of the tutorial.

Looking at the image, one can already guess that most of the areas in the hills have a yellow component, and will therefore be positive in the b channel, while the sky and clouds are neutral or strongly blue, and therefore have b values that are negative or close to zero. The grass is obviously green and therefore negative in the a channel, while the vineyards are brownish and therefore most likely have positive a values. In PhotoFlow the a and b values are re-mapped to a range between 0 and 100%, so that for example a=0 corresponds to 50%. You will see that this is very convenient for channel blending.
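To make the remapping concrete, here is a sketch of the convention just described, assuming the usual nominal a/b range of roughly -128 to +127 (PhotoFlow's exact scaling may differ):

```python
def ab_to_gray(ab, lo=-128.0, hi=127.0):
    """Remap a Lab a/b value to [0, 1] so that neutral (0) lands near
    50% gray: negative (green/blue) values become dark, positive
    (red/yellow) values become light."""
    return (ab - lo) / (hi - lo)
```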

My goal is to lighten the green and the yellow tones, to create a better contrast around the vineyards and add some “volume” to the grass and trees. Let’s first of all inspect the a channel: for that, we need to add a group layer on top of everything (I’ve called it “ab overlay”) and then add a clone layer inside this group. The source of the clone layer is set to the a channel of the “background” layer, as shown in this screenshot:

a channel clone Cloning of the Lab “a” channel of the background layer

A copy of the a channel is shown below, with the contrast enhanced to better see the tonal variations (click to see the original versions):

Saturation boost after mask The Lab a channel (boosted contrast)

As we have already seen, in the a channel the grass is negative and therefore looks dark in the image above. If we want to lighten the grass we therefore need to invert it, to obtain this:

Saturation boost after mask The inverted Lab a channel (boosted contrast)

Let’s now consider the b channel: as surprising as it might seem, the grass is actually more yellow than green, or at least the b channel values in the grass are higher than the inverted a values. In addition, the trees at the top of the hill stick nicely out of the clouds, much more than in the a channel. All in all, a combination of the two Lab channels seems to be the best for what we want to achieve.

With one exception: the blue sky is very dark in the b channel, while the goal is to leave the sky almost unchanged. The solution is to blend the b channel into the a channel in Lighten mode, so that only the b pixels that are lighter than the corresponding a ones end up in the blended image. The result is shown below (click on the image to see the b channel).

b channel lighten blend b channel blended in Lighten mode (boosted contrast, click the image to see the b channel itself).
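The Lighten blend is just a per-pixel maximum; this Python sketch shows why the dark sky in the b channel cannot darken the inverted-a base:

```python
def lighten_blend(base, blend):
    """Lighten blend: keep whichever layer is brighter at each pixel."""
    return [max(p, q) for p, q in zip(base, blend)]

# base = inverted a channel, blend = b channel (hypothetical pixel values):
# a bright base pixel survives even where the b channel is very dark.
print(lighten_blend([0.6, 0.4, 0.5], [0.1, 0.7, 0.5]))
```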

And these are the blended a and b channels with the original contrast:

b channel lighten blend The final a and b mask, without contrast correction

The last act is to change the blending mode of the “ab overlay” group to Overlay: the grass and trees get some nice “pop”, while the sky remains basically unchanged:

ab overlay Lab channels overlay (click to see the image after the saturation adjustment).
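The neutrality of mid-gray under Overlay is easy to verify from the blend formula; here is a Python sketch using the standard Overlay definition (PhotoFlow's implementation may differ in details):

```python
def overlay_blend(base, blend):
    """Overlay blend of two values in [0, 1]: darkens dark regions and
    brightens light ones; a 50% gray blend layer is exactly neutral."""
    if base < 0.5:
        return 2.0 * base * blend
    return 1.0 - 2.0 * (1.0 - base) * (1.0 - blend)
```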

I’m now almost satisfied with the result, except for one thing: the Lab overlay makes the yellow area on the left of the image way too bright. The solution is a gradient mask (horizontal this time) associated to the “ab overlay” group, to exclude the left part of the image as shown below:

overlay blend mask

The final, masked image is shown here, to be compared with the initial starting point:

final result The image after the masked Lab overlay blend (click to see the initial +1EV version).

The Final Touch

Throughout the tutorial I have intentionally pushed the editing quite a bit beyond what I would personally find acceptable. The idea was to show how far one can go with the techniques I have described; fortunately, non-destructive editing allows us to go back over our steps and reduce the strength of the various effects until the result looks right.

In this specific case, I have lowered the opacity of the “contrast” layer to 90%, the one of the “saturation” layer to 80% and the one of the “ab overlay” group to 40%. Then, feeling that the “b channel” blend was still brightening the yellow areas too much, I have reduced the opacity of the “b channel” layer to 70%.

opacity adjustment Opacities adjusted for a “softer” edit (click on the image to see the previous version).

Another thing I still did not like in the image was the overall color balance: the grass in the foreground looked a bit too “emerald” instead of “yellowish green”, therefore I thought that the image could profit from a general warming up of the colors. For that I have added a curves layer at the top of the editing stack, and brought down the middle of the curve in both the green and blue channels. The move needs to be quite subtle: I brought the middle point down from 50% to 47% in the greens and 45% in the blues, and then I further reduced the opacity of the adjustment to 50%. Here comes the warmed-up version, compared with the image before:

opacity adjustment “Warmer” version (click to see the previous version)

At this point I was almost satisfied. However, I still found that the green stuff at the bottom-right of the image attracted too much my attention and distracted the eye. Therefore I darkened the bottom of the image with a slightly curved gradient applied in “soft light” blend mode. The gradient was created with the same technique used for blending the various exposures. The transition curve is shown below: in this case, the top part was set to 50% gray (remember that we blend the gradient in “soft light” mode) and the bottom part was moved a bit below 50% to obtain a slightly darkening effect:

vignetting gradient Gradient used for darkening the bottom of the image.
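Soft-light formulas differ slightly between programs; the Pegtop variant below is a reasonable sketch of the behavior described, with 50% gray exactly neutral and values just below it giving a gentle darkening:

```python
def soft_light(base, blend):
    """Pegtop soft-light blend: out = (1 - 2*blend)*base^2 + 2*blend*base.
    A 50% gray blend is neutral; blend values slightly below 0.5 darken
    the base gently, as in the vignetting gradient described above."""
    return (1.0 - 2.0 * blend) * base * base + 2.0 * blend * base

# Effect of the 50%-to-45% gradient on a mid-bright pixel (hypothetical value).
print(soft_light(0.6, 0.50), soft_light(0.6, 0.45))
```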

It’s done! If you managed to follow me ‘till the end, you are now rewarded with the final image in all its glory, that you can again compare with the initial starting point.

final result The final image (click to see the initial +1EV version).

It has been a quite long journey to arrive here… and I hope not to have lost too many followers on the way!

June 24, 2015

Introducing the Linux Vendor Firmware Service

As some of you may know, I’ve spent the last couple of months talking with various Red Hat partners and other OpenHardware vendors that produce firmware updates. These include most of the laptop vendors that you know and love, along with a few more companies making very specialized hardware.

We’ve now got a process, fwupd, that is capable of taking the packaged update and applying it to the hardware using various forms of upload mechanism. We’ve got a specification, AppStream, which is used to describe the updates and provide metadata for what firmware updates are available to be installed. What we were missing was to “close the circle” and provide a web service for small and medium size vendors to use to upload new firmware and make it available to Linux users.

Microsoft already provides such a thing for vendors to use, and it’s part of the Microsoft Update service. From the vendors I’ve talked to, the majority don’t want to run any tools on their firmware to generate metadata. Most of them don’t even want to commit to hosting the metadata or firmware files in the same place forever, and with a couple of exceptions actually like the Microsoft Update model.

I’ve created a simple web service that’s being called Linux Vendor Firmware Service (perhaps not the final name). You can see the site in action here, although it’s not terribly useful or exciting if you’re not a hardware vendor.

If you are a vendor that produces firmware and want an access key for the beta site, please let me know. All firmware uploaded will be transferred to the final site, although I’m still waiting to hear back from Red Hat legal about a longer version of the redistribution agreement.

Anyway, comments very welcome, thanks.