August 22, 2014

Rocker Concept

Anastasia Majzhegisheva brings a quick concept of the Rocker character. Rocker is the husband of Mechanic-Sister, and… well, you can see he is a tough guy.

Rocker. Artwork by Anastasia Majzhegisheva

P.S. BTW, Anastasia has a Google+ page; you can find a lot of her artwork there!

August 21, 2014

Call for Content: Blenderart Magazine issue #46

We are ready to start gathering up tutorials, “making of” articles and images for Issue #46 of Blenderart Magazine.

The theme for this issue is FANtastic FANart.

This is going to be a seriously fun issue for everyone to take part in. We are going to honor and pay homage to our favorite artists by creating an issue full of Fanart.

At some point we all give in to the overwhelming urge to re-create our favorite characters, logos etc. We have no intention of claiming the idea as our own, we are simply practicing our craft, improving our skills and showing our love to those artists whose work inspires our own.

So in this issue we are looking for tutorials or “making of” articles on:

  • Personal Fanart projects that you have done to practice your skills and/or just for fun
  • A nice short summary of why the project inspired you and what you learned

*warning: lack of submissions could result in an entire issue of strange sculpting experiments, half-completed models and a gallery filled with random bad sketches by yours truly…. :P …… goes off to start filling sketchbook with hundreds of stick figures, just in case. :P


Send in your articles to sandra
Subject: “Article submission Issue #46 [your article name]”

Gallery Images

As usual, you can also submit your best renders based on the theme of the issue. The theme of this issue is “FANtastic FANart”. Please note that if an entry does not match the theme, it will not be published.

Send in your entries for gallery to gaurav
Subject: “Gallery submission Issue #46”

Note: Images should be at most 1024 pixels wide.

Last date for submissions: October 5, 2014.

Good luck!
Blenderart Team

Papagayo packages for Windows

For a long time we have received many requests to provide Papagayo packages for Windows. I am aware of the Papagayo 2.0 bump from the original developers, and I am considering porting the changes from our custom version of Papagayo into their version as soon as possible. Unfortunately, right now I’m busy with other priorities, so it’s very hard to tell when this will actually be done.

Papagayo screenshot (Windows)

But there is good news! As a result of our recent collaboration with a small animation studio in Novosibirsk, we have come up with a build of Papagayo that works on Windows. The build is pretty crappy, but it is working.

Here’s the download link:

Note: To start the application, unpack the archive and run the “papagayo.bat” file.

Papagayo update

I am happy to announce that we have made a small update to our Papagayo packages, which fixes an issue with the FPS setting. In previous versions the FPS value was internally messed up when you loaded a new audio file, which was leading to incorrect synchronization results. This update has no other changes and is recommended for all users.

Download updated packages

August 20, 2014

Mouse Release Movie

[Mouse peeking out of the trap] We caught another mouse! I shot a movie of its release.

Like the previous mouse we'd caught, it was nervous about coming out of the trap: it poked its nose out, but didn't want to come the rest of the way.

[Mouse about to fall out of the trap] Dave finally got impatient, picked up the trap and turned it opening down, so the mouse would slide out.

It turned out to be the world's scruffiest mouse, which immediately darted toward me. I had to step back and stand up to follow it on camera. (Yes, I know my camera technique needs work. Sorry.)

[scruffy mouse, just released from trap] [Mouse bounding away] Then it headed up the hill a ways before finally lapsing into the high-bounding behavior we've seen from other mice and rats we've released. I know it's hard to tell in the last picture -- the photo is so small -- but look at the distance between the mouse and its shadow on the ground.

Very entertaining! I don't understand why anyone uses killing traps -- even if you aren't bothered by killing things unnecessarily, the entertainment we get from watching the releases is worth any slight extra hassle of using the live traps.

Here's the movie: Mouse released from trap. [Mouse released from trap]

August 17, 2014

Krita booth at Siggraph 2014

This year, for the first time, we had a Krita booth at Siggraph. If you don’t know about it, Siggraph is the biggest yearly Animation Festival, which happened this year in Vancouver.
We were four people holding the booth:
  • Boudewijn Rempt (the maintainer of the Krita project)
  • Vera Lukman (the original author of our popup-palette)
  • Oscar Baechler (a cool Krita and Blender user)
  • and me ;) (spreading the word about Krita training; more about this in a future post…)

Krita team at Siggraph

Together with Oscar and Vera, we’ve been doing live demos of Krita’s coolest and most impressive features.

Krita booth

We were right next to the Blender booth, which made a nice free and open-source corner. It was a good occasion for me to meet more people from the Blender team.

Krita and Blender booth

People were all really impressed, from those who discovered Krita for the first time to those who already knew about it or even already used it.
As we have already started working hard on adapting Krita to VFX workflows, with support for high-bit-depth painting on OpenEXR files, OpenColorIO color management, and even animation support, it was a good occasion to showcase these features and get appropriate feedback.
Many studios expressed their interest in integrating Krita into their production pipelines, replacing the less ideal solutions they are currently using…
And of course we met lots of digital painters, like illustrators, concept artists, storyboarders and texture artists, who want to use Krita now.
Reaching these kinds of users was really our goal, and I think it was a success.

There was also a Birds of a Feather event with all the open-source projects related to VFX that were present, which was full of great encounters.
I even got to meet the developer who is looking into fixing the OCIO bug I had reported a few days before; that was awesome!

Open-source Birds of a Feather

So hopefully we’ll see some great users coming to Krita in the coming weeks and months. As usual, stay tuned ;)

*Almost all photos here by Oscar Baechler; many more photos here or here.

August 15, 2014

DXF export of FreeCAD Drawing pages

I just upgraded the code that exports Drawing pages in FreeCAD, and it now works much better, much more the way you would expect: mount your page fully in FreeCAD, then export it to DXF or DWG with the press of a button. Before, doing this would export the SVG code from the Drawing page,...

Time-lapse photography: stitching movies together on Linux

[Time-lapse clouds movie on youtube] A few weeks ago I wrote about building a simple Arduino-driven camera intervalometer to take repeat photos with my DSLR. I'd been entertained by watching the clouds build and gather and dissipate again while I stepped through all the false positives in my crittercam, and I wanted to try capturing them intentionally so I could make cloud movies.

Of course, you don't have to build an Arduino device. A search for timer remote control or intervalometer will find lots of good options around $20-30. I bought one so I'll have a nice LCD interface rather than having to program an Arduino every time I want to make movies.

Setting the image size

Okay, so you've set up your camera on a tripod with the intervalometer hooked to it. (Depending on how long your movie is, you may also want an external power supply for your camera.)

Now think about what size images you want. If you're targeting YouTube, you probably want to use one of YouTube's preferred settings, bitrates and resolutions, perhaps 1280x720 or 1920x1080. But you may have some other reason to shoot at higher resolution: perhaps you want to use some of the still images as well as making video.

For my first test, I shot at the full resolution of the camera. So I had a directory full of big ten-megapixel photos with filenames ranging from img_6624.jpg to img_6715.jpg. I copied these into a new directory, so I didn't overwrite the originals. You can use ImageMagick's mogrify to scale them all:

mogrify -scale 1280x720 *.jpg

I had an additional issue, though: rain was threatening and I didn't want to leave my camera at risk of getting wet while I went dinner shopping, so I moved the camera back under the patio roof. But with my fisheye lens, that meant I had a lot of extra house showing and I wanted to crop that off. I used GIMP on one image to determine the x, y, width and height for the crop rectangle I wanted. You can even crop to a different aspect ratio from your target, and then fill the extra space with black:

mogrify -crop 2720x1450+135+315 -scale 1280 -gravity center -background black -extent 1280x720 *.jpg

If you decide to rescale your images to an unusual size, make sure both dimensions are even, otherwise avconv will complain that they're not divisible by two.
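If you are scripting the rescale step, it is easy to snap any target size down to even dimensions first. A tiny helper (the function name is mine, purely illustrative):

```python
def snap_even(width, height):
    """Round a target size down to even dimensions, so the H.264
    encoder won't reject it as not divisible by two."""
    return width - width % 2, height - height % 2

# An odd size like 1281x721 becomes the safe 1280x720.
print(snap_even(1281, 721))
```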

Finally: Making your movie

I found lots of pages explaining how to stitch together time-lapse movies using mencoder, and a few using ffmpeg. Unfortunately, in Debian, both are deprecated. Mplayer has been removed entirely. The ffmpeg-vs-avconv issue is apparently a big political war, and I have no position on the matter, except that Debian has come down strongly on the side of avconv and I get tired of getting nagged at every time I run a program. So I needed to figure out how to use avconv.

I found some pages on avconv, but most of them didn't actually work. Here's what worked for me:

avconv -f image2 -r 15 -start_number 6624 -i 'img_%04d.jpg' -vcodec libx264 time-lapse.mp4

Adjust the start_number and filename appropriately for the files you have.

Avconv produces an mp4 file suitable for uploading to YouTube. So here is my little test movie: Time Lapse Clouds.

August 13, 2014


I should really write more about all the little open-source tools we use every day here in our architecture studio. There are your usual CAD / BIM / 3D applications, of course, which you know a bit about if you follow this blog, but one of the tools that really helps us a lot in our...

August 12, 2014

Native OSX packages available for testing

We have made new packages of Synfig that run natively on OSX and don't require X11 to be installed. Help us test them!...

August 10, 2014

Synfig website goes international

We are happy to announce that our main website is going to provide its content translated into several languages....

Sphinx Moths

[White-lined sphinx moth on pale trumpets] We're having a huge bloom of a lovely flower called pale trumpets (Ipomopsis longiflora), and it turns out that sphinx moths just love them.

The white-lined sphinx moth (Hyles lineata) is a moth the size of a hummingbird, and it behaves like a hummingbird, too. It flies during the day, hovering from flower to flower to suck nectar, being far too heavy to land on flowers like butterflies do.

[Sphinx moth eye] I've seen them before, on hikes, but only gotten blurry shots with my pocket camera. But with the pale trumpets blooming, the sphinx moths come right at sunset and feed until near dark. That gives a good excuse to play with the DSLR, telephoto lens and flash ... and I still haven't gotten a really sharp photo, but I'm making progress.

Check out that huge eye! I guess you need good vision in order to make your living poking a long wiggly proboscis into long skinny flowers while laboriously hovering in midair.

Photos here: White-lined sphinx moths on pale trumpets.

August 09, 2014

A bit of FreeCAD BIM work

This afternoon I did some BIM work in FreeCAD for a house project I'm doing with Ryan. We're using this as a test platform for IFC roundtripping between Revit and FreeCAD. So far the results are mixed: lots of information obviously gets lost on the way, but on the other hand I'm secretly pretty happy...

August 08, 2014

Siggraph 2014

Meet us at the SIGGRAPH 2014 conference in Vancouver!

Sunday 10 August, Birds of a feather, Convention Center East, room 3

  • 3 PM: Blender Foundation and community meeting
    Ton Roosendaal talks about last year’s results and plans for next year.
    Feedback welcome!
  • 4.30 PM: Blender Artist Showcase and demos
    Everyone’s welcome to show 5-10 minutes of work you did with Blender.
    Well-known artists have already been invited, like Jonathan Williamson (BlenderCookie), Sean Kennedy (former R&H), Mike Pan (author of the BGE book), etc.

Tuesday 12 – Thursday 14 August: Tradeshow exhibit

  • Exhibit hall, booth #545
  • FREE TICKETS! Go to this URL and use promotion code BL122947
  • And meet with our neighbors: Krita Foundation.
  • Tuesday 9.30 AM – 6 PM, Wednesday 9.30 AM – 6 PM, Thursday 9.30 AM – 3.30 PM
  • Exhibit has been kindly sponsored by HP and BlenderCookie.

Daily meeting point after show hours, to get together informally for a drink or food:

  •  Rogue Kitchen & Wetbar in Gastown. 601 W. Cordova Street
    (Walk out of the convention center to the east, toward the train station; 10 minutes.)


  • Available in three loud colors – the crew outfit for this year. We’ll sell them for CAD 20 at the BOF and booth.


August 07, 2014


  • If you have an orientation sensor in your laptop that works under Windows 8, this tool might be of interest to you.
  • Mattias will use that code as a base to add Compass support to Geoclue (you're on the hook!)
  • I've made a hack to load games metadata using Grilo and Lua plugins (everything looks like a nail when you have a hammer ;)
  • I've replaced a Linux phone full of binary blobs by another Linux phone full of binary blobs
  • I believe David Herrmann missed out on asking for a VT, and getting something nice in return.
  • Cosimo will be writing some more animations for me! (and possibly for himself)
  • I now know more about core dumps and stack traces than I would want to, but far less than I probably will in the future.
  • Get Andrea to approve Timm Bädert's git account so he can move Corebird to GNOME. Don't forget to try out Charles, Timm!
  • My team won FreeFA, and it's not even why I'm smiling ;)
  • The cathedral has two towers!
Unfortunately for GUADEC guests, Bretzel Airlines opened its new (and first) shop on Friday, the last day of the BoFs.

(Lovely city, great job from Alexandre, Nathalie, Marc and all the volunteers, I'm sure I'll find excuses to come back :)

Check out Flock Day 2’s Virtual Attendance Guide on Fedora Magazine


I’ve posted today’s (Thursday’s) guide to Flock talks over on Fedora Magazine:

Guide to Attending Flock Virtually: Day 2

The guide to days 3 and 4 will follow, of course. Enjoy!

August 06, 2014

Guide to Attending Flock Virtually: Day 1


Flock, the Fedora Contributor Conference, starts tomorrow morning in Prague, the Czech Republic, and you can attend – no matter where in the world you are. (Although admittedly, depending on where you are, you may need to give up on some sleep if you intend to attend live ;-) )

Here’s a quick schedule of tomorrow’s talks for remote attendees:

Wednesday, 6 August 2014

6:45 AM UTC / 8:45 AM Prague / 2:45 AM Boston

Opening: Fedora Project Leader (Matthew Miller)

7:00 AM UTC / 9:00 AM Prague / 3:00 AM Boston

Keynote: Free And Open Source Software In Europe: Policies And Implementations (Gijs Hillenius)

8:00 AM UTC / 10:00 AM Prague / 4:00 AM Boston

Better Presentation of Fonts in Fedora (Pravin Satpute)

Contributing to Fedora SELinux Policy (Michael Scherer)

FedoraQA: You are important (Amita Sharma)

9:00 AM UTC / 11:00 AM Prague / 5:00 AM Boston

Fedora Magazine (Chris Anthony Roberts)

State of Copr Build Service (Miroslav Suchý)

Taskotron and Me (Tim Flink)

Where’s Wayland (Matthias Clasen)

12:00 PM UTC / 2:00 PM Prague / 8:00 AM Boston

Fedora Workstation – Goals, Philosophy, and Future (Christian F.K. Schaller)

Procrastination makes you better: Life of a remotee (Flavio Percoco)

Python 3 as Default (Bohuslav Kabrda)

Wayland Input Status (Hans de Goede)

1:00 PM UTC / 3:00 PM Prague / 9:00 AM Boston

Evolving the Fedora Updates Process (Luke Macken)

Fedora Future Devices (Wolnei Tomazelli Junior)

Outreach Program for Women: Lessons in Collaboration
(Marina Zhurakhinskaya)

Predictive Input Methods (Anish Patel)

2:00 PM UTC / 4:00 PM Prague / 10:00 AM Boston

Open Communication and Collaboration Tools for Humans (Sayan Chowdhury, Ratnadeep Debnath)

State of the Fedora Kernel (Josh Boyer)

The Curious Case of Fedora Freshmen (aka Issue #101) (Sarup Banskota)

UX 101: Practical Usability Methods Anyone Can Use (Karen Tang)

3:00 PM UTC / 5:00 PM Prague / 11:00 AM Boston

Fedora Ambassadors: State of the Union (Jiří Eischmann)

Hyperkitty: Past, Present, and Future (Aurélien Bompard)

Kernel Tuning (John H Dulaney)

Release Engineering and You (Dennis Gilmore)

4:00 PM UTC / 6:00 PM Prague / 12:00 PM Boston

Advocating (Christoph Wickert)

Documenting Software with Mallard (Jaromir Hradilek, Petr Kovar)

Fedora Badges and Badge Design (Marie Catherine Nordin, Chris Anthony Roberts)

How is the Fedora kernel different? (Levente Kurusa)

Help us cover these talks!


We’re trying to get as full coverage as possible of these talks on Fedora Magazine. You can help us out, even if you are a remote attendee. If any of the talks above are at a reasonable time in your timezone and you’d be willing to take notes and draft a blog post for Fedora Magazine, please sign up on our wiki page for assignments! You can also contact Ryan Lerch or Chris Roberts for more information about contributing.

August 05, 2014

(lxml) XPath matching against nodes with unprintable characters

Sometimes you want to clean up HTML by removing tags containing nothing but unprintable characters (whitespace, non-breaking space, etc.). Sometimes encoding this back and forth results in weird characters when the HTML is rendered. Anyway, here is a snippet you might find useful:

def clean_empty_tags(node):
    """Find all tags containing only a non-breaking space. They come out
    broken and we won't need them anyway."""
    for empty in node.xpath("//p[.='\xa0']"):
        empty.getparent().remove(empty)

FreeCAD Spaces

I just finished giving a bit of polish to the Arch Space tool of FreeCAD. Until now it was barely a geometric entity that represents a closed space. You can define it by building it from an existing solid shape, or from selected boundaries (walls, floors, whatever). Now I added a bit of visual goodness....

Privacy Policy

I got an envelope from my bank in the mail. The envelope was open and looked like the flap had never been sealed.

Inside was a copy of their privacy policy. Nothing else.

The policy didn't say whether their privacy policy included sealing the envelope when they send me things.

Clarity in GIMP (Local Contrast + Mid Tones)

I was thinking about other ways I fiddle with Luminosity Masks recently, and I thought it might be fun to talk about some other ways to use them when looking at your images.

My previous ramblings about Luminosity Masks:
The rest of my GIMP tutorials can be found here:

If you remember from my previous look at Luminosity Masks, the idea is to create masks that correspond to different luminous levels in your image (roughly the lightness of tones). Once you have these masks, you can make adjustments to your image and isolate their effect to particular tonal regions easily.

In my previous examples, I used them to apply different color toning to different tonal regions of the image, like this example masked to the DarkDark tones (yes, DarkDark):

Mouseover to change Hue to: 0 - 90 - 180 - 270

What’s neat about that application is when you combine it with some Film Emulation presets. I’ll leave that as an exercise for you to play with.

In this particular post I want to do something different.
I want to make some eyes bleed.

“My eyes! The goggles do nothing!” Radioactive Man (Rainier Wolfcastle)

In the same realm of bad tone-mapping for HDR images (see the first two images here) there are those who sharpen to ridiculous proportions as well as abuse local contrast enhancement with Unsharp Mask.

It was this last one that I was fiddling with recently that got me thinking.

Local Contrast Enhancement with Unsharp Mask

If you haven’t heard of this before, let me explain briefly. There is a sharpening method you can use in GIMP (and other software) that utilizes a slightly blurred version of your image to enhance edge contrasts. This leads to a visual perception of increased sharpness or contrast on those edges.

It’s easy to do this manually to see roughly how it works:
  1. Open an image.
  2. Duplicate the base layer.
  3. Blur the top layer a bit (Gaussian blur).
  4. Set the top layer blend mode to “Grain Extract”.
  5. Create a New Layer from visible.
  6. Set the new layer blend mode to “Overlay”, and hide the blurred layer.
Of course, it’s quite a bit easier to just use Unsharp Mask directly (but now you know how to create high-pass layers of your image - we’re learning things already!).
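The layer recipe above boils down to the classic unsharp-mask formula: sharpened = original + amount × (original − blurred). A minimal NumPy sketch of that formula, using a crude box blur in place of GIMP’s Gaussian (purely illustrative, not GIMP’s implementation):

```python
import numpy as np

def box_blur(img, radius):
    """Very crude blur: average over a (2*radius+1) square window."""
    size = 2 * radius + 1
    padded = np.pad(img, radius, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for dy in range(size):
        for dx in range(size):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / size ** 2

def unsharp_mask(img, radius=5, amount=0.5):
    """Add back the high-pass (original minus blurred), scaled by amount."""
    img = img.astype(float)
    high_pass = img - box_blur(img, radius)
    return np.clip(img + amount * high_pass, 0, 255)
```

On a flat region the high-pass is zero, so nothing changes; only edges get the contrast boost, which is exactly why cranking `amount` makes edges scream.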

So let’s have a look at an image from a nice Fall day at a farm:

I can apply Unsharp Mask through the menu:

Filters → Enhance → Unsharp Mask...

Below the preview window there are three sliders to adjust the effect: Radius, Amount, and Threshold.

Radius changes how big a radius to use when blurring the image to create the mask.
Amount changes how strong the effect is.
Threshold is a setting for the minimum pixel value difference to define an edge. You can ignore it for now.

If we apply the filter with its default values (Radius: 5.0, Amount: 0.50), we get a nice little sharpening effect on the result:

Unsharp Mask with default values
(mouseover to compare original)

It gives a nice little “pop” to the image (a bit much for my taste). It also mostly avoids sharpening noise, which is nice as well.

So far this is fairly simple stuff, nothing dramatic. The problem is, once many people learn about this they tend to go a bit overboard with it. For instance, let’s crank up the Amount to 3.0:

Don’t do this. Just don’t.

Yikes. But don’t worry. It’s going to get worse.

High Radius, Low Amount

So I’m finally getting to my point. There is a neat method of increasing local contrast in an image by pushing the Unsharp Mask values more than you might normally. If you use a high radius and the default amount, you get:

Unsharp Mask, Radius: 80 Amount: 0.5
(mouseover to compare original)

It still looks like clown vomit. But we can keep the nice local contrast enhancement and mitigate the offensiveness by turning the Amount down even further. Here it is with the Radius still at 80, but the Amount turned down to 0.10:

Unsharp Mask, Radius: 80 Amount: 0.10
(mouseover to compare original)

Even with the Amount at 0.10 it might be a tad much for my taste. The point is that you can gain a nice little boost to local contrast with this method.

Neat but hardly earth-shattering. This has been covered countless times in various places already (and if this is the first time you’re hearing about it, then we’re learning two new things today!).

We can see that we now have a neat method for bumping up the local contrast of an image slightly to give it a little extra visual pop. What we can think about now is, how can I apply that to my images in other interesting ways?

Perhaps we could find some way to apply these effects to particular areas of an image? Say, based on something like luminosity?

Clarity in Lightroom

From what I can tell (and find online), it appears that this is basically what the “Clarity” adjustment in Adobe Lightroom does. It’s a Local Contrast Enhancement masked in some way to middle tones in the image.

Let’s have a quick look and see if that theory holds any weight. Here is the image above, brought into Lightroom with “Clarity” pushed to 100:

From Lightroom 4, Clarity: 100

This seems visually similar to the path we started on already, but let’s see if we can get something better with what we know so far.

Clarity in GIMP

What I want to do is to increase the local contrast of my image, and confine those adjustments to the mid-tone areas of the image. We have seen a method for increasing local contrast with Unsharp Mask, and I had previously written about creating Luminosity Masks. Let’s smash them together and see what we get!

If you haven’t already, go get the Script-Fu to automate the creation of these masks (I tend to use Saul’s version as it’s faster than mine) from the GIMP Registry.

Open an image to get started (I’ll be using the same image from above).

Create Your Luminosity Masks

You’ll need to generate a set of luminosity masks using your base image as a reference. With your image open, you can find Saul’s Luminosity Mask script here:

Filters → Generic → Luminosity Masks (saulgoode)

It should only take a moment to run, and you shouldn’t notice anything different when it’s finished. If you check your Channels dialog, you should see all nine masks there (L, LL, LLL, M, MM, MMM, D, DD, DDD).

Luminosity Masks, by row: Darks, Mids, Lights
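For the curious, the nine channels follow a simple pattern. Here is a rough sketch of the construction on a normalized grayscale array (this approximates the idea; Saul’s script works on actual GIMP channels and may differ in detail):

```python
import numpy as np

def luminosity_masks(lum):
    """lum: 2-D array of luminosity values in [0, 1].
    Lights select bright pixels, Darks are the inverse, and Mids are
    the overlap of 'not too bright' and 'not too dark'."""
    l = lum                      # L: plain luminosity
    ll = l * l                   # LL: narrower selection of lights
    lll = ll * l                 # LLL: narrower still
    d = 1.0 - l                  # D: inverted luminosity
    dd = d * d
    ddd = dd * d
    m = l * d                    # M: peaks in the mid-tones
    mm = m * m
    mmm = mm * m
    return {"L": l, "LL": ll, "LLL": lll,
            "D": d, "DD": dd, "DDD": ddd,
            "M": m, "MM": mm, "MMM": mmm}
```

Note how the M mask is zero at pure black and pure white and largest at 50% gray: any adjustment painted through it naturally fades out toward the tonal extremes.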

Enhance the Local Contrast

Now it’s time to leave subtlety behind us. We are going to be masking these results anyway, so we can get a little crazy with the application in this step. You can use the steps I mentioned above with Unsharp Mask to increase the local contrast, or you can use G'MIC to do it instead.

The reason that you may want to use G'MIC instead is that to increase the local contrast without causing a bit of a color shift would require that you apply the Unsharp Mask on a particular channel after decomposition. G'MIC can automatically apply the contrast enhancement only on the luminance in one step.
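The decomposition idea can be sketched in a few lines: compute a luma channel, apply the enhancement to it alone, and add the luma delta back to all three color channels so chroma is untouched (Rec. 601 weights; a toy illustration, not what G'MIC actually does internally):

```python
import numpy as np

def enhance_luma_only(rgb, enhance):
    """Apply `enhance` (any 2-D -> 2-D function) to the luma channel
    only, so color relationships are preserved and no hue shift occurs."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y = 0.299 * r + 0.587 * g + 0.114 * b      # Rec. 601 luma
    delta = enhance(y) - y                      # change confined to luma
    return np.clip(rgb + delta[..., None], 0, 255)
```

Any of the local-contrast methods discussed here could be passed in as `enhance`; since all three channels shift by the same luma delta, their ratios (and therefore the hue) stay put.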

Let’s try it with the regular Unsharp Mask in GIMP. I’m going to use similar settings to what we used above, but we’ll turn the amount up even more.

With your image open in GIMP, duplicate the base layer. We’ll be applying the effect and mask on this duplicate over your base.

Now we can enhance the local contrast using Unsharp Mask:
Filters → Enhance → Unsharp Mask...

This time around, we’ll try using Radius: 80 and Amount: 1.5.

Unsharp Mask, Radius: 80, Amount: 1.5. My eyes!

Yes, it’s horrid, but we’re going to be masking it to the mid-range tones remember. Now I can apply a layer mask to this layer by Right-clicking on the layer, and selecting “Add Layer Mask...”.
Right-click → Add Layer Mask...

In the “Add a Mask to the Layer” dialog that pops up, I’ll choose to initialize the layer to a Channel, and choose the “M” mid-tone mask:

Once the ridiculous tones are confined to the mid-tones, things look much better:

Unsharp Mask, Radius: 80, Amount: 1.5. Masked to mid-tones.
(mouseover to compare original)

You can see that there is now a nice boost to the local contrast that is confined to the mid-tones in the image. This is still a bit much for me personally, but I’m purposefully over-doing it in an attempt to illustrate the process. Really you’d want to either tone-down the amount on the USM (UnSharp Mask), or adjust the opacity of this layer to taste now.

So the general formula we are seeing is to make an adjustment (local contrast enhance in this case), and to use the luminosity masks to give us control over where the effect is applied.

For instance, we can try using other types of contrast/detail enhancement in place of the USM step.

I had previously written about detail enhancement through “Freaky Details”. This is what we get when replacing the USM local contrast enhancement with it. Using G'MIC, I can find “Freaky Details” at:
Filters → G'MIC
Details → Freaky details

I used an Amplitude of 4, Scale 22, and Iterations 1. I applied this to the Luminance Channels:

Freaky Details, Amplitude 4, Scale 22, Iterations 1, mid-tone mask
(mouseover to compare original)

Trying other G'MIC detail enhancements such as “Local Normalization” can yield slightly different results:

G'MIC Local Normalization at default values.
(mouseover to compare original)

Yes, there’s some halo-ing, but remember that I’m purposefully allowing these results to get ugly to highlight what they’re doing.

G'MIC Local Variance Normalization is a neat result with fine details as well:

G'MIC Local Variance Normalization (default settings)
(mouseover to compare original)

In Conclusion

This approach works because our eyes will be more sensitive to slight contrast changes as they occur in the mid-tones of an image as opposed to the upper and lower tones. More importantly, it’s a nice introduction to viewing your images as more than a single layer.

Understanding these concepts and viewing your images as the sum of multiple parts allows you much greater flexibility in how you approach your retouching.

I fully encourage you to give it a shot and see what other strange combinations you might be able to discover! For instance, try using the Film Emulation presets in combination with different luminosity masks to find new and interesting combinations of color grading! Try setting the masked layers to different blending modes! You may surprise yourself with what you find.

Help support the site! Or don’t!
I’m not supporting my (growing) family or anything from this website. Seriously.
There is only one reason I am writing these tutorials and posts:
I love doing it.
Technically there is a second reason: to give back to the community. Others before me were instrumental in helping me learn things when I first got started, and I’m hoping to pay it forward here.

If you want to visit an ad, or make a donation, or even link/share my content, I would be absolutely grateful (and tickled pink). If you don’t it’s not going to affect me writing and posting here one bit.

I’ll keep writing, and I’ll keep it free.
If you get any use out of this site, I only ask that you do one thing:
pay it forward.


This blog post is mostly about showing some photos I took, but I may as well give a brief summary from my point of view.

Had a good time in Strasbourg this week. Hacked a bit on Adwaita with Lapo, who has fearlessly been sanding the rough parts after the major refactoring. Jim Hall uncovered the details of his recent usability testing of GNOME, so while we video chatted before, it was nice to meet him in person. Watched Christian uncover his bold plans to focus on Builder full time which is both awesome and sad. Watched Jasper come out with the truth about his love for Windows and Federico’s secret to getting around fast. Uncovered how Benjamin is not getting more aerodynamic (ie fat) like me. Enjoyed a lot of great food (surprisingly had crêpes only once).

In a classic move I ran out of time in my lightning talk on multirotors, so I’ll have to cover the topic of free software flight controllers in a future blog post. I managed to miss a good number of talks I intended to see, which is quite a feat, considering the average price of beer in the old town. Had a good time hanging out with folks, which is so rare for me.

During the BOFs on Wednesday I sat down with the Boxes folks, discussing some new designs. Sad that it was only few brief moments I managed to talk to Bastian about our Blender workflows. Unfortunately the Brno folks from whom I stole a spot in the car had to get back on Thursday so I missed the Thursday and Friday BOFs as well.

Despite the weather I enjoyed the second last GUADEC. Thanks for making it awesome again. See you in the next last one in Gothenburg.

August 04, 2014

ReduceContour tool test

Hi all

Recently I´ve been doing some ground work, polishing FillHoles tool and other internal tools. Nothing fun to show but definitelly improving robustness and inbetween added here and there small useful new features, like the one I´ve being showing in this quick and dirt video series.

Like the proportional inflate, I’ve recently added a threshold to the Separate Disconnected functionality to delete smaller parts. This is very common when we import noisy meshes with lots of floating parts.
One of the most important features is the possibility to bridge and connect separated meshes manually, like I show here:

So stay tuned, because the real fun may start soon for me :P

Notes on Fedora on an Android device

A bit more than a year ago, I ordered a Geeksphone Peak, one of the first widely available Firefox OS phones to explore this new OS.

These notes are probably not very useful on their own, but they might give a few hints to Android developers who are stuck.

The hardware

The device has a Qualcomm Snapdragon S4 MSM8225Q SoC, which uses the Adreno 203 and a 540x960 Protocol A (4 touchpoints) touchscreen.

The Adreno 203 (Note: might have been 205) is not supported by Freedreno, and is unlikely to be. It's already a couple of generations behind the latest models, and getting a display working on this device would also require (re-)writing a working panel driver.

At least the CPU is an ARMv7 with hardware floating point (unlike the incompatible ARMv6 used by the Raspberry Pi), which means that much more software is available for it.

Getting a shell

Start by installing the android-tools package, and copy the udev rules file to the correct location (the location is mentioned in the rules file itself).

Then, on the phone, turn on the developer mode. Plug it in, and run "adb devices", you should see something like:

$ adb devices
List of devices attached
22ae7088f488 device

Now run "adb shell" and have a browse around. You'll realise that the kernel, drivers, init system, baseband stack, and much more are plain Android. That's a good thing, as I could then order Embedded Android, and dive in further.

If you're feeling a bit restricted by the few command-line applications available, download an all-in-one precompiled busybox, and push it to the device with "adb push".

You can also use aafm, a simple GUI file manager, to browse around.

Getting a Fedora chroot

After formatting a MicroSD card in ext4 and unpacking a Fedora system image in it, I popped it inside the phone. You won't be able to use this very fragile script to launch your chroot just yet though, as we lack a number of kernel features that are required to run Fedora. You'll also note that this is an old version of Fedora. There are probably newer versions available around, but I couldn't pinpoint them while writing this article.

Running Fedora, even in a chroot, on such a system will allow us to compile natively (I wouldn't try to build WebKit on it though) and run against a glibc setup rather than Android's bionic libc.

Let's recompile the kernel to be able to use our new chroot.

Avoiding the brick

Before recompiling the kernel and possibly bricking our device, we'll probably want to make sure that we have the ability to restore the original software. Nothing worse than a bricked device, right?

First, we'll unlock the bootloader, so we can modify the kernel, and eventually the bootloader. I took the instructions from this page, but ignored the bits about flashing the device, as we'll be doing that a different way.

You can grab the restore image from my Fedora people page since, as seems to be the norm, Android(-ish) device makers deny any involvement in devices that are more than a couple of months old. No restore software, no product page.

The recovery should be as easy as

$ adb reboot-bootloader
$ fastboot flash boot boot.img
$ fastboot flash system system.img
$ fastboot flash userdata userdata.img
$ fastboot reboot

This technique on the Geeksphone forum might also still work.

Recompiling the kernel

The kernel shipped on this device is a modified Ice-Cream Sandwich "Strawberry" version, as spotted using the GPU driver code.

We grabbed the source code from Geeksphone's github tree, installed the ARM cross-compiler (in the "gcc-arm-linux-gnu" package on Fedora) and got compiling:

$ export ARCH=arm
$ export CROSS_COMPILE=/usr/bin/arm-linux-gnu-
$ make C8680_defconfig
# Make sure that CONFIG_DEVTMPFS and CONFIG_EXT4_FS_SECURITY get enabled in the .config
$ make

We now have a zImage of the kernel. Launching "fastboot boot /path/to/zImage" didn't seem to work (it would have used the kernel only for the next boot), so we'll need to replace the kernel on the device.

It's a bit painful to have to do this, but we have the original boot image to restore in case our version doesn't work. The boot partition is on partition 8 of the MMC device. You'll need to install my package of the "android-BootTools" utilities to manipulate the boot image.

$ adb shell 'cat /dev/block/mmcblk0p8 > /mnt/sdcard/disk.img'
$ adb pull /mnt/sdcard/disk.img
$ bootunpack boot.img
$ mkbootimg --kernel /path/to/kernel-source/out/arch/arm/boot/zImage --ramdisk p8.img-ramdisk.cpio.gz --base 0x200000 --cmdline 'androidboot.hardware=qcom loglevel=1' --pagesize 4096 -o boot.img
$ adb reboot-bootloader
$ fastboot flash boot boot.img

If you don't want the graphical interface to run, you can modify the Android init to avoid that.

Getting a Fedora chroot, part 2

Run the script. It works. Hopefully.

If you manage to get this far, you'll have a running Android kernel and user-space, and will be able to use the Fedora chroot to compile software natively and poke at the hardware.

I would expect that, given a kernel source tree made available by the vendor, you could follow those instructions to transform your old Android phone into an ARM test "machine".

Going further, native Fedora boot

Not for the faint of heart!

The process is similar, but we'll need to replace the initrd in the boot image as well. In your chroot, install Rob Clark's hacked-up adb daemon with glibc support (packaged here) so that adb commands keep on working once we natively boot Fedora.

Modify the /etc/fstab so that the root partition is the SD card:

/dev/mmcblk1 /                       ext4    defaults        1 1

We'll need to create an initrd that's small enough to fit on the boot partition though:

$ dracut -o "dm dmraid dmsquash-live lvm mdraid multipath crypt dasd zfcp i18n" initramfs.img

Then run "mkbootimg" as above, but with the new ramdisk instead of the one unpacked from the original boot image.

Flash, and reboot.


In the future, one would hope that packages such as adbd and the android-BootTools could get into Fedora, but I'm not too hopeful as Fedora, as a project, seems uninterested in running on top of Android hardware.


Why am I posting this now? Firstly, because it allows me to organise the notes I took nearly a year ago. Secondly, I don't have access to the hardware anymore, as it found a new home with Aleksander Morgado at GUADEC.

Aleksander hopes to use this device (Qualcomm-based, remember?) to add native telephony support to the QMI stack. This would in turn get us a ModemManager Telephony API, and the possibility of adding support for more hardware, such as through RIL and libhybris (similar to the oFono RIL plugin used in the Jolla phone).

Common docker pitfalls

I’ve run into a few problems with docker that I’d like to document, along with how to solve them.

Overwriting an entrypoint

If you’ve configured a script as an entrypoint and that script fails, you can run the docker image with a shell in order to fiddle with the script (instead of continuously rebuilding the image):

#--entrypoint (provides a new entry point which is the nominated shell)
docker run -i --entrypoint='/bin/bash'  -t f5d4a4d6a8eb

Possible errors you face otherwise are these:

/bin/bash: /bin/bash: cannot execute binary file

Weird errors when building the image

I’ve run into this a few times, with errors like:

Error in PREIN scriptlet in rpm package libvirt-daemon-
useradd: failure while writing changes to /etc/passwd

If you’ve set SELinux to enforcing, you may want to temporarily disable SELinux for just building the image. Don’t disable SELinux permanently.

Old (base) image

Check if your base image has changed (e.g. docker images) and pull it again (docker pull <image>)


August 03, 2014

RawSpeed moves to github

As you may have noticed, there hasn’t been much activity lately, since everyone on the Rawstudio team has had various things getting in the way of doing more work on Rawstudio.

I have however now and again found time to work on RawSpeed, and will from now on host all changes on github. Github makes a lot of work much easier, and allows direct pull requests to be made.

RawSpeed Version 2; New Cameras & Features

Here are the new features that have been included and can be tested on the development branch. Note that this is RawSpeed and not Rawstudio.

  • Support for Sigma foveon cameras.
  • Support for Fuji cameras.
  • Support for old Minolta, Panasonic, Sony and other cameras (contributed by Pedro Côrte-Real)
  • Arbitrary CFA definition sizes.
  • Use pugixml for xml parsing to avoid depending on libxml.

When “version 2” is stabilized a bit, a formal release will be made, whereafter the API will be locked.


August 02, 2014

Krita: illustrated beginners guide in Russian

Some time ago our user Tyson Tan (creator of Krita's mascot Kiki) published his beginners guide for Krita. Now this tutorial is also available in Russian!

If you happen to know Russian, please follow the link :)


Fanart by Anastasia Majzhegisheva – 13


Morevna Universe.
Watercolor and ink artwork by Anastasia Majzhegisheva.


August 01, 2014

Predicting planetary visibility with PyEphem

Part II: Predicting Conjunctions

After I'd written a basic script to calculate when planets will be visible, the next step was predicting conjunctions, times when two or more planets are close together in the sky.

Finding separation between two objects is easy in PyEphem: it's just one line once you've set up your objects, observer and date.

import ephem

p1 = ephem.Mars()
p2 = ephem.Jupiter()
observer = ephem.Observer()  # and then set it to your city, etc.
observer.date = '2014/8/1'
p1.compute(observer)
p2.compute(observer)

ephem.separation(p1, p2)

So all I have to do is loop over all the visible planets and see when the separation is less than some set minimum, like 4 degrees, right?
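That naive pairwise scan is easy to sketch. Below is a self-contained illustration of the idea; the coordinates are made up, and the separation() helper is a plain spherical law-of-cosines stand-in for ephem.separation, purely so the sketch runs without PyEphem:

```python
import math
from itertools import combinations

def separation(b1, b2):
    """Angular separation in degrees between two (name, ra_deg, dec_deg)
    tuples, via the spherical law of cosines -- a stand-in for
    ephem.separation so this sketch needs no PyEphem."""
    _, ra1, dec1 = b1
    _, ra2, dec2 = b2
    ra1, dec1, ra2, dec2 = map(math.radians, (ra1, dec1, ra2, dec2))
    cos_sep = (math.sin(dec1) * math.sin(dec2)
               + math.cos(dec1) * math.cos(dec2) * math.cos(ra1 - ra2))
    return math.degrees(math.acos(max(-1.0, min(1.0, cos_sep))))

MAX_SEP = 4.0  # degrees; the "set minimum" from the text

# Made-up positions for a single date; with PyEphem these would come
# from each body's compute(observer) call.
bodies = [("Mars", 210.0, -12.0), ("Venus", 212.5, -10.5),
          ("Jupiter", 95.0, 22.0)]

close_pairs = [(b1[0], b2[0]) for b1, b2 in combinations(bodies, 2)
               if separation(b1, b2) <= MAX_SEP]
print(close_pairs)  # only Mars and Venus are within 4 degrees here
```

With PyEphem itself you would compute() each body against your observer and call ephem.separation on the bodies directly.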

Well, not really. That tells me if there's a conjunction between a particular pair of planets, like Mars and Jupiter. But the really interesting events are when you have three or more objects close together in the sky. And events like that often span several days. If there's a conjunction of Mars, Venus, and the moon, I don't want to print something awful like

  Conjunction between Mars and Venus, separation 2.7 degrees.
  Conjunction between the moon and Mars, separation 3.8 degrees.
  Conjunction between Mars and Venus, separation 2.2 degrees.
  Conjunction between Venus and the moon, separation 3.9 degrees.
  Conjunction between the moon and Mars, separation 3.2 degrees.
  Conjunction between Venus and the moon, separation 4.0 degrees.
  Conjunction between the moon and Mars, separation 2.5 degrees.

... and so on, for each day. I'd prefer something like:

Conjunction between Mars, Venus and the moon lasts from Friday through Sunday.
  Mars and Venus are closest on Saturday (2.2 degrees).
  The moon and Mars are closest on Sunday (2.5 degrees).

At first I tried just keeping a list of planets involved in the conjunction. So if I see Mars and Jupiter close together, I'd make a list [mars, jupiter], and then if I see Venus and Mars on the same date, I search through all the current conjunction lists and see if either Venus or Mars is already in a list, and if so, add the other one. But that got out of hand quickly. What if my conjunction list looks like [ [mars, venus], [jupiter, saturn] ] and then I see there's also a conjunction between Mars and Jupiter? Oops -- how do you merge those two lists together?

The solution to taking all these pairs and turning them into a list of groups that are all connected actually lies in graph theory: each conjunction pair, like [mars, venus], is an edge, and the trick is to find all the connected edges. But turning my list of conjunction pairs into a graph so I could use a pre-made graph theory algorithm looked like it was going to be more code -- and a lot harder to read and less maintainable -- than making a bunch of custom Python classes.
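For the curious, the connected-groups step on its own need not be much code; a tiny union-find over the conjunction pairs does it. This is only a sketch of the grouping idea (the function name is mine, and it ignores the dates and separations the real script has to track):

```python
def group_conjunctions(pairs):
    """Merge pairwise conjunctions like ("mars", "venus") into groups of
    bodies connected through any chain of pairs, via a small union-find."""
    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    def union(a, b):
        parent[find(a)] = find(b)

    for a, b in pairs:
        union(a, b)

    # Collect each body under its group's representative.
    groups = {}
    for body in parent:
        groups.setdefault(find(body), set()).add(body)
    return list(groups.values())

pairs = [("mars", "venus"), ("jupiter", "saturn"), ("mars", "jupiter")]
print(group_conjunctions(pairs))  # one group containing all four bodies
```

Anything reachable through a chain of pairs lands in the same group, which is exactly the merge that was awkward with plain lists.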

I eventually ended up with three classes: ConjunctionPair, for a single conjunction observed between two bodies on a single date; Conjunction, a collection of ConjunctionPairs covering as many bodies and dates as needed; and ConjunctionList, the list of all Conjunctions currently active. That let me write methods to handle merging multiple conjunction events together if they turned out to be connected, as well as a method to summarize the event in a nice, readable way.

So predicting conjunctions ended up being a lot more code than I expected -- but only because of the problem of presenting it neatly to the user. As always, user interface represents the hardest part of coding.

The working script is on github at

July 31, 2014

Fanart by Anastasia Majzhegisheva – 12


Morevna. Artwork by Anastasia Majzhegisheva

We’ve got one more fanart submission from Anastasia Majzhegisheva. This time Anastasia also uncovers the details of her creative process by providing WIP images and screenshots. Enjoy!

Free From XP (LinuxPro Magazine) GIMP Article

So, the last time I talked about LinuxPro Magazine, it was about a simple give-away of the promotional copies I had received of their GIMP Handbook issue. At that time, I joked with the editor that surely it couldn’t be complete without anything written by me. :)

Then he called me out on my joke and asked me if I wanted to write an article for them.

So, I’ve got an article in LinuxPro Magazine Special Edition #18: Free From XP!

The article is aimed at new users switching over from XP to Linux, so the stuff I cover is relatively basic, like:
  • The Interface
  • Cropping
  • Rotating
  • Correcting Levels
  • Brightness/Contrast
  • Color Levels
  • Curves
  • Resizing
  • Sharpening
  • Saving & Exporting
Still, if you know someone who could use a hand switching, it certainly can’t hurt to pick a copy up! (You can get print and digital copies from their website: LinuxPro Magazine).

Here’s a quick preview of the first page of the article:

My hair doesn’t look anywhere near as fabulous as this image would have you believe...

Also, if anyone sees a copy on a newsstand, it would be awesome if you could send me a quick snap of it.

writing a product vision for Metapolator

A week ago I kicked off my involvement with the Metapolator project as I always do: with a product vision session. Metapolator is an open project and it was the first time I did the session online, so you have the chance to see the session recording (warning: 2½ hours long), which is a rare opportunity to witness such a highly strategic meeting; normally this is top‐secret stuff.

boom boom

For those not familiar with a product vision, it is a statement that we define as ‘the heartbeat of your product, it is what you are making, reduced down to its core essence.’ A clear vision helps a project to focus, to fight off distractions and to take tough design decisions.

To get a vision on the table I moderate a session with the people who drive the product development, who I simply ask ‘what is it we are making, who is it for, and where is the value?’ The session lasts until I am satisfied with the answers. I then write up the vision statement in a few short paragraphs and fine-tune it with the session participants.

To cut to the chase, here is the product vision statement for Metapolator:

‘Metapolator is an open web tool for making many fonts. It supports working in a font design space, instead of one glyph, one face, at a time.
‘With Metapolator, “pro” font designers are able to create and edit fonts and font families much faster, with inherent consistency. They gain unique exploration possibilities and the tools to quickly adapt typefaces to different media and domains of use.
‘With Metapolator, typographers gain the possibility to change existing fonts—or even create new ones—to their needs.
‘Metapolator is extendible through plugins and custom specimens. It contains all the tools and fine control that designers need to finish a font.’

mass deconstruction

I think that makes it already quite clear what Metapolator is. However, to demonstrate what goes into writing a product vision, and to serve as a more fleshed out vision briefing, I will now discuss it sentence by sentence.

‘Metapolator is an open web tool for making many fonts.’
  • There is no standard template for writing a product vision, the structure it needs is as varied as the projects I work with. But then again it has always worked for me to lead off with a statement of identity; to start answering the question ‘what is it we are making?’ And here we have it.
  • open or libre? This was discussed during the session. At the end Simon Egli, Metapolator founder and driving force, wanted to express that we aim beyond just libre (i.e. open source code) and that ‘open’ also applies to the vibe of the tool on the user side.
  • web‑based: this is not just a statement of the technology used, of the fact that it runs in the browser. It is also a solid commitment that it runs on all desktops—mac, win and linux. And it implies that starting to use Metapolator is as easy as clicking/typing the right URL; nothing more required.
  • tool or application? The former fits better with the fact that font design and typography are master crafts (I can just see the tool in the hand of the master).
  • making or designing fonts? I have learned in the last couple of weeks that there is a font design phase where a designer concentrates on shaping eight strategic characters (for latin fonts). This is followed by a production phase where the whole character set is fleshed out, the spacing between all character pairs set, then different weights (e.g. thin and bold) are derived and maybe also narrow and extended variants. This phase is very laborious and often outsourced. ‘Making’ fonts captures both design and production phases.
  • many fonts: this is the heart of the matter. You can see from the previous point that making fonts is up to now a piecemeal activity. Metapolator is going to change that. It is dedicated to either making many different fonts in a row, or a large font family, even a collection of related families. The implication is that in the user interaction of Metapolator the focus is on making many fonts and the user needs for making many fonts take precedence in all design decisions.
‘It supports working in a font design space, instead of one glyph, one face, at a time.’
  • The first sentence said that Metapolator is going to change the world—by introducing a tool for making many fonts, something not seen before; this second one tells us how.
  • supports is not a word one uses lightly in a vision. ‘Supports XYZ’ does not mean it is just technically possible to do XYZ; it means here that this is going to be a world‐class product to do XYZ, which can only be realised with world‐class user interaction to do XYZ.
  • design space is one of these wonderful things that come up in a product vision session. Super‐user Wei Huang coined the phrase when describing working with the current version of Metapolator. It captures very nicely the working in a continuum that Metapolator supports, as contrasted with the traditional piecemeal approach, represented by ‘one glyph, one face, at a time.’ What is great for a vision is that ‘design space’ captures the vibe that working with Metapolator should have, but that it is not explicit on the realisation of it. This means there is room for innovation, through technological R&D and interaction design.
‘With Metapolator, “pro” font designers are able to create and edit fonts and font families much faster, with inherent consistency.’
  • With “pro” font designers we encounter the first user group, starting to answer ‘who is it for?’ “Pro” is in quotes because it is not the earning‑a‐living part that interests us, it is the fact that these people mastered a craft.
  • create and edit balances the two activities; it is not all about creating from scratch.
  • fonts and font families balances making very different fonts with making families; it is not all about the latter.
  • much faster is the first value statement, starting to answer ‘where is the value?’ Metapolator stands for an impressive speed increase in font design and production, by abolishing the piecemeal approach.
  • inherent consistency is the second value statement. Because the work is performed by users in the font design space, where everything is connected and continuous, the conventional user overhead of keeping everything consistent disappears.
‘They gain unique exploration possibilities and the tools to quickly adapt typefaces to different media and domains of use.’
  • exploration possibilities is part feature, part value statement, part field of use and part vibe. All these four are completely different things (e.g. there is inherently zero value in a feature), captured in two words.
  • quickly adapt is a continuation of the ‘much faster’ value statement above, highlighting complementary fields of use for it.
‘With Metapolator, typographers gain the possibility to change existing fonts—or even create new ones—to their needs.’
  • And with typographers we encounter the second user group. These are people who use fonts, with a whole set of typographical skills and expertise implied.
  • possibility to change is the value statement for this user group. This is a huge deal. Normally typographers have neither the skills, nor the time, to modify a font. Metapolator will open up this world to them, with that fast speed and inherent consistency that was mentioned before.
  • create new goes one step further than the previous point. Here we have now a commitment to enable more ambitious typographers (that is what ‘even’ stands for) to create new fonts.
  • to their needs is a context we should be aware of. These typographers will be designing something, anything with text, and that is their main goal. Changing or creating a font is for them a worthwhile way to get it done. But it is only part of their job, not the job. Note that the needs of typographers includes applying some very heavy graphical treatments to fonts.
‘Metapolator is extendible through plugins and custom specimens.’
  • extendible through plugins is one realisation of the ‘open’ aspect mentioned in the first sentence. This makes Metapolator a platform and its extendability will have to be taken into account in every step of its design.
  • custom specimens is slightly borderline to mention in a vision; you could say it is just a feature. I included it because it programs the project to properly support working with type specimens.
‘It contains all the tools and fine control that designers need to finish a font.’
  • all the tools: this was the result of me probing during the vision session whether Metapolator is thought to be part of a tool chain, or independent. This means that it must be designed to work stand‑alone.
  • fine control: again the result of probing, this time whether Metapolator includes the finesse to take care of those important details, on a glyph level. Yes, it all needs to be there.
  • that designers need makes it clear by whose standards the tools and control needs to be made: that of the two user groups.

this space has intentionally been left blank

Just as important as what it says in a product vision is what it doesn’t say. What it does not say Metapolator is, Metapolator is explicitly not. Not a vector drawing application, not a type layout program, not a system font manager, not a tablet or smartphone app.

The list goes on and on, and I am sure some users will come up with highly creative fields of use. That is up to them, maybe it works out or they are able to cover their needs with a plugin they write, or have written for them. For the Metapolator team that is charming to hear, but definitely out of scope.

User groups that are not mentioned, i.e. everybody who is not a “pro” font designer or a typographer, are welcome to check out Metapolator, it is free software. If their needs overlap partly with that of the defined user groups, then Metapolator will work out partly for them. But the needs of all these users are of no concern to the Metapolator team.

If that sounds harsh, then remember what a product vision is for: it helps a project to focus, to fight off distractions and to take tough design decisions. That part starts now.

July 30, 2014

A logo & icon for DevAssistant


This is a simple story about a logo design process for an open source project in case it might be informative or entertaining to you. :)

A little over a month ago, Tomas Radej contacted me to request a logo for DevAssistant. DevAssistant is a UI aimed at making developers’ lives easier by automating a lot of the menial tasks required to start up a software project – setting up the environment, starting services, installing dependencies, etc. His team was gearing up for a new release and really wanted a logo to help publicize the release. They came to me for help as colleagues familiar with some of the logo work I’ve done.


When I first received Tomas’ request, I reviewed DevAssistant’s website and had some questions:

  • Are there any parent or sibling projects to this one that have logos we’d need this to match up with?
  • Is an icon needed that coordinates with the logo as well?
  • There is existing artwork on the website (shown above) – should the logo coordinate with that? Is that design something you’re committed to?
  • Are there any competing projects / products (even on other platforms) that do something similar? (Just as a ‘competitive’ evaluation of their branding.)

He had some answers :) :

  • There aren’t currently any parent or sibling projects with logos, so from that perspective we had a blank slate.
  • They definitely needed an icon, preferably in all the required sizes for the desktop GUI.
  • Tomas impressively had made the pre-existing artwork himself, but considered it a placeholder.
  • The related projects/products he suggested are: Software Collections, JBoss Forge, and Enide.

From the competition I saw a lot of clean lines, sharp angles, blues and greens, some bold splashes here and there. Software Collections has a logotype without a mark; JBoss Forge has a mark with an anvil (a construction tool of sorts); Enide doesn’t have a logo per se but is part of Node.js which has a very stylized logotype where letters are made out of hexagons.

I liked how Tomas’ placeholder artwork used shades of blue, and thought about how the triangles could be shaped so as to make up the ‘D’ of ‘Dev’ and the ‘A’ of ‘Assistant’ (similarly to how ‘node’ is spelled out with hexagons for each letter in the node.js logotype.) I played around a little bit with the notion of ‘d’ and ‘a’ triangles and sketched some ideas out:


I grabbed an icon sheet template from the GNOME design icon repo and drew this out in Inkscape. This, actually, was pretty foolish of me since I hadn’t sent Tomas my sketches at this point and I didn’t even have a solid concept in terms of the mark’s meaning beyond being stylized ‘d’ and ‘a’ – it could have been a waste of time – but thankfully his team liked the design so it didn’t end up being a waste at all. :)


Then I thought a little about meaning here. (Maybe this is backwards. Sometimes I start with meaning / concept, sometimes I start with a visual and try to build meaning into it. I did the latter this time; sue me!) I was thinking about how JBoss Forge used a construction tool in its logo (Logo copyright JBoss & Red Hat):


And I thought about how Glade uses a carpenter’s square (another construction tool!) in its icon… hmmm… carpenter’s squares are essentially triangles… ! :) (Glade logo from the GNOME icon theme, LGPLv3+):


I could think of a few other developer-centric tools that used other artifacts of construction – rulers, hard hats, hammers, wrenches, etc. – for their logo/icon design. It seemed to be the right family of metaphor anyway, so I started thinking the ‘D’ and ‘A’ triangles could be carpenter’s squares.

What I started out with didn’t yet have the ruler markings, or the transparency, and was a little hacky in the SVG… but it could have those markings. With Tomas’ go-ahead, I made the triangles into carpenter’s squares and created all of the various sizes needed for the icon:


So we had a set of icons that could work! I exported them out to PNGs and tarred them up for Tomas and went to work on the logo.

Now why didn’t I start with the logo? Well, I decided to start with the icon just because the icon had the most constraints on it – there are certain requirements in terms of the sizes a desktop icon has to read at, and I wanted it to fit in with the style of other GNOME icons… so I figured, start where the most constraints are, and it’s easier to adapt what you come up with there in the arena where you have fewer constraints. This may have been a different story if the logo had more constraints – e.g., if there was a family of app brands it had to fit into.

So logos are a bit different than icons in that people like to print them on things in many different sizes, and when you pay for printed objects (especially screen-printed T-shirts) you pay for color, and it can be difficult to do effects like drop shadows and gradients. (Not impossible, but certainly more of a pain. :) ) The approach I took with the logo, then, was to simplify the design and flatten the colors down compared to the icon.

Anyhow, here’s the first set of ideas I sent to Tomas for the logomark & logotype:


From my email to him explaining the mockups:

Okay! Attached is a comp of two logo variations. I have it plain and flat in A & B (A is vertical, and B is a horizontal version of the same thing.) C & D are the same except I added a little faint mirror image frame to the blue D and A triangles – I was just playing around and it made me think of scaffolding which might be a nice analogy. The square scaffolding shape the logomark makes could also be used to create a texture/pattern for the website and associated graphics.

The font is an OFL font called Spinnaker – I’ve attached it and the OFL that it came with. The reason I really liked this font in particular compared to some of the others I evaluated is that the ‘A’ is very pointed and sharp like the triangles in the logo mark, and the ratio of space between the overall size of some of the lowercase letters (e.g., ‘a’ and ‘e’) to their enclosed spaces seemed similar to the ratio of the size of the triangles in the logomark and the enclosed space in the center of the logomark. I think it’s also a friendly-looking font – I would think an assistant to somebody would have a friendly personality to them.

Anyway, feel free to be brutal and let me know what you think, and we can go with this or take another direction if you’d prefer.

Tomas’ team unanimously favored the scaffolding versions (C&D), but were hoping the mirror image could be a bit darker for more contrast. So I did some versions with the mirror image at different darknesses:


I believe they picked B or C, and…. we have a logo.

Overall, this was a very smooth, painless logo design process for a very easy-going and cordial “customer.” :)

July 29, 2014

Prefeitura de Belo Horizonte

This is a project we did for a competition for the new city hall of Belo Horizonte (Brazil). It didn't win (the link shows the winning entries), but we are pretty happy about the project anyway. The full presentation boards are at the bottom of this article, as well as the blender model. Below is...

July 24, 2014

Predicting planetary visibility with PyEphem

Part 1: Basic Planetary Visibility

All through the years I was writing the planet observing column for the San Jose Astronomical Association, I was annoyed at the lack of places to go to find out about upcoming events like conjunctions, when two or more planets are close together in the sky. It's easy to find out about conjunctions in the next month, but not so easy to find sites that will tell you several months in advance, like you need if you're writing for a print publication (even a club newsletter).

For some reason I never thought about trying to calculate it myself. I just assumed it would be hard, and wanted a source that could spoon-feed me the predictions.

The best source I know of is the RASC Observer's Handbook, which I faithfully bought every year and checked each month so I could enter that month's events by hand. Except for January and February, when I didn't have the next year's handbook yet by the time my column went to press and I was on my own. I have to confess, I was happy to get away from that aspect of the column when I moved.

In my new town, I've been helping the local nature center with their website. They had some great pages already, like a What's Blooming Now? page that keeps track of which flowers are blooming now and only shows the current ones. I've been helping them extend it by adding features like showing only flowers of a particular color, separating the data into CSV databases so it's easier to add new flowers or butterflies, and so forth. Eventually we hope to build similar databases of birds, reptiles and amphibians.

And recently someone suggested that their astronomy page could use some help. Indeed it could -- it hadn't been updated in about five years. So we got to work looking for a source of upcoming astronomy events we could use as a data source for the page, and we found sources for a few things, like moon phases and eclipses, but not much.

Someone asked about planetary conjunctions, and remembering how I'd always struggled to find that data, especially in months when I didn't have the RASC handbook yet, I got to wondering about calculating it myself. Obviously it's possible to calculate when a planet will be visible, or whether two planets are close to each other in the sky. And I've done some programming with PyEphem before, and found it fairly easy to use. How hard could it be?

Note: this article covers only the basic problem of predicting when a planet will be visible in the evening. A followup article will discuss the harder problem of conjunctions.

Calculating planet visibility with PyEphem

The first step was figuring out when planets were up. That was straightforward. Make a list of the easily visible planets (remember, this is for a nature center, so people using the page aren't expected to have telescopes):

import ephem
import math

planets = [ephem.Mercury(), ephem.Venus(), ephem.Mars(),
           ephem.Jupiter(), ephem.Saturn()]

Then we need an observer with the right latitude, longitude and elevation. Elevation is apparently in meters, though they never bother to mention that in the PyEphem documentation:

observer = ephem.Observer()
observer.name = "Los Alamos"
observer.lon = '-106.2978'
observer.lat = '35.8911'
observer.elevation = 2286  # meters, though the docs don't actually say

Then we loop over the date range for which we want predictions. For a given date d, we're going to need to know the time of sunset, because we want to know which planets will still be up after nightfall.

observer.date = d
sunset = observer.previous_setting(ephem.Sun())

Then we need to loop over planets and figure out which ones are visible. It seems like a reasonable first approach to declare that any planet that's visible after sunset and before midnight is worth mentioning.

Now, PyEphem can tell you directly the rising and setting times of a planet on a given day. But I found it simplified the code if I just checked the planet's altitude at sunset and again at midnight. If either one of them is "high enough", then the planet is visible that night. (Fortunately, here in the mid latitudes we don't have to worry that a planet will rise after sunset and then set again before midnight. If we were closer to the arctic or antarctic circles, that would be a concern in some seasons.)

min_alt = 10. * math.pi / 180.
for planet in planets:
    observer.date = sunset
    planet.compute(observer)
    if planet.alt > min_alt:
        print planet.name, "is already up at sunset"

Easy enough for sunset. But how do we set the date to midnight on that same night? That turns out to be a bit tricky with PyEphem's date class. Here's what I came up with:

    midnight = list(observer.date.tuple())
    midnight[3:6] = [7, 0, 0]
    observer.date = ephem.date(tuple(midnight))
    planet.compute(observer)
    if planet.alt > min_alt:
        print planet.name, "will rise before midnight"

What's that 7 there? That's Greenwich Mean Time when it's midnight in our time zone. It's hardwired because this is for a web site meant for locals. Obviously, for a more general program, you should get the time zone from the computer and adjust accordingly, and you should also be smarter about daylight savings time and such. The PyEphem documentation, fortunately, gives you tips on how to deal with time zones. (In practice, though, the rise and set times of planets on a given day don't change much with time zone.)
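As an illustration of that hardwired 7, here is how one might compute the UT hour of local midnight from a UTC offset (a hypothetical helper, not part of the original script):

```python
# Hypothetical helper: the hour in Universal Time corresponding to
# local midnight, given the local UTC offset in hours.
def ut_hour_of_local_midnight(utc_offset_hours):
    return (-utc_offset_hours) % 24

# Los Alamos in summer is UTC-7 (Mountain Daylight Time),
# so local midnight falls at 7:00 UT -- the hardwired 7 above.
</imports>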

And now you have your predictions of which planets will be visible on a given date. The rest is just a matter of writing it out into your chosen database format.

In the next article, I'll cover planetary and lunar conjunctions -- which were superficially very simple, but turned out to have some tricks that made the programming harder than I expected.

July 23, 2014

Watch out for DRI3 regressions

DRI3 has plenty of necessary fixes for X.org and Wayland, but it's still young in its integration. It's been integrated in the upcoming Fedora 21, and recently in Arch as well.

If WebKitGTK+ applications hang or become unusably slow when an HTML5 video is supposed to be playing, you might be hitting this bug.

If Totem crashes on startup, it's likely this problem, reported against cogl for now.

Feel free to add a comment if you see other bugs related to DRI3, or have more information about those.

Update: Wayland is already perfect, and doesn't use DRI3. The "DRI2" structures in Mesa are just that, structures. With Wayland, the DRI2 protocol isn't actually used.

Here’s a low-barrier way to help improve FLOSS apps – AppStream metadata: Round 1

UPDATE: This program is full now!

We are so excited that we’ve got the number of volunteers we needed to assign all of the developer-related packages we identified for this round! THANK YOU! Any further applications will be added to a wait list (in case any of the assignees need to drop any of their assigned packages.) Depending on how things go, we may open up another round in a couple of weeks or so, so we’ll keep you posted!

Thanks again!!

– Mo, Ryan, and Hughsie


Do you love free and open source software? Would you like to help make it better, but don’t have the technical skills to know where you can jump in and help out? Here is a fantastic opportunity!

The Problem

There is a cross-desktop, cross-distro project called AppStream. In a nutshell, AppStream is an effort to standardize metadata about free and open source applications. Rather than every distro having its own separately written description for Inkscape, for example, we’d have a shared, high-quality description of Inkscape that would be available to users of all distros. Why is this kind of data important? It helps free desktop users discover applications that might meet their needs – for example, via searching software center applications (such as GNOME Software and Apper).

Screenshot of GNOME Software showing app metadata in action!

Running this project in a collaborative way is also a great way for us to combine efforts and come up with great quality content for everyone in the FLOSS community.

Contributors from Fedora and other distros have been working together to build the infrastructure to make this project work. But, we don’t yet have even close to full metadata coverage of the thousands of FLOSS applications we ship. Without metadata for all of the applications, users could be missing out on great applications or may opt out of installing an app that would work great for them because they don’t understand what the app does or how it could meet their needs.

The Plan

Ryan Lerch, among other contributors, has been working very hard for many weeks now generating a lot of the needed metadata, but as of today we only have roughly 25% coverage for the desktop packages in Fedora. We’d love to see that number increase significantly for Fedora 21 and beyond, but we need your help to accomplish that!

Ryan, Richard Hughes, and I recently talked about the ongoing effort. Progress is slower than we’d like, and we have fewer contributors than we’d like – but it is a great opportunity for new contributors, because of the low barrier to entry and the big impact the work has!

So along that line, we thought of an idea for an ongoing program that we’d like to pilot: Basically, we’ll chunk the long list of applications that need the metadata into thematic lists – for example, graphics applications, development applications, social media applications, etc. etc. Each of those lists we’ll break into chunks of say 10 apps each, and volunteers can pick up those chunks and submit metadata for just those 10.
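That batching step is simple to sketch in code (a toy illustration with made-up app names, not a real tool from the project):

```python
# Toy sketch: split a themed list of applications into batches of ten
# that volunteers can claim one at a time.
def chunks(items, size=10):
    return [items[i:i + size] for i in range(0, len(items), size)]

dev_apps = ["app-%d" % n for n in range(25)]  # hypothetical app names
batches = chunks(dev_apps)  # three batches: 10, 10, and 5 apps
```
</imports>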

The specific metadata we are looking for in this pilot is a brief summary about what the application is and a description of what the application does. You do not need to be a coder to help out; you’ll need to be able and willing to research the applications in your chunk and draft an openly-licensed paragraph (we’ll provide specific guidelines) and submit it via a web form on github. That’s all you need to do.

This blog post will kick off our pilot (“round 1″) of this effort, and we’ll be focusing on applications geared towards developers.

Your mission

If you choose to participate in this program, your mission will be to research and write up both brief summaries about and long-form descriptions for each of ~10 free and open source applications.

You might want to check out the upstream sites for each application, see if any distros downstream have descriptions for the app, maybe install and try the app out for yourself, or ask current users of the app about it and its strengths and weaknesses. The final text you submit, however, will need to be original writing created by you.


Summary field for application

The summary field is a short, one-line description of what the application enables users to do:

  • It should be around 5 – 12 words long, and a single sentence with no ending punctuation.
  • It should start with action verbs that describe what it allows the user to do, for example, “Create and edit Scalable Vector Graphics images” from the Inkscape summary field.
  • It shouldn’t contain extraneous information such as “Linux,” “open source,” “GNOME,” “gtk,” “kde,” “qt,” etc. It should focus on what the application enables the user to do, and not the technical or implementation details of the app itself.

Here are some examples of good AppStream summary metadata:

  • “Add or remove software installed on the system” (gpk-application / 8 words)
  • “Create and edit Scalable Vector Graphics images” (Inkscape / 7 words)
  • “Avoid the robots and make them crash into each other” (GNOME Robots / 10 words)
  • “View and manage system resources” (GNOME System Monitor / 5 words)
  • “Organize recipes, create shopping lists, calculate nutritional information, and more.” (Gourmet / 10 words)
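The guidelines above could even be roughed out as an automated check (a hypothetical helper, not an official AppStream tool):

```python
# Rough sanity checks for an AppStream summary line, following the
# guidelines above: 5-12 words, no ending punctuation, and no
# implementation details like toolkit or platform names.
BANNED = ("linux", "open source", "gnome", "gtk", "kde", "qt")

def check_summary(summary):
    problems = []
    if not 5 <= len(summary.split()) <= 12:
        problems.append("should be around 5-12 words long")
    if summary.rstrip().endswith((".", "!", "?")):
        problems.append("should have no ending punctuation")
    lowered = summary.lower()
    for term in BANNED:
        if term in lowered:
            problems.append("avoid implementation details like %r" % term)
    return problems

problems = check_summary("Create and edit Scalable Vector Graphics images")
```
</imports>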

Description field for application

The description field is a longer-form description of what the application does and how it works. It can be between one and three short paragraphs, around 75–100 words long.


Here are some examples of good AppStream description metadata:

  • GNOME System Monitor / 76 words:
    “System Monitor is a process viewer and system monitor with an attractive, easy-to-use interface.

    “System Monitor can help you find out what applications are using the processor or the memory of your computer, can manage the running applications, force stop processes not responding, and change the state or priority of existing processes.

    “The resource graphs feature shows you a quick overview of what is going on with your computer displaying recent network, memory and processor usage.”

  • Gourmet / 94 words:
    “Gourmet Recipe Manager is a recipe-organizer that allows you to collect, search, organize, and browse your recipes. Gourmet can also generate shopping lists and calculate nutritional information.

    “A simple index view allows you to look at all your recipes as a list and quickly search through them by ingredient, title, category, cuisine, rating, or instructions.

    “Individual recipes open in their own windows, just like recipe cards drawn out of a recipe box. From the recipe card view, you can instantly multiply or divide a recipe, and Gourmet will adjust all ingredient amounts for you.”

  • GNOME Robots / 102 words:
    “It is the distant future – the year 2000. Evil robots are trying to kill you. Avoid the robots or face certain death.

    “Fortunately, the robots are extremely stupid and will always move directly towards you. Trick them into colliding into each other, resulting in their destruction, or into the junk piles that result. You can defend yourself by moving the junk piles, or escape to safety with your handy teleportation device.

    “Your supply of safe teleports is limited, and once you run out, teleportation could land you right next to a robot, who will kill you. Survive for as long as possible!”

Content license

These summaries and descriptions are valuable content, and in order to be able to use them, you’ll need to be willing to license them under a license such that the AppStream project and greater free and open source software community can use them.

We are requesting that all submissions be licensed under the Creative Commons’ CC0 license.

What’s in it for you?

Folks who contribute metadata to this effort through this program will be recognized in the upstream appdata credits as official contributors to the project and will also be awarded a special Fedora Badges badge for contributing appdata!


When this pilot round is complete, we’ll also publish a Fedora Magazine article featuring all of the contributors – including you!

Oh, and of course – you’ll be making it easier for all free and open source software users (not just Fedora!) to find great FLOSS software and make their lives better! :)

Sign me up! How do I get started?


  1. First, if you don’t have one already, create an account at GitHub.
  2. In order to claim your badge and to interact with our wiki, you’ll need a Fedora account. Create a Fedora account now if you don’t already have one.
  3. Drop an email to appstream at lists dot fedoraproject [.] org with your GitHub username and your Fedora account username so we can register you as a contributor and assign you your applications to write metadata for!
  4. For each application you’ll need to write metadata for, we’ve generated an XML document in the Fedora AppStream GitHub repo. We will link you up to each of these when we give you your assignment.
  5. For each application, research the app via upstream websites, reviews, talking to users, and trying out the app for yourself, then write up the summary and description fields to the specifications given above.
  6. To submit your metadata, log into GitHub and visit the XML file for the given application we gave you in our assignment email. Take a look at this example appstream metadata file for an application called Insight. You’ll notice in the upper right corner there is an ‘Edit’ button – click on this, edit the ‘Summary’ and ‘Description’ fields, edit the copyright statement towards the very top of the file with your information, and then submit them using the form at the bottom.

Once we’ve received all of your submissions, we’ll update the credits file and award you your badge. :)

If you end up committing to a batch of applications and find you don’t have the time to finish, we ask that you let us know so we can assign the apps to someone else. We’re asking that you take two weeks to complete the work – if you need more time, no problem, just let us know. We just want to make sure we can reopen assigned apps for others to join in and help out with.

Let’s do this!

Ready to go? Drop us a line!

GUADEC 2014 Map

Want a custom map for GUADEC 2014?

Here’s a map I made that shows the venue, the suggested hotels, transit ports (airport/train station), vegetarian & veggie-friendly restaurants, and a few sights that look interesting.

I made this with Google Maps Engine, exported it to KML, and also converted it to GeoJSON and GPX.

If you want an offline map on an Android phone, I suggest opening up the KML file with Maps.Me (a proprietary OpenStreetMap-based app, but nice) or the GPX in OsmAnd (open source and powerful, but really clunky).

You can also use the Google Maps Engine version with Google Maps Engine on your Android phone, but it doesn’t really support offline mode all that well, so it’s frustratingly unreliable at best. (But it does have pretty icons!)

See you at GUADEC!

July 21, 2014

Development activity is moving to Github


In just under a week’s time, on Sunday 27th July 2014, I’ll be moving MyPaint’s old Gitorious git repositories over to the new GitHub ones fully, and closing down the old location. For a while now we’ve been maintaining the codelines in parallel to give people some time to switch over and get used to the new site; it’s time to formally switch over now.

If you haven’t yet changed your remotes over on existing clones, now would be a very good time to do that!

The bug tracker is moving from Gna! to Github’s issues tracker too – albeit rather slowly. This is less a matter of just pushing code to a new place and telling people about the move; rather we have to triage bugs as we go, and the energy and will to do that has been somewhat lacking of late. Bug triage isn’t fun, but it needs to be done.

(Github’s tools are lovely, and we’re already benefiting from having more eyeballs focussed on the projects. libmypaint has started using Travis and Appveyor for CI, the MyPaint application’s docs will benefit tons from being more wiki-like to edit, and the issue tracker is frankly better documented and nicer for pasting in screencaps and exception dumps.)

FreeCAD release 0.14

This is certainly a bit overdue, since the official launch already happened more than two weeks ago, but at last, here it goes: the 0.14 version of FreeCAD has been released! It happened a long, long time after 0.13, about a year and a half, but we've decided not to let that happen again next time,...

July 19, 2014

Stellarium 0.13.0 has been released!

After 9 months of development, the Stellarium development team is proud to announce the release of version 0.13.0 of Stellarium.

This release brings some interesting new features:
- New modulated core.
- Refactored shadows and the introduction of normal mapping.
- Sporadic meteors; meteors now have colors.
- Comet tail rendering.
- New translatable strings and new textures.
- New plugin: Equation of Time – provides a solution for the Equation of Time.
- New plugin: Field of View – provides shortcuts for quickly changing the field of view.
- New plugin: Navigational Stars – marks the 58 navigational stars in the sky.
- New plugin: Pointer Coordinates – shows the coordinates of the mouse pointer.
- New plugin: Meteor Showers – provides visualization of meteor showers.
- New version of the Satellites plugin: introduces star-like satellites and bug fixes.
- New version of the Exoplanets plugin: displays potentially habitable exoplanets; performance improvements and code refactoring.
- New version of the Angle Measure plugin: displays the position angle.
- New version of the Quasars plugin: performance improvements; added a marker_color parameter.
- New version of the Pulsars plugin: performance improvements; displays pulsars with glitches; configurable marker colors for different types of pulsars.
- New versions of the Compass Marks, Oculars, Historical Supernovae, Observability Analysis and Bright Novae plugins: bug fixes, code refactoring and improvements.

There have also been a large number of bug fixes and serious performance improvements.

We have updated the configuration file and the Solar System file, so if you have an existing Stellarium installation, we highly recommend resetting the settings when you install the new version (you can choose the required options in the installer).

A huge thanks to our community whose contributions help to make Stellarium better!

July 18, 2014

Fri 2014/Jul/18

July 17, 2014

Time-lapse photography: a simple Arduino-driven camera intervalometer

[Arduino intervalometer] While testing my automated critter camera, I was getting lots of false positives caused by clouds gathering and growing and then evaporating away. False positives are annoying, but I discovered that it's fun watching the clouds grow and change in all those photos ... which got me thinking about time-lapse photography.

First, a disclaimer: it's easy and cheap to just buy an intervalometer. Search for timer remote control or intervalometer and you'll find plenty of options for around $20-30. In fact, I ordered one. But, hey, it's not here yet, and I'm impatient. And I've always wanted to try controlling a camera from an Arduino. This seemed like the perfect excuse.

Why an Arduino rather than a Raspberry Pi or BeagleBone? Just because it's simpler and cheaper, and this project doesn't need much compute power. But everything here should be applicable to any microcontroller.

My Canon Rebel Xsi has a fairly simple wired remote control plug: a standard 2.5mm stereo phone plug. I say "standard" as though you can just walk into Radio Shack and buy one, but in fact it turned out to be surprisingly difficult, even when I was in Silicon Valley, to find them. Fortunately, I had found some, several years ago, and had cables already wired up waiting for an experiment.

The outside connector ("sleeve") of the plug is ground. Connecting ground to the middle ("ring") conductor makes the camera focus, like pressing the shutter button halfway; connecting ground to the center ("tip") conductor makes it take a picture. I have a wired cable release that I use for astronomy and spent a few minutes with an ohmmeter verifying what did what, but if you don't happen to have a cable release and a multimeter there are plenty of Canon remote control pinout diagrams on the web.

Now we need a way for the controller to connect one pin of the remote to another on command. There are ways to simulate that with transistors -- my Arduino-controlled robotic shark project did that. However, the shark was about a $40 toy, while my DSLR cost quite a bit more than that. While I did find several people on the web saying they'd used transistors with a DSLR with no ill effects, I found a lot more who were nervous about trying it. I decided I was one of the nervous ones.

The alternative to transistors is to use something like a relay. In a relay, voltage applied across one pair of contacts -- the signal from the controller -- creates a magnetic field that closes a switch and joins another pair of contacts -- the wires going to the camera's remote.

But there's a problem with relays: that magnetic field, when it collapses, can send a pulse of current back up the wire to the controller, possibly damaging it.

There's another alternative, though. An opto-isolator works like a relay but without the magnetic pulse problem. Instead of a magnetic field, it uses an LED (internally, inside the chip where you can't see it) and a photo sensor. I bought some opto-isolators a while back and had been looking for an excuse to try one. Actually two: I needed one for the focus pin and one for the shutter pin.

How do you choose which opto-isolator to use out of the gazillion options available in a components catalog? I don't know, but when I bought a selection of them a few years ago, it included a 4N25, 4N26 and 4N27, which seem to be popular and well documented, as well as a few other models that are so unpopular I couldn't even find a datasheet for them. So I went with the 4N25.

Wiring an opto-isolator is easy. You do need a resistor across the inputs (presumably because it's an LED). 380Ω is apparently a good value for the 4N25, but it's not critical. I didn't have any 380Ω resistors, but I had a bunch of 330Ω, so that's what I used. The inputs (the signals from the Arduino) go between pins 1 and 2, with a resistor; the outputs (the wires to the camera remote plug) go between pins 4 and 5, as shown in the diagram in this Arduino and Opto-isolators discussion, except that I didn't use any pull-up resistor on the output.

Then you just need a simple Arduino program to drive the inputs. Apparently the camera wants to see a focus half-press before it gets the input to trigger the shutter, so I put in a slight delay there, and another delay while I "hold the shutter button down" before releasing both of them.

Here's some Arduino code to shoot a photo every ten seconds:

int focusPin = 6;
int shutterPin = 7;

int focusDelay = 50;
int shutterOpen = 100;
int betweenPictures = 10000;

void setup() {
    pinMode(focusPin, OUTPUT);
    pinMode(shutterPin, OUTPUT);
}

void snapPhoto() {
    digitalWrite(focusPin, HIGH);
    delay(focusDelay);
    digitalWrite(shutterPin, HIGH);
    delay(shutterOpen);
    digitalWrite(shutterPin, LOW);
    digitalWrite(focusPin, LOW);
}

void loop() {
    delay(betweenPictures);
    snapPhoto();
}

Naturally, since then we haven't had any dramatic clouds, and the lightning storms have all been late at night after I went to bed. (I don't want to leave my nice camera out unattended in a rainstorm.) But my intervalometer seemed to work fine in short tests. Eventually I'll make some actual time-lapse movies ... but that will be a separate article.

July 16, 2014

Wavelet Decompose (Again)

Yes, more fun things you can do with Wavelet Scales.

If you’ve been reading this blog for a bit (or just read through any of my previous postprocessing tutorials), then you should be familiar with Wavelet Decompose. I use them all the time for skin retouching as well as other things. I find that being able to think of your images in terms of detail scales opens up a new way of approaching problems (and some interesting solutions).

A short discussion in the GIMP Users G+ community led member +Marty Keil to suggest a tutorial on using wavelets for other things (particularly sharpening). Since I tend to use wavelet scales often in my processing (including sharpening), I figured I would sit down and enumerate some ways to use them.

Wavelets? What?

For our purposes (image manipulation), wavelet decomposition allows us to consider the image as multiple levels of detail components, that when combined will yield the full image. That is, we can take an image and separate it out into multiple layers, with each layer representing a discrete level of detail.
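The idea can be sketched with a toy 1-D "decomposition" built from successive blurs (an illustration of the principle only, not the plug-in's actual wavelet math):

```python
# Toy 1-D illustration: each detail scale is the difference between two
# successively blurred copies of the signal, and the residual is the
# final blur. Summing all the scales plus the residual restores the
# original exactly (the differences telescope).
def box_blur(signal, radius):
    out = []
    n = len(signal)
    for i in range(n):
        lo, hi = max(0, i - radius), min(n, i + radius + 1)
        out.append(sum(signal[lo:hi]) / (hi - lo))
    return out

def decompose(signal, levels=3):
    scales, current = [], signal
    for level in range(levels):
        blurred = box_blur(current, 2 ** level)  # coarser at each level
        scales.append([c - b for c, b in zip(current, blurred)])
        current = blurred
    return scales, current  # detail scales + residual

signal = [3.0, 7.0, 1.0, 9.0, 4.0, 6.0, 2.0, 8.0]
scales, residual = decompose(signal)
restored = [sum(parts) for parts in zip(*scales, residual)]
```
</imports>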

To illustrate, let’s have a look at my rather fetching model:

It was kindly pointed out to me that the use of the Lena image might perpetuate the problems with the objectification of women. So I swapped out the Lena image with a model that doesn't carry those connotations.

Running Wavelet Decompose on the image yields these 6 layers, arranged in increasing order of detail magnitude (scales 1–5, plus a residual layer).

Notice that each of the scales contains a particular set of details, starting with the finest and becoming larger until you reach the residual scale. The residual scale doesn’t contain any fine details; instead, it consists mostly of color and tonal information.

This is very handy if you need to isolate particular features for modifications. Simply find the scale (or two) that contain the feature and modify it there without worrying as much about other details at the same location.

The Wavelet Decompose plug-in actually sets each of these layers (except the Residual) to the “Grain Merge” mode. This allows each layer to contribute its details to the final result (which will look identical to the original starting layer if nothing is modified). With the “Grain Merge” blend mode, pixels at 50% value (RGB(127,127,127)) will not affect the final result. This also means that if we paint on one of the scale layers with that gray, it will effectively erase those details from the final image (keep this in mind for later).
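A quick sketch of the arithmetic behind that blend mode, using 128 as the neutral mid-gray of the common 8-bit grain-merge formula:

```python
# Grain Merge combines a base pixel with a detail-scale pixel as
# base + layer - neutral, clamped to the 8-bit range. A layer pixel at
# the neutral mid-gray leaves the base unchanged, which is why painting
# gray on a scale layer erases its details from the final image.
def grain_merge(base, layer, neutral=128):
    return max(0, min(255, base + layer - neutral))
```
</imports>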

Skin Smoothing (Redux)

I previously talked about using Wavelet Decompose for image retouching:

The first link was my original post on how I use wavelets to smooth skin tones. The second and third are examples of applying those principles to portraits. The last two articles are complete walkthroughs of a postprocessing workflow, complete with full-resolution examples to download if you want to try it and follow along.

I guess my point is that I’m re-treading a well-worn path here, but I have actually modified part of my workflow, so it’s not for naught (sorry, I couldn’t resist).

Getting Started

So let’s have a look at using wavelets for skin retouching again. We’ll use my old friend, Mairi for this.

Pat David Mairi Headshot Base Image Wavelet Decompose Frequency Separation

When approaching skin retouching like this, I feel it’s important to pay attention to how light interacts with the skin. The way that light will penetrate the epidermis and illuminate under the surface is important. Couple that with the different types of skin structures, and you get a complex surface to consider.

For instance, there are very fine details in skin such as faint wrinkles and pores. These often contribute to the perceived texture of skin overall. There is also the color and toning of the skin under the surface as well. These all contribute to what we will perceive.

Let’s have a look at a 100% crop of her forehead.

Pat David Mairi Headshot Forehead Closeup Wavelet Decompose Frequency Separation

If I decompose this image to wavelet scales, I can amplify the details on each level by isolating them over the original. So, turning off all the layers except the original, and the first few wavelet scales will amplify the fine details:

Pat David Mairi Headshot Forehead Closeup Wavelet Decompose Frequency Separation
Wavelet scales 1,2,3 over the original image.

You may notice that these fine wavelet scales seem to sharpen up the image. Yes, but we’re not talking about them right now. Stick with me - we’ll look at them a little later.

On the same idea, if I leave the original and the two biggest scales visible, I’ll get a nicely exaggerated view of the sub-surface imperfections:

Pat David Mairi Headshot Forehead Closeup Wavelet Decompose Frequency Separation
Wavelet scales 4,5 over the original image.

What we see here are uneven skin tones caused not by surface imperfections, but by deeper tones in the skin. It is this unevenness that I often try to subdue; doing so, I think, contributes to a more pleasing overall skin tone.

To illustrate, here I have used a bilateral blur on only the largest detail scale (Wavelet scale 5). Consider the rather marked improvement over the original from working on just this single detail scale. Notice also that all of the finer details remain, keeping the skin texture looking real.

Pat David Mairi Headshot Forehead Closeup Wavelet Decompose Frequency Separation
Smoothing only the largest detail scale (Wavelet scale 5) results
(mouseover to compare to original)

Smoothing Skin Tones

With those results in mind, I can illustrate how I will generally approach this type of skin retouching on a face. I usually start by considering specific sections of a face. I try to isolate my work along common facial contours to avoid anything strange happening across features (like smile lines or noses).

[Mairi headshot with retouching regions marked]

I also like to work in these regions as shown because the amount of smoothing needed is not always the same. The forehead may require more than the cheeks, and both may require less than the nose, for instance. This lets me tailor the retouching for each region separately, arriving at a more consistent result across the entire face.

I’ll use the free-select tool to create a selection of my region, usually with the “Feather edges” option turned on with a large-ish radius (around 30-45 pixels). This lets my edits blend a little more smoothly into the untouched portions of the image.

These days I’ve adjusted my workflow to minimize how much I actually retouch. I’ll usually look at the residual layer first to check the color tones across an area. If they are too spotty or blotchy, I’ll use a bilateral blur to even them out. There is no bilateral blur built into GIMP directly, so on the suggestion of David Tschumperlé (G'MIC) I’ve started using G'MIC with:

Filters → G'MIC...
Repair → Smooth [bilateral]

Once I’m happy with the results on the residual layer (or it doesn’t need any work), I’ll look at the largest detail scale (usually Wavelet scale 5). Lately, this has been the scale level that usually produces the greatest impact quickly. I’ll usually use a Spatial variance of 10, and a Value variance of 7 (with 2 iterations) on the bilateral blur filter. Of course, adjust these as necessary to suit your image and taste.
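For anyone curious what the bilateral blur is actually doing, here is a toy numpy version (a naive sketch for illustration only - G'MIC's implementation is far more optimized, and its "variance" parameters don't map one-to-one onto the sigmas below): each pixel becomes a weighted average of its neighborhood, weighted by both spatial distance and value similarity, which is why flat tonal areas even out while distinct edges survive.

```python
import numpy as np

def bilateral_blur(img, sigma_s=3.0, sigma_v=0.1, radius=6):
    # Weighted average of each pixel's neighborhood; weights combine
    # spatial closeness and value similarity, so edges are preserved.
    h, w = img.shape
    out = np.empty_like(img)
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xs**2 + ys**2) / (2 * sigma_s**2))
    padded = np.pad(img, radius, mode='edge')
    for y in range(h):
        for x in range(w):
            patch = padded[y:y + 2 * radius + 1, x:x + 2 * radius + 1]
            value = np.exp(-(patch - img[y, x])**2 / (2 * sigma_v**2))
            weights = spatial * value
            out[y, x] = (weights * patch).sum() / weights.sum()
    return out

# A noisy two-tone strip: the hard edge survives, the noise is smoothed.
base = np.where(np.arange(32) < 16, 0.2, 0.8) * np.ones((32, 32))
noisy = base + np.random.normal(0, 0.02, base.shape)
smooth = bilateral_blur(noisy)
```

This edge-preserving behavior is exactly why it works so well on a detail scale: blotchy tones are evened out without smearing real feature boundaries.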

Here is the result of following those steps on the image of Mairi (less than 5 minutes of work):

[Mairi headshot]
Bilateral smoothing on the Residual and Wavelet scale 5 only

This touched only the Residual and Wavelet scale 5 with a bilateral blur, and nothing else. As you can see, this method provides a very easy way to get to a great base for further work (spot healing as needed, etc.).


Sharpening

I mentioned this in each of my previous workflow tutorials, but it’s worth repeating here. I tend to use the lowest couple of wavelet scales to sharpen my images when I’m done. This is really just a manual version of using the Wavelet Sharpen plugin.

The first couple of detail scales contain the highest-frequency details, and I’ve found that using them to sharpen an image works fantastically well. Here, for example, is our photo of Mairi from above after retouching, but now with a copy of Wavelet scales 1 & 2 over the image to sharpen those details:

[Mairi headshot]
Wavelet scales 1 & 2 copied over the result to sharpen.

I’ve purposefully left both of the detail scales at full opacity to demonstrate the effect. I feel this is a far better method of sharpening than the regular sharpen filter (I’ve never gotten good results from it), or even Unsharp Mask (USM), which can produce halos around high-contrast areas depending on the settings.

I would then adjust the opacity of the scales to control how much they sharpen. If I wanted to avoid sharpening the background, for instance, I would either mask it out or just paint gray on the detail scale to erase the data in that area.
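In layer terms the math is simple: since the detail scales are just the image minus a blur, copying scales back over the image at some opacity adds that high-frequency detail in a second time. A quick numpy sketch of the idea (using a plain difference-of-Gaussians stand-in for the wavelet scales):

```python
import numpy as np
from scipy import ndimage

img = np.random.rand(64, 64)  # stand-in for the retouched image

# Fine detail scales, as from a wavelet decompose (difference of blurs):
b1 = ndimage.gaussian_filter(img, sigma=1)
b2 = ndimage.gaussian_filter(img, sigma=2)
scale1, scale2 = img - b1, b1 - b2

# "Copying the scales over the image" at some opacity boosts the
# high frequencies; the opacity controls the sharpening strength.
opacity = 0.7
sharpened = img + opacity * (scale1 + scale2)
```

Masking the background, or painting gray on a scale, just zeroes that detail locally before the addition.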

It doesn’t need to stop at fine-detail sharpening, though. The nature of the wavelet decomposition is that you also get coarser scale data that can be useful for enhancing contrast on larger details. For instance, if I wanted to enhance the local contrast in the sweater of my image, I could use one of the larger scales over the image again, with a layer mask to control the areas that are affected.

To illustrate, here I have also copied scales 3, 4, and 5 over my image, with layer masks that only allow them to affect the sweater. Using these scales applies a nice local contrast, adding a bit of “pop” to the sweater texture without increasing contrast on the model’s face or hair.

[Mairi headshot]
Using coarser detail scales to add some local contrast to the texture of the sweater

Normally, if I didn’t have a need to work on wavelet scales, I would just use the Wavelet Sharpen plugin to add a touch of sharpening as needed. If I do find it useful (for whatever reason) to work on detail scales anyway, then I just use the scales directly to sharpen the image manually. Occasionally I’ll create the wavelet scales just to have access to the coarse detail levels to bump local contrast to taste, too.

Once you start thinking in terms of detail scales, it’s hard not to get sucked into finding all sorts of uses for them. They can be very, very handy.

Stain Removal

What if the thing we want to adjust is not sub-dermal skin tone, but something more like a stain on a child’s clothing? As far as wavelets are concerned, it’s the same thing. So let’s look at something like this:

[Stained shirt]
30% of the food made it in!

So there’s a small stain on the shirt. We can fix this easily, right?!

Let’s zoom in to 100% on the area we want to fix:

[Stain at 100% zoom]

If we run a wavelet decomposition on this image, we can see that the areas we are interested in are mostly confined to the coarser scales plus the residual (mostly scales 4, 5, and the residual):

[Stain across the wavelet scales]

More importantly, the very fine details that give texture to her shirt, like the weave of the cotton and the stitching of the letters, are nicely isolated on the finer detail scales. We won’t really have to touch the finer scales to fix the stain - so it’s trivially easy to keep the texture in place.

As a comparison, imagine having to use a clone or heal tool to accomplish this. You would have a very hard time getting the cloth weave to match up correctly, thus creating a visual break that would make the repair more obvious.

I start on the residual scale and work on getting the broad color information fixed, using a combination of the Clone and Heal tools. Paying attention to the color areas I want to keep, I’ll use the Clone tool with a soft-edged brush to bring in the correct tone, then use the Heal tool to blend it better into the surrounding textures.

For example, here is the work I did on the Residual scale to remove the stain color information:

[Residual layer repair]
Clone/Heal on the Wavelet Residual layer

Yes, I know it’s not a pretty patch - it’s just a quick pass to illustrate what the results can look like. Here is what the above change to the Wavelet residual layer produces:

[Composite after residual repair]
Composite image with retouching only on the Wavelet Residual layer

Not bad for a couple of minutes’ work on a single wavelet layer. I follow the same method on wavelet scales 4 & 5: clone similar areas into place and heal to blend them into the surrounding texture. After a few minutes, I arrive at this result:

[Repaired shirt]
Result of retouching the Wavelet residual, scale 4, and scale 5 layers only

Perfect? No, it’s not - it was less than 5 minutes of work total. I could spend another 5 minutes or so and get a pretty darn good result, I think. The point is how easy this becomes once the image is considered in terms of levels of detail. Look where the color was, and you’ll notice that the fabric texture remains essentially unchanged.

As the father of a three-year-old, believe me when I say that this technique has proved invaluable over the past few years...


I know I talk quite a bit about wavelet decomposition for retouching, but a wonderful number of tasks become much easier when you consider an image as a sum of discrete detail parts. It’s just another great tool to keep in mind as you work on your images.

Help support the site! Or don’t!
I’m not supporting my (growing) family or anything from this website. Seriously.
There is only one reason I am writing these tutorials and posts:
I love doing it.
Technically there is a second reason: to give back to the community. Others before me were instrumental in helping me learn things when I first got started, and I’m hoping to pay it forward here.

If you want to visit an ad, make a donation, or even link to/share my content, I would be absolutely grateful (and tickled pink). If you don’t, it’s not going to affect my writing and posting here one bit.

I’ll keep writing, and I’ll keep it free.
If you get any use out of this site, I only ask that you do one thing:
pay it forward.

July 15, 2014

Fanart by Anastasia Majzhegisheva – 11

Anastasia keeps playing with Morevna’s backstory, and this time she brings a short manga/comic strip.

[Morevna manga strip]

July 14, 2014

Notes from Calligra Sprint. Part 2: Memory fragmentation in Krita fixed

During the second day of the Calligra sprint in Deventer we split into two small groups. Friedrich, Thorsten, Jigar and Jaroslaw discussed global Calligra issues, while Boud and I concentrated on Krita’s performance and memory consumption.

We tried to find out why Krita is not fast enough for painting with big brushes on huge images. For our tests we created a two-layer image of 8k by 8k pixels (which is 3x256 MiB: 2 layers + projection) and started to paint with a 1k by 1k pixel brush. Just for comparison, Paint Tool SAI simply forbids creating images larger than 5k by 5k pixels and brushes wider than 500 pixels. And during these tests we found a really interesting thing...

I guess everyone has read at least once about custom memory management in C++. All those custom new/delete operators and pool allocators usually seem so "geekish", for "really special purposes only". To tell you the truth, I thought I would never need to use them in my life, because standard library allocators "should be enough for everyone". Well, until curious things started to happen...

The first sign of problems appeared quite long ago. People started to complain that, according to system monitoring tools (like 'top'), Krita ate quite a lot of memory. We could never reproduce it, and what's more, 'massif' and our internal tile counters always showed we had no memory leaks: we used exactly the number of tiles needed to store an image of a particular size.

But while making these 8k-image tests, we noticed that although the number of tiles didn't grow, the memory reported by 'top' grew quite significantly. Instead of occupying the usual 1.3 GiB such an image would need (layer data + about 400 MiB for brushes and textures), the reported memory grew to 3 GiB and higher, until the OOM Killer woke up and killed Krita. This gave us clear evidence that we had a fragmentation problem.

Indeed, during every stroke we have to create about 15000(!) 16 KiB objects (tiles). It is quite probable that after a couple of strokes the memory becomes rather fragmented. So we decided to try boost::pool for allocating these chunks... and it worked! Instead of growing, the memory footprint stabilized at 1.3 GiB - and that is despite the fact that boost::pool doesn't free unused memory until destruction or explicit purging. [0]

This new memory management code is already in master! According to some synthetic tests, painting should become a bit faster, not to mention the much smaller memory usage.


If you see unusually high memory consumption in your application, and the results measured by massif differ significantly from what you see in 'top', you probably have a fragmentation problem. To prove it, try not returning memory to the system, but reusing it instead. The consumption might fall significantly, especially if you allocate memory in different threads.

[0] - You can release unused memory by explicitly calling release_memory(), but 1) the pool must be ordered, which hurts performance; 2) the release_memory() operation takes about 20-30 seconds(!), so it was of no use to us.

July 13, 2014

Notes from Calligra Sprint in Deventer. Part 1: Translation-friendly code

Last weekend we had a really nice sprint in Deventer, hosted by Irina and Boudewijn (thank you very much!). We spent two days on discussions, planning, coding and profiling our software, with many fruitful results.

On Saturday we mostly talked about our current problems, like porting Calligra to Qt5 and splitting the libraries more sanely (e.g. we shouldn't demand that mobile applications compile and link against QWidget-based libraries). Although these problems are quite important, I will not describe them now (other people will blog about them very soon). Instead I'm going to tell you about a different problem we also discussed — translations.

The point is, when using the i18n() macro it is quite easy to make mistakes that make a translator's life a disaster, so we decided on a set of rules of thumb that developers should follow to avoid creating such issues. Here are the five short rules:

  1. Avoid passing a localized string into an i18n macro
  2. Add context to your strings
  3. Undo commands must have (qtundo-format) context
  4. Use capitalization properly
  5. Beware of sticky strings

Next we will talk about each of the rules in detail:

1. Avoid passing a localized string into an i18n macro

The strings might not be compatible in case, gender or something else you have no idea about.

// Such code is incorrect in 99% of cases
QString str = i18n("foo bar");
i18n("Some nice string %1", str);

Example 1

wrongString = i18n("Delete %1", XXX ? i18n("Layer") : i18n("Mask"))

correctString = XXX ? i18n("Delete Layer") : i18n("Delete Mask")

Such string concatenation is correct in English, but it is completely inappropriate in many languages, in which a noun can change its form depending on the case. The problem is that in the macro i18n("Mask") the word "Mask" is in the nominative case (a subject), but in the expression "Delete Mask" it is in the accusative case (an object). In Russian, for example, the two strings will be different, and the translator will not be able to solve the issue easily.

Example 2

wrongString = i18n("Last %1", XXX ? i18n("Monday") : i18n("Friday"))

correctString = XXX ? i18n("Last Monday") : i18n("Last Friday")

This case is more complicated. Both "Monday" and "Friday" are used in the nominative case, so they will not change their form. But "Monday" and "Friday" have different genders in Russian, so the adjective "Last" must change its form depending on the word that follows. Therefore we need separate strings for the two terms.

The tricky thing here is that we have 7 days in a week, so ideally we should have 7 separate strings for "Last ...", 7 more strings for "Next ..." and so on.

Example 3 — Using registry values

// WRONG
KisFilter *filter = filterRegistry->getFilter(id);
i18n("Apply %1", filter->name())

// CORRECT: is there a correct way at all?
KisFilter *filter = filterRegistry->getFilter(id);
i18n("Apply: \"%1\"", filter->name())

Just imagine how many objects can be stored inside the registry - a dozen, a hundred, a thousand. We cannot control the case, gender and form of each object's name in the list (can we?). The easiest approach here is to put the object name in quotes and "cite" it literally. This hides the problem in most languages.

2. Add context to your strings

Prefer adding context to your strings rather than expecting translators to read your thoughts.

Here is an example of three strings for a blur filter. They illustrate the three most important translation contexts:

i18nc("@title:window", "Blur Filter")

Window titles are usually nouns (and translated as nouns). There is no limit on the size of the string.

i18nc("@action:button", "Apply Blur Filter")

Button actions are usually verbs. The length of the string is also not very important.

i18nc("@action:inmenu", "Blur")

Menu actions are also verbs, but the length of the string should be as short as possible.

3. Undo commands must have (qtundo-format) context

Adding this context tells the translators to use “Magic String” functionality. Such strings are special and are not reusable anywhere else.

In Krita and Calligra this context is now added automatically, because we use C++ type-checking mechanism to limit the strings passed to an undo command:

KUndo2Command(const KUndo2MagicString &text, KUndo2Command *parent);

4. Use capitalization properly

See KDE policy for details.

5. Beware of sticky strings

When the same string without a context is reused in different places (and especially in different files), double-check whether it is appropriate.

E.g. i18n("Duplicate") can be either a brush engine name (noun) or a menu action for cloning a layer (verb). Obviously, not all languages use the same form of a word for both the verb and noun meanings. Such strings must be split by assigning them different contexts.

Alexander Potashev has created a special Python script that iterates through all the strings in a .po file and reports all the sticky strings in a convenient format.


Of course, all these rules are only recommendations. They all have exceptions and limitations, but following them in the most trivial cases will make translators' lives much easier.

In the next part of my notes from the sprint I will write about how Boud and I hunted down memory fragmentation problems in Krita on Sunday... :)

July 12, 2014

Trapped our first pack rat

[White throated woodrat in a trap] One great thing about living in the country: the wildlife. I love watching animals and trying to photograph them.

One down side of living in the country: the wildlife.

Mice in the house! Pack rats in the shed and the crawlspace! We found out pretty quickly that we needed to learn about traps.

We looked at traps at the local hardware store. Dave assumed we'd get simple snap-traps, but I wanted to try other options first. I'd prefer to avoid killing if I don't have to, especially killing in what sounds like a painful way.

They only had one live mousetrap. It was a flimsy plastic thing, and we were both skeptical that it would work. We made a deal: we'd try two of them for a week or two, and when (not if) they didn't work, then we'd get some snap-traps.

We baited the traps with peanut butter and left them in the areas where we'd seen mice. On the second morning, one of the traps had been sprung, and sure enough, there was a mouse inside! Or at least a bit of fur, bunched up at the far inside end of the trap.

We drove it out to open country across the highway, away from houses. I opened the trap, and ... nothing. I looked in -- yep, there was still a furball in there. Had we somehow killed it, even in this seemingly humane trap?

I pointed the open end down and shook the trap. Nothing came out. I shook harder, looked again, shook some more. And suddenly the mouse burst out of the plastic box and went HOP-HOP-HOPping across the grass away from us, bounding like a tiny kangaroo over tufts of grass, leaving us both giggling madly. The entertainment alone was worth the price of the traps.

Since then we've seen no evidence of mice inside, and neither of the traps has been sprung again. So our upstairs and downstairs mice must have been the same mouse.

But meanwhile, we still had a pack rat problem (actually, probably, white-throated woodrats, the creature that's called a pack rat locally). Finding no traps for sale at the hardware store, we went to Craigslist, where we found a retired wildlife biologist just down the road selling three live Havahart rat traps. (They also had some raccoon-sized traps, but the only raccoon we've seen has stayed out in the yard.)

We bought the traps, adjusted one a bit where its trigger mechanism was bent, baited them with peanut butter and set them in likely locations. About four days later, we had our first captive little brown furball. Much smaller than some of the woodrats we've seen; probably just a youngster.

[White throated woodrat bounding away] We drove quite a bit farther than we had for the mouse. Woodrats can apparently range over a fairly wide area, and we didn't want to let it go near houses. We hiked a little way out on a trail, put the trap down and opened both doors. The woodrat looked up, walked to one open end of the trap, decided that looked too scary; walked to the other open end, decided that looked too scary too; and retreated back to the middle of the trap.

We had to tilt and shake the trap a bit, but eventually the woodrat gathered up its courage, chose a side, darted out and HOP-HOP-HOPped away into the bunchgrass, just like the mouse had.

No reference I've found says anything about woodrats hopping, but the mouse did that too. I guess hopping is just what you do when you're a rodent suddenly set free.

I was only able to snap one picture before it disappeared. It's not in focus, but at least I managed to catch it with both hind legs off the ground.

Call to translators

We plan to release Stellarium 0.13.0 around July 20.

There are new strings to translate in this release because we have several new plugins and features, and a refactored GUI. If you can assist with translation into any of the 132 languages which Stellarium supports, please go to Launchpad Translations and help us out.

Thank you!

July 11, 2014

This Land Is Mine is yours

Due to horrific recent events, This Land Is Mine has gone viral again.

Here’s a reminder that you don’t need permission to copy, share, broadcast, post, embed, subtitle, etc. Copying is an act of love, please copy and share. Yes means yes.

As for the music, it is Fair Use: This Land Is Mine is a PARODY of “The Exodus Song.” That music was sort of the soundtrack of American Zionism in the 1960s and ’70s, and was supposed to express Jewish entitlement to Israel. By putting the song in the mouth of every warring party, I’m critiquing the original song.




July 09, 2014

Invert the colors of qcad3 icons

QCad is an open-source 2D CAD program I've been kind of fond of for a while. It runs on Windows, Mac and Linux; its version 2 was the base of LibreCAD, and version 3, which is a couple of months old already, is a huge evolution over version 2. Their developers have always struggled between the...

July 08, 2014

Big and contrasty mouse cursors

[Big mouse cursor from Comix theme] My new home office, with its big picture windows and the light streaming in, comes with one downside: it's harder to see my screen.

A sensible person would, no doubt, keep the shades drawn when working, or move the office to a nice dim interior room without any windows. But I am not sensible and I love my view of the mountains, the gorge and the birds at the feeders. So accommodations must be made.

The biggest problem is finding the mouse cursor. When I first sit down at my machine, I move my mouse wildly around looking for any motion on the screen. But the default cursors, in X and in most windows, are little subtle black things. They don't show up at all. Sometimes it takes half a minute to figure out where the mouse pointer is.

(This wasn't helped by a recent bug in Debian Sid where the USB mouse would disappear entirely, and need to be unplugged from USB and plugged back in before the computer would see it. I never did find a solution to that, and for now I've downgraded from Sid to Debian testing to make my mouse work. I hope they fix the bug in Sid eventually, rather than porting whatever "improvement" caused the bug to more stable versions. Dealing with that bug trained me so that when I can't see the mouse cursor, I always wonder whether I'm just not seeing it, or whether it really isn't there because the kernel or X has lost track of the mouse again.)

What I really wanted was bigger mouse cursor icons in bright colors that are visible against any background. This is possible, but it isn't documented at all. I did manage to get much better cursors, though different windows use different systems.

So I wrote up what I learned. It ended up too long for a blog post, so I put it on a separate page: X Cursor Themes for big and contrasty mouse cursors.

It turned out to be fairly complicated. You can replace the existing cursor font, or install new cursor "themes" that many (but not all) apps will honor. You can change theme name and size (if you choose a scalable theme), and some apps will honor that. You have to specify theme and size separately for GTK apps versus other apps. I don't know what KDE/Qt apps do.

I still have a lot of unanswered questions. In particular, I was unable to specify a themed cursor for xterm windows, and for non text areas in emacs and firefox, and I'd love to know how to do that.

But at least for now, I have a great big contrasty blue mouse cursor that I can easily see, even when I have the shades on the big windows open and the light streaming in.

Important AppData milestone

Today we reached an important milestone. Over 25% of applications in Fedora now ship AppData files. The actual numbers look like this:

  • Applications with descriptions: 262/1037 (25.3%)
  • Applications with keywords: 112/1037 (10.8%)
  • Applications with screenshots: 235/1037 (22.7%)
  • Applications in GNOME with AppData: 91/134 (67.9%)
  • Applications in KDE with AppData: 5/67 (7.5%)
  • Applications in XFCE with AppData: 2/20 (10.0%)
  • Application addons with MetaInfo: 30

We’ve gone up a couple of percentage points in the last few weeks, mostly with the help of Ryan Lerch, who’s actually been writing AppData files and taking screenshots for upstream projects. He’s been concentrating on developer tools for the last week or so, as this is one of the key groups of people we’re targeting for Fedora 21.

One of the things that AppData files allow us to do is be smarter about suggesting “Picks” on the overview page. For 3.10 and 3.12 we had a fairly short static list that we chose from at random. For 3.14 we’ve got a new algorithm that tries to find software similar to the apps you already have installed, and suggests those as well. So if I have Anjuta and Devhelp installed, it might suggest D-Feet or Glade.

July 04, 2014

Detecting wildlife with a PIR sensor (or not)

[PIR sensor] In my last crittercam installment, the NoIR night-vision crittercam, I was having trouble with false positives, where the camera would trigger repeatedly after dawn as leaves moved in the wind and the morning shadows marched across the camera's field of view. I wondered if a passive infra-red (PIR) sensor would be the answer.

I got one, and the answer is: no. It was very easy to hook up, and didn't cost much, so it was a worthwhile experiment; but it gets nearly as many false positives as camera-based motion detection. It isn't as sensitive to wind, but as the ground and the foliage heat up at dawn, the moving shadows are just as much a problem as they were with image-based motion detection.

Still, I might be able to combine the two, so I figure it's worth writing up.

Reading inputs from the HC-SR501 PIR sensor

[PIR sensor pins]

The PIR sensor I chose was the common HC-SR501 module. It has three pins -- Vcc, ground, and signal -- and two potentiometer adjustments.

It's easy to hook up to a Raspberry Pi because it can take 5 volts in on its Vcc pin, but its signal is 3.3v (a digital signal -- either motion is detected or it isn't), so you don't have to fool with voltage dividers or other means to get a 5v signal down to the 3v the Pi can handle. I used GPIO pin 7 for signal, because it's right on the corner of the Pi's GPIO header and easy to find.

There are two ways to track a digital signal like this. Either you can poll the pin in an infinite loop:

import time
import RPi.GPIO as GPIO

pir_pin = 7
sleeptime = 1

GPIO.setmode(GPIO.BCM)   # pin numbers here are BCM GPIO numbers
GPIO.setup(pir_pin, GPIO.IN)

while True:
    if GPIO.input(pir_pin):
        print "Motion detected!"
    time.sleep(sleeptime)

or you can use interrupts: tell the Pi to call a function whenever it sees a low-to-high transition on a pin:

import time
import RPi.GPIO as GPIO

pir_pin = 7
sleeptime = 300

def motion_detected(pir_pin):
    print "Motion Detected!"

GPIO.setmode(GPIO.BCM)   # pin numbers here are BCM GPIO numbers
GPIO.setup(pir_pin, GPIO.IN)

GPIO.add_event_detect(pir_pin, GPIO.RISING, callback=motion_detected)

while True:
    print "Sleeping for %d sec" % sleeptime
    time.sleep(sleeptime)

Obviously the second method is more efficient. But I already had a loop set up checking the camera output and comparing it against previous output, so I tried that method first, adding PIR support to my script. I set up the camera pointing at the wall and, as root, ran the script, telling it to use a PIR sensor on pin 7 and giving the local and remote directories to store photos:

# python -p 7 /tmp ~pi/shared/snapshots/

and whenever I walked in front of the camera, it triggered and took a photo. That was easy!

Reliability problems with add_event_detect

So easy that I decided to switch to the more efficient interrupt-driven model. Writing the code was easy, but I found it triggered more often: if I walked in front of the camera (and stayed the requisite 7 seconds or so that it takes raspistill to get around to taking a photo), when I walked back to my desk, I would find two photos, one showing my feet and the other showing nothing. It seemed like it was triggering when I got there, but also when I left the scene.

A bit of web searching indicates this is fairly common: with RPi.GPIO, a lot of people see triggers on both rising and falling edges -- e.g. when the PIR sensor starts seeing motion, and again when it stops seeing motion and returns to its neutral state -- even though they've asked for just GPIO.RISING. Reports of this go back to 2011.

On the other hand, it's also possible that instead of seeing a GPIO falling edge, what was happening was that I was getting multiple calls to my function while I was standing there, even though the RPi hadn't finished processing the first image yet. To guard against that, I put a line at the beginning of my callback function that disabled further callbacks, then I re-enabled them at the end of the function after the Pi had finished copying the photo to the remote filesystem. That reduced the false triggers, but didn't eliminate them entirely.

Oh well. The sun was getting low by this point, so I stopped fiddling with the code and put the camera out in the yard with a pile of birdseed and peanut suet nuggets in front of it. I powered it on, sshed to the Pi and ran the motion_detect script, came back inside and ran tail -f on the output file.

I had dinner and worked on other things, occasionally checking the output -- nothing! Finally I sshed to the Pi and ran ps aux and discovered the script was no longer running.

I started it again, this time keeping my connection to the Pi active so I could see when the script died. Then I went outside to check the hardware. Most of the peanut suet nuggets were gone -- animals had definitely been by. I waved my hands in front of the camera a few times to make sure it got some triggers.

Came back inside -- to discover that Python had gotten a segmentation fault. It turns out that nifty GPIO.add_event_detect() code isn't all that reliable, and can cause Python to crash and dump core. I ran it a few more times and sure enough, it crashed pretty quickly every time. Apparently GPIO.add_event_detect needs a bit more debugging, and isn't safe to use in a program that has to run unattended.

Back to polling

Bummer! Fortunately, I had saved the polling version of my program, so I hastily copied that back to the Pi and started things up again. I triggered it a few times with my hand, and everything worked fine. In fact, it ran all night and through the morning, with no problems except the excessive number of false positives, already mentioned.

[piñon mouse] False positives weren't a problem at all during the night. I'm fairly sure the problem happens when the sun starts hitting the ground. Then there's a hot spot that marches along the ground, changing position in a way that's all too obvious to the infra-red sensor.

I may try cross-checking between the PIR sensor and image changes from the camera. But I'm not optimistic about that working: they both get the most false positives at the same times, at dawn and dusk when the shadow angle is changing rapidly. I suspect I'll have to find a smarter solution, doing some image processing on the images as well as cross-checking with the PIR sensor.

I've been uploading photos from my various tests here: Tests of the Raspberry Pi Night Vision Crittercam. And as always, the code is on github: scripts/motioncam with some basic documentation on my site: a motion sensitive camera for Raspberry Pi or other Linux machines. (I can't use github for the documentation because I can't seem to find a way to get github to display html as anything other than source code.)

July 02, 2014

Anaconda Crash Recovery

Whoah! Another anaconda post! Yes! You should know that the anaconda developers are working hard at fixing bugs, improving features, and adding enhancements all the time, blog posts about it or not. :)

Today Chris and I talked about how the UI might work for anaconda crash recovery. So here’s the thing: Anaconda is completely driven by kickstart. Every button, selection, or thing you type out in the UI gets translated into kickstart instructions in memory. So, why not save that kickstart out to disk when anaconda crashes? Then, any configuration and customization you’ve done would be saved. You could then load up anaconda afterwards with the kickstart and it would pre-fill in all of your work so you could continue where you left off!

However! Anaconda is a special environment, of course. We can’t just save to disk. I mean, okay, we could, but then we couldn’t use that disk as an install target after restarting the installer post-crash, because we’d have to mount it to read the kickstart file off of it! Eh. So it’s a bit complicated. Chris and I thought it’d be best to keep this simple (at least to start) and allow for saving the kickstart to an external disk to avoid these kinds of hairy issues.

Chris and I talked about how it would be cool if the crash screen could just say, “insert a USB disk if you’d like to save your progress,” and we could auto-detect when the disk was inserted, save, and report back to the user that we saved. However, blivet (the storage library used by anaconda) doesn’t yet have support for autodetecting devices. So what I thought we could do instead is have a “Save kickstart” button, and that button would kick off the process of searching for the new disk, reporting to the user if they still needed to insert one or if there was some issue with the disk. Finally, once the kickstart is saved out, it could report a status that it was successfully saved.
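As a rough illustration of the detection step (not anaconda or blivet code), the "Save kickstart" button handler could filter block devices by their removable flag; on Linux that flag lives in /sys/block/&lt;dev&gt;/removable. The mapping is passed in here so the logic stays testable:

```python
def removable_disks(sysfs_entries):
    """Pick out removable block devices from a {name: removable_flag} mapping.

    On a real system the mapping could be built by reading the contents of
    /sys/block/<dev>/removable for each device; here it is an argument so the
    selection logic can run anywhere.
    """
    return sorted(name for name, flag in sysfs_entries.items() if flag == "1")
```

If the list comes back empty, the UI would report that the user still needs to insert a disk; otherwise the kickstart could be written to the root of the first removable device found.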

Another design consideration I talked over with bcl for a bit – it would be nice to keep this saving process as simple as possible. Can we avoid having a file chooser? Can we just save to the root of the inserted disk and leave it at that? That would save users a lot of mental effort.

The primary use case for this functionality is crash recovery. It crashes, we offer to save your work. One additional case is that you’re quitting the installer and want to save your work – this case is rarer, but maybe it would be worth offering to save during normal quit too.

So here are my first cuts at trying to mock something out here. Please fire away and poke holes!

So this is what you’d first see when anaconda crashes:

You insert the disk and then you hit the “Save kickstart” button, and it tries to look for the disk:

Success – it saved out without issue.

Oooopsie! You got too excited and hit “Save kickstart” without inserting the disk.

Maybe your USB stick is bricked? Something went wrong. Maybe the file system’s messed up? Better try another stick:

Hope this makes sense. My Inkscape SVG source is available if you’d like to tweak or play around with this!

Comments / feedback / ideas welcomed in the comments or on the anaconda-devel list.

Blurry Screenshots in GNOME Software?

Are you a pixel perfect kind of maintainer? Frustrated by slight blurriness in screenshots when using GNOME Software?

If you have one screenshot, capture a PNG of size 752×423. If you have more than one screenshot use a size of 624×351.

If you use any other 16:9 aspect ratio resolution, we’ll scale your screenshot when we display it. If you use some crazy non-16:9 aspect ratio, we’ll add padding and possibly scale it as well, which is going to look pretty bad. That said, any screenshot is better than no screenshot, so please don’t start removing <screenshot> tags.
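A quick way to check a screenshot against these rules; this is a hypothetical helper, not part of GNOME Software:

```python
def screenshot_advice(width, height):
    """Classify a screenshot size against the recommendations above.

    Returns 'exact' for the two recommended sizes, 'scaled' for any other
    16:9 size (it will be scaled on display), and 'padded' for everything
    else (it will be padded and possibly scaled, which looks worse).
    """
    if (width, height) in ((752, 423), (624, 351)):
        return "exact"
    if width * 9 == height * 16:   # 16:9 check without floating point
        return "scaled"
    return "padded"
```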

June development results

Last month we worked on improving Synfig's user interface, and now we are happy to share the results of our work....

July 01, 2014

KDE aux RMLL 2014

Post in French, English translation below…

Dans quelques jours débuteront les 15em Rencontres Mondiales du Logiciel Libre à Montpellier, du 5 au 11 Juillet.
Ces rencontres débuteront par un week-end grand public dans le Village du Libre, dans lequel nous aurons un stand de démonstration des logiciels de la communauté KDE.

Ensuite durant toute la semaine se tiendront des conférences sur différents thèmes, la programmation complète se trouve ici. J’aurai le plaisir de présenter une conférence sur les nouveautés récentes concernant les logiciels libres pour l’animation 2D, programmée le Jeudi à 10h30, et suivie par un atelier de création libre sur le logiciel de dessin Krita de 14h à 17h.

Passez nous voir au stand KDE ou profiter des conférences et ateliers si vous êtes dans le coin!

En passant, un petit rappel pour deux campagnes importantes de financement participatif:
-Le Kickstarter pour booster le développement de la prochaine version de Krita vient de passer le premier palier d’objectif ! Il reste maintenant 9 jours pour atteindre le second palier qui nous permettrait d’embaucher Sven avec Dmitry pour les 6 prochains mois.

-La campagne pour financer le Randa Meeting 2014, réunion permettant aux contributeurs de projets phares de la communauté KDE de concentrer leurs efforts. Celle-ci se termine dans 8 jours.

Pensez donc si ce n’est déjà fait à soutenir ces projets ;)




In a few days the 15th “Rencontres Mondiales du Logiciel Libre” will begin in Montpellier, from the 5th to the 11th of July. This event will begin with a weekend open to the general public at the “Village du Libre”, where we will have a KDE stand to show the cool software from our community.

Then for the whole week there will be conferences about several topics; the full schedule is here. I’ll have the pleasure of presenting a talk about recent news on free software for 2D animation on Thursday at 10.30 am, followed by a workshop on free creation with the Krita painting software from 2 to 5 pm.

Come say hello at the KDE stand or enjoy the conferences and workshop if you’re around!

On a side note, a little reminder about two crowdfunding campaigns:
-The Kickstarter to boost Krita development just reached the first step today! We now have 9 days left to reach the next step that will allow us to hire Sven together with Dmitry for the next 6 months.

-The Randa Meeting 2014 campaign, this meeting will allow contributors from key KDE projects to gather and get even more productive than usual.

So think about helping those projects if you haven’t already ;)


With 518 backers and 15,157 euros, we've passed the target goal and we're 100% funded. That means that Dmitry can work on Krita for the next six months, adding a dozen hot new features and improvements to Krita. We're not done with the kickstarter, though, there are still eight days to go! And any extra funding will go straight into Krita development as well. If we reach the 30,000 euro level, we'll be able to fund Sven Langkamp as well, and that will double the number of features we can work on for Krita 2.9.

And then there's the super-stretch goal... We already have a basic package for OSX, but it needs some really heavy development. It currently only runs on OSX 10.9 Mavericks, Krita only sees 1GB of memory, there are OpenGL issues, there are GUI issues, and there are missing dependencies and missing brush engines. Lots of work to be done. But we've proven now that this goal is attainable, so please help us get there!

It would be really cool to be able to release the next version of Krita for Linux, Windows and OSX, wouldn't it? :-)

And now it's also possible to select your reward and use Paypal -- which Kickstarter still doesn't offer.

Reward Selection

June 30, 2014

WebODF v0.5.0 released: Highlights

Today, after a long period of hard work and preparation, having deemed the existing WebODF codebase stable enough for everyday use and for integration into other projects, we have tagged the v0.5.0 release and published an announcement on the project website.

Some of the features that this article will talk about have already made their way into various other projects a long time ago, most notably ownCloud Documents and ViewerJS. Such features will have been mentioned before in other posts, but this one talks about what is new since the last release.

The products that have been released as ‘supported’ are:

  • The WebODF library
  • A TextEditor component
  • Firefox extension

Just to recap, WebODF is a JavaScript library that lets you display and edit ODF files in the browser. There is no conversion of ODF to HTML. Since ODF is an XML-based format, you can directly render it in a browser, styled with CSS. This way, no information is lost in translation. Unlike other text editors, WebODF leaves your file structure completely intact.

The Editor Components

WebODF has had, for a long time, an Editor application. This was until now not a feature ‘supported’ to the general public, but was simply available in the master branch of the git repo. We worked over the months with ownCloud to understand how such an editor would be integrated within a larger product, and then based on our own experimentation for a couple of awesome-new to-be-announced products, designed an API for it.

As a result, the new “Wodo” Editor Components are a family of APIs that let you embed an editor into your own application. The demo editor is a reference implementation that uses the Wodo.TextEditor component.

There are two major components in WebODF right now:

  1. Wodo.TextEditor provides for straightforward local-user text editing, by providing methods for opening and saving documents. The example implementation runs 100% client-side: you can open a local file directly in the editor without uploading it anywhere, edit it, and save it right back to the filesystem. No extra permissions required.
  2. Wodo.CollabTextEditor lets you specify a session backend that communicates with a server and relays operations. If your application wants collaborative editing, you would use this Editor API. The use-cases and implementation details being significantly more complex than the Wodo.TextEditor component, this is not a ‘supported’ part of the v0.5.0 release, but will, I’m sure, be in the next release(s) very soon. We are still figuring out the best possible API it could provide, while not tying it to any specific flavor of backend. There is a collabeditor example in WebODF master, which can work with an ownCloud-like HTTP request polling backend.

Both components provide options to configure the editor and switch certain features on or off.

Of course, we wholeheartedly recommend that people play with both components, build great things, and give us lots of feedback and/or Pull Requests. :)

New features

Notable new features that WebODF now has include:

  • SVG Selections. It is impossible to have multiple selections in the same window in most modern browsers. This is an important requirement for collaborative editing, i.e., the ability to see other people’s selections in their respective authorship colors. For this, we had to implement our own text selection mechanism, without totally relying on browser-provided APIs.
    Selections are now smartly computed using dimensions of elements in a given text range, and are drawn as SVG polygon overlays, affording numerous ways to style them using CSS, including in author colors. :)
  • Touch support:
    • Pinch-to-zoom was a feature requested by ownCloud, and is now implemented in WebODF. This was fairly non-trivial to do, considering that we could not rely on touch browsers’ native pinch/zoom/pan implementations, because those operate only on the whole window. With this release, the document canvas will transform with your pinch events.
    • Another important highlight is the implementation of touch selections, necessitated by the fact that native touch selections provided by the mobile versions of Safari, Firefox, and Chrome all behave differently and do not work well enough for tasks which require precision, like document editing. This is activated by long-pressing with a finger on a word, following which the word gets a selection with draggable handles at each end.
Touch selections

Drawing a selection on an iPad

  • More collaborative features. We added OT (Operation Transformations) for more new editing operations, and filled in all the gaps in the current OT Matrix. This means that previously there were some cases when certain pairs of simultaneous edits by different clients would lead to unpredictable outcomes and/or invalid convergence. This is now fixed, and all enabled operations transform correctly against each other (verified by lots of new unit tests). Newly enabled editing features in collaborative mode now include paragraph alignment and indent/outdent.

  • Input Method Editor (IME). Thanks to the persistent efforts of peitschie of QSR International, WebODF got IME support. Since WebODF text editing does not use any native text fields with the assistance of the browser, but listens for keystrokes and converts them into operations, it was necessary to implement support for it in JavaScript using Composition Events. This means that you can now do this:

Chinese - Pinyin (IBUS)

Chinese – Pinyin (IBUS)

and type in your own language (IBUS is great at transliteration!)

Typing in Hindi

Typing in Hindi

  • Benchmarking. Again thanks to peitschie, WebODF now has benchmarks for various important/frequent edit types.

  • Edit Controllers. Unlike the previous release, when the editor had to specifically generate various operations to perform edits, WebODF now provides certain classes called Controllers. A Controller provides methods to perform certain kinds of edit ‘actions’ that may be decomposed into a sequence of smaller ‘atomic’ collaborative operations. For example, the TextController interface provides a removeCurrentSelection method. If the selection is across several paragraphs, this method will decompose this edit into a complex sequence of 3 kinds of operations: RemoveText, MergeParagraph, and SetParagraphStyle. Describing larger edits in terms of smaller operations is a great design, because then you only have to write OT for very simple operations, and complex edit actions all collaboratively resolve themselves to the same state on each client. The added benefit is that users of the library have a simpler API to deal with.
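WebODF itself is JavaScript, but the decomposition idea can be illustrated with a small Python sketch; the operation names mirror those mentioned above, while the payloads and exact sequencing are invented for illustration:

```python
def remove_selection_ops(start_para, end_para):
    """Decompose 'remove current selection' spanning several paragraphs into
    a sequence of atomic operations, in the spirit described above.

    Only the operation names (RemoveText, MergeParagraph, SetParagraphStyle)
    come from the text; the payloads here are hypothetical.
    """
    # Remove the selected text from every paragraph the selection touches...
    ops = [("RemoveText", p) for p in range(start_para, end_para + 1)]
    # ...merge each following (now partial) paragraph into the first one...
    ops += [("MergeParagraph", start_para) for _ in range(start_para, end_para)]
    # ...and restore the surviving paragraph's style.
    ops.append(("SetParagraphStyle", start_para))
    return ops
```

Because each atomic operation already has OT defined against every other, the whole composite action converges on all clients without any extra transformation code.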

On that note…

We now have some very powerful operations available in WebODF. As a consequence, it should now be possible for new developers to rapidly implement new editing features, because the most significant OT infrastructure is already in place. Adding support for text/background coloring, subscript/superscript, etc should simply be a matter of writing the relevant toolbar widgets. :) I expect to see some rapid growth in user-facing features from this point onwards.

A Qt Editor

Thanks to the new Components and Controllers APIs, it is now possible to write native editor applications that embed WebODF as a canvas, and provide the editor UI as native Qt widgets. And work on this has started! The NLnet Foundation has funded work on writing just such an editor that works with Blink, an amazing open source SIP communication client that is cross-platform and provides video/audio conferencing and chat.

To fulfill that, Arjen Hiemstra at KO has started work on a native editor using Qt widgets, that embeds WebODF and works with Blink! Operations will be relayed using XMPP.


Other future tasks include:

  • Migrating the editor from Dojo widgets to the Closure Library, to allow more flexibility with styling and integration into larger applications.
  • Image manipulation operations.
  • OT for annotations and hyperlinks.
  • A split-screen collaborative editing demo for easy testing.
  • Pagination support.
  • Operations to manipulate tables.
  • Liberating users from Google’s claws cloud. :)

If you like a challenge and would like to make a difference, have a go at WebODF. :)

Krita Kickstarter

Krita Kickstarter

I know that I primarily write about photography here, but sometimes something comes along that’s too important to pass up talking about.

Krita just happens to be one of those things. Krita is a digital painting and sketching software by artists for artists. While I love GIMP and have seen some incredible work by talented artists using it for painting and sketching, sometimes it’s better to use a dedicated tool for the job. This is where Krita really shines.

The reason I’m writing about Krita today is that they are looking for community support to accelerate development through their Kickstarter campaign.

That is where you come in. It doesn’t take much to make a difference in great software, and every little bit helps. If you can skip a fancy coffee, pastry, or one drink while out this month, consider using the money you saved to help a great project instead!

There are only 9 days left in their Kickstarter, and they are less than €800 from hitting their goal of €15,000!

Metamorphosis by Enrico Guarnieri

Of course, the team makes it hard to keep up with them. They seem to be rapidly implementing goals in their Kickstarter before they even get funding. For instance, their “super-stretch” goal was to get an OSX implementation of Krita running. Then this shows up in my feed this morning. A prototype already!

I am in constant awe at the talent and results from digital artists, and this is a great tool to help them produce amazing works. As a photographer I am deeply indebted to those who helped support GIMP development over the years, and if we all pull together maybe we can be the ones whom future Krita users thank for helping them get access to a great program...

Skip a few fancy coffee drinks, possibly inspire a future artist? Count me in!

Krita Kickstarter

Still here? Ok, how about some cool video tutorials by Ramón Miranda to help support the campaign?

If you still need more, check out his YouTube channel.

Ok, that's enough out of me.

Go Donate!
Krita Kickstarter

Last week in Krita — weeks 25 & 26

These last two weeks have been very exciting, with the Kickstarter campaign getting closer and closer to the pledge objective. At the time of writing we just crossed 13k! And with the wave of new users, drawn in by the great word-spreading efforts of collaborators and enthusiasts, we have been very busy bringing you new functions and building beta versions.

And now there's also the first public build for OSX:

It is just a prototype, with stuff missing and rather detailed instructions on getting it to run... But if you've always wanted to have Krita on OSX, this is your chance to help us make it happen!

Before getting into the hot new stuff in the code, I can’t go without mentioning the useful videos from Ramon Miranda. Aiming to improve common knowledge of Krita's features and its capabilities as painting software for those hearing about it for the first time, he has created a series of video tips: short video introductions to many functions and fundamentals. Even for the initiated these are a good resource; I wasn’t aware of some of the functions depicted in the videos. All tips and info are on the Kickstarter post and Ramon's YouTube channel.

Week 25 & 26 progress

Amongst the notable changes and developments we can cite the efforts of Boudewijn to create a build environment that will eventually allow the creation of an alpha version for OSX users. Still in the experimental phase, the current steps show steady progress, as it's now possible to open the program and do some basic painting. Of course this is far from being a version to distribute, but if we remember the Windows version's humble beginnings, this is a great sign. go krita!

In other news: Somsubhra, developer of the Krita Animation spin, has added, aside from many bug fixes and tweaks, a first rough animation player. I wanted to make a short video for you, but the build is still very fragile and on my system it crashed after creating the document. You can see the player in action in a video made by Somsubhra.


This week’s new features:

  • Implemented “Delayed Stroke” feature for brush smoothing. (Dmitry Kazakov)
  • Edit Selection Mask. (Dmitry Kazakov)
  • Add import/export for r8 and r16 heightmaps, extensions .r8 and .r16. (Boudewijn Rempt)
  • Add ability to zoom and sync for resource item choosers (Ctrl + Wheel). (Sven Langkamp)
  • Brush stabilizer by Juan Luis Boya García. (Juan Luis Boya García)
  • Allow activation of the Isolated Mode with Alt+click on a layer. (Dmitry Kazakov)

This week’s main Bug fixes

  • Make ABR brush loading code more robust. (Boudewijn Rempt)
  • FIX #319279: Drop the full brush image after loading it to save memory. (Boudewijn Rempt)
  • Enable the vector shape in Krita. This makes it possible to show embedded svg images. (Boudewijn Rempt)
  • CCBUG #333451: Add basic svg support to the vector shape. (Boudewijn Rempt)
  • FIX #335041: Fix crash when installing a bundle. (Boudewijn Rempt)
  • FIX #33592: Fix saving the lock status. (Boudewijn Rempt)
  • Fix crash when trying to paint in scratchpad. (Dmitry Kazakov)
  • Don’t crash if deleting the last layer. (Boudewijn Rempt)
  • FIX #336470: Fix Lens Blur filter artifacts when used as an Adjustment Layer. (Dmitry Kazakov)
  • FIX #334538: Fix anisotropy in Color Smudge brush engine. (Dmitry Kazakov)
  • FIX #336478: Fix convert of clone layers into paint layers and selection masks. (Dmitry Kazakov)
  • CCBUG #285420: Multilayer selection: implement drag & drop of multiple layers. (Boudewijn Rempt)
  • FIX #336476: Fix edge duplication in Clone tool. (Dmitry Kazakov)
  • FIX #336473: Fixed crash when color picking from a group layer. (Dmitry Kazakov)
  • FIX #336115: Fixed painting and color picking on global selections. (Dmitry Kazakov)
  • Fixed moving of the global selection with Move Tool. (Dmitry Kazakov)
  • CCBUG #285420: Add an action to merge the selected layers. (Boudewijn Rempt)
  • CCBUG #285420: Layerbox multi-selection. Make it possible to delete multiple layers. (Boudewijn Rempt)
  • FIX #336804: (Boudewijn Rempt)
  • FIX #336803: (Boudewijn Rempt)
  • FIX #330479: Fix memory leak in KoLcmsColorTransformation. (Boudewijn Rempt)
  • And many code optimizations, memory leak patching, spelling and translation updates and other fixes.

Delay stroke and brush stabilizer

A new way of creating smooth controlled lines. The new Stabilizer smooth mode works using both the distance of the stroke and the speed. It uses 3 important options that can be described as follows:

  • Distance: The shorter the distance, the weaker the stabilization force.
  • Delay: When activated, it adds a halo around the cursor. This area is defined as a “Dead Zone”, no stroke is made while the cursor is inside it. Very useful when you need to create a controlled line with explicit angles in it. The Pixel value defines the size of the halo.
  • Finish line: If switched off, line rendering stops at the spot where it was when the pen was lifted. Otherwise, it draws the missing gap between the current brush position and the cursor's last location.
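The "Delay" dead-zone behavior can be modeled roughly like this; a simplified sketch of the idea, not Krita's actual implementation:

```python
import math

def stabilize(points, dead_zone):
    """Dead-zone smoothing in the spirit of the 'Delay' option above.

    The drawn position only moves when the cursor leaves a halo of radius
    dead_zone around it, and then only by the excess distance; small jitter
    inside the halo produces no stroke, while deliberate moves past the halo
    pull the line along, giving controlled lines with crisp angles.
    """
    if not points:
        return []
    bx, by = points[0]
    out = [(bx, by)]
    for x, y in points[1:]:
        dx, dy = x - bx, y - by
        dist = math.hypot(dx, dy)
        if dist > dead_zone:                   # cursor escaped the halo
            scale = (dist - dead_zone) / dist  # move brush by the excess only
            bx, by = bx + dx * scale, by + dy * scale
        out.append((bx, by))
    return out
```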


Developers have been working to re-implement working on multiple layers. This time they made it possible to select more than one layer for reorganizing the stack and for merge and delete actions. After selecting multiple layers you can:

  • Drag and drop from one location to another
  • Drag and drop layers inside a group
  • Click the erase layer button to remove all selected layers
  • Go to “Layer -> Merge selected layers” to merge.

This first implementation allows a much faster workflow when dealing with many layers. However, it is still necessary to use groups to perform some actions, like transform, on multiple layers.

NEW: Layer -> Merge selected layers

Edit selection mask (global selection)

To activate it, go to the Selection menu and turn on “Show Global Selection Mask”.

When activated, all global selections will appear in the layer stack just as local selections do. You can deactivate the selection, hide it, or edit it using any available tool, like transform, brushes or filters.

At the moment it is not possible to preview the effect every tool has on the selection, but you can convert the selection to a paint layer to make finer adjustments.

NEW: Selection -> Show Global Selection Mask

Alt + click isolated layer

Added a new action to toggle isolate layer.

NEW: [Alt] + [Click] over a layer in the layer docker.

This action instantly shows the selected layer, hiding all others. It returns to normal mode after another layer is selected, but while isolated layer mode is on, it's possible to paint, transform and adjust while visualizing only the isolated layer.

June 26, 2014

A Raspberry Pi Night Vision Camera

[Mouse caught on IR camera]

When I built my motion-sensitive crittercam (and part 2), I always had the NoIR camera in the back of my mind. The NoIR is a version of the Pi camera module with the infra-red blocking filter removed, so you can shoot IR photos at night without disturbing nocturnal wildlife (or alerting nocturnal burglars, if that's your target).

After I got the daylight version of the camera working, I ordered a NoIR camera module and plugged it in to my RPi. I snapped some daylight photos with raspstill and verified that it was connected and working; then I waited for nightfall.

In the dark, I set up the camera and put my cup of hot chocolate in front of it. Nothing. I hadn't realized that although CCD cameras are sensitive in the near IR, the wavelengths only slightly longer than visible light, they aren't sensitive anywhere near the IR wavelengths that hot objects emit. For that, you need a special thermal camera. For a near-IR CCD camera like the Pi NoIR, you need an IR light source.

Knowing nothing about IR light sources, I did a search and came up with something called an "Infrared IR 12 Led Illuminator Board Plate for CCTV Security CCD Camera" for about $5. It seemed similar to the light sources used on a few pages I'd found for home-made night vision cameras, so I ordered it. Then I waited, because I stupidly didn't notice until a week and a half later that it was coming from China and wouldn't arrive for three weeks. Always check the shipping time when ordering hardware!

When it finally arrived, it had a tiny 2-pin connector that I couldn't match locally. In the end I bought a package of female-female SchmartBoard jumpers at Radio Shack which were small enough to make decent contact on the light's tiny-gauge power and ground pins. I soldered up a connector that would let me use a universal power supply, taking a guess that it wanted 12 volts (most of the cheap LED rings for CCD cameras seem to be 12V, though this one came with no documentation at all). I was ready to test.

Testing the IR light

[IR light and NoIR Pi camera]

One problem with buying a cheap IR light with no documentation: how do you tell whether your power supply is working, when the light is completely invisible?

The only way to find out was to check on the Pi. I didn't want to have to run back and forth between the dark room where the camera was set up and the desktop where I was viewing raspistill images. So I started a video stream on the RPi:

$ raspivid -o - -t 9999999 -w 800 -h 600 | cvlc -vvv stream:///dev/stdin --sout '#rtp{sdp=rtsp://:8554/}' :demux=h264

Then, on the desktop, I ran vlc and opened the network stream at rtsp://pi:8554/
(I have a "pi" entry in /etc/hosts, but using an IP address also works).

Now I could fiddle with hardware in the dark room while looking through the doorway at the video output on my monitor.

It took some fiddling to get a good connection on that tiny connector ... but eventually I got a black-and-white view of my darkened room, just as I'd expect under IR illumination. I poked some holes in the milk carton and used twist-ties to secure the light source next to the NoIR camera.

Lights, camera, action

Next problem: mute all the blinkenlights, so my camera wouldn't look like a Christmas tree and scare off the nocturnal critters.

The Pi itself has a relatively dim red run light, and it's inside the milk carton so I wasn't too worried about it. But the Pi camera has quite a bright red light that goes on whenever the camera is being used. Even through the thick milk carton bottom, it was glaring and obvious. Fortunately, you can disable the Pi camera light: edit /boot/config.txt and add this line:

disable_camera_led=1
My USB wi-fi dongle has a blue light that flickers as it gets traffic. Not super bright, but attention-grabbing. I addressed that issue with a triple thickness of duct tape.

The IR LEDs -- remember those invisible, impossible-to-test LEDs? Well, it turns out that in darkness, they emit a faint but still easily visible glow. Obviously there's nothing I can do about that -- I can't cover the camera's only light source! But it's quite dim, so with any luck it's not spooking away too many animals.

Results, and problems

For most of my daytime testing I'd used a threshold of 30 -- meaning a pixel was considered to have changed if its value differed by more than 30 from the previous photo. That didn't work at all in IR: changes are much more subtle since we're seeing essentially a black-and-white image, and I had to divide by three and use a sensitivity of 10 or 11 if I wanted the camera to trigger at all.
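The thresholded pixel-difference test described here amounts to something like the following sketch (the function name and data layout are mine, not the actual script's):

```python
def changed_pixels(prev, cur, threshold):
    """Count pixels whose value changed by more than threshold.

    prev and cur are flat sequences of pixel values (e.g. grayscale 0-255)
    from two successive captures. The motion detector would compare this
    count against a sensitivity limit to decide whether to keep the photo.
    """
    return sum(1 for a, b in zip(prev, cur) if abs(a - b) > threshold)

# A daylight run might use threshold=30; the low-contrast IR images need
# something around 10 or 11 before anything triggers at all.
```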

With that change, I did capture some nocturnal visitors, and some early morning ones too. Note the funny colors on the daylight shots: that's why cameras generally have IR-blocking filters if they're not specifically intended for night shots.

[mouse] [rabbit] [rock squirrel] [house finch]

Here are more photos, and larger versions of those: Images from my night-vision camera tests.

But I'm not happy with the setup. For one thing, it has far too many false positives. Maybe one out of ten or fifteen images actually has an animal in it; the rest just triggered because the wind made the leaves blow, or because a shadow moved or the color of the light changed. A simple count of differing pixels is clearly not enough for this task.

Of course, the software could be smarter about things: it could try to identify large blobs that had changed, rather than small changes (blowing leaves) all over the image. I already know SimpleCV runs fine on the Raspberry Pi, and I could try using it to do object detection.

But there's another problem with detection purely through camera images: the Pi is incredibly slow to capture an image. It takes around 20 seconds per cycle; some of that is waiting for the network but I think most of it is the Pi talking to the camera. With quick-moving animals, the animal may well be gone by the time the system has noticed a change. I've caught several images of animal tails disappearing out of the frame, including a quail who visited yesterday morning. Adding smarts like SimpleCV will only make that problem worse.

So I'm going to try another solution: hooking up an infra-red motion detector. I'm already working on setting up tests for that, and should have a report soon. Meanwhile, pure image-based motion detection has been an interesting experiment.

June 25, 2014

Firewalls and per-network sharing


Fedora's default firewall rules have been a problem for a long while. They broke a lot of things (usually media and file sharing of various sorts, whether as a client or a server), and users would often disable the firewall altogether, or work around it by micro-managing opened ports.

Over the years we went through multiple discussions trying to soften the security folks' stance on what should be allowed to be exposed on the local network (sometimes trying to get rid of the firewall entirely). Or rather, we tried to agree on a setup that desktop developers could implement and users could actually use, while still providing the amount of security and dependability that the security folks wanted.

The last round of discussions was more productive, and I posted the end plan on the Fedora Desktop mailing-list.

By Fedora 21, Fedora will have a firewall that's completely open for the user's applications (with better tracking of what applications do what once we have application sandboxing). This reflects how the firewall was actually used on the systems that the Fedora Workstation version targets. System services will still be blocked by default, except for a select few such as ssh and mDNS (a list that might still need some tightening).

But doesn't this change mean you'd be sharing your music through DLNA on the café's Wi-Fi? Well, that's what the next change is here to avoid.

Per-network Sharing

To avoid showing your music in the café, or exposing your holiday photographs at work, we needed a way to restrict sharing to wireless networks where you'd already shared this data, and to provide a way to stop sharing in the future, should you change your mind.

Allan Day mocked up such controls in our Sharing panel, which I diligently implemented. Personal File Sharing (through gnome-user-share and WebDAV), Media Sharing (through rygel and DLNA) and Screen Sharing (through vino and VNC) all implement the same per-network sharing mechanism.

Make sure that your versions of gnome-settings-daemon (which implements the starting/stopping of services based on the network) and gnome-control-center match for this all to work. You'll also need the latest version of all 3 of the aforementioned sharing utilities.

(and it also works with wired network profiles :)

June 24, 2014

Branding Update

So we’ve gone through a lot of iterations of the logo design based on your feedback; here’s the full list of designs and mockups that Ryan, Sarup, and I have posted:

That’s a lot of work, a lot of feedback, and a lot of iteration. The dust has settled over the past 2 weeks and I think from the feedback we’ve gotten that there is a pretty clear winner that we should move forward with:

Let’s walk through some points about this here:

  • F/G and H I think should both be valid logo treatments. I think that F/G is good for contexts in which it’s clear we’re talking about Fedora (e.g., a Fedora website with a big Fedora logo in the corner), and H is good for contexts in which we need to promote Fedora as well (e.g., a conference T-shirt with other distro logos on it.)
  • Single-color versions of F/G & H are of course completely fine to use as well.
  • F/G are exactly the same except the texture underneath is shifted and scaled a bit. I think it should be okay to play with the texture and change it up. We can talk about this, though.
  • Feedback seemed a bit divided about the cloud mark – it was about 50/50, folks liking it full height on all three bars vs. liking it with some of the bars shorter so it looked like a stylized cloud. I think we should go with the full-height version since it’s a stronger mark (it’s bolder and stands out more) and these are clearly all abstract marks, anyway.
  • Several folks suggested trying to replace the circles in version H with the Fedora speech bubble. I did play around with this, and Ryan and I both agreed that the speech bubble shape complicates things: it makes the marks inside look off-center when they are centered, and it creates some awkward situations when the entire logo has to interact with other elements on a page or screen. So we thought it’d be better to keep things simple and stick with a simpler shape like a circle.
  • We’ll definitely build some official textures using the pattern in F/G and make them available so you can use them! Ryan has a very cool Inkscape technique for creating these so I’m still hoping to make a screencast showing how to do it.
  • Did I forget a particular point you brought up and would like some more discussion about? Let me know.

We’ll definitely need some logo usage guidelines written up and we’ll have to create a supplemental logo pack that can be dispensed via the logo queue. Those things aren’t quite ready yet – if you want to help with that, let us know at the Fedora design team list or here in the comments.

Anyway, thanks for watching and participating in this process. It’s always a lot of fun to work on designs in the open with everyone like this :)

June 23, 2014

2.5D Parallax Animated Photo Tutorial (using Free Software)

I've been fiddling with creating these 2.5D parallax animated photos for quite a few years now, but a recent neat post by Joe Fellows brought the idea into the light again.

The reason I had originally played with the idea is part of a long, sad story involving my wedding and an out-of-focus camcorder that resulted in my not having any usable video of my wedding (in 2008). I did have all of the photographs, though. So as a present to my wife, I was going to re-create the wedding with these animated photos (I’m 99% sure she doesn't ever read my blog - so if anyone knows her don’t say anything! I can still make it a surprise!).

The rest of my GIMP tutorials can be found here:
Getting Around in GIMP

So I had been dabbling with creating these in my spare time over a few years, and it was really neat to see the work done by Joe Fellows for the World Wildlife Fund. Here is that video:

He followed that up with a great video walking through how he does it:

I'm writing here today to walk through the methods I had been using for a while to create the same effect, but entirely with Free/Open Source Software...

Open Source Software Method

Using nothing but Free/Open Source Software, I was able to produce the same effect here:

Joe uses Adobe software to create his animations (Photoshop & After Effects). I neither have, nor want, Photoshop or After Effects.

What I do have is GIMP and Blender!

Blender Logo + GIMP Logo = Heart Icon

What I also don’t have (but would like) is access to the World Wildlife Fund photo archive. Large, good photographs make a huge difference in the final results you’ll see.

What I do have access to are some great old photographs of Ziegfeld Follies Girls. For the purposes of this tutorial we’ll use this one:

Pat David Ziegfeld Follies Woman Reclining
Click here to download the full size image.

This is a long post.
It’s long because I’ve tried to write the steps in enough detail that a completely new user of Blender can pick it up and get something working. For more experienced users, I'm sorry for the length.

As a consolation prize, I’ve linked to my final .blend file just below if anyone wants to download it and see what I finally ended up with at the end of the tutorial. Enjoy!

Here’s an outline of my steps if it helps...

  1. Pick a good image
    Find something with good fore/middleground and background separation (and clean edges).
  2. Think in planes
    Pay attention to how you can cut up the image into planes.
  3. Into GIMP
    1. Isolate woman as new layer - mask out everything except the subject you want.
    2. Rebuild background as separate layer (automatically or manually) - rebuild the background to exclude your subject.
    3. Export isolated woman and background layer - export each layer as its own image (keep alpha transparency).
  4. Into Blender
    1. Enable “Import Images as Planes” Addon - this ridiculously helpful Addon does the heavy lifting.
    2. Import Images as Planes - import your image as separate planes using the Addon.
    3. Basic environment setup - some Blender basics, and set viewport shade mode to “Texture”.
    4. Add some depth - push the background image/plane away from the camera and scale to give depth.
    5. Animate woman image mesh - subdivide the subject plane a bunch, then add Shape Keys and modify.
    6. Animate camera - animate the camera position throughout the timeline as wanted.
    7. Animate mesh - set keyframes for the Shape Keys through the timeline.
    8. Render

File downloads:
Download the .blend file [Google Drive]
These files are being made available under a Creative Commons Attribution-NonCommercial-ShareAlike license (CC BY-NC-SA).
You're free to use them, modify them, and share them non-commercially, as long as you attribute me, Pat David, as the originator of the file.

Consider the Source Material

What you probably want to look for if you are just starting with these are images with a good separation between a fore/middle ground subject and the background. This will make your first attempts a bit easier until you get the hang of what’s going on. Even better if there are mostly sharp edges differentiating your subject from the background (to help during masking/cutting).

You’ll also want an image bigger than your rendering size (for instance, mine are usually rendered at 1280×720). This is because you want to avoid blowing up your pixels when rendering if possible. This will make more sense later, but for now just try to use source material that’s larger than your intended render size.

Thinking in Planes

The trick to pre-visualizing these is to consider slicing up your image into separate planes. For instance, with our working image, I can see immediately that it’s relatively simple. There is the background plane, and one with the woman/box:

Pat David Ziegfeld Follies Woman Reclining Plane Example
Simple 2-plane visualization of the image.

This is actually all I did in my version of this in the video. This is nice because for the most part the edges are all relatively clean as well (making the job of masking an easier one).

One of my previous tests had shown this idea of planes a little bit clearer:

Yes, that’s my daughter at the grave of H.P. Lovecraft with a flower.

So we’ve visualized a simple plan - isolate the woman and platform from the background. Great!

Into GIMP!

So I will simply open the base image in GIMP to do my masking and exporting each of the individual image planes. Remember, when we’re done, we want to have 2 images, the background and the woman/platform (with alpha transparency):

Pat David Ziegfeld Follies Woman Reclining Background Clean
What my final cleaned up backplate should look like.
(click here to download the full size)

Pat David Ziegfeld Follies Woman Reclining Clean Transparent
My isolated woman/platform image.
(click here to download the full size)

Get Ready to Work

Once in GIMP I will usually duplicate the base image layer a couple of times (this way I have the original untouched image at the bottom of the layer stack in case I need it or screw up too badly). The top-most layer is the one I will be masking the woman from. The middle layer will become my new background plate.

Isolating the Woman

To isolate the woman, I’ll need to add a Layer Mask to the top-most layer (if you aren’t familiar with Layer Masks, go back and read my previous post on them to brush up).

I initialize my layer mask to White (full opacity). Now, anywhere I paint black on my layer mask will become transparent on that layer. I also usually turn off the visibility of all the other layers while I am working (so I can see what I’m doing - otherwise the layers beneath would show through and I wouldn’t know where I was working). This is what my layer dialog looks like at this point:

Masking the Woman

Some of these headings are beginning to sound like book titles (possibly romance?) “Isolating the Woman”, “Masking the Woman”...

There are a few different ways you can proceed at this point to isolate the woman. It really depends on what you’re most comfortable with. One way is to use Paths to trace out a path around the subject. Another is to paint directly on the Layer Mask.

All of them suck.

Sorry. There is no fast and easy method of doing this well. This is also one of the most important elements to getting a good result, so don’t cheap out now. Take your time and pull a nice clean mask, whatever method you choose.

For this tutorial, we can just paint directly onto our Layer Mask. Check to make sure the Layer Mask is active (white border around it that you won't be able to see because the mask is white) in the Layer palette, and make sure your foreground color is set to Black. Then it’s just a matter of choosing a paintbrush you like, and start painting around your subject.

I tend to use a simple round brush with a 75% hardness. I'll usually start painting, then take advantage of the Shift key modifier to draw straight lines along my edges. For finer details I'll drop down to a really small brush, and stay a bit larger for easier areas.

To illustrate, here’s a 3X speedrun of me pulling a quick mask of our image:

To erase the regions that are left, I'll usually use the Fuzzy Select Tool, grow the selection by a few pixels, and then Bucket Fill that region with black to make it transparent (you can see me doing it at about 2:13 in the video).

Now I have a layer with the woman isolated from the background. I can just select that layer and export it to a .PNG file to retain transparency.

File → Export

Name the file with a .png extension, and make sure that “Save color values from transparent pixels” is checked to save the alpha transparency.

Rebuilding the Background

Now that you have the isolated woman as an image, it’s time to remove her and the platform from the background image to get a clean backplate. There are two ways to go about this: the automated way, or the manual way.

Automated Background Rebuild

The automated way is to use an Inpainting algorithm to do the work for you. I had previously written about using the new G'MIC patch-based Inpainting algorithm, and it does a pretty decent job on this image. If you want to try this method you should first read up about using it here (and have G'MIC installed of course).

To use it in this case was simple. I had already masked out the woman with a layer mask, so all I had to do was Right-Click on the layer mask, and choose “Mask to Selection” from the menu.

Then just turn on the visibility of my “Background” layer (and toggle the visibility of the isolated woman layer off) and activate my “Background” layer by clicking on it.

Then I would grow the selection by a few pixels:

Select → Grow

I grew it by about 4 pixels, then sharpened the selection to remove anti-aliasing:

Select → Sharpen

Finally, I make sure my foreground color is pure red (255, 0, 0), and bucket fill that selection. Now I can just run the G'MIC Inpainting [patch-based] against it to Inpaint the region:

Filters → G'MIC
Repair → Inpaint [patch-based]

Let it run for a while (it’s intensive), and in the end my layer now looks like this:

Not bad at all, and certainly usable for our purposes!

If I don’t want to use it as is, it’s certainly a better starting point for doing some selective healing with the Heal Tool to clean it up.

Manual Background Rebuild

Manually is exactly as it sounds. We basically want to erase the woman and platform from the image to produce a clean background plate. For this I would normally just use a large radius Clone Tool for mostly filling in areas, and then the Heal Tool for cleaning it up to look smoother and more natural.

It doesn't have to be 100% perfect, remember. It only needs to look good just behind the edges of your foreground subjects (assuming the parallax isn’t too extreme). Not to mention one of the nice things about this workflow is that it’s relatively trivial later to make modifications and push them into Blender.

Rinse & Repeat

For this tutorial we are now done. We’ve got a clean backplate and an isolated subject that we will be animating. If you wanted to get a little more complex just continue the process starting with the next layer closer to the camera. An example of this is the last girl in my video, where I had separated her from the background, and then her forearm from her body. In that case I had to rebuild the image of her chest that was behind her forearm to account for the animation.

Example of a three-layer separation (notice the rebuilt dress texture)

Into Blender

Now that we have our source material, it’s time to build some planes. Actually, this part is trivially easy thanks to the Import Images as Planes Blender addon.

The key to this addon is that it will automatically import an image into Blender, and assign it to a plane with the correct aspect ratio.
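The sizing is just aspect-ratio arithmetic; conceptually it works like this (a toy sketch with a made-up `plane_size` function, not the addon's actual code):

```python
def plane_size(px_width, px_height, base_height=1.0):
    """Dimensions of a plane that preserves the image's aspect ratio."""
    return (base_height * px_width / px_height, base_height)
```

So a 1280x720 photo becomes a plane 16/9 units wide and 1 unit tall, and the image texture maps onto it without distortion.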

Enable Import Images as Planes

This addon is not enabled by default (at least in my Blender), so we just need to enable it. You can access all of the addons by first going to User Preferences in Blender:

Then into the Addons window:

I find it faster to get what I need by searching in this window for “images”:

To enable the Addon, just check the small box in the upper right corner of the Addon entry. Now you can go back into the 3D View.

Back in the 3D View, you can also select the default cube and lamp (Shift - Right Click), and delete them (X key). (Selected objects will have an orange outline highlighting them).

Import Images as Planes

We can now bring in the images we exported from GIMP earlier. The import option is available in:

File → Import → Images as Planes

At this point you’ll be able to navigate to the location of your images and can select them for import (Shift-Click to select multiple):

Before you do the import though, have a look at the options that are presented to you (bottom of the left panel). We need to turn on a couple of options to make things work how we want:

For the Import Options we want to Un-Check the option to “Align Planes”. This will import all of the image planes already stacked with each other in the same location.

Under Material Settings we want to Check both Shadeless and Use Alpha so our image planes will not be affected by lamps and will use the transparency that is already there. We also want to make sure that Z Transparency is pressed.

Everything else can stay at the default settings.

Go ahead and hit the “Import Images as Planes” button now.

Some Basic Setup Stuff

At this point things may look less than interesting. We’re getting there. First we need to cover just a few basic things about getting around in Blender for those that might be new to it.

In the 3D window, your MouseWheel controls the zoom level, and your Middle Mouse button controls the orbit. Right-Click selects objects, and Left-Click will place the cursor. Shift-Middle Click will allow you to pan.

At this point your image planes should already be located in the center of the view. Go ahead and roll your MouseWheel to zoom into the planes a bit more. You should notice that they just look like boring gray planes:

I thought you said we were importing images?!

To see what we’re doing in 3D View, we’ll need to get Blender to show the textures. This is easily accomplished in the toolbar for this view by changing the Viewport Shading:

Now that’s more like it!

At this point I personally like to get my camera to an initial setup as well, so zoom back out and Right-Click on your camera:

We want to reset all of the default camera transformations and rotations by setting those values to 0 (zero). This will place your camera at the origin facing down.

Now change your view to Camera View (looking through the actual camera) by hitting zero (0) on your keyboard numberpad (not 0 along the top of your alpha keys).

Yes, this zero, not the other one!

You’ll be staring at a blank gray viewport at this point. All we have to do now is move the camera back (along the Z-axis), until we can see our working area. I like to use the default Blender grid as a rough approximation of my working area.

To pull the camera back, hit the G key (this will move the active object), and then press the Z key (this will constrain movement along the Z-axis). Slowly pull your mouse cursor away from the center of the screen, and you should see the camera view getting further away from your image planes. As I said, I like to use the default grid as a rough approximation, so I’ll zoom out until I am just viewing the width of the grid:

I’ve also found that working at small scales is a little tough, so I like to scale my image planes up to roughly match my camera view/grid. So we can select all the image planes in the center of our view by pressing the B key and dragging a rectangle over the image planes.

To scale them, press the S key and move the mouse cursor away from the center again. Adjust until the images just fill the camera view:

Image planes scaled up to just fit the camera/grid

This will make the adjustments a little easier to do. Now we’re ready to start fiddling with things!

Adding Some Depth

What we have now is all of our image planes in the exact same location. What we want to do is to offset the background image further away from the camera view (and the other planes).

Right-click on your image planes. If you click multiple times you will cycle through each object under your cursor (in this case between the background/woman image planes). With your background image plane selected, hit the G key to move it, and the Z key again to constrain movement along the Z-axis. (If you find that you’ve accidentally selected the woman image plane, just hit the ESC key to escape out of the action).

This time you’ll want to move the mouse cursor towards the center of the viewport to push the background further back in depth. Here’s where I moved mine to:

We also need to scale that image plane back up so that its apparent size is similar to what it was before we pushed it back in depth. With the background image plane still selected, hit the S key and pull the mouse away from the center again to scale it up. Make it around the same size as it was before (a little bigger than the width of the camera viewport):

Keep in mind that the further back the background plane is, the more pronounced the parallax effect will be. Use a relatively light touch here to maintain a realistic sense of depth.

What’s neat at this point is that if we were not going to animate any of the image planes themselves, we would be about done. For example, if you select the camera again (Right-click on the camera viewport border) you can hit the G key and move the camera slightly. You should be able to clearly see the parallax effect of the background being behind the woman.
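The parallax falls straight out of perspective projection: a point at depth z shifts on screen by roughly focal × dx / z when the camera moves sideways by dx, so the near woman plane slides farther than the pushed-back background. A little arithmetic sketch (the focal length and depths are made-up numbers, not values from my scene):

```python
def screen_shift(camera_dx, depth, focal=1.0):
    """Apparent on-screen shift of a plane at `depth` for a sideways
    camera move of `camera_dx` (simple pinhole-projection model)."""
    return focal * camera_dx / depth

near = screen_shift(0.5, depth=5.0)    # the woman plane
far = screen_shift(0.5, depth=15.0)    # the background plane
```

With the background three times as deep, it shifts a third as much, and that relative motion is exactly what reads as depth.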

Animating the Image Plane

After Effects has a neat tool called “Puppet Tool” that let Joe easily deform his image to appear animated. We don’t have exactly such a tool in Blender at the moment, but it’s trivial to emulate the effect on the image plane using Shape Keys.

What Shape Keys do is simple. You take a base mesh, add a Shape Key, and then deform the mesh in any way you’d like. Then you can animate the Shape Key’s influence on the mesh over time. Multiple Shape Keys will blend together.
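Under the hood, a Shape Key's Value slider is just a per-vertex linear interpolation between the basis positions and the key's deformed positions. A sketch of the idea (the `blend` helper is made up for illustration; Blender does this internally):

```python
def blend(basis, key, value):
    """Interpolate each vertex between its basis position and its
    shape-key position; `value` runs from 0 (basis) to 1 (fully keyed)."""
    return [tuple(b + value * (k - b) for b, k in zip(bv, kv))
            for bv, kv in zip(basis, key)]
```

At a value of 0.5 every vertex sits halfway between the two shapes, which is why the deformation fades in smoothly as you drag the slider.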

We are going to use this function to animate our woman (as opposed to some much more complex animation abilities in Blender).

Before we can deform the woman image plane, though, we need a good mesh to deform. At the moment the woman plane contains only 4 vertices in the mesh. We are going to make this much denser before we do anything else.

We want to subdivide the image plane with the woman. So Right-click to select the woman image plane. Then hit the Tab key to change into Edit mode. All of the vertices should already be active (selected); they will all be highlighted if they are (if not, hit the A key to toggle selection of all vertices until they are):

What we want to do is to Subdivide the mesh until we get a desired level of density. With all of the vertices in the plane selected, hit the W key and choose Subdivide from the menu. Repeat until the mesh is sufficiently dense for you. In my case, I subdivided six times and the result looks like this:

If you’ve got a touch of OCD in you, you might want to remove the unused vertices in the mesh. This is not necessary, but it makes things a bit cleaner to look at. To remove those vertices, first hit the A key to de-select all the vertices. Then hit the C key to circle-select. You should see a circle select region where your mouse is. You can increase/decrease the size of the circle using your MouseWheel. Just click now on areas that are NOT your image to select those vertices:

Select all the vertices in a rough outline around your image, and press the X key to invoke the Delete menu. You can just choose Vertices from this menu. You should be left with a simpler mesh containing only your woman image. Hit the Tab key again to exit Edit mode.

Here is what things look like at the moment:

To clear a little space while I work, I am going to hide the Transform and Object Tools palettes from my view. They can be toggled on/off by pressing the N key and T key respectively.

I am also going to increase the size of the Properties panel on the right. This can be done by clicking and dragging on its edge (the cursor will change to a resize cursor):

We will want to change the Properties panel to show the Object Data for the woman image plane. Click on that icon to show the Object Data panel. You will see the entry for Shape Keys in this panel.

We want to add a new Shape Key to this mesh, so press the “+” button two times to add two new keys to this mesh (one key will be the basis, or default position, while the other will be for the deformation we want). After doing this, you should see this in the Shape Keys panel:

Now, the next time we are in Edit mode for this mesh, it will be assigned to this particular Shape Key. We can just start editing vertices by hand now if we want, but there’s a couple of things we can do to really make things much easier.

Proportional Editing Mode

We should turn on Proportional Editing Mode. This will make the deformation of our mesh a bit smoother by allowing our changes to affect nearby vertices as well. So in your 3D View press the Tab key again to enter Edit mode.

Once in Edit mode, there is a button for accessing Proportional Editing Mode. Once here, just click on Enable to turn it on.

To test things out, you can Right-click to select a vertex in your mesh, and use the G key to move it around. You should see nearby vertices being pulled along with it. Rolling your MouseWheel up or down will increase/decrease the radius of the proportional pull. Remember, to get out of the current action you can just hit the ESC key on your keyboard to exit without making any changes.

If you really screw up and accidentally make a mess of your mesh, it’s easy to get back to the base mesh again. Just hit Tab to get out of Edit mode, then in the Shape Keys panel you can hit the “−” button to remove that shape key. Just don’t forget to hit “+” again to add another key back when you want to try again.

Pivot Point

Blender lets you control where the current pivot point of any modifications you make to the mesh should be. By default it will be the median point of all selected objects, which is fine. You may occasionally want to specify where the point of rotation should be manually.

The button for adjusting the pivot point is in the toolbar of the 3D View. I’ll usually only use Median Point or 3D Cursor when I'm doing these. Remember: Left-clicking the mouse in 3D View will set the cursor location. You can leave it at Median Point for now.

To Animate!

Ok, now we can actually get to the animating of the mesh. We need to decide what we’d like the mesh to look like it’s doing first, though. For this tutorial let’s do a couple of simple animations to get a feel for how the system works. I'm going to focus on changing two things.

First we will rotate the woman’s head slightly down from its base position, and second we will rotate her arm down slightly as well.

Let’s start with rotating her head. I will use the circle-select in the 3D View again to select a bunch of vertices in the center of her head (no need to exactly select all the vertices all the way around):

In the 3D View, press the R key to rotate those vertices. With Proportional Editing turned on you should see not only your selected vertices, but nearby vertices also rotating. While in this operation, the mousewheel will adjust the radius of the proportional editing influence (the circle around my rotation in my screenshot shows where my radius was set):

Remember: hit the ESC key if you need to cancel out of any operation without applying anything. Go ahead and rotate the head down a bit until you like how it looks. When you get it where you’d like it, just Left-click the mouse to set the rotation. Subtle is the name of the game here. Try small movements at first!

Now let’s move on to rotating the arm a bit. Hit the A key to de-select all the vertices, and choose a bunch of vertices along the arm (again, I use the circle-select C key to select a bunch at once easily):

If you end up selecting a couple of vertices you don’t want, remember that you can Shift + Right-click to toggle adding/removing vertices to the selection set. For example, in my image above I didn't want to select any vertices that were too close to her face to avoid warping it too much. I also went ahead and made sure to select as many vertices around the arm as I could.

I also Left-clicked in the location you see in my screenshot to place the cursor roughly at her shoulder. For the arm I also changed the Pivot Point to the 3D Cursor, because I want the arm to pivot at a natural location.

Again, hit the R key to begin rotating the arm. If you find the rotation pulls vertices from too far away, scroll your mousewheel to decrease the radius of the proportional editing. In my example I had the radius of influence down very low to avoid warping the woman’s face too much.

As before, rotate to where you like it, and Left-click the mouse when you’re happy.

Finally, you can test how the overall mesh modifications will look with your Shape Key. Hit the Tab key to get out of Edit Mode and back into Object Mode. All of your mesh modifications should snap back to what they were before you changed anything.

Don’t Panic.

What has happened is that the mesh is currently set so that the Shape Key we were modifying has a zero influence value right now:

The Value slider for the shape key is 0 right now. If you click and drag in this slider you can change the influence of this key from 0 to 1. As you change the value you should see your woman mesh deform from its base position at 0 up to its fully deformed state at 1. Neat!

Once we’re happy with our mesh modifications, we can now move on to animating the sequence to see how things look!


So what we now want to do is to animate two different things over the course of time in the video. First we want to animate the mesh deformation we just created with Shape Keys, and second we want to animate the movement of the camera through our scene.

If you have a look just below the 3D View window, you should be seeing the Timeline window:

The Timeline window at the bottom

What we are going to do is to set keyframes for our camera and mesh at the beginning and end of our animation timeline (1-250 by default).

We should already be on the first frame by default, so let’s set the camera keyframe now. In the 3D View, Right-click on the camera border to select it (it will highlight when selected). Once selected, hit the I key to bring up the keyframe menu.

You’ll see all of the options that you can keyframe here. The one we are interested in is the first, Location. Click it in the menu. This tells Blender that at frame 1 in our animation, the camera should be located at this position.

Now we can define where we’d like our camera to be at the end of the animation. So we should move the frame to 250 in the timeline window. The easiest way to do this is to hit the button to jump to the last frame in the range:

This should move the current frame to 250. Now we can just move the location of our camera slightly, and set a new keyframe for this frame. I am going to just move the camera straight up slightly:

Once positioned, hit the I key again and set a Location keyframe.

At this point, if you wanted to preview what the animation would look like you can press Alt-A to preview the animation so far (hit ESC when you’re done).

Now we want to do the same thing, but for the Shape Keys to deform over time from the base position to the deformed position we created earlier. In the Timeline window, get back to frame 1 by hitting the jump to first frame in range button:

Once back at frame 1, take a look at the Shape Keys panel again:

Make sure the value is 0, then Right-click on the slider and choose the first entry, Insert Keyframe:

Just like with the camera, now jump to the last frame in the range. Then set the value slider for the Shape Keys to 1.000. Then Right-click on the Value slider again, and insert another keyframe.

This tells Blender to start the animation with no deformation on the mesh, and to transition to full deformation according to the Shape Key by the end. Conveniently, Blender will calculate all of the in-between vertex locations for us, giving a smooth transition.
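That in-between calculation is just linear interpolation per vertex. Here’s a rough sketch of the idea in plain Python (not Blender’s actual code; the vertex coordinates are made up for illustration):

```python
# Sketch of what a Shape Key does at a given Value: every vertex is
# linearly interpolated between the base shape (Value 0) and the
# fully deformed shape (Value 1).
def blend(base, deformed, value):
    """Return base + value * (deformed - base) for each (x, y, z) vertex."""
    return [tuple(b + value * (d - b) for b, d in zip(bv, dv))
            for bv, dv in zip(base, deformed)]

base = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]      # rest pose (hypothetical)
deformed = [(0.0, 0.5, 0.0), (1.0, 0.2, 0.3)]  # shape key fully applied
halfway = blend(base, deformed, 0.5)           # what a middle frame shows
```

With a keyframe at Value 0 on frame 1 and Value 1 on frame 250, Blender is effectively evaluating something like this for every frame in between.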

As before, now try hitting Alt-A to preview the full animation.

Congratulations, you made it!

Getting a Video Out

If you’re happy with the results, then all that’s left now is to render out the video! There are a few settings we need to specify first, though. So switch over to the Render tab in Blender:

The main settings you’ll want to adjust here are the resolution X & Y and the frame rate. I rendered out at 1280×720 at 100% and Frame Rate of 30 fps. Change your settings as appropriate.

Finally, we just need to choose what format to render out to...

If you scroll down the Render panel you’ll find the options for changing the Output. The first option lets you choose where the output file gets rendered to (I normally just leave it as /tmp - it will be C:\tmp on Windows). I also change the output format to a movie rendering type. In my screenshot it shows “H.264”; by default it will probably show “PNG”. Change it to H.264.

Once changed, you’ll see the Encoding panel become available just below it. For this test you can just click on the Presets spinner and choose H264 there as well.

Scroll back up to the top of the Render panel, and hit the big Animation button in the top center (see the previous screenshot).

Go get a cup of coffee. Take a walk. Get some fresh air. Depending on the speed of your machine it will take a while...

Once it’s finished, in your tmp directory you’ll find a file called 0001-250.avi. Fire it up and marvel at your results (or wince). Here’s the result of mine directly from the above results:

Holy crap, we made it to the end. That was really, really long.

I promise, though, that it just reads long. If you’re comfortable moving around in Blender and understand the process, this takes about 10-15 minutes once you get your planes isolated.

Well, that’s about it. I hope this has been helpful, and that I didn’t muck anything up too badly. As always, I’d love to see others results!

Reader David notes in the comments that if the render results look a little ‘soft’ or ‘fuzzy’, increasing the Anti-Aliasing size can help sharpen things up a bit (it’s located on the render panel just below render dimensions). Thanks for the tip David!

Help support the site! Or don’t!
I’m not supporting my (growing) family or anything from this website. Seriously.
There is only one reason I am writing these tutorials and posts:
I love doing it.
Technically there is a second reason: to give back to the community. Others before me were instrumental in helping me learn things when I first got started, and I’m hoping to pay it forward here.

If you want to visit an ad, or make a donation, or even link/share my content, I would be absolutely grateful (and tickled pink). If you don’t it’s not going to affect me writing and posting here one bit.

I’ll keep writing, and I’ll keep it free.
If you get any use out of this site, I only ask that you do one thing:
pay it forward.

June 21, 2014

Mirror a website using lftp

I'm helping an organization with some website work. But I'm not the only one working on the website, and there's no version control. I wanted an easy way to make sure all my files were up to date before I start to work on one ... a way to mirror the website, or at least specific directories, to my local disk.

Normally I use rsync -av over ssh to mirror directories, but this website is on a server that only offers ftp access. I've been using ncftp to copy files up one by one, but although ncftp's manual says it has a mirror mode and I found a few web references to that, I couldn't find anything telling me how to activate it.

Making matters worse, there are some large files that I don't need to mirror. The first time I tried to use get * in ncftp to get one directory, it spent 15 minutes trying to download a huge powerpoint file, then stalled and lost the connection. There are some big .doc and .docx files, too. And ncftp doesn't seem to have a way to exclude specific files.

Enter lftp. It has a mirror mode (with documentation, even!) which includes a -X to exclude files matching specified patterns.

lftp includes a -e to pass commands -- like "mirror" -- to it on the command line. But the documentation doesn't say whether you can use more than one command at a time. So it seemed safer to start up an lftp session and pass a series of commands to it.

And that works nicely. Just set up the list of directories you want to mirror, and you can write a nice shell function to put in your .zshrc or .bashrc:

sitemirror() {
  commands=""
  for dir in thisdir thatdir theotherdir
  do
    commands="$commands
mirror --only-newer -vvv -X '*.ppt' -X '*.doc*' -X '*.pdf' htdocs/$dir $HOME/web/webmirror/$dir"
  done

  echo Commands to be run:
  echo $commands

  # Fill in your own server, user and password here:
  lftp <<EOF
open -u 'user,password' ftp.example.com
$commands
bye
EOF
}

Super easy -- all I do is type sitemirror and wait a little. Now I don't have any excuse for not being up to date.

June 19, 2014



You know. In case you needed a Heisenbug. Here he is.

The SVG source is available on the page for the design.

June 18, 2014

Fuzzy house finch chicks

[house finch chick] The wind was strong a couple of days ago, but that didn't deter the local house finch family. With three hungry young mouths to feed, and considering how long it takes to crack sunflower seeds, poor dad -- two days after Father's Day -- was working overtime trying to keep them all fed. They emptied my sunflower seed feeder in no time and I had to refill it that evening.

The chicks had amusing fluffy "eyebrow" feathers sticking up over their heads, and one of them had an interesting habit of cocking its tail up like a wren, something I've never seen house finches do before.

More photos: House finch chicks.

New Krita Videos – and a Kickstarter to help Krita

Ramón Miranda has published four short Krita tutorial videos in support of the Kickstarter campaign to accelerate Krita development.

Krita is a digital painting application for artists, created by artists, and is available for Linux and Windows. A Mac OS version may come in the future; it’s one of the Kickstarter’s overfunding goals.

If you like Krita, throw some money at them. Money is so impersonal, but it helps a lot. ;-) The 24 intended goals are on the Kickstarter page – and they look really worth having!

Krita Kickstarter

Krita tip 01. Monotone image with Transparency Masks

Krita tip 02. Color curves with filter masks

Krita tip 03. Texturize your images. The easy way

Krita tip 04 Digital color mixer

Don’t forget that I’ll give away a copy of the Muses DVD by Ramón Miranda – if you want it, put a comment below Episode 199!

Episode 200 is under way; progress reports are at the top of the sidebar!


Krita: "Edit Global Selection" feature

Our Kickstarter project is over 50% of the goal now! To say "Thank you!" to our supporters we decided to implement another small feature in Krita: editing of the global selection. Now you can view, transform, paint and do whatever you like on any global selection!

To activate the feature, just go to the main menu and check Selection->Show Global Selection Mask. Now you will see your global selection as a regular mask in the list of layers in the docker. Activate it and do whatever you want with it. You can use any Krita tool to achieve your goal, from the trivial Brush tool to the complex Transform or Warp tool. When you are done, just deactivate the check box to save precious space in the Layers docker.

Add usual global selection

Deform with Warp Tool and paint with Brush Tool

This feature (as well as the Isolated Mode one) was a really low-hanging fruit for us. It was possible to implement thanks to a huge refactoring we did in the selections area about two years ago; adding it was only a matter of extending existing functionality. It is really a pity that the other features from the Kickstarter list cannot be implemented so easily :) Now I'm going to dig deep into the Vanishing Points and Transformations problem. Since Saturday I've been trying to struggle through the maths of it, but with limited success...

The next Windows builds and Krita Lime packages will include this feature!

And don't forget to tell your friends about our Kickstarter!

Last week in Krita — week 23 & 24

Even with the effort of designing, launching and running the Kickstarter, we haven’t stopped developing!

In the last two weeks, besides the coding work in the git repositories, Boudewijn has made available a hefty number of testing builds for the Windows community. These builds bring the latest novelties and features developed in the master branch. Note, however, that not all feature sets are finished, and the builds are not recommended for production use. Get the bleeding edge build

In other development: the community is slowly building a new site for Krita. Planning and designing have been done mainly through the forums. Still in the early stages (mock-ups and concept designs), the project is shaping up for a brilliant future for the website. Join the forums and take part in the community!

It’s time to review the hard work committed over these last two weeks.

This week’s new features:

This was a busy period. Boudewijn worked pretty hard to improve the file saving dialog behavior. This is much more difficult than it sounds, since every system uses a different file open/save dialog, each working slightly differently; because of those differences, the implementation needs to take into account how each one asks for and returns data. The changes should make the dialog behave as expected on all systems.

Dmitry, aside from fixing many bugs, enhanced the stroke smoothing options by adding a scaling factor to the weight option. This allows the weighted smoothing to behave exactly the same at different zoom levels.

New features:

  • Improvements to the clone tool. (Boudewijn Rempt)
  • Scalable Distance option in Weighted Smoothing. (Dmitry Kazakov)
  • Make it possible to zoom in all resource selectors, like brushes, presets, gradients (Sven Langkamp)
  • Make the grabbing area for Transform Tool handles twice as large as the handles themselves. (Dmitry Kazakov)
  • Add image sizes and textures for games. (Boudewijn Rempt and Paul Geraskin)
  • Restore the new view action to the view menu. (Boudewijn Rempt)
  • Add Y + mouse click + movement shortcut for Exposure correction. (Dmitry Kazakov)
  • Add shortcuts to switch between painting blending modes. (Boudewijn Rempt)

Gamma and exposure new cursors

General bugfixes and features

  • FIX #334933: Make the grabbing area for Transform Tool handles twice as large as the handles themselves. (Dmitry Kazakov)
  • FIX #335834: Added a Scalable Distance option to Weighted Smoothing. (Dmitry Kazakov)
  • FIX #335647: Fix painting checkers on openGL mode. (Dmitry Kazakov)
  • FIX #335649: Fix hiding the brush outline when the cursor is outside the canvas. (Dmitry Kazakov)
  • FIX #335660: Use flake/always as activation id for Krita tools. (Sven Langkamp)
  • FIX #335746: Floating message block input under Gnome. (Boudewijn Rempt)
  • FIX #335745: Hide Pseudo Infinite canvas decoration when in Wraparound mode. (Dmitry Kazakov)
  • FIX #335670: Artistic Color Selector lightstrip display non working when floating. (Boudewijn Rempt)
  • FIX #331358: Fix artifacts when rotation stylus sensor is activated. (Dmitry Kazakov)
  • FIX #332773: Add option to disable touch capabilities of the Wacom tablets on canvas only mode. To activate, in kritarc, please add disableTouchOnCanvas=true. (Dmitry Kazakov)
  • FIX #335048: Rename stable category to Brush engines. (Sven Langkamp)
  • Created new cursors for Exposure/Gamma gestures. (David Revoy)
  • Fix “Move layer into group” icon design. (Timothée Giet)
  • Fix file dialog filter types and use native dialogs when possible. (Boudewijn Rempt)
  • Tweak mirror axes display and default position of move handles. (Arjen Hiemstra)

Clone tool improvements

The Clone tool has changed; it now makes it possible to clone from one layer to a different one. This works as follows:

  • [CTRL + CLICK] first time: defines the source layer and source area to clone. You can change layers after this to clone to a different layer.
  • [CTRL + CLICK]: adjusts the source coordinates. Note that the source clone layer remains the same.
  • [CTRL + ALT + CLICK]: changes the source layer and source area to clone.

Layer blending modes shortcuts

Following Photoshop’s default shortcuts for blending modes, the developers added the same shortcuts to Krita for painting blending modes. This is awesome, as there is no loss of focus during painting sessions just to change the brush blending mode.

To promote the use of the newly added shortcuts, here is a list for you: Painting blending mode shortcuts

File dialogs

File dialogs were reworked to behave as the user expects. Boudewijn worked, amongst other things, to ensure dialogs remember the last used directory and to ensure the correct format output on any system. It’s no longer necessary to select the desired format from the list; it’s enough to write the extension after naming the file for Krita to know exactly what format you want to save the file to. Many other enhancements and tweaks were also necessary to make it possible to work with native dialogs on the main systems: KDE, GNOME and Windows.

Krita Gemini and Sketch

Krita Gemini and Krita Sketch received much love to make them run smoothly and consistently on all systems; most changes were in the underlying code to optimize it and prepare it for further development. Some changes to note:

  • Use the desktop dialog for opening images from the welcome screen. (Arjen Hiemstra)
  • Fix colouring of the layer controls and tweak them to look good. (Arjen Hiemstra)
  • Properly disable warnings about floating messages. (Arjen Hiemstra)
  • Add duplicate/clear layer buttons to the Layer panel. (Arjen Hiemstra)
  • Add a clear and clone method to LayerModel. (Arjen Hiemstra)
  • Enable floating messages for Gemini and disable them properly for sketch. (Arjen Hiemstra)
  • Fix the save page and reduce the number of errors/warnings. (Arjen Hiemstra)

Code optimization and cleanup

The ongoing process of optimization continues with a huge number of developer commits. It might not look like much when reading the git logs, but renaming, moving and getting rid of old naming schemes paves the way for bigger changes. I won’t list them all, since you would probably get bored; let’s just say the team did a lot of housekeeping these past two weeks.


We're now over 50% funded for the basic goal of having Dmitry work on Krita for another six months. But it's time to put in some sprinting to get to the stretch goals! Help Krita by spreading the word:

June 17, 2014

Netflix Top 50 Covers by Genre (Averaged & Normalized)

In my (apparently) never-ending quest to average all the things, I happened to be surfing around Netflix the other evening looking for something to watch. Then a little light bulb went off!

I had previously blended many different variations of movie posters with varying success, but figured it might be interesting to see mean blends based on Netflix genres (and suggestions for me to watch). So, here are my results across a few different genres:
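The blending itself is just a per-pixel mean across all the covers in a genre. A minimal sketch of the arithmetic in plain Python on raw pixel lists (the actual covers were of course processed with real image tools; `mean_blend` is a hypothetical helper for illustration):

```python
def mean_blend(images):
    """Average a set of equal-sized images, where each image is a flat
    list of (r, g, b) tuples; returns the per-pixel integer mean."""
    n = len(images)
    return [tuple(sum(img[i][c] for img in images) // n for c in range(3))
            for i in range(len(images[0]))]

# Two one-pixel "images": pure black and a warm tone average to their midpoint.
result = mean_blend([[(0, 0, 0)], [(200, 100, 50)]])
```

Run over a few dozen covers per genre, dominant colors (like that Teal/Orange grading) survive the averaging while everything else washes out.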

I found a couple of surprising and interesting things in these results...

For instance, I am not surprised at the prevalence of Teal/Orange in Sci-Fi covers. I was surprised to find that the other genre with such prominent color grading happened to be Comedies (I would have guessed Thrillers or Action).

I also didn't think that Romance would look like such a hot mess (there's a cosmic sign there, I think). I can also sort of make out abstract faces in Thrillers and Action. Horror is relatively tame by comparison!

Title location seems mostly consistent across genres, though there's a marked preference for top-centered titles in Comedies, apparently. You can also see that MST3K must have at least a few titles in my Sci-Fi list (a fact of which I am proud).

I may finish my work on the movie posters and post them in the future. In the meantime, here is one that I did blending all of the movie posters that the legendary Saul Bass created:

In case you're curious, here's the list of the movies these posters are from:

The Shining (1980)
Such Good Friends (1971)
Bunny Lake is Missing (1965)
In Harm's Way (1965)
The Cardinal (1963)
It's a Mad, Mad, Mad, Mad World (1963)
Advise & Consent (1962)
Bird Man of Alcatraz (1962)
One, Two, Three (1961)
Exodus (1960)
Anatomy of a Murder (1959)
North by Northwest (1959)
Vertigo (1958)
Love in the Afternoon (1957)
The Man With the Golden Arm (1955)

DNF vs. Yum

A lot has been said on fedora-devel in the last few weeks about DNF and Yum. I thought it might be useful to contribute my own views, considering I’ve spent the last half-decade consuming the internal Yum API and the last couple of years helping to design the replacement with about half a dozen members of the packaging team here at Red Hat. I’m also a person who unsuccessfully tried to replace Yum completely with Zif in Fedora a few years ago, so I know quite a bit about packaging systems and metadata parsing.

From my point of view, the hawkey depsolving library that DNF is designed upon is well designed, optimised and itself built on a successful low-level SAT library that SUSE has been using for years on production level workloads. The downloading and metadata parsing component used by DNF, librepo, is also well designed and complements the hawkey API nicely.

Rather than use the DNF framework directly, PackageKit uses librepo and hawkey to share 80% of the mechanism between PK and DNF. From what I’ve seen of the DNF codebase it’s nice, with unit tests and lots of the older compatibility cruft removed; the only reason it’s not used in PK is that the daemon is written in C and we didn’t want to marshal everything via Python for latency reasons.

So, from my point of view, DNF is a new command line tool built on 3 new libraries. Its history may be that of a fork from yum, but it resembles more a 2014 rebuilt American hot-rod with all-new motor-sport parts apart from the modified and strengthened 1965 chassis. Renaming DNF to Yum2 would send entirely the wrong message; it’s a new project with a new team and new goals.

Linux Pro Magazine GIMP Handbook Special Edition

I received a few Promo copies of the Linux Pro Magazine GIMP Handbook Special Edition in the mail a couple of weeks ago. I’ve just been too busy to sit down and have a look at it in depth.

I did give it a few flip-throughs when I had a moment, though.

This is apparently a “Special Edition” magazine aimed entirely at GIMP and using it. As such, it’s mostly (unlike my blog) ad free. So that means there are over 100 full-color pages of good content!

It’s broken down nicely by sections:

  • Get Started
    • Compiling GIMP
    • GIMP 2.8 and 2.9
    • User Interfaces
    • Settings
    • Basic Functions
  • Highlights
    • Basic Functions
    • Colors
    • Light and Shadow
    • Animation with GIMP
  • Practice
    • Layers
    • Selection
    • Colors
    • Paths
    • Text and Logos
  • Photo Processing
    • Sharpening
    • Light and Shadow
    • Retouch
    • GIMP Maps
  • Know How
    • G'MIC
    • UFRaw
    • Painting
    • Fine Art HDR Processing
    • Animation with GIMP

The sections appear well written and are clearly laid out. They cover external resources where it makes sense as well (referencing the GIMP Registry quite a few times for some of the more useful and popular scripts).

As an image editing program, GIMP provides a variety of useful and advanced functions for any possible task. Before you use them, however, you should be aware of some simple things. 
— Get Started: Basic Functions

The writing is nice and clear, and there is a good range of topics covered that should get most beginners on track for exploring further in GIMP.

For example, in the “Photo Processing” section, while discussing Sharpening an image, they make mention of most of what users are likely to need (and some more obscure ones): Standard Sharpening, Unsharp Mask, High-Pass Sharpening, Wavelet Sharpen, and more.

The instructions are clearly presented, and having all full-color pages really helps as there are many examples showing the concepts. There’s also some neat coverage of associated/support programs as well like G'MIC, UFRaw, and LuminanceHDR!

There’s a bundled DVD included that will boot to ArtistX linux as well (and includes GIMP install files for OSX/Win).

All in all, quite a bit of content for $15.99.

To sweeten the deal, the nice folks at LinuxPro Magazine are offering a discount code to save 20% off the price!

The discount code is: GIMP20DISCOUNT and it’ll be valid through July 15th.

Of course... A Give-Away!

I do have a few copies they sent to me as promotional copies. I thought it might be more fun to go ahead and pay them forward to someone else who might get more use out of them than me.

So, all you need to do is either leave a comment below, or share any blog post of mine on Twitter or Google+ with the hashtags #patdavid #GIMP.

You can also email me with "GIMP Giveaway" in the subject.

If you do enter, make sure I have a way to reach you.

Oh, and disclaimer: I’ve never done a give away for anything before, so please bear with me if I suck at it...

I’ll sift through the hashtags and whatnot later next week and randomly pick three folks to send these copies to!

I chose the winners earlier this week. So congratulations to Stan, Anand, and Doug! I'll be dropping your magazines in the mail this week!


June 16, 2014

datarootdir vs. datadir

Public Service Announcement: Debian helpfully defines datadir to be /usr/share/games for some packages, which means that the AppData and MetaInfo files get installed into /usr/share/games/appdata which isn’t picked up by the metadata parsers.

It’s probably safer to install the AppData files into $datarootdir/appdata as this will work even if a distro has redefined datadir to be something slightly odd. I’ve changed the examples on the AppData page, but if you maintain a game on Debian with AppData then this might affect you when Debian starts extracting AppStream metadata in the next few weeks. Anyone affected will be getting email in the next few days, although it only looks to affect very few people.

Krita team starts implementing features declared for the Kickstarter!

During the first week of our Kickstarter campaign we collected more than 6500 eur, which is about 43% of the goal. That is quite a good result, so we decided to start implementing the features right now, even though the campaign is not finished yet :)

I started to work on three low-hanging fruits in parallel: perspective transformation of the image based on vanishing points, editing of the global selection, and an enhanced Isolate Layer mode.

It turned out that the vanishing point problem is not so "low-hanging" as I thought in the beginning, so right now I'm in the middle of searching for the mathematical solution: to provide this functionality to the user, the developer must find the right perspective transformation matrix based only on points in the two coordinate systems. This problem is the inverse of what everyone is accustomed to solving at school :) It is quite an interesting and challenging task and I'm sure we will solve it soon!
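For the curious, the textbook route to that matrix is the direct linear transform: four point correspondences give eight linear equations in the eight unknown entries of the 3x3 matrix (fixing the bottom-right entry to 1). A rough sketch of the math in plain Python, assuming exactly four correspondences (this is not Krita's code, and `homography_from_points`/`apply_h` are hypothetical names):

```python
def solve(A, b):
    """Gaussian elimination with partial pivoting, pure Python."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def homography_from_points(src, dst):
    """Find H = [[a,b,c],[d,e,f],[g,h,1]] mapping each src (x,y) to dst (u,v)."""
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y])
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y])
        b += [u, v]
    h = solve(A, b)
    return [h[0:3], h[3:6], [h[6], h[7], 1.0]]

def apply_h(H, pt):
    """Apply H to a 2D point, dividing by the projective coordinate."""
    x, y = pt
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return ((H[0][0] * x + H[0][1] * y + H[0][2]) / w,
            (H[1][0] * x + H[1][1] * y + H[1][2]) / w)
```

Each correspondence contributes two rows because u = (ax + by + c) / (gx + hy + 1) and similarly for v; multiplying through by the denominator makes the equations linear in the unknowns.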

Until I find the solution to the maths task, I decided to work on small enhancements to our Isolate Layer mode. We have had this feature for a long time, but it was too complicated to use, because the only way to activate it was to select "Isolate Layer" in the right-click menu. Now this problem is gone! You can just press the Alt key and click on the layer or mask in the Layers docker. This enables many great use cases for artists, which can now be done with a single Alt-click:
  • Show/Edit the contents of a single layer, without other layers interfering
  • Show/Edit the masks. This includes Selection Masks, so you can edit non-global selections very easily (modifying global selections will be simplified later)
  • Inspect all the layers you have by simply switching between them and looking at their contents in an isolated environment

A comic page by Timothée Giet

There are several more things we are planning to do with the Isolated Mode like adding a bit of optimizations, but that is not for today :) 

The next packages in Krita Lime will include this new feature. They are now building at Launchpad and will be available tonight!

And don't forget to spread the word about our Kickstarter!

Fanart by Anastasia Majzhegisheva – 10

Here comes one more illustration for backstory.

Postcard Artwork by Anastasia Majzhegisheva

Artwork by Anastasia Majzhegisheva

I don’t remember if I mentioned it or not, but a few months ago Anastasia switched to Krita for all her latest artworks. Krita is an awesome open-source painting application for artists, and they are running a Kickstarter campaign now to make it even better!

June 15, 2014

Vim: Set wrapping and indentation according to file type

Although I use emacs for most of my coding, I use vim quite a lot too, for quick edits, mail messages, and anything I need to edit when logged onto a remote server. In particular, that means editing my procmail spam filter files on the mail server.

The spam rules are mostly lists of regular expression patterns, and they can include long lines, such as:
gift ?card .*(Visa|Walgreen|Applebee|Costco|Starbucks|Whitestrips|free|Wal.?mart|Arby)

My default vim settings for editing text, including line wrap, don't work well if I get a flood of messages offering McDonald's gift cards and decide I need to add a "|McDonald" to the end of that long line.
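Before dropping an edit like that into the live filter, it's worth sanity-checking the pattern. Here's a rough check in Python (procmail itself uses egrep-style, case-insensitive matching by default, so re.IGNORECASE only approximates it):

```python
import re

# The spam rule's pattern from above, with the new "|McDonald"
# alternative appended.
pattern = re.compile(
    r"gift ?card .*(Visa|Walgreen|Applebee|Costco|Starbucks|"
    r"Whitestrips|free|Wal.?mart|Arby|McDonald)",
    re.IGNORECASE,
)
```

A couple of quick searches against sample subject lines will confirm the new alternative matches without breaking the old ones.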

Of course, I can type ":set tw=0" to turn off wrapping, but who wants to have to do that every time? Surely vim has a way to adjust settings based on file type or location, like emacs has.

It didn't take long to find an example of Project specific settings on the vim wiki. Thank goodness for the example -- I definitely wouldn't have figured that syntax out just from reading manuals. From there, it was easy to make a few modifications and set textwidth=0 if I'm opening a file in my procmail directory:

" Set wrapping/textwidth according to file location and type
function! SetupEnvironment()
  let l:path = expand('%:p')
  if l:path =~ '/home/akkana/Procmail'
    " When editing spam filters, disable wrapping:
    setlocal textwidth=0
autocmd! BufReadPost,BufNewFile * call SetupEnvironment()

Nice! But then I remembered other cases where I want to turn off wrapping. For instance, editing source code in cases where emacs doesn't work so well -- like remote logins over slow connections, or machines where emacs isn't even installed, or when I need to do a lot of global substitutes or repetitive operations. So I'd like to be able to turn off wrapping for source code.

I couldn't find any way to just say "all source code file types" in vim. But I can list the ones I use most often. While I was at it, I threw in a special wrap setting for mail files:

" Set wrapping/textwidth according to file location and type
function! SetupEnvironment()
  let l:path = expand('%:p')
  if l:path =~ '/home/akkana/Procmail'
    " When editing spam filters, disable wrapping:
    setlocal textwidth=0
  elseif (&ft == 'python' || &ft == 'c' || &ft == 'html' || &ft == 'php')
    setlocal textwidth=0
  elseif (&ft == 'mail')
    " Slightly narrower width for mail (and override mutt's override):
    setlocal textwidth=68
    " default textwidth slightly narrower than the default
    setlocal textwidth=70
autocmd! BufReadPost,BufNewFile * call SetupEnvironment()

As long as we're looking at language-specific settings, what about doing language-specific indentation like emacs does? I've always suspected vim must have a way to do that, but it doesn't enable it automatically like emacs does. You need to set three variables, assuming you prefer to use spaces rather than tabs:

" Indent specifically for the current filetype
filetype indent on
" Set indent level to 4, using spaces, not tabs
set expandtab shiftwidth=4

Then you can also use useful commands like << and >> for in- and out-denting blocks of code, or == for indenting to the right level. It turns out vim's language indenting isn't all that smart, at least for Python, and gets the wrong answer a lot of the time. You can't rely on it as a syntax checker the way you can with emacs. But it's a lot better than no language-specific indentation.

I will be a much happier vimmer now!

June 14, 2014

Krita Sprint 2014


The first Krita sprint I attended was back in 2010, in Deventer, Netherlands. We were hosted by Krita maintainer Boudewijn Rempt. The main topic back then was to define the product vision for Krita. It was a great idea to invite Peter Sikking, who helped us define a very cool vision for Krita. If you have a vision, you can decide easily what to include in your application, what to implement, and what can be provided by external applications. It helps you decide whether or not to agree with a user complaining in a bug report.

The next Krita sprint was in 2011. The location was very cool: the Blender Institute in Amsterdam, Netherlands. For me the most important moment of that sprint was when artists (Timothée, David Revoy, …) demonstrated how they use Krita. It helped us identify the biggest problems and focused us on polishing the parts of Krita that were causing problems for professional painters. We have fixed those issues since then; Krita has changed a lot, and it is used by professionals these days!

Painting session on the roof

Timothee painted me in his painting session on the roof

We failed to organize a Krita sprint in 2012 in Bilbao; Boud and I were too busy. We probably did not plan a Krita sprint in 2013 for the very same reason.


Krita sprint 2014 was held in Deventer, the Netherlands again. We were hosted by Boud and Irina, and it was as great as back in 2010. I arrived on Friday, the 16th of May, after travelling for a little more than 11 hours on public transport. Train, tram, bus, airplane and train again, and there I was! When I plan such a journey I tend to add time padding so I have enough slack for connections; that's why it takes so long. My traveling agony was eased by Wi-Fi almost everywhere!

The main discussion took place on Saturday and is nicely described in the Dot KDE article. I was personally wondering how the Krita Foundation is doing. So far the Krita Foundation is not able to employ all Krita hackers full-time, but I hope one day it will be possible!

One of the steps towards that goal is the Krita Kickstarter: it will allow our community developers Dmitry and Sven to work on Krita full-time for a longer period. I hope it will be a huge success, and maybe the next fundraiser will allow more community developers (e.g. me :) ) to work on Krita full-time again. Well, it is more complicated for me now that I have my own family, a wife and a little son, to care about.

There are also full-time developers who work on Krita projects commercially at KO GmbH: Boud, the maintainer of Krita, and a few employees who entered Krita development through KO GmbH: Stuart, Leinir and Arjen. The first two were also present at the sprint! They work mostly on Krita Sketch, Krita Gemini and Krita on Steam, and this commercial work feeds them. All of their work on Krita is open-source development!

We had a nice lunch outside together on Saturday. Then we had hacking time till dinner and after dinner. I was working on the G'MIC integration in Krita, this time fixing problems with filter definition updates over the internet. Then I spent some time with Dmitry discussing problems I faced when integrating G'MIC into Krita. There are some use-cases that our processing framework, responsible for multi-threaded processing of layers in Krita, does not account for; I suppose that is because we had not focused much on image filters before. Krita is a painting application, and filters are usually needed for photo manipulation. But painters also need to post-process their paintings, and that's why I'm working on the G'MIC integration.

Sunday was dedicated to artists: Steven, Timothee and Wolthera showed us how they use Krita, what they struggle with and what could be better. Observing artists using Krita allowed us to see where Krita stands right now. In 2011 it was not yet suitable for professional painting; in 2014 we could see that Krita has made it into the professional league of painting applications.

We also spent some time just hanging around and discussing things around Krita. We had a nice walk to the park, spent time on the roof of our host's place, and walked around the town. It was great to see old friends and meet some new faces.

Hackstreet boys

We also got some presents from the Krita Foundation: nice t-shirts for everybody, and one t-shirt made specially for my little son (thank you!). I brought something for Irina and Boud: some Slovak cheese products and Slovak wine. The oštiepok was especially enjoyed by Boud and Irina.

Big thank you goes to Boudewijn and Irina for excellent hosting and to KDE e.V. for making the sprint possible.

Related Krita sprint 2014 blogs:
Dot KDE Article:

Adwaita 3.14

Now that the controversial 3.12 tab design has been validated by Apple, we’re ready to tackle new challenges with the widgetry™.

Adwaita has grown into a fairly complex theme. We make sure unfocused windows are less eye-grabbing (flat). We provide a less light-polluting variant for visually heavy content apps (Adwaita:dark). And last but not least, we provide a specific widget style for overlay controls (OSD). All this complexity has made Adwaita quite a challenge to maintain and evolve. Since we were to relocate Adwaita directly into gtk+, we had to bite the bullet and perform quite some surgery on it.

There are a number of improvements we aimed to achieve. Limiting the number of distinct colors and deriving most colors makes it easier to adjust the overall feel of the theme, and I'm sure 3rd-party themers will enjoy this too. Not relying on image assets for the majority of the drawing makes the workflow much more flexible as well. Many of the small graphical elements now make use of icon theme assets, so they remain recolorable based on context, similar to how text is treated.

Benjamin has been working hard to move the theme closer to the familiar CSS box model, further minimizing the reliance on odd property hacks and engines (Adwaita no longer makes use of any engine drawing).

We still rely on some image assets, but even that is much more manageable with SASS.

Anything gtk related never happens without the giant help from Matthias, Cosimo and Benjamin, but I have to give extra credits to Lapo Calamandrei, without whom these dark caverns would be impossible for me to enter. Another major piece that I’m grateful for living right inside the toolkit, ready to be brought up any time, is the awesome inspector. Really happy to see it mature and evolve.

June 13, 2014

glibc select weakness fixed

In 2009, I reported this bug to glibc, describing the problem that exists when a program uses select and has its open file descriptor resource limit raised above 1024 (FD_SETSIZE). If a network daemon starts using the FD_SET/FD_CLR glibc macros on fd_set variables with descriptor numbers at or above 1024, glibc will happily write beyond the end of the fd_set variable, producing a buffer overflow condition. (This problem had existed since the introduction of the macros, so, for decades? I figured it was long overdue to have a report opened about it.)

At the time, I was told this wasn’t going to be fixed and that “every program using [select] must be considered buggy.” Two years later, still more people kept asking for this to be fixed, and continued to be told “no”.

But, as it turns out, a few months after the most recent “no”, it got silently fixed anyway, with the bug left open as “Won’t Fix”! I’m glad Florian did some house-cleaning on the glibc bug tracker, since I’d otherwise never have noticed that this protection had been added to the ever-growing list of -D_FORTIFY_SOURCE=2 protections.

I’ll still recommend everyone use poll instead of select, but now I won’t be so worried when I see requests to raise the open descriptor limit above 1024.
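To illustrate the recommendation, here is a minimal sketch (mine, not from the original report) of waiting for readability with poll. Unlike select()'s fixed-size fd_set bitmask, poll() takes an array of struct pollfd, so descriptor numbers at or above 1024 carry no risk of a buffer overflow:

```c
/* A sketch (not from the post): waiting for one descriptor to become
 * readable with poll() instead of select(). Assumes POSIX. */
#include <poll.h>

/* Returns >0 if fd is readable, 0 on timeout, -1 on error. */
int wait_readable(int fd, int timeout_ms)
{
    struct pollfd p = { .fd = fd, .events = POLLIN };
    return poll(&p, 1, timeout_ms);
}
```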

© 2014, Kees Cook. This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 License.
Creative Commons License

New in Krita: Painting with Exposure and Gamma

Running a Kickstarter campaign can be quite exhausting! But that doesn't mean that coding stops. Here is one new Krita 2.9 feature that we prepared earlier: painting with exposure and gamma on HDR images. HDR (high dynamic range) images have a greater dynamic range of light than ordinary images. If you make one with your camera, you combine a set of shots of the same subject taken at different exposures. If you want to create an HDR image from scratch, you can do so in Krita by selecting the 16- or 32-bit float RGB colorspace.

HDR images are rendered on your decidedly non-HDR monitor by picking an exposure level and showing what would be visible at that level. That's something Krita has been able to do since 2005! With version 2.7, Krita started supporting the VFX industry's standard library, OpenColorIO. But it was always hard to select a color for a particular exposure. Not anymore!

Check out this video by Timothee Giet showing off painting with exposure and gamma in Krita:


Here's how it works, technically:

Krita has two methods of rendering the image on screen: internal and using Open Color IO.

  1. Internal. In this mode the image data is converted into the display profile that the user configured in the Settings->Color Management dialog. It ensures that all the colors of the image color space are displayed correctly on screen. To use this mode you need to:
    1. Generate an ICC profile for your monitor using any hardware calibration device available on the market
    2. Load the “Video Card Gamma Table” part of the generated profile (the vcgt ICC tag) into the LUT of your video card; you can use ‘xcalib’ to do that
    3. Choose the profile in the Settings->Color Management dialog
  2. Open Color IO mode. In this mode Krita does not do any internal color correction for the displayed image. Instead it passes the raw image data to the OCIO engine, which handles the color conversions and color proofing itself. To configure the OCIO pipeline, one should:
    1. Get the configuration here.
    2. Either set the $OCIO environment variable to the path of your config directory,
    3. or select the path to the configuration manually in the Krita docker.
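The environment variable step can be sketched like this (the path is an assumption; point it at wherever your config.ocio actually lives):

```shell
# Point OCIO-aware applications (including Krita's OCIO mode) at a config file.
# The path below is just an example.
export OCIO="$HOME/ocio-configs/nuke-default/config.ocio"
echo "Using OCIO config: $OCIO"
```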


Smoking Figure by Timothée Giet


Landscape by Wolthera van Hövell tot Westerflier

Using Exposure and Gamma for painting

Now, when using the Open Color IO engine (even in “Internal” mode), one can paint on the image while switching the exposure level on the fly. Just press the ‘Y’ key and drag your mouse upward or downward, and the amount of light will increase or decrease! Creating High Dynamic Range images has never been so easy! This feature can be used for prototyping scenes that are going to have dynamic light, for example:

A bomb goes off, and the scene becomes filled with light! Sure enough, the shadow areas are now well-lit areas with lots of detail!

The characters are in a room. Suddenly the light goes out and the viewer can see only eyes, fire and cigarettes glowing here and there. The highlight areas are now well-detailed.

Color Selectors and Open Color IO

The good news: the color selectors in Krita now know not only about your monitor profile, but also about the exposure, gamma and Open Color IO proofing settings! So they will show you each color exactly as it will look on the canvas!

Internally, a selector works with the HSV values you expect to see on screen. It takes these values, applies exposure and gamma correction, runs the color management routines and displays the result. This means you don't have to think about the current exposure value or color space constraints: you will always see the color exactly as it will be painted on the canvas!

That is not all! After you choose a color as the active one, changing the exposure will not alter its visual representation.

Giving it a try

Krita Lime packages for Saucy and Trusty contain this feature (older versions of Ubuntu do not have OpenColorIO). Windows users can get the latest Krita build. Take care! These builds are made directly from the development branch and may contain features (like the resource manager) that are not finished yet. Not everything will work, though development builds are usually stable enough for daily use.

Jestedska Odysea Longboard

Some shots with the gopro from last weekend. Music by LuQuS.

Jestedska Odysea Longboard from jimmac on Vimeo.

June 12, 2014

Comcast actually installed a cable! Or say they did.

The doorbell rings at 10:40. It's a Comcast contractor.

They want to dig across the driveway. They say the first installer didn't know anything, he was wrong about not being able to use the box that's already on this side of the road. They say they can run a cable from the other side of the road through an existing conduit to the box by the neighbor's driveway, then dig a trench across the driveway to run the cable to the old location next to the garage.

They don't need to dig across the road since there's an existing conduit; they don't even need to park in the road. So no need for a permit.

We warn them we're planning to have driveway work done, so the driveway is going to be dug up at some point, and they need to put it as deep as possible. We even admit that we've signed a contract with CenturyLink for DSL. No problem, they say, they're being paid by Comcast to run this cable, so they'll go ahead and do it.

We shrug and say fine, go for it. We figure we'll mark the trench across the driveway afterward, and when we finally have the driveway graded, we'll make sure the graders know about the buried cable. They do the job, which takes less than an hour.

If they're right that this setup works, that means, of course, that this could have been done back in February or any time since then. There was no need to wait for a permit, let alone a need to wait for someone to get around to applying for a permit.

So now, almost exactly 4 months after the first installer came out, we may have a working cable installed. No way to know for sure, since we've been happily using DSL for over a month. But perhaps we'll find out some day.

The back story, in case you missed it: Getting cable at the house: a Comcast Odyssey.

June 11, 2014

Application Addons in GNOME Software

Ever since we rolled out the GNOME Software Center, people have wanted to extend it to do other things. One thing that was very important to the Eclipse developers was a way of adding addons to the main application, which seems a sensible request. We wanted to make this generic enough so that it could be used in gedit and similar modular GNOME and KDE applications. We’ve deliberately not targeted Chrome or Firefox, as these applications will do a much better job compared to the package-centric operation of GNOME Software.

So. Do you maintain a plugin or extension that should be shown as an addon to an existing desktop application in the software center? If the answer is “no” you can probably stop reading, but otherwise, please create a file something like this:

<?xml version="1.0" encoding="UTF-8"?>
<!-- Copyright 2014 Your Name Here <> -->
<component type="addon">
<id>gedit-code-assistance</id>
<extends>gedit.desktop</extends>
<name>Code Assistance</name>
<summary>Code assistance for C, C++ and Objective-C</summary>
<url type="homepage"></url>
</component>

This wants to be installed into /usr/share/appdata/gedit-code-assistance.metainfo.xml — this isn’t just another file format, this is the main component schema used internally by AppStream. Some notes when creating the file:

  • You can use anything as the <id> but it needs to be unique and sensible and also match the .metainfo.xml filename prefix
  • You can use appstream-util validate gedit-code-assistance.metainfo.xml if you install appstream-glib from git.
  • Don’t put the name of the application you’re extending in the <name> or <summary> tags — so you’d use “Code Assistance” rather than “GEdit Code Assistance”
  • You can omit the <url> if it’s the same as the upstream project
  • You don’t need to create the metainfo.xml if the plugin is typically shipped in the same package as the application you’re extending
  • Please use <_name> and <_summary> if you’re using intltool to translate either your desktop file or the existing appdata file and remember to add the file to if you use one

Please grab me on IRC if you have any questions or concerns, or leave a comment here. Kalev is currently working on the GNOME Software UI side, and I only finished the metadata extractor for Fedora today, so don’t expect the feature to be visible until GNOME 3.14 and Fedora 21.

FillHoles revamp

Hi all

FillHoles FillHoles Precise

This is a small but important revamp of the FillHoles tool. I completely changed its behaviour to allow more consistent treatment of hole boundaries. Previously, all actions were applied at the end of a click.
Now holes are saved, and the different actions are applied one by one at the user's choice.
I also fixed an important bug in contour smoothing that caused the contour to slightly rotate on each iteration; now it behaves as it should.



June 10, 2014

Krita Lime is updated again!

It has been about a month since we last updated Krita Lime, and not because we had leisure time and did nothing ;) On the contrary, we got so many features merged into master that it became a bit unstable for a short period of time. Now we have fixed the new problems, so you can enjoy a nice build of Krita with lots of shiny features!

Wacom Artpen rotation sensor improved

If you are a happy owner of a Wacom Artpen stylus, you can now use its rotation sensor efficiently: it works on both Linux and Windows, and what is more, it works exactly the same way on both operating systems! For those who are not accustomed to working with drivers directly it might come as a surprise, but the directions of rotation reported by the Windows and Linux drivers are opposite, not to mention an offset of 180°. The good news: that is all handled now. Just use it!

Avoid stains on the image when using touch-enabled Wacom device

The most popular advice you get from an experienced artist concerning touch-enabled Wacom devices usually sounds like: "disable touch right after connecting it". Well, it has some grounds... The problem is that, while painting with the stylus, the artist can easily tap the tablet with a finger and soil the image with stains of paint. Most of the taps will be filtered by the Wacom driver (it disables touch while the stylus is in proximity), but sometimes that doesn't work. Anyway, the problem is now solved, though only on Linux.

The Linux version of Krita now has a special "hidden" configuration option called disableTouchOnCanvas. If you add the line

disableTouchOnCanvas=true

to the beginning of your kritarc file, touch will no longer disturb you with extra points on the canvas! It will continue to work with UI elements as usual, though!

OpenColorIO-enabled color selectors for HDR images

This is a huge and really nice feature that we have been working hard on. There will be a separate article about it soon! Just subscribe and wait a bit! ;)

Many small nice enhancements

  • The '[' and ']' shortcuts for changing the brush size no longer have hysteresis, and scale smoothly with the brush size
  • Zooming, rotation and mirroring now show a floating window with the current state of the operation. This is very handy when working in full-screen mode or without the status bar shown.
  • The Pseudo Infinite Canvas will no longer make your canvas too huge: it grows the image by 100% at most. You can now use this feature to double any dimension of the canvas with a single click.
  • Added a "Scalable Smoothing Distance" feature. When using Weighted Smoothing, the distance is automatically corrected according to your zoom level, so the behavior of the stylus does not change as you zoom the canvas.
  • Made the handles of the Transform Tool easier to click: the clickable area is now twice as wide!

Kickstarter campaign

And yes, today we started our first donation campaign on Kickstarter! It will last for 30 days, during which we are going to raise money for Krita 2.9.

Direct link:

Help us, and together we will make Krita awesome!

Give Krita 2.9 a Kickstart!

After the successful release of Krita 2.8, the advanced open source digital painting application, we're kicking off work on the next release with a Kickstarter campaign!


Krita 2.8, released on Linux and Windows, has been a very successful release, with hundreds of thousands of downloads. The buzz has simply been insane! And of course, Krita 2.8 really was a very good release, polished, full of productivity enhancing features.


Part of the secret was Dmitry Kazakov's full-time work, sponsored by the Krita Foundation, which provided Krita with an insane number of productive and innovative features.


So for Krita 2.9, we are going for a repeat performance! And we're going to try to double it, too, and have two people work on Krita full-time. Next to Dmitry, there's Sven, who has just finished university and has been working on Krita for about ten years now. That's the first stretch goal.


And as a super-stretch goal, we intend to port Krita to OS X, too!


Together with our artist community we created a set of goals to work on, ranging from improved compatibility with Photoshop to making the transform tool the most awesome ever seen. Check out the work package for Krita 2.9 on the kickstarter campaign page.

June 09, 2014

DIY Ringflash

The world probably needs another DIY ringflash tutorial like it needs a hole in the head. There are already quite a few tutorials around explaining how to create one…

So here’s mine! :)

At LGM this year I hacked together a quick ringflash using parts I picked up in a €1 store while walking through the city with Rolf (he helped me pick out and find parts - Thank You Rolf!). I built one while I was there because it was way less hassle than trying to bring mine from home all the way to Leipzig, Germany (they don’t really collapse into any sort of smaller size, so they’re cumbersome to transport).

Anyway, I got some pretty strange looks from folks as I was hacking away at the plastic colander I was using. The trick to making and using these DIY ringflashes is to not care what others think ...

Pat David Ringflash DIY LGM Tobias Ellinghaus houz Meet the GIMP
Using the ringflash on Johannes..., by Tobias Ellinghaus

Because you will look strange building it, and even stranger using it. If you can get past that, the results are pretty neat, I think.

Pat David Ringflash DIY GIMP
The results of using the ringflash on Johannes...

There’s a nice even light that helps fill in and remove harsh shadows across a subject’s face (very flattering for reducing the appearance of wrinkles, for instance). At lower power, combined with other lighting, it makes a fantastic fill light as well.

So, after seeing the results I had a few people ask me if I was going to write up a quick guide on building one of these things. I wasn’t intending to originally, but figured it might make for a fun post, so here we are.

Now, normally I would take fancy photos of the ringflash to illustrate how to go about making one, but I realized that it would be hard to account for all the different types, sizes, and styles that one could be made in.

Oh, and more importantly I wasn’t about to try and lug it all the way back to the states. So I left it there (I think Simon from darktable ended up taking it).

So I’ll improvise...

Building a DIY Ringflash

The Parts

The actual parts list is pretty simple. A large bowl, a small bowl, a cup (or tube), and some tape:

Pat David Ringflash DIY GIMP

Pat David Ringflash DIY GIMP

The material can be anything you can easily cut and modify, ideally something that won’t throw its own color cast. White is best on the inside of the large bowl and on the outside of the small bowl and cup. Silver or metal is fine too, but the color temperature will trend cooler.

The main thing I look for is sizes that fit the lens(es) I intend to use:

Pat David Ringflash DIY GIMP

Each of the components needs a diameter big enough for my lens to fit through.

I’ll also look for a bowl that has a flat bottom as it usually makes it easier for me to cut holes through it (as well as tape things together).

All the other dimensions and sizes are arbitrary, and are usually dictated by the materials I can find. In Leipzig I used a colander for the large bowl, a cup, and the same small cheap white plastic bowls they served soup in at the party for the smaller bowl*.

* I did not actually use a soup bowl from the party, I just happened to purchase the same type bowl.

The Cuts

The first cut I’ll usually make is to open a hole in the side of the large bowl to allow the flash to poke through. I almost always just sort of wing it, and cut it by hand (or rotary tool if you have one):

Pat David Ringflash DIY GIMP

It doesn’t have to be placed perfectly in any particular location, though I usually try to place it as close to the bottom of the large bowl as possible.

Once the hole is cut, the flash should fit into place. I try to err on the side of caution and cut on the small side just in case; I can always remove more material if I need to, but putting it back is harder.

Pat David Ringflash DIY GIMP

Then I’ll usually place the cup into the bowl and trace its diameter onto the large bowl. When I’m cutting this hole, I try to stay on the smaller side of my mark lines to leave some room to tape the cup into place.

Pat David Ringflash DIY GIMP

I’ll go ahead and cut the bottom of the cup now as well:

Pat David Ringflash DIY GIMP

Pat David Ringflash DIY GIMP

As with the large bowl, I’ll also trace the cup diameter on the small bowl and mark it. I’ll cut this hole a little small as well just in case.

Pat David Ringflash DIY GIMP

Pat David Ringflash DIY GIMP

That’s all there is to cut the parts! Now go grab a roll of tape to tape the whole thing together. If you want to get fancy I suppose you could use some glue or epoxy, but we’re making a DIY ringflash from a bunch of cheap bowls - why get fancy now?


It should be self-evident how the whole thing goes together.

One thing I do try to watch for is to not tape things together where the light will be falling. So to tape the cup to the large bowl, I’ll apply the tape from the outside back of the bowl, into the cup.

Pat David Ringflash DIY GIMP

Tape the front (small) bowl into place in the same way. It might not be pretty, but it should work!

Pat David Ringflash DIY GIMP


When all is said and done, this should be sort of what you’re looking at:

Pat David Ringflash DIY GIMP

I’ve made a few of these now, and they honestly don’t take long at all to throw together. If I’m in a store for any reason I’ll at least check out cheap bowls for sale just in case there might be something good to play with (especially if it’s cheap).

She may not look like much, but she's got it where it counts, kid.
I've made a lot of special modifications myself.

Using It

If you’ve got the power in your flash, it’s pretty easy to drop the ambient to near black (these were mostly all shot around f/5, 1/250s, ISO 200):

Pat David Ringflash DIY LGM Party ginger coons
Pat David Ringflash DIY LGM Party Claire

Pat David LGM DIY Ringflash Ville GIMP

All the shots I took at the LGM 2014 party in Leipzig used this contraption (either as a straight ringflash, or a quick hand-held beauty dish). The whole set is here on Flickr.

Of course, if you put your subject up against a wall, you’ll get that distinctive halo-shadow that a ringflash throws:

Pat David DIY Ringflash Whitney Wall Pretty Smile Girl

Pat David DIY Ringflash

I also normally don’t do anything to attach the flash to the bowls, or the entire contraption to my camera. The reason is that it actually makes a half decent beauty dish in a pinch if needed. All you have to do is move the ringflash off to a side (left side for me, as I shoot right-handed):


Well, that’s it. I hope the instructions were clear, and at least somebody out there tries building and playing with one of these (share the results if you do!).

As you can see, there’s not much to it. The hardest part is honestly finding bowls and cups of appropriate size...

Help support the site! Or don’t!
I’m not supporting my (growing) family or anything from this website. Seriously.
There is only one reason I am writing these tutorials and posts:
I love doing it.
Technically there is a second reason: to give back to the community. Others before me were instrumental in helping me learn things when I first got started, and I’m hoping to pay it forward here.

If you want to visit an ad, or make a donation, or even link/share my content, I would be absolutely grateful (and tickled pink). If you don’t it’s not going to affect me writing and posting here one bit.

I’ll keep writing, and I’ll keep it free.
If you get any use out of this site, I only ask that you do one thing:
pay it forward.

June 06, 2014

Santa Fe Highway Art, and the Digestive Deer

Santa Fe is a city that prides itself on its art. There are art galleries everywhere, and glossy magazines scattered around town point visitors to the various galleries and museums.

Why, then, is Santa Fe county public art so bad?

[awful Santa Fe art with eagle, jaguar and angels] Like this mural near the courthouse. It has it all! It combines motifs of crucifixions, Indian dancing, Hermaphroditism, eagles, jaguars, astronomy, menorahs (or are they power pylons?), an angel, armed and armored, attempting to stab an unarmed angel, and a peace dove smashing its head into a baseball. All in one little mural!

But it's really the highway art north of Santa Fe that I wanted to talk about today.

[roadrunner highway art] [horned toad highway art] [rattlesnake highway art] Some of it isn't totally awful. The roadrunner and the horned toad are actually kind of cute, and the rattlesnake isn't too bad.

[rooster highway art] [turkey highway art] On the other hand, the rooster and turkey are pretty bad ...

[rabbit highway art] and the rabbit is beyond belief.

As you get farther away from Santa Fe, you get whole overpasses decorated with names and symbols:
[Posuwaegeh and happy dancing shuriken]

[Happy dancing shuriken] I think of this one near Pojoaque as the "happy dancing shuriken" -- it looks more like a Japanese throwing star, a shuriken, than anything else, though no doubt it has some deeper meaning to the Pojoaque pueblo people.

But my favorite is the overpass near Cuyamungue.

[K'uuyemugeh and digestive deer]

See those deer in the upper right and left corners?

[Cuyamungue digestive deer highway art] Here it is in close-up. We've taken to calling it "the digestive deer".

I can't figure out what this is supposed to tell us about a deer's alimentary tract. Food goes in ... and then we don't want to dwell on what happens after that? Is there a lot of foliage near Cuyamungue that's particularly enticing to deer? A "land of plenty", at least for deer? Do they then go somewhere else to relieve themselves?

I don't know what it means. But as we drive past the Cuyamungue digestive deer on the way to Santa Fe ... it's hard to take the city's airs of being a great center of art and culture entirely seriously.

June 05, 2014

Son of more logos!

Where we left off last time was basically a brain dump of some random ideas. Thank you again for all of the great feedback and commentary around the designs. It really seems that folks are digging the “C” series of logo designs the most – here’s a bunch of iterations of that concept:


A lot of comments focused on the cloud logo not looking quite like a cloud, while the other logos worked pretty well. Other comments talked about how the cloud logo represented ‘scaling up,’ which is a good thing to represent. I poked around a bit with the cloud logo, keeping the vertical design for “scale up” but varying the heights of the components to suggest a cloud a bit more:


Here are those shape variations in context with the other logo designs:


Ryan and I talked a little about how these shapes are simple enough that we could do a lot of cool treatments with them. One issue with making a logo design too ‘cool’ or ‘trendy’ is that it tends to get dated quickly. The C series designs, though, are so simple that they could have a timeless quality about them. You could dress them up with a particular treatment and then go back to basics, or use a different treatment to keep up with trends without dating the logo as trends change. We both really like this recent geometric, pixelly fill kind of design (there are a few examples in the pixelly pinterest board I put together), and Ryan came up with a great workflow to create these textures using the prerelease version of Inkscape from his copr repo. (We need to document and blog that too, of course. :) I promised banas I’d make it happen!)

Anyway, here are the original C series mockups with that kind of treatment:


Well, anyway! That’s just an update on my thinking based on your feedback. banas has also put together a blog post with some great sketches and iterations for the logo, and I suggest you take a look at his ideas and give him some feedback too:

Dreaming up logos

I know he had some computer issues that prevented him from being able to do these up in Inkscape, but he agreed to share his sketches as a work-in-progress – I think this is a great open way of sharing ideas.

Of course, as I hope is clear by now – your ideas and sketches are most certainly welcome as well, and we’d really love it if you riffed off the ideas that have already been posed by myself, Ryan, and banas. I think together we’ll end up with something really awesome. :)

In case you want to have a play with any of the stuff posted here, I’ve uploaded the SVG containing the assets:

Enjoy :) Productive feedback welcomed in the comments or on the design-team mailing list.

June 04, 2014

Meet Sarah


Sarah has been shooting with my friend Brian for a while now. He and I recently tried to organize a shoot together, but unfortunately he was called away at the last moment. Which was a bummer, because I had also just purchased a 60” Photek Softlighter II, and was super-eager to put it through its paces...

Luckily Sarah was still good to do the shoot! (There were some initial hiccups, but we were able to sort them out and finally get started).

There were two aspects to obtaining these results that I thought it might be nice to talk about briefly here: the lighting and the post work...

The Lighting

Meet Sarah - YN560 firing into the 60" softlighter, camera left, really really close in.

If you’ve ever followed my photographs, you’ll know I like to keep things simple. This is mostly for two reasons: 1) I’m cheap, and simple is inexpensive. 2) I’m not smart, and simple is easy to understand.

Well, I also happen to really like the look of simply lit images if possible. Some folks can get a little nuts about lighting.

The real creativity is in making a simply lit image look good. This is the hard part, I think, and the thing I spend most of my time working to achieve and learn about.

I had previously built myself a small-ish softbox to control my light a bit more, and it has seen quite a bit of use in the time that I’ve had it. It works in a pinch for a neat headshot key light, but I was bumping into its limits due to size: I was mostly constrained to tight-in headshots to keep the light relatively soft. (It was only about 20" square).

So I had been looking for quite some time at a larger modifier. Something I could use for full body images if I wanted, but still keep the nice control and diffusion of a large softbox. Thanks to David Hobby, I finally decided that I simply had to have the Photek Softlighter.

To help me understand faster/better what the light is doing and how it affects a photograph, I try to minimize the other variables as much as I can. Shooting in a controlled environment allows me this luxury, so all these images of Sarah were shot with a single light only. This way I can better understand its contribution to the image if I want to add any other modifiers or lights (without having to worry about nuking the ambient at the same time).

Plus, chiaroscuro!


For these images I wanted to try something a little different for a change. I wanted to have a consistent color grading for all the images from the session, and I wanted to shoot for something a bit softer and subdued.

I followed my usual workflow (documented here and here).

Here was the original image straight out of the camera:

Straight out of the camera

In my Raw processor, I adjusted exposure to taste, increased blacks just slightly and added a touch of contrast. I also dropped the vibrance of the image down just a bit as well. I did this because I knew later I would be toning with some curves and didn’t want things to get too saturated.

After exposure, contrast, and vibrance adjustments

I brought the image into GIMP at this point for further pixel-pushing and retouching. As usual, I immediately decomposed the image into wavelet scales so I could work on individual frequencies. There wasn’t much that needed to be done for skin work, just some (very) slight mid-frequency smoothing. Spot healing here and there for trivial things, and I was done.

One small difference in my workflow is that I’ve switched from using a Gaussian Blur on the frequency layers to using a Bilateral Blur instead. I feel it preserves tonal changes much better. It can be found in G'MIC under Repair → Smooth [bilateral].
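If the idea of working on "wavelet scales" is new to you, the arithmetic behind frequency separation is simple. This is a minimal NumPy sketch of the split/recombine logic, not GIMP's actual wavelet decompose: the `box_blur` helper is a stand-in for whatever low-pass filter (Gaussian, bilateral, etc.) you actually use.

```python
import numpy as np

def box_blur(img, radius=2):
    """Naive box blur as a stand-in low-pass filter.
    (GIMP's wavelet decompose uses better kernels; this just
    illustrates the split/recombine arithmetic.)"""
    pad = np.pad(img, radius, mode="edge")
    out = np.zeros_like(img, dtype=float)
    k = 2 * radius + 1
    for dy in range(k):
        for dx in range(k):
            out += pad[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

# Toy grayscale "image" standing in for a real photo.
img = np.random.default_rng(0).uniform(0, 255, (16, 16))

low = box_blur(img)   # low frequencies: broad tone and color
high = img - low      # high frequencies: fine detail (skin texture)

# Smoothing only the detail layer, then recombining, softens
# texture without shifting the underlying tones:
softened = low + 0.5 * high

# The split is lossless: low + high reconstructs the original.
assert np.allclose(low + high, img)
```

The point of choosing a bilateral filter for the low-pass step, as mentioned above, is that an edge-preserving blur keeps sharp tonal boundaries out of the detail layer, so smoothing the detail doesn't smear tones across edges.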

After some mid-frequency smoothing and spot healing

At this point, it was color toning time! (alliteration alert!)

I ended up applying a portra-esque color curve against the image, and reduced its opacity to about 50%. This gave me the nice tones from the curve adjustment, but didn’t throw everything too far red. Just a sort of delicate touch of portra...

A touch of portra-esque color toning

At this point I did something I don’t normally do that often. I wanted to soften the colors a bit more, and skew the overall feeling just slightly warmer and earthy-toned. Sarah has very pretty auburn hair, the portra adjusted the skin tones to be pleasing, and she had on a white shirt, with a gray/brown sweater.

So I added a layer over the entire image and filled it with a nice brown/orange shade. If you’re familiar at all with teal/orange hell, the shade is actually a much darker and unsaturated orange, but we’re not going to cool the shadows separately. We’re just going to wash the image slightly with this brown-ish shade.

This is the shade I used.
It’s #76645B in HTML notation.

I set the layer opacity for this brown layer down really low to about 9%. This took the edge just ever so slightly off the highlights, and lifted the blacks a tiny bit. Here’s the final result before cropping:

Final result (Compare Original)

I feel like it softens the contrast a bit visually, and enhances the browns and reds nicely. After sharpening and cropping I got the final result:

I purposefully didn’t go into too much detail because I’ve written at length about each of the steps that went into this. The most important thing in this case, I think, is finding a good color toning that you like (the post about color curves for skin is here). The color layer over the image is new, but it was honestly done through experimentation and taste. Try layering different shades at really low opacity over your image to see how it changes things! Experiment!
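If you want to see why a low-opacity color layer lifts the blacks and takes the edge off the highlights, the normal-mode blend is just a weighted average. Here is a small NumPy sketch using the #76645B wash at ~9% opacity from this post (the `color_wash` helper is mine, purely for illustration):

```python
import numpy as np

# The wash color from the post: #76645B -> RGB (118, 100, 91)
wash = np.array([0x76, 0x64, 0x5B], dtype=float)
opacity = 0.09  # ~9% layer opacity, normal blend mode

def color_wash(pixels, color, alpha):
    """Normal-mode blend of a solid color layer over an image."""
    return (1.0 - alpha) * pixels + alpha * color

black = np.zeros(3)
white = np.full(3, 255.0)

lifted_black = color_wash(black, wash, opacity)
softened_white = color_wash(white, wash, opacity)

print(lifted_black)    # blacks are lifted slightly toward the wash color
print(softened_white)  # highlights are pulled just below pure white
```

Pure black becomes a faint warm brown and pure white drops a little below 255, which is exactly the "edge off the highlights, blacks lifted a tiny bit" effect described above.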

The rest of my GIMP tutorials can be found here:
Getting Around in GIMP

More Sarah

Now that we’ve had a chance to shoot together, I’m hoping we can continue to do so. In the meantime, here’s a few more images from that day. (The set is also on Google+)

I love the expression on her face here

Help support the site! Or don’t!
I’m not supporting my (growing) family or anything from this website. Seriously.
There is only one reason I am writing these tutorials and posts:
I love doing it.
Technically there is a second reason: to give back to the community. Others before me were instrumental in helping me learn things when I first got started, and I’m hoping to pay it forward here.

If you want to visit an ad, or make a donation, or even link/share my content, I would be absolutely grateful (and tickled pink). If you don’t it’s not going to affect me writing and posting here one bit.

I’ll keep writing, and I’ll keep it free.
If you get any use out of this site, I only ask that you do one thing:
pay it forward.

June 03, 2014

Cicada Rice Krispies

[Cicadas mating] Late last week we started hearing a loud buzz in the evenings. Cicadas? We'd heard a noise like that last year, when we visited Prescott during cicada season while house-hunting, but we didn't know they had them here in New Mexico. The second evening, we saw one in the gravel off the front walk -- but we were meeting someone to carpool to a talk, so I didn't have time to race inside and get a camera.

A few days later they started singing both morning and evening. But yesterday there was an even stranger phenomenon.

"It sounds like Rice Krispies out in the yard. Snap, crackle, pop," said Dave. And he was right -- a constant, low-level crackling sound was coming from just about all the junipers.

Was that cicadas too? It was much quieter than their loud buzzing -- quiet enough to be a bit eerie, really. You had to stop what you were doing and really listen to notice it.

It was pretty clearly an animal of some kind: when we moved close to a tree, the crackling (and snapping and popping) coming from that tree would usually stop. If we moved very quietly, though, we could get close to a tree without the noise entirely stopping. It didn't do us much good, though: there was no motion at all that we could see, no obvious insects or anything else active.

Tonight the crackling was even louder when I went to take the recycling out. I stopped by a juniper where it was particularly noticeable, and must have disturbed one, because it buzzed its wings and moved enough that I actually saw where it was. It was black, maybe an inch long, with narrow orange stripes. I raced inside for my camera, but of course the bug was gone by the time I got back out.

So I went hunting. It almost seemed like the crackling was the cicadas sort of "tuning up", like an orchestra before the performance. They would snap and crackle and pop for a while, and then one of them would go snap snap snap-snap-snap-snapsnapsnapsnap and then break into its loud buzz -- but only for a few seconds, then it would go back to snapping again. Then another would speed up and break into a buzz for a bit, and so it went.

One juniper had a particularly active set of crackles and pops coming from it. I circled it and stared until finally I found the cicadas. Two of them, apparently mating, and a third about a foot away ... perhaps the rejected suitor?

[Possible cicada emergence holes]
Near that particular juniper was a section of ground completely riddled with holes. I don't remember those holes being there a few weeks ago. The place where the cicadas emerged?

[Fendler's Hedgehog Cactus flower] So our Rice Krispies mystery was solved. And by the way, I don't recommend googling for combinations like cicada rice krispies ... unless you want to catch and eat cicadas.

Meanwhile, just a few feet away from the cicada action, a cactus had sprung into bloom. Here, have a gratuitous pretty flower. It has nothing whatever to do with cicadas.

Update: in case you're curious, the cactus is apparently called a Fendler's Hedgehog, Echinocereus fendleri.

June 02, 2014

Some notes from Krita Sprint 2014

Krita people in Deventer:
Sven, Lukas, Timothee and Steven
Almost two weeks have passed since we returned from the sprint, but we are only now beginning to sort out and formalize all the data and notes we gathered during the meeting. The point is, this time (as during the last sprint in 2011) we had three painters with us, who gave us an immeasurable amount of input about how they use Krita and what can be improved. This was a case where criticizing and complaining were exactly what we needed :)

So after the general discussions about Krita's roadmap on Saturday [0], we devoted Sunday to listening to the painters. Wolthera, Steven and Timothée gave us short sessions during which they painted their favorite characters while we watched, noting all the use cases and small inconveniences they face when working with Krita. The final list of our findings became rather long :), but it will surely have an immense impact on our future. We saw not only wishes and bugs; we also had several revelations, things we could not even have imagined before. Here is a brief list of them.

Tablet-only workflow

Yeah, not all painters have a keyboard! ;) Sometimes a painter uses a tablet with a built-in digitizer for painting. In such a case the workflow changes completely!
Two toolbars is too few! More floating toolbars!
  1. The painter may decide to reassign the two stylus buttons to pan and zoom gestures, since he has no access to the usual Spacebar shortcut.
  2. The toolbars! Yes, the toolbars are the precious space where the painter will put all the actions he needs. And there should be many configurable toolbars. The problem we have now is that there can be only one toolbar of a specific type, and every action belongs to its own toolbar. The user should be able to create many general-purpose toolbars and put/group actions however he likes. I'm not sure this is possible to implement within the current KDE framework, but we must look into it!
  3. Even when using a tablet, some painters cheat a bit and use gaming keypads to access the most needed actions, like the pop-up palette, color picker and others. Steven came to Deventer with his Razer Nostromo device, and it seems he is quite comfortable with it.
Razer Nostromo. Not for gaming :)


It might sound funny, but some of Krita's features really surprised me! I never knew we could use good old tools this way.
  1. Experiment Brush. Have you ever thought that this brush might be an ideal tool for creating shadows in comic-style pictures? Well, it is ;)
  2. Group Layers + Inherit Alpha layer option. I could never imagine that the Inherit Alpha feature could be combined with Group Layers! If you use Inherit Alpha within a group, it will use only the colors of that group! This can be used for filling/inking parts of the final image. Just move your part into a separate group, activate Inherit Alpha, and you can easily fill/outline that part of the image!
  3. Color Picker. This is a trivial tool, of course. But if you assign it to the second stylus button, it becomes an efficient tool for mixing colors right on the canvas! Paint, pick and mix colors as if you were using a real-world brush.
Well, there were many other issues we found during the sessions. There were also some bugs, but their severity was really minor. Especially in comparison to the sessions we did in 2011, when David Revoy had to restart Krita several times due to crashes and problems... We've made a lot of progress since Krita 2.4!

Yeah, it was a really nice and efficient sprint! Thanks Boudewijn and Irina for hosting the whole team in their house and KDE e.V. for making all this possible!

[0] - see Boud's report about it

Last week in Krita — week 21&22

Last weekend we celebrated a Krita sprint in Deventer, an event that reunites Krita’s developers to talk about, coordinate and, if possible, code the next steps in Krita’s roadmap. The discussion themes were varied, ranging from Krita Foundation status to specific software features such as the default OpenGL setting. Other topics included:

  • 2.9 main features and code emphasis, and 3.0 roadmap priorities. In general we laid out a plan for implementing everything needed for a successful 2.9 release and a solid 3.0 version built on the new, powerful Qt 5.
  • Multiple document view: an informative session on the current status of the branch and a discussion of the technical complexities of making it happen.
  • Text and vector tool enhancements were a big topic on the table. Big changes and improvements are now set to happen starting with 3.0, building slowly towards the future.
  • Translation: we established the work that needs to be done to standardize all translatable strings and make internationalization consistent across languages.
  • Discoverability of features within the app, and tooltips. A decision was made: create a simple system that lets users submit help tooltips for the community.
  • OpenGL default setting: Krita has two painting engines; both work okay, but the non-GL engine is very slow on Windows. It was decided to turn the OpenGL engine on by default and, if it is not supported, fall back to the CPU-based one.

This week’s new features:

  • Index colors filter. (Manuel Riecke)
  • Allow to lock docker state. (Boudewijn Rempt)
  • Improved gradient editor: save, rename, edit. (Sven Langkamp)
  • Show a floating message when rotating, mirroring and zooming the canvas. (Dmitry Kazakov)

Index colors filter

This filter allows you to reduce the number of colors in an image by mapping them to a color ramp.

Filter: Index Colors dialog

Filter: Index Colors variants

Some uses for the filter include HD index painting as in the video below.


Lock docker state

It’s now possible to lock the dockers in position to avoid any modification. This solves the problem of accidentally moving the dockers while adjusting layers and options.

Dock lock position

Improved gradient editor

The gradient editor received some much-needed improvements. Creating gradients is now a lot easier, with the option to rename and edit them. It's now much faster to get exactly the gradient you are after.

Gradient Editor

Floating messages

Rotating, zooming and mirroring the canvas will now trigger a nice message at the top of the screen with the current angle, zoom level or ON/OFF state. The message will stay for a second before fading out. Neat!

Floating messages

General bug fix and features

  • FIX #335438: Fix saving on tags for brushes. (Sven Langkamp)
  • Fix crash in imagesplit dialog. (Boudewijn Rempt)
  • FIX #335382: Fix segfault in image docker on ARM. (Patch: Supersayonin)
  • Improved gradient editor. Create segmented gradients, edit, rename. (Sven Langkamp)
  • Show the current brush in the statusbar, CCBUG:#332801. (Boudewijn Rempt)
  • Disable the lock button if the docker is set to float. (Boudewijn Rempt)
  • Do not miss the dot when appending the extension. (Patch: Tobias Hintze)
  • FIX #335298: Fix cut and paste error. (Boudewijn Rempt)
  • Implement internet updates for G'MIC (Compile G’MIC with zlib dependency). (Lukáš Tvrdý)
  • FIX #331358: Fixed rotation tablet sensor on Windows and make rotation on Linux be consistent with rotation on Windows. (Dmitry Kazakov)
  • FIX #331694: Add another exr mimetype “x-application/x-extension-exr”, to make the exr system more robust. (Boudewijn Rempt)
  • FIX #316859: Fix Chalk brush paint wrong colors on saturation ink depletion. (Dmitry Kazakov)
  • Index Colors Filter: Add option to reduce the color count. (Manuel Riecke)
  • Fix “modifier + key” canvas input shortcuts not working. (Arjen Hiemstra)
  • FIX #325928: Allow to lock the state of the dockers. (Boudewijn Rempt)
  • Reduce size of mirror handles and other minor tweaks to the mirror axis handles. (Arjen Hiemstra)
  • Let the user select a 1px brush with a shortcut. (Dmitry Kazakov)
  • FIX #325295: Make changing the brush size with the shortcuts more consistent. (Dmitry Kazakov)
  • FIX #334982: Fix a hang-up when opening the filter dialog twice. (Dmitry Kazakov)
  • Enable opengl by default. (Boudewijn Rempt)
  • FIX #334371. (Boudewijn Rempt)
  • Add rename composition to the right click menu. (Sven Langkamp)
  • FIX #334826: Fix loading ABR brushes. (Boudewijn Rempt)
  • Add ability to update a composition CCBUG:#322159. (Sven Langkamp)
  • Activate compositions with double click. (Sven Langkamp)

Code cleanup and optimizations

Following the previous week’s efforts, Boudewijn Rempt kept improving code efficiency across many aspects of the codebase. With the hard work of Dmitry Kazakov, Stuart Dickson, Lukáš Tvrdý and Sven Langkamp the code keeps evolving: ensuring compilation on systems with Qt 4.7, disabling some portions of Qt if the version is too old (to make Krita work on many distros), updating the bundled G'MIC version, improving OpenGL crash detection, and renaming brushes to “brush tips” to avoid user confusion between presets and brushes.

Krita Sketch and Gemini

This week in Sketch and Gemini: Dan Leinir Turthra Jensen added support for templates and a small foldout in the brush tool to allow selecting predefined sizes. Stuart Dickson fixed minor UI inconsistencies, re-ordering the display of the new documents window, updating mirror toolbar actions and expanding template categories. Dmitry Kazakov fixed color space related problems. Arjen Hiemstra tweaked the size of the mirror mode handles to make them more comfortable to use and made other enhancements to the general UI look and feel. Bug fixes are as follows:

  • Fix hsvF -> hsv. This caused a crash when loading Sketch and Gemini. (Dmitry Kazakov)
  • FIX #332864: Use a lighter colour for the x in the unchecked checkbox. (Arjen Hiemstra)
  • FIX #332860: Cleanup the Tool panel and tool config pages. (Arjen Hiemstra)

Krita Gemini/Sketch with templates

Animation branch

A preview of Sohsumbra’s work can be seen in the following video: working animation with layers.


June 01, 2014

Release notes: May 2014

What’s the point of releasing open-source code when nobody knows about it? In “Release Notes” I give a round-up of recent open-source activities.

angular-rt-popup (New, github)

A small popover library, similar to what you can find in Bootstrap (it uses the same markup and CSS). Does some things differently compared to angular-bootstrap:

  • Easier markup
  • Better positioning and overflows
  • Correctly positions the arrow next to the anchor


grunt-git (Updated, github)

  • Support for --depth in clone.
  • Support for --force in push.
  • Multiple file support in archive.


angular-gettext (Updated, github, website)

Your favorite translation framework for Angular.JS gets some updates as well:

  • You can now use $count inside a plural string as the count variable. The older syntax still works though. Here’s an example:
    <div translate translate-n="boats.length" translate-plural="{{$count}} boats">One boat</div>
  • You can now use the translate filter in combination with other filters:
    {{someVar | translate | lowercase}}
  • The shared angular-gettext-tools module, which powers the grunt and gulp plugins, is now considered stable.