February 09, 2016

sql-migrate slides

I recently gave a short lightning talk about sql-migrate (a SQL schema migration tool for Go) in the Go developer room at FOSDEM.

Annotated slides can be found here.


Comments | More on rocketeer.be | @rubenv on Twitter

February 08, 2016

Anonymous reviews in GNOME Software

Choosing an application to install is hard when there are lots of possible projects matching a specific search term. We already rank applications by integration level and by useful metrics like “is it translated in my language”, which makes sure that high-quality applications are listed near the top of the results. For more information about an application we often want a more balanced view than the PR speak or unfounded claims of the upstream project. This is where user-contributed reviews come in.


To get a user to contribute a review (which takes time) we need to make the process as easy as possible. Making the user create an account on yet-another-webservice would make this much harder and raise the barrier to participation to the point that very few people would contribute reviews. If anonymous reviewing does not work, the plan is to use some kind of attestation service so you can use a GMail or Facebook account to confirm your identity. At this point I’m hoping people will just be nice to each other and not abuse the service, although this reviewing facility will go away if it starts being misused.

Designing an anonymous service is hard when you have to be resilient against a socially awkward programmer with specific political ideologies. If you don’t know any people that match this description you have obviously never been subscribed to fedora-devel or memo-list.

Obviously when contacting a web service you share your IP address. This isn’t enough to uniquely identify a machine and user, which we want for the following reasons:

  • Allowing users to retract only their own reviews
  • Stopping users from up- or down-voting the same review multiple times

A compromise would be to send a hash of two things that identify the user and machine. In GNOME Software we’re using a SHA1 hash of the machine-id and the UNIX username along with a salt, although this “user_id” is only specified as a string and the format is not checked.
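As a rough sketch of how such a hash can be computed (the salt value below is a placeholder; the exact salt and field order GNOME Software uses aren't shown here):

```shell
# Hypothetical sketch: derive an opaque user_id from a salt, the machine-id
# and the UNIX username. The salt value is a placeholder, not the real one.
SALT="example-salt"
MACHINE_ID=$(cat /etc/machine-id 2>/dev/null || echo "unknown")
printf '%s%s%s' "$SALT" "$MACHINE_ID" "$USER" | sha1sum | cut -d' ' -f1
```

The result is a stable 40-character hex string that identifies the user/machine pair without exposing either value directly.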

For projects like RHEL, where we care very much what comments are shown to paying customers, we definitely want reviews to be pre-approved and checked before being shown to customers. For distros like Fedora we don’t have this luxury, so we’re going to rely on the community to self-regulate reviews. Reviews are either up-voted or down-voted according to how useful they are, along with the nuclear option of marking the review as abusive.


By specifying the user’s current locale we can sort the potential application reviews according to a heuristic that we’re still working on. Generally we want to prefer useful reviews in the user’s locale and hide ones that have been marked as abusive, and we also want to indicate the user’s self-review so they can remove it later if required. We also want to prioritize reviews for the current application version over reviews for really old versions.

Comments welcome!

Attack of the Killer Titmouse!

[Juniper titmouse attacking my window] For the last several days, when I go upstairs in mid-morning I often hear a strange sound coming from the bedroom. It's a juniper titmouse energetically attacking the east-facing window.

He calls, most often in threes, as he flutters around the windowsill, sometimes scratching or pecking the window. He'll attack the bottom for a while, moving from one side to the other, then fly up to the top of the window to attack the top corners, then back to the bottom.

For several days I've run down to grab the camera as soon as I saw him, but by the time I get back and get focused, he becomes camera-shy and flies away, and I hear EEE EEE EEE from a nearby tree instead. Later in the day I'll sometimes see him down at the office windows, though never as persistently as upstairs in the morning.

I've suspected he's attacking his reflection (and also assumed he's a "he"), partly because I see him at the east-facing bedroom window in the morning and at the south-facing office window in the early afternoon. But I'm not sure about it, and certainly I hear his call from trees scattered around the yard.

Something I was never sure of, but am now: titmice definitely can raise and lower their crests. I'd never seen one with its crest lowered, but this one flattens his crest while he's in attack mode.

His EEE EEE EEE call isn't very similar to any of the calls listed for juniper titmouse in the Stokes CD set or the Audubon Android app. So when he briefly attacked the window next to my computer yesterday afternoon while I was sitting there, I grabbed a camera and shot a video, hoping to capture the sound. The titmouse didn't exactly cooperate: he chirped a few times, not always in the groups of three he uses so persistently in the morning, and the sound in the video came out terribly noisy; but after some processing in Audacity I managed to edit out some of the noise. And then this morning as I was brushing my teeth, I heard him again and he was more obliging, giving me a long video of him attacking and yelling at the bedroom window. Here's the Juniper titmouse call as he attacks my window this morning, and the Juniper titmouse call at the office window yesterday. Today's video is on YouTube: Titmouse attacking the window, but that's without the sound edits, so it's tough to hear him.

(Incidentally, since Audacity has a super confusing user interface and I'm sure I'll need this again, what seemed to work best was to highlight sections that weren't titmouse and use Edit→Delete; then use Effects→Amplify, checking the box for Allow clipping and using Preview to amplify it to the point where the bird is audible. Then find a section that's just noise, no titmouse, select it, run Effects→Noise Reduction and click Get Noise Profile. The window goes away, so click somewhere to un-select, call up Effects→Noise Reduction again and this time click OK.)

I feel a bit sorry for the little titmouse, attacking windows so frenetically. Titmice are cute, excellent birds to have around, and I hope he's saving some energy for attracting a mate who will build a nest here this spring. Meanwhile, he's certainly providing entertainment for me.

February 05, 2016

Updating Debian under a chroot

Debian's Unstable ("Sid") distribution has been terrible lately. They're switching to a version of X that doesn't require root, and apparently the X transition has broken all sorts of things in ways that are hard to fix and there's no ETA for when things might get any better.

And, being Debian, there's no real bug system so you can't just CC yourself on the bug to see when new fixes might be available to try. You just have to wait, try every few days and see if the system has been fixed.

That's hard when the system doesn't work at all. Last week, I was booting into a shell but X wouldn't run, so at least I could pull updates. This week, X starts but the keyboard and mouse don't work at all, making it hard to run an upgrade.

Fortunately, I have an install of Debian stable ("Jessie") on this system as well. When I partition a large disk I always reserve several root partitions so I can try out other Linux distros, and when running the more experimental versions, like Sid, sometimes that's a life saver. So I've been running Jessie while I wait for Sid to get fixed. The only trick is: how can I upgrade my Sid partition while running Jessie, since Sid isn't usable at all?

I have an entry in /etc/fstab that lets me mount my Sid partition easily:

/dev/sda6 /sid ext4 defaults,user,noauto,exec 0 0
So I can type mount /sid as myself, without even needing to be root.

But Debian's apt upgrade tools assume everything will be on /, not on /sid. So I'll need to use chroot /sid (as root) to change the root of the filesystem to /sid. That only affects the shell where I type that command; the rest of my system will still be happily running Jessie.

Mount the special filesystems

That mostly works, but not quite, because I get a lot of errors like permission denied: /dev/null.

/dev/null is a device: you can write to it and the bytes disappear, as if into a black hole except without Hawking radiation. Since /dev is implemented by the kernel and udev, in the chroot it's just an empty directory. And if a program opens /dev/null in the chroot, it might create a regular file there and actually write to it. You wouldn't want that: it eats up disk space and can slow things down a lot.

The way to fix that is before you chroot: mount --bind /dev /sid/dev which will make /sid/dev a mirror of the real /dev. It has to be done before the chroot because inside the chroot, you no longer have access to the running system's /dev.

There's also a relative syntax you can use from inside the mount point (cd /sid first, still before chrooting):

mount -t proc proc proc/
mount --rbind /sys sys/
mount --rbind /dev dev/

It's a good idea to do this for /proc and /sys as well, and Debian recommends adding /dev/pts (which must be done after you've mounted /dev), even though most of these probably won't come into play during your upgrade.

Mount /boot

Finally, on my multi-boot system, I have one shared /boot partition with kernels for Jessie, Sid and any other distros I have installed on this system. (That's somewhat hard to do using grub2 but easy on Debian, though you may need to turn off auto-update, and Debian is making it harder to use extlinux now.) Anyway, if you have a separate /boot partition, you'll want it mounted in the chroot, in case the update needs to add a new kernel. Since you presumably already have the same /boot mounted on the running system, use mount --bind for that as well.

So here's the final set of commands to run, as root:

mount /sid
mount --bind /proc /sid/proc
mount --bind /sys /sid/sys
mount --bind /dev /sid/dev
mount --bind /dev/pts /sid/dev/pts
mount --bind /boot /sid/boot
chroot /sid

And then you can proceed with your apt-get update, apt-get dist-upgrade etc. When you're finished, you can unmount everything with one command:

umount --recursive /sid


February 04, 2016

Krita 2.9.11 and the second 3.0 alpha build!

Today, we’re releasing the eleventh bugfix release for Krita 2.9 and the second development preview release of Krita 3.0! We are not planning more bugfix releases for 2.9, though it is possible that we’ll collect enough fixes to warrant one more release, because there are some problems with Windows 10 that we might be able to work around. So please check closely if you use Krita on Windows 10:

  • You get a black screen: please go to settings/configure krita/display and disable OpenGL. It turns out that recent Windows updates install new Intel GPU drivers that do not implement all the functionality Krita needs.
  • Pressure sensitivity stops working: a recent update of Windows 10 breaks pressure sensitivity for some people. Please check whether reinstalling the tablet drivers fixes the issue. If not, please close Krita, navigate to your user’s AppData\Roaming folder and rename the krita folder to krita_old. If Krita now shows pressure sensitivity again, please zip up your krita_old folder and send to foundation@krita.org.

And now for the fixes in 2.9.11!

2.9.11 Changelog

  • Fix a memory leak when images are copied to the clipboard
  • Ask the user to wait or break off a long-running operation before saving a document
  • Update to G’Mic 1.6.9-pre
  • Fix rendering of layer styles
  • Fix a possible issue when loading files with clone layers
  • Do not crash if there are monitors with negative numbers
  • Make sure the crop tool always uses the correct image size
  • Fix a crash on closing images while a long-running operation is still working
  • Link to the right JPEG library
  • Fix the application icon
  • Fix switching colors with X after using V to temporarily enable the line tool
  • Fix the unreadable close button on the splash screen when using a light theme
  • Fix the Pencil 2B preset
  • Fix the 16f grayscale colorspace to use the right channel positions
  • Add shortcuts to lower/raise the current layer

Go to the download page to get your updated Krita!

3.0 pre-alpha Changelog

For 3.0, we’ve got a bunch of new features and bug fixes.

There is still one really big issue that we’re working hard on: OSX and the latest Intel GPU drivers break Krita’s OpenGL support badly. On OSX, you will still NOT see the brush outline, symmetry axis, assistants and so on. On Windows, if you have an Intel GPU, the Krita window might turn totally black. There’s no need to report those issues.

  • Shift+R+click onto canvas can now select multiple layers! Use this in combination with the multiple properties editor to rename a bunch of layers quickly, or use ctrl+g to group them!
  • Improved pop-up palette, now preset-icons are more readable (size depends on maximum amount of presets set in the general settings):
  • Tons of improvements to the color space browser: the Tone curve is now visible, making it easier to find linear spaces, there’s feedback for color look-up table profiles like CMYK, there’s copyright in the info box, as well as possible conversion intents, and overall just more extra info moved into the tooltips for a cleaner look. The PNG 16bit import is also alphabetised.
  • Hotkeys for Hue, Saturate/Desaturate, making a color redder, yellower, bluer or greener, as well as making lighter/darker use luminance where possible. The new hotkeys have no default key and need to be set in the shortcuts editor.:
  • HSI, HSY and YCrCb modes for the HSV/HSL adjustment filter. HSY and YCrCb can use the correct coefficients for most RGB spaces, but the filter isn't linearisable yet, so it doesn't give true luminance yet. Regardless, here's a comparison:
  • The color smudge brush can now do subpixel precision in dulling mode
  • Add progress reporting when Krita saves a .KRA file
  • Fix wheel events in Krita 3.0
  • Sanitize the order of resource and tag loading. This makes startup a bit slower, so ideally we’d like to replace the whole system with something more sophisticated but that won’t happen for 3.0
  • Show more digits in the Memory Reporting popup in the status bar
  • Add a workaround for an assert while loading some weird PSD files
  • BUG:346430 Make sure the crop tool always uses the current image size
  • BUG:357173 Fix copy constructor of KisSelectionMask
  • BUG:357987 Don’t crash on loading the given file
  • Fix starting Krita without XDG_DATA_PATH set


We recommend building Krita from git, not from the source zip file. Krita for OSX is built from a separate branch.


Download the zip file and unzip it where you want to put Krita.

Run the vcredist_x64.exe installer to install Microsoft’s Visual Studio runtime.

Then double-click the krita link.

Known issues on Windows:

  • If the entire window goes black, disable OpenGL for now. We’ve figured out the reason, now we only need to write a fix. It’s a bug in the Intel driver, but we know how to work around it now.


Download the DMG file and open it. Then drag the krita app bundle to the Applications folder, or any other location you might prefer. Double-click to start Krita.

Known issues on OSX:

  • We built Krita on El Capitan. The bundle is tested to work on a mid 2011 Mac Mini running Mavericks. It looks like you will need hardware that is capable of running El Capitan to run this build, but you do not have to have El Capitan, you can try running on an earlier version of OSX.
  • You will not see a brush outline cursor or any other tool that draws on the canvas, for instance the gradient tool. This is known, we’re working on it, it needs the same fix as the black screen you can get with some Intel drivers.


For the Linux builds we now have AppImages! These are completely distribution-independent. To use the AppImage, download it and make it executable in your terminal or using the file properties dialog of your file manager. Another change is that configuration and custom resources are now stored in the .config/krita.org/kritarc and .local/share/krita.org/ folders of the user home folder, instead of .kde or .kde4.

Known issues on Linux:

  • Your distribution needs to have FUSE enabled
  • On some distributions or installations, you can only run an AppImage as root because the FUSE system is locked down. Since an AppImage is a simple ISO, you can still mount it as a loopback device and execute Krita directly using the AppRun executable in the top folder.

February 03, 2016

darktable 2.0.1 released

we're proud to announce the first bugfix release for the 2.0 series of darktable, 2.0.1!

the github release is here: https://github.com/darktable-org/darktable/releases/tag/release-2.0.1

as always, please don't use the autogenerated tarball provided by github, but only our tar.xz. the checksums are:

$ sha256sum darktable-2.0.1.tar.xz
4d0e76eb42b95418ab59c17bff8aac660f5348b082aabfb3113607c67e87830b  darktable-2.0.1.tar.xz
$ sha256sum darktable-2.0.1.dmg 
580d1feb356e05d206eb74d7c134f0ffca4202943388147385c5b8466fc1eada  darktable-2.0.1.dmg
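If you haven't verified a checksum before, the general pattern with sha256sum -c looks like this (demonstrated on a stand-in file; substitute the published hash line and the real tarball):

```shell
# Demo of checksum verification on a stand-in file (not the real tarball).
printf 'demo contents' > example.tar.xz
sha256sum example.tar.xz > SHA256SUMS   # normally you'd paste the published line here
sha256sum -c SHA256SUMS                 # prints "example.tar.xz: OK" on a match
```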

and the changelog as compared to 2.0.0 can be found below.

New features:

  • add export variables for Creator, Publisher and Rights from metadata
  • add support for key accels for spot removal iop
  • add some more info to --version
  • add collection sorting by group_id to keep grouped images together
  • add $(IMAGE.BASENAME) to watermark
  • OSX packaging: add darktable-cltest
  • OSX packaging: add darktable-generate-cache


Bugfixes:

  • make sure GTK prefers our CSS over the system's
  • make selected label's background color visible
  • make ctrl-t completion popup nicer
  • fixed folder list scrolling to the top on select
  • scale waveform histogram to hidpi screens
  • really hide all panels in slideshow
  • add filename to missing white balance message
  • fix wrong tooltip in print scale
  • changing mask no longer invalidates the filmstrip thumb, making it faster
  • fix calculated image count in a collection
  • don't allow too small sidepanels
  • fixes white balance sliders for some cameras
  • fix some memleaks
  • code hardening in color reconstruction
  • validate noiseprofiles.json on startup
  • no longer lose old export presets
  • fix some crash with wrong history_end
  • don't load images from cameras with CYGM/RGBE CFA for now
  • some fixes in demosaicing
  • fix red/blue interpolation for XTrans
  • fix profiled denoise on OpenCL
  • use sRGB when output/softproof profile is missing
  • fix loading of .hdr files
  • default to libsecret instead of gnome keyring which is no longer supported
  • fix a bug in mask border self intersections
  • don't allow empty strings as mask shape names
  • fix a crash in masks
  • fix an OpenCL crash
  • eliminate deprecated OpenCL compiler options
  • update appdata file to version 0.6
  • allow finding Saxon on Fedora 23

Camera support:

  • Fujifilm XQ2 RAW support
  • support all Panasonic FZ150 crop modes
  • basic support for Nikon 1 V3
  • add defs for Canon CHDK DNG cameras to make noise profiles work

White balance presets:

  • add Nikon D5500
  • add Nikon 1 V3
  • add missing Nikon D810 presets
  • add Fuji X100T
  • copy X100S to X100T

Noise profiles:

  • fix typo in D5200 profiles to make them work again
  • add Panasonic FZ1000
  • add Nikon D5500
  • add Ricoh GR
  • add Nikon 1 V3
  • add Canon PowerShot S100
  • copy Fuji X100S to X100T


Translations:

  • add Hungarian
  • update German
  • update Swedish
  • update Slovak
  • update Spanish
  • update Dutch
  • update French

February 01, 2016

Interview with Jóhann Örn Geirdal


Could you tell us something about yourself?

My name is Jóhann Örn Geirdal and I am a professional artist and a fine art gallery supervisor. I’m from Iceland and currently living in the Reykjavik city area.

Do you paint professionally, as a hobby artist, or both?

I paint digital fine art professionally and it’s definitely my hobby as well.

What genre(s) do you work in?

Everything that gets the job done.

Whose work inspires you most — who are your role models as an artist?

The most important artists to me are Erro, Android Jones, Francis Bacon and Miro.

How and when did you get to try digital painting for the first time?

Back in 2000 I went to a multimedia school to learn to make digital art. Since then I have switched completely to digital media from traditional media.

What makes you choose digital over traditional painting?

Definitely the high level of experimentation and it’s a lot cleaner.

How did you find out about Krita?

It was through the Blender community. The artist David Revoy introduced it.

What was your first impression?

I did not fall in love with it but it was interesting enough to explore more. Now I can’t go back.

What do you love about Krita?

It is simply the best digital art software on the market.

What do you think needs improvement in Krita? Is there anything that really annoys you?

I think it’s on the right track. Just keep going.

What sets Krita apart from the other tools that you use?

It’s the fast development and that the developers are definitely listening to the artists who use it. That is not always the case with other software.

What techniques and brushes do you prefer to use?

I use a lot of custom brushes but I also use default Krita brushes and brushes from other artists.

Where can people see more of your work?

My website is http://www.geirdal.is. There you can see my current work.

Anything else you’d like to share?

I’d like to thank everyone who has made Krita possible and made it this amazing!

January 31, 2016

Setting mouse speed in X

My mouse died recently: the middle button started bouncing, so a middle button click would show up as two clicks instead of one. What a piece of junk -- I only bought that Logitech some ten years ago! (Seriously, I'm pretty amazed how long it lasted, considering it wasn't anything fancy.)

I replaced it with another Logitech, which turned out to be quite difficult to find. Turns out most stores only sell cordless mice these days. Why would I want something that depends on batteries to use every day at my desktop?

But I finally found another basic corded Logitech mouse (at Office Depot). Brought it home and it worked fine, except that the speed was way too fast, much faster than my old mouse. So I needed to find out how to change mouse speed.

X11 has traditionally made it easy to change mouse acceleration, but that wasn't what I wanted. I like my mouse to be fairly linear, not slow to start and then suddenly zippy. There's no X11 property for mouse speed; it turns out that to set mouse speed, you adjust a property called Deceleration.

But first, you need to get the ID for your mouse.

$ xinput list| grep -i mouse
⎜   ↳ Logitech USB Optical Mouse                id=11   [slave  pointer  (2)]

Armed with the ID of 11, we can find the current speed (deceleration) and its ID:

$ xinput list-props 11 | grep Deceleration
        Device Accel Constant Deceleration (259):       3.500000
        Device Accel Adaptive Deceleration (260):       1.000000

Constant deceleration is what I want to set, so I'll use that ID of 259 and set the new deceleration to 2:

$ xinput set-prop 11 259 2

That's fine for doing it once. But what if you want it to happen automatically when you start X? Those constants might all stay the same, but what if they don't?

So let's build a shell pipeline that should work even if the IDs change.

First, let's get the mouse ID out of xinput list. We want to pull out the digits immediately following "id=", and nothing else.

$ xinput list | grep Mouse | sed 's/.*id=\([0-9]*\).*/\1/'

Save that in a variable (because we'll need to use it more than once) and feed it in to list-props to get the deceleration ID. Then use sed again, in the same way, to pull out just the thing in parentheses following "Deceleration":

$ mouseid=$(xinput list | grep Mouse | sed 's/.*id=\([0-9]*\).*/\1/')
$ xinput list-props $mouseid | grep 'Constant Deceleration'
        Device Accel Constant Deceleration (262):       2.000000
$ xinput list-props $mouseid | grep 'Constant Deceleration' | sed 's/.* Deceleration (\([0-9]*\)).*/\1/'

Whew! Now we have a way of getting both the mouse ID and the ID for the "Constant Deceleration" parameter, and we can pass them in to set-prop with our desired value (I'm using 2) tacked onto the end:

$ xinput set-prop $mouseid $(xinput list-props $mouseid | grep 'Constant Deceleration' | sed 's/.* Deceleration (\([0-9]*\)).*/\1/') 2

Add those two lines (setting the mouseid, then the final xinput line) wherever your window manager will run them when you start X. For me, using Openbox, they go in .config/openbox/autostart. And now my mouse will automatically be the speed I want it to be.
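Since the sed patterns do the heavy lifting, it's worth sanity-checking them against sample output before trusting them in autostart. This sketch runs both patterns on example lines like the ones shown above:

```shell
# Check the two sed extraction patterns against sample xinput output.
sample_list='Logitech USB Optical Mouse    id=11   [slave  pointer  (2)]'
sample_prop='Device Accel Constant Deceleration (259):  3.500000'
echo "$sample_list" | sed 's/.*id=\([0-9]*\).*/\1/'              # prints 11
echo "$sample_prop" | sed 's/.* Deceleration (\([0-9]*\)).*/\1/' # prints 259
```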

Show me the way

If you need further proof that OpenStreetMap is a great project, here’s a very nice near real-time animation of the most recent edits: https://osmlab.github.io/show-me-the-way/

Show me the way

Seen today at FOSDEM, at the stand of the Humanitarian OpenStreetMap team which also deserves attention: https://hotosm.org


January 29, 2016


Rio UX Design Hackfest from jimmac on Vimeo.

I was really pleased to see Endless, the little company with big plans, initiate a GNOME Design hackfest in Rio.

The ground team in Rio arranged a visit to two locations where we met with the users that Endless is targeting. While not strictly a user testing session, it helped us better understand the context of their product and get a glimpse of life in Rocinha, one of Rio's famous favelas, and in the more remote, rural Magé. I probably wouldn't have had the chance to see that side of Brazil any other way.

Points of diversion

During the workshop at the Endless offices we went through many areas we identified as being problematic in both the stock GNOME and Endless OS and tried to identify if we could converge on and cooperate on a common solution. Currently Endless isn’t using the stock GNOME 3 for their devices. We aren’t focusing as much on the shell now, as there is a ton of work to be done in the app space, but there are a few areas in the shell we could revisit.

GNOME could do a little better in terms of discoverability. We investigated the role of the app picker versus the window switcher in the overview, and the possibility of entering the overview on boot. Some design choices were explained, and Endless reconsidered our solution as a good way forward. The unified system menu, window controls, notifications, and lock screen/screen shield were also analyzed.

Endless demoed how the GNOME app-provided system search has been used to great effect on their mostly offline devices. Think “offline google”.


Another noteworthy detail was the use of CRT screens. The new mini devices sport a cinch connection to old PAL/NTSC CRT TVs. Such small resolutions and poor image quality put more constraints on the design to keep things legible. This also had a nice side effect: Endless has investigated some responsive layout solutions for gtk+, which they demoed.

I also presented the GNOME design team's workflow and the free software toolchain we use, and did a little demo of Inkscape for icon design and wireframing, and of Blender for motion design.

Last but not least, I’d like to thank the GNOME Foundation for making it possible for me to fly to Rio.

Rio Hackfest Photos

Krita AppImages

Years and years ago, before Krita had even had one single official or unofficial release, we created something called "klik" packages. Basically, an iso that would contain Krita and all its dependencies and that could be used to run Krita on any Linux distribution. The klik packages were quite hard to maintain and hard-ish to use, though. It was easier than trying to build rpm's for SuSE, Redhat, Mandrake, debs for Debian, PKG for Slackware and whatever else was out there, though.

Fast-forward a decade. Despite advances like Launchpad and the OpenSUSE OBS, it's still hard to create Krita packages for every distribution. There are more distributions, more versions, more architectures... Just maintaining the Krita Lime PPA for Ubuntu and derivatives takes a serious amount of time. Basically, the problem of distributing one's application to Linux users is still a problem.

And if you're working on even a moderately popular application that has a moderate development velocity, if it's an application that users rely on to do their job, you really want to provide your work in a binary form.

Distributions do a good job combining all the software we free software developers write into distribution releases; distributions really make it easy and convenient to install a wide range of applications. But there is a big mis-match between what users need and what they get:

Most users want a stable, unchanging operating system that they can install and use without upgrading for a couple of years. On top of that, some users don't want to be bothered by desktop upgrades, others cannot live without the latest desktop. That's often a personal preference, or a matter of not caring about the desktop as long as it can launch their work applications. And those work applications, the production tools they use to earn their money with, those need to be the very latest version.

So, Krita users often still use Ubuntu 12.04. It's the oldest LTS release that's still supported. But Ubuntu doesn't support it by providing the latest productivity applications on top of the stable base, not even through backport PPAs, and if you use the Ubuntu-provided Krita, you're stuck in what now feels like the dark ages.

Enter the spiritual successor of klik: AppImage. AppImages sprang into the limelight when they got Linus Torvalds' seal of approval. That distributing software on Linux is problematic has been a thorn in his side for a long time, particularly since he started working on an end-user application: Subsurface. When the person behind AppImage created a Subsurface package, that resulted in a lot of publicity.

And I contacted Simon to ask for help creating a Krita AppImage. After all, we are in the middle of working up to a 3.0 release, and I'd like to be able to produce regular development builds, not just for Windows and OSX, but also for Linux.

Krita's AppImage is built on CentOS 6.5 using a long bash script. It updates CentOS using the EPEL repository so we get a reasonably recent Qt5, then installs an updated compiler, gets Krita, installs and builds dependencies, builds Krita, checks all the output for its dependencies, copies them into a tree, edits everything to look for dependencies locally instead of on the system, and packages it up with a small executable that runs the Krita executable. The one thing that was really hard was figuring out how to integrate with the GPU drivers.

You can get the recipe here: https://github.com/boudewijnrempt/AppImages/blob/master/recipes/krita/Recipe.

There are some refinements possible: AppImage offers a way to update AppImages by downloading and applying only a delta, which we don't support yet. It's possible to set up a system where we can generate nightly builds, but I haven't figured out the combination of Docker, Travis and GitHub that supports that yet, either. And Simon is working on an improved first-run script that would ask the user whether they would like to have some desktop integration, for instance for file handling or menu integration. All of that is in the future. There are also a handful of distributions that disable FUSE by default, or close it to non-root users. Unfortunately, CentOS is one of them...

For now, though, it's really easy to generate binaries that seem to run quite well on a wide variety of Linux distributions, that perform just like native binaries (well, the packages are native), and that are easy to download and run. So I'm ready to declare "problem solved!"

January 28, 2016

January 26, 2016

HDR Photography with Free Software (LuminanceHDR)


A first approach to creating and mapping HDR images

I have a mostly love/hate relationship with HDR images (well, with tonemapping HDRs more than with the HDR images themselves). I think the problem is that it’s very easy to create really bad HDR images that the photographer thinks look really good. I know because I’ve been there:

Hayleys - Mobile, AL Don’t judge me, it was a weird time in my life…

The best term I’ve heard used to describe over-processed images created from an HDR is “clown vomit” (which would also be a great name for a band, by the way). They are easily spotted with some tell-tale signs such as the halos at high-contrast edges, the unrealistically hyper-saturated colors that make your eyes bleed, and a general affront to good taste. In fact, while I’m putting up embarrassing images that I’ve done in the past, here’s one that scores on all the points for a crappy image from an HDR:

Tractor “My Eyes! The goggles do nothing!”

Crap-tastic! Of course, the allure here is that it provides first-timers a glimpse into something new, and they feel the desire to crank every setting up to 11 with no regard for good taste or aesthetics.

If you take anything away from this post, let it be this: “Turn it DOWN. If it looks good to you, then it’s too much.” ;)

HDR lightprobes are used in movie FX compositing to ensure that the lighting on CG models exactly matches the lighting of the live-action scene they are composited into.

I originally learned about, and used, HDR images when I would use them to illuminate a scene in Blender. In fact, I will still often use Paul Debevec’s Uffizi gallery lightprobe to light scene renders in Blender today.

For example, you may be able to record 10-12 stops of light information using a modern camera. Some old films could record 12-13 stops, while your eyes can see approximately 14 stops.

HDR images are intended to capture more than this number of stops. (Depending on your patience, significantly more in some cases).

I can go on a bit about the technical aspects of HDR imaging, but I won’t. It’s boring. Plus, I’m sure you can use Wikipedia, or Google yourselves. :) In the end, just realize that an HDR image is simply one where there is a greater amount of light information being stored than is able to be captured by your camera sensor in one shot.
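To put a rough number on it: a “stop” is a doubling of light, so a scene's dynamic range in stops is just the base-2 logarithm of the ratio between the brightest and darkest luminances you care about. A tiny sketch (the luminance values are made up for illustration):

```python
import math

def dynamic_range_stops(brightest, darkest):
    """Dynamic range in photographic stops; each stop doubles the light."""
    return math.log2(brightest / darkest)

# A scene spanning luminances from 1 to 4096 (arbitrary linear units)
# covers 12 stops - more than most cameras can capture in one shot.
print(dynamic_range_stops(4096, 1))
```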

Taking an HDR image(s)

More light information than my camera can record in one shot?
Then how do I take an HDR photo?

You don’t.

You take multiple photos of a scene, and combine them to create the final HDR image. Before I get into the process of capturing these photos to create an HDR with, consider something:

When/Why to use HDR

An HDR image is most useful to you when the scene you want to capture has bright and dark areas that fall outside the range of a single exposure, and you feel that there is something important enough outside that range to include in your final image.

That last part is important, because sometimes it’s OK to have some of your photo be too dark for details (or too light). This is an aesthetic decision of course, but keep it in mind…

Here’s what happens. Say you have a pretty scene you would like to photograph. Maybe it’s the Lower Chapel of Sainte Chapelle:

Sainte Chapelle Lower Chapel Sainte Chapelle Lower Chapel by iwillbehomesoon on Flickr (cbsna)

You may set up to take the shot, but when you are setting your exposure you may run into a problem. If you expose for the brighter parts of the image, the shadows fall to black too quickly, crushing out the details there.

If you expose for the shadows, then the brighter parts of the image quickly clip beyond white.

The use case for an HDR is when you can’t find a happy medium between those two exposures.

A similar situation comes up when you want to shoot any ground details against a bright sky, but you want to keep the details in both. Have a look at this example:

HDR Layers by dontmindme, on Flickr HDR Layers by dontmindme, on Flickr (cbna)

In the first column, if you expose for the ground, the sky blows out.

In the second, you can drop the exposure to bring the sky in a bit, but the ground is getting too dark.

In the third, the sky is exposed nicely, but the ground has gone to mostly black.

If you wanted to keep the details in the sky and ground at the same time, you might use an HDR (you could technically also use exposure blending with just a couple of exposures and blend them by hand, but I digress) to arrive at the last column.

Shooting Images for an HDR

Many cameras have an auto-bracketing feature that will let you quickly shoot a number of photos while changing the exposure value (EV) of each. You can also do this by hand simply by changing one parameter of your exposure each time.

You can technically change any of the ISO, shutter speed, or aperture to modify the exposure, but I’d recommend changing only the shutter speed (or the EV value in Aperture Priority mode).

The reason is that changing the shutter speed will not alter the depth-of-field (DoF) of your view or introduce any extra noise the way changing the aperture or ISO would.

When considering your scene, you will also want to try to stick to static scenes if possible. The reason is that objects that move around (swaying trees, people, cars, fast moving clouds, etc.) could end up as ghosts or mis-alignments in your final image. So as you’re starting out, choose your scene to help you achieve success.

Set up your camera someplace very steady (like a tripod), dial in your exposure and take a shot. If you let your camera meter your scene for you then this is a good middle starting point.

For example, if you set up your camera and meter your scene, it might report a 1/160 second exposure. This is our starting point (0EV).

The base exposure, 1/160 s, 0EV

To capture the lower (darker) values, just double your exposure time (1/80 second, +1EV), and take a photo. Repeat if you’d like (1/40 second, +2EV).

1/80 second, +1EV (left), 1/40 second, +2EV (right)

To capture the upper (brighter) values, just halve your exposure time (1/320 second, -1EV) and take a photo. Repeat if you’d like again (1/640 second, -2EV).

1/320 second, -1EV (left), 1/640 second, -2EV (right)

This will give you 5 images covering a range of -2EV to +2EV:

| Shutter Speed | Exposure Value |
|---------------|----------------|
| 1/640 s       | -2EV           |
| 1/320 s       | -1EV           |
| 1/160 s       | 0EV            |
| 1/80 s        | +1EV           |
| 1/40 s        | +2EV           |

Your values don’t have to be exactly 1EV apart each time; LuminanceHDR is usually smart enough to figure out what’s going on from the EXIF data in your images. I chose full EV stops here to simplify the example.
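The doubling/halving arithmetic above is easy to sketch in code. This helper is my own illustration (nothing from LuminanceHDR): it produces the exposure times for a bracketed set from a metered base exposure and a list of EV offsets.

```python
from fractions import Fraction

def bracket_shutter_speeds(base_time, ev_offsets):
    """Exposure times for a bracketed set: +1 EV doubles the exposure
    time, -1 EV halves it (aperture and ISO held constant)."""
    return {ev: base_time * Fraction(2) ** ev for ev in ev_offsets}

# Metered base of 1/160 s, bracketing from -2 EV to +2 EV:
speeds = bracket_shutter_speeds(Fraction(1, 160), range(-2, 3))
for ev in sorted(speeds):
    print(f"{ev:+d} EV: {speeds[ev]} s")
```

This reproduces the 1/640, 1/320, 1/160, 1/80, 1/40 sequence from the example above.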

So armed with your images, it’s time to turn them into an HDR image!

Creating an HDR Image

You kids have it too easy these days. We used to have to bring all the images into Hugin and align them before we could save an hdr/exr file. Nowadays you’ve got a phenomenal piece of Free/Open Source Software to handle all of this for you: LuminanceHDR.

(Previously qtpfsgui. Seriously.)

After installing it, open it up and hit “New HDR Image”:

LuminanceHDR startup screen

This will open up the “HDR Creation Wizard” that will walk you through the steps of creating the HDR. The splash screen notes a couple of constraints.

LuminanceHDR wizard splash screen

On the next screen, you’ll be able to load up all of the images in your stack. Just hit the big green “+” button in the middle, and choose all of your images:

LuminanceHDR load wizard

LuminanceHDR will load up each of your files and investigate them to try to determine the EV values for each one. It usually does a good job of this on its own, but if there is a problem you can always manually specify the actual EV value for each image.

Also notice that because I only adjusted my shutter speed by halving or doubling, each of the relative EV values is neatly spaced 1EV apart. They don’t have to be, though. I could have just as easily done ½EV or ⅓EV steps as well.

LuminanceHDR creation wizard

If there is even the remotest question about how well your images will line up, I’d recommend that you check the box for “Autoalign images”, and let Hugin’s align_image_stack do its magic. You really need all of your images to line up perfectly for the best results.

Hit “Next”, and if you are aligning the images, be patient. Hugin’s align_image_stack will find control points between the images and remap them so they are all aligned. When it’s done you’ll be presented with some editing tools to tweak the final result before the HDR is created.

LuminanceHDR Creation Wizard

You are basically looking at a difference view between images in your stack at the moment. You can choose which two images to difference compare by choosing them in the list on the left. You can now shift an image horizontally/vertically if it’s needed, or even generate a ghosting mask (a mask to handle portions of an image where objects may have shifted between frames).

If you are careful, and there’s not much movement in your image stacks, then you can safely click through this screen. Hit the “Next” button.

LuminanceHDR Creation Wizard

This is the final screen of the HDR Creation Wizard. There are a few different ways to calculate the pixel values that make up an HDR image, and this is where you can choose which ones to use. For the most part, people far smarter than I had a look at a bunch of creation methods, and created the predefined profiles. Unless you know what you’re doing, I would stick with those.
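To give a feel for what these creation models are doing, here is a naive Debevec-style weighted merge, assuming linear sensor values normalized to [0, 1]. The predefined profiles in LuminanceHDR are more sophisticated (they can also estimate the camera's response curve, which this sketch skips):

```python
import numpy as np

def merge_hdr(images, exposure_times):
    """Naive HDR merge: divide each frame by its exposure time to get a
    radiance estimate, then average the frames with a hat-shaped weight
    so near-clipped pixels (near 0.0 or 1.0) contribute less."""
    num = np.zeros_like(images[0], dtype=np.float64)
    den = np.zeros_like(images[0], dtype=np.float64)
    for img, t in zip(images, exposure_times):
        w = 1.0 - np.abs(2.0 * img - 1.0)  # peaks at mid-gray, 0 at clip points
        num += w * (img / t)
        den += w
    return num / np.maximum(den, 1e-8)

# Three fake exposures of one pixel, 1 EV apart; the fully clipped
# frame (1.0) gets zero weight and the other two agree on the radiance.
radiance = merge_hdr(
    [np.array([0.25]), np.array([0.5]), np.array([1.0])],
    [1 / 320, 1 / 160, 1 / 80],
)
```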

Hit “Finish”, and you’re all done!

You’ll now be presented with your HDR image in LuminanceHDR, ready to be tonemapped so us mere mortals can actually make sense of the HDR values present in the image. At this point, I would hit the “Save As…” button, and save your work.

LuminanceHDR Main

Tonemapping the HDR

So now you’ve got an HDR image. Congratulations!

The problem is, you can’t really view it with your puny little monitor.

The reason is that the HDRi now contains more information than can be represented within the limited range of your monitor (and eyeballs, likely). So we need to find a way to represent all of that extra light-goodness so that we can actually view it on our monitors. This is where tonemapping comes in.

We basically have to take our HDRi and use a method for compressing all of that radiance data down into something we can view on our monitors/prints/eyeballs. We need to create a Low Dynamic Range (LDR) image from our HDR.

Yes - we just went through all the trouble of stacking together a bunch of LDR images to create the HDRi, and now we’re going back to LDR? We are - but this time we are armed with way more radiance data than we had to begin with!

The question is, how do we represent all that extra data in an LDR? Well, there are quite a few different ways. LuminanceHDR provides 9 different tonemapping operators (TMOs) to represent your HDRi as an LDR image:

Just a small reminder, there’s a ton of math involved in how to map these values to an LDR image. I’m going to skip the math. The references are out there if you want them.

I’ll try to give examples of each of the operators below, and a little comment here and there. If you want more information, you can always check out the list on the Open Source Photography wikidot page.

Before we get started, let’s have a look at the window we’ll be working in:

LuminanceHDR Main Window

Tonemap is the section where you can choose which TMO you want to use, and it will expose the various parameters you can change for each TMO. This is the section where you will likely spend most of your time, tweaking the settings for whichever TMO you decide to play with.

Process gives you two things you’ll want to adjust. The first is the size of the output that you want to create (Result Size). While you are trying things out and dialing in settings you’ll probably want to use a smaller size here (some operators will take a while to run against the full resolution image). The second is any pre-gamma you want to apply to the image. I’ll talk about this setting a bit later on.

Oh, and this section also has the “Tonemap” button to apply your settings and generate a preview. I’ll also usually keep the “Update current LDR” checked while I rough in parameters. When I’m fine-tuning I may uncheck this (it will create a new image every time you hit the “Tonemap” button).

Results are shown in this big center section of the window. The result will be whatever Result Size you set in the previous section.

Previews are automatically generated and shown in this column for each of the TMOs. If you click on one, it will automatically apply that TMO to your image and display it (at a reduced resolution - I think the default is 400px, but you can change it if you want). It’s a nice way to quickly get an overview of what all the different TMOs are doing to your image.

Ok, with that out of the way, let’s dive into the TMOs and have a look at what we can do. I’m going to try to aim for a reasonably realistic output here that (hopefully) won’t make your eyeballs bleed. No promises, though.

Need an HDR to follow along? I figured it might be more fun (easier?) to follow along if you had the same file I do.
So here it is, don’t say I never gave you anything (This hdr is licensed cc-by-sa-nc by me):
Download from Google Drive (41MB .hdr)

Another note - all of the operators can have their results tweaked by modifying the pre-gamma value ahead of time. This is applied to the image before the TMO runs, and will make a difference in the final output. Usually pushing the pre-gamma value down will increase contrast/brightness in the image, while increasing it will do the opposite. I find it better to start with pre-gamma set to 1 as I experiment; just remember that it is another factor you can use to modify your final result.
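Assuming pre-gamma is an ordinary power-law curve applied to the normalized pixel values before the TMO runs (my reading of its behavior, not a statement about LuminanceHDR's internals), the effect looks like this:

```python
import numpy as np

def apply_pre_gamma(values, pre_gamma):
    """Power-law curve on values normalized to [0, 1]: pre_gamma below
    1.0 lifts the midtones (brighter), above 1.0 pushes them down."""
    return np.clip(values, 0.0, 1.0) ** pre_gamma

v = np.array([0.25, 0.5, 0.75])
brighter = apply_pre_gamma(v, 0.8)   # pre-gamma pushed down
darker = apply_pre_gamma(v, 1.2)     # pre-gamma pushed up
```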

Mantiuk ‘06

I’m starting with this one because it’s the first in the list of TMOs. Let’s see what the defaults from this operator look like against our base HDRi:

Mantiuk 06 default Default Mantiuk ‘06 applied

By default Mantiuk ‘06 produces a muted color result that seems pleasing to my eye. Overall the image feels like it’s almost “dirty” or “gritty” with these results. The default settings produce a bit of extra local contrast boosting as well.

Let’s see what the parameters do to our image.

Contrast Factor

The default factor is 0.10.

Pushing this value down to as low as 0.01 produces just a slight increase in contrast across the image from the default. Not that much overall.

Pushing this value up, though, will tone down the contrast overall. I think this helps to add some moderation to the image, as hard contrasts can be jarring to the eyes sometimes. Here is the image with only the Contrast Factor pushed up to 0.40:

Mantiuk 06 Contrast Factor 0.4 Mantiuk ‘06 - Contrast Factor increased to 0.40
(click to compare to defaults)

Saturation Factor

The default value is 0.80.

This factor just scales the saturation in the image, and behaves as expected. If you find the colors a bit muted using this TMO, you can bump this value a bit (don’t get crazy). For example, here is the Saturation Factor bumped to 1.10:

Mantiuk 06 Saturation 1.10 Mantiuk ‘06 - Saturation Factor increased to 1.10
(click to compare to defaults)

Of course, you can also go the other way if you want to mute the colors a bit more:

Mantiuk 06 Saturation 0.40 Mantiuk ‘06 - Saturation Factor decreased to 0.40
(click to compare to defaults)

Detail Factor

The default is 1.0.

The Detail Factor appears to control local contrast intensity. It gets overpowering very quickly, so make small movements here (if at all). Here is what pushing the Detail Factor up to 10.0 produces:

Mantiuk 06 Detail Factor Don’t do this. Mantiuk ‘06 - Detail Factor increased to 10.0
(click to compare to defaults)

Contrast Equalization

This is supposed to equalize the contrast when there are heavy swings of light/dark across the image on a global scale, but in my example it did little to the image (other than a strange lightening in the upper left corner).

My Final Version

I played a bit starting from the defaults. First I wanted to push down the contrast a bit to make everything just a bit more realistic, so I pushed Contrast Factor up to 0.30. I slightly bumped the Saturation Factor to 0.95 as well.

I liked the textures of the tree and house, so I wanted to bring those back up a bit after decreasing the Contrast Factor, so I pushed the Detail Factor up to 5.0.

Here is what I ended up with in the end:

Mantiuk 06 Final Result My final output (Contrast 0.3, Saturation 0.95, Detail 5.0)
(click to compare to defaults)

Mantiuk ‘08

Mantiuk ‘08 is a global contrast TMO (for comparison, Mantiuk ‘06 uses local contrast heavily). Being a global operator, it’s very quick to apply.

Mantiuk 08 default Default Mantiuk ‘08 applied

As you can see, the effect of this TMO is to compress the dynamic range into an LDR output using a function that operates across the entire image globally. This will produce a more realistic result I think, overall.

The default output is not bad at all, where brights seem appropriately bright, and darks are dark while still retaining details. It does feel like the resulting output is a little over-sharp to my eye, however.

There are only a couple of parameters for this TMO (unless you specifically override the Luminance Level with the checkbox, Mantiuk ‘08 will automatically adjust it for you):

Predefined Display

There are options for LCD Office, LCD, LCD Bright, and CRT but they didn’t seem to make any difference in my final output at all.

Color Saturation

The default is 1.0.

Color Saturation operates exactly how you’d expect. Dropping this value decreases the saturation, and vice versa. Here’s a version with the Color Saturation bumped to 1.50:

Mantiuk ‘08 - Color Saturation increased to 1.50
(click to compare to defaults)

Contrast Enhancement

The default value is 1.0.

This will affect the global contrast across the image. The default seemed to have a bit too much contrast, so it’s worth it to dial this value in. For instance, here is the Contrast Enhancement dialed down to 0.51:

Mantiuk 08 Contrast Enhancement 0.51 Mantiuk ‘08 - Contrast Enhancement decreased to 0.51
(click to compare to defaults)

Compared to the default settings I feel like this operator can work better if the contrast is turned down just a bit to make it all a little less harsh.

Enable Luminance Level

This checkbox/slider allows you to manually specify the Luminance Level in the image. The problem I ran into was that with this enabled, I couldn’t adjust the Luminance far enough to keep bright areas in the image from blowing out. If I let it automatically adjust the Luminance by default, it kept things more under control.

My Final Version

Starting from the defaults, I pushed down the Contrast Enhancement to 0.61 to even out the overall contrast. I bumped the Color Saturation to 1.10 to bring out the colors a bit more as well.

I also dropped the pre-gamma correction to 0.91 in order to bring back some of the contrast lost from the Contrast Enhancement.

Mantiuk 08 final result My final Mantiuk ‘08 output
(pre-gamma 0.91, Contrast Enhancement 0.61, Color Saturation 1.10)
(click to compare to defaults)


Fattal

Crap. Time for this TMO, I guess…

THIS is the TMO responsible for some of the greatest sins of HDR images. Did you see the first two images in this post? Those were Fattal. The problem is that it’s really easy to get stupid with this TMO.

Fattal (like the other local-contrast operators) is dependent on the final output size of the image. When testing this operator, do it at the full resolution you want to export; the results will not match up if you change the size. I’m also going to focus on using only the newer v2.3.0 version, not the old one.

Here is what the default values look like on our image:

Fattal default Default Fattal applied

The defaults are pretty contrasty, and the color seems saturated quite a bit as well. Maybe we can get something useful out of this operator. Let’s have a look at the parameters.


Alpha

The default is 1.00.

This parameter is supposed to be a threshold against which to apply the effect. According to the wikidot, decreasing this value should increase the level of details in the output and vice versa. Here is an example with the Alpha turned down to 0.25:

Fattal - Alpha decreased to 0.25
(click to compare to defaults)

Increasing the Alpha value seems to darken the image a bit as well.


Beta

The default value is 0.90.

This parameter is supposed to control how much of the algorithm is applied to the image. A value of 1 has no effect (a straight gamma=1 mapping). Lower values increase the amount of the effect. Recommended values are between 0.8 and 0.9. As the values get lower, the image gets more cartoonish looking.

Here is an example with Beta dropped down to 0.75:

Fattal Beta 0.75 Fattal - Beta decreased to 0.75
(click to compare to defaults)

Color Saturation

The default value is 1.0.

This parameter does exactly what’s described. Nothing interesting to see here.

Noise Reduction

The default value is 0.

This should suppress fine detail noise from being picked up by the algorithm for enhancement. I’ve noticed that it will slightly affect the image brightness as well. Fine details may be lost if this value is too high. Here the Noise Reduction has been turned up to 0.15:

Fattal NR 0.15 Fattal - Noise Reduction increased to 0.15
(click to compare to defaults)

My Final Version

This TMO is sensitive to changes in its parameters. Small changes can swing the results far, so proceed lightly.

I increased the Noise Reduction a little bit up front, which lightened up the image. Then I dropped the Beta value to let the algorithm work to brighten up the image even further. To offset the increase, I pushed Alpha up a bit to keep the local contrasts from getting too harsh. A few minutes of adjustments yielded this:

Fattal Final Result My Fattal output - Alpha 1.07, Beta 0.86, Saturation 0.7, Noise red. 0.02
(click to compare to defaults)

Overall, Fattal can be easily abused. Don’t abuse the Fattal TMO. If you find your values sliding too far outside of the norm, step away from your computer, get a coffee, take a walk, then come back and see if it still hurts your eyes.


Drago

Drago is another of the global TMOs. It has just one control: Bias.

Here is what the default values produce:

Default Drago applied

The default values produced a very washed out appearance to the image. The black points are heavily lifted, resulting in a muddy gray in dark areas.

Bias is the only parameter for this operator. The default value is 0.85. Decreasing this value will lighten the image significantly, while increasing it will darken it. For my image, even pushing the Bias value all the way up to 1.0 only produced marginal results:

Drago Bias 1.0 Drago - Bias 1.0
(click to compare to defaults)

Even at this level the image still appears very washed out. The only other parameter to change would be the pre-gamma before the TMO can operate. After adjusting values for a bit, I settled on a pre-gamma of 0.67 in addition to the Bias being set to 1:

My Final Version

Drago final result My result: Drago - Bias 1.0, pre-gamma 0.67
(click to compare to defaults)
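For the curious, the operator this TMO is based on (Drago et al. 2003, “Adaptive Logarithmic Mapping”) is compact enough to sketch. Bias sets the exponent that steers how quickly the logarithm's base grows with luminance, which is why lower values brighten the image:

```python
import math

def drago_tonemap(lw, lw_max, bias=0.85, ld_max=100.0):
    """Display value for world luminance lw - a sketch of the Drago et
    al. 2003 operator. Lower bias shrinks the log base for bright
    pixels, brightening the image; higher bias darkens it."""
    exponent = math.log(bias) / math.log(0.5)
    scale = (ld_max * 0.01) / math.log10(lw_max + 1.0)
    base = 2.0 + 8.0 * (lw / lw_max) ** exponent
    return scale * math.log(lw + 1.0) / math.log(base)

# Peak luminance always maps to 1.0; midtones come up as bias drops.
for b in (0.7, 0.85, 1.0):
    print(b, drago_tonemap(100.0, 1000.0, bias=b))
```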


Durand

Most of the older documentation/posts that I can find describe Durand as the most realistic of the TMOs, yielding good results that do not appear overly processed.

Indeed the default settings immediately look reasonably natural, though it does exhibit a bit of blowing out in very bright areas - which I imagine can be fixed by adjustment of the correct parameters. Here is the default Durand output:

Default Durand applied
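The idea behind this operator (Durand & Dorsey 2002) is a base/detail split: blur the log-luminance to get a smooth base layer, compress only the base, then add the untouched detail back. A rough sketch on a 1-D luminance array; the real operator uses an edge-preserving bilateral filter, for which a plain box blur stands in here (so expect halos near hard edges):

```python
import numpy as np

def durand_like_tonemap(lum, base_contrast=5.0, kernel_size=5):
    """Compress the smooth base layer of log-luminance while preserving
    the detail layer. A box blur stands in for the bilateral filter."""
    log_l = np.log10(np.asarray(lum, dtype=np.float64) + 1e-6)
    kernel = np.ones(kernel_size) / kernel_size
    base = np.convolve(log_l, kernel, mode="same")   # bilateral stand-in
    detail = log_l - base
    # Scale the base so it spans at most log10(base_contrast) decades.
    span = max(base.max() - base.min(), 1e-6)
    out_log = base * (np.log10(base_contrast) / span) + detail
    return 10.0 ** (out_log - out_log.max())         # peak normalized to 1.0

ldr = durand_like_tonemap(np.geomspace(0.001, 1000.0, 64))
```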

There are three parameters that can be adjusted for this TMO, let’s have a look:

Base Contrast

The default is 5.00.

This value is considered a little high by most sources I’ve read, which usually recommend dropping it to the 3-4 range. Here is the image with the Base Contrast dropped to 3.5:

Durand Base Contrast 3.5 Durand - Base Contrast decreased to 3.5
(click to compare to defaults)

The Base Contrast does appear to drop the contrast in the image, but it also drops the blown-out high values on the house to more reasonable levels.

Spatial Kernel Sigma

The default value is 2.00.

This parameter seems to change the contrast in the image. Large value swings are required to notice a difference, depending on the other parameter values. Pushing the value up to 65.00 looks like this:

Durand Spatial Kernel 65.00 Durand - Spatial Kernel Sigma increased to 65.00
(click to compare to defaults)

Range Kernel Sigma

The default value is 2.00.

My limited testing suggests that this parameter doesn’t quite operate correctly. Changes will not modify the output image until you reach a certain threshold at the upper bounds, where it will overexpose the image. I am assuming there is a bug in the implementation, but will have to test further before filing a bug report.

My Final Version

In experimenting I found that pre-gamma adjustments can affect the saturation in the output image. Pushing pre-gamma down a bit will increase the saturation.

Durand final result My Durand results - pre-gamma 0.88, Contrast 3.6, Spatial Sigma 5.00
(click to compare to defaults)

I pulled the Base Contrast back to keep the sides of the house from blowing out. Once I had done that, I also dropped the pre-gamma to 0.88 to bump the saturation slightly in the colors. A slight boost to Spatial Kernel Sigma let me increase local contrasts slightly as well.

Finally, I used the Adjust Levels dialog to modify the levels slightly by raising the black point a small amount (hey - I’m the one writing about all these #@$%ing operators, I deserve a chance to cheat a little).

Reinhard ‘02

This is supposed to be another very natural looking operator. The initial default result looks good with medium-low contrast and nothing blowing out immediately:

Default Reinhard ‘02 applied

Even though many parameters are listed, they don’t really appear to make a difference, at least with my test HDR. Even worse, attempting to use the “Use Scales” option usually just crashes my LuminanceHDR.

Key Value

The default is 0.18.

This appears to be the only operator that does anything in my image at the moment. Increasing it will increase the brightness of the image, and decreasing it will darken the image.

Here is the image with Key Value turned down to 0.05:

Reinhard 02 Key Value 0.05 Reinhard ‘02 - Key Value 0.05
(click to compare to defaults)
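The published global Reinhard '02 curve explains why Key Value acts like a brightness control: the scene is first scaled so that its log-average luminance lands on the key, then every value is compressed with L/(1+L). A sketch (LuminanceHDR's implementation exposes more knobs than this):

```python
import numpy as np

def reinhard_global(lum, key=0.18):
    """Global Reinhard '02: scale the scene so its log-average luminance
    maps to `key` (0.18 is photographic middle gray), then compress each
    value with L/(1+L) so everything lands below 1.0."""
    log_avg = np.exp(np.mean(np.log(lum + 1e-6)))
    scaled = (key / log_avg) * lum
    return scaled / (1.0 + scaled)

lum = np.array([0.01, 0.18, 1.0, 50.0])     # toy scene luminances
dim = reinhard_global(lum, key=0.05)        # low key: darker image
bright = reinhard_global(lum, key=0.45)     # high key: brighter image
```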


Phi

The default is 1.00.

This parameter does not appear to have any effect on my image.

Use Scales

Turning this option on currently crashes my session in LuminanceHDR.

My Final Version

I started by setting the Key Value very low (0.01), and adjusted it up slowly until I got the highlights about where I wanted them. Due to this being the only parameter that modified the image, I then started adjusting pre-gamma up until I got to roughly the exposure I thought looked best (1.09).

Reinhard 02 final result Final Reinhard ‘02 version - Key Value 0.09, pre-gamma 1.09
(click to compare to defaults)

Reinhard ‘05

Reinhard ‘05 is supposed to be another more ‘natural’ looking TMO, and also operates globally on the image. The default settings produce an image that looks under-exposed and very saturated:

Default Reinhard ‘05 applied

There are three parameters for this TMO that can be adjusted.


Brightness

The default value is -10.00.

Interestingly, pushing this parameter down (all the way to its lowest setting, -20) did not darken my image at all. Pulling it up, however, did increase the brightness overall. Here the brightness is increased to -2.00:

Reinhard 05 brightness -2.00 Reinhard ‘05 - Brightness increased to -2.00
(click to compare to defaults)

Chromatic Adaptation

The default is 0.00.

This parameter appears to affect the saturation in the image. Increasing it desaturates the results, which is fine given that the default value of 0.00 shows a fairly saturated image to begin with. Here is the Chromatic Adaptation turned up to 0.60:

Reinhard 05 chromatic adaptation 0.6 Reinhard ‘05 - Chromatic Adaptation increased to 0.6
(click to compare to defaults)

Light Adaptation

The default is 1.00.

This parameter modifies the global contrast in the final output. It starts at the maximum of 1.00, and decreasing this value will increase the contrast in the image. Pushing the value down to 0.5 does this to the test image:

Reinhard 05 light adaptation 0.50 Reinhard ‘05 - Light Adaptation decreased to 0.50
(click to compare to defaults)

My Final Version

Reinhard 05 final result My Reinhard ‘05 - Brightness -5.00, Chromatic Adapt. 0.60, Light Adapt. 0.75
(click to compare to defaults)

Starting from the defaults, I raised the Brightness to -5.00 to lift the darker areas of the image, while keeping an eye on the highlights to keep them from blowing out. I then decreased the Light Adaptation until the scene had a reasonable amount of contrast without becoming overpowering to 0.75. At that point I turned up the Chromatic Adaptation to reduce the saturation in the image to be more realistic, and finished at 0.60.


Ashikhmin

This TMO has little in the way of controls - just options for two different equations that can be used, and a slider. The default (Eqn. 2) image is very dark and heavily saturated:

Ashikhmin default Default Ashikhmin applied

There is a checkbox option for using a “Simple” method (that produces identical results regardless of which Eqn is checked - I’m thinking it doesn’t use that information).


Simple

Checking the Simple checkbox removes any control over the image parameters, and yields this image:

Ashikhmin simple Ashikhmin - Simple
(click to compare to defaults)

Fairly saturated, but exposed reasonably well. It lacks some contrast, but the tones are all there. This result could use some further massaging to knock down the saturation and to bump the contrast slightly (or adjust pre-gamma).

Equation 4

This is the result of choosing Equation 4 instead:

Ashikhmin equation 4 Ashikhmin - Equation 4
(click to compare to defaults)

There is a large loss of local contrast details in the scene, and some of the edges appear very soft. Overall the exposure remains very similar.

Local Contrast Threshold

The default value is 0.50.

This parameter modifies the local contrast being applied to the image. The result will be different depending on which Equation is being used.

Here is Equation 2 with the Local Contrast Threshold reduced to 0.20:

Ashikhmin eqn 2 local contrast 0.20 Ashikhmin - Eqn 2, Local Contrast Threshold 0.20
(click to compare to defaults)

Lower values will decrease the amount of local contrast in the final output.

Equation 4 with Local Contrast Threshold reduced to 0.20:

Ashikhmin eqn 4 local contrast 0.20 Ashikhmin - Eqn 4, Local Contrast Threshold 0.20
(click to compare to defaults)

My Final Version

After playing with the options, I feel the best overall version comes from just using the Simple option. Further tweaking may be necessary to get usable results beyond this.


Pattanaik

This TMO appears to mimic the behavior of human eyes, with the inclusion of terminology like “Rod” and “Cone”. There are quite a few parameters to adjust if wanted. The default TMO results in an image like this:

Default Pattanaik applied

The default results are very desaturated, and tend to blow out in the highlights. The dark areas appear well exposed, with the problems (in my test HDR) mostly constrained to the highlights. At first glance, the results look like something that could be worked with.

There are quite a few different parameters for this TMO. Let’s have a look at them:


Multiplier

The default value is 1.00.

This parameter appears to modify the overall contrast in the image. Decreasing the value will decrease contrast, and vice versa. It also appears to slightly modify the brightness in the image as well (pushing the highlights to a less blown-out value). Here is the Multiplier decreased to 0.03:

Pattanaik - Multiplier 0.03
(click to compare to defaults)

Local Tone Mapping

This parameter is just a checkbox, with no controls. The result is a washed out image with heavy local contrast adjustments:

Pattanaik - Local Tone Mapping
(click to compare to defaults)

Cone/Rod Levels

The default is to have Auto Cone/Rod checked, greying out the options to change the parameters manually.

Turning off Auto Cone/Rod will get the default manual values of 0.50 for both applied:

Pattanaik - Manual Cone/Rod (0.50 for each)
(click to compare to defaults)

The image gets very blown out everywhere, and modification of the Cone/Rod values does not significantly reduce brightness across the image.

My Final Version

Starting with the defaults, I reduced the Multiplier to bring the highlights under control. This reduced contrast and saturation in the image.

My final Pattanaik - Multiplier 0.03, pre-gamma 0.91
(click to compare to defaults)

To bring back contrast and some saturation, I decreased the pre-gamma to 0.91. The results are not too far off the default settings. The results could still use some further help with global contrast and saturation, and might benefit from layering or modifications in GIMP.

Closing Thoughts

Looking through all of the results shows just how different each TMO will operate across the same image. Here are all of the final results in a single image:

I personally like the results from Mantiuk ‘06. The problem is that it’s still a little more extreme than I would care for in a final result. For a really good, realistic result that I think can be massaged into a great image, I would go to Mantiuk ‘08 or Reinhard.

I could also do something with Fattal, but would have to tone a few things down a bit.

While you’re working, remember to occasionally open up the Levels Adjustment to keep an eye on the histogram. Look for highlights blowing out, and shadows becoming too murky. All the normal rules of image processing still apply here - so use them!

You’re trying to use HDR as a tool for you to capture more information, but remember to still keep it looking realistic. If you’re new to HDR processing, then I can’t recommend enough to stop occasionally, get away from the monitor, and come back to look at your progress.

If it hurts your eyes, dial it all back. Heck, if you think it looks good, still dial it back.

If I can head off even one clown-vomit image, then I’ll consider my mission accomplished with this post.

A Couple of Further Resources

Here’s a few things I’ve found scattered around the internet if you want to read more.

We also have a sub-category on the forums dedicated entirely to LuminanceHDR and HDR processing in general: https://discuss.pixls.us/c/software/luminancehdr.

This tutorial was originally published here.

January 25, 2016

AppData and the gettext domain

When users are searching for software in GNOME Software it is very important to answer the question “Is this localized in my language?” If you can only speak Swedish then an application talking just in American English is not much use at all. The way we calculate this in the AppStream builder is to look at the compiled .mo files, breaking them apart and then using statistics to work out what locales are included.

When we’re processing distro packages we usually extract them one at a time. We first try for a gettext domain (the .mo file name) of the distro package name, and if that’s not found then we just try and find the first .mo file in any of the locale directories. This works about 70% of the time (which is good) but fails about 30% of the time (which is bad). For xdg-app we build the application in a special prefix, along with any dependent libraries. We don’t have a distro package name for the bundle (only the application ID) and so the “first .mo file we can find” heuristic fails more often that it works. We clearly need some more information about the gettext domain from the upstream project.

AppData to the rescue. Adding a single tag to the AppData file informs the AppStream generation code in the xdg-app builder what gettext domain to use for an application. To use this you just need to add:

  <translation type="gettext">the_gettext_domain_here</translation>

under the <component> tag. The gettext domain is normally set in the configure.ac file with the GETTEXT_PACKAGE define. If you don’t have this extra data in your application then appstream-util validate is soon going to fail, and your application isn’t going to get the language metadata and so will be lower in the search results for users using GNOME Software in a non-C locale. If your GNOME application is available in jhbuild, the good news is that I’ve added the <translation> tag to 104 projects semi-automatically today. For XFCE and KDE I’m going to be sending emails to the development mailing lists tomorrow. For all other applications I’m going to be using the <update_contact> email address set in the AppData file for another mass-emailing.
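Put together, a stripped-down AppData file with the tag in place might look like this (the component id and name are placeholders, and a real file needs more tags than shown here):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<component type="desktop">
  <id>org.example.MyApp.desktop</id>
  <name>MyApp</name>
  <!-- matches the GETTEXT_PACKAGE define in configure.ac -->
  <translation type="gettext">the_gettext_domain_here</translation>
</component>
```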

Although it seems I’m asking people to do more things again and again I can assure you that slowly we’re putting the foundations in place for an awesome software installer experience. Just today I merged in the xdg-app branch into gnome-software and so I’m hoping to have a per-user xdg-app preview available in Fedora 24. Exciting times. :)

Kicking off 2016 — the first Krita Sprint

This weekend, we had our place full of hackers again. The Calligra Text Layout Sprint coincided with the Krita 2016 Kick-Off Sprint. Over the course of the sprint, which started on Wednesday, we had two newcomers to KDE-related hacking sprints, and during the weekend, we had an unusual situation for free software: actual gender parity.

When the Calligra people were asked whether their sprint was a success, the answer Camilla gave was an unqualified “Ja!”. The main topic was sections and columns. There was also a lot of knowledge transfer and fixing of the okular plugin that’s part of Calligra. And Jos gave a sneak preview of his Fosdem presentations, not at all hindered by the noise the Krita hackers were making.

As for Krita, we started on Wednesday already, and first discussed cleaning up our source tree. We recently had a patch from a new contributor, who commented that, compared to some other projects he’d hacked on, our codebase is an exemplar of clarity… But it can and should be improved; we still have lots of legacy from the Calligra days, so we’re moving things around to make the structure more logical.

Next was OpenGL. One of the promises of Qt4 was that QPainter would work on an OpenGL surface. Back in the Krita 1.x days, the Qt3 days, we already had a QPainter and an OpenGL canvas. Both canvases needed separate code for brush outlines, tool helplines and so on. When we ported to Qt4, we unified that code. Painting the image on the canvas was still different for the OpenGL and QPainter canvases, but what we call the tool and canvas decorations was all unified.

However, the OpenGL QPainter engine has never been really maintained and got stuck in the OpenGL v2 days. Maybe sufficient for some mobile applications, but not for a desktop application like Krita, which needs several OpenGL 3.2 features to function correctly. That wasn’t a problem until we decided to make a serious effort of porting to OSX. The OpenGL QPainter engine, being OpenGL v2, can only work in an OpenGL 3.2 context if the compatibility profile is set: it needs all those deprecated things.

Apple decided that nobody would need that, and offers only the Core Profile. That sucks. That means the OpenGL QPainter engine is not available on OSX for applications that need V3.2 Core Profile. Worse, Intel’s Windows drivers regularly go through a phase where using V3.2 Compatibility Profile causes black screens.

So… We teased out Qt’s OpenGL QPainter engine and started trying to port it to OpenGL v3.2. It’s supposed to be possible, but it’s likely to be extremely tricky. If we got it working, it would be nice if it could become a part of Qt… But that’s likely as challenging as writing the code in the first place.

When, the next day, we started discussing OpenGL in the context of the Qt5 and QtQuick 2 port of Krita Sketch, which Friedrich is working on, we sounded like this:


(Image by Wolthera)

After that, on Saturday, we started planning the next Kickstarter, which will have as its main topics Text and Vector. And we even began to look forward to 2017, when we might want to focus on issues around the creation of comics. Here are the minutes!

A nice dinner at our favourite Greek restaurant, Kreta, a quiet evening, and on Sunday…

On Sunday we really went through all the registered Wish bugs in bugzilla. There were 316 of them. A whole bunch we closed for now: wonderful ideas, but not going to happen in the next two years, another bunch turned out to be already implemented and the rest we carefully categorized using the following formula:

  • WISHGROUP: Pie-in-the-sky: not going to happen, but it would be really cool
  • WISHGROUP: Big Projects: needs more definition, maybe two, three months of work
  • WISHGROUP: Stretchgoal : up to a couple of weeks or a month of work
  • WISHGROUP: Larger Usability Fixes: maybe a week or two weeks of work
  • WISHGROUP: Small Usability Fixes: half a day or a day of work
  • WISHGROUP: Out of scope: too far from our current core goals to implement
  • WISHGROUP: Needs proposal and design: needs discussion among artists to define scope first

And now the library reorganization is in progress. We also fixed a bunch of bugs, so expect new Windows, OSX and Linux Krita 3.0 development builds later this week! And end of this month, the last Krita 2.9 bugfix release!

The Art of Open Source

This article introduces Blender to a wider audience.

Written for Linux Format magazine, Jim Thacker sketches Blender’s history and the successful content-driven development model.

Download or read the pdf here.

(Text and pdf is (C) by Linux Format, copied on blender.org with permission)


January 23, 2016

Bit shifting, done by the << and >> operators, lets C-family languages express memory and storage access, which is quite important for reading and writing exchangeable data. But:

Question: where does the bit end up when shifted left?

Answer: it depends.

Long answer:

// On LSB/intel a left shift makes the bit move
// to the opposite side, the right. Omg.
// The shift (<<, >>) operators follow the MSB scheme,
// with the highest value left.
// Shift math expresses our written order:
// 10 is more than 01.
// x left shift by n == x << n == x * pow(2,n)
#include <stdio.h>  // printf
#include <stdint.h> // uint16_t, uint8_t

int main(int argc, char **argv)
{
  uint16_t u16, i, n;
  uint8_t * u8p = (uint8_t*) &u16; // view the uint16_t as 2 bytes

  // iterate over all bit positions
  for(n = 0; n < 16; ++n)
  {
    // left shift operation
    u16 = 0x01 << n;
    // show the mathematical result
    printf("0x01 << %u:\t%d\n", n, u16);
    // show the bit positions, lowest bit first
    for(i = 0; i < 16; ++i)
      printf("%u", u16 >> i & 0x01);
    // show which of the two bytes holds the bit
    for(i = 0; i < 2; ++i)
      if(u8p[i])
        printf(" byte[%d]", i);
    printf("\n");
  }
  return 0;
}

Result on a LSB/intel machine:

0x01 << 0:      1 
1000000000000000 byte[0] 
0x01 << 1:      2 
0100000000000000 byte[0] 
0x01 << 2:      4 
0010000000000000 byte[0] 
0x01 << 3:      8 
0001000000000000 byte[0]

On an MSB machine << moves bits to the left, while on an LSB machine << is a lie and moves them to the right in memory. For directional shifts I would like to use separate operators, e.g. <<| and |>>.

January 20, 2016

xdg-app and GNOME Software

With a huge amount of support from alex, installing applications using xdg-app in GNOME Software is working. There are still a few rough edges, but it’s getting there quickly now.

Screenshot from 2016-01-20 16-50-02

PROTOCULTURAL Taipei New Causes for the Old Event Announced

Taipei Contemporary Art Center hosts a Protocultural exhibition from January 23rd to 31st, 2016. Featured in the exhibition are 3D Models printed from around the world from the #NEWPALMYRA #ARCHOFTRIUMPH #3DPRINTSPRINT.

Join the workshop and opening on January 23rd, 2016 from 2pm to 7pm.

January 18, 2016

Interview with Cremuss

Could you tell us something about yourself?

My name is Ghislain, I’m 25 years old and I live in Saint-Etienne, France. I’ve worked as a freelancer in the video game industry since 2009. Through the years I experienced a lot of different software: I began to use 3dsmax 5.1 when I entered secondary school (I had a computer very young), then a bit of Maya but I wasn’t really committed to learn 3D properly yet: it was more a toy than anything else at this point.

I grew a bit tired of it during high school so I stopped for a time and it’s only after finishing high school that I wanted to start digging into CG software again. I was fully converted to open-source projects and GNU/Linux at this moment so in my mind I obviously had to give Blender a try. I learned it, loved it and fell in love with video game art while helping with the development of an open source video game/engine, SpringRTS.

I love computers and I’m very much interested in the “science” behind it: I did two years of C++ in my free time, learned html/css and javascript, toyed a lot with Gentoo/Linux as well as a bit of computer aided music just for fun :)

Besides computers, I spend a lot of time on my BMX. I’ve been riding for seven and a half years and it’s a huge source of motivation for me.

Do you paint professionally, as a hobby artist, or both?

Both, I paint professionally but I also like to spend a lot of my free time on personal 2D/3D projects. Although to be fair, that’s kinda irrelevant to me. I’m here as an artist to do my best, no matter what. If you’re passionate enough about something, I’d say you should be serious about it and honour it by doing your best. The difference between painting as a hobby or professionally doesn’t make much sense then.

What genre(s) do you work in?


I mostly paint stylized textures for 3D models. I’m not much of a drawer or painter so most of my 2D work is involving 3D sooner or later.

Whose work inspires you most — who are your role models as an artist?

There’s so many amazingly talented artists that it’s hard to give a specific name but if I had to mention someone, it would probably be the team who worked on Allods Online. That game has the best hand-painted stylized textures I’ve ever seen. Artists at Blizzard are also obviously a huge inspiration as well as Riot Games artists (such as Bogdanbl4 who is doing crazy work!)

How and when did you get to try digital painting for the first time?

I’m pretty sure my first ever digital painting session was when my dad bought me my first ever wacom intuos 3 M ! I was in high school and I gave it a try. I drew some sort of ugly dragon I think. I remember I signed the painting with a signature as big as the painting itself haha, it was awful.

What makes you choose digital over traditional painting?

I didn’t really choose. I’m not really a good drawer or painter so I never actually started traditional painting although I should have. I had to learn digital painting because it’s a huge part of cartoony/stylized 3D work.


How did you find out about Krita?

I’m a long time open source software user so I like to stay in touch with what’s new in the open source world. I heard about Krita quite early when it started to grow. I had always  used Gimp until then but I grew really tired of its slow development, lack of 16 bits, layers group and all and most of all, lack of communication from the developers. I migrated to Krita as soon as I had all the tools I needed in order to do my work.

What was your first impression?

I had to wait quite a bit between the time I first learned about Krita and the time I began to use it in my workflow since it lacked several features I found quite important at that time, such as color balance as well as stability. But first of all, I was really impressed by how fast it grew and how quickly it matured.

What do you love about Krita?

Its fast development and the feeling that the community is really involved in the development. There’s a lot of news about the software and development reports so users know what to expect and when. Developers are really committed to do something great.

Also, specific features such as wrap tool, mirrored painting, the brush engine and non-destructive filters.

What do you think needs improvement in Krita? Is there anything that really annoys you?

Developers are going to hate me for this, and I’ve really discussed it before with them, but I feel the colour management in Krita could be less painful. I know it’s something they’re proud of and I understand why, but it’s annoying me as well as other artists. I feel the user has to worry too much about it, especially regarding something that complex that so few of us, artists, understand and/or are willing to understand.

I get the theory behind colour management, but how did we end up with a list of 10 different sRGB profiles when it’s something that was normalized in 1999 and was supposed to be a “standard”? Linear versus gamma-corrected color spaces are a thing, yes, but to a lot of artists that doesn’t justify the list of 76 different color profiles that we currently have in Krita. It’s scary.

For instance, I had to ask the devs themselves how to properly transfer a 16-bit image from Blender to Krita, because neither the software nor the internet could tell me. How are we supposed to know what profile Blender uses? What do you do if you realize too late that Blender doesn’t use the same profile as your file in Krita? I’m always so scared to convert the color space because I don’t want my work screwed up by something I don’t understand and really don’t care about. I feel lost and I know for a fact that other artists are as lost as I am.

To sum up, I feel like colour management is necessary, but it should be dealt with more by the software, under the hood, than by the user. How to achieve that, I don’t know 😉

Otherwise, I never liked the smudge behaviour. I think it’s a little weird and could be improved.

What sets Krita apart from the other tools that you use?

I’m not experienced enough with 2D applications to answer that honestly. I feel Krita is a great mix between a painting and a photo editing application and it blends quite nicely into my workflow pipeline so I just want to say it’s great :)

If you had to pick one favourite of all your work done in Krita so far, what would it be, and why?


I would pick these 3D stylized houses. I’m the kind of artist that never seems to be satisfied with his work but somehow I’m satisfied with those. Textures were 100% hand-painted in Krita.

What techniques and brushes did you use in it?

No particular techniques, and the default brush :) Concerning brushes I usually like to keep it as simple as possible: the default brush is just fine by me 99% of the time.

Where can people see more of your work?

I have a portfolio here: http://www.cremuss.net I always take ages to update it but right now it is almost up to date, so take a look :p

Anything else you’d like to share?

Thanks a lot for the interview ! Thanks to Krita devs, keep it up and peace :)

January 17, 2016

A Week in the Life of a Krita Maintainer

Sunday, 10th of January

Gah... I shouldn't have walked that much yesterday... It was fun, to see A's newborn daughter, A, and then we went to Haarlem to buy tea and have dinner. But with a nasty foot infection, that wasn't wise. So, no chance of serving in church today. Which means... More Krita time! Around 9:30 started the first OSX build, CentOS build and Windows build, time to try and figure out some bug fixes. Also, reply to forum posts and mails to foundation@krita.org... And prepare review requests for making Krita .kra and OpenRaster .ora images visible in all Qt applications, as images and thumbnails. Fix warnings in the OSX build. Fix deprecated function calls everywhere. Yay! Wolthera and Scott start cleaning up color stuff and the assistants gui. Dinner.

Monday, 11th of January.

Dammit, still cannot walk. But that means... More Krita time! I'm missing a whole day of income, being a small, independent entrepreneur, but I've got a better chance of fixing those Windows, OSX and Linux builds. Looks like OSX works fine now, Windows sometimes, but there's still something wrong with the Linux builds. I think we need more people in our project, people who like making builds and packages, so I can go back to bug fixing. Bug fixes... Let's fix the CentOS build issue by dropping the build-time desktop file to json conversion. Fix a memory leak in the jpeg filter, been there for ages. Make it possible to load and save GBR and GIH brushes! Kickstarter feature lands! Not with the big rewrite of our import/export system I'd wanted to do, but it's better now than it was; import/export can now specify a mapping from filename extension to mimetype, so we can load files that the shared desktop's mime database doesn't know about yet. Break selecting the right style and theme -- oops! Finally fix the unreadable Close button on the splash screen (when the user used a light-colored theme). User-support mail, forum posts, irc chat... Dmitry adds cut, copy and paste of layers, another kickstarter feature! Yay!!! Tonight is roleplaying night, need to prepare the adventure for my readers, with maps. (Session report is here.)

Tuesday, 12th of January

Six-forty-effing-five. Alarm clock. I was dreaming of Qt builds going awry, so probably a good time to get up. Erm... Mail, more mail, and forum posts during breakfast. Orange juice, coffee, tea. Off to the railway station around 7:40. Do a couple of Italian lessons with Duolingo while waiting for the train to arrive, interspersed with Facebook Page Manager community management moments. On the train. Sleepy! Time to start working on our OSX port. Beelzy did an awesome job providing me with lots of patches, now they need to be integrated. Cool, Dmitry doing lots of cleanups! But where did Nimmy go? We really need his patch to make Krita work on OSX... Ah! And there's the bad boy, we accidentally had the wrong application icon. Let's remove that one, and use ours instead. And then 9:12, arrival in Duivendrecht. 9:25, arrival at the day job -- Krita cannot pay my bills yet, so I'm working on a QtQuick1 to QtQuick2 port for a Dutch home automation company. Work, work, work, without a break, until 17:30, when it's time to go back to the train. Dinner -- and yay! Smjert has got his setup fixed and is fixing bugs again. Users keep mailing foundation@krita.org with support questions, and I'm just too nice... Answered. Time to go to bed, around midnight.

Wednesday, January the 13th

Exciting! Windows builds and OSX builds were working last Sunday, and today the Linux appimage builds are working on most systems! We might be able to release the pre-pre-pre-pre-pre-alpha tomorrow! And we're creating the correct OSX application bundles, with icon! And Timothee has fixed the icon, and Jouni has started implementing importing image sequences as animations! And the alarm clock buzzed me at 6:45. Wait, that's not yay-worthy. Refactor the PNG export dialog a bit. Work, work, work. I realize that after three months I'm one of the people at this office who's been here longest. There are ten people who've been here for more than six months, twenty who've been here for six to three months and it seems there's a legion who've just started... Fix the top toolbar sliders. And I've got extra-double-plus long hacking time on the train because the track is blocked and I have to make a detour over Zwolle. No, tonight I'm not going to finish the release notes or the fixed Windows (OpenGL is broken. wth), OSX and Linux packages. Time for dinner, a bath and bed. And all kickstarter rewards except for some shirts have arrived!

Thursday, January the 14th.

Gah, six colon four five. Time to get up. And I was dreaming of a bunch of kittens playing in a hay-loft that was being converted into yuppie student housing. Must be significant or something. At least I wasn't trying to form keys out of my pillow cover so I could type "./configure" in the qt source directory, which is what my mind tried to make me do last night. Oooh! Ben Cooksley has enabled docs.krita.org, our new manual home! Exciting! People having trouble with preset files, photoshop files, krita files. Let's try to offer a helping hand, while guzzling orange juice, tea and coffee. Dmitry adds multi-node editing of layer properties, Wolthera fixes canvas rotation. A British VFX studio tries Krita and the artists are excited -- must not forget to follow up. Layer property shortcuts, drag&drop in tabbed mode and more get pushed by Dmitry. At work, there are meetings, and more meetings. The train home fortunately isn't delayed, because we've got our priest and his wife for dinner. After dinner, I go out for a beer with our priest. The barlady wonders what kind of a monk he is, is put right, and later on, after choir practice, our wives join us. No more coding tonight, I've had two beers.

Friday, January 15.

My last day on my current contract, but my agenda is full of meetings and things for next week. Next week is also the mini-sprint to prepare the next kickstarter. I'm guessing they'll want to keep me, we'll see on Monday. Breakfast. Forum posts. This guy is a bit aggressive, though no doubt well-meaning. Mail. Time to get started with the spriter plugin! Jouni fixes the build... I'm fixing OSX stuff left and right, and trying to figure out how to make builds faster, and get them tested. Maybe we can release on Sunday? It's only a pre-alpha, but still exciting! More forum posts. More work -- meetings, it's the end of our sprint, so sprint review, sprint retrospective, sprint planning...

Saturday, 16 January

I sleep until 9:30. Well, I wake up at seven, but then go back to snoozing and dreaming of the comic book scenario that's been whirling around my mind for a while now. It's going to be cool, if I can sit down and do something about it. Fried eggs and bacon. Coffee. Orange juice. Tea. Time to fire up some more builds. Things are falling together! Some preliminary tests by other OSX users show that my packages are okay, on recent hardware, with a range of OSX versions. Figuring out the Linux and Windows builds. Some more bug fixing. Jouni pushes an even more advanced image sequence importer. In the evening, guests, but I'm too tired to go down for the Vigil service, and my foot is aching again. But I did buy new, better shoes and some pullovers, because my old shoes and pullovers were completely gone and tattered. That should help...

Sunday, January 17th.

Getting up at 8:45. Time to check a bit of mail, forward an enquiry about a Secrets of Krita download to Irina. Forum posts. This guy sure posts a lot, but it's all bug reports. Liturgy, fortunately I can serve. Coffee afterwards, then upstairs and switch on the desktop, the windows laptop and the OSX laptop. Ah! The problem with Intel drivers and OpenGL is the same problem we've got on OSX: insufficient support for the Compatibility Profile of OpenGL, which breaks Qt's OpenGL QPainter engine. Good... There's a way forward. But first... RELEASE!!!

First Krita 3.0 pre-alpha!

More than a year in the making… We proudly present the first pre-alpha version of Krita 3.0 you can actually try to run! So what is Krita 3.0 pre-alpha? It’s the Qt5 port, with animation, instant preview, a handful of new features and portable packages for everyone! When we feel everything is nice and stable we’ll release Krita 3.1, and we’ll keep on releasing new versions as and when we finish Kickstarter stretch goals. So keep in mind: Krita 3.0 is experimental.

This “release” includes the latest version of the animation and the instant-preview performance work, plus there are a number of stretch goals from the Kickstarter already available, too. And it is a major upgrade of the core technology that Krita runs on: from Qt4 to Qt5. The latter wasn’t something that was a lot of fun, but it’s needed to keep Krita code healthy for the future! Whatever may come, we’re ready for it!


The port to Qt5 meant a complete rewrite of our tablet and display code, which, combined with animation and the instant preview means that Krita is really unstable right now! And that means that we need you to help us test!

Another little project was updating our build-systems for Windows, OSX, and Linux. We fully intend to make Krita 3.0 as supported on OSX as on Windows and Linux, and to that end, we got ourselves a faster Mac.

One of the cool things coming from this system is that for Krita 3.0 we can have portable packages for all three systems! We have AppImages for Linux, DMGs for OSX and a portable zip file for 64-bit Windows. Sorry, no 32-bit Windows builds yet…


Download Instructions


Windows

Download the zip file. Unzip the zip file where you want to put Krita.

Run the vcredist_x64.exe installer to install Microsoft’s Visual Studio runtime.

Then double-click the krita link.

Known issues on Windows:

  • The location of the configuration data and custom resources has changed, and the new location isn’t correct yet. The settings are in %APPDATA%\Local\kritarc and the resources in %APPDATA%\Roaming\Krita\krita\krita
  • If the entire window goes black, disable OpenGL for now. We’ve figured out the reason, now we only need to write a fix. It’s a bug in the Intel driver, but we know how to work around it now.


OSX

Download the DMG file and open it. Then drag the krita app bundle to the Applications folder, or any other location you might prefer. Double-click to start Krita.

Known issues on OSX:

  • We built Krita on El Capitan. The bundle is tested to work on a mid-2011 Mac Mini running Mavericks. It looks like you will need hardware that is capable of running El Capitan to run this build, but you do not have to have El Capitan itself; you can try running on an earlier version of OSX.
  • You will not see a brush outline cursor or any other tool that draws on the canvas, for instance the gradient tool. This is known, we’re working on it, it needs the same fix as the black screen you can get with some Intel drivers.


Linux

For the Linux builds we now have AppImages! These are completely distribution-independent. To use the AppImage, download it and make it executable in your terminal or using the file properties dialog of your file manager. Another change is that configuration and custom resources are now stored in the .config/krita.org/kritarc and .local/share/krita.org/ folders of the user home folder, instead of .kde or .kde4.

Known issues on Linux:

  • Your distribution needs to have Fuse enabled
  • On some distributions or installations, you can only run an AppImage as root because the Fuse system is locked down. Since an AppImage is a simple iso, you can still mount it as a loopback device and execute Krita directly using the AppRun executable in the top folder.

What’s Next?

More alpha builds! We’ll keep fixing bugs and implementing features, and keep making releases! Right now, we’re aiming for an update every week. Remember that Krita 3.0 will not include all of the features from the last Kickstarter. We still have a ways to go with adding the rest of the stretch goals, but with this release you’ll get…

Change Log

All the animation features from the Animation Beta

And more animation goodness:

Animation Drop Frame Support

We implemented a “Drop Frames” mode for Krita and made it the default option. Now you can switch on the “Drop Frames” mode in the Animation Docker to ensure your animation plays at the requested frame rate, even when the GPU cannot handle the amount of data to be shown.

The tooltip of the drop frames button shows the current frames per second (fps) and whether frames are being dropped.

The animation playback buttons become red if the frames are dropped. The tool tip shows the following values:

  •   Effective FPS – the visible speed of the clip
  •   Real FPS – how many real frames per second are shown (always smaller)
  •   Frames dropped – percentage of the frames dropped
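
The three tooltip values are simple ratios over the playback window. Here is a minimal Python sketch of the arithmetic, purely as an illustration of how the numbers relate; it is not Krita's actual implementation, and the function name and parameters are made up:

```python
def playback_stats(frames_scheduled, frames_rendered, elapsed_seconds):
    """Hypothetical drop-frame statistics, mirroring the tooltip values."""
    # Effective FPS: the visible speed of the clip (kept up by dropping frames).
    effective_fps = frames_scheduled / elapsed_seconds
    # Real FPS: how many frames were actually drawn per second.
    real_fps = frames_rendered / elapsed_seconds
    # Percentage of scheduled frames that were dropped.
    dropped_pct = 100.0 * (frames_scheduled - frames_rendered) / frames_scheduled
    return effective_fps, real_fps, dropped_pct

# 24 frames scheduled in one second, but the GPU only managed to draw 18:
effective, real, dropped = playback_stats(24, 18, 1.0)
```

In this example the clip still appears to play at 24 fps (effective), while only 18 frames per second are really shown and 25% of the frames are dropped.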

Other Animation Features

  • Allow switching frames using arrow keys (canvas input setting)
  • Add “Show in Timeline” action to the Layers Docker
  • Fix Duplicate Layer feature for animated layers
  • Let the current frame spin box have a higher limit, and let the user choose a start frame higher than 99
  • Fix crashes with cropped animations, the move tool and changed backgrounds.
  • Fix loading of the animation playback properties
  • Fix initialization of the offset of the frame when it is duplicated
  • Fix crash when loading a file with Onion Skins activated
  • Frames import: under file->import animation. This requires that you have removed the krita.rc in the resource folder (settings->manage resources->open resource folder) if you had a previous version of Krita installed. For now it only has a file browser that allows you to select multiple files, but we’ll enhance the UI in the future.

Tablet handling

  • We rewrote our tablet handling. If tablets didn’t work for you with 2.9 or even crashed, check out the 3.0 branch.
  • On Windows, we should better support display scaling
  • On Windows, support tablet screen rotation

Tool Improvements

  • Move increment keys for the move tool! This is still under development, but we are sure its basic form is appreciated.

Layer Improvements

  • We removed the ‘move in/out of group layer’ buttons. Moving a layer up and down will also pass it into the group.
  • Duplication of multiple layers
  • Shift+Delete shortcut to the Remove Layer action
  • Move Up/Down actions for multiple layer selections
  • Merge Down now works for multiple layers, and the right merged layer is selected afterwards
  • Ctrl+G when having multiple layers selected now groups them
  • Ctrl+Shift+G will now put the currently selected layer into a group with an alpha-inherited layer above it, not unlike Photoshop clipping masks.
  • Copy-paste layer actions. This is a little different from regular copy-paste, as the latter copies pixels onto the clipboard, while copy-paste layers copies full layers onto the clipboard
  • Implemented Select All/Visible/Locked layers actions. By default they have no shortcuts, but you can assign any to them
  • Mass editing of layers. Select multiple layers and press the layer properties to mass-edit or rename them
  • Layer properties and renaming now have hotkeys: F2 and F3

Shortcut Improvements

  • Our shortcut system is now ordered into groups.
  • You can now save and share custom versions of your shortcuts.
  • Krita now has Photoshop and PaintTool SAI compatible shortcuts included by default.
  • You can now switch the selection modifiers to use ctrl instead of alt. Useful if you are on Linux, or prefer ctrl to alt.
  • Reset Canvas Rotation had gotten lost in 2.9, it’s now back and visible under view->canvas

Other features

  • Add import/export of GBR and GIH brush files, generating from animated .kra files is still coming.
  • Show the editing time of a document in the document information dialog, useful for professional illustrators, speedpainters and other commission-takers. It detects when you haven’t performed actions for a while, and has a precision of +/- 60 seconds. You can reset it in the document info dialog, and of course by unzipping your .kra file and editing the metadata there.
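
The described behaviour (accumulate active time, stop counting during long pauses) can be sketched in a few lines of Python. The 60-second cutoff matches the stated precision, but the function shape and names are assumptions for illustration, not Krita's actual code:

```python
IDLE_CUTOFF = 60  # seconds without an action: assume the user walked away

def editing_time(action_timestamps):
    """Sum the gaps between consecutive actions, skipping idle stretches.

    Hypothetical sketch mirroring the behaviour described above.
    """
    total = 0
    for prev, cur in zip(action_timestamps, action_timestamps[1:]):
        gap = cur - prev
        if gap <= IDLE_CUTOFF:   # only count time while actively working
            total += gap
    return total

# Actions at t=0, 10, 20 s, then a 500 s coffee break, then two more actions:
worked = editing_time([0, 10, 20, 520, 530])  # the 500 s gap is not counted
```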

Minor changes

  • The popup palette now has anti-aliased edges (but it’s square on OSX…)
  • The simple color selector now has white on top and black on the bottom.
  • Updated ICC profiles.
  • Added a Smudge_water preset to make smudging easier.
  • Added printing of the current FPS on canvas when debugging is activated

Because our release is so fresh and fragile, we are, for once not going to ask you to report bugs. Instead, we have a


With that in mind, it shouldn’t be surprising that we don’t recommend using this version for production work! Right now, Krita is in the “may eat your cat”-stage… But it is sure fun to play with!


(Animations created by Achille, thanks!)

January 15, 2016

The New Laptop

So, some time ago, I was wondering a) what new laptop to get and b) what to do with Krita on OSX. As for the laptop, I felt I wanted something fast, something with at least 16GB of memory and a largish screen. Preferably with a good keyboard. As for OSX, I felt it might not be worth either mine or the Krita Foundation's money to plunk down the serious moolah that Apple is asking for their hardware... After all, how many people fall for Apple's glamourie, in the real world, after all? Especially now that the reality distortion field's progenitor is no longer among us.

Then I did an interview with CGWorld's Jim Thacker, about Krita. He's very much someone from the graphics software world, not the free software world. And he expressed his amazement at my dismissal of Apple. And then my bank account was getting seriously empty, and I had to take a temporary consulting gig to make sure I could continue paying my mortgage. And at the place I'm working now, and in the commuter train I'm travelling on, more than half of the people have Apple laptops.

I don't know why... And I guess they don't know why, either. Well, Windows has always been kind of ugly, especially Windows 7 and 10. Windows 8 I really liked, by the way -- if you have a touch screen, the interaction design is simple, effective and efficient. Everything is consistent, easy and pleasant. The few metro apps I used, I loved. But, well, Apple. Apparently more people than I was able to imagine think getting an Apple laptop is a good idea.

So, all together, I decided to go and get an Apple laptop, too. Let's try to make Krita 3.0's OS X port a first-class citizen! It can only expand our community and make our next fundraiser stronger!

So we got a 15" Macbook Pro Retina. Not the most expensive model, but it was still plenty expensive. More than a thousand cups of coffee. Here's what I think of it, after a month or so.

What follows now is part hardware, part software review. I guess I need to state up-front that while I'm a long-time free software person, I've never been an Apple hater any more than a Microsoft hater. Or lover. I've used or owned three Apple computers before this one.

The first was a Powerbook Pismo I got when Tryllian went broke and the artist department was disbanded. That thing had a great screen, a great keyboard (apart from the missing keys), a great shape and style, ran OS 9 and OS X equally well. I had wanted one of the tiBooks, but they were all broken. The Pismo served me for a long time as a writing machine, as a holiday games, music and photo machine, as a Krita development machine (it dual-booted to Debian). I loved it, and then a clumsy daughter tripped over the power cable, causing it to drop nearly half a meter, onto the floor. It sparked and smoked whenever I applied current to it afterwards, so I discarded it.

Sadness! But when I started working for Hyves, I got a first generation 17" macbook pro. Still a thoroughly respectable keyboard (apart from the missing keys), great screen, really fast. And using an Apple laptop was sort of inevitable, since at Hyves we were developing a cross-platform chat client for the Hyves social network. Hyves was the Dutch Facebook, by the way. It's dead now. So was the Macbook Pro, after a year. After a year in my backpack the screen started developing vertical green, red and blue lines. Actually... It was the second device Hyves got me, the first one was dead on arrival. Still, it had a decent keyboard.

At KO GmbH, one of our less well-considered ventures was to develop a WebODF-based app for the iPad. To that end, we got an iPad and a 2011 Mac Mini. The iPad is still with Jos, but after a while, building Krita for OSX also seemed a good idea, so I got the Mac Mini. It's got a nice amount of memory, 8GB, and the disk is exceedingly roomy, at 1TB. But... The disk is also really slow, and the Krita hack, build, deploy, test, hack again cycle could easily take an hour! Which is the reason I never really did much Krita on OSX hacking since the 2014 kickstarter, when I first ported Krita to OSX.

(The keyboard I use with the Mac Mini, by the way, is more than excellent. It's a WASD custom-built keyboard, and I bought it for using with the Thinkstation desktop machine. It's got a penguin key.)

So, time for the fourth Apple computer. My needs were:

  • Fast
  • Large screen
  • Good keyboard

Two out of three isn't bad... Except for a laptop that costs more than 2000 euros. I got a 15" Macbook Pro with a 256 GB SSD. For only about 500 euros more, I would have had a bigger disk, and the disk on this laptop is already fullish, what with two Linux and one Windows virtual machines and an OSX build tree or two.

So, what's good? The screen is really good, sharp, clear, excellent color, unless you turn the brightness down. It's not as clear and sharp as the Dell XPS 12 screen, but it doesn't have the Dell's ghosting problem. And if you turn the brightness down? The contrast goes down and the colors go down and it looks washed out.

Unfortunately, it isn't a touch screen, which frustrates me, because I have gotten used to direct interaction in the past couple of years. I also don't get the way Apple uses display scaling, but that'll come, no doubt. It seems to me that if you just blow up every pixel to four pixels the result isn't really sharper, but somehow it is, for text at least.

It's also fast. It builds Krita faster than my desktop workstation, which is really impressive. And useful, because apart from writing mail, handling bugs and irc, building Krita is pretty much what I do. Oh, and a little coding...

For the coding, I need a good keyboard, and that's where this laptop falls down.

The keyboard is ghastly. Honestly. The only reason anyone can think it's adequate is because they are too young to have used really good keyboards on laptops.

Not only does it still miss Home, End, PgUp, PgDn and Delete (the key Apple labels as Delete is Backspace), the keys have next to no travel. Yes, I get it, thin is the new black. But not when it impairs my productivity. The keys are little black squares of sharp-edged plastic with no shape. And they are also sort of wobbly.

As on Thinkpads, Fn and Control are reversed. Which makes the remarks you read now and then from people who've chosen to buy Apple instead of Lenovo because of the Fn key position rather silly.

Because of the lack of Home and End, and because of Apple's confusion about what those keys should do, it gets really tricky to navigate to the start or end of a line, something which anyone who codes does all the time. You need a different key combination in the shell, in vi, in Qt Creator, in TextMate, everywhere! I am a fast touch typist, but I am having to look under my left hand at the block of Fn, Control, Option and Command all the time to hit the right combination. I still cannot switch between the browser and the terminal and remember the shortcut to move to the next or previous tab, they are different! Honestly, I am not making this up.

The other thing that's below par, though probably related to the "really fast" bit, is the battery life. Two hours of coding and building will drain the battery down to about 40%. When building in a Windows VM and in OSX at the same time, the charger seems to have a hard time keeping up. I saw the battery drain while it was plugged in. No, I'm not asking you to believe me, I don't believe myself either.

There are other niggles about the hardware: the laptop gets really hot (again probably related to the "really fast"...), the edges are sharp, the power button is where my little finger expects the delete button. The aluminum case is really prone to scratches, even the plastic zipper of my laptop bag manages it.

But actually, Apple's design is one reason I didn't want to wait another six months for the updated model. Just imagine a Macbook Pro that is remodeled after the Macbook redesign, with keys with all of two-tenths of a millimeter of travel! Better live for a bit with an older processor.

Now for the other part of the deal...


The software. OSX. It's an operating system. Not a particularly brilliant one, but it does run applications. And it's got a GUI with a window manager. A particularly anemic window manager that needs extensions to tile windows left and right, but that's getting "modernized" by making it more like a tablet. In the El Capitan version, it really, really, really wants you to run your applications full-screen. Okay. It's a bit stupid that from version to version the meaning of the title bar button changes, apparently randomly, too.

What is also quite irritating is the bunch of crap extra applications that take up space and are completely useless to me: safari, garageband, imovie, pages, keynote, itunes and so on. I wonder if I can just trash them...

As a development platform, OSX sucks, too, with limited OpenGL support, huge crippling changes between versions and horrible developer documentation. Oh, and a bunch of proprietary languages and API's that nobody in their right mind would even consider learning, because they are bound to be deprecated just when they get established.


The short version: I still take the Dell XPS 12 with me on the train most days. It's slow, small, the keyboard is lacking, and it's still a more usable computer. If that isn't a damning indictment, I don't know what is.

The slightly longer version: the only valid reason to buy an Apple computer is because you need to write software for OSX or iOS, in other words, to provide the people who didn't have a valid reason to get an Apple with software.


I bought this laptop from a website with a .nl extension. The website was in Dutch. It's no doubt being maintained by people who live in the Netherlands and pay income tax in the Netherlands. After ordering it, it was manufactured in China, and shipped from Shanghai to Korea, from Korea to Kazakhstan, from Kazakhstan to Germany, from Germany to the Netherlands. And then to me. I paid VAT in the Netherlands. At no point in the buying of this piece of crap was Ireland involved.

Except that Ireland's where the bill was ostensibly coming from.

Tim, me boy, you sell a crap OS on a crap piece of hardware and you're cheating my country of the tax income it needs, which I and the other Dutch people then need to make up, just so you can sit on a pile of cash big enough to make all of Africa into an affluent continent. If you were an honest dealer, my tax burden would be lower and my laptop would, presumably, be better. And so would the world. Time to think different?

January 14, 2016

Snow hiking

[Akk on snowshoes crossing the Jemez East Fork]

It's been snowing quite a bit! Radical, and fun, for a California ex-pat. But it doesn't slow down the weekly hiking group I'm in. When the weather turns white, the group switches to cross-country skiing and snowshoeing.

A few weeks ago, I tried cross-country skiing for the first time. (I've downhill skied a handful of times, so I know how, more or less, but never got very good at it. Ski areas are way too far away and way too expensive in California.) It was fun, but I have a chronic rotator cuff problem, probably left over from an old motorcycle injury, and found my shoulder didn't deal well with skiing. Well, the skiing was probably fine. It was probably more the falling and trying to get back up again that it didn't like.

So for the past two weeks I've tried snowshoes instead. That went just fine. It doesn't take much learning: it's just like hiking, except a little bit harder work remembering not to step on your own big feet. "Bozo goes hiking!" Dave called it, but it isn't nearly as Bozo-esque as I thought it would be.

Last week we snowshoed from a campground out to the edge of Frijoles Canyon, in a snowstorm most of the way, and ice fog -- sounds harsh when described like that, but it was lovely, and we were plenty warm when we were moving. This week, we followed the prettiest trail in the area, the East Fork of the Jemez River. In summer, it's a vibrantly green meadow with the sparkling creek snaking through it. In winter, it turns into a green and sparkling white forest. Someone took a photo of me snowshoeing across one of the many log bridges spanning the East Fork. You can't see any hint of the river itself -- it's buried in snow.

But if you hike in far enough, there's a warm spring: we're on the edge of the Valles Caldera, an old supervolcano that still has plenty of low-level geothermal activity left. The river is warm enough here that it's still running even in midwinter ... and there was a dipper there. American dippers are little birds that dive into creeks and fly under the water in search of food. They're in constant motion, diving, re-emerging, bathing, shaking off, and this dipper went about its business fifteen feet from where we were standing watching it. Someone had told me that he saw two dippers at this spot yesterday, but we were happy to get such a good look at even one.

We had lunch in a sunny spot downstream from the dipper, then headed back to the trailhead. A lovely way to spend a winter day.

January 10, 2016

High DPI with FLTK

After switching to a notebook with a higher resolution monitor, I noticed that the FLTK-based ICC Examin application looked way too small. Having worked much with pixel-independent resolutions in QML over the last months, it was a pain to see the non-adapted FLTK GUI. I had the impression that, despite several years of very welcome advancement in monitor technology, some parts of the graphics stack did not move along and take advantage. So I became curious about how to solve high DPI support the hard way.

First of all, a bit of introduction to my environment, which is openSUSE Linux and KDE 5 with KF5 5.5.3. Xorg often uses a hardcoded default of 96 DPI, which is very unfortunate, or is it just a bug? Anyway, KDE follows X11, so the desktop on the high resolution monitor initially looks as bad as any application. All windows, icons and text are way too small to be usable. In KDE’s system settings, I had to force the font DPI and double it from 96 to 192. In the kscreen module I had to set a scale of 2.0 and then increase the width of the KDE task bars. Out-of-the-box usability is bad with so much inconsistent manual user intervention. In comparison, in the also-tested Unity DE, I only had to set a single display scaling factor of 2.0 and everything worked fine instantly: icons, fonts and window sizes. It would be good if desktop environments and Xorg understood screen resolution. In OS X 10.10, which I also tested, even different resolutions of multiple monitors are scaled correctly, so moving a window from a high DPI monitor to a traditional low resolution external monitor gives reasonable physical GUI rendering. Apple's OS X provides that good behaviour out of the box, without manual user intervention. It would be interesting to know how GNOME behaves with regard to display scaling.

Back to FLTK. As FLTK appears to define itself as pixel based, DPI detection or settings have no effect in FLTK. As an app developer I want to improve the user experience, so I first modified ICC Examin to render at a physically reasonable size from the start. First I looked at the FL::screen_dpi() function. It is only a helper for detecting DPI values, and in FLTK 1.3.3 it has hardcoded values of 96 DPI under Linux. I noticed that XRandR provides correct millimeter-based screen dimensions. Together with the screen resolution that XRandR also provides, it is easy to calculate the DPI values. ICC Examin rendered much better with the XRandR-based DPIs instead of FLTK's 96 DPI. But ICC Examin looked slightly too big: the 192 DPI set in KDE is lower than the 227 DPI that XRandR detected for my notebook's monitor. KDE provides its forced DPI setting to applications by setting Xft.dpi in XResources. That way all Xft-based applications should have the same basic font scaling; KDE and Mozilla apps do use Xft. So adding parsing of the Xlib XResources solved that for ICC Examin. The remaining programming work was to programmatically scale FLTK's default font size from 14 pixels with FL_NORMAL_SIZE = scale(14). Some more widget sizes, the FTGL font sizes for OpenGL, drawing line widths and graphics scaling were adjusted as needed. After all those changes, ICC Examin now takes advantage of high resolution rendering inside KDE. Testing under Windows and OS X must follow.
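
The DPI calculation from XRandR's physical dimensions, and the kind of scale() helper used for FL_NORMAL_SIZE, boil down to a couple of lines. Here is a Python sketch of the arithmetic only; the actual C++ code in ICC Examin differs, and the sample numbers are illustrative:

```python
def dpi_from_xrandr(pixels, millimeters):
    """DPI from XRandR's pixel and physical (mm) screen dimensions."""
    return pixels / (millimeters / 25.4)  # 25.4 mm per inch

def scale(size, dpi, base_dpi=96.0):
    """Scale a pixel size designed for 96 DPI to the detected DPI."""
    return round(size * dpi / base_dpi)

# A hypothetical 1920-pixel-wide panel that is 508 mm wide is exactly 96 DPI:
dpi = dpi_from_xrandr(1920, 508.0)
# At a forced 192 DPI, the 14-pixel default font would become 28 pixels:
font_size = scale(14, 192)
```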

The way to program high DPI support into an FLTK application was basically the same as in QML. However, Qt's QML takes more tasks off your hands by providing a relative font unit, much like CSS em sizes. For FLTK, I would like to see some relative-unit APIs in addition to the pixel-based ones. That would help write more elegant code and integrate with FLTK's GUI layout program fluid. Computing moves more and more toward W3C technology, and it would be helpful for FLTK to follow.

Support for "Airplane mode" keys

As we were working on audio jack notifications, and were wondering whether the type of notification we'd pop up in this case could be reused in other cases, I encountered a feature request that could now be solved easily with the rfkill D-Bus service we added to gnome-settings-daemon for the 3.10 release.

If you have keyboard buttons on your laptop to enable or disable Bluetooth, or Airplane mode, you can now use them. Note that the "UWB" toggle key will toggle the whole airplane mode mainly because no in-kernel driver uses it, and nobody remembers what UWB is.

Note that the labels and icons used are still subject to changes. In particular as you can see that the labels are too long for lower resolutions.

January 09, 2016

SVG Mesh Gradients, Heat Maps, and a Plea


Mesh gradients are great for creating life-like illustrations. Long asked for, they were one of the first things we added to the SVG 2 specification. I added mesh support to Inkscape (behind a compiler flag) for testing. There was one problem that was immediately apparent: as mesh gradients use bilinear interpolation between corner colors, there were non-smooth color transitions between the mesh patches leading to unwanted visual artifacts. See my blog post from a few years ago for more details about this problem.
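
To make the seam problem concrete, here is a minimal Python sketch of bilinear interpolation between the four corner colors of one patch (an illustration only, not Inkscape's code). The interpolated value is continuous across patch edges, but its derivative is not, which is exactly what the eye picks up as banding at patch boundaries:

```python
def bilerp(c00, c10, c01, c11, u, v):
    """Bilinearly interpolate a patch value at parametric position (u, v).

    c00..c11 are the four corner colors (one channel); u and v are in [0, 1].
    """
    top = c00 * (1 - u) + c10 * u        # blend along the top edge
    bottom = c01 * (1 - u) + c11 * u     # blend along the bottom edge
    return top * (1 - v) + bottom * v    # blend between the two edges

# The center of a patch with corners 0, 1, 0, 1 is the average:
center = bilerp(0.0, 1.0, 0.0, 1.0, 0.5, 0.5)
```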

A little bit of investigation showed that Adobe Illustrator and CorelDRAW use some sort of smoothing algorithm to get rid of the artifacts. We asked Adobe if they would give us the algorithm but they replied no. I did a little research and found that using bicubic interpolation would be a good solution. After adding a trial implementation in Inkscape and demonstrating it at last year’s Sydney SVG working group meeting, I got the group’s approval to add this to the SVG 2 specification.
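
A one-dimensional analogy shows why higher-order interpolation removes the seams: a cubic Hermite (smoothstep) weight has zero derivative at both ends of a patch, so adjacent patches meet without a kink. This sketch is only an analogy, not the actual bicubic algorithm added to the SVG 2 specification:

```python
def linear_blend(a, b, t):
    """Linear blend: continuous, but with a slope jump at patch boundaries."""
    return a + (b - a) * t

def smooth_blend(a, b, t):
    """Cubic Hermite blend: slope is zero at t=0 and t=1, so patches join smoothly."""
    w = t * t * (3.0 - 2.0 * t)  # smoothstep weight
    return a + (b - a) * w

# Both blends agree at the endpoints and midpoint; near the edges the smooth
# blend changes much more slowly, eliminating the visible kink.
edge_sample = smooth_blend(0.0, 1.0, 0.01)
```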

Heat Maps

The Wikipedia discussion of bicubic interpolation has some nice heat map illustrations showing the effects of bilinear vs. bicubic interpolation. I thought it would be interesting to duplicate these illustrations using SVG. Here is how I did it.

The first step is to create a mesh. It is rather easy to do this in Inkscape with the mesh tool. I created a raw mesh with a 3×3 array of patches to match the one in the Wikipedia article. The tricky part is then to set each patch corner to a gray level based on the data. I estimated the data values from looking at the color chart in the Wikipedia illustrations. I then converted each gray level into an appropriate RGB hex color value. Here is the result with bilinear interpolation:

Heat map using a mesh gradient without smoothing.

A mesh gradient with a 3×3 array of patches using bilinear interpolation (i.e. without smoothing). The gray levels at the patch corners represent the input data.

One can clearly see the visual artifacts at the mesh boundaries (enhanced by Mach Banding). Switching to bicubic interpolation produces a smoother mesh as seen next:

Heat map using a mesh gradient with smoothing.

A mesh gradient using bicubic interpolation (i.e. with smoothing). The gray levels of the patch corners represent the input data.

The next step is to transform the gray levels into a color scale. For this I turned to SVG filters. I used a filter consisting of a single Component Transfer filter primitive. I created a table to map gray level to color by inverting the estimation I used to convert the colors into gray values. Here are the results for both bilinear and bicubic interpolation:
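
An feComponentTransfer table transfer function performs a piecewise-linear lookup per color channel. Here is a rough Python analogue of what the filter does for one channel; the table values are hypothetical, and the real work of course happens inside the SVG renderer:

```python
def transfer_table(value, table):
    """Piecewise-linear lookup, like SVG feComponentTransfer type="table".

    value is a channel level in [0, 1]; table holds n+1 output values.
    """
    n = len(table) - 1
    k = min(int(value * n), n - 1)       # which segment the value falls in
    frac = value * n - k                 # position within that segment
    return table[k] + (table[k + 1] - table[k]) * frac

# Hypothetical ramp for one channel: stays at 0, then rises to 1.
red_table = [0.0, 0.0, 1.0, 1.0]
mid = transfer_table(0.5, red_table)
```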

Heat map using a mesh gradient without smoothing.

A mesh gradient using bilinear interpolation (i.e. without smoothing) with a simple SVG filter to convert gray levels to color values.

Heat map using a mesh gradient with smoothing.

A mesh gradient using bicubic interpolation (i.e. with smoothing) with a simple SVG filter to convert gray levels to color values.

The match between these images (generated with Inkscape) and those in the Wikipedia article is pretty good.


Convincing the SVG working group to add meshes to SVG and then to add the auto-smoothing option would not have been possible without attending the SVG working group meetings in person. It is much easier to lobby for things when one can provide live demonstrations. At these meetings I’ve been able to get auto-flowed text, text on a shape, font-features support, hatch fills, the arcs and clipped miter line joins, etc. into the SVG 2 specification. The Inkscape board has been allocating funds for my travel (thanks Inkscape donors!). To enable travel to future working group meetings, a designated fund for SVG specification work has been set up. The SVG working group holds about three face-to-face meetings each year (in addition to weekly teleconferences). If you are interested in supporting SVG development from an end-user perspective, please consider donating. More information on the work I’ve done as well as a donations link can be found at the Inkscape website on the SVG Standards Work page. (Donations are tax deductible in the US.)

gom is now usable from JavaScript/gjs

Prodded by me while I snoozed on his sofa and with his cat warming me up, a day before the Content Applications hackfest, Florian Müllner started working on fixing a long-standing gjs bug that made it impossible to use gom in GNOME/JavaScript applications. The result of that initial research came a few days later, and is now part of the latest gjs release.

This also fixes using GtkBuilder and json-glib when the libraries create new objects for the benefit of the JavaScript code.

We can finally use gom to store user data in applications like Bolso. Thanks Florian!

January 08, 2016

Libre Graphics Meeting London

Libre Graphics Meeting London

Join us in London for a PIXLS meet-up!

We’re heading to London!

LGM/London Logo

I missed LGM last year in Toronto (having a baby - well, my wife was). I am going to be there this year for LGM/London!

Help Support Us

I don’t ever do this normally, but you’ve got to start somewhere, right?

It’s my long-term desire to be able to hold a PIXLS meetup/event every year where the community can get together. Where we can hold workshops, photowalks, and generally share knowledge and information. For free, for anyone.

For now though, we need support. LGM is a great opportunity for us to meet with many different projects usually having representatives there.

Donations will help us to offset travel costs to attend LGM as well as a pre-LGM meetup we are holding (more below). Anything further will go to creating new content and to cover hosting costs for the site.


I have started a Pledgie campaign to help ease the solicitation of donations:

Here’s the fancy little widget they make available:

Click here to lend your support to: PIXLS.US at Libre Graphics Meeting 2016 and make a donation at pledgie.com !

If you want to help by adding this button places, here’s the code to do it:

<a href='https://pledgie.com/campaigns/30905'>
<img alt='Click here to lend your support to: PIXLS.US at Libre Graphics Meeting 2016 and make a donation at pledgie.com !' src='https://pledgie.com/campaigns/30905.png?skin_name=chrome' border='0' style='width: initial;'></a>

Feel free to use it wherever you think it might help. :)


You can also donate directly via PayPal if you want:

Lend a hand via PayPal


I realize that not everyone will be able to donate funds. No sweat! If you’d still like to help out then perhaps you can help us raise awareness for the campaign? The more folks that know about it the better!

Re-tweeting, blogging, linking, yelling on a street corner all help to raise awareness of what we are doing here. Heck, just invite folks to come read and participate in the community. Let’s help even more people learn about free software!

Come Join Us

Of course, even better if you are able to make your way to London and actually join us at the Libre Graphics Meeting 2016!

The event will be April 15th — 18th, hosted by Westminster School of Media Arts and Design, University of Westminster at the Harrow Campus (red marker on the map).

The little checkered flag on the map is for something really neat: a PIXLS meetup!


I am going to arrive a day early so that we can have a gathering of PIXLS community folks and anyone else who wants to join us for some photographic fun!

Thanks to the local organizers in London (yay Lara!), we have facilities for us to use. We will be meeting on Thursday, April 14th at the Furtherfield Commons. The facilities will be available from 1000 – 1800 for us to use.

Furtherfield Commons
Finsbury Gate – Finsbury Park
Finsbury Park, London, N4 2NQ

As near as I can tell, here’s a street view of the Finsbury Gate:

I believe the Commons building is just inside this gate, and on the left.

In 2014 I held a photowalk with LGM attendees in Leipzig the day before the event that was great fun. Let’s expand the idea and do even more!

Nikolaikirche, Leipzig, LGM 2014 Nikolaikirche, Leipzig, from the 2014 LGM photowalk.
(That’s houz in the bottom right)

Here’s a Flickr album of my images from LGM2014 in Leipzig:


This year I plan on bringing a model along to shoot while we are out and about (my friend Mairi if she’s available - or a local model if not). I will also be doing a photowalk again, either in the morning or afternoon.

I am also looking for folks from the community to suggest holding their own photoshoots or workshops, so please step forward and let me know if you’d be interested in doing something! The facilities have bench seating for approximately 20 people, a big desk, and a projector as well.

Three things that I personally will be doing are (in no particular order):

  • Natural + flash portraits and model shooting workshop.
  • Photowalk around the park + surrounding environs.
  • Portraits + architectural photos for Furtherfield (the hosts).

I am hoping to possibly record some of these workshops and interactions for posterity and others that might not be able to make it to London. It might be fun to record some shoots for the community to be able to use!

I am also 100% open to suggestions for content that you, the community, might be interested in seeing. If you have something you’d like me to try (and record), please let me know!

Mairi Troisieme Hopefully Mairi will be able to make it to London to model for us!

Stellarium 0.14.2

The Stellarium development team, after a month of development, is proud to announce the second bugfix release of Stellarium in the 0.14.x series: version 0.14.2. This version contains a few bug fixes, backported from version 0.15.0.

A huge thanks to our community whose contributions help to make Stellarium better!

List of changes between version 0.14.1 and 0.14.2:
- Reduce planet brightness in daylight (LP: #1503248)
- Fixed perspective mode with offset viewport in scenery3d (LP: #1509728)
- Fixed wrong altitudes for some locations (LP: #1530759)
- Fixed some skyculture links
- Fixed editing some shortcut keys (LP: #1530567)
- Fixed drawing reticle for telescope (LP: #1526348)
- Refactoring coloring markers of the DSO
- Removed info about Moon phases (avoid inconsistency for strings).
- Updated default config options
- Updated icons for View dialog
- Updated Stellarium DSO Catalog
- Added list of dwarf galaxies (Search Tool)
- Added improvements in Scenery 3D plugin

January 07, 2016

The importance of Keywords for the software center

In the software center we allow the user to search using case-insensitive keywords, for instance searching for ‘excel’ could match LibreOffice Calc or many other free software spreadsheet applications. At the moment we use the translated keywords set in the desktop file, any extra <keyword> entries in the AppData file, and then fall back to generating tokens from the name, summary and description using a heuristic. This heuristic works most of the time, but a human can often do much better when we know what the most important words are. I’ve started emailing maintainers who do not have any keywords in their application (using the <update_contact> details in the AppData file), but figured I should also write something here.

So, what do I want you to do? If you have no existing keywords, I would like you to add some keywords in the desktop file or the AppData file. If you want the keywords to be used by GNOME Shell as well (which you probably do), the best place to put any search terms is in the keywords section of the desktop file. This can also be marked as translatable so non-English users can search in their own language. This would look something like Keywords=3D;printer; (remember the trailing semicolon!)
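As a sketch, a desktop file using these keywords might look like this (the application name here is made up for illustration), with a translated variant using the standard locale-suffix syntax from the Desktop Entry specification:

```
[Desktop Entry]
Type=Application
Name=Example Printer Tool
Keywords=3D;printer;
Keywords[de]=3D;Drucker;
```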

The alternative is to put the keywords in the AppData file so that they are only used by the software center and not the desktop shell. You can of course combine putting keywords in both places. The AppData keywords can also be translated, and would look like this:
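A minimal sketch of that AppData markup, assuming the standard AppStream <keywords>/<keyword> elements (the German entry is an illustrative translation):

```
<keywords>
  <keyword>3D</keyword>
  <keyword>printer</keyword>
  <keyword xml:lang="de">Drucker</keyword>
</keywords>
```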


Of course, you don’t have to do a release with this fix straight away, and if you have a stable branch it would be a good thing to backport this as well if it does not add translated strings or you have no string freeze policy. Nothing bad will happen if you ignore this request, but please be aware that matches from keywords are ordered higher in the search results than other partial matches from the name or summary. You also don’t have to add keywords that are the same as the application name or package name, as these are automatically added as case insensitive search tokens. If you don’t have any keywords then your application will still be visible in the various software centers, but it may be harder to find.

Comments welcome.

How to learn architecture

I've heard a couple of people asking this recently, and I like the idea of trying to learn architecture outside of the conventional way, so here are a couple of ideas. This is all just personal opinion, okay? I'm also illustrating this article with works of mine, in no particular order or meaning, just to...

January 06, 2016

Speaking at SCALE 14x

I'm working on my GIMP talk for SCALE 14x, the Southern California Linux Expo in Pasadena.

[GIMP] My talk is at 11:30 on Saturday, January 23: Stupid GIMP tricks (and smart ones, too).

I'm sure anyone reading my blog knows that GIMP is the GNU Image Manipulation Program, the free open-source photo and image editing program which just celebrated its 20th birthday last month. I'll be covering an assortment of tips and tricks for beginning and intermediate GIMP users, and I'll also give a quick preview of some new and cool features that will be coming in the next GIMP release, 2.10.

I haven't finished assembling the final talk yet -- if you have any suggestions for things you'd love to see in a GIMP talk, let me know. No guarantees, but if I get any requests I'll try to accommodate them.

Come to SCALE! I've spoken at SCALE several times in the past, and it's a great conference -- plenty of meaty technical talks, but it's also the most newbie-friendly conference I've been to, with talks spanning the spectrum from introductions to setting up Linux or introductory Python programming all the way to kernel configuration and embedded boot systems. This year, there's also an extensive "Ubucon" for Ubuntu users, including a keynote by Mark Shuttleworth. And speaking of keynotes, the main conference has great ones: Cory Doctorow on Friday and Sarah Sharp on Sunday, with Saturday's keynote yet to be announced.

In the past, SCALE has been held at hotels near LAX, which is about the ugliest possible part of LA. I'm excited that the conference is moving to Pasadena this year: Pasadena is a much more congenial place to be, prettier, closer to good restaurants, and it's even close to public transportation.

And best of all, SCALE is fairly inexpensive compared to most conferences. Even more so if you use the promo-code SPEAK for a discount when registering.


January 04, 2016

Interview with SchwarzerAlptraum


Could you tell us something about yourself?

I moved to Germany from Canada after a thought experiment involving the German language. I work as a software developer now.

Do you paint professionally, as a hobby artist, or both?

I would consider myself to be primarily a hobby artist, but I have worked on a few projects in a professional capacity. Those are projects I’ve chosen to be involved with though.

What genre(s) do you work in?

I tend to draw a lot of fantasy-horror subjects, and would probably describe my work as something like semirealism. My work has probably also been influenced by manga/anime from an earlier interest of mine.

Whose work inspires you most — who are your role models as an artist?

It’s difficult to point to a particular artist or work that really inspires me–I get my inspiration from various sources. Sometimes, it will just be some picture I see on Tumblr or random stuff my other artist friends reblog or draw themselves. I guess I am just content to have my own unique art style not dependent on other artists, and since realism/semirealism is all based off of reality and science, I just rely on real life references if I get stuck on a drawing.

How and when did you get to try digital painting for the first time?

When I was still in high school, after I got my first digital tablet as a gift. I switched between using Photoshop and Corel Painter, and finally settled on Photoshop for a long time.

What makes you choose digital over traditional painting?

I have to honestly admit I have rather limited exposure to traditional painting, though I have done pencil sketching before. I guess I just gravitate towards digital painting in general though because it appears very convenient. You don’t need a lot of space setting up, and you don’t have to prepare for a lot of mess. Then you can do a lot of convenient stuff like undo, resize, move, etc. that you can’t through traditional mediums. However, I do think that the traditional painting experience is at least worth it for workflow reasons. I regret not having spent at least some time learning it the proper way.

How did you find out about Krita?

After Adobe announced their Cloud-only offering of Photoshop, I became disillusioned with their tools and found out about Krita, ironically enough, through one of the threads discussing alternatives on the Adobe forums.

What was your first impression?

I wasn’t honestly expecting much to begin with. Maybe my previous experience with Gimp colored my perception there. I would say by the time I considered switching, I already cleaned up the workflow I used to paint stuff with on Photoshop, so I didn’t require a lot of features for painting, and I think that has helped. If Krita even just supported layers, transparency masks and a quick and convenient way to use the eyedropper tool, I would be happy. I didn’t need a lot of fancy brushes, filters or all the other bells and whistles that come with programs like Photoshop. At the time I tried Krita, it had all those, so I was fairly happy with it. It wasn’t entirely intuitive where everything I wanted to use or try was though, and there were a few small adjustments I had to make going from Photoshop to Krita, so I had some help from other people to explain or point out to me where features are and how to use them.

What do you love about Krita?

There’s a lot to love about Krita. I was pleasantly surprised at the sheer number of blending modes available in Krita. My technique generally involves layering colors on top of a grayscale image, so having more blending modes available means more choice. I also liked that there was an autosave feature. I rarely had problems with crashing on Photoshop, so I didn’t think too much about crashes then, and didn’t have to save often. But earlier versions of Krita would sometimes crash, and having an autosaved file was very useful, especially if you are not in the habit of saving frequently. I like that there are keyboard shortcuts to grab colors off of the current layer or the entire picture and to instantly resize your brush. And of course, I absolutely love that Krita is free software. I would never have considered even using it if it wasn’t free software.

What do you think needs improvement in Krita? Is there anything that really annoys you?

Well right now, I think just the performance on larger canvases and brushes could use some work, but they have a Kickstarter for that, and have already fixed some of those issues. I guess better Mac support would be nice, since that’s the computer I’m currently running Krita on. Although I do understand the lack of support for the Mac OSX platform is due to lack of developers working on that platform. Since I’m one of the few people interested in Krita who owns a Mac and can at least program, I try to help out there when I can.

What sets Krita apart from the other tools that you use?

That it’s specifically geared towards digital painting. It contains a subset of the features that are available on Photoshop, and some of those features are not as well implemented. But that’s because Krita doesn’t try to be an all purpose generic image manipulation program like Gimp. I’m not disappointed that some of these features weren’t implemented or aren’t as polished as they are in other programs. Because they were never the focus of Krita. But on the upside, there are other features that Krita has that other graphics software don’t have because of this focus. I also find that besides being free software, Krita offers many many options. As I mentioned before, it has a very large set of blending modes. It also incorporates G’mic, which has a lot more filters than Photoshop does. You have a lot of freedom and flexibility in doing your art, and there are many ways to do that.

If you had to pick one favourite of all your work done in Krita so far, what would it be, and why?

“Höllentat”, which I did recently after it was recommended I give the 16-bit mode a try for better blending. The result looks great, and I think I will use it for future pictures as well. I spent more time on the background details than I usually do.

What techniques and brushes did you use in it?

I used the basic brushes that came with Krita. I didn’t need too many brushes as I paint the textures manually unless they’re too small for that. A hard brush and a soft brush for blending are generally all I need for stuff like this. I used alpha inheritance to split the foreground objects from the background objects. It allows me to paint and add lots of coloring and adjustment layers on top of the foreground objects without worrying that it will spill over into the background.

Where can people see more of your work?

I post sometimes on Deviantart: angelus-tenebrae.deviantart.com

Anything else you’d like to share?

I get very excited when people get together to make free software like this. I am impressed with the positivity that this community generates, and above all, the great work on the software that Krita has developed into. I do hope that more people will continue to learn about, try, promote and/or consider using Krita.

January 03, 2016

January drawing challenge

We’re starting the new year with a new drawing challenge! Here it is on the forum. The topic is Horizon(s).

You can enter until January 24, UTC mid-day. See the forum for more details.

Let’s get drawing!


Blender branches to watch in 2016

Often, work on Blender happens in ‘branches’: outside of the sources that make up the releases.

A branch is usually added on blender.org for approved projects, when development work is still too experimental, or when designs still need to be tested and proven. Branches also provide developers with a quieter place to work, without too many users breathing down their necks daily.


Several new features started as branches and went to ‘master’ (the release sources). Well-known examples are FreeStyle, OpenSubdiv, Multiview and the Dependency Graph.

There are branches that never really made it, or that are still waiting to be completed or approved. And there are exciting new branches!

In this article I will outline interesting branches you might hear more of in 2016.


Fracture Modifier – Martin Felke

The fracture modifier allows you to break mesh models into pieces (shards) and animate them using rigid body physics. This branch is a rare example of “code that works” that wasn’t approved, for technical design reasons: Blender first needs a thorough recode of essential parts this modifier requires.
Luckily Martin doesn’t give up; he keeps working on it and reminds us every other week not to forget about his work!

Documentation and more info.

Object Nodes – Lukas Toenne

Blender’s animation system has had a number of great improvements in the past years. With the new dependency graph (basically the code that ensures the fastest possible object/data updates) we can also redesign how animation, and scenes in general, are defined.

This includes rethinking modifiers, particles, hair, physics caches, duplicators, constraints, and so on. Using nodes, of course!
This branch is Lukas’ experimentation place for testing Blender’s object updates and relations using node trees.

Custom Manipulators – Julian Eisel

This branch tests new input methods for viewports in Blender. We need more ways to make the 2d/3d representation useful for tools.

The “custom manipulator” or “widgets” allow artists and tool designers to define elements in the views that directly connect to tools. In the example above, a group of faces controls the bones that deform a face.

Documentation and more info.

OpenVDB – Kévin Dietrich

OpenVDB is a toolkit from DreamWorks for managing volumetric data and rendering. Many 3D tools (Maya, Houdini, Modo) have already adopted it, making it a reliable industry standard.

The plan is that Blender 2.77 will get early support for OpenVDB data (caches), with full OpenVDB support for rendering to follow in later releases.

Documentation and more info.

Asset Manager – Bastien Montagne

The Asset Manager project will expand Blender’s library system with end-user-friendly asset management tools and interfaces. The concept is to use “Asset Engines”; Python add-ons communicating with Blender through an API in a similar way to how the “Render Engine API” works for external renderers.

More information on the code blog.

Realtime video in BGE – Benoit Bolsee

This branch (which started as the Decklink branch) contains a series of developments aimed at mixing BGE scenes with a live 3D video stream at the lowest possible latency.

Several solutions have been tested, which explains the variety of features present in this branch. All of them, however, can be used in other types of applications, such as realtime keying and capture.

More information in wiki.


Gooseberry – Blender Institute

Several developers worked for 18 months on making the short film “Cosmos Laundromat” possible. Much of this work has made it into a release, but for other parts it was decided to keep them in the branch for now.

Still not in ‘master’ are (for example):
– hair editing and simulation improvements
– Alembic cache system to override linked data.

Antony Riakiotakis’ end report (blog post)

Lukas Toenne’s extensive end report (wiki)

Hair sim and caching video

December 31, 2015

Weather musing, and poor insulation

It's lovely and sunny today. I was just out on the patio working on some outdoor projects; I was wearing a sweatshirt, but no jacket or hat, and the temperature seemed perfect.

Then I came inside to write about our snowstorm of a few days ago, and looked up the weather. NOAA reports it's 23°F at Los Alamos airport, last reading half an hour ago. Our notoriously inaccurate (like every one we've tried) outdoor digital thermometer says it's 26°F.

Weather is crazily different here. In California, we were shivering and miserable when the temperature dropped below 60°F. We've speculated a lot on why it's so different here. The biggest difference is probably that it's usually sunny here. In the bay area, if the temperature is below 60°F it's probably because it's overcast. Direct sun makes a huge difference, especially the sun up here at 6500-7500' elevation. (It feels plenty cold at 26°F in the shade.) The thin, dry air is probably another factor, or two other factors: it's not clear what's more important, thin, dry, or both.

We did a lot of weather research when we were choosing a place to move. We thought we'd have trouble with snowy winters, and would probably want to take vacations in winter to travel to warmer climes. Turns out we didn't know anything. When we were house-hunting, we went for a hike on a 17° day, and with our normal jackets and gloves we were fine. 26° is lovely here if you're in the sun, and the rare 90° summer day, so oppressive in the Bay Area, is still fairly pleasant if you can find some shade.

But back to that storm: a few days ago, we had a snowstorm combined with killer blustery winds. The wind direction was whipping around, coming from unexpected directions -- we never get north winds here -- and it taught us some things about the new house that we hadn't realized in the nearly two years we've lived here.

[Snow coming under the bedroom door] For example, the bedroom was cold. I mean really cold. The windows on the north wall were making all kinds of funny rattling noises -- turned out some of them had leaks around their frames. There's a door on the north wall, too, that leads out onto a deck, and the area around that was pretty cold too, though I thought a lot of that was leakage through the air conditioner (which had had a cover over it, but the cover had already blown away in the winds). We put some towels around the base of the door and windows.

Thank goodness for lots of blankets and down comforters -- I was warm enough overnight, except for cold hands while reading in bed. In the morning, we pulled the towel away from the door, and discovered a small snowdrift inside the bedroom.

We knew the way that door was hung was fairly hopeless -- we've been trying to arrange for a replacement, but in New Mexico everything happens mañana -- but snowdrifts inside the room are a little extreme.

We've added some extra weatherstripping for now, and with any luck we'll get a better-hung door before the next rare north-wind snowstorm. Meanwhile, I'm enjoying today's sunshine while watching the snow melt in the yard.

December 28, 2015

Extlinux on Debian Jessie

Debian "Sid" (unstable) stopped working on my Thinkpad X201 as of the last upgrade -- it's dropping mouse and keyboard events. With any luck that'll get straightened out soon -- I hear I'm not the only one having USB problems with recent Sid updates. But meanwhile, fortunately, I keep a couple of spare root partitions so I can try out different Linux distros. So I decided to switch to the current Debian stable version, "Jessie".

The mouse and keyboard worked fine there. Except it turned out I had never fully upgraded that partition to Jessie; it was still on "Wheezy". So, with much trepidation, I attempted an apt-get update; apt-get dist-upgrade

After an interminable wait for everything to download, though, I was faced with a blue screen asking this:

No bootloader integration code anymore.
The extlinux package does not ship bootloader integration anymore.
If you are upgrading to this version of EXTLINUX your system will not boot any longer if EXTLINUX was the only configured bootloader.
Please install GRUB.

No -- it's not okay! I have good reasons for not using grub2 -- besides which, extlinux on this exact machine has been working fine for years under Debian Sid. If it worked on Wheezy and works on Sid, why wouldn't it work on the version in between, Jessie?

And what does it mean not to ship "bootloader integration", anyway? That term is completely unclear, and googling was no help. There have been various Debian bugs filed but of course, no explanation from the developers for exactly what does and doesn't work.

My best guess is that what Debian means by "bootloader integration" is that there's a script that looks at /boot/extlinux/extlinux.conf, figures out which stanza corresponds to the current system, figures out whether there's a new kernel being installed that's different from the one in extlinux.conf, and updates the appropriate kernel and initrd lines to point to the new kernel.
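If that guess is right, the core of such a hook could be sketched in a few lines of shell. This is purely illustrative: the function name and the rewrite-everything approach are mine, not whatever Debian actually shipped.

```shell
#!/bin/sh
# Hypothetical sketch of what the dropped "bootloader integration"
# might have done: point an extlinux.conf stanza's kernel and
# initrd references at a newly installed kernel version.
update_stanza() {
    conf=$1       # path to extlinux.conf
    version=$2    # e.g. 3.14-2-686-pae
    # Rewrite every vmlinuz-<ver> and initrd.img-<ver> reference,
    # assuming Debian's /boot naming conventions.
    sed -i \
        -e "s|vmlinuz-[^ ]*|vmlinuz-${version}|" \
        -e "s|initrd\.img-[^ ]*|initrd.img-${version}|" \
        "$conf"
}
```

A real hook would also have to pick out the stanza for the running system instead of blindly rewriting every kernel reference in the file, which is presumably part of what made the integration code worth shipping in the first place.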

If so, that's something I can do myself easily enough. But what if there's more to it? What would actually happen if I upgraded the extlinux package?

Of course, there's zero documentation on this. I found plenty of questions from people who had hit this warning, but most were from newbies who had no idea what extlinux was or why their systems were using it, and they were advised to install grub. I only found one hit from someone who was intentionally using extlinux. That person aborted the install, held back the package so the potentially nonbooting new version of extlinux wouldn't be installed, then updated extlinux.conf by hand, and apparently that worked fine.

It sounded like a reasonable bet. So here's what I did (as root, of course):

  • Open another terminal window and run ps aux | grep apt to find the apt-get dist-upgrade process and kill it. (sudo pkill apt-get is probably an easier approach.) Ensure that apt has exited and there's a shell prompt in the window where the scary blue extlinux warning was.
  • echo "extlinux hold" | dpkg --set-selections
  • apt-get dist-upgrade and wait forever for all the packages to install
  • aptitude search linux-image | grep '^i' to find out what kernel versions are installed. Pick one. I picked 3.14-2-686-pae because that happened to be the same kernel I was already running, from Sid.
  • ls -l /boot and make sure that kernel is there, along with an initrd.img of the same version.
  • Edit /boot/extlinux/extlinux.conf and find the stanza for the Jessie boot. Edit the kernel and append initrd lines to use the right kernel version.
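For reference, the edited Jessie stanza might then look something like this (the label, menu text and root device are illustrative, not copied from my system):

```
label Jessie
    menu label Debian Jessie
    kernel /boot/vmlinuz-3.14-2-686-pae
    append initrd=/boot/initrd.img-3.14-2-686-pae root=/dev/sda2 ro
```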

It worked fine. I booted into jessie with the kernel I had specified. And hooray -- my keyboard and mouse work, so I can continue to use my system until Sid becomes usable again.

December 27, 2015

Top 22 developers 2015

Let’s salute and applaud the most active Blender developers of the past year again! Obviously a commit total doesn’t mean much, and it doesn’t even include work on the branches*. Nevertheless, it’s a great overview to get to know some of the people who make Blender possible.


Names are listed in increasing commit count order, with commit total between parentheses.

(* A top list of exciting 2016 branches is coming soon!)

Joerg Mueller (19)

Joerg (Austria) is our sound developer – very active in keeping his Audaspace (“Outer Space”) library working in Blender. Aside from the game engine, sound is essential for animations (lip sync) and of course the video sequence editor.

For the past two years, this library has also been available stand-alone for other projects that need sound playback or editing.

Howard Trickey (19)

Howard (USA) started out helping with the BMesh project, to modernize mesh editing and add ngon support. He worked on the Knife tool and edge loops.

In the course of 2012 he became the main owner of Blender’s beveling tool and modifier. In 2015 he mostly worked on maintaining the bevel code and fixing reported bugs. His favorite fix was preserving UV layouts while beveling. It sounds simple, but it was a very complex coding job!

Nicholas Bishop (22)

Nicholas (USA) is the developer who brought us sculpting and multires. His biggest recent contribution was dynamic topology sculpting.

In the past year Nicholas mostly worked on small fixes and code cleanup. In a special branch he worked on PTex support for Cycles.

Gaia Clary (23)

Gaia (Germany) is the maintainer of COLLADA in Blender – using the OpenCollada library. She is also active in the Second Life community, where a lot of Blender users depend on her efforts to maintain this 3d file format for import/export.

Gaia is very interested in usability and in making Blender more accessible for occasional users. While digging into (unmaintained) parts of Blender, she shows a special talent for opening cans of worms, finding the dirt others have been sweeping under the carpet!

Martijn Berger (34)

Martijn (Netherlands) became active in 2014. He helped keep the build system for the Windows platform working for us, and in the past year he also took over the role of platform maintainer for OS X.

Aside from this, Martijn is closely involved with Cycles and OpenCL development.

Tamito Kajiyama (35)

Tamito (Japan) is the maintainer of FreeStyle, the stroke render engine allowing cartoon rendering.

His FreeStyle work in 2015 covered memory consumption optimization, rendering speedups and bug fixing.

Inês Almeida (38)

Inês (Portugal) is one of the Blender Game Engine maintainers. In 2015 she has been cleaning up and fixing Python scripts for the BGE.

The major new contribution was Python API code to manage (custom) Icon previews in Blender. Python scripters can make much nicer preview UIs and icon lists for Add-ons this way.

Thomas Szepe (39)

Thomas (Austria) is a new Blender Game Engine team member. 

His main project in 2015 was allowing better control over the type and intensity of fog in the BGE, which can also be animated using Python.

Jorge Bernal (42)

Jorge (Spain) is also one of the new BGE team members. His activity in 2015 was mostly maintenance and bug fixing.

A notable new feature was a new hysteresis offset to improve LOD level transitions and avoid popping.

Mike Erwin (53)

In November of this year, Mike Erwin (USA) kickstarted the decision to move Blender’s drawing code (and viewports) to a minimum of OpenGL 2.1.

This involves a lot of recoding of ancient code in Blender – some of it going all the way back to the ’90s.

Sybren Stüvel (57)

Sybren (Netherlands) is another new team member working (mostly) on the Blender Game Engine. 

His main contributions were fixes and feature improvements related to the BGE animation system, because he uses it for his PhD research on crowd simulation. (Image from his paper, not made in Blender.)

Dalai Felinto (87)

Dalai (Brazil) finished and committed his Multiview project in 2015 – after two years of hard work. Thanks to this project, Blender can now display and render stereo ‘3D’. Blender can also be used for (stereo) dome rendering, and it is even ready for the next VR hype!

Dalai also worked on BGE features (walk mode) and Cycles baking.

Brecht van Lommel (91)

Brecht (Belgium) is the original creator of the Cycles render engine. In 2015 he was very active in our bug tracker, fixing a lot of bugs, and did essential maintenance and code cleanup.

Recently he joined the OpenGL viewport team, helping to modernize Blender’s drawing.

Tristan Porteries (97)

Tristan (France) also joined the team this year, to work on the Blender Game Engine. He has proven to be a passionate and very capable bug fixer, helping to make the BGE much more usable.

His main contribution was improving collision raycast masking – enabling much more precise control over what rays hit and what they don’t.
Another project (pictured) is drawing debug info for tweaking lamp shadows.

Thomas Dinges (104)

Thomas (Germany) mainly contributes to the Cycles rendering engine.

In 2015 he mostly worked on shader graph optimizations – to speed up rendering of black areas, or to exclude shaders when they don’t emit light.

Julian Eisel (167)

Julian (Germany) became active a little over a year ago and has already entered the top 10 of Blender committers. His passion is UI and usability.

His most notable 2015 project was the “Auto Offset Node” feature, making it much simpler to work with nodes. It was one of the innovative commits Blender is nowadays becoming well known for.

Joshua Leung (182)

Joshua (New Zealand) is the long-time maintainer of the animation module in Blender.

His focus during the past two years has been on improving the Grease Pencil feature – the annotation sketching tool he added long ago. Over those two years it became a full-fledged, innovative 2D/3D drawing tool for animators and story artists. His paper on this topic was accepted for SIGGRAPH Asia 2015.

Lukas Toenne (264)

Lukas (Germany) worked on the Cosmos Laundromat movie project during the first half of 2015. He contributed a lot of hair and particle fixes in the ‘gooseberry branch’. Several of these went to ‘master’ for release as well.

Most notable commit was a proper implementation of angular bending spring forces including Jacobian derivatives (needed for curly hair or sheep fur).

Antony Riakiotakis (445)

Antony (Greece) also worked for Blender Institute during the first half of 2015. It is thanks to him we now have a viewport with DOF and AO rendering.

His personal favourite commit this year was speeding up the initialization of Cycles tile rendering, which in some cases had slowed down to 20 minutes. Now it’s back to just a few seconds.

Antony is currently one of the key team members for the Viewport upgrade project.

Bastien Montagne (593)

Bastien (France) works for Blender Foundation, supported by donations and the Development Fund. He is one of our most active bug fixers – he handled over 500 reports last year.

His main project last year was “Split Normals”: meshes now support custom normals, i.e. normals differing from the automatically computed ones. This was one of the main features wanted by game artists.

Sergey Sharybin (958)

Sergey (Russia) works for both Blender Foundation and Blender Institute; he’s the coding monster tackling every complex issue you give him!

In 2015 he worked on topics like OpenSubdiv, a new Dependency Graph, Cycles ‘split kernel’ and support for Cycles OpenCL. He is also one of the main bug fixers and code reviewers on blender.org.

On 25 December 2015, Sergey completed his PhD at the University of Perm.

Campbell Barton (1925)

Campbell (Australia) also works for Blender Foundation. He has held the #1 spot as most frequent committer since 2007.

Aside from doing hundreds of bug fixes, code reviews and even more code cleanups, his main interest is Blender’s mesh tools. For example, he improved mesh decimation (using weights) and added Boolean editing support in mesh edit-mode.

Campbell is currently writing a much improved boolean mesh intersection library.

December 25, 2015

darktable 2.0

darktable 2.0

An awesome present for the end of 2015!

Sneaking a release out on Christmas Eve, the darktable team have announced their feature release of darktable 2.0! After quite a few months of release candidates, 2.0 is finally here. Please join me in saying Congratulations and a hearty Thank You! for all of their work bringing this release to us.

Alex Prokoudine of Libre Graphics World has a more in-depth look at the release including a nice interview with part of the team: Johannes Hanika, Tobias Ellinghaus, Roman Lebedev, and Jeremy Rosen. My favorite tidbit from the interview:

There is a lot less planning involved than many might think.

— Tobias Ellinghaus

Robert Hutton has taken the time to produce a video covering the new features and other changes between 1.6 and 2.0 as well:

A high-level look at the changes and improvements from the release post on the darktable site:


  • darktable has been ported to gtk-3.0
  • the viewport in darkroom mode is now dynamically sized, you specify the border width
  • side panels now default to a width of 350px in dt 2.0 instead of 300px in dt 1.6
  • further hidpi enhancements
  • navigating lighttable with arrow keys and space/enter
  • brush size/hardness/opacity have key accels
  • allow adding tone- and basecurve nodes with ctrl-click
  • the facebook login procedure is a little different now
  • image information now supports gps altitude


  • new print mode
  • reworked screen color management (softproof, gamut check etc.)
  • delete/trash feature
  • pdf export
  • export can upscale
  • new “mode” parameter in the export panel to fine tune application of styles upon export

core improvements:

  • new thumbnail cache replaces mipmap cache (much improved speed, stability and seamless support for even up to 4K/5K screens)
  • all thumbnails are now properly fully color-managed
  • it is now possible to generate thumbnails for all images in the library using new darktable-generate-cache tool
  • we no longer drop history entries above the selected one when leaving darkroom mode or switching images
  • high quality export now downsamples before watermark and framing to guarantee consistent results
  • optimizations to loading jpeg’s when using libjpeg-turbo with its custom features
  • asynchronous camera and printer detection, prevents deadlocks in some cases
  • noiseprofiles are in external JSON file now
  • aspect ratios for crop&rotate can be added to config file

image operations:

  • color reconstruction module
  • magic lantern-style deflicker was added to the exposure module (extremely useful for timelapses)
  • text watermarks
  • shadows&highlights: add option for white point adjustment
  • more proper Kelvin temperature, fine-tuning preset interpolation in white balance iop
  • monochrome raw demosaicing (for cameras with color filter array physically removed)
  • raw black/white point module


  • removed dependency on libraw
  • removed dependency on libsquish (solves patent issues as a side effect)
  • unbundled pugixml, osm-gps-map and colord-gtk


  • 32-bit support is soft-deprecated due to limited virtual address space
  • support for building with gcc earlier than 4.8 is soft-deprecated
  • numerous memory leaks were exterminated
  • overall stability enhancements


  • lua scripts can now add UI elements to the lighttable view (buttons, sliders etc…)
  • a new repository for external lua scripts was started: https://github.com/darktable-org/lua-scripts
  • it is now possible to edit the collection filters via lua
  • it is now possible to add new cropping guides via lua
  • it is now possible to run background tasks in lua
  • a lua event is generated when the mouse under the cursor changes

The source is available now as well as a .dmg for OS X.
Various Linux distro builds are either already available or will be soon!

December 24, 2015

darktable 2.0 released

we're proud to finally announce the new feature release of darktable, 2.0!

the github release is here: https://github.com/darktable-org/darktable/releases/tag/release-2.0.0

as always, please don't use the autogenerated tarball provided by github, but only our tar.xz. the checksums are:

$ sha256sum darktable-2.0.0.tar.xz
d4f2f525bbbb1355bc3470e74cc158d79d7e236f3925928f67a88461f1df7cb1  darktable-2.0.0.tar.xz
$ sha256sum darktable-2.0.0.dmg
1019646522c3fde81ce0de905220a88b506c7cec37afe010af7d458980dd08bd  darktable-2.0.0.dmg
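The manual `sha256sum` comparison above can also be scripted. Here is a minimal Python sketch (the helper name `sha256_of` is mine, and the filename is just the tarball from the announcement) that computes a file's digest in chunks and compares it to the published value:

```python
import hashlib

def sha256_of(path, chunk_size=65536):
    """Compute the SHA-256 hex digest of a file, reading it in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# published checksum for darktable-2.0.0.tar.xz, from the announcement above
EXPECTED = "d4f2f525bbbb1355bc3470e74cc158d79d7e236f3925928f67a88461f1df7cb1"
# sha256_of("darktable-2.0.0.tar.xz") == EXPECTED  # verify before unpacking
```
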

and the changelog as compared to the 1.6.x series can be found below.

when updating from the currently stable 1.6.x series, please bear in mind that your edits will be preserved during this process, but it will not be possible to downgrade from 2.0 to 1.6.x any more.

happy 2.0 everyone :)

Robert Hutton has done a video covering the new features and other changes between darktable 1.6 and 2.0: https://youtu.be/VJbJ0btlui0
Gource visualization of git log from 1.6.0 to right before 2.0: https://youtu.be/CUiSSfbMwb8

* darktable has been ported to gtk-3.0
* the viewport in darkroom mode is now dynamically sized, you specify the border width
* side panels now default to a width of 350px in dt 2.0 instead of 300px in dt 1.6
* further hidpi enhancements
* navigating lighttable with arrow keys and space/enter
* brush size/hardness/opacity have key accels
* allow adding tone- and basecurve nodes with ctrl-click
* the facebook login procedure is a little different now
* image information now supports gps altitude

* new print mode
* reworked screen color management (softproof, gamut check etc.)
* delete/trash feature
* pdf export
* export can upscale
* new "mode" parameter in the export panel to fine tune application of styles upon export

core improvements:
* new thumbnail cache replaces mipmap cache (much improved speed, stability and seamless support for even up to 4K/5K screens)
* all thumbnails are now properly fully color-managed
* it is now possible to generate thumbnails for all images in the library using new darktable-generate-cache tool
* we no longer drop history entries above the selected one when leaving darkroom mode or switching images
* high quality export now downsamples before watermark and framing to guarantee consistent results
* optimizations to loading jpeg's when using libjpeg-turbo with its custom features
* asynchronous camera and printer detection, prevents deadlocks in some cases
* noiseprofiles are in external JSON file now
* aspect ratios for crop&rotate can be added to config file

image operations:
* color reconstruction module
* magic lantern-style deflicker was added to the exposure module (extremely useful for timelapses)
* text watermarks
* shadows&highlights: add option for white point adjustment
* more proper Kelvin temperature, fine-tuning preset interpolation in white balance iop
* monochrome raw demosaicing (for cameras with color filter array physically removed)
* raw black/white point module

* removed dependency on libraw
* removed dependency on libsquish (solves patent issues as a side effect)
* unbundled pugixml, osm-gps-map and colord-gtk

* 32-bit support is soft-deprecated due to limited virtual address space
* support for building with gcc earlier than 4.8 is soft-deprecated
* numerous memory leaks were exterminated
* overall stability enhancements

* lua scripts can now add UI elements to the lighttable view (buttons, sliders etc...)
* a new repository for external lua scripts was started: https://github.com/darktable-org/lua-scripts
* it is now possible to edit the collection filters via lua
* it is now possible to add new cropping guides via lua
* it is now possible to run background tasks in lua
* a lua event is generated when the mouse under the cursor changes

The user manual has been updated and will be released shortly after.

New camera support, compared to 1.6.9:
Base Support
- Canon PowerShot G5 X
- Olympus SP320
- Panasonic DMC-FZ150 (3:2)
- Panasonic DMC-FZ70 (1:1, 3:2, 16:9)
- Panasonic DMC-FZ72 (1:1, 3:2, 16:9)
- Panasonic DMC-GF7 (1:1, 3:2, 16:9)
- Panasonic DMC-GX8 (4:3)
- Panasonic DMC-LF1 (3:2, 16:9, 1:1)
- Sony DSC-RX10M2

White Balance Presets
- Canon EOS M3
- Canon EOS-1D Mark III
- Canon EOS-1Ds Mark III
- Canon PowerShot G1 X
- Canon PowerShot G1 X Mark II
- Canon PowerShot G15
- Canon PowerShot G16
- Canon PowerShot G3 X
- Canon PowerShot G5 X
- Canon PowerShot S110
- Panasonic DMC-GX8
- Panasonic DMC-LF1
- Pentax *ist DL2
- Sony DSC-RX1
- Sony DSC-RX10M2
- Sony DSC-RX1R
- Sony DSLR-A500
- Sony DSLR-A580
- Sony ILCE-3000
- Sony ILCE-5000
- Sony ILCE-5100
- Sony ILCE-6000
- Sony ILCE-7S
- Sony ILCE-7SM2
- Sony NEX-3N
- Sony NEX-5T
- Sony NEX-F3
- Sony SLT-A33
- Sony SLT-A35

Noise Profiles
- Canon EOS M3
- Fujifilm X-E1
- Fujifilm X30
- Nikon Coolpix P7700
- Olympus E-M10 Mark II
- Olympus E-M5 Mark II
- Olympus E-PL3
- Panasonic DMC-GX8
- Panasonic DMC-LF1
- Pentax K-50
- Sony DSC-RX1
- Sony DSC-RX10M2
- Sony ILCA-77M2
- Sony ILCE-7M2
- Sony ILCE-7RM2
- Sony SLT-A58

If you are a journalist writing about darktable, you are welcome to ask if anything isn't clear. We can also proofread articles in some languages, such as English and German.

December 23, 2015

GNOME Software and xdg-app

Here’s a little Christmas present. This is GNOME Software working with xdg-app to allow live updates of apps and runtimes.

Screenshot from 2015-12-22 15-06-44

This is very much a prototype and needs a lot more work, but seems to work for me with xdg-app from git master (compile with --enable-libxdgapp). If any upstream projects needed any more encouragement, not including an AppData file means the application gets marked as nonfree as we don’t have any project licensing information. Inkscape, I’m looking at you.
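For reference, a minimal AppData file is tiny. This sketch (the IDs, names and licenses are illustrative placeholders, not any real project's metadata) shows the handful of fields that supply the project licensing information mentioned above:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<component type="desktop">
  <!-- must match the application's desktop file name -->
  <id>org.example.MyApp.desktop</id>
  <!-- license of this metadata file itself -->
  <metadata_license>CC0-1.0</metadata_license>
  <!-- license of the project: this is what prevents the "nonfree" fallback -->
  <project_license>GPL-3.0+</project_license>
  <name>MyApp</name>
  <summary>A short one-line description of the application</summary>
</component>
```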

December 21, 2015

Interview with serenitywing

Drei Kleidchen 31.10.2015

Could you tell us something about yourself?

Hello, I am a young woman who is passionate about colorful art and details. My addiction to details is a dominant feature in my artwork.

Do you paint professionally, as a hobby artist, or both?

Well, I see drawing as a hobby only. That’s because I need a clear “border” between working life and free time. I admit it.

What genre(s) do you work in?

I would say that my art style is unique. It’s difficult for me to describe my style, but my artwork is closest to naive art. Naive art is characteristically colorful, detailed and simplistic. It also ignores the rules of perspective.

Whose work inspires you most — who are your role models as an artist?

I have many influences. My most important sources of inspiration are naive and folk art paintings, simplistic cartoons and books about art techniques. I’ve never had formal art training because I see drawing as a hobby only.

How and when did you get to try digital painting for the first time?

I bought my Wacom graphics tablet in late 2014. I wanted to try something different and that was the reason I started digital painting at that time.

What makes you choose digital over traditional painting?

Digital painting is inexpensive to me because of its forgiving nature. You cannot mess up a picture in digital art because you can start over and over. I love digital painting for this reason.

How did you find out about Krita?

I painted my very first digital artwork with GIMP. But soon, GIMP was too complicated for digital painting for me. I searched for free GIMP alternatives. After hours of searching, I found Krita and I downloaded it.

What was your first impression?

“It looks and feels very professional but it is for free. WOW!” I thought.

What do you love about Krita?

I love Krita because it is professional in the look and feel but it is for free. I also like the large amount of digital painting brushes in Krita.

What do you think needs improvement in Krita? Is there anything that really annoys you?

I like Krita very much but the perspective grid tools could be easier to handle. That’s all.

What sets Krita apart from the other tools that you use?

Krita is better for digital painting than other tools. Other software is more focused on image manipulation. And I don’t use closed source painting programs because they are very expensive.

If you had to pick one favourite of all your work done in Krita so far, what would it be, and why?

One of my most favorite Krita paintings is “Three Little Dresses”. This digital painting shows my addiction to naive and colorful art very well. It is also focused on details.

What techniques and brushes did you use in it?

I have used three kinds of brushes: basic tip, sponge texture and circle filling. I use these brushes most frequently in my digital paintings.

Where can people see more of your work?

I have a few social network accounts for publishing my best artworks. But my most important account is on DeviantArt. Here is the link to my art on DeviantArt: www.serenitywing.deviantart.com

Anything else you’d like to share?

Thank you for the great efforts on the development of Krita!

December 20, 2015

Christmas Bird Count

Yesterday was the Los Alamos Christmas Bird Count.

[ Mountain chickadee ] No big deal, right? Most counties have a Christmas Bird Count, a specified day in late December when birders hit the trails and try to identify and count as many birds as they can find. It's coordinated by the Audubon Society, which collects the data so it can be used to track species decline, changes in range in response to global warming, and other scientific questions. The CBC has come a long way from when it split off from an older tradition, the Christmas "Side Hunt", where people would hit the trails and try to kill as many animals as they could.

But the CBC is a big deal in Los Alamos, because we haven't had one since 1953. It turns out that to run an official CBC, you have to be qualified by Audubon and jump through a lot of hoops proving that you can do it properly. Despite there being a very active birding community here, nobody had taken on the job of qualifying us until this year. There was a lot of enthusiasm for the project: I think there were 30 or 40 people participating despite the chilly, overcast weather.

The team I was on was scheduled to start at 7. But I had been on the practice count in March (running a practice count is one of the hoops Audubon makes you jump through), and after dragging myself out of bed at oh-dark-thirty and freezing my toes off slogging through the snow, I had learned that birds are mostly too sensible to come out that early in winter. I tried to remind the other people on the team of what the March morning had been like, but nobody was listening, so I said I'd be late, and I met them at 8. (Still early for me, but I woke up early that morning.)

[ Two very late-season sandhill cranes ] Sure enough, when I got there at 8, there was disappointment over how few birds there were. But actually that continued all day: the promised sun never came out, and I think the birds were hoping for warmer weather. We did see a good assortment of woodpeckers and nuthatches in a small area of Water Canyon, and later, a pair of very late-season sandhill cranes made a low flyover just above where we stood on Estante Way; but mostly, it was disappointing.

In the early afternoon, the team disbanded to go home and watch our respective feeders, except for a couple of people who drove down the highway in search of red-tailed hawks and to the White Rock gas station in search of rock pigeons. (I love it that I'm living in a place where birders have to go out of their way to find rock pigeons to count.)

I didn't actually contribute much on the walks. Most of the others were much more experienced, so mostly my role was to say "Wait, what's that noise?" or "Something flew from that tree to this one" or "Yep, sure enough, two more juncos." But there was one species I thought I could help with: scaled quail. We've been having a regular flock of scaled quail coming by the house this autumn, sometimes as many as 13 at a time, which is apparently unusual for this time of year. I had Dave at home watching for quail while I was out walking around.

When I went home for a lunch break, Dave reported no quail: there had been a coyote sniffing around the yard, scaring away all the birds, and then later there'd been a Cooper's hawk. He'd found the hawk while watching a rock squirrel that was eating birdseed along with the towhees and juncos: the squirrel suddenly sat up and stared intently at something, and Dave followed its gaze to see the hawk perched on the fence. The squirrel then resumed eating, having decided that a Cooper's hawk is too small to be much danger to a squirrel.

[ Scaled quail ] But what with all the predators, there had been no quail. We had lunch, keeping our eyes on the feeder area, when they showed up. Three of them, no, six, no, nine. I kept watch while Dave went over to another window to see if there were any more headed our way. And it turns out there was a whole separate flock, nine more, out in the yard. Eighteen quail in all, a record for us! We'd suspected that we had two different quail families visiting us, but when you're watching one spot with quail constantly running in and out, there's no way to know if it's the same birds or different ones. It needed two people watching different areas to get our high count of 18. And a good thing: we were the only bird counters in the county who saw any quail, let alone eighteen. So I did get to make a contribution after all.

I carried a camera all day, but my longest regular lens (a 55-250 f/4-5.6) isn't enough when it comes to distant woodpeckers. So most of what I got was blurry, underexposed "record shots", except for the quail, cranes, and an obliging chickadee who wasn't afraid of a bunch of binocular-wielding anthropoids. Photos here: Los Alamos Christmas Bird Count, White Rock team, 2015.

December 18, 2015

Game Art Quest update

Nathan here:

Hey there! I’m back with 3 more videos, but also with an important announcement.

Kickstarter Funding update

First, we’re pretty close to the first stretch goal now. 600€ to go in 5 days. Hell yeah! So many of you are supporting this training… thank you from the bottom of my heart!

If we reach 8 000€, I’ll open-source all of the game assets made as part of the Udemy course. Not just the final sprites: every useful Krita source file! That will benefit everyone, backers and the community alike. It will let you dissect my workflow and keep a set of template sprites at hand to prototype your games.

Before we move on to the videos, let’s talk about what’s coming next.

I’m making a tutorial about life bars in Krita. I’m also working on the 3rd part of the rocks painting series, and I recorded a talk I gave at a game developers’ meetup in Lausanne last week, called “Building a lasting community”.

If you want to work on your game art skills, we’re working on bars this week on the Game Art Quest Facebook group.

The tangent normal brush engine overview is out!

The feature was implemented by Wolthera as part of the latest Google Summer of Code. It is one that few seem to know about, despite it being both pretty unique and useful. It allows you to draw tangent space normal map data, as well as flow maps. Sounds like Chinese to you? Just check out the 3 videos below to learn a bit more about that.

It’s now time to get back to work. Thank you kindly for your time! I’m going to communicate on the Kickstarter again on social networks this week, so please help me spread the word! We have to secure that stretch goal, right?

ʕ •̀ᴥ•́ʔ Nathan out.

December 17, 2015

Coming soon: Roda Pantura

Indonesian artist Hizkia Subiyantoro is working on a short animated film, Roda Pantura (Wheels of Pantura), to be finished in late January 2016.

The motto of the film is “PULANG MALU, TAK PULANG RINDU”: “Too ashamed to go home, too homesick not to.” It tells the story of a poor truck driver who struggles to support his family during the economic crisis. Under pressure from his job, he gets caught up in the lifestyle of Pantura, the highway along the north coast of Java, and indulges in alcohol, gambling, and even prostitution.

All the concept art, painting, coloring and 2D animation on this film is done with Krita, supplemented with Blender for compositing and video editing. Hizkia says: “Krita is the best software for me, I like the very smooth brush engine. This is useful, and a lot of people share their brushes. My concept styles are: tropical, dirty, imperfect, colorful.”

Here’s a video of the film’s painting and animation process:

Roda Pantura was first shown off in a pitch session at the “Animation du Monde” festival at Annecy, France, in June 2015.

Check out the Facebook fan page and visit Hizkia’s site!

December 16, 2015

fourth release candidate for darktable 2.0

we're proud to announce the fourth and hopefully last release candidate in the new feature release of darktable, 2.0~rc4.

the release notes and relevant downloads can be found attached to this git tag:
please only use our provided packages ("darktable-2.0.rc4.*" tar.xz and dmg) not the auto-created tarballs from github ("Source code", zip and tar.gz). the latter are just git snapshots and will not work! here are the direct links to tar.xz and dmg:

the checksums are:

$ sha256sum darktable-2.0~rc4.tar.xz
$ sha256sum darktable-2.0~rc4.dmg

packages for individual platforms and distros will follow shortly.

the changes from rc3 include minor bugfixes, such as:

  • translation updates
  • an OpenCL bug fixed
  • fixed a rare crash when leaving darkroom
  • fixed a bug in gamut checking
  • fixed a possible crash in lua garbage collection
  • fixed a bug in rawspeed's sraw handling
  • fixed a bug in circle masks
  • allow toggling tethering zoom with 'z'
  • don't make some modules too wide in some languages
  • fixed high CPU load when hovering filmstrip
  • fixed lighttable prefetching
  • fixed thumbnail color management
  • make tethered focusing for Canon cameras more robust wrt. libgphoto2 version
  • some styling fixes
  • fixed filmstrip width when duplicating images in darkroom
  • scroll sidepanels when mouse is next to the window border
  • speed up thumbnail color management using OpenMP
  • fixed a few small memleaks
  • fixed PDF exporter when compiled without Lua
  • camera support improvements:
    • noiseprofiles:
      • add Olympus E-M5 Mark II
      • add Canon M3
      • add Fuji X30
      • add Sony RX10M2
      • add Panasonic GX8
      • add Sony A7RII
    • whitebalance:
      • Canon S110
      • Canon S100
      • Canon G1 X Mark II
      • Canon PowerShot G3 X
      • Canon PowerShot G16
      • Canon PowerShot G15
      • Canon PowerShot G1 X
      • Canon 1D Mark III
      • Canon 1Ds Mark III
      • Canon EOS M3
      • Panasonic GX8
      • Pentax *ist DL2
      • Sony NEX-F3
      • Sony SLT-A33
      • Sony NEX-5T
      • Sony NEX-3N
      • Sony A3000
      • Sony A5000
      • Sony A5100
      • Sony A500
      • Sony RX1R
      • Sony RX1
      • Sony DSLR-A580
      • Sony ILCE-6000
      • Sony ILCE-7S
      • Sony ILCE-7SM2
      • Sony SLT-A35
    • rawspeed fixes:
      • support all Panasonic GF7 crops
      • support all Panasonic FZ70/FZ72 crops
      • support FZ150 3:2 and fix 4:3 blackpoint
      • fixed whitebalance for Canon G3 X
      • whitebalance support for the Leaf Credo line
      • fixed Nikon D1 whitebalance
      • whitebalance support for Canon Pro1/G6/S60/S70
      • add another whitebalance mode for Canon D30
      • fixed whitebalance for Canon G3/G5/S45/S50
      • fixed whitebalance for Canon S90
      • support another Canon 350D alias

Christmas is already here for image processing folks…

… Because the latest version 1.6.8 « XMas 2015 Edition » of G’MIC was released last week :)

G’MIC (GREYC’s Magic for Image Computing) is an open-source framework for image processing that I started developing in August 2008. This new release is a good occasion for me to discuss some of the advances and new features added to the project since my last digest, published here 8 months ago. Seven versions have been published since then.

I’ll talk about the few improvements done on the G’MIC plug-in for GIMP (which is currently the most used interface of the project). But I’ll also describe more technical developments of the framework, which already permitted to design some cool new filters and above all, which promise great things for the future.

1. Overview of the G’MIC project

(Skip this section if you already know what G’MIC is!)


Fig. 1.1. Mascot and logo of the G’MIC project, a full-featured open-source framework for image processing.

G’MIC is an open-source project developed in the IMAGE team at the GREYC laboratory (a public research unit based in Caen / France). It is distributed under the free software license CeCILL. It provides several user interfaces for the manipulation of generic image data. Currently, its most used interface is a plugin for GIMP (the plug-in alone seems to be quite popular, as it has been downloaded more than 2 million times so far).


Fig. 1.2. The G’MIC plug-in (version 1.6.8) running inside GIMP 2.8.

G’MIC also offers other worthwhile interfaces, such as a command-line tool (similar to the CLI tools from ImageMagick or GraphicsMagick) which fully exploits its capabilities. There is also a nice web service, G’MIC Online, to apply effects on images directly from a web browser (check it out, it has been redesigned recently!). Currently, the G’MIC core library (used by all those interfaces) provides more than 900 different functions for various image processing tasks. The library alone takes approximately 5.5 MB and has more than 100,000 lines of source code. There is a wide diversity of commands available for geometric and color manipulation, image filtering (denoising and enhancement by spectral, variational or non-local methods), motion estimation / image alignment, primitive drawing (including 3D meshes), edge detection, segmentation, artistic rendering, etc. It’s a very generic tool, as you can imagine. To learn more about the motivations and goals of the G’MIC project, I suggest you take a look at the presentation slides. You can also meet the developers on the new forum, or on the IRC channel #pixls.us (on Freenode).

2. Improvements of the G’MIC plug-in for GIMP

Three major improvements have been made to the GIMP plug-in interface over the last few months.

2.1. Import and export of image data via GEGL buffers

That’s probably the greatest news to announce about the plug-in itself. The G’MIC plug-in is now able to import and export image data from/to GIMP using GEGL buffers, which are the foundation of the current development version (2.9) of GIMP. In practice, the plug-in can process high-bit-depth images (for example with pixels stored as 32-bit floats per channel) without loss of numerical accuracy. Admittedly, applying a single filter to an image rarely generates serious visual artifacts merely because the input was quantized to 8 bits. But people applying many filters and effects sequentially to their images will appreciate that numerical accuracy is preserved throughout their entire workflow. The animation below (Fig. 2.1) shows the subtle quantization phenomenon that occurs when certain types of filters are applied to an image (here, an anisotropic smoothing filter).


Fig. 2.1. Comparison of output histograms generated by the same G’MIC filter applied on an image encoded with 8 bits or 32 bits-per-channel values.

The histograms in Fig. 2.1 illustrate that we get a greater diversity of pixel values when the filter is applied to an image stored as 32-bit floats rather than 8-bit integers. Applying such filters several times in 8-bit mode can slowly create undesirable quantization / posterization / banding effects in the processed image. In short, the G’MIC plug-in seems to be ready for the next major stable release, GIMP 2.10 :) (a big THANKS to Mitch from the GIMP team, who took the time to explain to me how to achieve this).
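The quantization phenomenon is easy to reproduce with a toy calculation. The Python sketch below (an illustrative model of my own, not G’MIC code) applies a mild, invertible tone adjustment twenty times and counts how many distinct levels survive when every intermediate result is rounded back to integers, 8-bit style:

```python
def apply_repeatedly(values, n, round_to_int):
    """Apply a mild tone adjustment n times, optionally rounding each step."""
    out = [float(v) for v in values]
    for _ in range(n):
        out = [v * 0.9 + 12.75 for v in out]      # gentle contrast change
        if round_to_int:
            out = [float(round(v)) for v in out]  # emulate 8-bit storage
    return out

levels_8bit = len(set(apply_repeatedly(range(256), 20, True)))
levels_float = len(set(apply_repeatedly(range(256), 20, False)))
print(levels_8bit, levels_float)
```

The rounded pipeline keeps far fewer distinct levels than the float one, which is exactly the posterization the histograms illustrate.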

2.2. Support for UTF-8 strings in widgets

The plug-in now handles strings encoded in UTF-8 for all interface widgets, which will ease its internationalization. For instance, we now have a Japanese version of the interface (as well as Japanese translations for some of the filters), as shown in the screenshot below (Fig. 2.2).


Fig. 2.2. The G’MIC plug-in for GIMP, partially translated into Japanese.

2.3. A more responsive interface

The plug-in now reacts faster to parameter changes, particularly while an image preview is being computed. A cancellation mechanism has been added, so we no longer have to wait for the filter preview to finish rendering before interacting with the interface. It might sound obvious, but this greatly improves the user experience.

Also check out the interesting initiative of Jean-Philippe Fleury – a nice contributor – who displays on his web page a gallery of most of the filters and image effects available in the G’MIC plug-in. It’s a very quick and easy way to get an overview of the filters, and to observe how they perform on different images (sometimes with different parameters).

3. The PatchMatch algorithm

PatchMatch is a fast algorithm for computing image correspondences using a block-matching technique. For a few years now it has attracted a lot of interest in the image processing community, thanks to its speed and the fact that it can be parallelized relatively easily. It has since become a key step in many recent algorithms that need fast comparisons between image patches, including methods for reconstructing missing parts of an image, synthesizing textures, or performing super-resolution. Thus, I’m happy to announce that G’MIC now has a parallelized implementation of PatchMatch (through the native command -patchmatch), both for 2D and 3D images. Based on this recent implementation, I’ve already been able to develop and add three interesting filters to G’MIC, described hereafter.
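For intuition, the core of PatchMatch fits in a few dozen lines. Below is a toy, single-scale Python version (my own illustration, not G’MIC’s parallelized implementation) that computes an approximate nearest-neighbour field between two grayscale images via random initialization, offset propagation, and a shrinking random search:

```python
import random

def patch_dist(A, B, ax, ay, bx, by, p):
    """Sum of squared differences between p x p patches of A and B."""
    d = 0
    for dy in range(p):
        for dx in range(p):
            diff = A[ay + dy][ax + dx] - B[by + dy][bx + dx]
            d += diff * diff
    return d

def patchmatch(A, B, p=3, iters=4):
    """Approximate nearest-neighbour field from patches of A to patches of B."""
    h, w = len(A) - p + 1, len(A[0]) - p + 1    # valid patch positions in A
    bh, bw = len(B) - p + 1, len(B[0]) - p + 1  # valid patch positions in B
    # 1. random initialization of the nearest-neighbour field (NNF)
    nnf = [[(random.randrange(bw), random.randrange(bh)) for _ in range(w)]
           for _ in range(h)]
    cost = [[patch_dist(A, B, x, y, nnf[y][x][0], nnf[y][x][1], p)
             for x in range(w)] for y in range(h)]
    for _ in range(iters):
        for y in range(h):
            for x in range(w):
                # 2. propagation: reuse the offsets of the left/top neighbours
                for nx, ny in ((x - 1, y), (x, y - 1)):
                    if nx >= 0 and ny >= 0:
                        cx = min(nnf[ny][nx][0] + (x - nx), bw - 1)
                        cy = min(nnf[ny][nx][1] + (y - ny), bh - 1)
                        d = patch_dist(A, B, x, y, cx, cy, p)
                        if d < cost[y][x]:
                            nnf[y][x], cost[y][x] = (cx, cy), d
                # 3. random search in a shrinking window around the current best
                r = max(bw, bh)
                while r >= 1:
                    bx, by = nnf[y][x]
                    cx = min(max(bx + random.randint(-r, r), 0), bw - 1)
                    cy = min(max(by + random.randint(-r, r), 0), bh - 1)
                    d = patch_dist(A, B, x, y, cx, cy, p)
                    if d < cost[y][x]:
                        nnf[y][x], cost[y][x] = (cx, cy), d
                    r //= 2
    return nnf, cost
```

The trick is that good matches found at one pixel propagate to its neighbours, so only a logarithmic number of random samples per pixel is needed; this is also what makes the algorithm easy to parallelize over image rows or tiles.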

3.1. Another filter for image Inpainting

A new image inpainting filter – using a multi-scale approach – has been included in G’MIC to reconstruct missing or corrupted image regions. Very convenient to pull your stepmother out of the holiday photos you shot in Ibiza! Note that this is not the first patch-based inpainting algorithm available in G’MIC (this post on Pat David’s blog already presented such a tool in February 2014). It’s simply a different algorithm, and thus gives different results. In this field, being able to generate alternative results is definitely not a luxury, given the ill-posed nature of the problem. And if – like me – you have a machine with a lot of CPU cores just waiting to be heated, then this new multi-threaded filter is for you! Here are two examples of results (Fig. 3.1 and Fig. 3.2) obtained with this inpainting filter.


Fig. 3.1. Removing a bear from an image, using the PatchMatch-based Inpainting filter of G’MIC.


Fig. 3.2. Removing the Eiffel Tower, using the PatchMatch-based Inpainting filter of G’MIC.

A video showing how this latest example was obtained (in only 1 minute 07) is visible below. This is a real-time video; no tricks were used, except a PC with 24 cores! Even to me (the developer), it’s a bit magical to see the algorithm regenerating whole trees at the feet of the Eiffel Tower. Actually, it does nothing more than clone an existing tree from elsewhere in the picture, but that is still impressive. Of course, it does not work as well on all images where I’ve experimented with the algorithm :) .

This new Inpainting filter is roughly the same as the one Adobe introduced in Photoshop CS5 in 2010, under the (rather grandiose) name “Content-Aware Fill”. The main strength of this kind of algorithm is its ability to reconstruct large coherent textured areas. Note also that my PhD student Maxime Daisy (who defended his thesis successfully two weeks ago, well done pal!) has worked on some nice extensions of such Inpainting methods to remove moving objects in video sequences (see his demo page for examples). These extensions for video Inpainting are not yet available in G’MIC (and are still very demanding in terms of computation time), but it may happen one day, who knows?

3.2. Texture re-synthesis

By using a very similar multi-scale approach, I’ve also added a new texture synthesis filter to G’MIC. It lets you synthesize a texture of arbitrary size from an input template texture (which is usually smaller). Of course, the filter does something smarter than just tiling the template texture: it actually regenerates a whole new texture with similar characteristics, by copying/pasting bits of the template so that the synthesized result contains no visible seams. Fig.3.3 shows such a 512x512 re-synthesized texture (top right) generated from a smaller 280x350 input template (left). A comparison of this result with a more basic tiling process is also shown (bottom right).


Figure 3.3. Example of re-synthesizing a complex texture using G’MIC and comparison with basic tiling.

Note that we already had a texture re-synthesis algorithm in G’MIC, but it worked well only on micro-textures. In contrast, the example in Fig.3.3 shows that the new algorithm is capable of regenerating more complex macro-textures.

3.3. Make seamless textures

And here is the third (and last) PatchMatch-based filter added so far to G’MIC. This one transforms an input texture to make it appear seamless when tiled. The idea is a bit different from the previous filter, as most of the original input texture is left untouched here. The filter only performs a global color normalization (to remove the low-frequency gradients due to non-homogeneous illumination) and locally adds an inner (or outer) frame that makes the texture appear seamless. Look at what the filter can do on the two examples below (textures shown tiled 2x2). Not bad at all! I have to admit, though, that the parameters are still a bit hard to tweak.


Fig.3.4. Make a seamless texture with the third PatchMatch-based filter in G’MIC.


Fig.3.5 Make a seamless texture with the third PatchMatch-based filter in G’MIC.
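The border-blending half of that idea can be sketched in a few lines of Python: cross-fade each margin with the opposite one so that the left/right and top/bottom edges agree once the texture is tiled. This is my own toy simplification, not G'MIC's filter, and it skips the global color normalization step entirely:

```python
def make_seamless(img, margin):
    """Cross-fade opposite borders of a 2D grayscale image (list of lists)
    so the texture tiles without a hard seam. Toy sketch only."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    # horizontal pass: blend left and right margins together
    for y in range(h):
        for k in range(margin):
            a = 0.5 * (1 - k / margin)       # 0.5 at the outermost pixels
            left, right = img[y][k], img[y][w - 1 - k]
            out[y][k] = (1 - a) * left + a * right
            out[y][w - 1 - k] = (1 - a) * right + a * left
    # vertical pass: same idea for the top and bottom margins
    for x in range(w):
        for k in range(margin):
            a = 0.5 * (1 - k / margin)
            top, bottom = out[k][x], out[h - 1 - k][x]
            out[k][x] = (1 - a) * top + a * bottom
            out[h - 1 - k][x] = (1 - a) * bottom + a * top
    return out
```

After this, opposite border pixels carry the same value, so tiling the result produces no hard discontinuity (at the cost of some blur in the margin, which is why the real filter is more elaborate).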

Believe me, having the PatchMatch algorithm implemented in G’MIC is really great news! No doubt many filters could benefit from this method in the future.

4. A more powerful expression evaluator

In my mind, G’MIC has always been a very convenient tool for quickly designing and building complex image processing pipelines (possibly with loops and conditionals). However, when prototyping algorithms, it frequently happens that I want to experiment with quite “low-level” operations. For instance, I may want to write nested (x,y) loops over an image, where each loop iteration performs quite “exotic” things at the pixel scale. This kind of low-level algorithm cannot be expressed easily as a pipeline of macro-operators (assuming we don’t already have a macro-operator that does the job, of course!). Expressing such an algorithm as a G’MIC pipeline anyway was a bit tedious, either because it required some tricks to write, or because it ended up as a very slow implementation (due to the nested loops over all the image pixels being repeatedly interpreted). Note that this problem is not specific to G’MIC: talk to image processing folks working with Matlab to see how creative they get when it comes to avoiding explicit loops in a Matlab script! It’s a fact: image processing often requires a lot of (pixel) data to process, and prototyping heavy IP algorithms in purely interpreted languages introduces a speed bottleneck.

My goal was nevertheless to be able to prototype most of my algorithms (including the low-level ones) directly in G’MIC, with the constraint that they never become ridiculously slow to run (compared to an equivalent C++ implementation, for instance). As you might have guessed, a conventional solution to this kind of problem consists of embedding a just-in-time compiler into the interpreter. So that’s what I’ve done – but only for a subset of G’MIC – namely the mathematical expression evaluator. As one might expect, evaluating math expressions is indeed a key feature for an image processing framework such as G’MIC.

To be perfectly honest, this JIT had already been included in G’MIC for years. But it has been greatly optimized and improved over the last months to allow the parallelized evaluation of more complex expressions. These expressions can now in fact be considered small programs in themselves (containing loops, variables, conditional tests, pixel accesses, etc.), rather than just plain mathematical formulas. As a result, we now have a very efficient expression evaluator in G’MIC, which appears to be at least 10x faster than the equivalent feature in ImageMagick (its generic command -fx)!

Thus, G’MIC is now a tool good enough to prototype almost any kind of image processing algorithm without sacrificing computation speed. I wrote a quite comprehensive post about this improved expression evaluator a few months ago, so please take some time to read it if you want more details. Here again, this is very encouraging news for the future of the project.
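The benefit of compiling an expression once and then evaluating it over every pixel can be illustrated with a toy Python analogue of a -fill-style command. This is of course not G'MIC's JIT, just a sketch of the "compile once, evaluate many" idea; the variables x, y, w, h and i loosely mimic the ones available in G'MIC math expressions:

```python
def fill(image, expr):
    """Evaluate a math expression for every pixel of a 2D image (list of lists).
    The expression is compiled once, up front, instead of being re-parsed in
    the inner loop. Exposed variables: x, y (coords), w, h (size), i (value)."""
    code = compile(expr, "<expr>", "eval")       # one-time "compilation"
    h, w = len(image), len(image[0])
    out = [row[:] for row in image]
    for y in range(h):
        for x in range(w):
            out[y][x] = eval(code, {"__builtins__": {}},
                             {"x": x, "y": y, "w": w, "h": h, "i": image[y][x]})
    return out
```

For example, `fill(img, "x + 10*y")` writes a coordinate ramp into the image. A real JIT goes much further (generating machine code, vectorizing, running rows in parallel), but even this one-line `compile()` step avoids re-parsing the expression millions of times.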

5. An abundance of filters!

This section proposes a bulk list of some other features/effects/filters added to G’MIC since last April. The screenshots below often show those filters running in the G’MIC plug-in for GIMP, but of course they can also be applied from all the other interfaces (the CLI tool, in particular).

  • Filters search engine: this has been a long-time request from users, so we have finally proposed a first draft of such a tool (thanks Andy!). Indeed, the number of filters in the plug-in keeps increasing, and it is not always easy to find a particular filter, especially if you forgot to put it in your favorites. You still have to specify relevant keywords to be able to find your filters with this tool, though.

Fig.5.1. New filter search engine by keywords in the G’MIC plug-in for GIMP.

  • Vector Painting: This image filter is a direct consequence of the recent improvements made to the math expression evaluator. It basically generates a piecewise-constant abstraction of an input color image. It’s quite fun to look at the full source code of this filter, because it is surprisingly short (19 lines)!

Fig. 5.2. The Vector Painting filter creates image abstractions.

  • Freaky B&W: This filter performs a black & white conversion (grayscale, to be precise) of an input color image. But it does this by solving a Poisson equation, rather than applying a simple linear formula to compute the luminance. Here, the goal is to get a B&W image which contains all of the contrast details present in each color channel of the original image. This filter often generates images that have a kind of HDR look (but in grayscale).

Fig. 5.3. The Freaky B&W filter converts a color image into HDR-like grayscale.

  • Bokeh: This one tries to generate synthetic bokeh backgrounds on color images (an artistic blur). It is highly configurable and can generate bokeh with various kinds of shapes (circles, pentagons, octagons, stars...). It can be used, for instance, to add some light effects to a color image, as illustrated below.

Fig. 5.4. Application of the new Bokeh filter on a color image.

  • Rain & snow: as the name suggests, this filter adds rain or snow on your images (example below is a zoom of the previous image).

Fig. 5.5. Adding rain on a color image with the filter Rain & snow.

  • Neon lightning: not much to say about this one. It generates random curves starting from a region A to another region B and stylizes those curves as neon lights. Convenient for generating cool wallpapers probably :)

Fig. 5.6. Effect Neon Lightning in action!

  • Stroke: This filter decorates simple monochrome shapes on a transparent layer with color gradients. The figure below illustrates the transformation of a simple text (initially drawn in a single color) by this filter.

Fig. 5.7. Applying the Stroke filter to decorate a plain text.

  • Light leaks: this one simulates the presence of undesired light artifacts in pictures. In general, that is rather the kind of effect we want to remove! But simulating image degradation can be useful in some cases (for those who want to make their synthesized images more realistic, for instance).

Fig. 5.8. Simulation of image degradation with the Light leaks filter.

  • Grid [triangular]: This filter converts an image into a grid composed of colored triangles, with many choices for the type of grid to apply.

Fig. 5.9. Transforming an image in a triangular grid.

  • Intarsia: This is a quite original filter. It turns an image into a knitting diagram for Intarsia. The idea came from a user of the GimpChat forum, to help her distribute custom knitting patterns (apparently there are websites that offer such patterns in exchange for hard cash). The filter itself does not modify the picture; it only generates a knitting diagram as an output HTML page. An example of such a diagram is visible here.
  • Drop water: I must admit I particularly like this one. The filter simulates the appearance of water droplets over an image. In its most basic version (with the default parameters), it looks like this:

Fig. 5.10. Adding water droplets on an image with the Drop water filter.

But it gets really interesting when the user defines their own droplet shapes by adding a transparent layer containing a few colored strokes. For instance, if you add this layer (here represented with pink strokes) to the leaf image:


Fig. 5.11. Adding a layer to define the desired shape of water drops.

Then, the filter Drop water will generate this image, with your custom water drops:


Fig. 5.12. Synthesis of custom shaped water drops.

Moreover, the filter has the good sense to generate its result as a stack of several output layers, each corresponding to the simulation of a different physical phenomenon, namely: the specular light spots, the drop shadows, the self-shadows, and the light refraction. It is then easy to modify these layers individually afterwards to create interesting effects with the image. For instance, if we apply a color-to-monochrome filter on the base image layer while preserving the refraction layer (computed from the initial color image), we end up with this:


Fig. 5.13. Applying the Drop water filter, followed by a colorimetric change on the initial layer only.

The short video tutorial below explains how to achieve this effect, step by step, using the G’MIC plug-in for GIMP. It takes less than 2 minutes, even for a complete beginner.

Even better, you can superimpose the result of the Drop water filter on another image. Here is an example where two portraits have been aligned then “merged” together, separated by a kind of liquid interface. It’s really easy to do, and the result looks pretty cool to me.


Fig. 5.14. The G’MIC Drop water filter applied to create a liquid interface between two distinct portraits.

Well, that’s it for the newest filters!

6. Other notable points

To end this long post, let me list some other interesting news about the project:

  • First of all, we have been contacted by Tobias Fleischer. Tobias is a professional plug-in developer for Adobe After Effects and Premiere Pro (among other video editing software). He has done a tremendous job developing a Windows DLL that encapsulates libgmic (the G’MIC core library), and he has used this DLL to implement a prototype of a G’MIC plug-in for After Effects. He was kind enough to share some screenshots of what he has done recently: below you can see an example of the G’MIC skeleton filter running in After Effects.

Fig. 6.1. Prototype of a G’MIC plug-in running in Adobe After Effects.

I can already hear grumpy guys saying: “But why didn’t he do that for free software, such as Natron, rather than for 100% proprietary software?”. But in fact, he did that too! And not just for Natron, but for any other video processing software compatible with the OpenFX API. The screenshot below shows a prototype of an OpenFX-compliant G’MIC plug-in running in Sony Vegas Pro. There is probably no reason why this couldn’t work with Natron too.


Fig. 6.2. Prototype of an OpenFX-compliant G’MIC plug-in running under Sony Vegas Pro.

Of course, all this should still be considered Work In Progress. There are probably still bugs to fix and improvements to make, but this is really promising. I can’t wait to see these plug-ins released and running in Natron :)

  • Let me continue with some other good news: Andrea, a nice contributor (who is also the developer of PhotoFlow), managed to understand why G’MIC crashed frequently under MacOSX when computations were done using multiple threads. He proposed a simple patch to solve this issue (it was actually related to the tiny stack size allocated to each thread). G’MIC under MacOSX should now be fully functional.
  • The webcam/video interface ZArt of G’MIC has also been improved a lot, with new filters added, automatic detection of the webcam resolutions, and the possibility of having a dual-window mode (one window for monitoring + one for display, on a second screen).

Fig 6.3. The ZArt interface running the G’MIC filter Drop water in dual-window mode.

  • A new animated demo has also been added to G’MIC, through the command -x_landscape. The demo itself is of little interest to the user, but it was useful for me to test/improve the speed of the G’MIC interpreter (and isn’t it funny to have this kind of thing in G’MIC 😉 ). Note that all animated and/or interactive G’MIC scripts are available via this invocation of the CLI tool:
$ gmic -demo

Fig. 6.4. Animated virtual landscape in the G’MIC CLI tool gmic with command -x_landscape.

As I had more and more of these small, fun animations implemented in G’MIC, I decided to put them together as a little intro, entirely coded as a single G’MIC script (750 lines of code). The result smells of the divine fragrance of old 8/16-bit machines (the corresponding video is shown below). Remember that everything has been generated 100% by the G’MIC interpreter. Isn’t that cool?

8. So, what’s next?

Now it’s probably time to rest a little bit :) . 2015 has already been a very busy year for the G’MIC project. Christmas is rapidly approaching and I’ll take a (small) break.

To be honest, I don’t yet have a clear idea of the things I’ll focus on. In any case, it appears that G’MIC keeps improving gradually and can potentially raise the interest of more and more people. I want to especially thank all the contributors to the project and the users of the software who give regular feedback to make it better and better. Also, be aware that each significant addition is usually the subject of an announcement on the Google+ page of G’MIC, so it’s easy to stay informed about the project’s progression, day after day. Dear reader, do you have any proposals for major improvements?

Let me finally conclude this post by proclaiming it mightily: “Long live open-source image processing! And Happy Xmas, everyone!”



December 15, 2015

Let's Encrypt!


Also a neat 2.5D parallax video for Wikipedia.

I finally got off my butt to get a process in place to obtain and update security certificates using Let’s Encrypt for both pixls.us and discuss.pixls.us. I also did some (more) work with Victor Grigas and Wikipedia to support their #Edit2015 video this year.

Wikipedia #Edit2015

Last year, I did some 2.5D parallax animations for Wikipedia to help with their first-ever end-of-the-year retrospective video (see the blog post from last year). Here is the retrospective from #Edit2014:

So it was an honor to hear from Victor Grigas again this year! This time around there was a neat new crop of images he wanted to animate for the video. Below you’ll find my contributions (they were all used in the final edit, just shortened to fit appropriately):

Wiki #Edit2015 Bel from Pat David on Vimeo.
Wiki #Edit2015 Je Suis Charlie from Pat David on Vimeo.
Wiki #Edit2015 Samantha Cristoforetti Nimoy Tribute from Pat David on Vimeo.
Wiki #Edit2015 SCOTUS LGBQT from Pat David on Vimeo.

Here is the final cut of the video, just released today:

Victor chose some really neat images that were fun to work on! Of course, all free software was used in this creation (GIMP for cutting up the images into sections and rebuilding textures as needed and Blender for re-assembling the planes and animating the camera movements). I had previously written a tutorial on doing this with free software on my blog.

You can read more on the wikimedia.org blog!

New Certificates

Let's Encrypt Logo

Yes, this is not very exciting I’ll concede. I think it is important though.

I recently took advantage of my beta invite to Let’s Encrypt. It’s a certificate authority that provides free X.509 certs for domain owners that was founded by the Electronic Frontier Foundation, Mozilla, and the University of Michigan.

The key principles behind Let’s Encrypt are:

  • Free: Anyone who owns a domain name can use Let’s Encrypt to obtain a trusted certificate at zero cost.
  • Automatic: Software running on a web server can interact with Let’s Encrypt to painlessly obtain a certificate, securely configure it for use, and automatically take care of renewal.
  • Secure: Let’s Encrypt will serve as a platform for advancing TLS security best practices, both on the CA side and by helping site operators properly secure their servers.
  • Transparent: All certificates issued or revoked will be publicly recorded and available for anyone to inspect.
  • Open: The automatic issuance and renewal protocol will be published as an open standard that others can adopt.
  • Cooperative: Much like the underlying Internet protocols themselves, Let’s Encrypt is a joint effort to benefit the community, beyond the control of any one organization.

It was relatively painless to obtain the certs. I only had to run their program, which uses ACME to verify my domain ownership by placing a file on my web root. Once the certs were generated, I only had to make some small changes for them to work automatically on https://discuss.pixls.us (and to get picked up automatically when I renew the certs within 90 days).
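For the curious, that domain-ownership check (ACME's http-01 challenge) boils down to publishing a token the CA hands you at a well-known URL under the web root. The official client automates all of this; the following is only a hypothetical Python sketch of the file-placement step, with made-up function and argument names:

```python
import os

def place_http01_challenge(webroot, token, key_authorization):
    """Write an ACME http-01 challenge response where the CA will fetch it:
    http://<domain>/.well-known/acme-challenge/<token>
    Toy sketch: a real client also derives key_authorization from the token
    and the account key, and cleans the file up after validation."""
    challenge_dir = os.path.join(webroot, ".well-known", "acme-challenge")
    os.makedirs(challenge_dir, exist_ok=True)
    path = os.path.join(challenge_dir, token)
    with open(path, "w") as f:
        f.write(key_authorization)
    return path
```

The CA then makes a plain HTTP request for that URL; if the expected content comes back, you have proven control of the domain and the certificate can be issued.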

I still had to manually copy/paste the certs into cpanel for https://pixls.us, though. Not automated (or elegant) but it works and only takes an extra moment to do.

#waiting4bassel Released

“Waiting…”, a poetry book by Bassel’s wife, Noura Ghazi Safadi, is now available: waiting4bassel.cc.

December 12, 2015

Emacs rich-text mode: coloring and styling plain text

I use emacs a lot for taking notes, during meetings, while watching lectures in a MOOC, or while researching something.

But one place where emacs falls short is highlighting. For instance, if I paste a section of something I'm researching and then want to add a comment about it, to differentiate the pasted part from my added comments I have to resort to horrible hacks like "*********** My comment:". It's like the stuff Outlook users put in emails because they can't figure out how to quote.

What I really want is a simple rich-text mode, where I can highlight sections of text by changing color or making it italic, bold, underlined.

Enter enriched-mode. Start it with M-x enriched-mode and then you can apply some styles with commands like M-o i for italic, M-o b for bold, etc. These styles may or may not be visible depending on the font you're using; for instance, my font is already bold and emacs isn't smart enough to make it bolder, the way some programs are. So if one style doesn't work, try another one.

Enriched mode will save these styles when you save the file, with a markup syntax like <italic>This text is in italic.</italic> When you load the file, you'll just see the styles, not the markup.


But they're all pretty subtle. I still wanted colors, and none of the documentation tells you much about how to set them.

I found a few pages saying that you can change the color of text in an emacs buffer using the Edit menu, but I hide emacs's menus since I generally have no use for them: emacs can do everything from the keyboard, one of the things I like most about it, so why waste space on a menu I never use? I do that like this:

(tool-bar-mode 0)
(menu-bar-mode 0)

It turns out that although the right mouse button just extends the selection, Control-middleclick gives a context menu. Whew! Finally a way to change colors! But it's not at all easy to use: Control-middleclick, mouse over Foreground Color, slide right to Other..., click, and the menu goes away and now there's a prompt in the minibuffer where you can type in a color name.

Colors are saved in the file with a syntax like: <x-color><param>red</param>This text is in red.</x-color>

All that clicking is a lot of steps, and requires taking my hands off the keyboard. How do I change colors in an easier, keyboard driven way? I drew a complete blank with my web searches. A somewhat irritable person on #emacs eventually hinted that I should be using overlays, and I eventually figured out how to set overlay colors ((overlay-put (make-overlay ...)) turned out to be the way to do that) but it was a complete red herring: enriched-mode doesn't pay any attention to overlay colors. I don't know what overlays are useful for, but it's not that.

But in emacs, you can find out what's bound to a key with describe-key. Maybe that works for mouse clicks too? I ran describe-key, held down Control, clicked the middle button -- the context menu came up -- then navigated to Foreground Color and Other... and discovered that it's calling (facemenu-set-foreground COLOR &optional START END).

Binding to keys

Finally, a function I can bind to a key! COLOR is just a string, like "red". The documentation implies that START and END are optional, and that the function will apply to the selected region if there is one. But in practice, if you don't specify START and END, nothing happens, so you have to specify them. (region-beginning) and (region-end) work if you have a selected region.

Similarly, I learned that Face->italic from that same menu calls (facemenu-set-italic), and likewise for bold, underline etc. They work on the selected region.

But what if there's no region defined? I decided it might be nice to be able to set styles for the current line, without selecting it first. I can use (line-beginning-position) and (line-end-position) for START and END. So I wrote a wrapper function. For that, I didn't want to use specific functions like (facemenu-set-italic); I wanted to be able to pass a property like "italic" to my wrapper function.

I found a way to do that: (put-text-property START END 'face 'italic). But that wasn't quite enough, because put-text-property replaces the existing properties; you can't make something both italic and bold. To add a property without removing existing ones, use (add-text-properties START END (list 'face 'italic)).

So here's the final code that I put in my .emacs. I was out of excuses to procrastinate, and my enriched-mode bindings worked fine for taking notes on the project which had led to all this procrastination.

;; Text colors/styles. You can use this in conjunction with enriched-mode.

;; rich-style will affect the style of either the selected region,
;; or the current line if no region is selected.
;; style may be an atom indicating a rich-style face,
;; e.g. 'italic or 'bold, using
;;   (put-text-property START END PROPERTY VALUE &optional OBJECT)
;; or a color string, e.g. "red", using
;;   (facemenu-set-foreground COLOR &optional START END)
;; or nil, in which case style will be removed.
(defun rich-style (style)
  (let* ((start (if (use-region-p)
                    (region-beginning) (line-beginning-position)))
         (end   (if (use-region-p)
                    (region-end)  (line-end-position))))
    (cond
     ((null style)      (set-text-properties start end nil))
     ((stringp style)   (facemenu-set-foreground style start end))
     (t                 (add-text-properties start end (list 'face style))))))

(defun enriched-mode-keys ()
  (define-key enriched-mode-map "\C-ci"
    (lambda () (interactive)    (rich-style 'italic)))
  (define-key enriched-mode-map "\C-cB"
    (lambda () (interactive)    (rich-style 'bold)))
  (define-key enriched-mode-map "\C-cu"
    (lambda () (interactive)    (rich-style 'underline)))
  (define-key enriched-mode-map "\C-cr"
    (lambda () (interactive)    (rich-style "red")))
  ;; Repeat for any other colors you want from rgb.txt

  (define-key enriched-mode-map (kbd "C-c ")
    (lambda () (interactive)    (rich-style nil))))
(add-hook 'enriched-mode-hook 'enriched-mode-keys)

How to report a bug

One of the coolest things you get when working with open-source software is the possibility to report bugs to the developers, and to follow the progress until they get fixed. Most, if not all, open-source software has, somewhere, a bug tracker, which is an online application where you open such a bug report (sometimes called an "issue")...

Second Animation Beta

With lots of bug fixes. This is still based on the stable 2.9 version of Krita, though, not on what will become Krita 3.0. But there are a lot of crash fixes, bug fixes and improvements, and the following operations now work on animations as well:

  • Merge Down
  • Merge multiple layers
  • Flatten Layer
  • Flatten Image
  • Convert Image/Layer Color Space
  • Crop Image/Layer
  • Scale Image/Layer
  • Rotate Image/Layer
  • Shear Image/Layer

Nathan Lovato has made a nice introduction video:

(And don’t forget to support his kickstarter!, now close to reaching the first stretch goal.)

For Ubuntu Linux, you can get the second animation beta through the Krita Lime repositories. Just choose ‘krita-animation-testing’ package:

sudo add-apt-repository ppa:dimula73/krita
sudo apt-get update
sudo apt-get install krita-animation-testing

Packages for Windows:

For Windows, two packages are available: 64-bit, 32-bit:

You can download the zip files and just unpack them, for instance on your desktop and run. You might get a warning that the Visual Studio 2012 Runtime DLL is missing: you can download the missing dll here. You do not need to uninstall any other version of Krita before giving these packages a try!

User manuals and tutorials:

December 11, 2015

Game Art Quest videos!

Here’s Nathan’s overview of the Instant Preview beta:

And the animation toolset:

December 10, 2015

New Cantarell Maintainer

GNOME’s default UI typeface Cantarell has gained a new maintainer, Nikolaus Waxweiler. Nikolaus was on a holy crusade to improve the state of text rendering on Linux by improving FreeType and lobbying for changes in different projects. While he continues those efforts, bug reports hinted (pun intended) that GNOME’s font rendered worse as FreeType improved, so he went on to investigate why. It turned out that Cantarell had many metrics-related issues and that its development was quite stagnant.

Cantarell with properly defined Blue Zones

The process of making fonts look good even on our crappy LoDPI screens is commonly called hinting, and it requires precision. Cantarell ships as an .otf font, i.e. an OpenType font with PostScript-flavored outlines. Hinting .otf fonts works differently from hinting common TrueType (.ttf) fonts. You define several horizontal snapping zones, also called blue zones (descender, x-height, capital height, ascender height, etc.), so that they match your design. That means the outlines you are designing must, as a general rule, be placed precisely within these blue zones, or the hinting algorithm will ignore them. Blue zones must be constructed to contain everything they should contain. The idea is that a well-designed typeface is consistent and regular enough that coarse blue zones describe the design well. The hinting algorithm of the font design application will then place stem information according to those blue zones, among other considerations. For final rendering, glyphs are snapped to those horizontal blue zones, meaning they are snapped only on the Y-axis. Think ClearType.
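The Y-axis snapping idea can be illustrated with a tiny sketch: a vertical outline coordinate that falls within some tolerance of a blue-zone boundary gets pulled onto it, while everything else is left alone. This is a deliberate over-simplification of what a real hinting engine does, and the zone values below are made up for illustration:

```python
def snap_to_blue_zones(y, zones, tolerance):
    """Snap a vertical outline coordinate (in font units) to the nearest
    blue-zone edge if it lies within tolerance; otherwise leave it alone.
    zones is a list of (bottom, top) pairs. Toy illustration only."""
    best, best_dist = y, tolerance + 1
    for bottom, top in zones:
        for edge in (bottom, top):
            d = abs(y - edge)
            if d < best_dist:
                best, best_dist = edge, d
    return best if best_dist <= tolerance else y
```

This is exactly why precision matters: a point sitting just outside the tolerance band of its intended zone will not be snapped at all, which is the kind of off-by-one-or-two problem described below.

```
```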

Cantarell was full of off-by-one-or-two errors and technical don’t-do-thats, diacritics were inconsistent, and the Cyrillics still need a look-over. The bold face was in an even poorer state. Back in June 2013, Adobe contributed a new high-quality OpenType/PostScript-flavor hinting engine to FreeType. The problems were only magnified because the new engine actually takes hinting information seriously and will spit out garbage when the font designer isn’t careful.

Nikolaus has cleaned up the fonts considerably by fixing the blue zones, bringing the outline precision within them, and addressing numerous other problems. You might also notice that letters like bdfklh are a bit taller, for a more harmonious look. The font should display consistently at all sizes now.

Oh, by the way: FreeType 2.6.2 brings more user-visible changes. If you are on a rolling-release distribution, you might have noticed them already. If you wish to read up more on those changes, Nikolaus wrote a lengthy article about the changes and future plans on freetype.org.

For a Cantarell 0.1.0 release we plan to have all accented glyphs fixed. Nikolaus has finished a first pass at diacritics and is now looking for testers. Anyone who deals with diacritics in his/her language, especially central European people, please get the .otf fonts from the git repo and report bugs to the GNOME bug tracker.

Do note that Nikolaus didn’t just dive into maintainership, but wrote most of this post. My incentives to get him set up a blog and post on Planet GNOME have been fruitless so far.

Game Art Quest 150% funded

Here’s Nathan with a new update:

I’m back with a new batch of tutorials for you! As promised, here are the overviews of the Beta instant preview and animation toolset in Krita.

The next batch of tutorials will come at the end of the week, with the 3rd part of the rocks painting series. And another video will focus on creating life bars, an essential UI element in many game genres, in Krita.

I’m delaying the video about the tangent brush engine. I’ve written the overview, but I want to add 2 examples using Krita and Blender to show you how to use it in practice. It’s all coming out next Tuesday!

The campaign is now over 150% funded! Less than €2000 to go and we get to the first stretch goal. Pretty cool, isn’t it? I’m going to do my best to get there, and beyond!

The Linux Vendor Firmware Service Welcomes Dell

I’m finally able to talk about one of the large vendors who have been trialing the LVFS service for the last few months. Dell have been uploading embargoed UEFI firmware files with metadata for a while, testing the process and the workflow ready for upcoming new models. Mario (Dell) and myself (Red Hat) have been working on fixing all the issues that pop up on real hardware and making the web service both secure and easy to use.

Screenshot from 2015-12-10 08-43-08

The Dell Edge Gateway will be available for purchase soon. When it goes on sale, firmware updates in Linux will work out of the box. I’ve been told that Dell are considering expanding LVFS support to all new models supporting UEFI updates. In order to prioritize which models to work on first, I’ve been asked to share this anonymous survey on what Dell hardware people are using on Linux, and to gauge whether people actually care about being able to upgrade the firmware easily in Linux.

In November, 224 firmware files were installed onto client systems using fwupd. At the moment to update the firmware metadata you need to manually click the refresh button in the updates page, which so far 40,000 people have done. Given that the ColorHug hardware is the only released hardware with firmware on the LVFS, the 224 downloads is about what I expected. When we have major vendors like Dell (and other vendors I can’t talk about yet) shipping real consumer hardware with UEFI update capability the number of files provided should go up by orders of magnitude.

For Fedora 24 we’ll be downloading the firmware metadata automatically (rather than requiring a manual refresh in the updates panel) and we’ve been using the Fedora 23 users as a good way of optimizing the service so we know we can handle the load when we get hundreds of thousands of automatic requests a month. Fedora 24 will also be the first release able to do updates on DFU USB devices, and also the first release with system upgrade capabilities inside GNOME Software so it’s quite exciting from my point of view.

With Dell on board, I’m hoping it will give some of the other vendors enough confidence in the LVFS to talk about distributing their own firmware in public. The LVFS is something I run for all distributions free of charge, but of course Red Hat pays for my time to develop and run the service. I’m looking forward to working with more Red Hat partners and OpenHardware vendors adding even more firmware for even more types of device in the future.

December 09, 2015

Krita 2.9.10

The tenth bugfix release already! There are quite a few fixes, though the focus these days is really on getting Krita 3.0 ready for the first alpha release. Of particular interest to owners of a system with an AMD processor is the option to disable vectorization support. That might improve performance, but only for users of AMD systems! We’ve also got a fix by a new contributor, Nicholas LaPointe!

Here’s the full, unordered, list:

  • Fix crash in artistic text tool selection (bug 354907)
  • Fix saving tags: use the UTF-8 codec to save the tags instead of the locale codec (bug 356306)
  • No longer allow users to save 16 bit/channel linear gamma sRGB files to PNG without a profile
  • Do not crash when scaling down an image if the scaling factor gets too close to 0 (bug 356156)
  • Add a basic storyboard template
  • Fix generating the .kra and .ora thumbnail (bug 355884)
  • Fix loading in Photoshop of some PSD files saved from Krita (bug 355110)
  • Add an option to disable the vectorization speed up. This is for broken AMD processors.
  • Add an option to log OpenGL calls for debugging purposes
  • Remember the last-used profile when importing an untagged 16 bit/channel PNG image
  • Fix a number of import/export filters that reported the wrong error code after the user pressed cancel. Patch by Nicholas LaPointe, thanks!
  • Fix a rare crash that could happen during slow operations (bug 352918)
  • Fix an even rarer crash that could happen when recalculating the image under some circumstances. (bug 353043)
  • Fix a crash when switching sub-windows after removing a layer (bug 355205)
  • Improve memory usage when saving images by no longer creating a big image and then scaling it down for the thumbnail
  • Make the small color selector consistent in color layout with other color selectors (bug 353505)
  • Fix a crash that occasionally happened when working with multiple images (bug 354975)
  • Fix a crash when using painting assistants (bug 353152)
  • Fix a race condition that could happen during complex operations (bug 353638)
  • Fix a crash in the shortcut system (bug 345562)
  • Restore the window correctly after going to canvas-only and back (bug 352018)

There are packages for Ubuntu Linux, Windows and OSX in the usual place! Steam users will be able to get the new packages shortly through the beta channel, and Stuart is also working on packages from the Animation branch. And speaking of animation… Later this week, we’ll probably have new animation branch packages for Ubuntu Linux and Windows!

Announcing Libre Graphics magazine issue 2.4, Capture


We’re very pleased to announce the release of issue 2.4 of Libre Graphics magazine.

This issue looks at Capture, the act of encompassing, emulating and encapsulating difficult things, subtle qualities. Through a set of articles we explore capture mechanisms, memory, archiving and preservation of volatile digital information, physicality and aestheticization of data.

Capture is the fourth and final issue in volume two of Libre Graphics magazine. Libre Graphics magazine is a print publication devoted to showcasing and promoting work created with Free/Libre Open Source Software. We accept work about or including artistic practices which integrate Free, Libre and Open software, standards, culture, methods and licenses.

We invite you to buy the print edition of the issue, download the PDF or browse through the source files. We invite everyone to download, view, write, pull, branch and otherwise engage.

This issue features pieces and contributions by Raphael Bastide, Antonio Roberts, Eric Schrijver, Birgit Bachler, Walter Langelaar, Stéphanie Vilayphiou, Scandinavian Institute for Computational Vandalism, Sebastian Schmieg, Kenneth Goldsmith, Robert M Ochshorn, Jessica Fenlon, Anna Carreras, Carles Domènech and Mariona Roca.

December 08, 2015

Contents Apps Hackfest 2015

As you might already have noticed from the posts on Planet GNOME, and can find again on the hackfest's page, we spent some time in the MediaLab Prado discussing and hacking on Content Apps.


Following discussions about Music's state, I did my bit trying to gather more contributors by porting it to grilo 0.3, and thus bringing it back into the default jhbuild target.


I made some progress on Videos' "series grouping" feature. Loads of backend code written, but not much in the way of UI for now. We however made some progress discussing said UI with Allan.

I also took the opportunity to fix a few low-hanging fruit^Wbugs.


This is where the majority of my energy went. After getting a new enough version of LibreOffice going on my machine (Fedora users, that lives in rawhide only right now), no thanks to COPR, I tested Pranav's LibreOfficeKit integration into gnome-documents, after Cosimo rebased it.

You can test it now by checking out the wip/lokdocview-rebase branch of gnome-documents, grabbing the above mentioned version of LibreOffice, and running:

LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/lib64/libreoffice/program/  gjs org.gnome.Documents

After a number of fixes, and bugs filed in the Document Foundation bugzilla, we should be able to land this so that you can preview and edit word processing documents, presentations and spreadsheets without going through the heavy PDF preview.

A picture, which doubles the length of my blog post

And the side-effect of this work is that we can start adding new "views" to the application without too much trouble, like, say, an epub view.


Many thanks to the GNOME Foundation for sponsoring my travel, the MediaLab Prado for hosting us, and Allan and Florian for organising the hackfest.

December 07, 2015

Interview with Jack the Vulture

Grinspitter Portrait

Could you tell us something about yourself?

Hi! My name is Crystal Snyder, but most people call me Jack. I’m 22 years old, I’m from New Jersey, USA. I have an associate’s degree in Studio Art but digital painting and drawing is my main focus. Animation and nature are my biggest inspirations.

Do you paint professionally, as a hobby artist, or both?

Right now I’m just a hobby artist though I would like to work professionally one day.
Dragon Character Design

What genre(s) do you work in?

I’m kind of all over the place. Lately I’ve been drawing a lot of fanart. I know some artists look down on it, but for me it’s a fun way to interact with the community of fans and explore ideas that I have. It’s fun! Sometimes we need a little fun. And I get to make other people happy, which is the best part. When I’m not doing that, I would say creatures and fantasy creatures. I love creature design, though I’m a beginner at it. It’s one of my favorite things to do creatively! Whenever I see a creature that inspires me I get excited and think “I can use that!”

Whose work inspires you most — who are your role models as an artist?

Chris Sanders and Nico Marlet come to mind immediately. Chris Sanders has a beautiful and distinct drawing style, and I adore his storytelling. Nico Marlet’s character and creature designs, particularly his work on movies like How to Train Your Dragon and Kung Fu Panda, have been a huge influence on me. They are beautiful to look at and very detailed while remaining very sketchy. I like art where I can see the artist’s process and lines rather than something super polished. David Revoy has been a huge influence and help in the open source painting world. He’s a phenomenal artist and I am definitely a fan of his Pepper and Carrot comic!
Jack The Vulture

How and when did you get to try digital painting for the first time?

Not counting scribbling in MS Paint as a kid, I took a Graphic Arts course when I was 14. I had no idea what to expect, but I learned how to use the Adobe Creative Suite, and they introduced tablets and digital painting to me. I took to it immediately and asked my parents for a tablet for Christmas. Before I got a tablet, I used Gimp to color sketches. My dad was and still is an avid Linux user, and he was the first to introduce me to open source programs.

What makes you choose digital over traditional painting?

So much more freedom. To experiment, to make mistakes, to change things around, to try whatever you can think up without having to make the journey to an art supply store. Especially when you don’t have the money to buy all those paints and canvases. Also, digital art has its own look, or a collection of looks really; digital art is so varied. But like any medium, digital art has its own charm to me. I like seeing digital brush strokes as much as I like seeing oil paint strokes. I think it’s a beautiful medium with lots of possibilities. It’s also very accessible. For me, as long as I have a tablet and a computer, I can create anything I am willing to work to create, and I won’t run out of digital canvases.

How did you find out about Krita?

Probably about 7 or 8 years ago, when I was just starting to learn digital art, my dad showed it to me.

What was your first impression?

I wasn’t extremely impressed, having been taught only Photoshop and really not knowing enough about digital art to have any worthy opinions about art programs. I barely remember what Krita was like back then. But over the years, I liked collecting as many free digital art programs as I could get my hands on. I eventually checked back in on Krita and saw that it was still in development. The tools looked exciting. I don’t think I remember exactly but I think back then it wasn’t yet available on Windows, which was all I had at the time. I waited until it was available, started using it, and never really looked back.

What do you love about Krita?

So much. I don’t only use this program because it’s free, that’s for sure. I bought Photoshop in college and all but abandoned it for Krita as my main painting application. The navigation is one of my favorite things. How easy it is to move around the canvas, rotate, scale my brush, open my favorite brushes with just a click of my pen, and continue painting without having to take my hands off my tablet and hit extra keys makes almost every other program I’ve used feel clunky by comparison. The program is also very customizable, there are so many brush engines to play with, and new features are being worked on all the time. It develops very fast; there’s always something to look forward to. The developers actually care about what the community wants, and it’s focused on a great painting experience. I love that. I love that our opinions as users are so valued, and I love how dedicated the developers are to making Krita a wonderful professional experience.
Mountain Goat Creature

What do you think needs improvement in Krita? Is there anything that
really annoys you?

It’s actually hard for me to tell since my current computer is not very fast at all, but Krita still feels pretty slow sometimes with large brushes and canvases. Though I know that is being worked on and I’m excited to see the improvements! And hopefully a faster computer will help me.

What sets Krita apart from the other tools that you use?

Customization, navigation, development speed, and developers who care about the needs and wants of the painting community. Krita feels like it was made for painters. It feels like it was made to accommodate anyone’s style. I love that. Photoshop never gave me that. Painter makes me feel like I’m being pushed into a “real media” box. SAI doesn’t have enough features for me. Krita takes the best of all these programs and gives it to me in one package. I feel like I can do anything with it. I’m also very excited for the animation feature!

If you had to pick one favourite of all your work done in Krita so far, what would it be, and why?

It’s actually very hard for me to choose favorites. I draw and paint a lot but rarely work on big projects. Sometimes why I like a picture is based on the emotion I felt I expressed, sometimes it’s based on how successful I think my technical skill was. Right now it’s probably a portrait I did of a dragon species I designed. I spent time on her scales and I like the lighting. I’m really bad at naming my artwork, so it doesn’t have a proper title.

What techniques and brushes did you use in it?

I don’t really remember, but it was probably my usual workflow: sketch, color under the sketch, use layer modes to achieve the desired lighting, paint over the sketch, clean up, etc. It’s different every time. Which brushes I’ll use really depends on the mood I’m in.

Where can people see more of your work?

My DeviantArt http://jackthevulture.deviantart.com/ is probably the best place to view my art.

Anything else you’d like to share?

I really just want to thank everyone working on Krita for their hard work on this incredible program. You make so much possible, especially for people who can’t afford “industry standard” software. But Krita never feels like an alternative to paid programs; I use it because I love it. It is its own incredible software that happens to be free and open source. Thank you for all you do.

December 04, 2015

Distclean part 2: some useful zsh tricks

I wrote recently about a zsh shell function to run make distclean on a source tree even if something in autoconf is messed up. In order to save any arguments you've previously passed to configure or autogen.sh, my function parsed the arguments from a file called config.log.

But it might be a bit more reliable to use config.status -- I'm guessing this is the file that make uses when it finds it needs to re-run autogen.sh. However, the syntax in that file is more complicated, and parsing it taught me some useful zsh tricks.

I can see the relevant line from config.status like this:

$ grep '^ac_cs_config' config.status
ac_cs_config="'--prefix=/usr/local/gimp-git' '--enable-foo' '--disable-bar'"

--enable-foo --disable-bar are options I added purely for testing. I wanted to make sure my shell function would work with multiple arguments.

Ultimately, I want my shell function to call autogen.sh --prefix=/usr/local/gimp-git --enable-foo --disable-bar. The goal is to end up with $args being a zsh array containing those three arguments. So I'll need to edit out those quotes and split the line into an array.

Sed tricks

The first thing to do is to get rid of that initial ac_cs_config= in the line from config.status. That's easy with sed:

$ grep '^ac_cs_config' config.status | sed -e 's/ac_cs_config=//'
"'--prefix=/usr/local/gimp-git' '--enable-foo' '--disable-bar'"

But since we're using sed anyway, there's no need to use grep to get the line: we can do it all with sed. First try:

sed -n '/^ac_cs_config/s/ac_cs_config=//p' config.status

Search for the line that starts with ac_cs_config (^ matches the beginning of a line); then replace ac_cs_config= with nothing, and p print the resulting line. -n tells sed not to print anything except when told to with a p.

But it turns out that if you give a sed substitution a blank pattern, it uses the last pattern it was given. So a more compact version, using the search pattern ^ac_cs_config, is:

sed -n '/^ac_cs_config=/s///p' config.status

But there's also another way of doing it:

sed '/^ac_cs_config=/!d;s///' config.status

! after a search pattern matches every line that doesn't match the pattern. d deletes those lines. Then for lines that weren't deleted (the one line that does match), do the substitution. Since there's no -n, sed will print all lines that weren't deleted.

I find that version more difficult to read. But I'm including it because it's useful to know how to chain several commands in sed, and how to use ! to search for lines that don't match a pattern.

You can also use sed to eliminate the double quotes:

sed '/^ac_cs_config=/!d;s///;s/"//g' config.status
'--prefix=/usr/local/gimp-git' '--enable-foo' '--disable-bar'

But it turns out that zsh has a better way of doing that.

Zsh parameter substitution

I'm still relatively new to zsh, but I got some great advice on #zsh. The first suggestion:

sed -n '/^ac_cs_config=/s///p' config.status | IFS= read -r; args=( ${(Q)${(z)${(Q)REPLY}}} ); print -rl - $args

I'll be using the final print -rl - $args for all these examples: it prints an array variable with one member per line. For the actual distclean function, of course, I'll be passing the variable to autogen.sh, not printing it out.

First, let's look at the heart of that expression: the args=( ${(Q)${(z)${(Q)REPLY}}} ).

The heart of this is the expression ${(Q)${(z)${(Q)x}}}. The zsh parameter substitution syntax is a bit arcane, but each of the parenthesized letters does some operation on the variable that follows.

The first (Q) strips off a level of quoting. So:

$ x='"Hello world"'; print $x; print ${(Q)x}
"Hello world"
Hello world

(z) splits an expression and stores it in an array. But to see that, we have to use print -l, so array members will be printed on separate lines.

$ x="a b c"; print -l $x; print "....."; print -l ${(z)x}
a b c
.....
a
b
c

Zsh is smart about quotes, so if you have quoted expressions it will group them correctly when assigning array members:

$ x="'a a' 'b b' 'c c'"; print -l $x; print "....."; print -l ${(z)x}
'a a' 'b b' 'c c'
.....
'a a'
'b b'
'c c'

So let's break down the larger expression: this is best read from right to left, inner expressions to outer.

${(Q) ${(z) ${(Q) x }}}
   |     |     |   \
   |     |     |    The original expression, 
   |     |     |   "'--prefix=/usr/local/gimp-git' '--enable-foo' '--disable-bar'"
   |     |     \
   |     |      Strip off the double quotes:
   |     |      '--prefix=/usr/local/gimp-git' '--enable-foo' '--disable-bar'
   |     \
   |      Split into an array of three items
   \
    Strip the single quotes from each array member,
    ( --prefix=/usr/local/gimp-git --enable-foo --disable-bar )

For more on zsh parameter substitutions, see the Zsh Guide, Chapter 5: Substitutions.

Passing the sed results to the parameter substitution

There's still a little left to wonder about in our expression, sed -n '/^ac_cs_config=/s///p' config.status | IFS= read -r; args=( ${(Q)${(z)${(Q)REPLY}}} ); print -rl - $args

The IFS= read -r seems to be a common idiom in zsh scripting. It takes standard input and assigns it to the variable $REPLY. IFS is the input field separator: you can split variables into words by spaces, newlines, semicolons or any other character you want. IFS= sets it to nothing. But because the input expression -- "'--prefix=/usr/local/gimp-git' '--enable-foo' '--disable-bar'" -- has quotes around it, IFS is ignored anyway.

So you can do the same thing with this simpler expression, to assign the quoted expression to the variable $x. I'll declare it a local variable: that makes no difference when testing it in the shell, but if I call it in a function, I won't have variables like $x and $args cluttering up my shell afterward.

local x=$(sed -n '/^ac_cs_config=/s///p' config.status); local args=( ${(Q)${(z)${(Q)x}}} ); print -rl - $args

That works in the version of zsh I'm running here, 5.1.1. But there are two improvements worth making. First, I've been warned that it's safer to quote the result of $(): without quotes, if you ever run the function in an older zsh, $x might end up being set only to the first word of the expression. Second, it's a good idea to put "local" in front of the variable; that way, $x won't still be set once you've returned from the function. So now we have:

local x="$(sed -n '/^ac_cs_config=/s///p' config.status)"; local args=( ${(Q)${(z)${(Q)x}}} ); print -rl - $args

You don't even need to use a local variable. For added brevity (making the function even more difficult to read! -- but we're way past the point of easy readability), you could say:

args=( ${(Q)${(z)${(Q)"$(sed -n '/^ac_cs_config=/s///p' config.status)"}}} ); print -rl - $args
or even
print -rl - ${(Q)${(z)${(Q)"$(sed -n '/^ac_cs_config=/s///p' config.status)"}}}
... but that final version, since it doesn't assign to a variable at all, isn't useful for the function I'm writing.
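As an aside (not from the original post): if you ever need the same unquote-and-split trick outside zsh, Python's shlex module does the job. A sketch, assuming the ac_cs_config line format shown above:

```python
import shlex

# The ac_cs_config line from config.status, as shown earlier:
line = "ac_cs_config=\"'--prefix=/usr/local/gimp-git' '--enable-foo' '--disable-bar'\""

# Strip the variable name and the outer double quotes (like the sed step),
value = line[len("ac_cs_config="):].strip('"')

# then undo the single quoting and split into words -- the equivalent
# of zsh's ${(Q)${(z)...}}:
args = shlex.split(value)
print(args)   # ['--prefix=/usr/local/gimp-git', '--enable-foo', '--disable-bar']
```

shlex understands shell-style quoting, so arguments containing spaces survive the split intact, just as with zsh's (z) flag.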

December 03, 2015

December Drawing Challenge Open!

You’re all invited to join John’s monthly drawing challenge again. It’s fun, it’s friendly and helps with the all-important goal of keeping drawing. This month’s theme is “Complementary”.

Here’s last month’s winner!

Waiting, by Elenav

December 02, 2015

Users Guide to High Bit Depth GIMP 2.9.2, Part 2


Part 2: Radiometrically correct editing, unbounded ICC profile conversions, and unclamped editing

This is Part 2 of a two-part guide to high bit depth editing in GIMP 2.9.2 with Elle Stone.
The first part of this article can be found here: Part 1.


  1. Using GIMP 2.9.2 for radiometrically correct editing
    1. Linearized sRGB channel values and radiometrically correct editing
    2. Using the “Linear light” option in the “Image/Precision” menu
    3. A note on interoperability between Krita and GIMP
  2. GIMP 2.9.2’s unbounded floating point ICC profile conversions (handle with care!)
  3. Using GIMP 2.9.2’s floating point precision for unclamped editing
    1. High bit depth GIMP’s unclamped editing: a whole realm of new editing possibilities
    2. If the thought of working with unclamped RGB data is unappealing, use integer precision
  4. Looking to the future: GIMP 3.0 and beyond

Radiometrically correct editing

Linearized sRGB channel values and radiometrically correct editing

One goal for GIMP 2.10 is to make it easy for users to produce radiometrically correct editing results. “Radiometrically correct editing” reflects the way light and color combine out there in the real world, and so requires that the relevant editing operations be done on linearized RGB.

Like many commonly used RGB working spaces, the sRGB color space is encoded using perceptually uniform RGB. Unfortunately colors simply don’t blend properly in perceptually uniform color spaces. So when you open an sRGB image using GIMP 2.9.2 and start to edit, in order to produce radiometrically correct results, many GIMP 2.9 editing operations will silently linearize the RGB channel information before the editing operation is actually done.

GIMP 2.9.2 editing operations that automatically linearize the RGB channel values include scaling the image, Gaussian blur, UnSharp Mask, Channel Mixer, Auto Stretch Contrast, decomposing to LAB and LCH, all of the LCH blend modes, and quite a few other editing operations.
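For reference, the linearization these operations perform follows the standard sRGB transfer function. A minimal Python sketch (this is the well-known published formula, not GIMP's actual code):

```python
def srgb_to_linear(c):
    """Decode one sRGB-encoded channel value (0..1) to linear light."""
    if c <= 0.04045:
        return c / 12.92
    return ((c + 0.055) / 1.055) ** 2.4

def linear_to_srgb(c):
    """Encode one linear-light channel value (0..1) back to sRGB."""
    if c <= 0.0031308:
        return c * 12.92
    return 1.055 * c ** (1 / 2.4) - 0.055

# Perceptual "middle gray" (0.5) corresponds to only ~21% linear light:
print(round(srgb_to_linear(0.5), 3))   # 0.214
```

Conceptually, an operation like Gaussian blur decodes each channel to linear light, does its work there, and re-encodes the result.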

GIMP 2.9.2 editing operations that ought to, but don’t yet, linearize the RGB channels include the all-important Curves and Levels operations. For Levels and Curves, to operate on linearized RGB, change the precision to “Linear light” and use the Gamma hack. However, the displayed histogram will be misleading.

The GIMP 2.9.2 editing operations that automatically linearize the RGB channel values do this regardless of whether you choose “Perceptual gamma (sRGB)” or “Linear light” precision. The only thing that changes when you switch between the “Perceptual gamma (sRGB)” and “Linear light” precisions is how colors blend when painting and when blending different layers together.

(Well, what the Gamma hack actually does changes when you switch between the “Perceptual gamma (sRGB)” and “Linear light” precisions, but the way it changes varies from one operation to the next, which is why I advise to not use the Gamma hack unless you know exactly what you are doing.)

Using the “Linear light” option in the “Image/Precision” menu

normal-blend-perceptual-vs-linear-cyan-background
Large soft disks painted on a cyan background.
  1. Top row: Painted using “Perceptual gamma (sRGB)” precision. Notice the darker colors surrounding the red and magenta disks, and the green surrounding the yellow disk: those are “gamma” artifacts.
  2. Bottom row: Painted using “Linear Light” precision. This is how light waves blend to make colors out there in the real world.
normal-blend-perceptual-vs-linear
Circles painted on a red background.
  1. Top row: Painted using “Perceptual gamma (sRGB)” precision. The dark edges surrounding the paint strokes are “gamma” artifacts.
  2. Bottom row: Painted using “Linear Light” precision. This is how light waves blend to make colors out there in the real world.
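The dark-fringe artifacts in the top rows can be reproduced numerically. A small sketch using the standard sRGB transfer function (the exact colors GIMP produces will differ): averaging the encoded channel values of red and cyan gives mid gray, while averaging the light they actually represent gives a noticeably lighter gray.

```python
def srgb_to_linear(c):
    """Standard sRGB decode of one channel value in 0..1."""
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def linear_to_srgb(c):
    """Standard sRGB encode of one linear channel value in 0..1."""
    return c * 12.92 if c <= 0.0031308 else 1.055 * c ** (1 / 2.4) - 0.055

def blend_encoded(a, b):
    """Average encoded channel values (like 'Perceptual gamma' blending)."""
    return tuple((x + y) / 2 for x, y in zip(a, b))

def blend_linear(a, b):
    """Average in linear light, then re-encode (the radiometric blend)."""
    return tuple(linear_to_srgb((srgb_to_linear(x) + srgb_to_linear(y)) / 2)
                 for x, y in zip(a, b))

red, cyan = (1.0, 0.0, 0.0), (0.0, 1.0, 1.0)
print(blend_encoded(red, cyan))   # (0.5, 0.5, 0.5): mid gray
print(tuple(round(c, 3) for c in blend_linear(red, cyan)))
# (0.735, 0.735, 0.735): the radiometric blend is lighter, which is why
# the encoded blend reads as a dark fringe between the two colors.
```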

In GIMP 2.9.2, when using the Normal, Multiply, Divide, Addition, and Subtract painting and Layer blending:

  • For radiometrically correct Layer blending and painting, use the “Image/Precision” menu to select the “Linear light” precision option.
  • When “Perceptual gamma (sRGB)” is selected, layers and colors will blend and paint like they blend in GIMP 2.8, which is to say there will be “gamma” artifacts.

The LCH painting and Layer blend modes will always blend using Linear light precision, regardless of what you choose in the “Image/Precision” menu.

What about all the other Layer and painting blend modes? The concept of “radiometrically correct” doesn’t really apply to those other blend modes, so choosing between “Perceptual gamma (sRGB)” and “Linear light” depends entirely on what you, the artist or photographer, actually want to accomplish. Switching back and forth is time-consuming so I tend to stay at “Linear light” precision all the time, unless I really, really, really want a blend mode to operate on perceptually uniform RGB.

A note on interoperability between Krita and GIMP

Many digital artists and photographers are switching to linear gamma image editing. Let’s say you use Krita for digital painting in a true linear gamma sRGB profile, specifically the “sRGB-elle-V4-g10.icc” profile that is supplied with recent Krita installations, and you want to export your image from Krita and open it with GIMP 2.9.2.

Upon opening the image, GIMP will automatically detect that the image is in a linear gamma color space, and will offer you the option to keep the embedded profile or convert to the GIMP built-in sRGB profile. Either way, GIMP will automatically mark the image as using “Linear light” precision.

For interoperability between Krita and GIMP, when editing a linear gamma sRGB image that was exported to disk by Krita:

  1. Upon importing the Krita-exported linear gamma sRGB image into GIMP, elect to keep the embedded “sRGB-elle-V4-g10.icc” profile.
  2. Keep the precision at “Linear light”.
  3. Then assign the GIMP built-in Linear RGB profile (“Image/Color management/Assign”). The GIMP built-in Linear RGB profile is functionally exactly the same as Krita’s supplied “sRGB-elle-V4-g10.icc” profile (as are the GIMP built-in sRGB profile and Krita’s “sRGB-elle-V4-srgbtrc.icc” profile).

Once you’ve assigned the GIMP built-in Linear RGB profile to the imported linear gamma sRGB Krita image, then feel free to change the precision back and forth between “Linear light” and “Perceptual gamma (sRGB)”, as suits your editing goal.

When you are finished editing the image that was imported from Krita to GIMP:

  1. Convert the image to one of the “Perceptual gamma (sRGB)” precisions (“Image/Precision”).
  2. Convert the image to the Krita-supplied “sRGB-elle-V4-g10.icc” profile (“Image/Color management/Convert”).
  3. Export the image to disk and import it into Krita.

If your Krita image is in a color space other than sRGB, I would suggest that you simply not try to edit non-sRGB images in GIMP 2.9.2 because many GIMP 2.9.2 editing operations do depend on hard-coded sRGB color space parameters.

GIMP 2.9.2’s unbounded floating point ICC profile conversions (handle with care!)

Compared to most other RGB color spaces, the sRGB color space gamut is very small. When shooting raw, it’s incredibly easy to capture colors that exceed the sRGB color space.

srgb-inside-prophoto-3-views
The sRGB (the gray blob) and ProPhotoRGB (the multicolored wire-frame) color spaces as seen from different viewing angles inside the CIELAB reference color space. (Images produced using ArgyllCMS and View3DScene).

Every time you convert saturated colors from larger gamut RGB working spaces to GIMP’s built-in sRGB working space using floating point precision, you run the risk of producing out of gamut RGB channel values. Rather than just explaining how this works, it’s better if you experiment and see for yourself:

  1. Download this 16-bit integer ProPhotoRGB png, “saturated-colors.png“.
  2. Open “saturated-colors.png” with GIMP 2.9.2. GIMP will report the color space profile as “LargeRGB-elle-V4-g18.icc” — this profile is functionally equivalent to ProPhotoRGB.
  3. Immediately change the precision to 32-bit floating point precision (“Image/Precision/32-bit floating point) and check the “Perceptual gamma (sRGB)” option.
  4. Using the Color Picker Tool, make sure the Color Picker is set to “Use info Window” in the Tools dialog. Then eye-dropper the color squares, and make sure to set one of the columns in the Color Picker info Window to “Pixel”. The red square will eye-dropper as (1.000000, 0.000000, 0.000000). The cyan square will eyedropper as (0.000000, 1.000000, 1.000000), and so on. All the channel values will be either 1.000000 or 0.000000.
  5. While still at 32-bit floating point precision, and still using the “Perceptual gamma (sRGB)” option, convert “saturated-colors.png” to GIMP’s built-in sRGB.
  6. Eyedropper the color squares again. The red square will now eyedropper as approximately (1.363299, -2.956852, -0.110389), the cyan square will eyedropper as approximately (-13.365499, 1.094588, 1.003746), and so on.
  7. For extra credit, change the precision from 32-bit floating point “Perceptual gamma (sRGB)” to 32-bit floating point “Linear light” and eye-dropper the colors again. I will leave it to you as an exercise to figure out why the eye-droppered RGB “Pixel” values change so radically when you switch back and forth between “Perceptual gamma (sRGB)” and “Linear light”.

Where did the funny RGB channel values come from? At floating point precision, GIMP uses LCMS2 to do unbounded ICC profile conversions. This allows an RGB image to be converted from the source to the destination color space without clipping otherwise out of gamut colors. So instead of clipping the RGB channels values to the boundaries of the very small sRGB color gamut, the sRGB color gamut was effectively “unbounded”.

When you do an unbounded ICC profile conversion from a larger color space to sRGB, all the otherwise out of gamut colors are encoded using at least one sRGB channel value that is less than zero. And you might get one or more channel values that are greater than 1.0. Figure 11 below gives you a visual idea of the difference between bounded and unbounded ICC profile conversions:

Figure 11: Unbounded (unclipped floating point) and bounded (clipped integer) conversions of a very colorful red flower from the original ProPhotoRGB color space to the much smaller sRGB color space. (Images produced using ArgyllCMS and View3DScene.)

  • Top row: Unbounded (unclipped floating point) and bounded (clipped integer) conversions of a very colorful red flower from the original ProPhotoRGB color space to the much smaller sRGB color space. The unclipped flower is on the left and the clipped flower is on the right.
  • Middle and bottom rows: the unclipped and clipped flower colors in the sRGB color space. The unclipped colors are shown on the left and the clipped colors are shown on the right:
    • The gray blobs are the boundaries of the sRGB color gamut.
    • The middle row shows the view inside CIELAB looking straight down the LAB Lightness axis.
    • The bottom row shows the view inside CIELAB looking along the plane formed by the LAB A and B axes.
The unclipped sRGB colors shown on the left are all encoded using at least one sRGB channel value that is less than zero, that is, using a negative RGB channel value.

When converting saturated colors from larger color spaces to sRGB, not clipping would seem to be much better than clipping. Unfortunately a whole lot of RGB editing operations don’t work when performed on negative RGB channel values. In particular, multiplying such colors produces meaningless results, which of course applies not just to the Multiply and Divide blend modes (division and multiplications are inverse operations), but to all editing operations that involve multiplication by a color (other than gray, which is a special case).
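To make the multiplication problem concrete, here is a quick arithmetic sketch (using awk rather than GIMP itself, with an out-of-gamut channel value rounded to -0.11 for illustration):

```shell
# The Multiply blend computes "result = a * b" per channel.
# For in-gamut channels (0.0 to 1.0), multiplying always darkens:
awk 'BEGIN { printf "0.5 * 0.5 = %.4f\n", 0.5 * 0.5 }'

# For a negative, "darker than black" out-of-gamut channel,
# multiplying the value by itself produces a positive result,
# so the channel gets *brighter* instead of darker:
awk 'BEGIN { printf "-0.11 * -0.11 = %.4f\n", -0.11 * -0.11 }'
```

The sign flip is the basic reason multiplication-based editing operations produce meaningless results when performed on unbounded sRGB channel values.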

So here’s one workaround you can use to clip the out of gamut channel values: Change the precision of “saturated-colors.png” from 32-bit floating point to 32-bit integer precision (“Image/Precision/32-bit integer”). This will clip the out of gamut channel values (integer precision always clips out of gamut RGB channel values). Depending on your monitor profile’s color gamut, you might or might not see the displayed colors change appearance; on a wide-gamut monitor, the change will be obvious.

When switching to integer precision, all colors are clipped to fit within the sRGB color gamut. Switching back to floating point precision won’t restore the clipped colors.
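The effect of the integer-precision conversion on each channel amounts to a simple clamp to the [0.0, 1.0] range. Here is a sketch (using awk, not GIMP's actual code) applied to the unbounded red-square channel values from step 6:

```shell
# Clamp a single channel value to the bounded sRGB range [0.0, 1.0],
# which is effectively what converting to integer precision does.
clamp() {
    awk -v x="$1" 'BEGIN {
        if (x < 0.0) x = 0.0
        if (x > 1.0) x = 1.0
        printf "%.6f\n", x
    }'
}

clamp  1.363299    # prints 1.000000
clamp -2.956852    # prints 0.000000
clamp -0.110389    # prints 0.000000
```

Note that the clamped values (1.000000, 0.000000, 0.000000) are exactly the fully saturated sRGB red square from step 4, which is why clipping collapses all the formerly out of gamut colors onto the sRGB gamut boundary.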

As an important aside (and contrary to a distressingly popular assumption), when doing a normal “bounded” conversion to sRGB, using “Perceptual intent” does not “keep all the colors”. The regular and linear gamma sRGB working color space profiles are matrix profiles, which don’t have perceptual intent tables. When you ask for perceptual intent and the destination profile is a matrix profile, what you get is relative colorimetric intent, which clips.

Using GIMP 2.9.2’s floating point precision for unclamped editing

High bit depth GIMP’s unclamped editing: a whole realm of new editing possibilities

I’ve warned you about the bad things that can happen when you try to multiply or divide colors that are encoded using negative sRGB channel values. However, out of gamut sRGB channel values can also be incredibly useful.

GIMP 2.9.2 does provide a number of “unclamped” editing operations from which the clipping code in the equivalent GIMP 2.8 operation has been removed. For example, at floating point precision, the Levels upper and lower sliders, Unsharp Mask, Channel Mixer and “Colors/Desaturate/Luminance” do not clip out of gamut RGB channel values (however, Curves does clip). Also the Normal, Lightness, Chroma, and Hue blend modes do not clip out of gamut channel values.

Unclamped editing opens up a whole realm of new editing possibilities. Quoting from Autumn colors: An Introduction to High Bit Depth GIMP’s New Editing Capabilities:

Unclamped editing operations might sound more arcane than interesting, but especially for photographers this is a really big deal:

  • Automatically clipped RGB data produces lost detail and causes hue and saturation shifts.
  • Unclamped editing operations allow you, the photographer, to choose when and how to bring the colors back into gamut.
  • Of interest to photographers and digital artists alike, unclamped editing sets the stage for (and already allows very rudimentary) HDR scene-referred image editing.

Having used high bit depth GIMP for quite a while now, I can’t imagine going back to editing that is constrained to only using clipped RGB channel values. The Autumn colors tutorial provides a start-to-finish editing example making full use of unclamped editing and the LCH blend modes, with a downloadable XCF file so you can follow along.

If the thought of working with unclamped RGB data is unappealing, use integer precision

If working with unclamped RGB channel data is simply not something you want to do, then use integer precision for all your image editing. At integer precision all editing operations clip. This is a function of integer encoding and so happens regardless of whether the particular editing function includes or doesn’t include clipping code.

Looking to the future: GIMP 3.0 and beyond

Even though GIMP 2.10 hasn’t yet been released, high bit depth GIMP is already an amazing image editor. GIMP 3.0 and beyond will bring many more changes, including the port to GTK+3 (for GIMP 3.0), full color management for any well-behaved RGB working space (maybe by 3.2?), plus extended LCH processing with HSV strictly for use with legacy files. Also users will eventually have the ability to choose “Perceptual” encodings other than the sRGB TRC.

If you would like to see GIMP 3.0 and beyond arrive sooner rather than later, GIMP is coded, documented, and maintained by volunteers, and GIMP needs more developers. If you are not a programmer, there are many other ways you can contribute to GIMP development.

All text and images ©2015 Elle Stone, all rights reserved.

Stellarium 0.14.1 has been released

After a month of development, the Stellarium development team is proud to announce the first bugfix release in the 0.14.x series - version 0.14.1. This version fixes 13 bugs (fixes backported from version 0.15.0).

A huge thanks to our community whose contributions help to make Stellarium better!

List of changes between version 0.14.0 and 0.14.1:
- Added support for side-by-side assembly technology (LP: #1400045)
- Enhancements of the Oculars plugin: added OAG support (LP: #1354427)
- Added Belarusian translation for landscapes and sky cultures (LP: #1520303)
- Added designations for a few stars in Scorpius (LP: #1518437)
- Fixed constellation art brightness and zooming (LP: #1520783)
- Fixed resetting of the number of satellite orbit segments (LP: #1510592)
- Fixed wrong longitudes of certain outer planet moons (LP: #1509693, #1509692)
- Fixed saving settings for some View panel options (LP: #1509639)
- Fixed failure to start on Windows when invoked from a different directory (LP: #1410529)
- Fixed wrong value of ecliptic obliquity (LP: #1520792)
- Fixed segmentation fault (core dumped) while trying to update the star catalog (LP: #1514542)
- Tentative fix for 4K resolution support (GUI scaling) (LP: #1372781)

OpenHardware and code signing (update)

I posted a few weeks ago about the difficulty of providing device-side verification of firmware updates while at the same time remaining OpenHardware and thus easily hackable. The general consensus was that allowing anyone to write any kind of firmware to the device without additional authentication was probably a bad idea, even for OpenHardware devices. I think I’ve come up with an acceptable compromise I can write up as a recommendation, as per usual using the ColorHug+ as an example. For some background, I’ve sold nearly 3,000 original ColorHug devices, and in the last 4 years just three people wanted help writing custom firmware, so I hope you can see that the need to protect the majority far outweighs making the power users happy.

ColorHug+ will be supplied with a bootloader that accepts only firmware encrypted with the secret XTEA key that I’m using for my devices. XTEA is an acceptable compromise: not as strong as something like ECC, but with speed and memory requirements that are actually workable on an 8-bit microcontroller running at 6MHz with 8k of ROM. Flashing a DIY or modified firmware isn’t possible, and by the same logic flashing a malicious firmware will also not work.

To unlock the device (and so it stays OpenHardware) you just have to remove the two screws, and use a paper-clip to connect TP5 and GND while the device is being plugged into the USB port. Both lights will come on, and stay on for 5 seconds and then the code protection is turned off. This means you can now flash any home-made or malicious firmware to the device as you please.

There are downsides to unlocking; you can’t re-lock the hardware so it supports official updates again. I don’t know if this is a huge problem; flashing home-made firmware could damage the device (e.g. changing the pin mapping from input to output and causing something to get hot). If this is a huge problem I can fix CH+ to allow re-locking and fix up the guidelines, although I’m erring on unlocking being a one way operation.

Comments welcome.

December 01, 2015

Game Art Quest Kickstarter Nearly There

And that’s going to be celebrated with some free tutorial videos. Over to Nathan!

It’s Tuesday today, so it’s time for a new Krita tutorial. In this video, you will get an overview of the new features added in Krita since version 2.9.5. It covers all of the smaller, yet very useful workflow improvements brought to the application over the past few months.

There are 3 bigger features that we will focus on next week: the tangent normal brush, the beta brush preview and the animation tools.

Over the next few weeks, you will get free tutorials on both Tuesdays and Thursdays on the Gdquest Youtube channel. On Tuesdays, we will talk about application specific topics. Just like today. And on the next 3 Thursdays, we will talk about environment art, user interface and monster design. Those 3 tutorials will give you a sense of what you will find in the Game Art Quest training series.

Talking about that, there is excellent news! The Kickstarter has almost reached its goal! As I write these lines, it is 98% funded, and there are 22 days to go. It’s time to shift our focus to the first stretch goal. If we reach €8000, all of the backers will get a 2nd training series for their pledge. In other words, if we double the funding, you get double the content.


But to get there, I need your help. So many people don’t know that the campaign even exists! Please spread the word! Share the campaign on social networks, on your favorite game related forum or group… together, we can reach the first stretch goal!


Do you want to improve your game art skills? I launched a Facebook group for game artists 2 weeks ago. It is (also) called Game Art Quest. The goal is to become better artists together. Every week, you get a new game art assignment. You can submit your work in progress to the group and both give and get constructive feedback on your art.

Everyone is welcome, regardless of their skill level. The senior game artist Chris Hildenbrand, whom you know for his Inkscape and Gimp tutorials on 2dgameartguru.com, is participating. Come check it out!

November 30, 2015

third release candidate for darktable 2.0 & string freeze

we're proud to announce the third release candidate in the new feature release of darktable, 2.0~rc3.

the release notes and relevant downloads can be found attached to this git tag:
please only use our provided packages ("darktable-2.0.rc3.*" tar.xz and dmg) not the auto-created tarballs from github ("Source code", zip and tar.gz). the latter are just git snapshots and will not work! here are the direct links to tar.xz and dmg:

the checksums are:

$ sha256sum darktable-2.0~rc3.tar.xz
$ sha256sum darktable-2.0~rc3.dmg
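once the checksum values are published, "sha256sum -c" can verify your download. a sketch of the workflow, using a throwaway file in place of the real tarball (no real checksum values are reproduced here):

```shell
# stand-in file for the tarball (illustration only)
tmpfile=$(mktemp)
printf 'stand-in for darktable-2.0~rc3.tar.xz\n' > "$tmpfile"

# a release publishes lines of the form "<sha256>  <filename>";
# here we generate such a line ourselves, then check the file against it.
# "sha256sum -c" prints "<filename>: OK" and exits 0 on a match.
sha256sum "$tmpfile" > "$tmpfile.sha256"
sha256sum -c "$tmpfile.sha256"

rm -f "$tmpfile" "$tmpfile.sha256"
```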

packages for individual platforms and distros will follow shortly.

as we're closing in to the final version, we are also officially in string freeze as of now. this affects darktable, not the user manual.

the changes from rc2 include minor bugfixes, such as:

  • camera support improvements
    • add support for the Canon PowerShot G5 X
    • basic support for Olympus SP320
    • Panasonic LF1 noise profile and white balance presets
    • noiseprofiles: add Sony A77mk2
  • high-dpi fixes
  • fixed a few memleaks
  • 3:1 aspect ratio as preset in crop&rotate
  • magic lantern-style deflicker has been activated in the exposure module
  • updated translations

and the preliminary changelog as compared to the 1.6.x series can be found below.

when updating from the currently stable 1.6.x series, please bear in mind that your edits will be preserved during this process, but it will not be possible to downgrade from 2.0 to 1.6.x any more. be careful if you need darktable for production work!

happy 2.0~rc3 everyone :)

  • darktable has been ported to gtk-3.0
  • new thumbnail cache replaces mipmap cache (much improved speed, less crashiness)
  • added print mode
  • reworked screen color management (softproof, gamut check etc.)
  • removed dependency on libraw
  • removed dependency on libsquish (solves patent issues as a side effect)
  • unbundled pugixml, osm-gps-map and colord-gtk
  • text watermarks
  • color reconstruction module
  • raw black/white point module
  • delete/trash feature
  • addition to shadows&highlights
  • more proper Kelvin temperature, fine-tuning preset interpolation in WB iop
  • noiseprofiles are in external JSON file now
  • monochrome raw demosaicing (not sure whether it will stay for release, like Deflicker, but hopefully it will stay)
  • aspect ratios for crop&rotate can be added to conf (ae36f03)
  • navigating lighttable with arrow keys and space/enter
  • pdf export – some changes might happen there still
  • brush size/hardness/opacity have key accels
  • the facebook login procedure is a little different now
  • export can upscale
  • we no longer drop history entries above the selected one when leaving darkroom or switching images
  • text/font/color in watermarks
  • image information now supports GPS altitude
  • allow adding tone- and basecurve nodes with ctrl-click
  • new "mode" parameter in the export panel
  • high quality export now downsamples before watermark and frame to guarantee consistent results
  • Lua scripts can now add UI elements to the lighttable view (buttons, sliders etc …)
  • a new repository for external Lua scripts was started

November 28, 2015

Going Wild With Animation

We knew the animation plugin was something artists all over the world were really waiting for… What we hadn’t expected was the flood of cool little animations that suddenly appeared everywhere! Let’s take a look at a selection of them!

The release also helped us find some issues. First: if onion skinning doesn’t work for you, check that you’re not trying to onion skin a completely opaque layer. Yes — white is also opaque! The best setup: create a white background layer, but paint on a transparent layer above that. That’s Krita’s default setup in any case.

For some people with some combinations of graphics cards and drivers, the Instant Preview doesn’t work. There’s not much we can do about that, but we do need your reports! And finally, a couple of real bugs surfaced, and we’ll look at those next.

But here’s the animation gallery!
By Timothee Giet

Toothless dragon turning head and Horse galloping by JackTheVulture on tumblr.

Falling ball by はまの ‏@HaMoO0NoO0 on twitter.

exploding sparkles by Nahuel Belich on twitter

Dancing Gronky and did anyone say bitmap animation by SJ Bennet on twitter

Growing Tree by 雑賀屋鳶 ‏@tomB_saikaya on twitter.

walkcycle by JeffersonSN/Llama guy on twitter

walking dogman by Godzillu on twitter

by Степан Крани on youtube

by Немитько Николай on youtube

By Dileep N on youtube

Test gif of walking purple alien man by Benjamin Mitchley on twitter

mouse hiding under pillows and
bird flying on tree by Vincent Sautter on twitter

By Popescu Sorin on youtube

hand smashing block/alarm clock By Pablo Mendoza on twitter

by Laura Sulter on youtube/twitter

November 27, 2015

Getting around make clean or make distclean aclocal failures

Keeping up with source trees for open source projects, it often happens that you pull the latest source, type make, and get an error like this (edited for brevity):

$ make
cd . && /bin/sh ./missing --run aclocal-1.14
missing: line 52: aclocal-1.14: command not found
WARNING: `aclocal-1.14' is missing on your system. You should only need it if you modified `acinclude.m4' or `configure.ac'. You might want to install the `Automake' and `Perl' packages. Grab them from any GNU archive site.

What's happening is that make is set up to run ./autogen.sh (similar to running ./configure except it does some other stuff tailored to people who build from the most current source tree) automatically if anything has changed in the tree. But if the version of aclocal has changed since the last time you ran autogen.sh or configure, then running configure with the same arguments won't work.

Often, running a make distclean, to clean out all local configuration in your tree and start from scratch, will fix the problem. A simpler make clean might even be enough. But when you try it, you get the same aclocal error.

Whoops! make clean runs make, which triggers the rule that configure has to run before make, which fails.

It would be nice if the make rules were smart enough to notice this and not require configure or autogen if the make target is something simple like clean or distclean. Alas, in most projects, they aren't.

But it turns out that even if you can't run autogen.sh with your usual arguments -- e.g. ./autogen.sh --prefix=/usr/local/gimp-git -- running ./autogen.sh by itself with no extra arguments will often fix the problem.

This happens to me often enough with the GIMP source tree that I made a shell alias for it:

alias distclean="./autogen.sh && ./configure && make clean"

Saving your configure arguments

Of course, this wipes out any arguments you've previously passed to autogen and configure. So assuming this succeeds, your very next action should be to run autogen again with the arguments you actually want to use, e.g.:

./autogen.sh --prefix=/usr/local/gimp-git

Before you ran the distclean, you could get those arguments by looking at the first few lines of config.log. But after you've run distclean, config.log is gone -- what if you forgot to save the arguments first? Or what if you just forget that you need to re-run autogen.sh again after your distclean?

To guard against that, I wrote a somewhat more complicated shell function to use instead of the simple alias I listed above.

The first trick is to get the arguments you previously passed to configure. You can parse them out of config.log:

$ egrep '^  \$ ./configure' config.log
  $ ./configure --prefix=/usr/local/gimp-git --enable-foo --disable-bar

Adding a bit of sed to strip off the beginning of the command, you could save the previously used arguments like this:

    args=$(egrep '^  \$ ./configure' config.log | sed 's_^  \$ ./configure __')

(There's a better place for getting those arguments, config.status -- but parsing them from there is a bit more complicated, so I'll follow up with a separate article on that, chock-full of zsh goodness.)

So here's the distclean shell function, written for zsh:

distclean() {
    setopt localoptions errreturn

    args=$(egrep '^  \$ ./configure' config.log | sed 's_^  \$ ./configure __')
    echo "Saved args:" $args
    ./autogen.sh
    ./configure
    make clean

    echo "==========================="
    echo "Running ./autogen.sh $args"
    sleep 3
    ./autogen.sh $args
}

The setopt localoptions errreturn at the beginning is a zsh-ism that tells the shell to exit if there's an error. You don't want to forge ahead and run configure and make clean if your autogen.sh didn't work right. errreturn does much the same thing as the && between the commands in the simpler shell alias above, but with cleaner syntax.

If you're using bash, you could string all the commands on one line instead, with && between them, saving the arguments first, something like this: args=$(egrep '^  \$ ./configure' config.log | sed 's_^  \$ ./configure __') && ./autogen.sh && ./configure && make clean && ./autogen.sh $args. Or perhaps some bash user will tell me of a better way.

November 25, 2015

Happy Birthday GIMP!

Happy Birthday GIMP!

Also, wallpapers and darktable 2.0 creeps even closer!

I got busy building a birthday present for a project I work with and all sorts of neat things happened in my absence! The Ubuntu Free Culture Showcase chose winners for its wallpaper contest for Ubuntu 15.10 ‘Wily Werewolf’ (and quite a few community members were among those chosen).

The darktable crew is speeding along to a 2.0 release with a new RC2 being released.

Also, a great big HAPPY 20th BIRTHDAY GIMP! I made you a present. I hope it fits and you like it! :)

Ubuntu Wallpapers

Back in early September I posted on discuss about the Ubuntu Free Culture Showcase that was looking for wallpaper submissions from the free software community to coincide with the release of Ubuntu 15.10 ‘Wily Werewolf’. The winners were recently chosen from among the submissions and several of our community members had their images chosen!

The winning entries from our community include:

  • Moss inflorescence, by carmelo75 (PhotoFlow creator Andrea Ferrero)
  • Light my fire, evening sun, by Dariusz Duma
  • Sitting Here, Making Fun, by Philipp Haegi (Mimir)
  • Tranquil, by Pat David

A big congratulations to you all for some amazing images being chosen! If you’re running Ubuntu 15.10, you can grab the ubuntu-wallpapers package to get these images right here!

darktable 2.0 RC2

Hot on the heels of the prior release candidate, darktable now has an RC2 out. There are many minor bugfixes from the previous RC1, such as:

  • high iso fix for exif data of some cameras
  • various macintosh fixes (fullscreen)
  • fixed a deadlock
  • updated translations

The preliminary changelog from the 1.6.x series:

  • darktable has been ported to gtk-3.0
  • new thumbnail cache replaces mipmap cache (much improved speed, less crashiness)
  • added print mode
  • reworked screen color management (softproof, gamut check etc.)
  • removed dependency on libraw
  • removed dependency on libsquish (solves patent issues as a side effect)
  • unbundled pugixml, osm-gps-map and colord-gtk
  • text watermarks
  • color reconstruction module
  • raw black/white point module
  • delete/trash feature
  • addition to shadows&highlights
  • more proper Kelvin temperature, fine-tuning preset interpolation in WB iop
  • noiseprofiles are in external JSON file now
  • monochrome raw demosaicing (not sure whether it will stay for release, like Deflicker, but hopefully it will stay)
  • aspect ratios for crop&rotate can be added to conf (ae36f03)
  • navigating lighttable with arrow keys and space/enter
  • pdf export — some changes might happen there still
  • brush size/hardness/opacity have key accels
  • the facebook login procedure is a little different now
  • export can upscale
  • we no longer drop history entries above the selected one when leaving darkroom or switching images
  • text/font/color in watermarks
  • image information now supports gps altitude
  • allow adding tone- and basecurve nodes with ctrl-click
  • new “mode” parameter in the export panel
  • high quality export now downsamples before watermark and frame to guarantee consistent results
  • lua scripts can now add UI elements to the lighttable view (buttons, sliders etc…)
  • a new repository for external lua scripts was started.

More information and packages can be found on the darktable github repository.

Remember, updating from the currently stable 1.6.x series is a one-way street for your edits (no downgrading from 2.0 back to 1.6.x).

GIMP Birthday

All together now…

Happy Birthday to GIMP! Happy Birthday to GIMP!

GIMP Wilber Big Icon

This past weekend GIMP celebrated its 20th anniversary! It was twenty years ago on November 21st that Peter Mattis announced the availability of the “General Image Manipulation Program” on comp.os.linux.development.apps.

Twenty years later and GIMP doesn’t look a day older than a 1.0 release! (Yes, there’s a double entendre there).

To celebrate, I’ve been spending the past couple of months getting a brand new website and infrastructure built for the project! Just in case anyone was wondering where I was or why I was so quiet. I like the way it turned out and is shaping up so go have a look if you get a moment!

There’s even an official news post about it on the new site!

GIMP 2.8.16

To coincide with the 20th anniversary, the team also released a new stable version in the 2.8 series: 2.8.16. Head over to the downloads page to pick up a copy!!

New PhotoFlow Tutorial

Still working hard and fast on PhotoFlow, Andreas took some time to record a new video tutorial. He walks through some basic usage of the program, in particular opening an image, adding layers and layer masks, and saving the results. Have a look and if you have a moment give him some feedback!

Andreas is working on PhotoFlow at a very fast pace, so expect some more news about his progress very soon!

Krita 2.9 Animation Edition Beta released!


Today we are happy to announce the long awaited beta-version of Krita with Animation and Instant Preview support! Based on Krita 2.9, you can now try out the implementation of the big 2015 kickstarter features!

What’s new in this version? From the user point of view Krita didn’t change much. There are three new dockers: Animation, Timeline and Onion Skins, which let you control everything about your animation frames and one new menu item View->Instant Preview Mode (previously known as Level of Detail) allowing painting on huge canvases. For both features, you need a system that supports OpenGL 3.0 or higher.

For people who previously installed Krita: to get Instant Preview to show up in the View menu, delete the krita.rc (not kritarc) file in your resource folder (which can be accessed quickly via Settings->Manage Resources->Open Resource Folder) and restart Krita. Or just use the hotkey Shift+L.

But under these visually tiny changes hides a heap of work done to the Krita kernel code. We have almost rewritten it to allow most of the rendering processes to run in the background. So now all animated frames and view cache planes are calculated while the user is idle (thinking, or choosing a new awesome brush). Thanks to these changes it is now possible to efficiently work with huge images and play a sequence of complex multi-layered frames in real time (the frames are recalculated in the background and are uploaded to your GPU directly from the cache).


So, finally, welcome Krita 2.9 Animation Edition Beta! (Note the version number: the final release will be based on Krita 3.0; this version is created from the 2.9 stable release, but it is still a beta.) We welcome your feedback!

Video tutorial from Timothee Giet:

A short video introduction into Krita animation features is available here.

Packages for Ubuntu:

You can get them through Krita Lime repositories. Just choose ‘krita-animation-testing’ package:

sudo add-apt-repository ppa:dimula73/krita
sudo apt-get update
sudo apt-get install krita-animation-testing

Packages for Windows:

Two packages are available: 64-bit, 32-bit:

You can download the zip files and just unpack them, for instance on your desktop and run. You might get a warning that the Visual Studio 2012 Runtime DLL is missing: you can download the missing dll here. You do not need to uninstall any other version of Krita before giving these packages a try!

User manuals and tutorials:

November 24, 2015

Krita 2.9 Animation Edition beta

I’m very happy to tell you that, finally, a version of Krita supporting basic animation features has been released! (Check it here)

This is still at an early stage, based on the latest 2.9 version, with a lot of additional features to come later in version 3.

If you want to have fun with it, here is a little introduction tutorial to get started, with some text and a video to illustrate it.

-Load the animation workspace to quickly activate the timeline and animation dockers.

-The timeline only shows the selected layer. To keep a layer always visible on it, click on the plus icon and select the corresponding option (Show in timeline to keep the selected layer, or Add existing layer and select one from the list …)

-To make a layer animated, create a new frame on it (with the right-click option on the timeline, or with the button in the animation docker). Now the icon to activate onion skins on it becomes visible (the light bulb icon); activate it to see previous and next frames.

-The content of a frame is visible in the next frames until you create a new one.

-After drawing the first frame, go further in the timeline and do any action to edit the image (draw, erase, delete all, use transform tool, …). It creates a new frame with the content corresponding to the action you made.

-If you prefer to only create new frames manually, disable the auto-frame-mode with the corresponding button in the animation docker (the film with a pen icon).

-To move a frame in time, just drag and drop it to a new time position.

-To duplicate a frame, press Control while you drag and drop it to a new time position.

-In the animation docker, you can define the start and end of the animation (to define the frames to use for export, and for the playback loop). You can also define the speed of the playback with the Frame rate value (frames per second) and the Play speed (a multiplier of the frame rate).

-In the Onion Skins docker, you can change the opacity for each of the 10 previous and next frames. You can also select a color overlay to distinguish previous and next frames. You can act on the global onion skins opacity with the 0 slider.

-To change the opacity of several onion skins at the same time, press Shift while clicking across the sliders.

-To export your animation, use the menu entry File – Export animation, and select the image format you want for the image sequence.

Have fun animating in Krita, and don’t forget to report any issue you find to help improve the final version ;)

SDN/NFV DevRoom at FOSDEM: Deadline approaching!

We extended the deadline for the SDN/NFV DevRoom at FOSDEM to Wednesday, November 25th recently – and we now have the makings of a great line-up!

To date, I have received proposals about open source VNFs, dataplane acceleration and accelerated virtual switching, an overview of routing on the internet that looks fascinating, open switch design and operating systems, traffic generation and testing, and network overlays.

I am still interested in having a few more NFV focussed presentations, and one or two additional SDN controller projects – and any other topics you might think would tickle our fancy! Just over 24 hours until the deadline.

November 23, 2015

Game Art Quest Kickstarter!

Today an exciting new crowdfunding campaign kicks off! Nathan Lovato, the author of Game Design Quest, wants to create a new series of video tutorials on creating 2D game art with Krita. Nathan is doing this on his own, but the Krita project, through the Krita Foundation, really wants this to happen! Over to Nathan, introducing his campaign:

“There are few learning resources dedicated to 2d game art. With Krita? Close to none. That is why I started working on Game Art Quest. This training will show you the techniques and concepts game artists use in their daily work. If you want to become a better artist, this one is for you.”

“We are developing this project together with the Krita Foundation. This is an opportunity for Krita to reach new users and to spark the interest of the press. However, for this project to come to life, we need your help. A high quality training series requires months of full-time work to create. That is why we are crowdfunding it on Kickstarter.”

“But who the heck am I to teach you game art? I’m Nathan, a professional game designer and tutor. I am the author of Game Design Quest, a YouTube channel filled with tutorials about game creation. Every Thursday, I release a new video. And I’ve done so since the start of the year, on top of my regular work. Over the months, my passion for open source technologies grew stronger. I discovered Krita 2.9 and felt really impressed by it. Krita deserves more attention.”

“Long story short, Game Art Quest is live on Kickstarter. And its existence depends on you!”

“Even if you can’t afford to pledge, share the word on social networks! This would help immensely. Also, this campaign is not only supporting the production of the premium series. It will allow me to keep offering you free tutorials for the months to come. And for the whole duration of the campaign, you’re getting 2 tutorials every single week!”

Check out Nathan’s campaign: https://www.kickstarter.com/projects/gdquest/game-art-quest-make-professional-2d-art-with-krita.


Interview with Christopher Stewart


Could you tell us something about yourself?

My name is Christopher, and I am an illustrator living in Northern California. When I’m not in a 2d mindset I like to sculpt with Zbrush and Maya. Some of my interests include Antarctica, Hapkido and racing planes of the 1930s.

Do you paint professionally, as a hobby artist, or both?

I have been working professionally for quite some time. I have worked for clients such as Ubisoft, Shaquille O’Neal and Universal Studios. I’m always looking for new and interesting work.

What genre(s) do you work in?

SF, Fantasy, and Comic Book/ Sequential art. This is where the foundation of my work lies – these genres have always been an inspiration to me ever since I was a kid.

Whose work inspires you most — who are your role models as an artist?

Wow, what a tough question! So many great artists out there… Brom, definitely; N.C. Wyeth, George Perez, and Alphonse Mucha. Recently I have revisited the background stylists of Disney with their immersive environments.

How and when did you get to try digital painting for the first time?

About 9 years ago. Until then my work was predominantly traditional. I wanted to try new mediums, and I thought digital painting would be a great area to explore.

What makes you choose digital over traditional painting?

Time and space.

Alterations and color adjustments can be done quickly for a given digital piece.

The physicality of traditional media presents different challenges, and solutions usually take longer to accomplish with traditional mediums.

Digital painting also doesn’t take up a lot of space, unlike a few decent-sized stretched canvases.

How did you find out about Krita?

I had tried Painter X and CS and they were unsatisfying, so I was looking for a paint program. Krita was recommended by a long-time friend who liked the program, and I was hooked.

What was your first impression?

It was very intuitive. It had a UI that I had very few difficulties with.

What do you love about Krita?

I really, really liked the responsiveness of the brushes. With other applications I was experiencing a “flatness” between the tablet I use and the results I wanted on screen; Krita’s brushes just feel more supple. The ability to customize the interface and brushes was also a huge plus.

What do you think needs improvement in Krita? Is there anything that really annoys you?

I haven’t been using Krita very long (less than 6 months) but I would like to be able to save and import/export color history as a file within an open Krita document.

What sets Krita apart from the other tools that you use?

When a company makes an application as powerful as Krita available for free, it’s a statement about how confident they are that artists will love it. And judging from the enthusiastic and knowledgeable people in the forums, they not only love it, they want others to be able to love it and use it too. Both developing and experienced artists need to evaluate new tools easily. Access to those tools should never be so prohibitively costly as to turn them away. Krita doesn’t get in the way of talent being explored, it supports it.

What techniques and brushes do you prefer to use?

I use a lot of the default brushes, especially the Bristle brushes, with a semi-transparent texture as a final layer to add a plein air look. I use some of David Revoy’s brushes, specifically the Splatter brushes. I recently made a new custom brush that I tried out on my most recent illustration.

Where can people see more of your work?

My website is redacesmedia.com. You can reach me there!

Anything else you’d like to share?

Thank you so much for the interview and a special thanks to the developers and community that make Krita work!

November 22, 2015

Call to translators

Dear translators,

We plan to release Stellarium 0.14.1 on the first day of next month.

This is a bugfix release, which has a few small fixes and a few features ported from version 0.15.0. In the meantime, translators can improve the translation of version 0.14.0 and fix some mistakes in the translations. If you can assist with translation into any of the 134 languages which Stellarium supports, please go to Launchpad Translations and help us out: https://translations.launchpad.net/stellarium

If required, we can postpone the release by a few days.

Thank you!

Ivan Maryadaraman

Ivan Maryadaraman, 2015: One village, two families; first love, then hatred, revenge, a blood feud, a feud never drunk, and the father’s demise. Time races forward. The hero grows up a pauper. The villain grows up super rich. The villain’s daughter grows up beautiful, so of course her brothers have to be gym-built musclemen – and they are there too, two of them. For reasons you would never expect (even though they have appeared in Dileep films four times already), he returns to the village […]

Ennum Eppozhum

Ennum Eppozhum, 2015: Sathyan Anthikad, who squeezes his films into the same format every time, set aside many of his stock ingredients, ‘forgot to draw the storyboard’, and then forgot to say cut as well – that film is Ennum Eppozhum. And that he forgot to say cut is true: the movie lies there like a long, overstretched elastic waistband – only there is no hole left to put a leg through. A Sathyan Anthikad film that doesn’t wield the deadly weapon of the flashback, has no connection whatsoever to Tamil Nadu, and doesn’t show off the village greenery is, in a way […]

November 20, 2015

Kubernetes from the ground up

I really loved reading Git from the bottom up when I was learning Git, which starts by showing how all the pieces fit together. Starting with the basics and gradually working towards the big picture is a great way to understand any complex piece of technology.

Recently I’ve been working with Kubernetes, a fantastic cluster manager. Like Git it is tremendously powerful, but the learning curve can be quite steep.

But there is hope. Kamal Marhubi has written a great series of articles that takes the same approach: start from the basic building blocks and build up from there.

Currently available:

Highly recommended.


Comments | More on rocketeer.be | @rubenv on Twitter

Industry benchmark SPEC 2.0 uses Blender

SPEC is the Standard Performance Evaluation Corporation – the industry-leading provider of benchmark suites for performance testing. The newly released SPECwpc V2.0 benchmark measures all key aspects of workstation performance based on diverse professional applications.

The SPECwpc benchmark will be used by vendors to optimize performance and publicize benchmark results for different vertical market segments. Users can employ the benchmark for buying and configuration decisions specific to their industries. SPECwpc V2.0 runs under the 64-bit versions of Microsoft Windows 7 SP1 and Windows 8.1 SP1.

New in SPECwpc V2.0 is the prominent inclusion of two free/open source software projects: Blender and LuxRender.



For more information: the official SPEC announcement.


November 17, 2015

Ubuntu Unstable Repository and our Release Candidates

Following is a public service announcement from Pascal de Bruijn, the maintainer of the Ubuntu PPAs.

As most of you know, my darktable-unstable PPA was serving as a pre-release repository for our stable maintenance tree, as it usually does. Now as master has settled down, and we're slowly gearing up for a 2.0 release, I'll do pre-release (release candidate) builds for darktable 2.0 there.

On my darktable-unstable PPA I will support Ubuntu Trusty (14.04, the latest Long Term Support release) as always. Temporarily I'll support Ubuntu Wily (15.10, the latest plain release) as well, at least until we have a final 2.0 stable release. Once we have a final 2.0 stable release I will support all Ubuntu versions (still) supported by Canonical at that time via my darktable-release PPA as usual.

In general updates on my darktable-unstable PPA should be expected to be fairly erratic, completely depending on the number and significance of changes being made in git master. That said, I expect that it will probably average out at once a week or so.

If you find any issues with these darktable release candidates please do report them to our bug tracker.

November 16, 2015

second release candidate for darktable 2.0

we're proud to announce the second release candidate in the new feature release of darktable, 2.0~rc2.

as always, please don't use the autogenerated tarball provided by github, but only our tar.xz.

the release notes and relevant downloads can be found attached to this git tag:
please only use our provided packages ("darktable-2.0.rc2.*" tar.xz and dmg) not the auto-created tarballs from github ("Source code", zip and tar.gz). the latter are just git snapshots and will not work! here are the direct links to tar.xz and dmg:

the checksums are:

$ sha256sum darktable-2.0~rc2.tar.xz 
9349eaf45f6aa4682a7c7d3bb8721b55ad9d643cc9bd6036cb82c7654ad7d1b1  darktable-2.0~rc2.tar.xz
$ sha256sum darktable-2.0~rc2.dmg 
f343a3291642be1688b60e6dc98930bdb559fc5022e32544dcbe35a38aed6c6d  darktable-2.0~rc2.dmg

packages for individual platforms and distros will follow shortly.

for your convenience, robert hutton collected build instructions for quite a few distros in our wiki:


the changes from rc1 include many minor bugfixes, such as:

  • high iso fix for exif data of some cameras
  • various macintosh fixes (fullscreen)
  • fixed a deadlock
  • updated translations

and the preliminary changelog as compared to the 1.6.x series can be found below.

when updating from the currently stable 1.6.x series, please bear in mind that your edits will be preserved during this process, but it will not be possible to downgrade from 2.0 to 1.6.x any more. be careful if you need darktable for production work!

happy 2.0~rc2 everyone :)

  • darktable has been ported to gtk-3.0
  • new thumbnail cache replaces mipmap cache (much improved speed, less crashiness)
  • added print mode
  • reworked screen color management (softproof, gamut check etc.)
  • removed dependency on libraw
  • removed dependency on libsquish (solves patent issues as a side effect)
  • unbundled pugixml, osm-gps-map and colord-gtk
  • text watermarks
  • color reconstruction module
  • raw black/white point module
  • delete/trash feature
  • addition to shadows&highlights
  • more proper Kelvin temperature, fine-tuning preset interpolation in WB iop
  • noiseprofiles are in external JSON file now
  • monochrome raw demosaicing (not sure whether it will stay for release, like Deflicker, but hopefully it will stay)
  • aspect ratios for crop&rotate can be added to conf (ae36f035e1496b8b8befeb74ce81edf3be588801)
  • navigating lighttable with arrow keys and space/enter
  • pdf export -- some changes might happen there still
  • brush size/hardness/opacity have key accels
  • the facebook login procedure is a little different now
  • export can upscale
  • we no longer drop history entries above the selected one when leaving dr or switching images
  • text/font/color in watermarks
  • image information now supports gps altitude
  • allow adding tone- and basecurve nodes with ctrl-click
  • new "mode" parameter in the export panel
  • high quality export now downsamples before watermark and frame to guarantee consistent results
  • lua scripts can now add UI elements to the lighttable view (buttons, sliders etc...)
  • a new repository for external lua scripts was started.

November 14, 2015

fwupd and DFU

For quite a long time fwupd has supported updating the system ‘BIOS’ using the UpdateCapsule UEFI mechanism. This open specification allows vendors to provide a single update suitable for Windows and Linux, and the mechanism for applying it is basically the same for all vendors. Although there are only a few systems in the wild supporting capsule updates, a lot of vendors are planning new models next year, and a few of the major ones have been trialing the LVFS service for quite a while too. With capsule updates, fwupd and the LVFS we now have a compelling story for how to distribute and securely install system BIOS updates automatically.

It’s not such a rosy story for USB devices. In theory, everything should be using the DFU specification which has been endorsed by the USB consortium, but for a number of reasons quite a few vendors don’t use this. I’m guilty as charged for the ColorHug devices, as I didn’t know of the existence of DFU when designing the hardware. For ColorHug I just implemented a vendor-specific HID bootloader with a few custom commands, as so many other vendors have done; it works well, but every vendor does things a slightly different way, which means having vendor-specific update tools and fairly random firmware file formats.

With DFU, what’s supposed to happen is there are two modes for the device, a normal application runtime which is doing whatever the device is supposed to be doing, and another DFU mode which is really just an EEPROM programmer. By ‘detaching’ the application firmware using a special interface you can program the device and then return to normal operation.

So, what to do? For fwupd I want to ask vendors of removable hardware to implement DFU so that we don’t need to write code for each device type in fwupd. To make this a compelling prospect I’ve spent a good chunk of time of last week:

  • Creating a GObjectIntrospectable and cancellable host-side library called libdfu
  • Writing a reference GPLv3+ device-side implementation for a commonly used USB stack for PIC microcontrollers
  • Writing the interface code in fwupd to support DFU files wrapped in .cab files for automatic deployment

At the moment libdfu supports reading and writing raw, DFU and DfuSe file types, and supports reading and writing to DFU 1.1 devices. I’ve not yet implemented writing to ST devices (a special protocol extension invented by STMicroelectronics) although that’s only because I’m waiting for someone to lend me a device with a STM32F107 included (e.g. DSO Nano). I’ve hopefully made the code flexible enough to make this possible without breaking API, although the libdfu library is currently private to fwupd until it’s had some proper review. You can of course use the dependable dfu-util tool to flash firmware, but this wasn’t suitable for use inside fwupd for various reasons.

Putting my money where my mouth is, I’ve converted the (not-yet-released) ColorHug+ bootloader and firmware to use DFU; excluding all the time I spent writing the m-stack patch and the libdfu support in fwupd it only took a couple of hours to build and test. Thanks to Christoph Brill, I’ll soon be getting some more hardware (a Neo FreeRunner) to verify this new firmware update mechanism on a real device with multiple implemented DFU interfaces. If anyone else has any DFU-capable hardware (especially Arduino-style devices) I’d be glad of any donations.

Once all this new code has settled down I’m going to be re-emailing a lot of the vendors who were unwilling to write vendor-specific code in fwupd. I’m trying to make the barrier to automatic updates on Linux as low as possible.

Comments welcome.

November 11, 2015

evolution of seccomp

I’m excited to see other people thinking about userspace-to-kernel attack surface reduction ideas. Theo de Raadt recently published slides describing Pledge. This uses the same ideas that seccomp implements, but with less granularity. While seccomp works at the individual syscall level and in addition to killing processes, it allows for signaling, tracing, and errno spoofing. As de Raadt mentions, Pledge could be implemented with seccomp very easily: libseccomp would just categorize syscalls.

I don’t really understand the presentation’s mention of “Optional Security”, though. Pledge, like seccomp, is an opt-in feature. Nothing in the kernel refuses to run “unpledged” programs. I assume his point was that when it gets ubiquitously built into programs (like stack protector), it’s effectively not optional (which is alluded to later as “comprehensive applicability ~= mandatory mitigation”). Regardless, this sensible (though optional) design gets me back to his slide on seccomp, which seems to have a number of misunderstandings:

  • “A Turing complete eBPF program watches your program” — Strictly speaking, seccomp is implemented using a subset of BPF, not eBPF. And since BPF (and eBPF) programs are guaranteed to halt, seccomp filters are not Turing complete.
  • “Who watches the watcher?” — I don’t even understand this. It’s in the kernel. The kernel watches your program. Just like always. If this is a question of BPF program verification, there is literally a program verifier that checks various properties of the BPF program.
  • “seccomp program is stored elsewhere” — This, together with the next statement, is just totally misunderstood. Programs using seccomp define their filter in their own code. It’s used the same way as the Pledge examples are shown doing.
  • “Easy to get desynchronized when either program is updated” — As above, this just isn’t the case. The only place where this might be true is when using seccomp on programs that were not written natively with seccomp. In that case, yes, desync is possible. But that’s one of the advantages of seccomp’s design: a program launcher (like minijail or systemd) can declare a seccomp filter for a program that hasn’t yet been ported to use one natively.
  • “eBPF watcher has no real idea what the program under observation is doing…” — I don’t understand this statement. I don’t see how Pledge would “have a real idea” either: they’re both doing filtering. If we get AI out of our syscall filters, we’re in serious trouble. :)

OpenBSD has some interesting advantages in the syscall filtering department, especially around sockets. Right now, it’s hard for Linux syscall filtering to understand why a given socket is being used. Something like SOCK_DNS seems like it could be quite handy.

Another nice feature of Pledge is the path whitelist feature. As it’s still under development, I hope they expand this to include more things than just paths. Argument inspection is a weak point for seccomp, but under Linux, most of the arguments are ultimately exposed to the LSM layer. Last year I experimented with creating a “seccomp LSM” for path matching where programs could declare whitelists, similar to standard LSMs.

So, yes, Linux “could match this API on seccomp”. It’d just take some extensions to libseccomp to implement pledge(), as I described at the top. With OpenBSD doing a bunch of analysis work on common programs, it’d be excellent to see this usable on Linux too. So far on Linux, only a few programs (e.g. Chrome, vsftpd) have bothered to do this using seccomp, and it could be argued that this is ultimately due to how fine grained it is.

© 2015, Kees Cook. This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 License.
Creative Commons License

Remembrance Day 2015

I’m definitely emotionally vulnerable due to newborn-induced sleep deprivation, but this drawing that my five-year-old daughter brought home from school today actually made me cry:

Remembrance Day 2015 - Je me souviens, by Olivia, age 5

November 09, 2015

Interview with Bruno Fernandez

Could you tell us something about yourself?

My name is Bruno Fernandez. I’m 41 years old and I live in Buenos Aires, Argentina. I work as sysadmin in a financial company in Argentina. Besides, I’m an illustrator who works in different graphic media in my country and abroad.

I have a beautiful family and two children. They are called Agustina and Dante.

Do you paint professionally, as a hobby artist, or both?

I have been working professionally for ten years, but I have always tried to bring a professional vision and genuineness to every line, every colour. The word ‘hobby’ sometimes removes the sense of sincerity from what I consider a passion.

What genre(s) do you work in?

One of the most recognizable genres I focus on is editorial illustration bordering on surrealism. However, referring to other aspects of my work, I could also mention children’s illustration.

Whose work inspires you most — who are your role models as an artist?

Those are difficult questions because what inspires me does not always guarantee similar results. I usually observe and this observation generates the necessity of looking for lines, style, colour, emphasis in texture or feeling transmutation.

However, I like many artists like Carlos Alonso, Edvard Munch, Klimt, Egon Schiele, Viviana Bilotti, Poly Bernatene, Enrique Breccia, Quique Alcatena, Frank Frazetta, Joaquin Sorolla, Maria Wernicke, and so on. Some of them are not famous but I am able to find enriching details that help me to stay on the course I want to move.

All in all, I cannot forget my children: their freshness and flow without conditioning. They always remind me of the child I used to be and the future I imagined.

Bruno Fernandez 1

How and when did you get to try digital painting for the first time?

My first time was frustrating. I remember I tried to do something with Gimp, one of the first available versions for Linux. It wasn’t worth the trouble, because I always felt more comfortable using traditional media: pencil, acrylics and a piece of paper.

What makes you choose digital over traditional painting?

I wanted to find applications that fulfilled the same expectations I had with physical tools. Thus, MyPaint was the first application I tried.

Bruno Fernandez 2

How did you find out about Krita? What do you love about  Krita?

I have been working in system administration on Linux systems for ten years and I have always given each available application a chance, considering not only my sysadmin job, but also my creative side. So, after using MyPaint, I found out that Krita provided a world full of possibilities. I also found artists like David Revoy who exemplified the professional possibilities of the application.

All in all, Krita is my favourite application because of its potential and resources. Krita covers all my expectations without needing proprietary applications like Adobe Photoshop or Adobe Illustrator.

If you had to pick one favourite of all your work done in Krita so far, what would it be, and why?

I am always excited when I finish a picture. In my last work, I try to see my own transition to observe the aspects I overcame. I think “Campo de rosas” represents the state of my art at the moment. As regards resources, it is important to mention paintbrushes by Pablo Cazorla and David Revoy.

Bruno Fernandez Campo de Rosas

Where can people see more of your work?

You can find out more about me on behance.net. There you can find all my social networks: Tumblr, DeviantArt and Facebook.

Anything else you’d like to share?

Special thanks to the Krita team for this magnificent tool and for the opportunity to show my work.

November 06, 2015

Call For Content: Blenderart Magazine issue #48

We are ready to start gathering up tutorials, making of articles and images for Issue # 48 of Blenderart Magazine.

The theme for this issue is “Time Flies: 10 years of Blenderart Magazine”

Blenderart Magazine is 10 years old and we are going to celebrate by asking for projects, images and animations that you have done in the last 10 years. The older the better. What is your oldest project? Are you brave enough to share it with us?

Looking for:

*Old projects

*Old work-arounds that have been rendered obsolete by improvements to Blender

*“Do You Remember?” articles: memories about how it used to be, the problems you encountered and how you solved them

*warning: lack of submissions could result in an entire issue of strange sculpting experiments, half-completed models and a gallery filled with random bad sketches by yours truly…. :P …… goes off to start filling sketchbook with hundreds of stick figures, just in case. :P


Send in your articles to sandra
Subject: “Article submission Issue # 48 [your article name]”

Gallery Images

As usual you can also submit your best renders based on the theme of the issue. The theme of this issue is “Time Flies: 10 years of Blenderart Magazine”. Please note if the entry does not match with the theme it will not be published.

Send in your entries for gallery to gaurav
Subject: “Gallery submission Issue # 48”

Note: Images should be at most 1024px wide.

Last date for submissions: December 5, 2015.

Good luck!
Blenderart Team

Gadget reviews

Not that I'm really running after more gadgets, but sometimes, there is a need that could only be soothed through new hardware.

Bluetooth UE roll

Got this for my wife, to play music when staying out on the quays of the Rhône, playing music in the kitchen (from a phone or computer), or when she's at the photo lab.

It works well with iOS, Mac OS X and Linux. It’s very easy to use: whether it’s paired or connected is completely obvious, and charging doesn’t need specific cables (USB!).

I'll need to borrow it to add battery reporting for those devices though. You can find a full review on Ars Technica.

Sugru (!)

Not a gadget per se, but I bought some, used it to fix up a bunch of cables, repair some knickknacks, and do some DIY. Highly recommended, especially given the current price of their starter packs.

15-pin to USB Joystick adapter

It's apparently from Ckeyin, but you'll find the exact same box from other vendors. Made my old Gravis joystick work, in the hope that I can make it work with DOSBox and my 20-year old copy of X-Wing vs. Tie Fighter.

Microsoft Surface ARC Mouse

That one was given to me, for testing, works well with Linux. Again, we'll need to do some work to report the battery. I only ever use it when travelling, as the batteries last for absolute ages.

Logitech K750 keyboard

Bought this nearly two years ago, and this is one of my best buys. My desk is close to a window, so although it’s wireless I never need to change the batteries or think about charging it. GNOME also supports showing the battery status in the Power panel.

Logitech T650 touchpad

Got this one on sale (17€), to replace my Logitech trackball (one of its buttons broke…). It works great, and can even get you shell gestures when run in Wayland. I’m certainly happy to have one less cable running across my desk, and it reuses the same dongle as the keyboard above.

If you use more than one device, you might be interested in this bug to make it easier to support multiple Logitech “Unifying” devices.

ClicLite charger

Got this from a design shop in Berlin. It should probably have been cheaper than what I paid for it, but it's certainly pretty useful. Charges up my phone by about 20%, it's small, and charges up at the same time as my keyboard (above).

Dell S2340T

Bought about 2 years ago, to replace the monitor I had in an all-in-one (Lenovo all-in-ones, never buy that junk).

Nowadays, the resolution would probably be considered a bit on the low side, and the touchscreen mesh would show for hardcore photography work. It's good enough for videos though and the speaker reaches my sitting position.

It's only been possible to use the USB cable for graphics for a couple of months, and it's probably not what you want to lower CPU usage on your machine, but it works for Fedora with this RPM I made. Talk to me if you can help get it into RPMFusion.

Shame about the huge power brick, but a little bonus for the builtin Ethernet adapter.

Surface 3

This is probably the biggest ticket item. Again, I didn’t pay full price for it, thanks to coupons, rewards, and all. The work to get Linux and GNOME to play well with it is still ongoing, and rather slow.

I won't comment too much on Windows either, but rather as what it should be like once Linux runs on it.

I really enjoy the industrial design, maybe even the slanted edges, but one has to wonder why they made the USB power adapter not sit flush with the edge when plugged in.

I've used it a couple of times (under Windows, sigh) to read Pocket as I do on my iPad 1 (yes, the first one), or stream videos to the TV using Flash, without the tablet getting hot, or too slow either. I also like the fact that there's a real USB(-A) port that's separate from the charging port. The micro SD card port is nicely placed under the kickstand, hard enough to reach to avoid it escaping the tablet when lugged around.

The keyboard, given the thickness of it, and the constraints of using it as a cover, is good enough for light use, when travelling for example, and the layout isn't as awful as on, say, a Thinkpad Carbon X1 2nd generation. The touchpad is a bit on the small side though it would have been hard to make it any bigger given the cover's dimensions.

I would however recommend getting a Surface Pro if you want things to work right now (or at least soon). The one-before-last version, the Surface Pro 3, is probably a good target.

November 05, 2015

Krita 2.9.9 Released

The ninth semi-monthly bug fix release of Krita is out! Upgrade now to get the following fixes and features:


  • Show a message when trying to use the freehand brush tool on a vector layer
  • Add a ctrl-m shortcut for calling up the Color Curves filter dialog. Patch by Raghavendra Kamath. Thanks!
  • Improve performance by not updating the image when adding empty layers and masks.


  • Fix typing in the artistic text tool. A regression in 2.9.8 made it impossible to type letters that were also used as global shortcuts. This is now fixed.
  • Don’t crash when opening an ODG file created in Inkscape. The files are not displayed correctly, though, and we need to figure out what the issue is.
  • Fix the gaussian blur filter: another 2.9.8 regression where applying a gaussian blur filter would cause the right and bottom edge to become semi-transparent.
  • Fix calculating available memory on OSX. Thanks to René J.V. Bertin for the patch!
  • When duplicating layers, duplicate the channel flags so the new layers are alpha locked if the original layers were alpha locked.
  • Fix a number of hard to find crashes in the undo system and the compositions docker.
  • Another exiv2-related jpeg saving fix.
  • Add a new dark pass-through icon.

Go to the Download page to get the freshest Krita! (And don’t forget to check out Scott’s book, or Animtim’s latest training DVD either!)

Krita Next

The next version of Krita will be 3.0 and we’re definitely getting there! There is a lot of development going on fixing issues with shortcuts, issues with the opengl canvas, issues with icons… And making packages. Ubuntu Linux users can already use the Krita 3.0 Unstable packages in the Lime repository, and we’re working on Windows and OSX packages.

Here’s a demo by Wolthera:

Winners Selected from Giveaway

Written by Scott Petrovic

And the giveaway is over! I want to thank everyone for entering and showing your support for Krita. The amount of comments and love that is being shown for Krita is out of this world. With the 400+ entries, there were over 20,000 words that were written. The developers spend a lot of time helping people with issues related to Krita, graphics drivers, or tablets. It is refreshing to see that a lot of people are enjoying Krita the way it currently is.

Now for the winners…

  1. John Hattan
  2. AJ2600
  3. Waru
  4. Sam M.
  5. Otxoa

Congratulations! I have your email addresses and will be contacting you shortly. I ordered the copies last week but they haven’t arrived yet. I will sign and ship them off as soon as I can.

Any Other Way to Get Free Copies?

I lose at pretty much all giveaways that I enter like this. I also know that for some of you, a large reason you are using Krita is because it is free. This was your only shot. Paying for a book of any type is out of reach at the moment, no matter what the cost.

For those of you who really want the education and cannot afford the book, there might be another way to get it while supporting Krita. Did you know that many libraries will get you a book for free if you just ask them for it? I cannot speak for most countries, but I know this works in the USA. They don’t charge you for anything. I have done this recently with other books. Some library websites have a request form through which you can ask for books. If you fill that out, they usually respond and let you know when/if the book comes in.

Show Your Support

It is exciting for us volunteers to see that Krita is making a difference in people’s lives. When people share their work in things like the monthly drawing challenge, it shows us that people are using and enjoying the software. If you have any skills that you would like to volunteer, feel free to get in contact with us through the chatroom or forum. Even beta testing helps new releases go smoother. There are plenty of ways to help Krita and keep it moving forward.

__attribute__((cleanup)), mixed declarations and code, and goto

One of the cool features of recent GLib is g_autoptr() and g_autofree. It’s liberating to be able to write:

g_autofree char *filename = g_strdup_printf("%s/%d.txt", dir, count);

And be sure that will be freed no matter how your function returns. But as I started to use it, I realized that I wasn’t very sure about some details about the behavior, especially when combined with mixing declarations and code as allowed by C99.

Internally g_autofree uses __attribute__((cleanup)), which is supported by GCC and clang. The definition of g_autofree is basically:

static inline void
g_autoptr_cleanup_generic_gfree (void *p)
{
  void **pp = (void**)p;
  g_free (*pp);
}

#define g_autofree __attribute__((cleanup(g_autoptr_cleanup_generic_gfree)))

Look at the following examples:

int count1(int arg)
{
  g_autofree char *str;

  if (arg < 0)
    return -1;

  str = g_strdup_printf("%d", arg);

  return strlen(str);
}

int count2(int arg)
{
  if (arg < 0)
    return -1;

  g_autofree char *str = g_strdup_printf("%d", arg);

  return strlen(str);
}

int count3(int arg)
{
  if (arg < 0)
    goto out;

  g_autofree char *str = g_strdup_printf("%d", arg);

  return strlen(str);

out:
  return -1;
}

int count4(int arg)
{
  if (arg < 0)
    goto out;

  {
    g_autofree char *str = g_strdup_printf("%d", arg);

    return strlen(str);
  }

out:
  return 0;
}

Which of these do you think work as intended, and which are buggy? (I’m not recommending this as a way of counting the digits in a number – the example is artificial.)

count1() is pretty clearly buggy – the cleanup function will run in the error path and try to free an uninitialized string. Slightly more subtly, count3() is also buggy, because the goto jumps over the initialization. But count2() and count4() work as intended.

To understand why this is the case, it’s worth looking at how __attribute__((cleanup)) is described in the GCC manual – all it says is “the ‘cleanup’ attribute runs a function when the variable goes out of scope.” I first thought that this was a completely insufficient definition, not complete enough to allow figuring out what was supposed to happen in the above cases, but thinking about it a bit, it’s actually a precise definition.

To recall, the scope of a variable in C is from the point of the declaration of the variable to the end of the enclosing block. What the definition is saying is that any time a variable is in scope, and then goes out of scope, there is an implicit call to the cleanup function.

In the early return in count1() and at the return that is jumped to in count3(), the variable ‘str’ is in scope, so the cleanup function will be called, even though the variable is not initialized in either case. In the corresponding places in count2() and count4() the variable ‘str’ is not in scope, so the cleanup function will not be called.

The coding style takeaways from this are: 1) don’t use the g_auto* attributes on a variable that is not initialized at the time of definition; 2) be very careful if combining goto with g_auto*.

It should be noted that GCC is quite good at warning about it if you get it wrong, but it’s still better to understand the rules and get it right from the start.

November 04, 2015

first release candidate for darktable 2.0

We're proud to announce the first release candidate in the new feature release of darktable, 2.0~rc1.

The release notes and relevant downloads can be found attached to this git tag:
Please only use our provided packages ("darktable-2.0.rc1.*" tar.xz and dmg), not the auto-created tarballs from GitHub ("Source code", zip and tar.gz). The latter are just git snapshots and will not work! Here are the direct links to tar.xz and dmg:

$ sha256sum darktable-2.0.rc1.tar.xz
$ sha256sum darktable-2.0.rc1.dmg 

Packages for individual platforms and distros will follow shortly.

For your convenience, these are the ubuntu/debian packages required to build the source:

$ sudo apt-get build-dep darktable && sudo apt-get install libgtk-3-dev libpugixml-dev libcolord-gtk-dev libosmgpsmap-1.0-0-dev libcups2-dev

And the preliminary changelog can be found below.

When updating from the currently stable 1.6.x series, please bear in mind that your edits will be preserved during this process, but it will not be possible to downgrade from 2.0 to 1.6.x any more. Be careful if you need darktable for production work!

Happy 2.0~rc1 everyone :)

  • darktable has been ported to gtk-3.0
  • new thumbnail cache replaces mipmap cache (much improved speed, less crashiness)
  • added print mode
  • reworked screen color management (softproof, gamut check etc.)
  • text watermarks
  • color reconstruction module
  • raw black/white point module
  • delete/trash feature
  • addition to shadows&highlights
  • more proper Kelvin temperature, fine-tuning preset interpolation in the white balance module
  • noiseprofiles are in external JSON file now
  • monochrome raw demosaicing (not sure whether it will stay for release, like Deflicker, but hopefully it will stay)
  • aspect ratios for crop&rotate can be added to conf (ae36f03)
  • navigating lighttable with arrow keys and space/enter
  • pdf export – some changes might happen there still
  • brush size/hardness/opacity have key accels
  • the facebook login procedure is a little different now
  • export can upscale
  • we no longer drop history entries above the selected one when leaving darkroom or switching images
  • text/font/color in watermarks
  • image information now supports GPS altitude
  • allow adding tone- and basecurve nodes with ctrl-click
  • we renamed mipmaps to thumbnails in the preferences
  • new “mode” parameter in the export panel
  • high quality export now downsamples before watermark and frame to guarantee consistent results
  • Lua scripts can now add UI elements to the lighttable view (buttons, sliders etc.)
  • a new repository for external Lua scripts was started.

November 03, 2015

Mathilde Ampe – Automotive design with Blender


As an automotive company, we used to do all the digital work in one piece of software, Alias, which is mostly dedicated to industrial design. However, we knew that using this software in the early stages of creation was very time-consuming. The questions raised were how to speed up the process and how to make our digital life easier; one answer was to use a different tool during those early stages.

That’s how we got into Blender. In a car, creating a digital model of a seat takes particularly long because of the soft materials used and the criteria involved. I was between interviews when I was asked to model a seat of my choice in Blender as a test. Based on pictures, I modelled a front seat in 4 hours, when my future manager Pierre-Paul Andriani expected me to deliver this data in a day or two. That was the first step of our journey into Blender.



Then step by step or, should I say, proof of concept by proof of concept, we implemented Blender into the process. We took advantage of a new project to try new possibilities. We first rebuilt the exterior of the project based on a scan, then handed this data to the engineers. In the early stages, engineers were used to receiving the scans, which are incredibly heavy. Getting lighter data changed their lives.

As I was the only one able to use Blender, I taught my team how to use it. We are now four modellers able to work in Blender. Despite the differences between surface modelling and polygonal modelling and their respective logics of construction, the team learned very quickly.

Because our project required us to create a range of cars, we used Blender to do it. Based on the rebuilt model and with two or three modellers, we created nine iterations of the base car in five days. This type of work would have taken us between ten and fifteen days with Alias. That was definitely the exercise that showed the design studio the huge advantage of using Blender in the early stages of a project. “Early form studies don’t need to be as precise as NURBS models. We use Blender as a tool for management to sign off on the design volumes. Once package decisions are made we can move on to the next steps in Alias,” said Pierre-Paul.

Since then, we keep trying new tasks. We milled an exterior from Blender data, and we created a model from scratch based on a sketch and a defined wheelbase. The designer can see modifications in real time instead of having to wait a couple of hours. Between the first test and full implementation, only 4 months went by. We have also experimented with rendering and animations, and with creating a new process for these two tasks.

Mathilde Ampe

Digital Modeller – Design

Tata Motors European Technical Centre


SDN/NFV DevRoom at FOSDEM 2016

We are pleased to announce the Call for Participation in the FOSDEM 2016 Software Defined Networking and Network Functions Virtualization DevRoom!

Important dates:

  • Nov 18: Deadline for submissions
  • Dec 1: Speakers notified of acceptance
  • Dec 5: Schedule published

This year the DevRoom topics will cover two distinct fields:

  • Software Defined Networking (SDN), covering virtual switching, open source SDN controllers, virtual routing
  • Network Functions Virtualization (NFV), covering open source network functions, NFV management and orchestration tools, and topics related to the creation of an open source NFV platform

We are now inviting proposals for talks about Free/Libre/Open Source Software on the topics of SDN and NFV. This is an exciting and growing field, and FOSDEM gives an opportunity to reach a unique audience of very knowledgeable and highly technical free and open source software activists.

Topics accepted include, but are not limited to:


  • SDN controllers – OpenDaylight, OpenContrail, ONOS, Midonet, OVN, OpenStack Neutron, Calico, IOvisor, …
  • Dataplane processing: DPDK, OpenDataplane, netdev, netfilter, ClickRouter
  • Virtual switches: Open vSwitch, Snabb Switch, VDE, Lagopus
  • Open network protocols: OpenFlow, NETCONF, OpenLISP, eBPF, P4, Quagga


  • Management and Orchestration (MANO): Deployment and management of network functions, policy enforcement, virtual network functions definition – rift.io, Cloudify, OpenMANO, Tacker, …
  • Open source network functions: Clearwater IMS, FreeSWITCH, OpenSIPS, …
  • NFV platform features: Service Function Chaining, fault management, dataplane acceleration, …

Talks should be aimed at a technical audience, but should not assume that attendees are already familiar with your project or how it solves a general problem. Talk proposals can be very specific solutions to a problem, or can be higher level project overviews for lesser known projects. Please include the following information when submitting a proposal:

  • Your name
  • The title of your talk (please be descriptive, as titles will be listed alongside around 250 others from other projects)
  • Short abstract of one or two paragraphs
  • Short bio (with photo)

The deadline for submissions is November 18th, 2015. FOSDEM will be held on the weekend of January 30th-31st 2016 and the SDN/NFV DevRoom will take place on Sunday, January 31st 2016. Please use the following website to submit your proposals: https://penta.fosdem.org/submission/FOSDEM16 (you do not need to create a new Pentabarf account if you already have one from past years). You can also join the devroom’s mailing list, which is the official communication channel for the DevRoom: network-devroom at lists.fosdem.org (subscription page: https://lists.fosdem.org/listinfo/network-devroom)

The Networking DevRoom 2016 Organization Team