November 20, 2017

LVFS Article

I wrote an article on the LVFS. If you’re interested in an overview of how firmware updates work in Linux, and a little history, it might be an interesting read.

November 16, 2017

Talking picture in picture

Tech YouTube-guy extraordinaire, MKBHD, pulls off a subtle but clever way to show how video playback works on the new iPhone X screen (relevant portion is around 5:50):

November 15, 2017

security things in Linux v4.14

Previously: v4.13.

Linux kernel v4.14 was released this last Sunday, and there’s a bunch of security things I think are interesting:

vmapped kernel stack on arm64
Similar to the existing feature on x86, Mark Rutland and Ard Biesheuvel implemented CONFIG_VMAP_STACK for arm64, which moves the kernel stack to an isolated and guard-paged vmap area. With traditional stacks, there were two major risks when exhausting the stack: overwriting the thread_info structure (which contained the addr_limit field that is checked during copy_to/from_user()), and overwriting neighboring stacks (or other things allocated next to the stack). While arm64 previously moved its thread_info off the stack to deal with the former issue, this vmap change adds the last bit of protection by nature of the vmap guard pages. If the kernel tries to write past the end of the stack, it will hit the guard page and fault. (Testing for this is now possible via LKDTM’s STACK_GUARD_PAGE_LEADING/TRAILING tests.)

One aspect of the guard page protection that will need further attention (on all architectures) is that if the stack grew because of a giant Variable Length Array on the stack (effectively an implicit alloca() call), it might be possible to jump over the guard page entirely (as seen in the userspace Stack Clash attacks). Thankfully the use of VLAs is rare in the kernel. In the future, hopefully we’ll see the addition of PaX/grsecurity’s STACKLEAK plugin which, in addition to its primary purpose of clearing the kernel stack on return to userspace, makes sure stack expansion cannot skip over guard pages. This “stack probing” ability will likely also become directly available from the compiler as well.

set_fs() balance checking
Related to the addr_limit field mentioned above, another class of bug is finding a way to force the kernel into accidentally leaving addr_limit open to kernel memory through an unbalanced call to set_fs(). In some areas of the kernel, in order to reuse userspace routines (usually VFS or compat related), code will do something like: set_fs(KERNEL_DS); ...some code here...; set_fs(USER_DS);. When the USER_DS call goes missing (usually due to a buggy error path or exception), subsequent system calls can suddenly start writing into kernel memory via copy_to_user (where the “to user” really means “within the addr_limit range”).

Thomas Garnier implemented USER_DS checking at syscall exit time for x86, arm, and arm64. This means that a broken set_fs() setting will not extend beyond the buggy syscall that fails to set it back to USER_DS. Additionally, as part of the discussion on the best way to deal with this feature, Christoph Hellwig and Al Viro (and others) have been making extensive changes to avoid the need for set_fs() being used at all, which should greatly reduce the number of places where it might be possible to introduce such a bug in the future.

SLUB freelist hardening
A common class of heap attacks is overwriting the freelist pointers stored inline in the unallocated SLUB cache objects. PaX/grsecurity developed an inexpensive defense that XORs the freelist pointer with a global random value (and the storage address). Daniel Micay improved on this by using a per-cache random value, and I refactored the code a bit more. The resulting feature, enabled with CONFIG_SLAB_FREELIST_HARDENED, makes freelist pointer overwrites very hard to exploit unless an attacker has found a way to expose both the random value and the pointer location. This should render blind heap overflow bugs much more difficult to exploit.

Additionally, Alexander Popov implemented a simple double-free defense, similar to the “fasttop” check in the GNU C library, which will catch sequential free()s of the same pointer. (And has already uncovered a bug.)

Future work would be to provide similar metadata protections to the SLAB allocator (though SLAB doesn’t store its freelist within the individual unused objects, so it has a different set of exposures compared to SLUB).

setuid-exec stack limitation
Continuing the various additional defenses to protect against future problems related to userspace memory layout manipulation (as shown most recently in the Stack Clash attacks), I implemented an 8MiB stack limit for privileged (i.e. setuid) execs, inspired by a similar protection in grsecurity, after reworking the secureexec handling by LSMs. This complements the unconditional limit to the size of exec arguments that landed in v4.13.

randstruct automatic struct selection
While the bulk of the port of the randstruct gcc plugin from grsecurity landed in v4.13, the last of the work needed to enable automatic struct selection landed in v4.14. This means that the coverage of randomized structures, via CONFIG_GCC_PLUGIN_RANDSTRUCT, now includes one of the major targets of exploits: function pointer structures. Without knowing the build-randomized location of a callback pointer an attacker needs to overwrite in a structure, exploits become much less reliable.

structleak passed-by-reference variable initialization
Ard Biesheuvel enhanced the structleak gcc plugin to initialize all variables on the stack that are passed by reference when built with CONFIG_GCC_PLUGIN_STRUCTLEAK_BYREF_ALL. Normally the compiler will yell if a variable is used before being initialized, but it silences this warning if the variable’s address is passed into a function call first, as it has no way to tell if the function did actually initialize the contents. So the plugin now zero-initializes such variables (if they hadn’t already been initialized) before the function call that takes their address. Enabling this feature has a small performance impact, but solves many stack content exposure flaws. (In fact at least one such flaw reported during the v4.15 development cycle was mitigated by this plugin.)

improved boot entropy
Laura Abbott and Daniel Micay improved early boot entropy available to the stack protector by both moving the stack protector setup later in the boot, and including the kernel command line in boot entropy collection (since with some devices it changes on each boot).

eBPF JIT for 32-bit ARM
The ARM BPF JIT had been around a while, but it didn’t support eBPF (and, as a result, did not provide constant value blinding, which meant it was exposed to being used by an attacker to build arbitrary machine code with BPF constant values). Shubham Bansal spent a bunch of time building a full eBPF JIT for 32-bit ARM which both speeds up eBPF and brings it up to date on JIT exploit defenses in the kernel.

seccomp improvements
Tyler Hicks addressed a long-standing deficiency in how seccomp could log action results. In addition to creating a way to mark a specific seccomp filter as needing to be logged with SECCOMP_FILTER_FLAG_LOG, he added a new action result, SECCOMP_RET_LOG. With these changes in place, it should be much easier for developers to inspect the results of seccomp filters, and for process launchers to generate logs for their child processes operating under a seccomp filter.

Additionally, I finally found a way to implement an often-requested feature for seccomp, which was to kill an entire process instead of just the offending thread. This was done by creating the SECCOMP_RET_ACTION_FULL mask (née SECCOMP_RET_ACTION) and implementing SECCOMP_RET_KILL_PROCESS.

That’s it for now; please let me know if I missed anything. The v4.15 merge window is now open!

© 2017, Kees Cook. This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 License.
Creative Commons License

They don’t make ’em like they used to (two years ago)

I know I may be suffering from Early Onset Grumpiness, because I agree with this rant about how laptops aren’t as good as they were in the good old days (two years ago).

November 14, 2017

Firefox is (still) great

Perhaps due to my history with Mozilla and Firefox, I’ve been a happy Firefox user since version 0.6 (14 years ago!?). In recent years, the Chrome browser from Google has become the most commonly used browser, especially among web developers.

I’m delighted to see that Mozilla has made great strides in improving Firefox and is winning people over again. The latest version of Firefox, released today, is worth a try. There are also some great nerdy details on how Firefox has improved.

Even if Firefox doesn’t regain the market-share it once had, these efforts push the other browser vendors forward and generally improve the Web as a platform.

Thanks and congrats to all of the designers, engineers, and other humans who continue to improve Firefox. The update icon looks slick too.

November 13, 2017

Interview with Lars Pontoppidan

Could you tell us something about yourself?

Yes certainly! I’m Lars Pontoppidan; a 36 year old, self-employed programmer, game developer, musician and artist.

I’ve been drawing and painting since I could put my pen to the paper – so about 35 years.

I made my first recognizable painting when I was around 3 or 4 – my mom still has it framed at her house.

I’ve always wanted to end up at some level where I could combine all my skills and hobbies. Somewhere along the way I found out that game development demands a lot of the skills I possess – so 1.5 years ago, I decided to cancel all my contracts with my clients and go for a new path in life as “indie game developer”. I’ve now found out that it’s probably the worst time I could ever have picked to get into indie game development. The bubble has more or less already burst. There’s simply too many game releases for the consumers to cope with at the moment. But hey, I’ve tried worse, so it doesn’t really bother me – and I get to make art with Krita!

Do you paint professionally, as a hobby artist, or both?

Both I’d say. I’ve always been creating things on a hobby level – but have also delivered a lot of designs, logos and custom graphics as self-employed. I like the hobby work the most – as there are no deadlines or rules for when the project is done.

What genre(s) do you work in?

Cartooning, Digital painting, Animation and Video game art. All these (and maybe more) blend in when producing a game. I also like painting dark and gloomy pictures once in a while.

I think I’ve mostly done cartoon styled work – but with a grain of realism in it. My own little mixture.

I started out with pencil and paper – moved to the Deluxe Paint series, when I got my first Amiga – and ended up with Krita (which is an absolute delight to work with. Thanks to you guys!). I still occasionally do some sketching with pencil and paper – depending on my mood.

Whose work inspires you most — who are your role models as an artist?

* A list of sci-fi and fantasy artists too long to compile here; Peter Elson is the first that comes to mind. These artists, in my opinion, lay the very foundation of what’s (supposedly) possible with human technology – and currently, our only glimpse of how life might look in other places in the vast universe that surrounds us. It’s mind-blowing how they come up with all the alien designs they do.

* Salvador Dalí – It’s hard to find the right words for his creations – which, I think, is why his works speak to me.

* “vergvoktre” He’s made some really dark, twisted and creepy creations that somehow get under my skin.

How and when did you get to try digital painting for the first time?

The very first digital painting program I’ve ever tried was KoalaPainter for the Commodore 64. I had nothing but a joystick and made, if I recall correctly, a smiley face in black and white.

Thankfully my Amiga 500 came with a copy of Deluxe Paint IV, a two-button mouse and the luxury of a 256+ color palette.

What makes you choose digital over traditional painting?

The glorious “Undo” buffer. I mean… It’s just magic. Especially in the first part of the day (before the first two cups of coffee) when your hand just won’t draw perfect circles, nor any straight lines.

How did you find out about Krita?

I read an article about the Calligra office suite online. It described how Calligra compared to OpenOffice. I eventually installed it to see for myself – and boom, there was Krita as part of the package. This was my first encounter – unfortunately it ended up with an uninstall, because of stability issues with the Calligra suite in general.

What was your first impression?

The first impression was actually really good – unfortunately it ended up a bit in the shadows of the Calligra suite’s combined impression. This wasn’t so positive after a few segfaults in the different applications. Luckily I tried Krita later when it entered the Qt5 based versions. I haven’t looked back since.

What do you love about Krita?

The brush engines and the “Layers” docker.

The brushes, and most of the default settings for them, just feel right. Also the many options to tweak the brushes are really awesome.

The layers docker was actually what gave me the best impression of the program – you had working group layers – and you could even give layers identical names! None of the graphics creation applications I used a few years back had these basic, fundamental features done right (Inkscape and GIMP – I’m looking at you). Krita’s layers didn’t feel broken or hacked-on, and there were no naming-scheme limitations. A small thing that has made a big difference to me.

What do you think needs improvement in Krita? Is there anything that really annoys you?

Uhm… I was going to write ‘speed’ – but everybody is screaming for more of that already. I know how the developers are doing their best to get more juice.

Some great overall stability would be nice. I’ve only ever had 2 or 3 crashes with GIMP over a long period of time – the count is a bit higher with Krita – on a shorter time scale.

My biggest feature request would be: cut-and-paste functionality through multiple layers, which also pastes into separate layers. This would greatly improve my workflow. I’ve always worked with a group layer containing separate layers for outline, color, texture, shadow etc. – on each e.g. movable part in a character rig. So I would really benefit from a (selection based) cut-and-paste that could cut through all the selected layers – and paste all these separate selection+layers elsewhere in the layer tree.

What sets Krita apart from the other tools that you use?

I find that most of Krita’s tools actually do what you expect them to do – without any weird limitations or special cases. Plus the different brushes, brush engines and all the flexibility to tweak them, are real killer features.

The non-destructive masks (Transparency, Filter and Transform) are also on my list of favourite features. I use these layer types a lot when creating game art – to make them blend in better with the game backgrounds.

And maybe the single most important thing: it’s free and open source. So I’m quite certain I will be able to open up my old Krita files many years into the future.

… and speaking of the future; I really look forward to getting my hands dirty with the Python scripting API.

If you had to pick one favourite of all your work done in Krita so far, what would it be, and why?

It would have to be the opening scene of my upcoming 2D game “non”. It uses a great variety of Krita’s really awesome and powerful features. The scenes in the game feature full day and night cycles where all the lighting and shadows change dynamically – this makes it especially hard to get beautifully painted scenes in all the states each scene has between day and night. Krita’s tool set makes it easier and quicker for me to test out a specific feature for an object or sprite – before throwing it into the game engine.

The biggest scene I have so far is 10200×4080 pixels – Krita was actually performing decently up to a certain point where I had to break the scene into smaller projects. I’m not blaming Krita for this 🙂

What techniques and brushes did you use in it?

For cartoon styled work I use a Group layer containing:
* background blend (Transparency Mask)
* shadows (Paint layer)
* outlines (Paint layer)
* textures (Group layer)
* solid base color(s) (Paint layer)

For outlines I use the standard Pixel brush ‘Ink_gpen_10’ – it has a really nice sharp edge at small tip sizes. For texturing I mostly use the ‘Splatter_thin’ Pixel brush – with both standard and custom brush tips and settings depending on the project at hand. For shadowing I really like the ‘Airbrush_pressure’ and ‘Airbrush_linear_noisy’ Pixel brushes. I use a selection mask based on the solid base color layer (Layer name -> Right mouse click -> Select Opaque) – and start shadowing the object.

Where can people see more of your work?

In my games:
On my band album covers:

Anything else you’d like to share?

I’d like to thank everyone involved with Krita for making this great open source and free software available to the world. I hope to soon get enough time on my hands to help the project grow.

Take care and be nice to each other.

Selling the Flu Shot

Every time I see a news story about the flu shot (which is available for free on Prince Edward Island this year), there’s always a stock photo close-up of a needle jabbing into an arm.

If you want to encourage people to get the flu shot, don’t use a photo of the one thing people don’t like about the flu shot.

I was able to get the flu shot for free without an appointment (or any wait time) at Shoppers Drug Mart.

November 09, 2017

Learn Digital Painting with Krita in Bogota, Colombia

Lina Porras and David Saenz from the Ubuntu Colombia user group wrote to tell us that they will give an introduction to digital painting with Krita starting this Saturday. David will be teaching Krita over four Saturday sessions.

It will be an introductory course where people aged 14 and older will learn the basics of digital painting and will start painting in Krita. Here is more information:

And you can follow them on Twitter (@ubuntco) and Facebook as well:

If you are thinking of organizing a Krita course for your local user group, community art college or similar, contact us so we can help you spread the word, too!

November 08, 2017

The Zen Microwave

Here’s a free invention idea for you (in that I have a stupid idea and will not do anything with it): The Zen Microwave.

The Zen Microwave only has one control: an on/off switch. Once you’ve started it, you have to remember to turn it off or your left-overs will turn into exploding spaghetti-charcoal.

If you want to heat up your lunch, it can go one of two ways:

  1. Put your lunch in the microwave and turn it on (remember, there’s no timer)
  2. Stand there for two minutes, focused and present
  3. Turn the microwave off and enjoy your hot lunch


  1. Put your lunch in the microwave and turn it on
  2. Wander around the kitchen, get a glass of water, check in on your online click-farm business on your phone, thumb through the Home Hampers & Hobbits flyer
  3. Smell burning and notice that your lunch has been super-heated for 10 minutes

Is it dangerous? Yes – but how else are you ever going to learn?

November 07, 2017

Hardware CI Tests in fwupd

Near the end of the process of getting a vendor on the LVFS, I normally ask them to send me hardware for the tests. Once we’ve got a pretty good idea that the hardware update process is going to work with fwupd (i.e. they’re not insisting on some static linked ELF to be run…) and when they’ve got legal approval to upload the firmware to the LVFS (without an eyewateringly long EULA) we start thinking about how to test the hardware. Once we say “Product Foo from Vendor Bar is supported in Linux” we’d better make damn sure it doesn’t regress when something in the kernel changes or when someone refactors a plugin to support a different variant of a protocol.

To make this task a little more manageable, we have a little python script that helps automate testing of the devices that can be persuaded to enter DFU mode by themselves. To avoid chaos, I also have a little cardboard tray under a little HP Microserver with two 10-port USB hubs with everything organised. Who knew paper-craft would be such an important skill at Red Hat…
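
The script itself isn’t shown here, but the general shape is something like the following sketch. To be clear: the attach/detach verbs and the device IDs below are assumptions made for illustration, not the contents of the real script:

```python
#!/usr/bin/env python3
"""Illustrative sketch of DFU-cycling automation. The real script is
not published in this post; the `fwupdtool attach/detach` invocation
and the device IDs are assumptions for the sake of the example."""
import subprocess


def cycle_device(device_id, runner=subprocess.run):
    """Drive a device into bootloader (DFU) mode and back again,
    returning True only if both steps succeed."""
    for verb in ("detach", "attach"):
        try:
            result = runner(["fwupdtool", verb, device_id],
                            capture_output=True, text=True)
        except OSError:
            return False          # tool missing or not runnable
        if result.returncode != 0:
            return False
    return True


if __name__ == "__main__":
    # A real run would enumerate the tray of test hardware first,
    # e.g. with `fwupdmgr get-devices`.
    for dev in ("hub1-port1", "hub1-port2"):   # hypothetical IDs
        print(dev, "ok" if cycle_device(dev) else "needs manual attention")
```

Anything the script reports as needing attention falls back to the pressing-and-holding-buttons routine described below.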

As the astute might notice, much of the hardware is a bare PCB. I don’t actually need the complete device for testing, and much of the donated hardware is actually a user return or with a cosmetic defect, or even just a pre-release PCB without the actual hardware attached. This is fine, and actually preferable to the entire device – I only have a small office!

As much of the hardware needs special handling to put it into update mode, we can’t 100% automate this task, and sometimes it really is just me sitting in front of the laptop pressing and holding buttons for 30 minutes before uploading a tarball, but it sure is comforting to know that firmware updates are tested like this. As usual, thanks should be directed to Red Hat for letting me work on this kind of stuff; they really are a marvelous company to work for.

November 06, 2017

4.0 Development Update

And then we realized we hadn’t posted news about ongoing Krita development for some time now. The main reason is that we’ve, well, been really busy doing development. The other reason is that we’re stuck making fully-featured preview builds on OSX and Linux. More about that later…

So, what’s been going on? Some of the things we’ve been doing were backported to Krita 3.2 and 3.3, like support for the Windows 8 Pointer API, support for the ANGLE Direct3D display renderer, the new gmic-qt G’Mic plugin, new commandline options, support for touch painting, the new smart patch tool, new brush presets and blending modes… But there is also a lot of other work that simply couldn’t be backported to 3.x.

The last time we did a development update with Krita 4.0 was in June 2017: the first development build for 4.0 already had a large number of new features:

  • the SVG-based vector layers with improved vector handling tools,
  • Allan Marshall’s new airbrush system,
  • Eugene Ingerman’s healing brush,
  • a new export system that reports which parts of your image cannot be saved to your chosen file format. This has since been improved: saving now happens in the background, so you can press save and continue painting. Autosave no longer interrupts your painting either.
  • Wolthera’s new and improved palette docker
  • A new docker for loading SVG symbol collections, which now comes with a new symbol library with brush preset icons. Perfect with the new brush editor.
  • We added Python scripting (only available in the Windows builds: we need platform maintainers). Eliakin and Wolthera have spent the summer adding great new Python-based plugins, extending and improving the scripting API as they worked:
    • Ten brushes: a script to assign ten favorite brushes to hotkeys
    • Quick settings docker: with brush size, opacity and flow
    • Comic Projects Management tools
    • And much, much more

What has been recently added to Krita 4.0

Big performance improvements

After the development build release we sent out a user survey. In case you didn’t see the results, this was the summary.

The biggest item on the list was lag. Lag can have many meanings, and there will always be brushes or operations that are not instant. But over the past couple of months we had the opportunity to work on an outside project to help improve Krita’s performance. While we knew this might delay the release of Krita 4.0, it would be much appreciated by artists. The performance improvements include the following:

  • multi-threaded performance using all your CPUs for pixel brush engines (80% of all the brushes that are made).
  • A lot of speed optimizations with dab grouping for all brushes
  • more caching to speed up brush rendering across all brushes.

Here’s a video of Wolthera using the multithreaded brushes:

Performance Benchmarking

We also added performance benchmarking, so we can see much more accurately how brushes are performing and optimize them further in the future:



Pixel Grid

Andrey Kamakin added an option to show a thin grid around pixels if you zoom in enough:

Live Brush Preview

Scott Petrovic has been working with a number of artists to rework the brush editor. Many things have changed, including brush renaming and better saving options. There’s also a live stroke preview now, so you can see what happens when you change settings. Parts of the editor can be shown or hidden to accommodate smaller monitors.

Isometric Grid

The grid now has a new Isometric option. This can be controlled and modified through the grid docker:


  • A new edge detection filter
  • Height to normal map filter
  • Improved gradient map filter
  • A new ASC-CDL color balance filter with slope, offset and power parameters


  • File layers now can have the location of their reference changed.
  • A convert layer to file layer option has been added that saves out layers and replaces them with a file layer referencing them.


  • A new docker for use on touch screens: big buttons in a layout that resembles the button row of a Wacom tablet.


And there are of course lots of bug fixes, UI polish, performance improvements and small feature improvements. The list is too long to keep here, so we’re working on a separate release notes page. These notes, like this Krita 4.0 build, are very much a work in progress!

Features we are currently working on

There are still a number of features we want to have done before we release Krita 4.0:

  • a new text tool (we have started the ground work for this, but it still needs a lot more work)
  • a faster colorize mask tool (we need to make this much faster as it is currently too slow)
  • stacked brushes where you can have multiple brush tips similar to other applications.

And then there are no doubt things missing from the big new features, like SVG vector layers and Python scripting, that still need to be implemented, and there will be bugs that need to be fixed. We’ve made packages for you to download and test, but be warned: there are bugs. And:

This is pre-alpha code. It will crash. It will do weird things. It might even destroy your images on saving!


You can have both Krita 3 and Krita 4 on the same system. They will use the same configuration (for now, that might change), which means that either Krita 3 or Krita 4 can get confused. They will use the same resources folder, so brush presets and so on are shared.


Right now, all releases and builds, except for the Lime PPA, are created by the project maintainer, Boudewijn Rempt. This is not sustainable! Only for the Windows build, a third person is helping out by maintaining the scripts needed to build and package Krita. We really do need people to step up and help maintain the Linux and macOS/OSX builds. This means that:

  • The Linux AppImage is missing Python scripting and sound playback. It may be missing the QML-based touch docker. We haven’t managed to figure out how to add those features to the appimage! The appimage build script is also seriously outdated, and Boudewijn doesn’t have time to improve it, next to all the other things that need to be done and managed and, especially, coded. We need a platform maintainer for Linux!
  • The OSX/macOS DMG is missing Python scripting as well as PDF import and G’Mic integration. Boudewijn simply does not have the in-depth knowledge of OSX/macOS needed to figure out how to add that properly to the OSX/macOS build and packages. Development on OSX is picking up, thanks to Bernhard Liebl, but we need a platform maintainer for macOS/OSX!

Windows Download

Note for Windows users: if you encounter crashes, please follow these instructions to use the debug symbols so we can figure out where Krita crashes.

There are no 32-bit Windows builds yet. There is no installer.

Linux Download

(If, for some reason, Firefox thinks it needs to load this as text: to download, right-click on the link.)

OSX Download

Note: the gmic-qt and pdf plugins are not available on OSX.

Source code


For all downloads:


The Linux appimage and the source tarball are signed. You can retrieve the public key over https here. The signatures are here.

Support Krita

Krita is a free and open source project. Please consider supporting the project with donations or by buying training videos or the artbook! With your support, we can keep the core team working on Krita full-time.

November 04, 2017

Things to Consume Your Time & Attention

A few things I’ve been enjoying recently:

The Vietnam War by Ken Burns & Lynn Novick

The Vietnam War is a ten-part, 18-hour documentary from PBS. It’s available to stream for free on the PBS website/app, but only if you live in the US. For those of us fortunate enough not to live in the Greatest Country on Earth™, there are some hoops to jump through to watch it.

The easiest set of hoops I’ve found is to use the free built-in VPN in the Opera web browser. You can easily select the US as your VPN zone and enjoy the web as our freedom-loving friends see it.

The documentary is exhaustive and is propelled by remarkable interviews with participants from Both Sides™ of the war. It will leave you wondering how it could have ever been allowed to happen, and terrified that it will happen again.

How F*cked Up Is Your Management? by Jonathan & Melissa Nightingale

Jonathan & Melissa Nightingale have gathered some of their best writing from their excellent blog, The Co-Pour. They discuss and share what they’ve learned about the mechanics and humanity of working with people in a business. It’s short, easy to read, has a profane title, and a fine domain name:

Hidden America with Jonah Ray

I came to know Jonah Ray through his co-hosting of the Nerdist podcast. He plays the host of a fake travel show, Hidden America with Jonah Ray, which is generally a spoof of low-budget cable travel shows, but occasionally drifts into absurd horror.

Watching it from outside the US is frustrating. It’s a great show – but it’s barely worth jumping through the hoops required. Like The Vietnam War doc, you’ll need the Opera VPN to watch from outside of the US, then you need to create an account on the VRV site, and even then the show is peppered with ads that randomly interrupt playback. As I said, barely worth it – but still worth it.

If you’re not sure you want to jump through all of those hoops, the first episode (Boston) is on YouTube as is a short trailer.

Meet is Murder

I’d like to write a business-oriented self-help book, just so I could have a chapter titled “Meet is Murder.”

Now all I need is the contents for that chapter, and all of the other chapters. Keep an eye out for it on the best-seller lists in 8 to 10 years.

November 03, 2017

Krita 3.3.2 Released

Today we are releasing Krita 3.3.2, a bugfix release for Krita 3.3.0. This release fixes two important regressions:

  • Krita 3.3.1 would read brush presets with textures incorrectly. This is now fixed.
  • The Windows 10 1709 update broke Wintab and Windows Ink tablet handling in various ways; we worked around that, and tablet input works again in this version of Krita.

Additionally, there are the following fixes and improvements:

  • Animation: make it possible to export empty frames after the end of the animation.
  • Animation: make it possible to render up to 10,000 frames
  • Add a command-line option to start Krita with a new, empty image: krita --new-image RGBA,8,5000,3000
  • Performance: improved caching for effect and selection masks
  • Performance: Fix a leak in the smudge brush
  • Performance: Improve performance when using the hardware-accelerated canvas
  • Performance, Windows: improve the performance when loading icons
  • macOS: render the frames-per-second overlay widget correctly
  • Filters: it’s now possible to edit the filter’s settings directly in the XML that is used to save filter definitions to .krita files.
  • Filters: a new ASC_CDL color balance filter was added, with Slope, Offset and Power options.
  • Crashes: fix a crash that happened when closing a second document with infinite canvas active
  • Layers: Make it possible to copy group layers
  • UI: make it possible to use the scroll-wheel to scroll through patterns when the patterns palette is very narrow.
  • UI: Improve drag and drop feedback in the layer panel
  • UI: Hide the lock and collapse titlebar icons when a panel is floating
  • G’Mic: the included G’Mic is updated to the latest release.



Note for Windows users: if you encounter crashes, please follow these instructions to use the debug symbols so we can figure out where Krita crashes.


(If, for some reason, Firefox thinks it needs to load this as text: to download, right-click on the link.)

When it is updated, you can also use the Krita Lime PPA to install Krita 3.3.2 on Ubuntu and derivatives. There is also an updated snap.


Note: the gmic-qt and pdf plugins are not available on OSX.

Source code


For all downloads:


The Linux AppImage and the source tarball are signed. You can retrieve the public key over HTTPS here. The signatures are here.

Support Krita

Krita is a free and open source project. Please consider supporting the project with donations or by buying training videos or the artbook! With your support, we can keep the core team working on Krita full-time.

Programming an ATtiny85, Part 1: Using C with a USBtinyISP

[ATtiny85 and USBtinyISP programmer] Arduinos are great for prototyping, but for a small, low-power, cheap and simple design, an ATtiny chip seems like just the ticket. For just a few dollars you can do most of what you could with an Arduino and use a lot of the same code, as long as you can make do with a little less memory and fewer pins.

I've been wanting to try them, and recently I ordered a few ATtiny85 chips. There are quite a few ways to program them. You can buy programmers specifically intended for an ATtiny, but I already had a USBtinyISP, a chip used to program Arduino bootloaders, so that's what I'll discuss here.

Wiring to the USBtinyISP

[ATtiny85 and USBtinyISP wiring] The best reference I found on wiring was Using USBTinyISP to program ATTiny45 and ATTiny85. That's pretty clear, but I made my own Fritzing diagram, with colors, so it'll be easy to reconstruct it next time I need it. The colors I used:

MISO   yellow        VCC    red
SCK    white         MOSI   green
RESET  orange or red/black
GND    black

Programming the ATtiny in C

I found a couple of blink examples: Getting Started with ATtiny AVR programming, and a Stack Exchange thread, How to program an AVR chip in Linux. Here's some basic blink code:

#include <avr/io.h>
#include <util/delay.h>

int main (void)
{
    // Set Data Direction to output on port B, pin 3:
    DDRB = 0b00001000;
    while (1) {
        // set PB3 high
        PORTB = 0b00001000;
        _delay_ms(500);
        // set PB3 low
        PORTB = 0b00000000;
        _delay_ms(500);
    }
    return 1;
}
Then you need a Makefile. I started with the one linked from the electronut page above. Modify it if you're using a programmer other than a USBtinyISP. make builds the program, and make install loads it to the ATtiny. And, incredibly, my light started blinking, the first time!
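
For reference, the general shape of such a Makefile looks roughly like this (a sketch under assumptions, not the electronut original or my exact file; DEVICE, CLOCK, and the blink.c file name are illustrative):

```make
# Illustrative Makefile sketch for an ATtiny85 + USBtinyISP.
# Adjust DEVICE, CLOCK and the file names to match your setup;
# see "Timing Woes" below for why CLOCK may need to be 1000000.
DEVICE     = attiny85
CLOCK      = 8000000
PROGRAMMER = -c usbtiny

COMPILE = avr-gcc -Wall -Os -DF_CPU=$(CLOCK) -mmcu=$(DEVICE)
AVRDUDE = avrdude $(PROGRAMMER) -p $(DEVICE)

all: blink.hex

blink.elf: blink.c
	$(COMPILE) -o blink.elf blink.c

blink.hex: blink.elf
	avr-objcopy -j .text -j .data -O ihex blink.elf blink.hex

install: blink.hex
	$(AVRDUDE) -U flash:w:blink.hex:i
```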

[ATtiny85 pinout] Encouraged, I added another LED to make sure I understood. The ATtiny85 has six pins you can use (the other two are power and ground). The pin numbers correspond to the bits in DDRB and PORTB: my LED was on PB3. I added another LED on PB2 and made it alternate with the first one:

    DDRB = 0b00001100;
[ ... ]
        // set PB3 high, PB2 low
        PORTB = 0b00001000;
        // set PB3 low, PB2 high
        PORTB = 0b00000100;

Timing Woes

But wait -- not everything was rosy. I was calling _delay_ms(500), but it was waiting a lot longer than half a second between flashes. What was wrong?

For some reason, a lot of ATtiny sample code on the web assumes the chip is running at 8MHz. The chip's internal oscillator is indeed 8MHz (though you can also run it with an external crystal at various speeds) -- but its default mode uses that oscillator in "divide by eight" mode, meaning its actual clock rate is 1MHz. But Makefiles you'll find on the web don't take that into account (maybe because they're all copied from the same original source). So, for instance, the Makefile I got from electronut has

CLOCK = 8000000

If I changed that to

CLOCK = 1000000

then my delays were proper milliseconds, as I'd specified. Here's my working attiny85 blink Makefile.
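
The factor-of-eight slowdown falls straight out of the clock mismatch. A quick sanity check of the arithmetic (actual_delay_ms is a hypothetical helper, just illustrating the ratio):

```python
def actual_delay_ms(requested_ms, assumed_hz, real_hz):
    """Scale a requested delay by the ratio of the assumed to the real clock rate.

    avr-libc's _delay_ms() counts cycles based on F_CPU, so if F_CPU is
    declared 8x higher than the chip's real clock, every delay runs 8x long.
    """
    return requested_ms * assumed_hz / real_hz

# _delay_ms(500) compiled with CLOCK = 8000000, on a chip really running at 1 MHz:
print(actual_delay_ms(500, 8_000_000, 1_000_000))  # → 4000.0, i.e. 4 seconds
```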

In case you're curious about clock rate, it's specified by what are called fuses, which sound permanent but aren't: they hold their values when the chip loses power, but you can set them over and over. You can read the current fuse settings like this:

avrdude -c usbtiny -p attiny85 -U lfuse:r:-:i -v
which should print something like this:
avrdude: safemode: hfuse reads as DF
avrdude: safemode: efuse reads as FF
avrdude: safemode: Fuses OK (E:FF, H:DF, L:62)

To figure out what that means, go to the Fuse calculator, scroll down to Current settings and enter the three values you got from avrdude (E, H and L correspond to Extended, High and Low). Then scroll up to Feature configuration to see what the fuse settings correspond to. In my case it was Int. RC Osc. 8 Mhz; Start-up time PWRDWN/RESET; 6CK/14CK+ 64ms; [CKSEL=1011 SUT=10]; default value and Divide clock by 8 internally; [CKDIV8=0] was checked.
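
If you'd rather decode the clock bits without the web calculator, the relevant fields of the low fuse can be masked out directly. A sketch, assuming the ATtiny85 bit layout (CKDIV8 in bit 7, CKSEL in bits 0-3, and the AVR convention that a fuse bit reading 0 means "programmed"):

```python
def decode_low_fuse(lfuse):
    """Decode the clock-related bits of an ATtiny85 low fuse byte."""
    ckdiv8_programmed = (lfuse & 0x80) == 0   # bit 7; 0 means "programmed" (divide by 8)
    cksel = lfuse & 0x0F                      # bits 0-3 select the clock source
    return ckdiv8_programmed, cksel

# The factory default L:62 read by avrdude above:
div8, cksel = decode_low_fuse(0x62)
print(div8)        # True  -> clock divided by 8 internally
print(bin(cksel))  # 0b10  -> internal RC oscillator, 8 MHz; net clock is 1 MHz
```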

More on ports and pins

There's more info on ATtiny ports in ATTiny Port Manipulation (Part 1): PinMode() and DigitalWrite().

Nobody seems to have written much about AVR/ATtiny programming in general. Symbols like PORTB and functions like _delay_ms() come from files in /usr/lib/avr/include/, at least on my Debian system. There's not much there, so if you want library functions to handle nontrivial hardware, you'll have to write them or find them somewhere else.

As for understanding pins, you're supposed to go to the datasheet and read it through, all 234 pages. Hint: for understanding basics of reading from and writing to ports, speed forward to section 10, I/O Ports. A short excerpt from that section:

Three I/O memory address locations are allocated for each port, one each for the Data Register - PORTx, Data Direction Register - DDRx, and the Port Input Pins - PINx. The Port Input Pins I/O location is read only, while the Data Register and the Data Direction Register are read/write. However, writing a logic one to a bit in the PINx Register, (comma sic) will result in a toggle in the corresponding Data Register. In addition, the Pull-up Disable - PUD bit in MCUCR disables the pull-up function for all pins in all ports when set.

There's also some interesting information there about built-in pull-up resistors and how to activate or deactivate them.

That's helpful, but here's the part I wish they'd said:

PORTB (along with DDRB and PINB) represents all six pins. (Why B? Is there a PORTA? Not as far as I can tell; at least, no PORTA is mentioned in the datasheet.) There are six output pins, corresponding to the six pins on the chip that are not power or ground. Set the bits in DDRB and PORTB to correspond to the pins you want to set. So if you want to use pins 0 through 3 for output, do this:

    DDRB = 0b00001111;

If you want to set logical pins 1 and 3 (corresponding to pins 6 and 2 on the chip) high, and the rest of the pins low, do this:

    PORTB = 0b00001010;

To read from pins, use PINB.
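
Those binary masks are just sums of powers of two, so if writing them by hand gets error-prone you can compute them instead. Here pin_mask is a hypothetical helper for illustration, not anything from avr-libc:

```python
def pin_mask(*pins):
    """Build a DDRB/PORTB-style bit mask from a list of PB pin numbers (0-5)."""
    mask = 0
    for pin in pins:
        mask |= 1 << pin   # each pin number is a bit position in the register
    return mask

print(bin(pin_mask(0, 1, 2, 3)))  # 0b1111 -> DDRB = 0b00001111 (pins 0-3 as output)
print(bin(pin_mask(1, 3)))        # 0b1010 -> PORTB = 0b00001010 (pins 1 and 3 high)
```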

In addition to basic functionality, all the pins have specialized uses, like timers, SPI, ADC and even temperature measurement (see the diagram above). The datasheet goes into more detail about how to get into some of those specialized modes.

But a lot of those specialties are easier to deal with using libraries. And there are a lot more libraries available for the Arduino C++ environment than there are for a bare ATtiny using C. So the next step is to program the ATtiny using Arduino ... which deserves its own article.

November 02, 2017

Quirks in fwupd as key files

In my previous blog post I hinted that you just have to add one line to a data file to add support for new AVR32 microcontrollers, and this blog entry gives a few more details.

A few minutes ago I merged a PR that moves the database of supported and quirked devices out of the C code and into runtime loaded files. When fwupd is installed in long-term support distros it’s very hard to backport new versions as new hardware is released. The idea with this functionality is that the end user can drop an additional (or replace an existing) file in a .d directory with a simple format and the hardware will magically start working. This assumes no new quirks are required, as this would obviously need code changes, but allows us to get most existing devices working in an easy way without the user compiling anything.

The quirk files themselves are simple key files and are documented in the fwupd gtk-doc documentation.
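
To give a rough idea of the shape: a device entry is a key-file group with a few keys. The entry below is purely hypothetical; the real group naming and supported keys are the ones listed in the fwupd documentation, not these:

```ini
# Hypothetical example only -- consult the fwupd gtk-doc documentation
# for the actual group naming and supported keys.
[USB\VID_03EB&PID_2FF1]
Plugin = dfu
Name = Example AVR32 bootloader
```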

FreeCAD Arch development news - October 2017

Hi there, time for a new report on the development of Architecture and BIM tools for FreeCAD. Remember, you can help me to spend more time working on this by sponsoring me on Patreon, Liberapay or directly (ask me for a PayPal email or bitcoin address). Campaign and future development: since I just recently opened the Liberapay...

November 01, 2017

FOSDEM 2018 – SDN/NFV DevRoom Call for Content

The SDN & NFV DevRoom is back this year for FOSDEM, and the call for content is open until November 16th. Submissions are welcome now!

Here’s the full announcement:

We are pleased to announce the Call for Participation in the FOSDEM 2018 Software Defined Networking and Network Functions Virtualization DevRoom!

Important dates:

  • Nov 16: Deadline for submissions
  • Dec 1: Speakers notified of acceptance
  • Dec 5: Schedule published

This year, as in the past two years, the DevRoom topics will cover two distinct fields:

  • Software Defined Networking (SDN), covering virtual switching, open source SDN controllers, virtual routing
  • Network Functions Virtualization (NFV), covering open source network functions, NFV management and orchestration tools, and topics related to the creation of an open source NFV platform

We are now inviting proposals for talks about Free/Libre/Open Source Software on the topics of SDN and NFV. This is an exciting and growing field, and FOSDEM gives an opportunity to reach a unique audience of very knowledgeable and highly technical free and open source software activists.

This year, the DevRoom will focus on the emergence of cloud native Virtual Network Functions, and the management and performance requirements of those applications, in addition to our traditional focus on high performance packet processing.

A representative, but not exhaustive, list of the projects and topics we would like to see on the schedule are:

  • Low-level networking and switching: IOvisor, eBPF, XDP, DPDK, Open vSwitch, OpenDataplane, Free Range Routing, …
  • SDN controllers and overlay networking: OpenStack Neutron, Calico, OpenDaylight, ONOS, Plumgrid, OVN, OpenContrail, Midonet, …
  • NFV Management and Orchestration: ONAP, ManageIQ, Juju, OpenBaton, Tacker, OSM, network management, …
  • NFV related features: Service Assurance, enforcement of Quality of Service, Service Function Chaining, fault management, dataplane acceleration, security, …

Talks should be aimed at a technical audience, but should not assume that attendees are already familiar with your project or how it solves a general problem. Talk proposals can be very specific solutions to a problem, or can be higher level project overviews for lesser known projects.

Please include the following information when submitting a proposal:

  • Your name
  • The title of your talk (please be descriptive, as titles will be listed alongside around 250 others from other projects)
  • Short abstract of one or two paragraphs
  • Short bio (with photo)

The deadline for submissions is November 16th 2017. FOSDEM will be held on the weekend of February 3-4, 2018 and the SDN/NFV DevRoom will take place on Saturday, February 3, 2018. Please use the FOSDEM submission website to submit your proposals (you do not need to create a new Pentabarf account if you already have one from past years). You can also join the devroom’s mailing list, which is the official communication channel for the DevRoom.

The Networking DevRoom 2018 Organization Team

October 31, 2017

AVR32 devices in fwupd

Over 10 years ago the dfu-programmer project was forked into dfu-utils as the former didn’t actually work at all well with generic devices supporting vanilla 1.0 and 1.1 specification-compliant DFU. It was then adapted to also support the STM variant of DFU (standards FTW). One feature that dfu-programmer did have, which dfu-util never seemed to acquire was support for the AVR variant of DFU (very different from STM DFU, but doing basically the same things). This meant if you wanted to program AVR parts you had to use the long-obsolete tool rather than the slightly less-unmaintained newer tool.

Today I merged a PR in fwupd that adds support for flashing AVR32 devices from Atmel. These are the same chips found in some Arduino prototype boards, and are also the core of many thousands of professional devices like the Nitrokey device. You can already program this kind of hardware in Linux, using clunky commands like:

# dfu-programmer at32uc3a3256s erase
# dfu-programmer at32uc3a3256s flash --suppress-bootloader-mem foo.ihx
# dfu-programmer at32uc3a3256s launch

The crazy long chip identifier is specified manually for each command, as the bootloader VID/PID isn’t always unique for each chip type. For fwupd we need to be able to program hardware without any user input, and without any chance of the wrong chip identifier bricking the hardware. This is possible to do as the chip itself knows its own device ID, but for some reason Atmel wants to make it super difficult to autodetect the hardware by not publishing a table of all the processor types they have produced. I’ll cover in a future blog post how we do this mapping in fwupd, but at least for hardware like the Nitrokey you can now use the little dfu-tool helper executable shipped in fwupd to do:

# dfu-tool write foo.ihx

Or, for normal people, you can soon just click the Update button in GNOME Software which uses the DFU plugin in fwupd to apply the update. It’s so easy, and safe.

If you manufacture an AVR32 device that uses the Atmel bootloader (not the Arduino one), and you’re interested in making fwupd work with your hardware, it’s likely you just have to add one line to a data file. If your dfu-tool list already specifies a Chip ID along with can-download|can-upload, then there’s no excuse at all, as it should just work. There is a lot of hardware using the AT32UC3, so I’m hopeful spending the time on the AVR support means more vendors can join the LVFS project.

Recommended podcast: Stay Tuned with Preet

Recommended podcast: Stay Tuned with Preet. The podcast is hosted by former U.S. Attorney Preet Bharara. I’d start with episode 1: That Time President Trump Fired Me.

October 30, 2017

Interview with Erica Wagner

Could you tell us something about yourself?

I’m Erica Wagner, a STEAM Nerd, Teenpreneur, Author, Instructor, YouTuber and self-taught 2D and 3D artist. I’ve been doing graphic design for two years, 3D sculpting, voxel art, and 3d modeling for one year, and digital drawing for a little over six months. I’m a homeschool student. My mom uses the majority of my projects as a part of school.

Do you paint professionally, as a hobby artist, or both?

Currently I’m a hobby artist but learning different art forms so I can make my own games and eventually my own animations.

What genre(s) do you work in?

I work in mostly science, cyber, sci-fi, and nature. Most of the work I make is STEAM related due to loving those areas, which include but are not limited to movies, shows, games and books. Movies such as Star Wars, Interstellar, and Guardians of The Galaxy. An example of the shows I have watched are Gravity Falls and Doctor Who. Some of the games I have played are Hack ‘n’ Slash, Portal 2, Niche, and Robocraft. Lastly, some of my favorite books are the Nancy Drew Series, and Jurassic Park 1 & 2.

Whose work inspires you most — who are your role models as an artist?

When it comes to 2D art it would be the following Twitter people: loishh, viiolaceus, Cyarine, and samsantala. I love their styles. Some of them have varying styles of cartoony, realistic, and some have a mix of both. The mixture of realistic and cartoon styles appeal to me because they are realistic in the proportions, details, and colors; yet also cartoony that you’d see in webisodes. I’m not sure what the correct name for this style is but I love it. I want to develop my own style that is similar to this realistic cartoony mix so I can make my own concepts, illustrations, designs, and textures for 3d models.

How and when did you get to try digital painting for the first time?

I’m not sure the exact date but it was sometime in late 2016. Even though I did download Krita and two other programs in 2015, I didn’t actually make anything with them until late 2016. I played and tested the brushes to see what they did. I finally made something for a challenge I created in October 2016 called Artober.

What makes you choose digital over traditional painting?

I have endless resources to use. Don’t get me wrong, I enjoy traditional drawing. I did it a lot when I was younger. I can’t imagine spending money to buy lots of pens, pencils, markers, and other things when at the time I was just doing it for fun. I’m more of a techy person, so doing it digitally lets me play with different brushes without wasting anything. Plus it’s easier to paint 3D models this way and it’s easier to make things for graphics for ads, thumbnails, merch designs, etc.

How did you find out about Krita?

In late 2015, I searched in Google “Free Alternatives to Paint Tool Sai”. At the time, I was downloading all kinds of programs and just playing around in them to see which ones I liked. A website popped up with results of different programs to use instead of Paint Tool Sai. I tried three or four different programs, one of them being Krita.

What was your first impression?

I was so amazed at all the things I could do in Krita. I had all kinds of brushes for different things at the time I had no idea what for, I could make my own animations too! I knew I had no idea how to use these features to make my own stories and worlds come to life but that didn’t matter to me. The fact I had the resource to learn how to make my own designs, concepts, and illustration and an alternative to Photoshop and Paint Tool Sai was great for me. It was such a great program I wondered why I had never heard of it or seen it in tutorials on YouTube. I was really excited to have a program that had all the features I wanted and needed to start the learning process.

What do you love about Krita?

I love how versatile and powerful it is. I can make my own brushes, drawings, animations, vectors, and textures for 3D models. When you’re just starting to teach yourself digital drawing you don’t want to spend hundreds of dollars on programs like Photoshop or Paint Tool Sai, especially if you don’t know if you’ll actually make a career from digital drawing or even like it. With Krita, I feel I’m getting the same amount and powerful features as the big name artists with Photoshop or Paint Tool Sai. The possibilities are endless! I also love that I can customize the layout of Krita to work for me or what I’m doing.

What do you think needs improvement in Krita? Is there anything that really annoys you?

I would like to be able to open a project, my 3D model texture for example, and in the history see the brushes, textures, and patterns I used. Currently Krita only remembers what brushes you used when you last opened it and not the brushes, textures, and patterns you used in certain projects.

What sets Krita apart from the other tools that you use?

For me it’s the vector feature. I also do graphic design and since I’m learning other art forms to make my own props for my graphics this really helps me. When you do graphic design you can use raster images but it helps a lot if you have vector images. Vector images don’t lose quality when you size them up or down. Vector images are really useful when you make Merch designs, ads, thumbnails, cover art, and more. The vector feature is so easy to learn and use. Once I got my brother to use Krita, he used it to make shirt designs and remade his brand’s logo.

If you had to pick one favorite of all your work done in Krita so far, what would it be, and why?

My favorite is the texture for the 3d lowpoly model t-rex I made for a shirt design. This is my favorite because it was my first time painting a texture for a 3D model. Based on the program I was using, there were three ways to paint the model. I knew I wanted to get better at drawing so I decided to take my model’s UV map, which is basically the layout of a 3D object in a 2D cutout like form, and paint it in Krita. While following a tutorial, the model took 10 hours, the texture took 11 hours, and the last 8 hours were for last minute fixing of the model, texture, and making it ready to put on a shirt. Right now 3D is my strong suit so having the 2D texture I was happy with work correctly on the model after working on this whole project for a total of 29 hours just made my entire day. I was so proud of how it all turned out and it looked amazing on the shirt. I’m still new to digital drawing and lowpoly modeling so this was a great experience for me.

What techniques and brushes did you use in it?

I used the Krita ink gpen 25 and the smudge rake 2 brushes. I chose the colors of my brand ScienceHerWay which are white, black, neon and dark shades of pink, purple, and teal and then used some light and dark grey. Certain areas of the dinosaur I made darker to give some details such as the dark purple lines in the lips, a darker shade of the color used on the nails, elbow and knee joints, and a light shade of teal on the inside of the mouth for where the teeth would be. For the pink streaks on the dinosaur’s back and legs I made a line of neon pink with the ink gpen 25 brush and then used the smudge rake 2 brush randomly to make it look like a natural pattern until the neon pink line was gone. I repeated this process with the dark neon pink.

Where can people see more of your work?

Anything else you’d like to share?

I recommend trying art challenges and contests. It’s a great way for you to practice and get out of your comfort zone. Even try art collabs. As long as you find a supportive art community, you shouldn’t have to worry about your skill level when it comes to this. The point is to get to know other artists, practice, and have fun. At the time of writing this, I’m in an art collab myself. I’m still learning how to digitally draw while the others have been doing this for years. It may feel intimidating, but I’m collabing and meeting with people I’ve never met before and we’re all having fun. Plus I can learn from them.

October 27, 2017

Giraffe, Tortoise? Girtoise!

Two Girtoises about to feast on cloud-rooted Bananeries on the plains of the seastern continent. These animals are also known as Toraffes or by their scientific name: Giradinoides. In German, they have the even better name Schiraffen. The Bananeries contain valuable vitamins and minerals which help the animals in maintaining smooth fur and strong shells.

Detail at full resolution:

Available printed on apparel, as poster and a few other forms.

Technical notes

This is a completely tablet-drawn work. With my trusty serial Wacom Intuos, still working as I keep compiling the module after every kernel update. Originally, I wanted to use Krita for the nice paintbrush engine and the canvas rotation. I found the latter to be critical in achieving the smoothest curves, which is a lot easier in a horizontal direction. With what ended up being a 10000 x 10200 resolution and only 4 GiB RAM, I ran into performance problems. Where Krita failed, GIMP still worked, though I had to switch to the development version to have canvas rotation. In the end, GIMP’s PNG export failed due to it not being able to fork a process with no memory left! Flattening the few layers to save memory led to GIMP being killed. Luckily, there’s the package xcftools with xcf2png, so I could get my final PNGs via the command line!

Filed under: Illustration, Planet Ubuntu Tagged: Apparel, GIMP, Krita, T-shirt, xcftools

The Google Weekend

Last week was the Google Summer of Code Mentors Summit, a yearly event organized by Google, where they invite mentors from the Google Summer of Code program, a program that pays students to work on open-source projects. This year, like last year, FreeCAD participated in GSOC. This year we had 4 really good students,...

October 26, 2017

Reading an IR Remote on a Raspberry Pi with LIRC

[IR remote with Raspberry Pi Zero W]

Our makerspace got some new Arduino kits that come with a bunch of fun parts I hadn't played with before, including an IR remote and receiver.

The kits are intended for Arduino and there are Arduino libraries to handle it, but I wanted to try it with a Raspberry Pi as well.

It turned out to be much trickier than I expected to read signals from the IR remote in Python on the Pi. There's plenty of discussion online, but most howtos are out of date and don't work, or else they assume you want to use your Pi as a media center and can't be adapted to more general purposes. So here's what I learned.

Install LIRC and enable the drivers on the Pi

The LIRC package reads and decodes IR signals, so start there:

$ sudo apt-get install lirc python-lirc python3-lirc

Then you have to enable the lirc daemon. Assuming the sensor's pin is on the Pi's GPIO 18, edit /boot/config.txt as root, look for this line and uncomment it:

# Uncomment this to enable the lirc-rpi module
#dtoverlay=lirc-rpi

Reboot. Then use a program called mode2 to make sure you can read from the remote at all, after first making sure the lirc daemon isn't running:

$ sudo service lirc stop
$ ps aux | grep lirc
$ mode2 -d /dev/lirc0

Press a few keys. If you see a lot of output, you're good. If not, check your wiring.

Set up a lircd.conf

You'll need to make an lircd.conf file mapping the codes the buttons send to symbols like KEY_PLAY. You can do that -- in a somewhat slow and painstaking process -- with irrecord.

First you'll need a list of valid key names. Get that with irrecord -l and you'll probably want to keep that window up so you can search or grep in it. Open another window and run:

$ irrecord -d /dev/lirc0 ~/lircd.conf

I had to repeat the command a couple of times; the first few times it couldn't read anything. But once it's running, then for each key on the remote, first, find the key name that most closely matches what you want the key to do (for instance, if the key is the power button, irrecord -l | grep -i power will suggest KEY_POWER and KEY_POWER2). Type or paste that key name into irrecord -d, then press the key. At the end of this, you should have a ~/lircd.conf.

Some guides say to copy that lircd.conf to /etc/lirc/ and I did, but I'm not sure it matters if you're going to be running your programs as you rather than root.

Then enable the lirc daemon that you stopped back when you were testing with mode2. In /etc/lirc/hardware.conf, START_LIRCMD is commented out, so uncomment it. Then edit /etc/lirc/hardware.conf as described in the guide "Setting Up LIRC on the RaspberryPi". Now you can start the daemon:

sudo service lirc start
and verify that it's running: ps aux | grep lirc.

Testing with irw

Now it's time to test your lircd.conf:

$ irw

Press buttons, and hopefully you'll see lines like
0000000000fd8877 01 KEY_2 /home/pi/lircd.conf
0000000000fd08f7 00 KEY_1 /home/pi/lircd.conf
0000000000fd906f 00 KEY_VOLUMEDOWN /home/pi/lircd.conf
0000000000fd906f 01 KEY_VOLUMEDOWN /home/pi/lircd.conf
0000000000fda05f 00 KEY_PLAYPAUSE /home/pi/lircd.conf

If they correspond to the buttons you pressed, your lircd.conf is working.

Reading Button Presses from Python

Now, most tutorials move on to generating a .lircrc file which sets up your machine to execute programs automatically when buttons are pressed, and then you can test with ircat. If you're setting up your Raspberry Pi as a media control center, that's probably what you want (see below for hints if that's your goal). But neither .lircrc nor ircat did anything useful for me, and executing programs is overkill if you just want to read keys from Python.

Python has modules for everything, right? The Raspbian repos have python-lirc, python-pylirc and python3-lirc, and pip has a couple of additional options. But none of the packages I tried actually worked. They all seem to be aimed at setting up media centers and wanted lircrc files without specifying what they need from those files. Even when I set up a .lircrc they didn't work. For instance, in python-lirc, lirc.nextcode() always returned an empty list, [].

I didn't want any of the "execute a program" crap that a .lircrc implies. All I wanted to do was read key symbols one after another -- basically what irw does. So I looked at the irw.c code to see what it did, and it's remarkably simple. It opens a socket and reads from it. So I tried implementing that in Python, and it worked fine: Read LIRC button input from Python.
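
The idea can be sketched in a few lines: connect to lircd's Unix socket and split each event line into its scancode, repeat count, and key name. This is a sketch of the approach, not the script linked above, and the socket path is the common default rather than a guarantee:

```python
import socket

def parse_lirc_line(line):
    """Split one lircd event line into (scancode, repeat count, key name)."""
    code, repeat, key, _conf_file = line.strip().split(None, 3)
    return code, int(repeat, 16), key   # repeat count is hexadecimal

def read_keys(sock_path="/var/run/lirc/lircd"):
    """Yield (code, repeat, key) tuples from the lircd socket, like irw does."""
    sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    sock.connect(sock_path)
    buf = b""
    while True:
        buf += sock.recv(256)
        while b"\n" in buf:               # lircd sends one event per line
            line, buf = buf.split(b"\n", 1)
            yield parse_lirc_line(line.decode())

# An event line as printed by irw parses into its three useful fields:
print(parse_lirc_line("0000000000fda05f 00 KEY_PLAYPAUSE /home/pi/lircd.conf"))
```

Iterating over read_keys() then blocks until a button is pressed and hands you one decoded event at a time.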

While initially debugging, I still saw those

0000000000fda05f 00 KEY_PLAYPAUSE /home/pi/lircd.conf
lines printed on the terminal, but after a reboot they went away, so they might have been an artifact of running irw.

If You Do Want a .lircrc ...

As I mentioned, you don't need a .lircrc just to read keys from the daemon. But if you do want a .lircrc because you're running some sort of media center, I did find two ways of generating one.

There's a bash script called lirc-config-tool floating around that can generate .lircrc files. It's supposed to be included in the lirc package, but for some reason Raspbian's lirc package omits it. You can find and download the bash script with a web search for lirc-config-tool source, and it works fine on Raspbian. It generates a bunch of .lircrc files that correspond to various possible uses of the remote: for instance, you'll get an mplayer.lircrc, a mythtv.lircrc, a vlc.lircrc and so on.

But all those lircrc files lirc-config-tool generates use only small subsets of the keys on my remote, and I wanted one that included everything. So I wrote a quickie script that takes your lircd.conf as input and generates a simple lircrc containing all the buttons represented there. I wrote it to run a program called "beep" because I was trying to determine if LIRC was doing anything in response to the lircrc (it wasn't); obviously, you should edit the generated .lircrc and change the prog = beep to call your target programs instead.

Once you have a .lircrc, I'm not sure how you get lircd to use it to call those programs. That's left as an exercise for the reader.

October 25, 2017

Jabra joins the LVFS

Some great news: the Jabra Speak devices are now supported using fwupd, and firmware files have just been uploaded to the LVFS.

You can now update the firmware just by clicking on a button in GNOME Software when using fwupd >= 1.0.0. Working with Jabra to add the required DFU quirks to fwupd and to get legal clearance to upload the firmware has been a pleasure. Their hardware is well designed and works really well in Linux (with the latest firmware), and they’ve been really helpful providing all the specifications we needed to get the firmware upgrade working reliably. We’ll hopefully be adding some different Jabra devices in the coming months to the LVFS too.

More vendor announcements coming soon too.

October 18, 2017

GCompris at KDE-edu sprint 2017

Ten days ago, I spent a week-end in Berlin with a group of KDE friends for a KDE-edu sprint. I didn’t blog about it yet because we planned to make a group post summarizing the event, but since that is taking some time, I decided to write a quick personal report too.

The sprint was hosted in Endocode offices, which was a very nice place to work together.

KDE edu Sprint 2017

Of course I came mostly because of GCompris, but in the end the focus was on working together to redefine the goals and direction of KDE-edu and its website, and on collaborating on different tasks.

I added appstream links for all KDE-edu apps on their respective pages on the KDE website. Those appstream links can be used to install applications directly from Linux app stores supporting this standard.
On a side note, we thought it was a bit weird to be redirected from the KDE-edu website to when looking at application info. This is one of the things that would need some refactoring. Actually, we discussed a lot about the evolution needed for the website. I guess all the details about this discussion will be in the group-post report, but to give you an idea, I would summarize it as: let’s make KDE-edu about how KDE applications can be used in an educational context, rather than just a collection of specific apps. A lot of great ideas to work on!
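For context, the appstream links added to those pages are ordinary hyperlinks using the appstream: URI scheme from the AppStream standard; a software center that registers a handler for that scheme opens its page for the matching component. A minimal sketch (the GCompris component ID here is an assumption):

```html
<!-- Clicking this hands the component ID to the user's software center
     (e.g. GNOME Software or Discover), assuming one has registered a
     handler for the appstream: URI scheme. -->
<a href="appstream://org.kde.gcompris">Install GCompris</a>
```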

For GCompris, I was very happy to meet Rishabh, who did some work on the server part. I was able to test the branch with him and discuss what needs to be done. Next, I fixed and improved the screenshots available for our appdata info, and started to look at building a new package on Mac with Sanjiban.

I also cleaned an svg file of Ktuberling to help Albert, who worked on building it for Android.

In the end, I would say it was a productive week-end. Many thanks to KDE e.V. for the travel support, and to Endocode for hosting the event and providing cool drinks.

October 17, 2017

Faces of Open Source

Peter Adams’s Portraits of Revolutionaries

Recently, @houz posted about an amazing project by photographer Peter Adams called Faces of Open Source.

Peter really (ahem) throws a light on many amazing luminaries, not only from the Free/Open Source Software community, but in some cases from the history and roots of all modern computing. He has managed to coordinate portrait sessions with many people who may be unassuming to a layperson, but take a moment to read any of the short bios on the site and the gravity of these subjects’ contributions to modern computing becomes apparent.

It’s easy for non-technical folks to spot a Bill Gates or Steve Jobs, but what about those who invented the most-used programming language, created the web server that runs the majority of the internet, or mapped the human genome?

(From L-R): Dennis Ritchie, Brian Behlendorf, and Jim Kent

He is acutely aware that his subjects represent an important part of the history of Open Source, and in his artist statement for the project he notes:

This project is my attempt to highlight a revolution whose importance is not broadly understood by a world that relies heavily upon the fruits of its labor.

That’s really what Peter has done here. He has collected individuals whose contributions all add up to something far greater than their collective sums to shape the digital world many take for granted these days, and is presenting them in a powerful and thoughtful way more befitting their gifts.

Peter Adams Photography

A Chat with Peter Adams

I was lucky enough to be able to get a little bit of time with Peter recently, and with some help from the community had a few questions to present to him. He was kind enough to take some time out of his day and be patient while I prattled on…

Linus Torvalds, Santa Fe, New Mexico, 2016 by Peter Adams

What was the motivation for this particular project for you? Why these people?

I had a long career working in the tech industry, and kind of grew up on a lot of this software when I was in college.
I then got to apply it throughout a career as senior technologist or CTO at a bunch of different companies in the valley. So I went from learning about it in college, to being someone that used it, to being somebody that contributed to it and started my own open source project back in 2006. That open source ethos, the software, and the people that created, maintained and promoted it: it’s something that’s been right there in my face for, really, the last 25 years.

I wanted to marry my knowledge of it with my passion for photography, and shine a light on it. I went through a few different chapters of the story myself in the 80’s and then the mid-90’s with Linux. I kind of felt like the story was starting to slip into obscurity, not because it’s less important - in fact I think it’s more important now than it’s ever been.

The software is actually used by more people now than it has ever been. The smartphone revolution, mobile, has brought that to a forefront and all of these mobile platforms are based on this open source technology. Everything Apple does is based on BSD, and everything Google/Android does is based on Linux.

I feel like it’s a more impactful story now than ever, but very few people are telling the story. As a photographer I’ve always cringed at the photographic response to the story. Podium shot after podium shot of these incredible people.

So I wanted to put some faces to names, bring these people to life in a more impactful way than I think anyone has done before. Hopefully that’s what the project is doing!

P: It absolutely does!

Brian Kernighan, New York City, 2015 by Peter Adams

How long have you been shooting the project?

I started this project in 2013/2014, in earnest probably late 2014.

Of all of the people that you’ve shot, I’m curious, who would you say is one that maybe stuck out with you the most, or even better, did you get any cool stories out of some of the subjects?

Everyone that I’ve photographed has been absolutely wonderful. I mean, that’s the first thing about this community: it’s a very gracious community. Everybody was very gracious with their time, and eager to participate. I think people recognize that this is a community they belong to and they really want me to be a part of it, which is really great.

So, I enjoyed my time with everybody. Everybody brought a different, interesting story about things. The UNIX crew from Bell Labs had particularly colorful stories, very interesting sort of historical tidbits about UNIX and Free Software.

I talked to Ken Thompson about going to Russia and flying MIGs right after the collapse of the Soviet Union. Wonderful stories from Doug McIlroy about the team and the engineering - how they worked together at Bell labs. Just a countless list of cool stories and cool people for sure.

Ken Thompson, Menlo Park, California, 2016 by Peter Adams
Doug McIlroy, Boston, Massachusetts, 2015 by Peter Adams

P: It must have been fascinating!

It’s been really fun. A lot of these folks, I’ve really looked up to them over the years as sort of heroes, and so when you get people in front of your lens like that, it’s a really wonderful experience. It’s also a challenging experience because you want to do justice to them. Many of these folks that I’ve thought about for 20+ years, finally getting to shoot them is a real treat.

Where are you shooting these? Are you mostly bringing them into your studio in the valley?

I shot a lot of people when I had a studio in Silicon Valley. I brought a lot of people there and that was great. Now typically I’m doing shoots on the coasts. So I’ll do shoots in NY and I’ll rent a studio and bring 6 or 7 people in there or we’ll do a studio up in SF for some people. But I’ve done shoots in back alleyways, I’ve done shoots in tiny little conference rooms, I’ll bring the studio to people if that’s what I have to do. So I’d say so far it’s been about 50-50.

The lighting setups are wonderful and do justice to the subjects, and I think somebody in the community was curious if you had decided on B&W from the beginning for this series of photos? Was this a conscious decision early on?

B&W on a white background was a conscious choice right from the beginning. Knowing the group, I felt like that was going to be the best way to explore the people and the faces. Every one of these faces just tells, I think, a really interesting story. I try to bring the personality of the person into the photo, and B&W has always been my favorite way to do that. The white background just puts the emphasis right on the person.

Camille Fournier, New York City, 2017 by Peter Adams

How much of it would you say is you that goes into the final pose and setup of the person, or do you let the subject feel out the room and get comfortable and shoot from there?

It’s a little bit of both. I wish I got to spend a lot of time up front with the person before we started shooting, but the way everybody’s schedule worked is - none of these shoots are more than an hour and many of them are much shorter than an hour. There’s definitely the pleasantries up front and talking for a little bit, but then I try to get people right in front of the camera as quick as possible.

I don’t really pose them. My process is to sit back and observe, and I always tell people “if I’m not taking photos, it’s not because you’re doing anything wrong - I’m just waiting for you to settle or looking, examining”. Which is, for most people, a really uncomfortable process, I try to make it as comfortable as possible. Then we’ll start taking pictures. I may move them a little bit, or we may setup a table so they can rest their hand on their chin or something like that. Generally the photos that come out are not pre-meditated.

It’s very rare that I go into any of these shoots with an actual “I want the person like this, setup like that, etc…”. I’d say 99% of these shots, the expressions, the feeling that comes out, that I’m capturing is organic. It’s something that comes up in the shoot. I just try to capture it whenever I see it by clicking the shutter, that’s basically what I’m doing there.

You list what equipment you shot each portrait with, but I’m curious about the lighting setup. Is there a “go-to” lighting setup that you like to use?

The lighting is literally the same on every shot, though there’s slightly different positions. It’s a six light setup: there are four lights on the background, there’s a beauty dish overhead, and generally a fill light. The fill is either a big Photek or PLM, basically a big umbrella, or a ringflash depending on how small the room is. That’s the same lighting setup on all of them. Four lights on the background, two lights on the subject. I’ll vary the two lights on the subject positionally, but for the most part they’re pretty close.

Do you use Free Software in your normal photographic workflow at all?

I don’t use as much Free Software as I’d like in my own workflow. My workflow, because I shoot with Phase One, the files go into Capture One and then from there they go into Photoshop for final edits. I have used GIMP in the past. I really would like to use more Free Software, so I’m a learner in that regard for what tools would make sense.

Spencer Kimball (co-creator of GIMP), Menlo Park, 2015 by Peter Adams
Peter Mattis (co-creator of GIMP), New York City, 2015 by Peter Adams

Did that habit grow out of the professional need of having those tools available to you?

Phase One, which makes the Medium Format digital back and camera that I use for all of my portrait work, also makes Capture One. They have basically customized the software to get the most out of their own files. That’s pretty much why I’ve wound up there instead of Lightroom or another tool. It’s just that that software tends to bring out the tonality, especially on the B&W side, better than any other tool I’ve found.

This project was self financed to start with?

Yes, this is a self-financed project. I do hope that we’ll get some sponsors, especially for the book, just because it tends to be a pretty heavy upfront outlay to produce a book. I’m going to think about things like Kickstarter but the corporate sponsors I think will be really helpful for the exhibits and the book.

Speaking of the book, is it ready - have you already gone to print?

No, the book isn’t ready yet. I still have probably another 10-12 people that I need to photograph and then we’ll start producing it. I’ve done some prototypes and things on it but it’s still a little bit of a ways away. The biggest hurdle on this project is actually scheduling and logistics. Getting access to people in a way that is economical. Instead of me flying all over the place for one shot, I try to stack up a number of people into a day. It’s tough - this is a busy crowd, very in demand.

Faces of Open Source Book Promo

Did your working in open source teach you anything beyond computer code in some way? Was there an influence from the people you may have worked around, or the ethos of Free Software in general that stuck with you? Working with this crowd, was there a takeaway for you beyond just the photographic aspects of it?

Absolutely! First of all it’s an incredibly inspiring group of people. This is a group of people that have dedicated, in some cases most of, their lives to the development of software that they give away to the world, and don’t monetize themselves. The work they’re doing is effectively a donation to humanity. That’s incredibly inspiring when you look at how much time goes into these projects and how much time this group of people spends on that. It’s a very humbling thing.

I’d say the other big lesson is that Open Source is such a unique thing. There’s really nothing like it. It’s starting to take over other industries and move beyond just software - it’s gone into hardware. I’ve started to photograph some of the open source hardware pioneers. It’s going into bio-tech, pharmaceuticals, agriculture (there’s an open source seed project). I think the lessons that are being learned here, and that this group of people is teaching, are really affecting humanity on a much larger level than the fact that this stuff is powering your cell phone or your computer.

Limor Fried, New York City, 2017 by Peter Adams

Open source is really sort of a way of doing business now. Even more than doing business it’s a way of operating in the world. More and more people, industries, and companies are choosing that. In today’s world where all you read is bad news, that’s a lot of really good news. It’s an awesome thing to see that accelerating and catching on. It’s been incredibly inspiring to me.

P: I think it goes all the way back to the polio vaccine, which is one of those things. The effect that it had on humanity was immeasurable, and the fact that it wasn’t monetized by Salk was amazing.

Look at how many lives were saved because of that. If you think about the acceleration of the innovation we’ve had just in the technology sector - would things like the iPhone or the Android operating system have happened now, or over the last decade, without this [open source], or would we be looking at those types of innovations happening twenty years from now? I think that’s a question you have to ask.

I don’t think it’s an obvious answer that Apple or Google or somebody else would have just come up with this without the open source [contributions]. This stuff is so fundamental, it’s such a basic building block for everything that’s happening now. It may be responsible for the golden age that we’re seeing now. I think it is.

The average teenager picks up their phone and posts a photo to Instagram - they don’t realize that there are a hundred open source projects at work to make that possible.

P: And the fact that the people that underlay that entire stack gave it away.

Right. And that giving it away was necessary to create the Instagrams to create all these networks. It wasn’t just this happenstance thing where people didn’t know any better. In some cases obviously that did exist, but it’s the fact that consciously people are contributing into a commons that makes it so powerful and enables all of this innovation to happen. It’s really cool.

David Korn, New York City, 2015 by Peter Adams

To close, is there another photographer, book, organization - that you’d like any of the readers to know about and maybe spend some time to go and check out. Something that maybe you’ve long admired or recently discovered?

Sure! You’ve mentioned Martin Schoeller, who is one of my personal favorites and inspirations out there. I’d say the other photographer who has had probably the most impact on my photography over the years has been Richard Avedon. For people that aren’t familiar with his work I’d say definitely go check out the Avedon foundation. Pick up any of his books which are just wonderful. You’ll definitely see that influence on my photography, especially this project, since he shot black and white on white background. Such stunning work. I’d say that those are two great ones to start with.

Alright! Avedon and Schoeller - I can certainly think of worse people to go start a journey with. Thank you so much for taking time with me today!

Hey no problem! It’s been fun to talk to you.

There are many more fascinating portraits awaiting you over on the project site, and every one of them is worth your time! See them all at:

You can also connect with the project on

Find more of Peter’s work at his website.

All images from “Faces of Open Source” by Peter Adams, licensed CC BY-NC-SA 4.0.

October 16, 2017

Shaking the tin for LVFS: Asking for donations!

tl;dr: If you feel like you want to donate to the LVFS, you can now do so here.

Nearly 100 million files are downloaded from the LVFS every month, the majority being metadata to know what updates are available. Although each metadata file is very small it still adds up to over 1TB in transferred bytes per month. Amazon has kindly given the LVFS a 2000 USD per year open source grant which more than covers the hosting costs and any test EC2 instances. I really appreciate the donation from Amazon as it allows us to continue to grow, both with the number of Linux clients connecting every hour, and with the number of firmware files hosted. Before the grant sometimes Red Hat would pay the bandwidth bill, and other times it was just paid out of my own pocket, so the grant does mean a lot to me. Amazon seemed very friendly towards this kind of open source shared infrastructure, so kudos to them for that.
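As a quick sanity check on those numbers (both taken from the paragraph above, decimal units assumed), 1 TB spread over 100 million downloads works out to roughly 10 kB per file, which is plausible for small compressed metadata:

```python
downloads_per_month = 100_000_000   # ~100 million files served per month
bytes_per_month = 10**12            # ~1 TB transferred per month (decimal TB)

# Average size per downloaded file: mostly metadata, not firmware payloads.
avg_bytes = bytes_per_month / downloads_per_month
print(f"average download size = {avg_bytes / 1000:.0f} kB")
# prints: average download size = 10 kB
```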

At the moment the secure part of the LVFS is hosted in a dedicated Scaleway instance, so any additional donations would be spent on paying this small bill and perhaps more importantly buying some (2nd hand?) hardware to include as part of our release-time QA checks.

I already test fwupd with about a dozen pieces of hardware, but I’d feel a lot more comfortable testing different classes of device with updates on the LVFS.

One thing I’ve found that also works well is taking a chance and buying a popular device we know is upgradable and adding support for the specific quirks it has to fwupd. This is an easy way to get karma from a previously Linux-unfriendly vendor before we start discussing uploading firmware updates to the LVFS. Hardware on my wanting-to-buy list includes a wireless network card, a fingerprint scanner and SSDs from a couple of different vendors.

If you’d like to donate towards hardware, please donate via LiberaPay or ask me for PayPal/BACS details. Even if you donate €0.01 per week it would make a difference. Thanks!

Blender Daily Doodles — Day 28

People following me on Instagram have been asking why I do the daily renders. You’re not gonna get better by just thinking about it. The arsenal of tools and methods in Blender is giant, and after years I still struggle to call myself proficient in any of them.




Follow me on Instagram to see them come alive. I’m probably not gonna maintain the daily routine, but I will continue doing these.

Interview with Cillian Clifford

Could you tell us something about yourself?

Hi everyone – my name is Cillian Clifford, I’m a 21 year old hobbyist artist and electronic musician, and an occasional animator, writer and game developer. I go by the username of Fatal-Exit online. I live in rural Ireland, a strange place for someone so interested in technology. My interests range from creative projects to tech related fields like engineering, robotics and science. Outside of things like these I enjoy gaming from time to time.

Do you paint professionally, as a hobby artist, or both?

Definitely as a hobby. I consider digital painting to be one of my weakest areas of art skills, so I spend a lot of time trying to improve it. Other areas of digital art I’m interested in include CAD, 3d modeling, digital sculpting, vector animation, and pixel art.

What genre(s) do you work in?

It varies! Hugely, in fact. Over the past two years on my current DeviantArt account I’ve uploaded game fan-art paintings, original fantasy and Sci-Fi pieces, landscapes, pixel art, and renders of 3d pieces. I also occasionally paint textures and UV maps for 3d artwork. Outside of still art, I also animate in vector and pixel art styles. I also occasionally make not-great indie games, but as you might guess, most never get finished.

Whose work inspires you most — who are your role models as an artist?

A wide range of artists, often not particular people but more their combined efforts on projects. I will say that David Revoy and GDQuest in the Krita community are a big inspiration. Youtube artists such as Sycra, Jazza and Borodante are another few I can think of. Lots of my favorite art of all time has come from large game companies such as Blizzard and Hi-Rez Studios. Also game related, the recent rise of more retro and pixel based graphics in indie games is a huge interest of mine, and games like Terraria, Stardew Valley and Hyper-Light Drifter have an art style that truly inspires me.

How and when did you get to try digital painting for the first time?

My first time doing some sort of “digital painting” was when I was about 16-17. I did the graphics design work for a board game a team of us were working on for a school enterprise project, using the free graphics software and a mouse. It took ages. However the project ended up taking off and we ended up in the final stage of the competition. After that was over (we didn’t win) I decided digital art might be something to seriously invest in and bought a graphics tablet. For a couple of years I made unimaginably terrible art and in 2015 I decided to shut down my DeviantArt account and start fresh on a new account, with my new style. This was about when I found Krita, I believe.

What makes you choose digital over traditional painting?

A few things: Firstly, I could never paint in a traditional sense, I was absolutely terrible. At school I was considered a C grade artist, and that was even when working on pen and ink drawings, a style I used to be good at but have since abandoned. I never learned to paint traditionally.

Secondly, I can do it anywhere. In my bedroom with a Ugee graphics monitor and my workstation desktop, or lots of other places if I take my aging laptop and Huion graphics tablet with me. Soon I’m looking to buy a mobile tablet similar to the Microsoft Surface Pro, that’ll let me paint absolutely anywhere.

Thirdly, the tech involved. Not only am I able to emulate any media that exists in traditional art with various software, I can also work in art styles that aren’t even possible with traditional media. As well as this, there are functions like undo, zooming in and out of the canvas, layers and blending modes, gradients and bucket fill; the list goes on and on.

I can happily say I never want to “go back” to traditional painting even though I was never any good at it in the first place.

How did you find out about Krita?

That’s a hard question. I’m not absolutely sure, but I’ve an idea that it might have been through David Revoy’s work on the Blender Foundation movies, and Pepper and Carrot. I was looking for a cheap or free piece of software because I didn’t want to use cracked Photoshop/Painter, and I’d already used GIMP and, and neither were good for the art I was looking to create. I tried MyPaint but it never worked properly with my tablet. I did buy ArtRage at some point but I wasn’t happy with the tools in that. It came down to probably a choice of Krita or Clip Studio Paint. Krita had the price tag of free so it was the first one I tried. And I stuck with it.

What was your first impression?


At least I think it was. When I first tried it everything just seemed to work straight off. It seemed simple enough for me to use efficiently. And the brush engine was simply amazing. I don’t know if there’s any other program with brushes that easy to customize to a huge extent but still so simple to set up. I first tried it in version 2.something so it was before animation was added.

What do you love about Krita?

Mostly, the fact that it works for most things you can throw at it. I’ve made game assets, textures, paintings, drawings, pixel art, a couple of test animations with the animation function, pretty much everything. I feel like it’s the Blender of 2d, the free tool that does pretty much everything: maybe not the 100% best at it, but certainly the most economical option.

The brush engine, like I said before, is one of its best assets; it has one of the most useful color pickers I’ve used; it includes, for free, the feature set of Lazy Nezumi, a paid Photoshop plugin; and the interface can be there when you need it but vanish at the press of a button. Just loads of good things.

The variety of brush packs made by the community are also a great asset. I own GDQuest’s premium bundle and also use Deevad’s pack on a regular basis. I love to then tweak those brushes to suit my needs.

What do you think needs improvement in Krita? Is there anything that really annoys you?

The main current annoyance with Krita is the text tool. I just hate it. It’s the one thing that makes me want to have access to Photoshop. And I know it’s supposedly one of the things being focused on in future updates, so hopefully they don’t take too long to happen.

Another problem I had with Krita happened last year. It’s been fixed since, but it’s certainly nothing I’d like to see happen again with V4 (Which I worry is a possibility). Basically what happened was when the Krita 3 update came out it broke support for my Ugee graphics monitor. Completely broke it. I had to either stick with the old version of Krita 2.9, or when I wanted to use tools from V3 I had to uninstall my screen tablet drivers, install drivers for my tiny old Intuos Small tablet and use that. Luckily, later on, (about 6-8 months down the line) an update for my tablet drivers fixed all problems, and it just worked with my screen tablet from then on.

What sets Krita apart from the other tools that you use?

Ease of use, the brush engine, the speed that it works at (even with 4k documents on my Pentium-powered laptop), the way it currently works well on all my hardware, the price tag (FREE!), the community, and some great providers of custom brushes (GDQuest’s and David Revoy’s in particular). Even though I’ve since stopped using Krita for pixel art and moved to Aseprite (only because its pixel animation tools are more sophisticated for making game assets), I believe it’s the most suitable program I have access to for digital painting, comic art, and traditional 2d animation.

If you had to pick one favourite of all your work done in Krita so far, what would it be, and why?

This is a hard question because I feel I am a terrible critic of my own work. If I had to choose, it’d probably be Sailing to the Edge of the World II, from the Sailing to the Edge of the World painting series I made for a good colleague of mine. I also included the latest painting in that series, though I believe the second one was the best. Even though it’s been maybe 8 months since I made that painting, it’s still one of my best.

What techniques and brushes did you use in it?

If I remember correctly I used mostly David Revoy’s brush-pack. The painterly brushes were used along with the pen and ink brushes and some of the airbrushes. To be honest it’s been so long since I made it I’m not 100% sure. I may have also used some of the default brushes such as the basic round and soft round.

Where can people see more of your work?

My DeviantArt(where I post the majority of my art):
My twitter (where I post some of my art):
And my newest place: Tumblr (Not much here at all):

Anything else you’d like to share?

I’m working on resurrecting my Youtube channel at:

As of the time of writing this it’s mostly just home to my music. However I’m looking to expand it into art, animation and game development, with tutorials and process videos. I’m certainly hoping to post some Krita reviews, tutorials and videos on how it can be used in a game development pipeline over the coming months, as well as videos of other software such as Blender, Aseprite, 3d Coat, Moho, Construct 3, Gamemaker Studio 2, Unreal Engine 4, Sunvox, FL Studio Mobile and others.

October 12, 2017

Letter to the New Mexico Public Education Department on Science Standards

For those who haven't already read about the issue in the national press, New Mexico's Public Education Department (a body appointed by the governor) has a proposal regarding new science standards for all state schools. The proposal starts with the national Next Generation Science Standards but then makes modifications, omitting points like references to evolution and embryological development or the age of the Earth and adding a slew of NM-specific standards that are mostly sociological rather than scientific.

You can read more background in the Mother Jones article, New Mexico Doesn’t Want Your Kids to Know How Old the Earth Is. Or why it’s getting warmer, including links to the proposed standards. Ars Technica also covered it: Proposed New Mexico science standards edit out basic facts.

New Mexico residents have until 5 p.m. next Monday, October 16, to speak out about the proposal. Email comments to or send snail mail (it must arrive by Monday) to Jamie Gonzales, Policy Division, New Mexico Public Education Department, Room 101, 300 Don Gaspar Avenue, Santa Fe, New Mexico 87501.

A few excellent letters people have already written:

I'm sure they said it better than I can. But every voice counts -- they'll be counting letters! So here's my letter. If you live in New Mexico, please send your own. It doesn't have to be long: the important thing is that you begin by stating your position on the proposed standards.

Members of the PED:

Please reconsider the proposed New Mexico STEM-Ready Science Standards, and instead, adopt the nationwide Next Generation Science Standards (NGSS) for New Mexico.

With New Mexico schools ranking at the bottom in every national education comparison, and with New Mexico hurting for jobs and having trouble attracting technology companies to our state, we need our students learning rigorous, established science.

The NGSS represents the work of people in 26 states, and is being used without change in 18 states already. It's been well vetted, and there are many lesson plans, textbooks, tests and other educational materials available for it.

The New Mexico Legislature supports NGSS: they passed House Bill 211 in 2017 (vetoed by Governor Martinez) requiring adoption of the NGSS. The PED's own Math and Science Advisory Council (MSAC) supports NGSS: they recommended in 2015 that it be adopted. Why has the PED ignored the legislature and its own advisory council?

Using the NGSS without New Mexico changes will save New Mexico money. The NGSS is freely available. Open source textbooks and lesson plans are already available for the NGSS, and more are coming. In contrast, the New Mexico STEM-Ready standards would be unique to New Mexico: not only would we be left out of free nationwide educational materials, but we'd have to pay to develop New Mexico-specific curricula and textbooks that couldn't be used anywhere else, and the resulting textbooks would cost far more than standard texts. Most of this money would go to publishers in other states.

New Mexico consistently ranks at the bottom in educational comparisons. Yet nearly 15% of the PED's proposed STEM-Ready standards are New Mexico-specific standards, taught nowhere else, which will take time away from teaching core science concepts. Where is the evidence that our state standards would be better than what is taught in other states? Who are we to think we can write better standards than a nationwide coalition?

In addition, some of the changes in the proposed NM STEM-Ready Science Standards seem to be motivated by political ideology, not science. Science standards used in our schools should be based on widely accepted scientific principles. Not to mention that the national coverage on this issue is making our state a laughingstock.

Finally, the lack of transparency in the NMSRSS proposal is alarming. Who came up with the proposed NMSRSS standards? Are there any experts in science education that support them? Is there any data to indicate they'd be more effective than the NGSS? Why wasn't the development of the NMSRSS discussed in open PED meetings as required by the Open Meetings Act?

The NGSS are an established, well regarded national standard. Don't shortchange New Mexico students by teaching them watered-down science. Please discard the New Mexico STEM-Ready proposal and adopt the Next Generation Science Standards, without New Mexico-specific changes.

October 11, 2017

Have typography, will travel


I realize that I’m a bit late in publishing this news but, to be honest, I never was great about blogging regularly anyway.

In any case, this post is a bit of a public announcement: I’m happy to say that I recently completed an extremely busy year working on my Master of Arts in Typeface Design (MATD) degree at the University of Reading. Consequently, I am now back out in the real world, and I am looking for interesting and engaging employment opportunities. Do get in touch if you have ideas!

For a bit of additional detail, the MATD program combines in-depth training about letterforms, writing, non-Latin scripts, and typeface development with rigorous academic research. On the practical side, we each developed a large, multi-style, multi-script family of fonts (requiring the inclusion of at least one script that we do not read).

My typeface is named Sark; you can see a web and PDF specimen of it here at the program’s public site. It covers Latin, Greek, Cyrillic, and Bengali; there is a serif subfamily tailored for long-form documents and there is a sans-serif subfamily that incorporates features to make it usable on next-generation display systems like transparent screens and HUDs.

My dissertation was research into software models for automatic (and semi-automatic) spacing and kerning of fonts. It’s not up for public consumption yet (in any formal way), as we are still awaiting the marking and review process. But if you’re interested in the topic, let me know.

Anyway, it was a great experience and I’m glad to have done it. I’m also thrilled that it’s over, because it was intense.

Moving ahead from here, I am looking forward to reconnecting with the free-software community, which I only had tangential contact with during my studies. That was hard; I spent more than thirteen years working full-time as a journalist exclusively covering the free-and-open-source software movement. I did get to see a lot of my friends who work on typography and font-related projects, because I still overlapped with those circles; I look forward to seeing the rest of you at the next meetup, conference, hackathon, or online bikeshedding session.

As for what sort of work I’m looking for, I’m keeping an open mind. What I would really love to find is a way (or ways) to help improve the state of type, typography, and documents within free-software systems. The proprietary software world has typefaces and text-rendering technology that is determined by things like sales figures; free software has no such limitations. The best typesetting systems in the world (like TeX and SILE) are free software; our documents and screens and scripts have no reason to look second-best, compared to anyone.

So if I can do that, I’ll be a happy camper. But by all means, I’m still going to remain a camper with a lot of diverse and peculiar interests, so if there’s a way I can help you out in some other fashion, don’t be shy; let me know.

I have a few contract opportunities I’m working on at the moment, and I am contributing to LWN (the best free-software news source in this dimension) as time allows. And I’m gearing up to tell you all about the next editions of Texas Linux Fest and Libre Graphics Meeting. Oh, and there are some special secret projects that I’m saving for next time….

So that’s it from me; how are you?

Krita 3.3.1

Today we are releasing Krita 3.3.1, a bugfix release for Krita 3.3.0. This release fixes two important regressions:

  • Krita would crash on restart if it had previously been closed with the reference images docker set to floating
  • Krita 3.3.0 could not read .kra backup files or .kra files that were unzipped, then zipped up manually.

Additionally, there are the following fixes and improvements:

  • Fix a crash when creating a swap file on OSX (Bernhard Liebl)
  • Merge down does not remove locked layers anymore (Nikita Smirnov)
  • Various performance improvements, especially for macOS (Bernhard Liebl)
  • Improve the look and feel of dragging and dropping layers (Bernhard Liebl)
  • Improve the tooltips in the brush preset selector (Bernhard Liebl)
  • Fix a memory leak in the color selectors (Boudewijn Rempt)
  • Fix rotation and tilt when using the Windows Ink API (Alvin Wong)
  • Don’t allow the fill tool to be used on group layers (Boudewijn Rempt)
  • Add brightness and contrast sliders for textured brushes (Rad)
  • Add paste-at-cursor (Dmitry Kazakov)
  • Improve performance of the CPU canvas (Alvin Wong)
  • Fix a crash on closing Krita when there is something on the clipboard (Dmitry Kazakov)
  • Add a button to open a file layer’s image in Krita (Wolthera van Hövell tot Westerflier)



Note for Windows users: if you encounter crashes, please follow these instructions to use the debug symbols so we can figure out where Krita crashes.


(If, for some reason, Firefox thinks it needs to load this as text: to download, right-click on the link.)

When it is updated, you can also use the Krita Lime PPA to install Krita 3.3.1 on Ubuntu and derivatives. There is also an updated snap.
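For Ubuntu users, the PPA route is usually just a few commands. As a sketch (the PPA name below is the one used for earlier Krita releases and is an assumption here; verify it on the Krita site first):

```shell
# Sketch: installing Krita from the Krita Lime PPA on Ubuntu.
# The PPA name is an assumption based on earlier Krita releases;
# the function is not run automatically -- call it yourself.
install_krita_from_ppa() {
    sudo add-apt-repository ppa:kritalime/ppa   # add the PPA
    sudo apt update                             # refresh package lists
    sudo apt install krita                      # install/upgrade Krita
}
```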


Note: the gmic-qt and pdf plugins are not available on OSX.

Source code


For all downloads:


The Linux appimage and the source tarball are signed. You can retrieve the public key over https here: . The signatures are here.
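Checking a download against its detached signature is a short gpg session. A minimal sketch, assuming you have fetched the public key and the `.sig` file (all file names below are examples, not the exact release artifacts):

```shell
# Sketch: verifying the source tarball against its detached GPG
# signature. File names are examples -- substitute the ones you
# actually downloaded. The function is not run automatically.
verify_krita_tarball() {
    tarball="krita-3.3.1.tar.gz"

    # Import the public key retrieved over https (file name assumed):
    gpg --import krita.asc

    # Check the detached signature; gpg reports "Good signature
    # from ..." when the tarball matches.
    gpg --verify "$tarball.sig" "$tarball"
}
```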

Support Krita

Krita is a free and open source project. Please consider supporting the project with donations or by buying training videos or the artbook! With your support, we can keep the core team working on Krita full-time.

October 10, 2017

Announce: Entangle “Lithium“ release 1.0 – an app for tethered camera control & capture

I am pleased to announce a new release 1.0 of Entangle is available for download from the usual location:

This release brings some significant changes to the application build system and user interface:

  • Requires Meson + Ninja build system instead of make
  • Switch to 2-digit version numbering
  • Fix corruption of display when drawing session browser
  • Register application actions for main operations
  • Compile UI files into binary
  • Add a custom application menu
  • Switch over to using a header bar instead of the menu bar and tool bar
  • Enable close button for about dialog
  • Ensure plugin panel fills preferences dialog
  • Tweak UI spacing in supported cameras dialog
  • Add keyboard shortcuts overlay
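For anyone building from source, the switch to Meson + Ninja replaces the familiar autotools incantation with something like the following (build directory and install prefix are example choices, not project requirements):

```shell
# Sketch: building with the new Meson + Ninja build system.
# Build directory and prefix are examples; the function is not
# run automatically.
build_entangle() {
    meson build --prefix=/usr/local   # configure into ./build
    ninja -C build                    # compile
    sudo ninja -C build install       # install (needs root)
}
```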

Ta-Nehisi Coates on The Ezra Klein Show

This interview with Ta-Nehisi Coates on The Ezra Klein Show actually made me stop washing the dishes and just stand there in the kitchen.

October 09, 2017

fwupd hits 1.0.0

Today I released fwupd version 1.0.0, a version number that few Open Source projects ever reach. Unusually, it bumps the soname, so any applications that link against libfwupd will need to be rebuilt. The reason for bumping is that we removed a lot of the cruft we’ve picked up over the couple of years since we started the project, and also took the opportunity to rename some public interfaces that are now used differently to how they were envisaged. Since we started the project, we’ve basically re-architected the way the daemon works, re-imagined how the metadata is downloaded and managed, and changed core ways we’ve done the upgrades themselves. It’s no surprise that removing all that crufty code makes the core easier to understand and maintain. I’m intending to support the 0_9_X branch for a long time, as that’s what’s going to stay in Fedora 26 and the upcoming Fedora 27.

Since we’ve started we now support 72 different kinds of hardware, with support for another dozen-or-so currently being worked on. Lots of vendors are now either using the LVFS to distribute firmware, or are testing with one or two devices in secret. Although we have 10 (!) different ways of applying firmware already, vendors are slowly either switching to a more standard mechanism for new products (UpdateCapsule/DFU/Redfish) or building custom plugins for fwupd to update existing hardware.

Every month 165,000+ devices get updated using fwupd using the firmware on the LVFS; possibly more as people using corporate mirrors and caching servers don’t show up in the stats. Since we started this project there are now at least 600,000 items of hardware with new firmware. Many people have updated firmware, fixing bugs and solving security issues without having to understand all the horrible details involved.

I guess I should say thanks; to all the people both uploading firmware, and the people using, testing, and reporting bugs. Dell have been a huge supporter since the very early days, and now smaller companies and giants like Logitech are also supporting the project. Red Hat have given me the time and resources that I need to build something as complicated and political as shared infrastructure like this. There is literally no other company on the planet that I would rather work for.

So, go build fwupd 1.0.0 in your distro development branch and report any problems. 1.0.1 will follow soon with fixes I’m sure, and hopefully we can make some more vendor announcements in the near future. There are a few big vendors working on things in secret that I’m sure you’ll all know :)
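Once fwupd is packaged, the end-user side is handled by the fwupdmgr command-line client; a typical update session looks something like this sketch (output varies per machine):

```shell
# Sketch: a typical firmware update session with the fwupdmgr CLI.
# The function is not run automatically -- call it on real hardware.
update_firmware() {
    fwupdmgr get-devices   # list hardware with firmware update support
    fwupdmgr refresh       # fetch the latest metadata from the LVFS
    fwupdmgr get-updates   # show updates available for this machine
    fwupdmgr update        # download and apply them
}
```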

October 07, 2017

Planetary features: Call to translators

Dear translators and users!

Thank you very much for your efforts to make Stellarium available to the non-English community!

We have incorporated a long-awaited feature into the main package - support for nomenclature of planetary features. All nomenclature items are translatable, and this is a big job for translators, because we added over 15,000 new lines for translation.

All those lines were extracted into a separate category - stellarium-planetary-features. If you can assist with translation into any of the 140 languages which Stellarium supports, please go to Launchpad Translations and help us out:

Thank you!

October 06, 2017

Tarantula Under Glass, and Micro-Centipedes

Every fall, Dave and I eagerly look for tarantulas. They only show up for a few weeks a year -- that's when the males go out searching for females (the females stay snug in their burrows). In the bay area, there were a few parks where we used to hunt for them: Arastradero, Mt Hamilton, occasionally even Alum Rock. Here in semi-rural New Mexico, our back yard is as good a place to hunt as anywhere else, though we still don't see many: just a couple of them a year.

But this year I didn't even have to go out into the yard. I just looked over from my computer and spotted a tarantula climbing up our glass patio door. I didn't know they could do that!

Unfortunately it got to the top before I had the camera ready, so I didn't get a picture of tarantula belly. Right now he's resting on the sill: [Tarantula, resting after climbing up our glass patio door] I don't think it's very likely he's going to find any females up there. I'm hoping he climbs back down the same way and I can catch a photo then. (Later: nope, he disappeared when I wasn't watching.)

In other invertebrate news: we have a sporadic problem with centipedes here in White Rock. Last week, a seven-inch one dropped from the ceiling onto the kitchen floor while I was making cookies, and it took me a few minutes to chase it down so I could toss it outside.

[Tiny baby centipede] But then a few days later, Dave spotted a couple of these little guys on the patio, and I have to admit they're pretty amazing. Just like the adults only in micro-miniature.

Though it doesn't make me like them any better in the house.

October 02, 2017

Interview with Emily Wei

Could you tell us something about yourself?

Hi! My name is Emily Wei, and I’m 19 years old. I was born in Taiwan, but I grew up in New Jersey. Right now, I’m back in Taiwan juggling university, freelance work, sleep, and a one-year course I’m taking at Kadokawa International Edutainment (Advanced Commercial Illustration).

Do you paint professionally, as a hobby artist, or both?

I suppose I’d be considered a hobbyist as of now since I’m not making money off art yet, but I aim to do it professionally in the near future!

What genre(s) do you work in?

My main love is in illustration, but a lot of the things I’ve been working on lately fall under concept design, so things like characters and 2D game assets, among others. Stylewise, I’m somewhere between anime, fantasy/RPG video games, and emotional surrealism. Basically, I’m kind of all over the place since I’m trying different things to get a feel for what my likes and dislikes are; I’ve recently fallen in love with doing background illustrations, for example!

Whose work inspires you most — who are your role models as an artist?

That’s really tough; there are too many! Pretty much everyone I’m following on Twitter/DeviantArt/Artstation, masters like Sargent and Mucha as well as my friends and mentors.

How and when did you get to try digital painting for the first time?

I think I was about 9 or 10 when I started? I was a hardcore Neopets user at the time, and at some point, I stumbled upon the art community there. That led to me discovering How to Draw ____ in Photoshop tutorials by an artist I really looked up to (shameless plug: her social media handle name is droidnaut across various platforms! Do check her out ^^)

It really amazed me how versatile digital art was, and I’ve never stopped since.

What makes you choose digital over traditional painting?

Short version: CTRL+Z!

Long version: Digital art is much more forgiving than traditional media, and you don’t really need to keep buying art supplies (not counting Adobe CC subscriptions, hardware upgrades, plugins, etc.). There are a lot of tools you can use that save you a boatload of time, and it’s easier to make changes to your work as needed.

That said, I do love traditional art. There’s nothing quite like the feeling of putting pen on paper! It’s also easier in some aspects; for example, drawing decent circles (and most geometric shapes in general) freehand is ridiculously harder with a tablet. Limited supplies also make you more economical and decisive about what goes where, which is a mindset I’d like to carry over into my digital work more.

How did you find out about Krita?

I don’t really remember, actually! I think I might’ve seen a thread about it on Neopets or a post on tumblr. It was some time after the Kickstarter for Krita 3.0 ended, and the campaign’s “faster than Photoshop” claim, alongside all the tools the program offered, had me intrigued.

What was your first impression?

“Wow! This is almost just like Photoshop!” The UI is very similar, haha.

What do you love about Krita?

The brush engines are really fantastic. There are a lot of traditional media-esque brushes for people who like a little roughness/texture as well as the standard digital round opacity brushes and soft airbrushes. Here’s one of the first few sketches I did with Krita back in 2015:

There’s also the option to convert your artwork to CMYK if you want to make prints and merch, which is really convenient.

What do you think needs improvement in Krita? Is there anything that really annoys you?

I suppose my only qualm is that the text tool and I don’t really seem to get along, haha. Text input and changing the font size are oddly challenging. It’s not that big of a deal, though.

This might have changed in version 3, but I’m still using version two-point-something since my computer can’t quite handle the newest version.

What sets Krita apart from the other tools that you use?

I find it amazing how much you can do with a program that is legitimately free to download! It’s basically Photoshop condensed down to just the tools and functions a CG illustrator would use. I think this is especially nice for people who are new to digital art since they can get into it without putting a huge dent in their wallets (or pirating 🙂 ).

And again, the brushes are great.

If you had to pick one favourite of all your work done in Krita so far, what would it be, and why?

Probably “No”!

It’s not the most technically advanced work I’ve done, and the story behind it isn’t exactly happy, but I still like the colors, the clothing folds, and the overall composition.

What techniques and brushes did you use in it?

I don’t really remember what brushes I used, actually! I do have a favorite brushes preset, though, so they’re probably among them.

As for techniques, nothing super fancy beyond simple digital painting and blending.

Where can people see more of your work?


Feel free to follow me, come say hi, ask me questions, whatever! I’m most active on Twitter, Tumblr, and Plurk.

(I’m still in the process of updating my Facebook page, Artstation, and Instagram, so hopefully there will be things for you to look at there by the time you read this.)

Anything else you’d like to share?

In a nutshell, people always say how tools do not a craftsman make, and the same thing is true with digital art. The most expensive programs and tablets in the world will not make you a master overnight, nor do you need them to make art. Explore different options (like Krita!), learn as much as you can, and just have fun with it 😀

October 01, 2017

FreeCAD Arch development news - September 2017

Hi everybody! Here we are for another monthly report about the development of BIM tools for FreeCAD, our favorite open-source 3D CAD modeling platform. The coding I am doing for FreeCAD is now more and more heavily supported and funded by many of you via my Patreon page, thanks once more to everybody who contributes...

September 30, 2017

Bullet 2.87 with pybullet robotics Reinforcement Learning environments

Bullet 2.87 has improved support for robotics, reinforcement learning and VR. In particular, see the “Reinforcement Learning” section in the pybullet quickstart guide. There are also preliminary C# bindings to allow the use of pybullet inside Unity 3D for robotics and reinforcement learning. In addition, Vectorunit’s Beach Buggy Racing, which uses Bullet, has been released for the Nintendo Switch!


You can download the release from
Here are some videos of some Bullet reinforcement learning environments trained using TensorFlow Agents PPO:

See also KUKA grasping and pybullet Ant.

September 29, 2017

Stellarium 0.16.1

Version 0.16.1 is based on Qt 5.6.2, but it can still be built from sources with Qt 5.4.
This version is a bugfix release with some important features:
- Added moons of Saturn, Uranus and Pluto
- Added improvements to the AstroCalc tool
- DSO catalog was updated to version 3.2:
-- Added support for 'The Strasbourg-ESO Catalogue of Galactic Planetary Nebulae' (Acker+, 1992)
-- Added support for 'A catalogue of Galactic supernova remnants' (Green, 2014)
-- Added support for 'A Catalog of Rich Clusters of Galaxies' (Abell+, 1989)
- Added support for asterisms and outlines for DSO
- Added improvements to the GUI

A huge thanks to the people who helped us a lot by reporting bugs!

Full list of changes:
- Added two notations for unit of measurement of surface brightness
- Added improvement for hide/unhide lines and grids in Oculars plugin
- Added few moons of Saturn (Phoebe, Janus, Epimetheus, Helene, Telesto, Calypso, Atlas, Prometheus, Pandora, Pan) with classic elliptical orbits
- Added few moons of Uranus (Cordelia, Cressida, Desdemona, Juliet, Ophelia) with classic elliptical orbits
- Added 2 moons of Pluto (Kerberos and Styx) with classic elliptical orbits
- Added code to avoid conflicts for names of asteroids and moons
- Added support of IAU moon numbers
- Added angular size into AstroCalc/Positions tool
- Added option to allow users to choose output formatting of coordinates of objects
- Added optional debug info for HDPI devices
- Added optional calculations of resolution limits for Oculars plugin
- Added new data from IAU Catalog of Star Names (LP: #1705111)
- Added support download zip archives with TLE data to Satellites plugin
- Added link to the Mike McCants' classified TLE data into the default list of TLE sources
- Added link to AMSAT TLE data into the default list of TLE sources
- Added support 'The Strasbourg-ESO Catalogue of Galactic Planetary Nebulae' (Acker+, 1992) [DSO catalog version 3.2]
- Added support 'A catalogue of Galactic supernova remnants' (Green, 2014) [DSO catalog version 3.2]
- Added support 'A Catalog of Rich Clusters of Galaxies' (Abell+, 1989) [DSO catalog version 3.2]
- Added export the predictions of Iridium flares (LP: #1707390)
- Added meta information about version and edition into file of Stellarium DSO Catalog to avoid potential crash of Stellarium in the future (validation the version of catalog before loading)
- Added support of extra physical data for asteroids
- Added support of outlines for DSO
- Added new time step: saros
- Added new time step: 7 sidereal days
- Added more checks to the network connections
- Added support of comments for constellations_boundaries.dat file (LP: #1711433)
- Added support for small asterisms with lines by the equatorial coordinates
- Added support for ray helpers
- Added new feature (crossed lines and output string near mouse cursor) to the Pointer Coordinates plugin
- Added missing cross-id data
- Added support an images within description of landscapes
- Added support Visual Studio 2017 in StelLogger
- Added tool to save list of objects in AstroCalc/WUT tool
- Added tool to save celestial positions of objects in AstroCalc/Positions tool
- Added temporary solution for bug 1498616 (LP: #1498616)
- Fixed wrong rendering Neptune and Uranus (LP: #1699648)
- Fixed Vector3 compilation error in unit tests (LP: #1700095)
- Fixed a conflict around landscape autoselection (LP: #1700199)
- Fixed HMS formatting
- Fixed generating ISS script
- Fixed tooltips for AstroCalc/Positions tool
- Fixed dark nebulae parameters for AstroCalc/Positions tool
- Fixed tool for saving options
- Fixed crash when we are on the spaceship
- Fixed Solar system class to avoid conflicts and undefined behaviour
- Fixed orientation angle and its data rendering (LP: #1704561)
- Fixed wrong shadows on Jupiter's moons (Added special case for Jupiter's moons when they are in the shadow of Jupiter for compute magnitudes from Expl. Suppl. 2013 item) (LP: #1704421)
- Fixed work AstroCalc/AltVsTime tool for artificial satellites (a bit slow solution though)
- Fixed search by lists of DSO
- Fixed translation switch issue for AstroCalc/Graphs tool (LP: #1705341)
- Fixed trackpad behaviour on macOS though workaround
- Fixed a couple of stupid bugs in InnoSetup script
- Fixed morphology for SNR
- Fixed issue in parsing of date format in AstroCalc/Phenomena tool
- Fixed link for fileStructure.html file in README (LP: #1709523)
- Fixed the calculation for drawing a reticle on a HiDPI display (Oculars plugin)
- Fixed default option of units of measure for surface brightness to avoid possible artifacts on the macOS (LP: #1699643)
- Fixed crash when comments is added into constellations_boundaries.dat file (LP: #1711229)
- Fixed behaviour of 'Center on selected object' button (LP: #1712101)
- Fixed impossibility to select a planet after Astronomical Calculations is activated (LP: #1712652)
- Fixed crash with unknown star in asterism
- Fixed cross-ids of 42 bright double stars (LP: #1655493)
- Fixed magnitude computation for Jupiter's satellites
- Fixed crash of Stellarium when answer of has wrong format (for example this host blocked by firewall or DNS server with HTML answer) (LP: #1706187)
- Fixed translations issue in Script Console
- Fixed illumination in Scenery3D plugin: Take eclipseFactor into account
- Fixed potential crash in DSO outlines
- Fixed various issues in ray helpers, asterisms and constellations support
- Updated InfoString feature
- Updated sky brightness during solar eclipse (really, there are only a few stars visible.)
- Updated Maori sky culture
- Updated list of names of deep-sky objects
- Updated list of asterisms
- Updated selection behaviour in Oculars plugin (avoid selection of objects outside ocular circle in eyepiece mode)
- Updated behaviour of methods getEnglishName() and getNameI18n() for minor bodies
- Updated behaviour of planetarium for support a new format of asteroid names
- Updated behaviour of filters for DSO catalogs
- Updated Solar System Editor plugin (support new format of asteroid names)
- Updated RTS2 telescope driver in Telescope Control plugin.
- Updated API docs
- Updated limit of magnitude for Oculars plugin (Improvements)
- Updated AstroCalc/WUT tool
- Updated AstroCalc/Ephemeris tool
- Updated rules for storing default settings
- Updated rules for computation visibility of DSO hints
- Updated plugins
- Updated default values for material fade-in/fade-out times
- Updated stellarium.appdata.xml file
- Updated tab rules in the GUI
- Reduce warnings to one when loading OBJ with non-default w texture/vertex coordinates

Keep the Raws Coming


Our friendly neighborhood @LebedevRI pointed out to me a little while ago that we had reached some nice milestones. Not surprisingly, I had spaced out and not written anything about it (or really any sort of social posts). Bad Pat!

So let’s talk about (RPU) a bit!


For anyone not familiar with RPU, a quick recap (we had previously written about it earlier this year). There used to be a website housing a repository of raw files for as many digital cameras as possible. It was created by Jakob Rohrbach and had been running since March of 2007. Back in 2016 the site was hit with a SQL injection attack that left the Joomla database corrupted (in a teachable moment, the site also didn’t have a database backup).

With the site down, @LebedevRI and @andabata worked to get a replacement option in place and working!

Sexy Stats

We grabbed all the files we could salvage from the old site, and @andabata set up the new page. We’ve had a slowly growing response as folks have filled in gaps for camera models we still don’t have.

For reference, the site shows the current number of unique cameras in the archive and the total number of unique samples.

RPU samples graph
RPU cameras graph

Moar samples!

As @LebedevRI has said, we still really need folks to check RPU and send us more samples!

  • We currently only have about 77% coverage.
  • We want to replace any non-CC0 (public domain) samples with CC0 licensed samples.
  • We are still missing some rarer samples like any medium-format or Sigma samples.

Our hope is that some casual reader out there might look at the list and say “Hey! I’ve got that camera lying around - let me submit a sample!”.

Here’s the current list of missing camera samples:

Canon EOS Kiss Digital F
Canon EOS Kiss X7
Canon EOS Kiss X70
Canon EOS Kiss X80
Canon EOS Kiss X9
Canon EOS Rebel SL2
Canon EOS Kiss Digital
Canon EOS Kiss Digital X
Canon Kiss Digital X2
Canon Kiss X2
Canon EOS 5DS
Canon EOS Kiss X5
Canon EOS Kiss X6i
Canon EOS Rebel T4i
Canon EOS Kiss X7i
Canon EOS Kiss X8i
Canon EOS 8000D
Canon EOS Rebel T6s
Canon EOS 9000D
Canon EOS Kiss X9i
Canon EOS M10
Canon EOS M2
Canon PowerShot G9 X
Canon PowerShot S95
Canon PowerShot SX260 HS
Fujifilm FinePix HS30EXR
Fujifilm FinePix HS50EXR
Fujifilm FinePix S100FS
Fujifilm FinePix S5200
Fujifilm FinePix S5500
Fujifilm FinePix S6000fd
Fujifilm FinePix S9000
Fujifilm FinePix S9600fd
Fujifilm IS-1
Fujifilm XF1
Fujifilm XQ2
Kodak EasyShare Z980
Kodak P880
Leaf Aptus-II 5
Leaf Credo 40
Leaf Credo 60
Leaf Credo 80
Leica D-LUX 4
Leica D-LUX 5
Leica D-LUX 6
Leica X2
Minolta DiMAGE 5
Minolta Alpha 5D
Minolta Maxxum 5D
Minolta Alpha 7D
Minolta Maxxum 7D
Nikon 1 J3
Nikon 1 J4
Nikon 1 S1
Nikon 1 V3
Nikon Coolpix A
Nikon Coolpix P7700
Nikon D1H
Nikon D2H
Nikon D2Hs
Nikon D3S
Nikon D4S
Nokia Lumia 1020
Olympus E-10
Olympus E-400
Olympus E-PL1
Olympus E-PL2
Olympus SP320
Olympus SP570UZ
Olympus Stylus1
Olympus XZ-10
Panasonic DMC-FZ80
Panasonic DMC-FZ85
Panasonic DC-FZ91
Panasonic DC-FZ92
Panasonic DC-FZ93
Panasonic DC-ZS70
Panasonic DMC-FX150
Panasonic DMC-FZ100
Panasonic DMC-FZ35
Panasonic DMC-FZ40
Panasonic DMC-FZ50
Panasonic DMC-G5
Panasonic DMC-G8
Panasonic DMC-G85
Panasonic DMC-GF2
Panasonic DMC-GM5
Panasonic DMC-LX9
Panasonic DMC-TZ110
Panasonic DMC-ZS110
Panasonic DMC-ZS40
Panasonic DMC-ZS50
Panasonic DMC-TZ85
Panasonic DMC-ZS60
Pentax 645Z
Pentax K2000
Pentax Q10
Pentax Q7
Phase One IQ250
Ricoh GR
Ricoh GR II
Samsung EK-GN120
Samsung GX10
Samsung GX20
Samsung NX10
Samsung NX1000
Samsung NX11
Samsung NX1100
Samsung NX20
Samsung NX2000
Samsung NX210
Samsung NX5
Sinar Hy6
Sony DSC-RX1
Sony DSLR-A230
Sony DSLR-A290
Sony DSLR-A380
Sony DSLR-A390
Sony DSLR-A450
Sony DSLR-A500
Sony DSLR-A560
Sony ILCE-3000
Sony ILCE-3500
Sony NEX-5N
Sony NEX-C3
Sony NEX-F3
Sony SLT-A33

If you have any of the cameras on this list and don’t mind spending a few minutes uploading a sample file, we would be very grateful for the help!

Don’t forget that we are looking for:

  • Lens mounted on the camera, cap off
  • Image in focus and properly exposed
  • Landscape orientation

and we are not looking for:

  • Series of images with different ISO, aperture, shutter, wb, lighting, or different lenses
  • DNG files created with Adobe DNG Converter
  • Photographs of people, for legal reasons.

If you don’t see your camera on this list, you’re not off the hook yet! We are also looking for files that are licensed very freely…

Non Creative-Commons Zero (CC0)

We have many raw samples that were not licensed as freely as we would like. Ideally we are looking for images that have been released Creative Commons Zero (CC0). This list is all samples we already have that are not licensed CC0, so if you happen to have one of the cameras listed below please consider uploading some new samples for us!

Canon IXUS900Ti
Canon PowerShot A550
Canon PowerShot A570 IS
Canon PowerShot A610
Canon PowerShot A620
Canon PowerShot A630
Canon Powershot A650
Canon PowerShot A710 IS
Canon PowerShot G7
Canon PowerShot S2 IS
Canon PowerShot S5 IS
Canon PowerShot SD750
Canon Powershot SX110IS
Canon EOS 10D
Canon EOS 1200D
Canon EOS-1D
Canon EOS-1D Mark II
Canon EOS-1D Mark III
Canon EOS-1D Mark II N
Canon EOS-1D Mark IV
Canon EOS-1Ds
Canon EOS-1Ds Mark II
Canon EOS-1Ds Mark III
Canon EOS-1D X
Canon EOS 300D
Canon EOS 30D
Canon EOS 400D
Canon EOS 40D
Canon EOS 760D
Canon EOS D2000C
Canon EOS D60
Canon EOS Digital Rebel XS
Canon EOS M
Canon EOS Rebel T3
Canon EOS Rebel T6i
Canon PowerShot A3200 IS
Canon Powershot A720 IS
Canon PowerShot G10
Canon PowerShot G11
Canon PowerShot G12
Canon PowerShot G15
Canon PowerShot G1
Canon PowerShot G1 X Mark II
Canon PowerShot G2
Canon PowerShot G3
Canon PowerShot G5
Canon PowerShot G5 X
Canon PowerShot G6
Canon PowerShot Pro1
Canon PowerShot Pro70
Canon PowerShot S30
Canon PowerShot S40
Canon PowerShot S45
Canon PowerShot S50
Canon PowerShot S60
Canon PowerShot S70
Canon PowerShot S90
Canon PowerShot SD450
Canon PowerShot SX130 IS
Canon PowerShot SX1 IS
Canon PowerShot SX50 HS
Canon PowerShot SX510 HS
Canon PowerShot SX60 HS
Canon PowerShot S3 IS
Epson R-D1
Fujifilm FinePix E550
Fujifilm FinePix E900
Fujifilm FinePix F600EXR
Fujifilm FinePix F700
Fujifilm FinePix F900EXR
Fujifilm FinePix HS10 HS11
Fujifilm FinePix HS20EXR
Fujifilm FinePix S200EXR
Fujifilm FinePix S2Pro
Fujifilm FinePix S3Pro
Fujifilm FinePix S5000
Fujifilm FinePix S5600
Fujifilm FinePix S6500fd
Fujifilm FinePix X100
Fujifilm X100S
Fujifilm X-A2
Fujifilm XQ1
Hasselblad CF132
Hasselblad CFV
Hasselblad H3D
Kodak DC120
Kodak DC50
Kodak DCS460D
Kodak DCS560C
Kodak DCS Pro SLR/n
Kodak EOS DCS 1
Kodak C330
Kodak C603 / Kodak C643
Kodak Z1015 IS
Leaf Aptus 75
Leaf Aptus 22
Leica Digilux 2
Leica D-LUX 3
Leica M8
Leica M (Typ 240)
Leica V-LUX 1
Mamiya ZD
Minolta DiMAGE 7
Minolta DiMAGE 7Hi
Minolta DiMAGE 7i
Minolta DiMAGE A1
Minolta DiMAGE A200
Minolta DiMAGE A2
Minolta Dimage Z2
Minolta Dynax 5D
Minolta Dynax 7D
Minolta RD-175
Nikon 1 S2
Nikon 1 V1
Nikon Coolpix P340
Nikon Coolpix P6000
Nikon Coolpix P7000
Nikon Coolpix P7100
Nikon D100
Nikon D1
Nikon D1X
Nikon D2X
Nikon D300S
Nikon D3
Nikon D3X
Nikon D40
Nikon D60
Nikon D70
Nikon D800
Nikon D80
Nikon D810
Nikon E5400
Nikon E5700
Nikon LS-5000
Nokia Lumia 1020
Olympus C5050Z
Olympus C5060WZ
Olympus C8080WZ
Olympus E-1
Olympus E-20
Olympus E-300
Olympus E-30
Olympus E-330
Olympus E-3
Olympus E-420
Olympus E-450
Olympus E-500
Olympus E-510
Olympus E-520
Olympus E-5
Olympus E-600
Olympus E-P1
Olympus E-P2
Olympus E-P3
Olympus E-PL5
Olympus SP350
Olympus SP500UZ
Olympus XZ-1
Panasonic DMC-FZ150
Panasonic DMC-FZ18
Panasonic DMC-FZ200
Panasonic DMC-FZ28
Panasonic DMC-FZ30
Panasonic DMC-FZ38
Panasonic DMC-FZ70
Panasonic DMC-FZ72
Panasonic DMC-FZ8
Panasonic DMC-G1
Panasonic DMC-G3
Panasonic DMC-GF3
Panasonic DMC-GF5
Panasonic DMC-GF7
Panasonic DMC-GH2
Panasonic DMC-GH3
Panasonic DMC-GH4
Panasonic DMC-GM1
Panasonic DMC-GX7
Panasonic DMC-L10
Panasonic DMC-L1
Panasonic DMC-LF1
Panasonic DMC-LX1
Panasonic DMC-LX2
Panasonic DMC-LX3
Panasonic DMC-LX5
Panasonic DMC-LX7
Panasonic DMC-TZ60
Panasonic DMC-TZ71
Pentax *ist D
Pentax *ist DL2
Pentax *ist DS
Pentax K100D Super
Pentax K10D
Pentax K20D
Pentax K-50
Pentax K-m
Pentax K-r
Pentax K-S1
Pentax Optio S4
Polaroid x530
Samsung EX2F
Samsung NX100
Samsung NX300
Samsung NX300M
Samsung NX500
Samsung WB2000
Sigma DP2 Quattro
Sigma DP1s
Sigma DP2 Merrill
Sigma SD10
Sigma SD14
Sigma SD9
Sony DSC-R1
Sony DSC-RX100
Sony DSC-RX100M2
Sony DSC-RX100M3
Sony DSC-RX100M4
Sony DSC-RX10
Sony DSC-RX10M2
Sony DSLR-A100
Sony DSLR-A200
Sony DSLR-A300
Sony DSLR-A330
Sony DSLR-A350
Sony DSLR-A550
Sony DSLR-A580
Sony DSLR-A700
Sony DSLR-A850
Sony DSLR-A900
Sony NEX-3
Sony NEX-5R
Sony NEX-7
Sony SLT-A35
Sony SLT-A58
Sony SLT-A77
Sony SLT-A99

We are really working hard to make sure we are a good resource of freely available raw samples for all Free Software imaging projects to use. Thank you so much for helping out if you can!

September 28, 2017

Audio Output from a Raspberry Pi Zero

Someone at our makerspace found a fun Halloween project we could do at Coder Dojo: a motion sensing pumpkin that laughs evilly when anyone comes near. Great! I've worked with both PIR sensors and ping rangefinders, and it sounded like a fun project to mentor. I did suggest, however, that these days a Raspberry Pi Zero W is cheaper than an Arduino, and playing sounds on it ought to be easier since you have frameworks like ALSA and pygame to work with.

The key phrase is "ought to be easier". There's a catch: the Pi Zero and Zero W don't have an audio output jack like their larger cousins. It's possible to get analog audio output from two GPIO pins (use the term "PWM output" for web searches), but there's a lot of noise. Larger Pis have a built-in low-pass filter to screen out the noise, but on a Pi Zero you have to add a low-pass filter. Of course, you can buy HATs for Pi Zeros that add a sound card, but if you're not super picky about audio quality, you can make your own low-pass filter out of two resistors and two capacitors per channel (multiply by two if you want both the left and right channels).

There are lots of tutorials scattered around the web about how to add audio to a Pi Zero, but I found a lot of them confusing; e.g. Adafruit's tutorial on Pi Zero sound has three different ways to edit the system files, and doesn't specify things like the values of the resistors and capacitors in the circuit diagram (hint: it's clearer if you download the Fritzing file, run Fritzing and click on each resistor). There's a clearer diagram in Sudomod Forums: PWM Audio Guide, but I didn't find that until after I'd made my own, so here's mine.

Parts list:

  • 2 x 270 Ω resistor
  • 2 x 150 Ω resistor
  • 2 x 10 nF or 33 nF capacitor
  • 2 x 1μF electrolytic capacitor
  • 3.5mm headphone jack, or whatever connection you want to use to your speakers
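
If you want to sanity-check the filter values before soldering: the corner frequency of a single RC low-pass stage is f_c = 1/(2πRC). Here's a quick check pairing the 270 Ω resistor with the 33 nF capacitor (which resistor pairs with which capacitor is my assumption; the exact corner of the full divider-plus-filter network will differ somewhat):

```shell
# Corner frequency f_c = 1 / (2 * pi * R * C) for one RC stage.
# R = 270 ohms and C = 33 nF are taken from the parts list above.
awk 'BEGIN { pi = 3.14159265
             R = 270; C = 33e-9
             printf "f_c = %.0f Hz\n", 1 / (2 * pi * R * C) }'
```

That lands around 18 kHz: roughly the top of the audible range, so the audio passes mostly untouched while the much higher-frequency PWM switching noise is attenuated.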

And here's how to wire it: [Adding audio to the Raspberry Pi Zero]
(Fritzing file, pi-zero-audio.fzz.)

This wiring assumes you're using pins 13 and 18 for the left and right channels. You'll need to configure your Pi to use those pins. Add this to /boot/config.txt:
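
The overlay line that is commonly used for PWM audio on pins 18 and 13 is the following (worth double-checking against the current Raspberry Pi overlay documentation for your firmware version):

```
dtoverlay=pwm-2chan,pin=18,func=2,pin2=13,func2=4
```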



Once you build your circuit up, you need to test it. Plug in your speaker or headphones, then make sure you can play anything at all:

aplay /usr/share/sounds/alsa/Front_Center.wav

If you need to adjust the volume, run alsamixer and use the up and down arrow keys to adjust volume. You'll have to press up or down several times before the bargraph actually shows a change, so don't despair if your first press does nothing.

That should play in both channels. Next you'll probably be curious whether stereo is actually working. Curiously, none of the tutorials address how to test this. If you ls /usr/share/sounds/alsa/ you'll see names like Front_Left.wav, which might lead you to believe that aplay /usr/share/sounds/alsa/Front_Left.wav might play only on the left. Not so: it's a recording of a voice saying "Front left" in both channels. Very confusing!

Of course, you can copy a music file to your Pi, play it (omxplayer is a nice commandline player that's installed by default and handles MP3) and see if it's in stereo. But the best way I found to test audio channels is this:

speaker-test -t wav -c 2

That will play those ALSA voices in the correct channel, alternating between left and right. (MythTV has a good overview of how to use speaker-test.)

Not loud enough?

I found the volume plenty loud via earbuds, but if you're targeting something like a Halloween pumpkin, you might need more volume. The easy way is to use an amplified speaker (if you don't mind putting your nice amplified speaker amidst the yucky pumpkin guts), but you can also build a simple amplifier. Here's one that looks good, but I haven't built one yet: One Transistor Audio for Pi Zero W

Of course, if you want better sound quality, there are various places that sell HATs with a sound chip and line or headphone out.

Krita 3.3.0

Less than a month after Krita 3.2.1, we’re releasing Krita 3.3.0. We’re bumping the version because there are some important changes, especially for Windows users in this version!

Alvin Wong has implemented support for the Windows 8 event API, which means that Krita now supports the n-trig pen in the Surface line of laptops (and similar laptops from Dell, HP and Acer) natively. This is still very new, so you have to enable this in the tablet settings:

And he also refactored Krita’s hardware-accelerated display functionality to optionally use Angle on Windows instead of native OpenGL. That means that many problems with Intel display chips and broken driver versions are worked around because Krita now can use Direct3D indirectly.

There are more changes in this release, of course:

  • Some visual glitches when using hi-dpi screens are fixed (remember: on Windows and Linux, you need to enable this in the settings dialog).
  • If you create a new image from clipboard, the image will have a title
  • Favorite blending modes and favorite brush presets are now loaded correctly on startup
  • GMIC
    • the plugin has been updated to the latest version for Windows and Linux.
    • the configuration for setting the path to the plugin has been removed. Krita looks for the plugin in the folder where the krita executable is, and optionally inside a folder with a name that starts with ‘gmic’ next to the krita executable.
    • there are several fixes for handling layers and communication between Krita and the plugin
  • Some websites save jpeg images with a .png extension: that used to confuse Krita, but Krita now first looks inside the file to see what kind of file it really is.
  • PNG:
    • 16 and 32 bit floating point images are now converted to 16 bit integer when saving the images as PNG.
    • It’s now possible to save the alpha channel to PNG images even if there are no (semi-) transparent pixels in the image
  • When hardware accelerated display is disabled, the color picker mode of the brush tool showed a broken cursor; this has been fixed.
  • The Reference Images docker now only starts loading images when it is visible, instead of at Krita startup. Note: the reference images docker uses Qt’s imageio plugins to load images. If you are running on Linux, remove all Deepin desktop components. Deepin comes with severely broken qimageio plugins that will crash any Qt application that tries to display images.
  • File layers now correctly reload on change again
  • Add several new commandline options:
    • --nosplash to start Krita without showing the splash screen
    • --canvasonly to start Krita in canvas-only mode
    • --fullscreen to start Krita full-screen
    • --workspace Workspace to start Krita with the given workspace
  • Selections
    • The Select All action now first clears the selection before selecting the entire image
    • It is now possible to extend selections outside the canvas boundary
  • Performance improvements: in several places superfluous reads from the settings were eliminated, which makes generating a layer thumbnail faster and improves painting if display acceleration is turned off.
  • The smart number input boxes now use the current locale to follow desktop settings for numbers
  • The system information dialog for bug reports is improved
  • macOS/OSX specific changes:
    • Bernhard Liebl has improved the tablet/stylus accuracy. The problem with circles having straight line segments is much improved, though it’s not perfect yet.
    • On macOS/OSX systems with an AMD GPU, support for hardware accelerated display is disabled because saving to PNG and JPG hangs Krita otherwise.



Note for Windows users: if you encounter crashes, please follow these instructions to use the debug symbols so we can figure out where Krita crashes.


(If, for some reason, Firefox thinks it needs to load this as text: to download, right-click on the link.)

When it is updated, you can also use the Krita Lime PPA to install Krita 3.3.0 on Ubuntu and derivatives.


Note: the gmic-qt and pdf plugins are not available on OSX.

Source code


For all downloads:


The Linux appimage and the source tarball are signed. You can retrieve the public key over https. The signatures are here.

Support Krita

Krita is a free and open source project. Please consider supporting the project with donations or by buying training videos or the artbook! With your support, we can keep the core team working on Krita full-time.

September 27, 2017

Enabling New Contributors

I had a random idea today and wanted to share it in case anybody has thought about this too, or tried something like it, or could add on to the idea.

How We Onboard Today

I onboard, mentor, and think a lot about enabling new contributors to open source software. Traditionally in Fedora, we’ve called out a ‘join’ process for people to join Fedora. If you visit, you’ll get redirected to a wiki page that gives broad categories of skill sets and suggests Fedora teams you might want to look at to see if you could join them.

I started thinking about this because I’m giving a keynote about open source and UX at Ohio Linux Fest this weekend. One of the sections of the talk basically reviews where / how to find UX designers to help open source projects. Some of the things I mention that have proven effective are internships (Outreachy, formal Red Hat intern program, etc.), training, and design bounties / job boards. Posting UX assistance on, say, a general join page didn’t come up. I can’t tell you if I’ve actually onboarded folks from that workflow – certainly possible. My best success ratio in onboarding contributors in terms of them feeling productive and sticking around the community for a while, though, is with the methods I listed above – not a general call for folks of a certain discipline to come to the design team.

In fact, one of the ways we onboard people to the design team is to assign them a specific task, with the thought that they can learn how our team / processes / tools work by doing, and have a task to focus on for getting help from another member of the team / mentor.

Successful Onboarding Methods are Task-Oriented

Thinking about this, these successful recruitment methods of new contributors all focus on tasks, not skills:

  • Internships – internships have a set time period focused on the completion of a particular project, scoped for that duration and complexity and documented for the intern. Digging through the archives of proposed Outreachy and GSoC projects unearths great sets of directions that any new contributor could use to get started (if they were still current).
  • Training – in my experience, when training folks without UX experience in UX, they had a specific task they were working on already, knew they needed the skill to complete it, and sought out help with the skill. A task was the driver to seek out the skill.
  • Job board postings – they are focused on a specific task / thing to do.
  • Bounties – super task-focused!

If onboarding new contributors works well when those new contributors are put to work right away on a specific, assigned task with a well-defined scope, why do we attempt to recruit by categories of skills with loose pointers to teams (that get out of date), instead of tasks? You might have someone fired up to do *something*, but they’re redirected to a wiki page, to a mailing list, to wait a few days for someone to respond and tell them “hi, welcome!” without actually helping them figure out what it is they could do.

An Idea For

If you’re with me here, up to this point… here’s the idea. I haven’t done it yet. I want to hear your feedback on it first.

I thought about redoing as a bounty board, really a job posting board, but let’s call it a bounty board. Bounties are very well defined tasks. I did a talk on how to create an effective bounty a while back, here’s the high-level crash-course:

  1. Set the Stage. Give the narrative around the task / project. What is the broader story around what the software / website / project / etc. does? Who does it help? How does it make the world a better place? Most importantly, what’s the problem to be solved that the bounty taker will work on, and how does it fit into that broader narrative?
  2. State the Mission. Make a clear statement at what exactly the bounty is – state what the successful completion of the bounty would look like / work.
  3. Provide a Specification with Clear Examples. Give all the details needed – the specification – for the completion of the work. Is there a specific process with steps they should follow? Provide those steps. A specific language, or a specific length, or a certain number of items? Make this all clear.
  4. Provide Resources and Tools. What are the resources that would be the most useful in completing this bounty? Where is the IRC channel for the project? The mailing list? Are there any design asset / source files they will need? How about style guidelines / specifications to follow? Will they need to create any accounts to submit their work? Where? Are there any tutorials / videos / documentation / blog posts that explains the technology of interest that they could refer to in order to familiarize themselves with the domain they’ll be working in? Link out to all this stuff.
  5. Outline the Benefits. Clearly and explicitly state what’s in it for them to take on this bounty. Job sites do this too (or at least they try to). You’ll become a Fedora contributor! You’ll get a Fedora account and membership in the team, which will get you an email forward! When I did bounties, I sent handwritten thank you notes with some swag through the mail. You’ll gain skills in X, Y, or Z. You’ll make life better for our users. Some of this is obvious, but it helps to state it explicitly!
  6. Ground Rules and Contact Info. How does someone claim the bounty? Do they need to get an account and assign it to themselves? What happens if they don’t do anything and time has passed, can it be opened up to others interested? (We had a 48-hour rule before we passed on to the next person when we did this on the Design Team.) Who is the contact person / mentor for the assignment? How can they contact that person?
  7. Show Off the Work! – After a bounty is completed, show off the work! Make a post, on a blog or mailing list or wherever, to tell the story of how the person who took the bounty completed it and give a demo or show off their work. (This is a big part of the benefits too 🙂 ) This not only gives the new contributor a boost, it’s encouraging to other potential new contributors as they can see that new contributors are valued and can achieve cool things, and it’s also helpful in that it shows folks who haven’t set up bounties that maybe they should because it works!

I was thinking about setting this up as a pagure repo, and using the issues section for the actual bounty posting. The notion of status that applies to bugs / issues also applies to bounties, as well as assigning, etc. So maybe it would work well. Issues don’t explicitly manage the queue of bounty takers (should the 1st claimer fall through) but that could be managed through the comments. Any one from any Fedora team could post a bounty in this system. The git repo part of the pagure repo could be used for hosting some general bounty assets / resources – maybe a guide on how to write a good bounty with templates and cool graphics to include, maybe some basic instructions that would be useful for all bounty takers like how to create a new FAS account.

What about easy fix?

We do have a great resource that is similar to this idea in that it uses issues/tickets in a manner geared towards new contributors. It provides a list of bugs that project owners have marked as easy to fix, all in one place.

The difference here though, is that these are raw bugs. They don’t have all the components of a bounty as explained above, and looking through some of the active and open ones, you could not get started right away without flagging down the right person and getting an explanation of how to proceed or going back and forth on the ticket. I think one of the things that makes bounties compelling is that you can read them and get started right away.

Bounties *do* take a long time to formulate and document. It is a very similar process to proposing a project for an internship program like Outreachy or Google Summer of Code. I bet, though, I could go around different teams in Fedora and find projects that would fit this scope quite well and start building out a list. Maybe as teams have direct success with a program like this, they’d continue to use it and it’d become self-sustaining. I don’t know, though. Clearly, I stopped doing the design team bounties after 4 or 5 because of the amount of work involved. 🙂 But maybe if it was a regular thing, we did one every month or something… not sure.

What do you think?

Does this idea make sense? Did I miss something (or totally miss the point)? Do you have a great idea to make it better? Let me know in the comments. 🙂

Title Design: from Wonder Woman to xXx

Joseph Conover is a 3D artist at Greenhaus GFX, where he created graphics for several high profile film credit sequences such as Wonder Woman, xXx: Return of Xander Cage, Guardians of the Galaxy Vol. 2 and more. As he stepped into the industry and picked up other creative tools, Joseph found that Blender often gave him an edge in terms of workflow.

Text by Joseph Conover, Greenhaus GFX

I started using Blender about ten years ago and still implement it in my workflow for modeling, simulation, texturing, sculpting, and various other general tasks. The software is so comprehensive that it lets me picture the final product from a wide viewpoint. It offers a big advantage in eliminating guesswork and time wasted when jumping between different programs.

The largest project I’ve worked on at Greenhaus so far is Wonder Woman’s end title sequence.

I did too many random things to count, but these are screenshots of notable parts:

Patty Jenkins (Wonder Woman’s director) thought that many scenes in our sequence were too warlike and wanted some uplifting moments, so I 3D projected this view of Themyscira (home to Wonder Woman and the Amazons) based on a painted version created by my boss, Jason Doherty.

Here are several of my more notable models that were used in various scenes. The woman was based on actress Gal Gadot – sculpted in ZBrush and refined in Blender. For the plane, I took inspiration from WWII German biplanes. My favorite thing to work on was the sword structure, in which I used arrays and curve modifiers to create a rotating structure effect.

This was one of the environments I got to develop from start to finish. It was a mix of kitbashing and modeling in Blender. The whole process only took me an afternoon to finish because I was able to quickly duplicate the pieces and fill in the space. This scene was also repurposed in different shots throughout the sequence.

Guardians of the Galaxy Vol. 2’s logo was a different story, because it started off in Blender but ended in C4D. This was the logo our client liked at first, which was done in Blender with some 80’s style comping in After Effects:

While Guardians of the Galaxy Vol. 2’s final product didn’t use much of Blender other than the animation, this promotional ad for the 2017 NHL All-Star Weekend did.

This was a great example of Blender’s versatility. For the two shots below, I had to hand model the scenes to match the Cinerama Dome and the Hollywood Sign. Blender allowed me to quickly draft out my ideas from animation to the final lighting before I exported it to Maya and rendered in V-ray.

So what are your thoughts? Hit me up at if you want to chat about Blender or just talk art!

September 26, 2017

fwupd about to break API and ABI

Soon I’m going to merge a PR to fwupd that breaks API and ABI and bumps the soname. If you want to use the stable branch, please track 0_9_X. The API break removes all the deprecated API and cruft we’ve picked up in the months since we started the project, and with the 1.0.0 version coming up in a few weeks it seems a sensible time to have a clean out. If it helps, I’m going to put 0.9.x in Fedora 26 and F27, so the master branch will probably only be used for F28/rawhide and jhbuild at this point.

In other news, 4 days ago I became a father again, so expect emails to be delayed and full of confusion. All doing great, but it turns out sleep is for the weak. :)

September 25, 2017

Blender Daily Doodles

If you follow me on Instagram or Youtube, you’ve probably noticed all my spare time has been consumed by flying racing drones recently. Winter is approaching, so I’d rather spare my fingers from freezing and focus on my other passion, 3D doodling.

Modifier stack explorations

This blog post is the equivalent of a new year’s resolution. I’ll probably be overwhelmed by duties and will drop out from this, but at least being public about it creates some pressure to keep trying. Feel free to help out with the motivation :)

Animation Nodes is amazing

September 21, 2017

Krita 3.3.0 – first release candidate

Less than a month after Krita 3.2.1, we’re getting ready to release Krita 3.3.0. We’re bumping the version because there are some important changes for Windows users in this version!

Alvin Wong has implemented support for the Windows 8 event API, which means that Krita now supports the n-trig pen in the Surface line of laptops (and similar laptops from Dell, HP and Acer) natively. This is still very new, so you have to enable this in the tablet settings:

And he also refactored Krita’s hardware-accelerated display functionality to optionally use Angle on Windows instead of native OpenGL. That means that many problems with Intel display chips and broken driver versions are worked around because Krita now indirectly uses Direct3D.

There are more changes in this release, of course:

  • Some visual glitches when using hi-dpi screens are fixed (remember: on Windows and Linux, you need to enable this in the settings dialog).
  • If you create a new image from clipboard, the image will have a title
  • Favorite blending modes and favorite brush presets are now loaded correctly on startup
  • GMIC
    • the plugin has been updated to the latest version for Windows and Linux.
    • the configuration for setting the path to the plugin has been removed. Krita looks for the plugin in the folder where the krita executable is, and optionally inside a folder with a name that starts with ‘gmic’ next to the krita executable.
    • there are several fixes for handling layers and communication between Krita and the plugin
  • Some websites save jpeg images with a .png extension: that used to confuse Krita, but Krita now first looks inside the file to see what kind of file it really is.
  • PNG:
    • 16 and 32 bit floating point images are now converted to 16 bit integer when saving the images as PNG.
    • It’s now possible to save the alpha channel to PNG images even if there are no (semi-) transparent pixels in the image
  • When hardware accelerated display is disabled, the color picker mode of the brush tool showed a broken cursor; this has been fixed.
  • The Reference Images docker now only starts loading images when it is visible, instead of at Krita startup. Note: the reference images docker uses Qt’s imageio plugins to load images. If you are running on Linux, remove all Deepin desktop components. Deepin comes with severely broken qimageio plugins that will crash any Qt application that tries to display images.
  • File layers now correctly reload on change again
  • Add several new commandline options:
    • --nosplash to start Krita without showing the splash screen
    • --canvasonly to start Krita in canvas-only mode
    • --fullscreen to start Krita full-screen
    • --workspace Workspace to start Krita with the given workspace
  • Selections
    • The Select All action now first clears the selection before selecting the entire image
    • It is now possible to extend selections outside the canvas boundary
  • Performance improvements: in several places superfluous reads from the settings were eliminated, which makes generating a layer thumbnail faster and improves painting if display acceleration is turned off.
  • The smart number input boxes now use the current locale to follow desktop settings for numbers
  • The system information dialog for bug reports is improved
  • macOS/OSX specific changes:
    • Bernhard Liebl has improved the tablet/stylus accuracy. The problem with circles having straight line segments is much improved, though it’s not perfect yet.
    • On macOS/OSX systems with an AMD GPU, support for hardware accelerated display is disabled because saving to PNG and JPG hangs Krita otherwise.



Note for Windows users: if you encounter crashes, please follow these instructions to use the debug symbols so we can figure out where Krita crashes. There are no 32-bit packages at this point, but there will be for the final release.


(If, for some reason, Firefox thinks it needs to load this as text: to download, right-click on the link.)

When it is updated, you can also use the Krita Lime PPA to install Krita 3.3.0-rc.1 on Ubuntu and derivatives.


Note: the gmic-qt and pdf plugins are not available on OSX.

Source code


For all downloads:


The Linux appimage and the source tarball are signed. You can retrieve the public key over https. The signatures are here.

Support Krita

Krita is a free and open source project. Please consider supporting the project with donations or by buying training videos or the artbook! With your support, we can keep the core team working on Krita full-time.

September 20, 2017

Bluetooth on Fedora: joypads and (more) security

It's been a while since I posted about Fedora-specific Bluetooth enhancements, and even longer since I posted about PlayStation controller support.

Let's start with the nice feature.

Dual-Shock 3 and 4 support

We've had support for Dual-Shock 3 (aka Sixaxis, aka PlayStation 3 controllers) for a long while, but I've added a long-standing patchset to the Fedora packages that changes the way devices are set up.

The old way was: plug in your joypad via USB, disconnect it, and press the "P" button on the pad. At this point, and since GNOME 3.12, you would have needed the Bluetooth Settings panel opened for a question to pop up about whether the joypad can connect.

This is broken in a number of ways. If you were trying to just charge the joypad, then it would forget its original "console" and you would need to plug it in again. If you didn't have the Bluetooth panel opened when trying to use it wirelessly, then it just wouldn't have worked.

Setup is now simpler. Open the Bluetooth panel, plug in your device, and answer the question. You just want to charge it? Dismiss the query, or simply don't open the Bluetooth panel; it'll work dandily and won't overwrite the joypad's settings.

And finally, we also made sure that it works with PlayStation 4 controllers.

Note that the PlayStation 4 controller has a button combination that makes it visible and pairable, but if the device trying to connect with it doesn't behave in a particular way (probably the same way the 25€ RRP USB adapter does), it just won't work. It didn't work for me on a number of different devices.

Cable pairing for the win!

And the boring stuff

Hey, do you know what happened last week? There was a security problem in a package that I glance at sideways sometimes! Yes. Again.

A good way to minimise the impact of problems like this one is to lock the program down. In much the same way that you'd want to restrict thumbnailers, or even end-user applications, we can forbid certain functionality from being available when launched via systemd.

We've finally done this in recent fprintd and iio-sensor-proxy upstream releases, as well as for bluez in Fedora Rawhide. If testing goes well, we will integrate this in Fedora 27.

September 18, 2017

Screen Brightness on an Asus 1015e (and other Intel-based laptops)

When I upgraded my Asus laptop to Stretch, one of the things that stopped working was the screen brightness keys (Fn-F5 and Fn-F6). In Debian Jessie they had always just automagically worked without my needing to do anything, so I'd never actually learned how to set brightness on this laptop. The fix, like so many things, is easy once you know where to look.

It turned out the relevant files are in /sys/class/backlight/intel_backlight. cat /sys/class/backlight/intel_backlight/brightness tells you the current brightness; write a number to /sys/class/backlight/intel_backlight/brightness to change it.

That at least got me going (ow my eyes, full brightness is migraine-inducing in low light) but of course I wanted it back on the handy function keys.

I wrote a script named "dimmer", with a symlink to "brighter", that goes like this:


#!/bin/bash

curbright=$(cat /sys/class/backlight/intel_backlight/brightness)
if [[ $(basename $0) == 'brighter' ]]; then
  newbright=$((curbright + 200))
else
  newbright=$((curbright - 200))
fi
echo from $curbright to $newbright

sudo sh -c "echo $newbright > /sys/class/backlight/intel_backlight/brightness"

That let me type "dimmer" or "brighter" to the shell to change the brightness, with no need to remember that /sys/class/whatsit path. I got the names of the two function keys by running xev and typing Fn and F5, then Fn and F6. Then I edited my Openbox ~/.config/openbox/rc.xml, and added:

<keybind key="XF86MonBrightnessDown">
  <action name="Execute">
    <command>dimmer</command>
  </action>
</keybind>
<keybind key="XF86MonBrightnessUp">
  <action name="Execute">
    <command>brighter</command>
  </action>
</keybind>
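For what it's worth, the same adjustment can be sketched in Python, with one refinement the shell script skips: clamping the new value between 0 and the panel's max_brightness (a file that sits alongside brightness in sysfs). This is a hypothetical sketch, not part of the original setup:

```python
from pathlib import Path

# Path from this post; other laptops may use a different backlight driver.
BACKLIGHT = Path("/sys/class/backlight/intel_backlight")

def clamped_brightness(current: int, step: int, maximum: int) -> int:
    """Return current + step, clamped to the valid [0, maximum] range."""
    return max(0, min(maximum, current + step))

def adjust(step: int) -> None:
    """Apply a relative brightness change, never under- or overshooting."""
    current = int((BACKLIGHT / "brightness").read_text())
    maximum = int((BACKLIGHT / "max_brightness").read_text())
    new = clamped_brightness(current, step, maximum)
    (BACKLIGHT / "brightness").write_text(str(new))

if __name__ == "__main__":
    import sys
    if len(sys.argv) > 1:
        adjust(int(sys.argv[1]))  # e.g. 200 to brighten, -200 to dim
```

Writing to sysfs still needs root, so this would be run under sudo just like the shell version.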

September 16, 2017

David Revoy teaches Krita course at university in Paris

This past week artist David Revoy visited the Université Cergy-Pontoise in Paris, France to give a Krita training. The university's teacher, Nicolas Priniotakis, has been using Linux and other open source technology such as Blender. This was the first time the students had been exposed to Krita… and with David's help the results were a success!

You can read more about David’s trip and see what was taught during the class from his blog:


September 11, 2017

tl;dr: You need an application icon of at least 64×64 in size

At the moment the appstream-builder in Fedora requires a 48x48px application icon to be included in the AppStream metadata. I’m sure it’s no surprise that 48×48 padded to 64×64 and then interpolated up to 128×128 (for HiDPI screens) looks pretty bad. For Fedora 28 and higher I’m going to raise the minimum icon size to 64×64 which I hope people realize is actually a really low bar.

For Fedora 29 I think 128×128 would be a good minimum. From my point of view the best applications in the software center already ship large icons, and the applications with tiny icons are usually of poor quality, buggy, or just unmaintained upstream. I think it’s fine for a software center to do the equivalent of “you must be this high to ride” and if we didn’t keep asking more of upstreams we’d still be in a world with no translations, no release information and no screenshots.

Also note, applications don't have to do this; it's not like they're going to fall out of Fedora — they're still installable on the CLI using DNF, although I agree this will impact the number of people installing and using a specific application. Comments welcome.
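As an aside, checking whether an icon clears a size bar doesn't even need an image library when the icon is a PNG (an assumption for this sketch): a PNG stores its width and height as big-endian 32-bit integers at fixed offsets inside the IHDR chunk. A small stdlib-only sketch:

```python
import struct

PNG_SIGNATURE = b"\x89PNG\r\n\x1a\n"

def png_size(data: bytes) -> tuple:
    """Return (width, height) of a PNG: IHDR's first two fields, bytes 16-24."""
    if data[:8] != PNG_SIGNATURE:
        raise ValueError("not a PNG file")
    return struct.unpack(">II", data[16:24])

def meets_minimum(data: bytes, minimum: int = 64) -> bool:
    """True if both dimensions are at least `minimum` pixels."""
    width, height = png_size(data)
    return width >= minimum and height >= minimum
```

Something along these lines could gate an icon in a packaging check before appstream-builder ever sees it.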

September 10, 2017

Urban sketchers meeting in São Paulo

4 days sketching through São Paulo with 250 people from all over Brazil and beyond...

September 09, 2017

WebDriver support in WebKitGTK+ 2.18

WebDriver is an automation API to control a web browser. It allows you to create automated tests for web applications independently of the browser and platform. WebKitGTK+ 2.18, which will be released next week, includes an initial implementation of the WebDriver specification.

WebDriver in WebKitGTK+

There’s a new process (WebKitWebDriver) that works as the server, processing the clients requests to spawn and control the web browser. The WebKitGTK+ driver is not tied to any specific browser, it can be used with any WebKitGTK+ based browser, but it uses MiniBrowser as the default. The driver uses the same remote controlling protocol used by the remote inspector to communicate and control the web browser instance. The implementation is not complete yet, but it’s enough for what many users need.

The clients

The web application tests are the clients of the WebDriver server. The Selenium project provides APIs for different languages (Java, Python, Ruby, etc.) to write the tests. Python is the only language supported by WebKitGTK+ for now. It's not yet upstream, but we hope it will be integrated soon. In the meantime you can use our fork on GitHub. Let's look at an example to understand how it works and what we can do.

from selenium import webdriver

# Create a WebKitGTK driver instance. It spawns WebKitWebDriver 
# process automatically that will launch MiniBrowser.
wkgtk = webdriver.WebKitGTK()

# Let's load the WebKitGTK+ website.
wkgtk.get("https://www.webkitgtk.org")

# Find the GNOME link.
gnome = wkgtk.find_element_by_partial_link_text("GNOME")

# Click on the link.
gnome.click()

# Find the search form. 
search = wkgtk.find_element_by_id("searchform")

# Find the first input element in the search form.
text_field = search.find_element_by_tag_name("input")

# Type epiphany in the search field and submit.
text_field.send_keys("epiphany")
text_field.submit()

# Let's count the links in the contents div to check we got results.
contents = wkgtk.find_element_by_class_name("content")
links = contents.find_elements_by_tag_name("a")
assert len(links) > 0

# Quit the driver. The session is closed so MiniBrowser
# will be closed and then WebKitWebDriver process finishes.
wkgtk.quit()

Note that this is just an example to show how to write a test and what kind of things you can do; there are better ways to achieve the same results, and it depends on the current contents of public websites, so it might not work in the future.

Web browsers / applications

As I said before, WebKitWebDriver process supports any WebKitGTK+ based browser, but that doesn’t mean all browsers can automatically be controlled by automation (that would be scary). WebKitGTK+ 2.18 also provides new API for applications to support automation.

  • First of all, the application has to explicitly enable automation using webkit_web_context_set_automation_allowed(). It's important to know that the WebKitGTK+ API doesn't allow enabling automation in several WebKitWebContexts at the same time. The driver will spawn the application when a new session is requested, so the application should enable automation at startup. It's recommended that applications add a new command line option to enable automation, and only enable it when provided.
  • After launching the application the driver will request the browser to create a new automation session. The signal “automation-started” will be emitted in the context to notify the application that a new session has been created. If automation is not allowed in the context, the session won’t be created and the signal won’t be emitted either.
  • A WebKitAutomationSession object is passed as a parameter to the “automation-started” signal. This can be used to provide information about the application (name and version) to the driver, which will match it against what the client requires, accepting or rejecting the session request.
  • The WebKitAutomationSession will emit the signal “create-web-view” every time the driver needs to create a new web view. The application can then create a new window or tab containing the new web view that should be returned by the signal. This signal will always be emitted even if the browser already has an initial web view open; in that case it's recommended to return the existing empty web view.
  • Web views are also automation aware. Similar to ephemeral web views, web views that allow automation should be created with the constructor property “is-controlled-by-automation” enabled.

This is the new API that applications need to implement to support WebDriver. It's designed to be as safe as possible, but there are many things that can't be controlled by WebKitGTK+, so we have several recommendations for applications that want to support automation:

  • Add a way to enable automation in your application at startup, like a command line option, that is disabled by default. Never allow automation in a normal application instance.
  • Enabling automation is not the only thing the application should do, so add an automation mode to your application.
  • Add visual feedback when in automation mode, like changing the theme, the window title, or anything else that makes it clear that a window or instance of the application is controllable by automation.
  • Add a message to explain that the window is being controlled by automation and the user is not expected to use it.
  • Use ephemeral web views in automation mode.
  • Use a temporary user profile in automation mode; do not allow automation to change the history, bookmarks, etc. of an existing user.
  • Do not load any homepage in automation mode, just keep an empty web view (about:blank) that can be used when a new web view is requested by automation.

The WebKitGTK client driver

Applications need to implement the new automation API to support WebDriver, but the WebKitWebDriver process doesn't know how to launch the browsers. That information should be provided by the client using a WebKitGTKOptions object. The driver constructor can receive an instance of WebKitGTKOptions with the browser information and other options. Let's see how it works with an example that launches Epiphany:

from selenium import webdriver
from selenium.webdriver import WebKitGTKOptions

options = WebKitGTKOptions()
options.browser_executable_path = "/usr/bin/epiphany"
epiphany = webdriver.WebKitGTK(browser_options=options)

Again, this is just an example; Epiphany doesn't even support WebDriver yet. Browsers or applications could create their own drivers on top of the WebKitGTK one to make it more convenient to use.

from selenium import webdriver
epiphany = webdriver.Epiphany()


During the next release cycle, we plan to do the following tasks:

  • Complete the implementation: add support for all commands in the spec and complete the ones that are partially supported now.
  • Add support for running the WPT WebDriver tests in the WebKit bots.
  • Add a WebKitGTK driver implementation for other languages in Selenium.
  • Add support for automation in Epiphany.
  • Add WebDriver support to WPE/dyz.

September 05, 2017

security things in Linux v4.13

Previously: v4.12.

Here’s a short summary of some of the interesting security things in Sunday’s v4.13 release of the Linux kernel:

security documentation ReSTification
The kernel has been switching to formatting documentation with ReST, and I noticed that none of the Documentation/security/ tree had been converted yet. I took the opportunity to take a few passes at formatting the existing documentation and, at Jon Corbet’s recommendation, split it up between end-user documentation (which is mainly how to use LSMs) and developer documentation (which is mainly how to use various internal APIs). A bunch of these docs need some updating, so maybe with the improved visibility, they’ll get some extra attention.

CONFIG_REFCOUNT_FULL
Since Peter Zijlstra implemented the refcount_t API in v4.11, Elena Reshetova (with Hans Liljestrand and David Windsor) has been systematically replacing atomic_t reference counters with refcount_t. As of v4.13, there are now close to 125 conversions with many more to come. However, there were concerns over the performance characteristics of the refcount_t implementation from the maintainers of the net, mm, and block subsystems. In order to assuage these concerns and help the conversion progress continue, I added an “unchecked” refcount_t implementation (identical to the earlier atomic_t implementation) as the default, with the fully checked implementation now available under CONFIG_REFCOUNT_FULL. The plan is that for v4.14 and beyond, the kernel can grow per-architecture implementations of refcount_t that have performance characteristics on par with atomic_t (as done in grsecurity’s PAX_REFCOUNT).
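To make the difference concrete, here is a userspace analogy in Python (illustrative only, not kernel code): a plain counter wraps past its maximum back to zero, turning a reference-count overflow into a premature free, while a checked counter saturates and pins the object instead:

```python
UINT_MAX = 2**32 - 1  # refcount_t is a 32-bit counter

class CheckedRefCount:
    """Analogy for checked refcount_t semantics: saturate rather than wrap."""

    def __init__(self, value: int = 1) -> None:
        self.value = value

    def inc(self) -> None:
        if self.value == 0:
            # incrementing a dropped count is a use-after-free in waiting
            raise RuntimeError("increment on a freed object")
        if self.value < UINT_MAX:
            self.value += 1  # at UINT_MAX: stay saturated, never wrap to 0

    def dec_and_test(self) -> bool:
        """Drop a reference; True means the last reference is gone."""
        if self.value == UINT_MAX:
            return False     # a saturated (leaked) object is never freed
        self.value -= 1
        return self.value == 0
```

Leaking memory on saturation is the deliberate trade-off: a leak is far less exploitable than freeing an object that still has users.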

CONFIG_FORTIFY_SOURCE
Daniel Micay created a version of glibc’s FORTIFY_SOURCE compile-time and run-time protection for finding overflows in the common string (e.g. strcpy, strcmp) and memory (e.g. memcpy, memcmp) functions. The idea is that since the compiler already knows the size of many of the buffer arguments used by these functions, it can already build in checks for buffer overflows. When all the sizes are known at compile time, this can actually allow the compiler to fail the build instead of continuing with a proven overflow. When only some of the sizes are known (e.g. destination size is known at compile-time, but source size is only known at run-time) run-time checks are added to catch any cases where an overflow might happen. Adding this found several places where minor leaks were happening, and Daniel and I chased down fixes for them.
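A rough Python analogy of the run-time half of this (illustrative only; the real mechanism is compiler builtins in C): when the destination size is known but the copy length only arrives at run time, the copy is preceded by a bounds check instead of proceeding into whatever lies past the buffer:

```python
def fortified_memcpy(dest: bytearray, src: bytes, n: int) -> None:
    """memcpy with a FORTIFY-style run-time check bolted on."""
    # len(dest) stands in for __builtin_object_size(dest, 0),
    # the size of the whole destination object.
    if n > len(dest) or n > len(src):
        raise OverflowError("detected buffer overflow in memcpy")
    dest[:n] = src[:n]
```

The compile-time case is the same comparison, just done by the compiler against constants so the build fails instead of the copy.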

One interesting note about this protection is that it only examines the size of the whole object (via __builtin_object_size(..., 0)). If you have a string within a structure, CONFIG_FORTIFY_SOURCE as currently implemented will only make sure that you can’t copy beyond the structure (so you can still overflow the string within the structure). The next step in enhancing this protection is to switch from 0 (above) to 1, which will use the closest surrounding subobject (e.g. the string). However, there are a lot of cases where the kernel intentionally copies across multiple structure fields, which means more fixes are needed before this higher level can be enabled.

NULL-prefixed stack canary
Rik van Riel and Daniel Micay changed how the stack canary is defined on 64-bit systems to always make sure that the leading byte is zero. This provides a deterministic defense against overflowing string functions (e.g. strcpy), since they will either stop an overflowing read at the NULL byte, or be unable to write a NULL byte, thereby always triggering the canary check. This does reduce the entropy from 64 bits to 56 bits for overflow cases where NULL bytes can be written (e.g. memcpy), but the trade-off is worth it. (Besides, x86_64’s canary was 32 bits until recently.)
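An illustrative simulation in Python (the kernel does this in C, at task creation) of why the leading NUL matters: strcpy() stops copying at the first zero byte, so an overflow through it can never reproduce a canary whose first byte is zero:

```python
import secrets

def new_canary() -> bytes:
    """64-bit canary with the leading byte forced to zero (56 bits of entropy)."""
    return b"\x00" + secrets.token_bytes(7)

def strcpy_write(payload: bytes) -> bytes:
    """Model what strcpy() actually writes: everything up to the first NUL."""
    end = payload.find(b"\x00")
    return payload if end == -1 else payload[:end]

# Even an attacker who has leaked the canary value can't replay it via
# strcpy(): the copy stops dead at the canary's leading NUL byte, so the
# canary slot on the stack is never correctly rewritten.
canary = new_canary()
assert strcpy_write(canary + b"...rest of payload...") != canary
```

This is only a toy model of the memory write, but it captures the property the kernel change relies on.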

IPC refactoring
Partially in support of allowing IPC structure layouts to be randomized by the randstruct plugin, Manfred Spraul and I reorganized the internal layout of how IPC is tracked in the kernel. The resulting allocations are smaller and much easier to deal with, even if I initially missed a few needed container_of() uses.

randstruct gcc plugin
I ported grsecurity’s clever randstruct gcc plugin to upstream. This plugin allows structure layouts to be randomized on a per-build basis, providing a probabilistic defense against attacks that need to know the location of sensitive structure fields in kernel memory (which is most attacks). By moving things around in this fashion, attackers need to perform much more work to determine the resulting layout before they can mount a reliable attack.

Unfortunately, due to the timing of the development cycle, only the “manual” mode of randstruct landed in upstream (i.e. marking structures with __randomize_layout). v4.14 will also have the automatic mode enabled, which randomizes all structures that contain only function pointers.

A large number of fixes to support randstruct have been landing from v4.10 through v4.13, most of which were already identified and fixed by grsecurity, but many were novel, either in newly added drivers, in whitelisted cross-structure casts, in refactorings (like the IPC one noted above), or in a corner case on ARM found during upstream testing.

lower ELF_ET_DYN_BASE
One of the issues identified from the Stack Clash set of vulnerabilities was that it was possible to collide stack memory with the highest portion of a PIE program’s text memory since the default ELF_ET_DYN_BASE (the lowest possible random position of a PIE executable in memory) was already so high in the memory layout (specifically, 2/3rds of the way through the address space). Fixing this required teaching the ELF loader how to load interpreters as shared objects in the mmap region instead of as a PIE executable (to avoid potentially colliding with the binary it was loading). As a result, the PIE default could be moved down to ET_EXEC (0x400000) on 32-bit, entirely avoiding the subset of Stack Clash attacks. 64-bit could be moved to just above the 32-bit address space (0x100000000), leaving the entire 32-bit region open for VMs to do 32-bit addressing, but late in the cycle it was discovered that Address Sanitizer couldn’t handle it moving. With most of the Stack Clash risk only applicable to 32-bit, fixing 64-bit has been deferred until there is a way to teach Address Sanitizer how to load itself as a shared object instead of as a PIE binary.

early device randomness
I noticed that early device randomness wasn’t actually getting added to the kernel entropy pools, so I fixed that to improve the effectiveness of the latent_entropy gcc plugin.

That’s it for now; please let me know if I missed anything. As a side note, I was rather alarmed to discover that due to all my trivial ReSTification formatting, and tiny FORTIFY_SOURCE and randstruct fixes, I made it into the most active 4.13 developers list (by patch count) at LWN with 76 patches: a whopping 0.6% of the cycle’s patches. ;)

Anyway, the v4.14 merge window is open!

© 2017, Kees Cook. This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 License.
Creative Commons License

September 04, 2017

Jumpstarting the Raspberry Pi Zero W: Now Available via Humble Bundle!

[Raspberry Pi Zero W] My new book is now shipping! And it's being launched via a terrific Humble Bundle of books on electronics, making, Raspberry Pi and Arduino.

Humble Bundles, if you haven't encountered them before, let you pay what you want for a bundle of books on related subjects. The books are available in ePub, Mobi, and PDF formats, without DRM, so you can read them on your choice of device. If you pay above a certain amount, they add additional books. My book is available if you pay $15 or more.

You can also designate some of the money you pay for charity. In this case the charity is Maker Ed, a crowdfunding initiative that supports Maker programs primarily targeted toward kids in schools. (I don't know any more about them than that; check out their website for more information.)

Jumpstarting the Raspberry Pi Zero W is a short book, with only 103 pages in four chapters:

  1. Getting Started: includes tips on headless setup and the Linux command line;
  2. Blink an LED: includes ways to blink and fade LEDs from the shell and from several different Python libraries;
  3. A Temperature Notifier and Fan Control: code and wiring instructions for three different temperature sensors (plus humidity and barometric pressure), and a way to use them to control your house fan or air conditioner, either according to the temperature in the room or through a Twitter command;
  4. A Wearable News Alert Light Show: wire up NeoPixels or DotStars and make them respond to keywords on Twitter or on any other web page you choose, plus some information on powering a Pi portably with batteries.

All the code and wiring diagrams from the book, plus a few extras, are available on GitHub, at my Raspberry Pi Zero Book code repository.

To see the book bundle, go to the Electronics & Programming Humble Bundle and check out the selections. My book, Jumpstarting the Raspberry Pi Zero W, is available if you pay $15 or more -- along with tons of other books you'll probably also want. I already have Make: Electronics and it's one of the best introductory electronics books I've seen, so I'm looking forward to seeing the followup volume. Plus there are books on atmospheric and environmental monitoring, a three-volume electronic components encyclopedia, books on wearable electronics and drones and other cool stuff.

I know this sounds like a commercial, but this bundle really does look like a great deal, whether or not you specifically want my Pi book, and it's a limited-time offer, only good for six more days.

Interview with Miri

Could you tell us something about yourself?

Hi there, I’m Miri. I’ve been drawing ever since I could pick up a pencil and switched to digital in the past 3 or so years, currently enrolled in college to grind out credits and hopefully become an animator.

Do you paint professionally, as a hobby artist, or both?

At this moment in time I’m just a hobbyist who does volunteer work for the Smite Community Magazine, but I’d love to be a professional someday.

What genre(s) do you work in?

Mainly fantasy and mythology gods and gaming fanart, I really love drawing characters.

Whose work inspires you most — who are your role models as an artist?

Oh man, I really love Araki Hirohiko’s figure style, the way he draws poses is just fascinating and what really kickstarted my interest in drawing people. Jaguddada on DeviantArt is absolutely hands down my favorite digital painter, the way he handles colors and how his digital paint strokes look like oil on canvas just fascinates me, I’d love to learn color theory and his painting skills one day. Baby steps!

How and when did you get to try digital painting for the first time?

I think it was in 2014 with Paint Tool Sai, I had interest in making fanart of some band I was into, it was rough and blocky and simple, but really fun.

What makes you choose digital over traditional painting?

The ability to just erase mistakes like they never even happened, and being able to redline things without it smudging, just having so many ways to fix mistakes like they weren’t even there to begin with, and the non-messy cleanup when you’re finished drawing. No pencils littered everywhere or shavings, no drying out markers to replace, etc.

How did you find out about Krita?

I made the switch to Linux in early 2016, finding out that Sai and CS6 wouldn’t be available unless I used WINE to emulate them, I just started looking for free software that fit my taste. GIMP was okay, but it had a few quirks I couldn’t really iron out, and it was a bit simple for me. I think I found out about Krita through an art thread on some imageboard for taking requests, tried it out, it ran like a smoother SAI and I haven’t looked back.

What was your first impression?

I was like “Wow, this is a lot to take in and learn.” I’m still trying to figure out everything! There’s so many buttons I’m still unsure on what they do, I am just a simpleton!

What do you love about Krita?

I really love how it’s Linux friendly, and how it’s like a great replacement for Sai, like, I can actually make cool things in it without too much effort, and then there’s even levels further that I haven’t even explored yet that will probably even improve my stuff even more in the future. Also the opacity slider right above layers? Found that the first time I opened it, and it’s been my best friend ever since, took me a while on other programs to figure out what that bar did.

What do you think needs improvement in Krita? Is there anything that really annoys you?

There was one bug in the past when I used my left handed script that it would reset the pen size and opacity and brush type once you flipped it over from using the eraser, and it was my burden. Also the unexpected crashes. But the bug got fixed so I finally got to update my Krita again and that was a very good day. Also, not sure if it exists and I just haven’t found it yet, but Sai had a clipping tool that made it so you could clip layers to other layers so if you colored out of one of the layers it wouldn’t bleed to the rest of the drawing, that’s something I’ve googled but haven’t found the answer to yet, but it would be a godsend to find.

What sets Krita apart from the other tools that you use?

Free Software and Linux compatible. Cannot stress it enough. Linux support is a dealmaker and the fact I don’t have to pay out of pocket for a program that works just as good if not even better than the paid ones is really awesome.

If you had to pick one favourite of all your work done in Krita so far, what would it be, and why?

Every new piece becomes my favorite for a certain time, until I start pointing out flaws in it, and then go back to an older piece. I’m really into my Ra + Thoth piece, though, just because of the details and shading coming out so nice, not to mention a background I was really pleased with.

What techniques and brushes did you use in it?

Oh god it’s been so long, I believe I used the paint settings marker brush with its slight opacity for shading, one of the square brushes under blending for lighting, and the pencil tool from Deevad’s set for outlining. I’m really obsessed with the paint set of default pens for the basics.

Where can people see more of your work?

Deviantart: Pluuck
Twitter: Hikkikomiri
(Or if you like MOBAs, check out the latest SMITE community gaming magazine!)

Anything else you’d like to share?

Honestly, never give up. Accept critique but don’t let it eat you away to make you fear picking up a pen again. What may be something you find a struggle, may be even harder for someone else. Be unique, be creative, and always keep improving.

September 03, 2017

Do you have dmidecode 42?

Dear Lazyweb, I need your help. Does anyone have a newish server system (it’s not going to work on laptops) that has any output from sudo dmidecode | grep "DMI type 42"? If you do, can you tar up the contents of /sys/firmware/dmi/tables and send it to me via email? If I can get this code working then I’ll have another, more exciting blog post coming up. Thanks!

August 31, 2017

FreeCAD Arch development news - August 2017

Time for our monthly development report, explaining a bit of what I did this month. Thanks again to everybody who helps me on Patreon, each month we're getting higher! If you don't know about it yet, you can help me to spend more time working on FreeCAD by sponsoring me with any amount you want...

August 30, 2017

darktable for Windows

A long time ago there was a post about why we don't have a Windows port. While I still stand by what I wrote six years ago, the times they are a changing.

Then two years ago there was yet another post regarding Windows. The gist of it was that the real blocker for a Windows release isn't so much a technical one but the lack of a person (or several) dedicated to maintaining it. Not just for the moment until all the patches got merged but for the foreseeable future.

Then Peter Budai came along. Like some people before him, he managed to compile darktable on Windows and offered us the patches he had to make, but unlike those before him he stuck around, helped fix bugs and was open to suggestions on how to solve things in a better way. Eventually we became confident that the lack of Windows maintainership might be solved.

To cut a long story short, we are extremely pleased – albeit wary – to announce a very first official pre-alpha development snapshot for 64 bit Windows. We know it's still buggy, but as a sign of goodwill and request for help in testing it we would like to ask you to give it a try. Please report useful bugs in our bug tracker.

You can find the link to the binary on the forum (a great place btw., you should check it out).


I got myself stuck yesterday with GRUB running from an ext4 /boot/grub, but with /boot inside my LUKS LVM root partition, which meant GRUB couldn’t load the initramfs and kernel.

Luckily, it turns out that GRUB does know how to mount LUKS volumes (and LVM volumes), but all the instructions I could find talk about setting this up ahead of time (“Add GRUB_ENABLE_CRYPTODISK=y to /etc/default/grub“), rather than what the correct manual GRUB commands are to get things running on a failed boot.

These are my notes on that, in case I ever need to do this again, since there was one specific gotcha with using GRUB’s cryptomount command (noted below).

Available devices were the raw disk (hd0), the /boot/grub partition (hd0,msdos1), and the LUKS volume (hd0,msdos5):

grub> ls
(hd0) (hd0,msdos1) (hd0,msdos5)

Used cryptomount to open the LUKS volume (but without ()s! It says it works if you use parens, but then you can’t use the resulting (crypto0)):

grub> insmod luks
grub> cryptomount hd0,msdos5
Enter password...
Slot 0 opened.

Then you can load LVM and it’ll see inside the LUKS volume:

grub> insmod lvm
grub> ls
(crypto0) (hd0) (hd0,msdos1) (hd0,msdos5) (lvm/rootvg-rootlv)

And then I could boot normally:

grub> configfile $prefix/grub.cfg

After booting, I added GRUB_ENABLE_CRYPTODISK=y to /etc/default/grub and ran update-grub. I could boot normally after that, though I’d be prompted twice for the LUKS passphrase (once by GRUB, then again by the initramfs).

To avoid this, it’s possible to add a second LUKS passphrase, contained in a file in the initramfs, as described here and works for Ubuntu and Debian too. The quick summary is:

Create the keyfile and add it to LUKS:

# dd bs=512 count=4 if=/dev/urandom of=/crypto_keyfile.bin
# chmod 0400 /crypto_keyfile.bin
# cryptsetup luksAddKey /dev/sda5 /crypto_keyfile.bin
*enter original password*

Adjust the /etc/crypttab to include passing the file via /bin/cat:

sda5_crypt UUID=4aa5da72-8da6-11e7-8ac9-001cc008534d /crypto_keyfile.bin luks,keyscript=/bin/cat

Add an initramfs hook to copy the key file into the initramfs, keep non-root users from being able to read your initramfs, and trigger a rebuild:

# cat > /etc/initramfs-tools/hooks/crypto_keyfile <<"EOF"
#!/bin/sh
if [ "$1" = "prereqs" ] ; then
    exit 0
fi
cp /crypto_keyfile.bin "${DESTDIR}"
EOF
# chmod a+x /etc/initramfs-tools/hooks/crypto_keyfile
# chmod 0700 /boot
# update-initramfs -u

This has the downside of leaving a LUKS passphrase “in the clear” while you’re booted, but if someone has root, they can just get your dm-crypt encryption key directly anyway:

# dmsetup table --showkeys sda5_crypt
0 155797496 crypt aes-cbc-essiv:sha256 e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 0 8:5 2056

And of course if you’re worried about Evil Maid attacks, you’ll need a real static root of trust instead of doing full disk encryption passphrase prompting from an unverified /boot partition. :)

© 2017, Kees Cook. This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 License.
Creative Commons License

August 28, 2017

GUADEC 2017 Manchester

Really enjoyed this year’s GUADEC. Thanks everyone for coming and the local team for pulling off a perfectly organized conference.

GUADEC 2017 Manchester from jimmac on Vimeo.

Check out a few photos too.

Total Eclipse

[2017 Solar eclipse with corona] My first total eclipse! The suspense had been building for years.

Dave and I were in Wyoming. We'd made a hotel reservation nine months ago, by which time we were already too late to book a room in the zone of totality and settled for Laramie, a few hours' drive from the centerline.

For visual observing, I had my little portable 80mm refractor. But photography was more complicated. I'd promised myself that for my first (and possibly only) total eclipse, I wasn't going to miss the experience because I was spending too much time fiddling with cameras. But I couldn't talk myself into not trying any photography at all.

Initially, my plan was to use my 90mm Mak as a 500mm camera lens. It had worked okay for the 2012 Venus transit.

[Homemade solar finder for telescope] I spent several weeks before the eclipse in a flurry of creation, making a couple of solar finders, a barn-door mount, and then wrestling with motorizing the barn-door (which was a failure because I couldn't find a place to buy decent gears for the motor. I'm still working on that and will eventually write it up). I wrote up a plan: what equipment I would use when, a series of progressive exposures for totality, and so forth.

And then, a couple of days before we were due to leave, I figured I should test my rig -- and discovered that it was basically impossible to focus on the sun. For the Venus transit, the sun wasn't that high in the sky, so I focused through the viewfinder. But for the total eclipse, the sun would be almost overhead, and the viewfinder nearly impossible to see. So I had planned to point the Mak at a distant hillside, focus it, then slip the filter on and point it up to the sun. It turned out the focal point was completely different through the filter.

[Solar finder for DSLR, made from popsicle sticks] With only a couple of days left to go, I revised my plan. The Mak is difficult to focus under any circumstances. I decided not to use it, and to stick to my Canon 55-250mm zoom telephoto, with the camera on a normal tripod. I'd skip the partial eclipse (I've photographed those before anyway) and concentrate on getting a few shots of the diamond ring and the corona, running through a range of exposures without needing to look at the camera screen or do any refocusing. And since I wasn't going to be using a telescope, my nifty solar finders wouldn't work; I designed a new one out of popsicle sticks to fit in the camera's hot shoe.

Getting there

We stayed with relatives in Colorado Saturday night, then drove to Laramie Sunday. I'd heard horror stories of hotels canceling people's longstanding eclipse reservations, but fortunately our hotel honored our reservation. WHEW! Monday morning, we left the hotel at 6am in case we hit terrible traffic. There was already plenty of traffic on the highway north to Casper, but we turned east hoping for fewer crowds. A road sign said "NO PARKING ON HIGHWAY." They'd better not try to enforce that in the totality zone!

[Our eclipse viewing pullout on Wyoming 270] When we got to I-25 it was moving and, oddly enough, not particularly crowded. Glendo Reservoir had looked on the map like a nice spot on the centerline ... but it was also a state park, so there was a risk that everyone else would want to go there. Sure enough: although traffic was moving on I-25 at Wheatland, a few miles north the freeway came to a screeching halt. We backtracked and headed east toward Guernsey, where several highways went north toward the centerline.

East of Glendo, there were crowds at every highway pullout and rest stop. As we turned onto 270 and started north, I kept an eye on OsmAnd on my phone, where I'd loaded a GPX file of the eclipse path. When we were within a mile of the centerline, we stopped at a likely looking pullout. It was maybe 9 am. A cool wind was blowing -- very pleasant since we were expecting a hot day -- and we got acquainted with our fellow eclipse watchers as we waited for first contact.

Our pullout was also the beginning of a driveway to a farmhouse we could see in the distance. Periodically people pulled up, looking lost, checked maps or GPS, then headed down the road to the farm. Apparently the owners had advertised it as an eclipse spot -- pay $35, and you can see the eclipse and have access to a restroom too! But apparently the old farmhouse's plumbing failed early on, and some of the people who'd paid came out to the road to watch with us since we had better equipment set up.

[Terrible afocal view of partial eclipse] There's not much to say about the partial eclipse. We all traded views -- there were five or six scopes at our pullout, including a nice little H-alpha scope. I snapped an occasional photo through the 80mm with my pocket camera held to the eyepiece, or with the DSLR through an eyepiece projection adapter. Oddly, the DSLR photos came out worse than the pocket cam ones. I guess I should try and debug that at some point.

Shortly before totality, I set up the DSLR on the tripod, focused on a distant hillside and taped the focus with duct tape, plugged in the shutter remote, checked the settings in Manual mode, then set the camera to Program mode and AEB (auto exposure bracketing). I put the lens cap back on and pointed the camera toward the sun using the popsicle-stick solar finder. I also set a countdown timer, so I could press START when totality began and it would beep to warn me when it was time for the sun to come back out. It was getting chilly by then, with the sun down to a sliver, and we put on sweaters.

The pair of eclipse veterans at our pullout had told everybody to watch for the moon's shadow racing toward us across the hills from the west. But I didn't see the racing shadow, nor any shadow bands.

And then Venus and Mercury appeared and the sun went away.


[Solar eclipse diamond ring] One thing the photos don't prepare you for is the color of the sky. I expected it would look like twilight, maybe a little darker; but it was an eerie, beautiful medium slate blue. With that unworldly solar corona in the middle of it, and Venus gleaming as bright as you've ever seen it, and Mercury shining bright on the other side. There weren't many stars.

We didn't see birds doing anything unusual; as far as I can tell, there are no birds in this part of Wyoming. But the cows did all get in a line and start walking somewhere. Or so Dave tells me. I wasn't looking at the cows.

Amazingly, I remembered to start my timer and to pull off the DSLR's lens cap as I pushed the shutter button for the diamond-ring shots without taking my eyes off the spectacle high above. I turned the camera off and back on (to cancel AEB), switched to M mode, and snapped a photo while I scuttled over to the telescope, pulled the filter off and took a look at the corona in the wide-field eyepiece. So beautiful! Binoculars, telescope, naked eye -- I don't know which view was best.

I went through my exposure sequence on the camera, turning the dial a couple of clicks each time without looking at the settings, keeping my eyes on the sky or the telescope eyepiece. But at some point I happened to glance at the viewfinder -- and discovered that the sun was drifting out of the frame. Adjusting the tripod to get it back in the frame took longer than I wanted, but I got it there and got my eyes back on the sun as I snapped another photo ...

and my timer beeped.

I must have set it wrong! It couldn't possibly have been two and a half minutes. It had been 30, 45 seconds tops.

But I nudged the telescope away from the sun, and looked back up -- to another diamond ring. Totality really was ending and it was time to stop looking.

Getting Out

The trip back to Golden, where we were staying with a relative, was hellish. We packed up immediately after totality -- we figured we'd seen partials before, and maybe everybody else would stay. No such luck. By the time we got all the equipment packed there was already a steady stream of cars heading south on 270.

A few miles north of Guernsey the traffic came to a stop. This was to be the theme of the afternoon. Every small town in Wyoming has a stop sign or signal, and that caused backups for miles in both directions. We headed east, away from Denver, to take rural roads down through eastern Wyoming and Colorado rather than I-25, but even so, we hit small-town stop sign backups every five or ten miles.

We'd brought the Rav4 partly for this reason. I kept my eyes glued on OsmAnd and we took dirt roads when we could, skirting the paved highways -- but mostly there weren't any dirt roads going where we needed to go. It took about 7 hours to get back to Golden, about twice as long as it should have taken. And we should probably count ourselves lucky -- I've heard from other people who took 11 hours to get to Denver via other routes.

Lessons Learned

Dave is fond of the quote, "No battle plan survives contact with the enemy" (which turns out to be from Prussian military strategist Helmuth von Moltke the Elder).

The enemy, in this case, isn't the eclipse; it's time. Two and a half minutes sounds like a lot, but it goes by like nothing.

Even in my drastically scaled-down plan, I had intended exposures from 1/2000 to 2 seconds (at f/5.6 and ISO 400). In practice, I only made it to 1/320 because of fiddling with the tripod.

And that's okay. I'm thrilled with the photos I got, and definitely wouldn't have traded any eyeball time for more photos. I'm more annoyed that the tripod fiddling time made me miss a little bit of extra looking. My script actually worked out better than I expected, and I was very glad I'd done the preparation I had. The script was reasonable, the solar finders worked really well, and the lens was even in focus for the totality shots.

Then there's the eclipse itself.

I've read so many articles about solar eclipses as a mystical, religious experience. It wasn't, for me. It was just an eerily beautiful, other-worldly spectacle: that ring of cold fire staring down from the slate blue sky, bright planets but no stars, everything strange, like nothing I'd ever seen. Photos don't get across what it's like to be standing there under that weird thing in the sky.

I'm not going to drop everything to become a globe-trotting eclipse chaser ... but I sure hope I get to see another one some day.

Photos: 2017 August 21 Total Solar Eclipse in Wyoming.

August 26, 2017

Angle and Windows Ink – a new test version of Krita for Windows

We’ve created a special version of Krita 4.0 pre-alpha for Windows users to test. This version contains two big new features that should solve the biggest problems Krita has on the Windows platform:

  • Support for ANGLE. This is really technical, but basically, ANGLE is a technology that presents an OpenGL API but lets Direct3D do the work. On Windows, many OpenGL drivers are very buggy, and that could lead to crashes, black or blank screens. It’s the most-hated issue we have, and it is not even a bug in Krita! If you’re a Windows user and had to disable OpenGL in the Configure Krita dialog, then you should test this build!
  • Support for Windows Ink/Windows 8 Pointer Events. That’s to say, native support for the n-trig pen technology in Microsoft’s Surface line of products — also used by Asus, Dell and HP in their convertibles that can use a pen.

Note: this is a build from our development branch. It has all kinds of nifty but highly unstable features, like scripting, saving in the background, svg graphics… If you load a Krita 3.x file with vector layers and save it with this version of Krita, you will NOT be able to open it in your regular, stable Krita. This build is purely experimental! Do NOT use it for real work. DO help us with testing!

Test Instructions for Angle

  • Open the survey in your browser:
  • Download the test build and debug symbols
  • Open the first zipfile in Windows Explorer, and drag the krita_4.0-prealpha_angle_ink-1-x64 folder to your desktop
  • Open the second zipfile in Windows Explorer and drag the bin, lib and share folders into the krita_4.0-prealpha_angle_ink-1-x64 folder on your desktop.
  • With Windows Explorer, navigate into the krita_4.0-prealpha_angle_ink-1-x64 folder on your desktop
  • Start Krita by double-clicking on the krita link or on the bin\krita.exe file
  • You will now be given a choice:
  • Please first choose Test Desktop OpenGL. Create a new image and try to draw for a bit. Fill in the results in the survey: whether you experienced a crash or not.
  • Next, restart Krita and choose Test ANGLE. Create a new image and draw for a bit. Fill in the results in the survey.

NOTE: You won’t be able to enable/disable OpenGL/ANGLE in Krita’s Settings/Configure Krita/Display settings dialog; it will be forced on for this test.

Test Instructions for Windows Ink/Windows Pointer API

This is only relevant for Windows 8 and 10: Windows 7 does not support this API (and Krita does not support Windows 95, 98, XP or Vista). You should be using a Surface Pro with a pen or another convertible that uses Microsoft’s n-trig pen technology. It does not matter whether you have the wintab driver installed.

  • Open the survey in your browser:
  • If you haven’t performed the previous test for Angle, download the test build and debug symbols
  • Open the first zipfile in Windows Explorer, and drag the krita_4.0-prealpha_angle_ink-1-x64 folder to your desktop
  • Open the second zipfile in Windows Explorer and drag the bin, lib and share folders into the krita_4.0-prealpha_angle_ink-1-x64 folder on your desktop.
  • With Windows Explorer, navigate into the krita_4.0-prealpha_angle_ink-1-x64 folder on your desktop
  • Start Krita by double-clicking on the krita link or on the bin\krita.exe file
  • Press either Test Desktop OpenGL or Test ANGLE when you see the dialog discussed above: this does not matter
  • Go to Settings/Configure Krita/Tablet and check the experimental pointer api/windows ink support checkbox:
  • Close Krita and start Krita again
  • Create a new document and draw with the default brush. Check whether pressure gives a variation in size and opacity. Note your findings in the survey:


Thanks for helping to test these important new features.

August 25, 2017

Announce: Entangle “Bottom“ release 0.7.2 – an app for tethered camera control & capture

I am pleased to announce a new release 0.7.2 of Entangle is available for download from the usual location:

This is mostly a bug fix release, but there was a little internal refactoring work to prepare for future support of timelapse / animation video display and possible webcam support.

  • Requires Gtk >= 3.10.0
  • Fix some introspection annotations
  • Use GdkSeat APIs if available
  • Use GtkOverlay and GtkRevealer in preference to custom widgets
  • Refactoring to prepare to support display of video files
  • Draw symbolic icons for video/image files while waiting for thumbnails to load
  • Ensure session highlight has a min 1 pixel visible border
  • Ensure session browser scrolls fully to right
  • Check for Adwaita icon theme which now includes symbolic icons
  • Remove left over check for DBus GLib
  • Remove use of deprecated GDK monitor functions
  • Remove use of deprecated GTK API for loading URIs
  • Fix handling of motion-notify event that broke client side window dragging
  • Fix warning when setting size of settings viewport
  • Update bug reporting address
  • Turn off over-zealous compiler warning about loop optimizations
  • Add ability to enter IP address of network camera
  • Fix URI pattern used to locate gphoto gvfs mounts
  • Add example plugin for bracketing photos of a total eclipse

Krita 3.2.1 Released

Krita 3.2.1 is a bug fix release. The following issues were fixed:

  • Crash on startup if only OpenGL 2.1 is found: if you had to disable opengl for 3.2.0, you can try to enable it again
  • A crash when changing layer types in the gmic-qt plugin
  • A bug where gmic-qt could crash on odd-sized images
  • A regression where using the text tool would break the brush tool
  • The option to use the native platform’s file dialogs was restored
  • A bug where selecting the line tool would disable the flow slider
  • Some issues with the LUT docker were fixed



Note for Windows users: if you encounter crashes, please follow these instructions to use the debug symbols so we can figure out where Krita crashes.


(If, for some reason, Firefox thinks it needs to load this as text: to download, right-click on the link.)

When it is updated, you can also use the Krita Lime PPA to install Krita 3.2.1 on Ubuntu and derivatives.


Note: the gmic-qt and pdf plugins are not available on OSX.

Source code


For all downloads:


The Linux appimage and the source tarball are signed. You can retrieve the public key over https here: . The signatures are here.

Support Krita

Krita is a free and open source project. Please consider supporting the project with donations or by buying training videos or the artbook! With your support, we can keep the core team working on Krita full-time.

Google Summer of Code: Help Alexey Kapustin by Testing His Work!

The Google Summer of Code is nearly at an end. One of our students, Alexey Kapustin, has made a Windows build of his work. Alexey has created a version of Krita that helps us figure out how people actually use Krita by aggregating anonymous data on how often Krita is started, how long it’s used, which tools are used, which options and filters are most popular and so on.

Of course, this is still an experimental version: if it becomes part of the regular release it will be opt-in, not opt-out.  Within the KDE community we have created a policy for this kind of thing: .

If you want to help Alexey and are a Windows user, please download his special version of Krita 4.0 pre-alpha and play with it. There’s still a lot of work to do, of course, but first we need testers!

August 24, 2017

Krita’s Updated Vision

In 2010, during a developer sprint in Deventer, the Krita team sat down together with Peter Sikking to hammer out a vision statement for the project. Our old goal, be KDE’s Gimp/Photoshop, didn’t reflect what we really wanted to do. Here are some documents describing the creation of Krita’s vision:

Creating the vision took a lot of good, hard work, and this was the result (you need to read this as three paragraphs, giving answers to “what is it”, “for whom is it” and “what’s the value”):

Krita is a KDE program for sketching and painting, offering an end–to–end solution for creating digital painting files from scratch by masters.

Fields of painting that Krita explicitly supports are concept art, creation of comics and textures for rendering.

Modeled on existing real-world painting materials and workflows, Krita supports creative working by getting out of the way and with a snappy response.

Seven years later, this needed updating. We’ve added new fields that Krita supports, such as animation, and dropped the modeling on real-world painting materials and workflows, which never really materialized: as soon as we sat down with real-world artists, we learned that they couldn’t care less; they cared about being productive. So, after discussion on the mailing list and during our weekly meetings, we modified the vision document:

Krita is a free and open source cross-platform application that offers an end-to-end solution for creating digital art files from scratch. Krita is optimized for frequent, prolonged and focused use.

Explicitly supported fields of painting are illustrations, concept art, matte painting, textures, comics and animations.

Developed together with users, Krita is an application that supports their actual needs and workflow. Krita supports open standards and interoperates with other applications.

Let’s go through the changes.

We now mention “free and open source” instead of KDE because with the expansion of Krita on Windows and OSX, we now have many users who do not know that KDE stands for Free Software that respects your privacy and the way you want to work. We considered “Free Software” instead, but this is really a moment where we need to make clear that “free software” is not “software for free”.

We still mention “files” explicitly; we’ve never really been interested in what you do with those files, but, for instance, printing from Krita just doesn’t have any priority for us. Krita is for creating your artwork, not for publishing it.

We replaced the “for masters” with “frequent, prolonged and focused use”. The meaning is the same: to get the most out of Krita you have to really use it. Krita is not for casually adding scribbles to a screenshot. But the “for masters” often made people wonder whether Krita could be used by beginning artists. The answer is of course “yes” — but you’ll have to master an application with thousands of possibilities.

In the second paragraph, we’ve added animations and matte painting. Animation was introduced for the third time in 2016; it’s clearly something a lot of people love to do. Matte painting gets close to photo manipulation, which isn’t in our vision, but focused on creating a new artwork. We’ve always felt that Krita could be used for that, as well. Note: no 3d, no webpage design, no product design, no wedding albums, no poster or other print design.

Finally, the last paragraph got almost completely rewritten. Gone are real-world materials as an inspiration, and in are our users as inspiration. We won’t let you dictate what Krita can do, or how Krita lets you do stuff; UX design isn’t something that can be created by voting. But we do listen, and for the past few years we’ve let you vote for the features you would find most useful, while still keeping the direction of Krita as a whole in our hands. What we want is to create an application that lets you get stuff done. We removed the “snappy” response, since that’s pretty much a given: we’re not going to try to create an application that’s ponderous or works against you all the way. And we do care about interoperability and standards, and have spent countless hours of work on improving them, so we felt it needed to be said.

August 23, 2017

GIMP 2.9.6 Released

After more than a year of hard work we are excited to release GIMP 2.9.6 featuring many improvements, some new features, translation updates for 23 languages, and 204 bug fixes.

As usual, for a complete list of changes please see NEWS. Here we’d like to focus on the most important changes.


GIMP now has support for experimental multi-threading in GEGL and will try to use as many cores as are available on your computer.

We know GIMP can explode when using more than one core, but we keep it that way so that we get as many bug reports as possible for this officially unstable development version. This is because we really, really want to ship GIMP 2.10 with usable parallel processing.

On the other hand, you can always set the amount of cores to 1 if you couldn’t be bothered to report bugs. For that, please tweak the amount of threads on the System Resources page of the Preferences dialog.

Setting amount of threads in GIMP 2.9.6

GUI, Usability, and Configurability

Benoit Touchette improved mask creation workflow for users who use a ton of masks in their projects. Now GIMP remembers the last type of mask initialization, and you can use key modifiers + mouse click on layer previews to create, apply, or remove masks. There’s a new button in the Layers dockable dialog for that as well.

Easily create new mask with GIMP 2.9.6

To make that feature possible, Michael Natterer introduced saving of last dialogs’ settings across sessions and made these defaults configurable via the new Interface / Dialog Defaults page in the Preferences dialog.

Configurable Dialog Defaults in GIMP 2.9.6

Additionally, the Preferences dialog got a vertical scrollbar where applicable to keep its height more sensible, and settings on individual pages of the dialog can be reset separately now.

The Quit dialog got a few updates: it now exits automatically when all the images in the list have been saved, and there is a Save As button for every opened image (clicking an image in the list will raise it for easy checks).

Configurable Fill With option in GIMP 2.9.6

Yet another new feature is an option (on the screenshot above) to choose fill color or pattern for empty spaces after resizing the canvas.

Better Hi-DPI Support

While most changes for better Hi-DPI displays support are planned for v3.0, when GIMP is expected to be based on either GTK+3 or GTK+4, we were able to remove at least some of the friction by introducing icon sizes at different resolutions and a switch for icon sizes on the Icon Theme page of the Preferences dialog.

Configurable icon size in GIMP 2.9.6

On-canvas Interaction Changes

Michael Natterer did huge under-the-hood work that is likely to affect user interaction with GIMP bigly. Simply put, he moved a lot of on-canvas code from tools like Rectangle Select, Measure and Path into reusable code.

The effect of that is multifold:

  • New tools can reuse on-canvas elements of other tools (adding shape drawing tools should be easier now, although we are not planning that for 2.10, unless someone sends a clean patch).
  • GEGL-based filters can be interacted with directly on the canvas (Spiral and Supernova so far, as test cases).

So far one still needs to write C code to make a GEGL-based filter use on-canvas interaction. We expect to spend some time figuring out a way to simplify this, possibly using the GUM language (see below).

Layers, Linear and Perceptual Workflows

Since we want to make workflows in linear color spaces more prominent in GIMP, it was time to update the blend modes code. You can now switch between two sets of layer modes: legacy (perceptual) and default (linear). The user interface for switching was a quick design, we’d like to come up with something better, so we are interested in your input.

Moreover, we made both compositing of layers and blending color space configurable, should you have the need to use that for advanced image manipulation.

We also added a new Colors -> Linear Invert command to provide radiometrically correct color inversion. And the histogram dialog now features a toggle between gamma and linear modes—again, it’s a design we’d like to improve.
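As a sketch of why Linear Invert matters: a plain invert flips the gamma-encoded sRGB value, while a radiometrically correct invert decodes to linear light first. This illustration uses the standard sRGB transfer curves and is not GIMP's actual code:

```python
# Illustration: "perceptual" invert vs radiometrically correct linear invert.
def srgb_to_linear(c):
    # standard sRGB decoding (IEC 61966-2-1)
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def linear_to_srgb(c):
    # standard sRGB encoding
    return 12.92 * c if c <= 0.0031308 else 1.055 * c ** (1 / 2.4) - 0.055

def perceptual_invert(c):
    return 1.0 - c

def linear_invert(c):
    # decode, invert in linear light, re-encode
    return linear_to_srgb(1.0 - srgb_to_linear(c))

# mid-grey sRGB 0.5: the two inversions disagree noticeably
a = perceptual_invert(0.5)   # stays 0.5
b = linear_invert(0.5)       # considerably brighter
```

Both operations are involutions (applying them twice returns the original value), but they land on very different pixels.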

Thanks to Øyvind Kolås and his Patreon supporters, GIMP now also has a simple ‘blendfun’ framework that greatly simplifies implementing new color modes. Ell made use of that by adding Linear Burn, Vivid Light, Linear Light, Pin Light, Hard Mix, Exclusion, Merge, Split, and Luminance (RGB) blending modes (most of them now also supported in the PSD plug-in).
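For reference, several of these modes are simple per-channel formulas. A sketch of two of them, using the commonly published definitions clamped to [0, 1] (not the GEGL code itself):

```python
# Commonly used per-channel definitions of two of the new blend modes.
def clamp(x):
    return min(1.0, max(0.0, x))

def linear_burn(backdrop, src):
    # darkens: add the channels and subtract 1
    return clamp(backdrop + src - 1.0)

def linear_light(backdrop, src):
    # linear dodge for src above 0.5, linear burn below
    return clamp(backdrop + 2.0 * src - 1.0)
```

A framework like 'blendfun' reduces adding such a mode to supplying exactly this kind of small per-channel function.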

Another prominent change is the introduction of the Pass Through mode for layer groups. When this mode is used instead of any other one, GIMP mixes layers inside that group directly to the layers below, skipping creation of the group projection. The feature was implemented by Ell. The screenshot below features a user-submitted PSD file that has a TEXTURES layer group in the Pass Through mode, as opened in GIMP 2.9.4 (left) and GIMP 2.9.6 (right).

Pass Through mode vs no Pass Through mode
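A toy numeric model of the difference, with a single multiply layer at value 0.5 over a 0.8 backdrop (deliberately simplified; group opacity and alpha are ignored):

```python
# Toy model: isolated group vs pass-through, with one "multiply" layer.
def multiply(backdrop, layer):
    return backdrop * layer

backdrop = 0.8
layer = 0.5

# Pass-through: the layer's multiply acts on the backdrop directly.
pass_through = multiply(backdrop, layer)

# Isolated group: the group composites onto an empty projection first, so
# multiply has nothing to darken; the projection is just the layer itself,
# which is then composited normally over the backdrop.
projection = layer
isolated = projection
```

The pass-through result darkens the backdrop; the isolated group ignores it until the final composite, which is why the same PSD can look different in the two modes.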

Newly added color tags simplify managing large projects with a lot of layers and layer groups. The screenshot below is a real-life PSD file opened in GIMP 2.9.6.

New User Interface Themes

To make more use of that feature, we need someone to step up and implement multiple layers selection. For an initial research, see this wiki page.

For full access to all the new features, we updated the Layer Attributes dialog to provide a single UI for setting a layer’s name, blending mode, opacity, and offset, and for toggling visibility, link status, various locks, and color tags.

Updated Layer Attributes dialog


Under the influence of Elle Stone (and with her code contributions), CIE LCH and CIE LAB color spaces are finding more use in GIMP now.

Color dialogs now have an LCH color selector that, in due time, will most likely replace the outdated HSV selector, for reasons outlined by Elle in this article. The LCH selector also supports gamut checking.

A new Hue-Chroma filter in the Colors menu works much like Hue-Saturation, but operates in CIE LCH color space. Moreover, the Fuzzy Select and the Bucket Fill tool now can select colors by CIE L, C, and H.

Finally, both the Color Picker and the Sample Points dialog now display pixel values in CIE LAB and CIE LCH.
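For reference, LCH is just CIE LAB with the a*/b* plane expressed in polar coordinates; a minimal conversion sketch (an illustration, not GIMP code):

```python
import math

# CIE LAB -> CIE LCH: keep L, turn (a, b) into chroma plus a hue angle.
def lab_to_lch(L, a, b):
    C = math.hypot(a, b)                          # chroma: distance from the neutral axis
    H = math.degrees(math.atan2(b, a)) % 360.0    # hue angle in degrees
    return L, C, H
```

So a color at (L=50, a=0, b=10) sits at hue angle 90° with chroma 10, which is why LCH reads more intuitively than raw a*/b* values in the Color Picker.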

Sample points in LCH and LAB, GIMP 2.9.6


The new Handle Transform tool, contributed by Johannes Matschke in 2015, has finally been cleaned up by Michael Natterer and is available by default. It’s a little tricky to get used to, but we hear reports that once you get the hang of it, you love it.

Thanks to Ell, the Warp Transform tool is now a lot faster, partly thanks to a new switch that toggles the high-quality preview, which isn’t always necessary.

Transformation tools no longer display a grid by default, and during an interactive transformation the original layer is now hidden. The latter greatly simplifies transforming an upper layer in relation to a lower one; before, the original layer used to block the view.

The Free Select tool now waits for Enter to be pressed to confirm a selection, which lets you tweak the positions of a polygonal selection.


An important new feature that is somewhat easy to overlook is the ability to paint on transparent layers with modes other than normal.

Thanks to shark0r, the Smudge tool now has a Flow control that allows mixing in both constant and gradient color while smudging. There’s another new option to never decrease alpha of existing pixels while smudging in the tools options now as well. For more on this, please read this forum thread.
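A toy model of how a flow control can mix constant color into a smudge. This is illustrative only; the names and formula are mine, not Krita's brush engine:

```python
# Toy smudge dab with flow: 'rate' drags picked-up paint across the canvas,
# and 'flow' mixes a constant brush color in on top.
def lerp(a, b, t):
    return (1 - t) * a + t * b

def smudge_dab(canvas, picked_up, brush_color, rate, flow):
    smudged = lerp(canvas, picked_up, rate)   # classic smudge: drag paint
    return lerp(smudged, brush_color, flow)   # flow blends in constant color

# flow = 0 degenerates to plain smudging; flow = 1 paints pure brush color
```

With a gradient instead of a constant brush_color, the same structure gives the gradient-mixing behavior mentioned above.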

Canvas rotation has been improved: it got snappier in certain cases, and rulers, scrollbars, as well as the Navigation dialog follow the rotation now.

Alexia introduced some improvements to the brush engine. For bitmap brushes, GIMP now caches hardness and disables dynamic change of hardness to improve painting performance. Bitmap brushes also don’t get clipped anymore, when hardness is less than 100. Plus there’s a specialized convolution algorithm for the hardness blur to make it faster now.

Processing Raw Images

Since 2.9.4, GIMP is capable of opening raw (digital camera) images via darktable, and the plan was to open it up to more plug-in developers, because nothing sparks a thoughtful, civil conversation like a raw processor of choice.

This is now possible: 2.9.6 ships with a RawTherapee plug-in (v5.2 or newer should be installed) and a new file-raw-placeholder plug-in that registers itself for loading all raw formats, but does nothing except returning an error message pointing to darktable and RawTherapee, if neither is installed.

Moreover, you can now choose preferred raw plug-in, when multiple options are available on your computer. For this, open the Preferences dialog and go to the Image Import page, then click on the plug-in you prefer and click OK to confirm your choice. You will need to restart GIMP.

Better PSD Support

The PSD plug-in now supports a wider range of blending modes for layers, at both importing and exporting: Linear Burn, Linear Light, Vivid Light, Pin Light, and Hard Mix blending modes. It also finally supports exporting layer groups and reads/writes the Pass Through mode in those. Additionally, GIMP now imports and exports color tags from/to PSD files.

WebP support

We already shipped GIMP 2.9.2 with initial support for opening and exporting WebP files, however the plug-in was missing a number of essential features. Last year, we replaced it with a pre-existing plug-in initially written by Nathan Osman back in 2011 and maintained through the years. We now ship it by default as part of GIMP.

WebP exporting in GIMP 2.9.6

The new plug-in received additional contributions from Benoit Touchette and Pascal Massimino and supports ICC profiles, metadata loading/exporting, and animation.

Metadata Viewing and Editing

Thanks to Benoit Touchette, GIMP now ships a new metadata viewer that uses Exiv2 to display Exif, XMP, IPTC, and DICOM metadata (the latter is displayed on the XMP tab).

Metadata viewer in GIMP 2.9.6

Moreover, Benoit implemented a much anticipated metadata editor that supports adding and editing XMP, IPTC, DICOM, and GPS/Exif metadata, as well as loading and exporting metadata from/to XMP files.

Metadata editor in GIMP 2.9.6


Thanks to contributions from Thomas Manni and Ell, GIMP now has 9 more GEGL-based filters, including much anticipated Wavelet Decompose, as well as an Extract Component plug-in that simplifies fetching e.g. CMYK’s K channel or LAB’s L* channel from an image.

Another new feature that we expect to develop further is GUM—a simple metadata language that helps automatically build more sensible UIs for GEGL filters. Here’s a quick video:

Resources and Presets

To make GIMP more useful by default, we now ship it with some basic presets for the Crop tool: 2×3, 3×4, 16:10, 16:9, and Square.

Document templates have been updated and now feature popular, contemporary presets for both print and digital media.

What’s Next

We still have a bunch of bugs to fix before we can release 2.10, and we appreciate all the patches, huge and tiny, that contributors send us to that end.

GIMP 2.9.8 is expected to ship with more bug fixes and an updated Blend (Gradient Fill) tool that works completely on canvas, including adding and removing color stops and assigning colors.

August 21, 2017

Parametric IFC files

The title of this article sucks, I know, I couldn't think of anything better... This is a concept I've wanted to play with for a long time, and last month I was finally able to set up a proof of concept. The only missing part was to write an article explaining it, so here it goes. For who is...

August 18, 2017

Shipping PKCS7 signed metadata and firmware

Over the last few days I’ve merged PKCS7 support into fwupd as an optional feature. I’ve done this for a few reasons:

  • Some distributors of fwupd were disabling the GPG code as it’s GPLv3, and I didn’t feel comfortable saying “just use no signatures”.
  • Trusted vendors want to ship testing versions of firmware directly to users without first uploading to the LVFS.
  • Some firmware is inherently internal use only and needs to be signed using existing cryptographic hardware.
  • The gpgme code scares me.

Did you know GPGME is a library based around screen scraping the output of the gpg2 binary? When you perform an action using the libgpgme APIs you’re literally injecting a string into a pipe and waiting for it to return. You can’t even use libgcrypt (the thing that gpg2 uses) directly as it’s way too low level and doesn’t have any sane abstractions or helpers to read or write packaged data. I don’t want to learn LISP S-Expressions (yes, really) and manually deal with packing data just to do vanilla X509 crypto.

Although the LVFS instance only signs files and metadata with GPG at the moment, I’ve added the missing bits into python-gnutls so it could become possible in the future. If this is accepted then I think it would be fine to support both GPG and PKCS7 on the server.

One of the temptations for X509 signing would be to get a certificate from an existing CA and then sign the firmware with that. From my point of view that would be bad, as it would cause any firmware signed by any certificate in my system trust store to be marked as valid, when really all I want to do is check for a specific (or a few) certificates that I know are going to be providing certified working firmware. Although I could achieve this to some degree with certificate pinning, it’s not so easy if there is a hierarchical trust relationship or anything more complicated than a simple 1:1 relationship.
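
At its simplest, pinning is just comparing a digest of the DER-encoded certificate against a fixed allowlist. A minimal Python sketch (not fwupd’s actual code; the certificate bytes here are a stand-in) shows both the appeal and the rigidity:

```python
import hashlib

def is_pinned(cert_der: bytes, pinned_sha256: set) -> bool:
    """Accept a certificate only if its exact SHA-256 digest is allowlisted."""
    return hashlib.sha256(cert_der).hexdigest() in pinned_sha256

# Stand-in blob instead of a real DER certificate:
trusted_blob = b"pretend this is the LVFS signing certificate in DER form"
allowlist = {hashlib.sha256(trusted_blob).hexdigest()}

assert is_pinned(trusted_blob, allowlist)
assert not is_pinned(b"some other certificate", allowlist)
```

A pin matches one exact certificate, so it cannot express “anything signed by this CA”, which is why a small dedicated CA plus detached signatures is the more flexible design.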

So that this is possible, I’ve created an LVFS CA certificate, and also a server certificate for the specific instance I’m running on OpenShift. I’ve signed the instance certificate with the CA certificate and am creating detached signatures with an embedded (signed-by-the-CA) server certificate. This seems to work well, and means we can issue other certificates (or CRLs) if the server ever moves or the trust is compromised in some way.

So, tl;dr: (should have been at the top of this page…) if you see a /etc/pki/fwupd/LVFS-CA.pem appear on your system in the next release you can relax. Comments, especially from crypto experts, are welcome. Thanks!

August 17, 2017

Krita 3.2.0 Released

Later than planned, here’s Krita 3.2.0! With the new G’Mic-qt plugin integration, the smart patch tool, finger painting on touch screens, new brush presets and a lot of bug fixes.

Read the full release notes for more information! Here’s GDQuest’s video introducing 3.2.0:

Note: the gmic-qt plugin is not available on OSX. Krita now ships with a pre-built gmic-qt plugin on Windows and the Linux AppImage. If you have tested the beta or release candidate builds, you might need to reset your configuration.

Changes since the last release candidate:

  • Don’t reset the LUT docker when moving the Krita window between monitors
  • Correctly initialize the exposure display filter in the LUT docker
  • Add the missing pan tool action
  • Improve the “Normal” blending mode performance by 30% (first patch for Krita by Daria Scherbatyuk!)
  • Fix a crash when creating a second view on an image
  • Fix a possible crash when creating a second window
  • Improve finding the gmic-qt plugin: Krita now first looks whether there is one available in the same place as the Krita executable
  • Fix scroll wheel behaviour if Krita is built with Qt 5.7.1 or later
  • Fix panning in gmic-qt when applying gmic-qt to a non-RGBA image
  • Scale channel values correctly when using a non-RGBA image with gmic-qt
  • Fix the default setting for allowing multiple krita instances



    Note for Windows users: if you encounter crashes, please follow these instructions to use the debug symbols so we can figure out where Krita crashes.


    (If, for some reason, Firefox thinks it needs to load this as text: to download, right-click on the link.)

    When it is updated, you can also use the Krita Lime PPA to install Krita 3.2.0 on Ubuntu and derivatives.


    Note: the gmic-qt and pdf plugins are not available on OSX.

    Source code


    For all downloads:


    The Linux appimage and the source tarball are signed. You can retrieve the public key over https here:
    . The signatures are here.

    Support Krita

    Krita is a free and open source project. Please consider supporting the project with donations or by buying training videos or the artbook! With your support, we can keep the core team working on Krita full-time.

August 16, 2017

Thank you all!

When we went public with our troubles with the Dutch tax office two weeks ago, the response was overwhelming. The little progress bar on is still counting, and we’re currently at 37,085 euros and 857 donators. And that excludes the people who sent money to the bank directly. It does include Private Internet Access’ sponsorship. Thanks to all of you! So many people have supported us, we cannot even manage to send out enough postcards.

So, even though we’re going to get another accountant’s bill of about 4500 euros, we’ve still got quite a surplus! As of this moment, we have €29,657.44 in our savings account!

That means that we don’t need to do a fund raiser in September. Like we said, we’ve still got some features to finish. Dmitry and I are currently working on

  • Make Krita save and autosave in the background (done)
  • Improve animation rendering speed (done)
  • Improve Krita’s brush engine multi-core adaptability (under way)
  • Improve the general concurrency in Krita (under way)
  • Add touch functionality back (under way)
  • Implement the new text tool (under way)
  • Lazy brush: plug in a faster algorithm
  • Stacked brushes: was done, but needs to be redone
  • Replace the reference images docker with a reference images tool (under way)
  • Add patterns and filters to the vector support

All of that should be done before the end of the year. After that, we want to spend 2018 working on stability, polish and performance. So much will have changed that from 3.0 to 4.0 is a bigger step than from 2.9 to 3.0, even though that included the port to a new version of Qt! We will be doing new fund raisers in 2018, but we’re still discussing what the best approach would be. Kickstarters with stretch goals are very much feature oriented, and we’ve all decided that it’s time to improve what we have, instead of adding still more features, at least, for a while…

In the meantime, we’re working on the 3.2 release. We wanted to have it released yesterday, but we found a regression, which Dmitry is working hard on fixing right now. So it’ll probably be tomorrow.

August 14, 2017

A Homemade Solar Finder, for the Eclipse

While I was testing various attempts at motorizing my barn-door mount, trying to get it to track the sun, I had to repeatedly find the sun in my telescope.

In the past, I've generally used the shadow of the telescope combined with the shadow of the finderscope. That works, more or less, but it's not ideal: it doesn't work as well with just a telescope with no finder, which includes both of the scopes I'm planning to take to the eclipse; and it requires fairly level ground under the telescope: it doesn't work if there are bushes or benches in the way of the shadow.

For the eclipse, I don't want to waste any time finding the sun: I want everything as efficient as possible. I decided to make a little solar finderscope. One complication, though: since I don't do solar observing very often, I didn't want to use tape, glue or, worse, drill holes to mount it.

So I wanted something that could be pressed against the telescope and held there with straps or rubber bands, coming off again without leaving a mark. A length of an angled metal from my scrap pile seemed like a good size to be able to align itself against a small telescope tube.

[Constructing a solar sight] Then I needed front and rear sights. For the front sight, I wanted a little circle that could project a bulls-eye shadow onto a paper card attached to the rear sight. I looked at the hardware store for small eye-bolts, but no dice. Apparently they don't come that small. I settled for the second-smallest size of screw eye.

The screw eye, alas, is meant to screw into wood, not metal. So I cut a short strip of wood a reasonable size to nestle into the inside of the angle-iron. (That ripsaw Dave bought last year sure does come in handy sometimes.) I drilled some appropriately sized holes and fastened screw eyes on both ends, adding a couple of rubber grommets as spacers because the screw eyes were a little too long and I didn't want the pointy ends of the screws getting near my telescope tube.

I added some masking tape on the sides of the angle iron so it wouldn't rub off the paint on the telescope tube, then bolted a piece of cardboard cut from an old business card to the rear screw eye.

[Homemade solar sight] Voila! A rubber-band-attached solar sight that took about an hour to make. Notice how the shadow of the front sight exactly fits around the rear sight: you line up the shadow with the rear sight to point the scope. It seems to work pretty well, and it should be adaptable to any telescope I use.

I used a wing nut to attach the rear cardboard: that makes it easy to replace it or remove it. With the cardboard removed, the sight might even work for night-time astronomy viewing. That is, it does work, as long as there's enough ambient light to see the rings. Hmm... maybe I should paint the rings with glow-in-the-dark paint.

August 11, 2017

A Barn-Door Mount for the Eclipse

[Curved rod barn-door mount] I've been meaning forever to try making a "barn door" tracking mount. Used mainly for long-exposure wide-field astrophotography, the barn door mount, invented in 1975, is basically two pieces of wood with a hinge. The bottom board mounts on a tripod and is pointed toward the North Star; "opening" the hinge causes the top board to follow the motion of the sky, like an equatorial telescope mount. A threaded rod and a nut control the angle of the "door", and you turn the nut manually every so often. Of course, you can also drive it with a motor.

We're off to view the eclipse in a couple of weeks. Since it's my first total eclipse, my plan is to de-emphasize photography: especially during totality, I want to experience the eclipse, not miss it because my eyes are glued to cameras and timers and other equipment. But I still want to take photos every so often. Constantly adjusting a tripod to keep the sun in frame is another hassle that might keep my attention away from the eclipse. But real equatorial mounts are heavy and a time consuming to set up; since I don't know how crowded the area will be, I wasn't planning to take one. Maybe a barn door would solve that problem.

Perhaps more useful, it would mean that my sun photos would all be rotated approximately the right amount, in case I wanted to make an animation. I've taken photos of lunar and partial solar eclipses, but stringing them together into an animation turned out to be too much hassle because of the need to rotate and position each image.

I've known about barn-door mounts since I was a kid, and I knew the basic theory, but I'd never paid much attention to the details. When I searched the web, it sounded complicated -- it turned out there are many types that require completely different construction techniques.

The best place to start (I found out after wasting a lot of time on other sites) is the Wikipedia article on "Barn door tracker", which gives a wonderfully clear overview, with photos, of the various types. I had originally been planning a simple tangent or isosceles type; but when I read construction articles, it seemed that those seemingly simple types might not be so simple to build: the angle between the threaded rod and the boards is always changing, so you need some kind of a pivot. Designing the pivot looked tricky. Meanwhile, the pages I found on curved-rod mounts all insisted that bending the rod was easy, no trouble at all. I decided to try a curved-rod mount first.

The canonical reference is a 2015 article by Gary Seronik: A Tracking Platform for Astrophotography. But I found three other good construction guides: Optical Ed's "Making a Curve Bolt Barn Door", a Cloudy Nights discussion thread "Motorized Barn Door Mount Kit", and Massapoag Pond Photography's "Barn Door Tracker". I'm not going to reprise all their construction details, so refer to those sites if you try making your own mount.

[Barn-door mount, showing piano hinge] The crucial parts are a "piano hinge", a long hinge that eliminates the need to line up two or more hinges, and the threaded rod. Buying a piano hinge in the right size proved impossible locally, but the folks at Metzger's assured me that piano hinges can be cut, so I bought one longer than I needed and cut it to size. I used a 1/4-20 rod, which meant (per the discussions in the Cloudy Nights discussion linked above) that a 11.43-inch radius from the hinge to the holes the rod passes through would call for the nut to turn at a nice round number of 1 RPM.
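
The 11.43-inch figure falls straight out of the thread pitch and the sidereal rate; a few lines of Python (my own back-of-the-envelope check, not code from any of the linked guides) reproduce it:

```python
import math

THREADS_PER_INCH = 20               # a 1/4-20 rod advances 1/20" per turn
SIDEREAL_DAY_MIN = 23 * 60 + 56.07  # one sidereal day, ~1436.07 minutes

# At 1 RPM the nut climbs the rod at 1/20 inch per minute; that arc length
# must match radius * angular rate of the sky (2*pi radians per sidereal day).
advance_per_min = 1.0 / THREADS_PER_INCH
sky_rate = 2 * math.pi / SIDEREAL_DAY_MIN  # radians per minute

radius = advance_per_min / sky_rate
print(f"hinge-to-rod radius: {radius:.2f} inches")  # ~11.43
```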

I was suspicious of the whole "it's easy to bend the threaded rod in an 11.43-inch circle" theory, but it turned out to be true. Draw the circle you want on a sheet of newspaper, put on some heavy gloves and start bending, frequently comparing your rod to the circle you drew. You can fine-tune the curvature later.

I cut my boards, attached the hinge, measured about 11.4" and drilled a hole for the threaded rod. The hole needed to be a bit bigger than 5/8" to let the curved rod pass through without rubbing. Attach the curved rod to the top wood piece with a couple of nuts and some washers, and then you can fine-tune the rod's curvature, opening and closing the hinge and re-bending the rod a little in any place it rubs.

A 5/8" captive nut on the top piece lets you attach a tripod head which will hold your camera or telescope. A 1/4" captive nut on the bottom piece serves to attach the mount to a tripod -- you need a 1/4", not 3/8": the rig needs to mount on a tripod head, not just the legs, so you can align the hinge to the North Star. (Of course, you could build a wedge or your own set of legs, if you prefer.) The 3/4" plywood I was using turned out to be thicker than the captive nuts, so I had to sand the wood thinner in both places. Maybe using half-inch plywood would have been better.

[Wing nut on barn-door mount] The final piece is the knob/nut you'll turn to make the mount track. I couldn't find a good 1/4" knob for under $15. A lot of people make a wood circle and mount the nut in the center, or use a gear so a motor can drive the mount. I looked around at things like jam-jar lids and the pile of metal gears and sprinkler handles in my welding junkpile, but I didn't see anything that looked quite right, so I decided to try a wing nut just for testing, and worry about the knob later. Turns out a wing nut works wonderfully; there's no particular need for anything else if you're driving your barn-door manually.

Testing time! I can't see Polaris from my deck, and I was too lazy to set up anywhere else, so I used a protractor to set the hinge angle to roughly 36° (my latitude), then pointed it approximately north. I screwed my Pro-Optic 90mm Maksutov (the scope I plan to use for my eclipse photos) onto the ball head and pointed it at the moon as soon as it rose. With a low power eyepiece (20x), turning the wing nut kept the moon more or less centered in the field for the next half-hour, until clouds covered the moon and rain began threatening. I didn't keep track of how many turns I was making, since I knew the weather wasn't going to allow a long session, and right now I'm not targeting long-exposure photography, just an easy way of keeping an object in view.

A good initial test! My web searches, and the discovery of all those different types of barn-door mounts and pivots and flex couplings and other scary terms, had seemed initially daunting. But in the end, building a barn-door mount was just as easy as people say it is, and I finished it in a day.

And what about a motor? I added one a few days later, with a stepper and an Arduino. But that's a separate article.

August 10, 2017

Announcement: Clojure.spec implementation for Node.js

A few months ago I started implementing Speculaas, a Clojure.spec implementation for Node.js. This is the announcement of a series of blog posts on this site in which I will show how to use this npm package for specification-based testing of your code.

Have fun,


Forward only binary patching

A couple of weeks ago I added some new functionality to dfu-tool, which is shipped in fwupd. The dfu-tool utility (via libdfu) now has the ability to forward-patch binary files, somewhat like bsdiff does. To do this it compares the old firmware with the new firmware, finding blocks of data that are different and storing the new content and the offset in a .dfup file. The reason for storing the new content rather than a binary diff (like bsdiff) is that you can remove non-free and non-redistributable code without actually including it in the diff file (which you might be doing if you’re neutering/removing the Intel Management Engine). This does make reversing the binary patch process impossible, but this isn’t a huge problem if we keep the old file around for downgrades.
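
The create/apply cycle is easy to model in a few lines of Python; this is a toy sketch of the idea (one offset plus the new content per differing run), not the real .dfup on-disk format:

```python
def create_patch(old: bytes, new: bytes):
    """Return (new_size, chunks), where each chunk is (offset, new_content)."""
    # View the old image at the new size; grown regions compare as 0x00.
    base = old[:len(new)].ljust(len(new), b"\x00")
    chunks, i = [], 0
    while i < len(new):
        if base[i] != new[i]:
            start = i
            while i < len(new) and base[i] != new[i]:
                i += 1
            chunks.append((start, new[start:i]))
        else:
            i += 1
    return len(new), chunks

def apply_patch(old: bytes, new_size: int, chunks) -> bytes:
    # Resize first (pad with 0x00 or truncate), then overlay each chunk.
    out = bytearray(old[:new_size].ljust(new_size, b"\x00"))
    for offset, data in chunks:
        out[offset:offset + len(data)] = data
    return bytes(out)
```

Only the differing runs are stored, so content removed from the old image never appears in the patch file, which is exactly why the process cannot be reversed.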

$ sha1sum ~/firmware-releases/colorhug-1.1.6.bin
955386767a0108faf104f74985ccbefcd2f6050c  ~/firmware-releases/colorhug-1.1.6.bin

$ sha1sum ~/firmware-releases/colorhug-1.1.7.bin
9b7dbb24dbcae85fbbf045e7ff401fb3f57ddf31  ~/firmware-releases/colorhug-1.1.7.bin

$ dfu-tool patch-create ~/firmware-releases/colorhug-1.1.6.bin \
    ~/firmware-releases/colorhug-1.1.7.bin colorhug-1_1_6-to-1_1_7.dfup
Dfu-DEBUG: binary growing from: 19200 to 19712
Dfu-DEBUG: add chunk @0x0000 (len 3)
Dfu-DEBUG: add chunk @0x0058 (len 2)
Dfu-DEBUG: add chunk @0x023a (len 19142)
Dfu-DEBUG: blob size is 19231

$ dfu-tool patch-dump colorhug-1_1_6-to-1_1_7.dfup
checksum-old: 955386767a0108faf104f74985ccbefcd2f6050c
checksum-new: 9b7dbb24dbcae85fbbf045e7ff401fb3f57ddf31
chunk #00     0x0000, length 3
chunk #01     0x0058, length 2
chunk #02     0x023a, length 19142

$ dfu-tool patch-apply ~/firmware-releases/colorhug-1.1.6.bin \
    colorhug-1_1_6-to-1_1_7.dfup new.bin -v
Dfu-DEBUG: binary growing from: 19200 to 19712
Dfu-DEBUG: applying chunk 1/3 @0x0000 (length 3)
Dfu-DEBUG: applying chunk 2/3 @0x0058 (length 2)
Dfu-DEBUG: applying chunk 3/3 @0x023a (length 19142)

$ sha1sum new.bin
9b7dbb24dbcae85fbbf045e7ff401fb3f57ddf31  new.bin

Perhaps a bad example here, the compiler changed between 1.1.6 and 1.1.7 so lots of internal offsets changed and there are no partitions inside the image; but you get the idea. For some system firmware where only a BIOS default was changed this can reduce the size of the download from megabytes to tens of bytes; the largest thing in the .cab then becomes the XML metadata (which also compresses rather well). Of course in this case you can also use bsdiff if it’s already installed — I’ve not yet decided if it makes sense for fwupd to runtime require tools like bspatch as these could be needed by the firmware builder bubblewrap functionality, or if it could just be included as statically linked binaries in the .cab file. Comments welcome.

August 08, 2017

Building local firmware in fwupd

Most of the time when you’re distributing firmware you have permission from the OEM or ODM to redistribute the non-free parts of the system firmware, e.g. Dell can re-distribute the proprietary Intel Management Engine as part of the firmware capsule that gets flashed onto the hardware. In some cases that’s not possible, for example for smaller vendors or people selling OpenHardware. In this case I’m trying to help Purism distribute firmware updates for their hardware, and they’re only able to redistribute the Free Software coreboot part of the firmware. For reasons (IFD, FMAP and CBFS…) you need to actually build the target firmware on the system you’re deploying onto, where build means executing random low-level tools to push random blobs of specific sizes into specific unnecessarily complex partition formats rather than actually compiling .c into executable code. The current solution is a manually updated interactive bash script, which isn’t awesome from a user-experience or security point of view. The other things vendors have asked for in the past are a way to “dd” a few bytes of randomness into the target image at a specific offset, and a way to copy the old network MAC address into the new firmware. I figured fwupd should probably handle this somewhat better than a random bash script running as root on your live system.

I’ve created a branch that allows you to ship an archive (I’m suggesting using the simple .tar format, as the .cab file will be compressed already) within the .cab file as the main “release”. Within the .tar archive will be a file and all the utilities or scripts needed to run the build operation, statically linked if required. At firmware deploy time fwupd will explode the tar file into a newly-created temp directory, create a bubblewrap container which has no network and limited file-system access and then run the script. Once complete, fwupd will copy out just the firmware.bin file and then destroy the bubblewrap container and the temporary directory. I’ve not yet worked out how to inject some things into the jail, for instance we sometimes need the old system firmware blob if we’re applying a bsdiff rather than just replacing all the data. I’m tempted to just mount /var/lib/fwupd/builder/ into the jail as read-only and then get the fwupd plugin to create the required data there at startup before the jail gets created.

Not awesome, but more reliable and secure than just running random bash files as root. Comments welcome.

August 07, 2017

Krita 3.2.0: We Have a Release Candidate!

After last week’s rollercoaster ride (if you haven’t seen it, check the news, then the update!), it was hard to get back into making releases and writing code. Yet, here is the release candidate for Krita 3.2.0. Fixes since the second beta include:

  • Some cleanups when handling OpenGL
  • Show a clearer error when loading the wintab32.dll file fails on Windows
  • Fix a regression where bezier tools couldn’t close the curve and couldn’t create a second curve
  • Fixes for working with multiple windows in subwindow mode where one of the documents is set to “stays on top”
  • Fix resetting the Level of Detail cache when changing the visibility of a layer: this fixes an issue where after changing the visibility of a layer, the color picker would pick from an older version of the layer.
  • Save the last used folder in the Reference Images Docker
  • Don’t create nested autosave documents.
  • Add recognizing uc-logic tablets on Linux
  • Improve the stabilizer
  • Improve the communication between Krita and the gmic-qt plugin
  • Fix progress reporting after the gmic-qt filter returns
  • Fix loading a custom brush preset that uses the text brush
  • Update the new brush preset set



Note for Windows users: if you encounter crashes, please follow these instructions to use the debug symbols so we can figure out where Krita crashes.


(If, for some reason, Firefox thinks it needs to load this as text: to download, right-click on the link.)

When it is updated, you can also use the Krita Lime PPA to install Krita 3.2.0-rc.1 on Ubuntu and derivatives.


Source code


For all downloads:


The Linux appimage and the source tarball are signed. You can retrieve the public key over https here:
. The signatures are here.

Support Krita

Krita is a free and open source project. Please consider supporting the project with donations or by buying training videos or the artbook! With your support, we can keep the core team working on Krita full-time.

August 05, 2017

Keeping Git Branches in Sync

I do most of my coding on my home machine. But when I travel (or sit in boring meetings), sometimes I do a little hacking on my laptop. Most of my code is hosted in GitHub repos, so when I travel, I like to update all the repos on the laptop to make sure I have what I need even when I'm offline.

That works great as long as I don't make branches. I have a variable $myrepos that lists all the github repositories where I want to contribute, and with a little shell alias it's easy enough to update them all:

allgit() {
    pushd ~
    foreach repo ($myrepos)
        echo $repo :
        cd ~/src/$repo
        git pull
    end
    popd
}

That works well enough -- as long as you don't use branches.

Git's branch model seems to be that branches are for local development, and aren't meant to be shared, pushed, or synchronized among machines. It's ridiculously difficult in git to do something like, "for all branches on the remote server, make sure I have that branch and it's in sync with the server." When you create branches, they don't push to the server by default, and it's remarkably difficult to figure out which of your branches is actually tracking a branch on the server.

A web search finds plenty of people asking, and most of the Git experts answering say things like "Just check out the branch, then pull." In other words, if you want to work on a branch, you'd better know before you go offline exactly which branches in which repositories might have been created or updated since the last time you worked in that repository on that machine. I guess that works if you only ever work on one project in one repo and only on one or two branches at a time. It certainly doesn't work if you need to update lots of repos on a laptop for the first time in two weeks.

Further web searching does find a few possibilities. For checking whether there are files modified that need to be committed, git status --porcelain -uno works well. For checking whether changes are committed but not pushed, git for-each-ref --format="%(refname:short) %(push:track)" refs/heads | fgrep '[ahead' works ... if you make an alias so you never have to look at it.

Figuring out whether branches are tracking remotes is a lot harder. I found some recommendations like git branch -r | grep -v '\->' | while read remote; do git branch --track "${remote#origin/}" "$remote"; done and for remote in `git branch -r`; do git branch --track ${remote#origin/} $remote; done but neither of them really did what I wanted. I was chasing down the rabbit hole of writing shell loops using variables like

  localbranches=("${(@f)$(git branch | sed 's/..//')}")
  remotebranches=("${(@f)$(git branch -a | grep remotes | grep -v HEAD | grep -v master | sed 's_remotes/origin/__' | sed 's/..//')}")
when I thought, there must be a better way. Maybe using Python bindings?


In Debian, the available packages for Git Python bindings are python-git, python-pygit2, and python-dulwich. Nobody on #python seemed to like any of them, but based on quick attempts with all three, python-git seemed the most straightforward. Confusingly, though Debian calls it python-git, it's called "git-python" in its docs or in web searches, and it's "import git" when you use it.

It's pretty straightforward to use, at least for simple things. You can create a Repo object with

from git import Repo
repo = Repo('.')
and then you can get lists like repo.heads (local branches), repo.refs (local and remote branches and other refs such as tags), etc. Once you have a ref, you can check whether it's tracking a remote branch with ref.tracking_branch(), and make it track one with ref.set_tracking_branch(remoteref). That makes it very easy to get a list of branches showing which ones are tracking a remote branch, something that had proved almost impossible with the git command line.

Nice. But now I wanted more: I wanted to replace those baroque git status --porcelain and git for-each-ref commands I had been using to check whether my repos needed committing or pushing. That proved harder.

Checking for uncommitted files, I decided it would be easiest to stick with the existing git status --porcelain -uno. Which was sort of true. git-python lets you call git commands, for cases where the Python bindings aren't quite up to snuff yet, but it doesn't handle all cases. I could call:

    output = repo.git.status(porcelain=True)
but I never did find a way to pass the -uno; I tried u=False, u=None, and u="no" but none of them worked. But -uno actually isn't that important so I decided to do without it.

I found out later that there's another way to call the git command, using execute, which lets you pass the exact arguments you'd pass on the command line. It didn't work to call for-each-ref the way I'd called repo.git.status (repo.git.for_each_ref isn't defined), but I could call it this way:

    foreachref = repo.git.execute(['git', 'for-each-ref',
                                   '--format="%(refname:short) %(push:track)"',
                                   'refs/heads'])
and then parse the output looking for "[ahead]". That worked, but ... ick. I wanted to figure out how to do that using Python.

It's easy to get a ref (branch) and its corresponding tracking ref (remote branch). ref.log() gives you a list of commits on each of the two branches, ordered from earliest to most recent, the opposite of git log. In the simple case, then, what I needed was to iterate backward over the two commit logs, looking for the most recent SHA that's common to both. The Python builtin reversed was useful here:

    for i, entry in enumerate(reversed(ref.log())):
        for j, upstream_entry in enumerate(reversed(upstream.log())):
            if entry.newhexsha == upstream_entry.newhexsha:
                return i, j

(i, j) are the number of commits on the local branch that the remote hasn't seen, and vice versa. If i is zero, or if there's nothing in ref.log(), then the repo has no new commits and doesn't need pushing.
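
That backward scan is easy to pull out into a self-contained helper; plain lists of SHA strings stand in here for the git-python log entries, under the same linear-history assumption as the snippet above:

```python
def divergence(local_log, remote_log):
    """Given two commit logs ordered earliest-to-latest (as ref.log() is),
    return (ahead, behind): commits unique to each side since the last
    commit they have in common."""
    common = set(local_log) & set(remote_log)
    ahead = 0
    for sha in reversed(local_log):
        if sha in common:
            break
        ahead += 1
    behind = 0
    for sha in reversed(remote_log):
        if sha in common:
            break
        behind += 1
    return ahead, behind
```

If ahead is zero there is nothing to push, and if behind is zero there is nothing to pull; divergence(['a', 'b', 'c'], ['a', 'b', 'x']) gives (1, 1).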

Making branches track a remote

The last thing I needed to do was to make branches track their remotes. Too many times, I've found myself on the laptop, ready to work, and discovered that I didn't have the latest code because I'd been working on a branch on my home machine, and my git pull hadn't pulled the info for the branch because that branch wasn't in the laptop's repo yet. That's what got me started on this whole "update everything" script in the first place.

If you have a ref for the local branch and a ref for the remote branch, you can verify their names are the same; if the local branch has the same name but isn't tracking the remote branch, probably something went wrong with the local repo (like one of my earlier attempts to get branches in sync), and it's an easy fix: ref.set_tracking_branch(remoteref).

But what if the local branch doesn't exist yet? That's the situation I cared about most, when I've been working on a new branch and it's not on the laptop yet, but I'm going to want to work on it while traveling. And that turned out to be difficult, maybe impossible, to do in git-python.

It's easy to create a new local branch: repo.head.create(repo, name). But that branch gets created as a copy of master, and if you try to turn it into a copy of the remote branch, you get conflicts because the branch is ahead of the remote branch you're trying to copy, or vice versa. You really need to create the new branch as a copy of the remote branch it's supposed to be tracking.

If you search the git-python documentation for ref.create, there are references to "For more documentation, please see the Head.create method." Head.create takes a reference argument (the basic ref.create doesn't, though the documentation suggests it should). But how can you call Head.create? I had no luck with attempts like repo.git.Head.create(repo, name, reference=remotebranches[name]).

I finally gave up and went back to calling the command line from git-python.

    repo.git.checkout(remotebranchname, b=name)
I'm not entirely happy with that, but it seems to work.

I'm sure there are all sorts of problems left to solve. But this script does a much better job than any git command I've found of listing the branches in my repositories, checking for modifications that require commits or pushes, and making local branches to mirror new branches on the server. And maybe with time the git-python bindings will improve, and eventually I'll be able to create new tracking branches locally without needing the command line.

The final script, such as it is:

August 03, 2017

The embedded color sensor in the ThinkPad P70

Last week at GUADEC Christian gave me a huge laptop to borrow with the request to “make the color sensor work in Fedora”.

This thing is a beast: the display is 17″ and 4K resolution, two GPUs, two hard-disks and a battery. It did not fit in my laptop bag, only just squeezed in my suitcase, and weighed a metric ton. I was pretty confident I could make the color sensor work, as I previously reverse engineered the Huey and we had existing support for the embedded Huey as found in the W700 ThinkPad. Just like the W700, the sensor is located in the palm-rest, and so the laptop lid needs to be shut (and the display kept on) when showing calibration patches. I told Christian it should be a case of adding an unlock code, another PID to the udev rules and then we should be good. How wrong could I be!

Let's look at what's shipped by default with the laptop. In Microsoft Windows 10, the Pantone application prompts you to recalibrate your display once per week. When you manually run the calibration wizard, it asks you to choose your display temperature and also the gamma value for the curve, defaulting to D65 whitepoint and 2.2 gamma.

It then asks you to shut the lid and uses a combination of flashing the Thinkpad red-dot LED and using sound effects to show you the progress of the calibration. By opening the lid a tiny fraction we can see the pattern is as follows:

  1. Black offset
  2. Red primary
  3. Green primary
  4. Blue primary
  5. Red gamma ramp, 7 steps
  6. Green gamma ramp, 7 steps
  7. Blue gamma ramp, 7 steps

The USB traffic was intercepted for two runs, and dumped to CSV files. These were further post-processed by a python script to filter down and to help understand the protocol used.
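The filtering itself is trivial; a hypothetical sketch of that kind of script (the column names are guesses at a capture tool's CSV layout, not taken from the real dumps):

```python
import csv

# Hypothetical sketch: keep only CSV rows that carry a data payload,
# returning (endpoint, hex-data) pairs for easier eyeballing.
def filter_capture(path):
    rows = []
    with open(path, newline='') as f:
        for row in csv.DictReader(f):
            data = (row.get('Data') or '').strip()
            if data:
                rows.append((row.get('Endpoint', '?'), data))
    return rows
```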

From completely reverse engineering the protocol, we can show that the Pantone X-Rite sensor in the palm-rest of the P70 is nothing more than a brightness sensor with a display-specific primary correction matrix. You can't actually get an RGB or XYZ color out of the sensor; the only useful thing it can do is linearize the VCGT ramps, and with only 7 measurements for each channel I'm surprised it can do anything very useful at all.
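To illustrate what "linearize the VCGT ramps" means with so few samples, here's a toy sketch (my own illustration, not the actual Pantone algorithm): invert the brightness curve measured at a handful of input levels to build a correction LUT.

```python
# Toy illustration (not the real Pantone algorithm): build a 256-entry
# correction LUT by inverting the brightness response measured at a
# handful of input levels (7 per channel, in the P70's case).
def linearize(levels, readings, n=256):
    lo, hi = readings[0], readings[-1]
    norm = [(r - lo) / (hi - lo) for r in readings]   # measured response, 0..1
    lut = []
    for i in range(n):
        target = i / (n - 1)                          # desired linear output
        for k in range(len(norm) - 1):
            if norm[k] <= target <= norm[k + 1]:
                span = norm[k + 1] - norm[k]
                f = (target - norm[k]) / span if span else 0.0
                lut.append(levels[k] + f * (levels[k + 1] - levels[k]))
                break
    return lut
```

A perfectly linear panel yields an identity LUT; any bowing in the 7 readings bends the LUT the opposite way. But nothing in this tells you anything about the primaries.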

It is not clear how the sensor and calibration tool could create an ICC profile without hardcoding the primaries in the sensor EEPROM itself, and that is probably what happens here. Whilst the sensor would be able to linearize a display where the hardware-corrected backlight suddenly becomes non-linear, it is completely unable to return a set of display primaries. Said another way, the sensor can’t tell the difference between a 100% red and a 100% blue patch.

These findings also correlate with the findings from AnandTech who say that calibrating the display with the embedded sensor actually makes the LCD worse when measuring saturation accuracy, whitepoint and grayscale accuracy.

If you’re serious about color calibration, please don’t use the built-in sensor, and if you are buying a top-end Xeon system save a few dollars and don’t bother with the color sensor. For $20 extra Pantone could have used a calibrated XYZ one-piece sensor from MAZeT, which would have given them a true “color sensor” that could have sampled much quicker and with true XYZ-adapted readings.

The irony is, of course, that you can’t even use the HueyCOLOR sensor as an ambient light sensor. As the device is in the palm-rest, you frequently cover it with your hand — and any backlight adjustment would feed back into the sensor, causing the backlight to flash.

If you actually want to make a colord sensor driver for this hardware we’d need to extend the capability bitfield to show it’s only capable of brightness, and also continue parsing the EEPROM to find the spectral sensitivities, but that’s not something that I think is useful to do.

If you want to know about the low-level protocol used and more information about the device, I’ve written some notes which document the protocol used. Disappointing.

August 02, 2017

GCompris at Akademy 2017

I didn’t blog yet about my experience during this year’s Akademy, the annual conference and gathering of the KDE community.

This time it was in Almería, Spain. The organizers did a wonderful job, and everything went perfectly well. The event was well covered locally, with at least three newspaper articles.

(Photo by Guille Fuertes)

I could meet old friends and make new ones, visited a few awesome places, and I think we all had a wonderful time there.

It was also a very productive event, with lots of progress done or started for the different projects.
On my side, I had some very interesting feedback after my talk about GCompris. I was asking for some help on a few things, including deb, flatpak and appimage packaging on linux. For flatpak, Aleix Pol showed me the initial work he already did, and I could help him by adding a missing dependency.
For the appimage, I was very happy to see the next day a message from probono on our irc channel, who saw my slides and started working on the appimage for GCompris :). That was a great surprise and I couldn’t hope for better help for it, as he is basically the man behind the appimage project, and already helped creating the appimage for Krita. And finally for the deb package, we have just been contacted by a Kubuntu packager who is willing to have an up-to-date package in their next release. The community is awesome, thank you all! 😀

(Photos by Paul Brown)

Besides, I could attend several very interesting talks, and had a whole lot of interesting technical and human conversations that helped me learn a lot, or at least I believe so.

So much thanks to the KDE community for always being so cool, and again big thanks to KDE e.V. for supporting this event and my participation to it.

Krita Foundation: Update

When we posted the news about our tax wrangle yesterday, we did expect to make some waves. We didn’t expect the incredible response from all of you! A day later, over 500 awesome people have donated a total of €9562 (at the time of writing, check the fancy progress bar we’ve finally managed to create!). Fourteen people have joined the development fund, too! Thank you all!

But that’s not all, we were stunned when we were approached by the team at Private Internet Access. They wanted to help us out and sponsor Krita with £20,000! Private Internet Access provides worldwide fast, multi-gigabit VPN Tunnel gateways. They already sponsor a great many awesome organizations, and now also Krita!

Of course, this makes our work much easier. Not only do we no longer have to worry about whether we can pay the tax bill, but we can also start sending money to Dmitry again! And that’s why, if you’ve been wondering whether you should still help Krita with a donation (or by getting something from the shop), please don’t hesitate! To recap, our current plans are:

  • Make exporting and rendering animations much faster
  • Improve the performance of some brush engines on multi-core systems
  • Add touch functionality to Krita
  • Continue the implementation of the new text tool
  • Finish the remaining kickstarter features: lazy brush, stacked brushes, reference image tool.
  • Release Krita 3.2 (soon!) and Krita 4.0 (this year)

And then, since we’ve basically reworked all parts of Krita, spend some time working on bugs, polish and, as always, performance.

Boudewijn Rempt, Krita Maintainer

One-time Donation

€1 minimum


Monthly Subscription

€1 minimum

Stichting Krita Foundation
Korte Assenstraat 11 7411JP Deventer, the Netherlands.
IBAN: NL72INGB0007216397

August 01, 2017

FreeCAD Arch development news - July 2017

WTF, July development news in August, I hear you yelling (it's not only me!) But there was some last minute topic I really wanted to complete to include in this report... But now that it is complete, I think it deserves another post on its own, so I'll only mention it here briefly. It will...

Krita Foundation in Trouble

Please check the August 2nd update, too!

Even while we’re working on a new beta for Krita 3.2 and a new development build for 4.0 (with Python, on Windows!), we have to release some bad news as well.

The Krita Foundation is having trouble with the Dutch tax authorities. This is the situation:

In February, we received an audit from the tax inspector. We were quite confident we wouldn’t have any problems because when we setup the Krita Foundation in 2013, we took the advice of a local tax consultant on how to setup the Foundation and its administration. We registered for VAT with the tax authorities and kept our books as instructed by the consultant.

However, the tax inspector found two problems springing from the fact that the Foundation sells training videos and books, so it is not 100% funded by donations. This means that the tax authorities see the Foundation partly as a company, partly as not a company.

  • We claimed back VAT for things bought by the Foundation. But we should only have claimed back VAT in proportion to the percentage of income generated from sales, which is about 15%. (The rest of our income is donations.)
  • The Foundation was created to be able to have Dmitry work full-time on Krita. Because we sell stuff, the tax inspector has determined that we’re a company, and should have paid VAT in the Netherlands over the work Dmitry has been doing in Russia. Even though there is no VAT in Russia on the kind of work Dmitry is doing. But because we’re not a company, we cannot reclaim the VAT.

In other words, because we’re mostly not a company, we should not have claimed back the VAT we paid; but we’re also considered fully a company, so we should have paid VAT in the Netherlands over Dmitry’s work, which we could not have claimed back because the Foundation is mostly not a company. (It didn’t matter that Dmitry owns the copyright on his work, and that the Foundation doesn’t own anything related to Krita except for the trademark…)

The result is a tax bill of 24,000 euros. We have consulted with an accountant, and together we got the bill reduced to 15,006 euros, including fines and interest, but the accountant’s bill came to 4,000 euros.

The discussions with the tax inspector and accountant have taken months to resolve. The stress that caused has not just eaten into our coding productivity, it also meant we had no certainty at all, so we missed our usual May fundraiser. At one point, we were almost certain the Krita Foundation would go broke.

We ended 2016 with about 30,000 euros in the bank, enough to keep us going until June: it has dwindled to € 5.461,63 by now. Fortunately, we did have the help of three extraordinary sponsors who helped us survive through this period. We also have found a sponsor for some extra work on Krita, mainly focused on improving performance on systems with many cores and restoring some touch functionality and touch ui to Krita.

Still, we have not been able to be as productive as we wanted, and some of the cool things we were working on aren’t done yet, and maybe won’t get done in time for Krita 4.0.

Then there is another complication: until the middle of 2016, I had a day job next to my work on Krita, giving me in effect two full-time jobs. I suffered a break-down in the middle of 2016, and had to stop my day job. I lived on my savings until they ran out by the end of 2016, when I started working full-time for the Foundation as well, so our expenses have gone up too.

For the future, we’ve separated the sales of training videos, artbooks and sales on the Windows Store and Steam out to a separate company, so the Krita Foundation is 100% a non-profit. That means that there is no VAT payable in the Netherlands over the work Dmitry does in Russia. We checked the new setup with the accountants, and they have given green light for it.

Now we’ve got the bills, we can start making plans again:

  • As I said in the beginning, we’re currently working on Krita 3.2 and the next pre-alpha development release of 4.0. Our community is healthy, with more and more people chipping in and having fun hacking on Krita, working on the documentation and creating illustrations, comics and animations with Krita.
  • In September, we will run a fundraiser for development in 2018. After we’ve finished the backlog of kickstarter-promised features for 4.0 or 4.1, our focus will be on stability and polish for a year. “Zero bugs!” — that’s going to be the rallying cry for the fundraiser and for 2018!

Though there is no reason to wait until September to make a donation or join the development fund!

Note: in the interests of full transparency, you can find our end-of-year reports for 2013, 2014, 2015 and 2016 here.

Boudewijn Rempt, Krita Maintainer

One-time Donation

€1 minimum


Monthly Subscription

€1 minimum

Stichting Krita Foundation
Korte Assenstraat 11 7411JP Deventer, the Netherlands.
IBAN: NL72INGB0007216397

July 30, 2017

Remapping the Caps Lock key on Raspbian

I've remapped my CapsLock key to be another Ctrl key since time immemorial. (Actually, since the ridiculous IBM PC layout replaced the older keyboards that had Ctrl there already, to the left of the A.)

On normal current Debian distros, that's fairly easy: you can edit /etc/default/keyboard to have XKBOPTIONS="ctrl:nocaps".

You might think that would work in Raspbian, since it also has /etc/default/keyboard and raspi-config writes keyboard options to it if you set any (though of course CapsLock isn't among the choices it offers you). But it doesn't work in the PIXEL desktop: there, that key still acts as a Caps Lock.

Apparently lxde (under PIXEL's hood) overrides the keyboard options in /etc/default/keyboard without actually giving you a UI to set them. But you can add your own override by editing ~/.config/lxkeymap.cfg. Make the option line look something like this:

option = ctrl:nocaps

Then when you restart PIXEL, you should have a Control key where CapsLock used to be.

July 26, 2017

New Evince format support: Adobe Illustrator and CBR files

A quick update, as we've touched upon Evince recently.

I mentioned that we switched from using external tools for decompression to using libarchive. That's not the whole truth, as we switched to using libarchive for CBZ, CB7 and the infamous CBT, but used a copy/paste version of unarr to support RAR files, as libarchive support lacks some needed features.

We hope to eventually remove the internal copy of unarr, but, as a stop-gap, that allowed us to start supporting CBR comics out of the box, and it's always a good thing when you have one less non-free package to grab from somewhere to access your media.

The second new format is really two formats, from either side of the 2-digit-year divide: PostScript-based Adobe Illustrator and PDF-based Adobe Illustrator. Evince now declares support for "the format" if both backends are built and supported. It only took 12 years, and somebody stumbling upon the feature request while doing bug triaging. The nooks and crannies of free software where the easy feature requests get lost :)

Both features will appear in GNOME 3.26, the out-of-the-box CBR support is however available now in an update for the just released Fedora 26.

July 25, 2017

My Life (So Far) in Numbers

As of 2017:

  • I have been at the company I helped to start for 18 years
  • I have been married for 12 years
  • I have a 9-year-old child (and 6, and 1)

I’m going for a personal high-score.

July 23, 2017

Nambé Lake Nutcrackers

[Nambe Lake]

This week's hike was to Nambé Lake, high in the Sangre de Cristos above Santa Fe.

It's a gorgeous spot, a clear, shallow mountain lake surrounded by steep rocky slopes up to Lake Peak and Santa Fe Baldy. I assume it's a glacial cirque, though I can't seem to find any confirmation of that online.

[Clark's nutcracker taking bread from my hand.] There's a raucous local population of Clark's nutcrackers, a grey relative of the jays (but different from the grey jay) renowned for its fearlessness and curiosity. One of my hiking companions suggested they'd take food from my hand if I offered. I broke off a bit of my sandwich and offered it, and sure enough, a nutcracker flew right over. Eventually we had three or four of them hanging around our lunch spot.

The rocky slopes are home to pikas, but they're shy and seldom seen. We did see a couple of marmots in the rocks, and I caught a brief glimpse of a small, squirrel-sized head that looked more grey than brown like I'd expect from a rock squirrel. Was it a pika? I'll never know.

We also saw some great flowers. Photos: Nambé Lake Nutcrackers.

July 22, 2017

Krita 3.2.0: Second Beta Available

We’re releasing the second beta for Krita 3.2.0 today! These beta builds contain the following fixes, compared to the first 3.2.0 beta release. Keep in mind that this is a beta: you’re supposed to help the development team out by testing it, and reporting issues on the bug tracker.

  • There are still problems on Windows with the integration with the gmic-qt plugin, but several lockups have been fixed.
  • The smart patch tool merge was botched: this is fixed now.
  • It wasn’t possible anymore to move vector objects with the mouse (finger and tablet worked fine). This is fixed now.
  • Fixed the size and flow sliders
  • Fixes to saving jpg or png images without a transparency channel


The KDE download site has been updated to support https now.


Note for Windows users: if you encounter crashes, please follow these instructions to use the debug symbols so we can figure out where Krita crashes.


(If, for some reason, Firefox thinks it needs to load this as text: to download, right-click on the link.)

A snap image will be available from the Ubuntu application store. When it is updated, you can also use the Krita Lime PPA to install Krita 3.2.0-beta.2 on Ubuntu and derivatives.


Source code


For all downloads:


The Linux appimage and the source tarball are signed. You can retrieve the public key over https here:
. The signatures are here.

Support Krita

Krita is a free and open source project. Please consider supporting the project with donations or by buying training videos or the artbook! With your support, we can keep the core team working on Krita full-time.

July 21, 2017


@GodTributes took over my title, soz.

Dude, where's my maintainer?

Last year, probably as a distraction from doing anything else, or maybe because I was asked, I started reviewing bugs filed as a result of automated flaw discovery tools (from Coverity to UBSan via fuzzers) being run on gdk-pixbuf.

Apart from the security implications of a good number of those problems, there was also the annoyance of having a busted image file bring down your file manager, your desktop, or even an app that opened a file chooser either because it was broken, or because the image loader for that format didn't check for the sanity of memory allocations.

(I could have added links to Bugzilla entries for each one of the problems above, but that would just make it harder to read)

Two big things happened in gdk-pixbuf 2.36.1, which was used in GNOME 3.24:

  • the removal of GdkPixdata as a stand-alone image format loader. We really don't want to load GdkPixdata files from sources other than generated sources or embedded data structures, and removing that loader closed off those avenues. We still ended up fixing a fair number of naive assumptions in helper functions though.
  • the addition of a thumbnailer for gdk-pixbuf supported images. Images would not be special-cased any more in gnome-desktop's thumbnailing code, making the file manager, the file chooser and anything else navigating directories full of broken and huge images more reliable.
But that's just the start. gdk-pixbuf continues getting bug fixes, and we carry on checking for overflows, underflows and just flows, breaks and beats in general.

Programmatic Thumbellina portrait-maker

Picture, if you will, a website making you download garbage files from the Internet, the ROM dump of a NES cartridge that wasn't properly blown on and digital comic books that you definitely definitely paid for.

That's a nice summary of the security bugs foisted upon GNOME in the past year or so, even if, thankfully, we were ahead of the curve in terms of fixing those issues (the GStreamer NSF decoder bug was removed in 2013, the comics backend in evince was rewritten over a period of 2 years and committed in March 2017).

Still, 2 pieces of code were running on pretty much every file downloaded, on purpose or not, from the Internet: Tracker's indexers and the file manager's thumbnailers.

Tracker started protecting itself not long after the NSF vulnerability, even if recent versions of GStreamer weren't vulnerable, as we mentioned.

That left the thumbnailers. Some of those are first party, like the gdk-pixbuf one, those offered by core applications (Evince, Videos), and those written by GNOME developers (yours truly for both epub/mobi and Nintendo DS).

They're all good quality code I'd vouch for (having written or maintained quite a few of them), but they can rely on third-party libraries (say GStreamer, poppler, or libarchive), have naive or insufficiently defensive code (gdk-pixbuf loaders,  GStreamer plugins) or, worst of all: THIRD-PARTY EXTENSIONS.

There are external plugins and extensions for image formats in gdk-pixbuf, for video and audio formats in GStreamer, and for thumbnailers pretty much anywhere. We can't control those, but the least we can do when they explode in a wet mess is make sure that the toilet door is closed.

Not even Nicholas Cage can handle this Alcatraz

For GNOME 3.26 (and today in git master), the thumbnailer stall will be doubly bolted by a Bubblewrap sandbox and a seccomp blacklist.

This closes a whole vector of attack for the GNOME Desktop, but doesn't mean we're completely out of the woods. We'll need to carry on maintaining and fixing security bugs in those libraries and tools we depend on, as GStreamer plugin bugs still affect Videos, gdk-pixbuf bugs still affect Photos and Eye Of Gnome, etc.

And there are limits to what those 2 changes can achieve. The sandboxing and syscall blacklisting avoids those thumbnailers writing anything but an image file in PNG format in a specific directory. There's no network, the filename of the original file is hidden and sanitised, but the thumbnailer could still create a crafted PNG file, and the sandbox doesn't work inside a sandbox! So no protection if the application running the thumbnailer is inside Flatpak.

In fine

GNOME 3.26 will have better security for thumbnailers, so you won't "need to delete GNOME Files".

But you'll probably want to be careful with desktops that forked our thumbnailing code, namely Cinnamon and MATE, which don't implement those security features.

The next step for the thumbnailers will be beefing up our protection against greedy thumbnailers (in terms of CPU and memory usage), and sharing the code better between thumbnailers.

Note for later, more images of cute animals.

July 18, 2017

Krita 3.2 beta 1 Released

We’re releasing the first beta for Krita 3.2.0 today! Compared to Krita 3.1.4, released 26th of May, there are numerous bug fixes and some very cool new features. Please test this release, so we can fix bugs before the final release!

Known bugs

It’s a beta, so there are bugs. One of them is that the size and flow sliders are disabled. We promise faithfully we won’t release until that’s fixed, but in the meantime, no need to report it!


  • Krita 3.2 will use the gmic-qt plugin created and maintained by the authors of G’Mic. We’re still working with them to create binary builds that can run on Windows, OSX and most versions of Linux. This plugin completely replaces the older gmic plugin.
  • We added Radian’s brush set to Krita’s default brushes.

These brushes are good for creating a strong painterly look:

  • There are now shortcuts for changing layer states like visibility and lock.
  • There have been many fixes to the clone brush
  • There is a new dialog from where you can copy and paste relevant information about your system for bug reports.
  • We’ve integrated the Smart Patch tool that was previously only in the 4.0 pre-alpha builds!

  • The Gaussian Blur filter now can use kernels up to 1000 pixels in diameter

Bug Fixes

Among the bigger bug fixes:

  • Painting with your finger on touch screens is back. You can enable or disable this in the settings dialog.
  • If previously you suffered from the “green brush outline” syndrome, that should be fixed now, too. Though we cannot guarantee the fix works on all OpenGL systems.
  • There have been a number of performance improvements as well
  • The interaction with the file dialog has been improved: it should be better at guessing which folder you want to open, which filename to suggest and which file type to use.

And of course, there were dozens of smaller bug fixes.


The KDE download site has been updated to support https now.


Note for Windows users: if you encounter crashes, please follow these instructions to use the debug symbols so we can figure out where Krita crashes.


(For some reason, Firefox thinks it needs to load this as text: to download, right-click on the link.)

A snap image will be available from the Ubuntu application store. When it is updated, you can also use the Krita Lime PPA to install Krita 3.2.0-beta.1 on Ubuntu and derivatives.


Source code


For all downloads:


The Linux appimage and the source tarball are signed. You can retrieve the public key over https here:
. The signatures are here.

Support Krita

Krita is a free and open source project. Please consider supporting the project with donations or by buying training videos or the artbook! With your support, we can keep the core team working on Krita full-time.

July 16, 2017

Translating Markdown to LibreOffice or Word

For the Raspberry Pi Zero W book I'm writing, the publisher, Maker Media, wants submissions in Word format (but stressed that LibreOffice was fine and lots of people there use it, a nice difference from Apress). That's fine ... but when I'm actually writing, I want to be able to work in emacs; I don't want to be distracted fighting with LibreOffice while trying to write.

For the GIMP book, I wrote in plaintext first, and formatted it later. But that means the formatting step took a long time and needed exceptionally thorough proofreading. This time, I decided to experiment with Markdown, so I could add emphasis, section headings, lists and images all without leaving my text editor.

Of course, it would be nice to be able to preview what the formatted version will look like, and that turned out to be easy with a markdown editor called ReText, which has a lovely preview mode, as long as you enable Edit->Use WebKit renderer (I'm not sure why that isn't the default).

Okay, a chapter is written and proofread. The big question: how to get it into the Word format the publisher wants?

First thought: ReText has a File->Export menu. Woohoo -- it offers ODT. So I should be able to export to ODT then open the resulting file in LibreOffice.

Not so much. The resulting LibreOffice document is a mess, with formatting that doesn't look much like the original, and images that are all sorts of random sizes. I started going through it, resizing all the images and fixing the formatting, then realized what a big job it was going to be and decided to investigate other options first.

ReText's Export menu also offers HTML, and the HTML it produces looks quite nice in Firefox. Surely I could open that in LibreOffice, then save it (maybe with a little minor reformatting) as DOCX?

Well, no, at least not directly. It turns out LibreOffice has no obvious way to import an HTML file into a normal text document. If you Open the HTML file, it displays okay (except the images are all tiny thumbnails and need to be resized one by one); but LibreOffice can't save it in any format besides HTML or plaintext. Those are the only formats available in the menu in the Save dialog. LibreOffice also has a Document Converter, but it only converts Office formats, not HTML; and there's no Import... in LO's File menu. There's a Wizards->Web Page, but it's geared to creating a new web page and saving as HTML, not importing an existing HTML-formatted document.

But eventually I discovered that if I "Create a new Text Document" in LibreOffice, I can Select All and Copy in Firefox, followed by Paste into Libre Office. It works great. All the images are the correct size, the formatting is correct needing almost no corrections, and LibreOffice can save it as DOCX, ODT or whatever I need.

Image Captions

I mentioned that the document needed almost no corrections. The exception is captions. Images in a book need captions and figure numbers, unlike images in HTML.

Markdown specifies images as

![Image description](path/to/image.jpg)

Unfortunately, the Image description part is only visible as a mouseover, which only works if you're exporting to a format intended for a web browser that runs on desktop and laptop computers. It's no help in making a visible caption for print, or for tablets or phones that don't have mouseover. And the mouseover text disappears completely when you paste the document from Firefox into LibreOffice.

I also tried making a table with the image above and the caption underneath. But I found that simply adding a new paragraph of italics below the image looked just as good in ReText, and much better in HTML:


*Image description here*

That looks pretty nice in a browser or when pasted into LibreOffice. But before submitting a chapter, I changed them into real LibreOffice captions.
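Generating that image-plus-italic-caption pattern can be scripted rather than typed by hand; a quick sketch (my own helper, not a ReText feature):

```python
import re

# Sketch: duplicate each markdown image's alt text as an italic
# caption paragraph below the image.
def add_captions(markdown_text):
    def repl(match):
        alt, path = match.group(1), match.group(2)
        return '![%s](%s)\n\n*%s*' % (alt, path, alt)
    return re.sub(r'!\[([^\]]*)\]\(([^)]+)\)', repl, markdown_text)
```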

In LibreOffice, right-click on the image; Add Caption is in the context menu. It can even add numbers automatically. It initially wants to call every caption "Illustration" (e.g. "Illustration 1", "Illustration 2" and so on), and strangely, "Figure" isn't one of the available alternatives; but you can edit the category and change it to Figure, and that persists for the rest of the document, helpfully numbering all your figures in order. The caption dialog when you add each caption always says that the caption will be "Illustration 1: (whatever you typed)" even if it's the fourteenth image you've captioned; but when you dismiss the dialog it shows up correctly as Figure 14, not as a fourteenth Figure 1.

The only problem arises if you have to insert a new image in the middle of a chapter. If you do, you end up with two captions numbered Figure 6 (or whatever the number is), and it's not clear how to persuade LibreOffice to renumber them. You can fix it by removing all the captions and starting over, but ugh. I never found a better way, and web searches on LibreOffice caption numbers suggest this is a perennial source of frustration with LibreOffice.

The bright side: struggling with captions in LibreOffice convinced me that I made the right choice to do most of my work in emacs and markdown!

July 14, 2017

July 13, 2017

Summer Sale: Made with Krita now €7,95

It’s summer — a bit rainy, but still summer! So it’s time for a summer sale — and we’ve reduced the price of the Made with Krita 2016 art book to just €7,95. That means that shipping (outside the Netherlands) is more expensive than the book itself, but it’s a great chance to get acquainted with forty great artists and their work with Krita! The book is professionally printed on 130 grams paper and softcover bound in signatures. The cover illustration is by Odysseas Stamoglou. Every artist is showcased with a great image, as well as a short bio.

Made with Krita 2016
On sale: €7,95
Forty artists from all over the world, working in all kinds of styles and on all kinds of subjects, show how Krita is used in the real world to create amazing and engaging art. The book also contains a biographical section with information about each individual artist. Made with Krita 2016 is now on sale: €7,95, excluding shipping. Shipping is €11,25 (€3,65 in the Netherlands).

Get Made with Krita 2016

July 11, 2017

Over Half a Million Downloads per Month

The official Blender release is now being downloaded over half a million times per month, for a total of 6.5 million downloads over the past year.

Between July 2016 and July 2017, Blender saw the release of Blender 2.78 and its a/b/c fix releases.

This is not counting:

  • Experimental Builds on Buildbot
  • Release Candidates and Test Builds
  • Other services offering Blender (app stores like Steam or community sites like GraphicAll)
  • Linux repositories

Below is the full report for each platform.