November 25, 2015

Happy Birthday GIMP!


Also: wallpapers, and darktable 2.0 creeps even closer!

I got busy building a birthday present for a project I work with, and all sorts of neat things happened in my absence! The Ubuntu Free Culture Showcase chose winners for its wallpaper contest for Ubuntu 15.10 ‘Wily Werewolf’ (and quite a few community members were among those chosen).

The darktable crew is speeding along to a 2.0 release with a new RC2 being released.

Also, a great big HAPPY 20th BIRTHDAY GIMP! I made you a present. I hope it fits and you like it! :)

Ubuntu Wallpapers

Back in early September I posted on discuss about the Ubuntu Free Culture Showcase that was looking for wallpaper submissions from the free software community to coincide with the release of Ubuntu 15.10 ‘Wily Werewolf’. The winners were recently chosen from among the submissions and several of our community members had their images chosen!

The winning entries from our community include:

  • Moss inflorescence by carmelo75 (PhotoFlow creator Andrea Ferrero)
  • Light my fire, evening sun by Dariusz Duma
  • Sitting Here, Making Fun by Philipp Haegi (Mimir)
  • Tranquil by Pat David

A big congratulations to you all on having such amazing images chosen! If you’re running Ubuntu 15.10, you can grab the ubuntu-wallpapers package to get these images!

darktable 2.0 RC2

Hot on the heels of the prior release candidate, darktable now has an RC2 out. There are many minor bugfixes from the previous RC1, such as:

  • high iso fix for exif data of some cameras
  • various macintosh fixes (fullscreen)
  • fixed a deadlock
  • updated translations

The preliminary changelog from the 1.6.x series:

  • darktable has been ported to gtk-3.0
  • new thumbnail cache replaces mipmap cache (much improved speed, less crashiness)
  • added print mode
  • reworked screen color management (softproof, gamut check etc.)
  • removed dependency on libraw
  • removed dependency on libsquish (solves patent issues as a side effect)
  • unbundled pugixml, osm-gps-map and colord-gtk
  • text watermarks
  • color reconstruction module
  • raw black/white point module
  • delete/trash feature
  • addition to shadows&highlights
  • more proper Kelvin temperature, fine-tuning preset interpolation in WB iop
  • noiseprofiles are in external JSON file now
  • monochrome raw demosaicing (not sure whether it will stay for release, like Deflicker, but hopefully it will stay)
  • aspect ratios for crop&rotate can be added to conf (ae36f03)
  • navigating lighttable with arrow keys and space/enter
  • pdf export — some changes might happen there still
  • brush size/hardness/opacity have key accels
  • the facebook login procedure is a little different now
  • export can upscale
  • we no longer drop history entries above the selected one when leaving dr or switching images
  • text/font/color in watermarks
  • image information now supports gps altitude
  • allow adding tone- and basecurve nodes with ctrl-click
  • new “mode” parameter in the export panel
  • high quality export now downsamples before watermark and frame to guarantee consistent results
  • lua scripts can now add UI elements to the lighttable view (buttons, sliders etc…)
  • a new repository for external lua scripts was started.

More information and packages can be found on the darktable github repository.

Remember, updating from the currently stable 1.6.x series is a one-way street for your edits (no downgrading from 2.0 back to 1.6.x).

GIMP Birthday

All together now…

Happy Birthday to GIMP! Happy Birthday to GIMP!

GIMP Wilber Big Icon

This past weekend GIMP celebrated its 20th anniversary! It was twenty years ago on November 21st that Peter Mattis announced the availability of the “General Image Manipulation Program” on comp.os.linux.development.apps.

Twenty years later and GIMP doesn’t look a day older than a 1.0 release! (Yes, there’s a double entendre there).

To celebrate, I’ve been spending the past couple of months getting a brand new website and infrastructure built for the project (just in case anyone was wondering where I was or why I was so quiet). I like the way it turned out and is shaping up, so go have a look if you get a moment!

There’s even an official news post about it on the new site!

GIMP 2.8.16

To coincide with the 20th anniversary, the team also released a new stable version in the 2.8 series: 2.8.16. Head over to the downloads page to pick up a copy!

New PhotoFlow Tutorial

Still working hard and fast on PhotoFlow, Andrea took some time to record a new video tutorial. He walks through some basic usage of the program, in particular opening an image, adding layers and layer masks, and saving the results. Have a look, and if you have a moment give him some feedback!

Andrea is working on PhotoFlow at a very fast pace, so expect more news about his progress very soon!

Krita 2.9 Animation Edition Beta released!


Today we are happy to announce the long-awaited beta version of Krita with Animation and Instant Preview support! Based on Krita 2.9, you can now try out the implementation of the big 2015 Kickstarter features!

What’s new in this version? From the user’s point of view, Krita hasn’t changed much. There are three new dockers: Animation, Timeline and Onion Skins, which let you control everything about your animation frames, and one new menu item, View->Instant Preview Mode (previously known as Level of Detail), which allows painting on huge canvases. For both features, you need a system that supports OpenGL 3.0 or higher.

For people who previously installed Krita: to get Instant Preview to show up in the View menu, delete the krita.rc (not kritarc) file in your resource folder (which can be accessed quickly via Settings->Manage Resources->Open Resource Folder) and restart Krita. Or just use the hotkey Shift+L.

But under these visually tiny changes hides a heap of work done to the Krita kernel code. We have almost rewritten it to allow most of the rendering processes to run in the background. All animated frames and view cache planes are now calculated while the user is idle (thinking, or choosing a new awesome brush). Thanks to these changes it is now possible to work efficiently with huge images and play a sequence of complex multi-layered frames in real time (the frames are recalculated in the background and uploaded to your GPU directly from the cache).


So, finally, welcome Krita 2.9 Animation Edition Beta! (Note the version number: the final release will be based on Krita 3.0; this version is created from the 2.9 stable release, but it is still a beta.) We welcome your feedback!

Video tutorial from Timothee Giet:

A short video introduction into Krita animation features is available here.

Packages for Ubuntu:

You can get them through the Krita Lime repositories. Just choose the ‘krita-animation-testing’ package:

sudo add-apt-repository ppa:dimula73/krita
sudo apt-get update
sudo apt-get install krita-animation-testing

Packages for Windows:

Two packages are available: 64-bit and 32-bit.

You can download the zip files and just unpack them (for instance on your desktop) and run Krita. You might get a warning that the Visual Studio 2012 Runtime DLL is missing: you can download the missing dll here. You do not need to uninstall any other version of Krita before giving these packages a try!

User manuals and tutorials:

November 24, 2015

Krita 2.9 Animation Edition beta

I’m very happy to tell you that, finally, a version of Krita supporting basic animation features is released! (Check it here)

This is still at an early stage, based on the latest 2.9 version, with a lot of additional features to come later in version 3.

If you want to have fun with it, here is a little introduction tutorial to get started, with some text and a video to illustrate it.

-Load the animation workspace to quickly activate the timeline and animation dockers.

-The timeline only shows the selected layer. To keep a layer always visible on it, click on the plus icon and select the corresponding option (Show in timeline to keep the selected layer, or Add existing layer and select one in the list …)

-To make a layer animated, create a new frame on it (with the right-click option on the timeline, or with the button on the animation docker). The icon to activate onion skins on it (the light bulb icon) then becomes visible; activate it to see previous and next frames.

-The content of a frame is visible in the next frames until you create a new one.

-After drawing the first frame, go further in the timeline and do any action to edit the image (draw, erase, delete all, use transform tool, …). It creates a new frame with the content corresponding to the action you made.

-If you prefer to only create new frames manually, disable the auto-frame-mode with the corresponding button in the animation docker (the film with a pen icon).

-To move a frame in time, just drag and drop it to a new time position.

-To duplicate a frame, press Control while you drag and drop it to a new time position.

-In the animation docker, you can define the start and end of the animation (to define the frames to use for export, and for the playback loop). You can also define the speed of the playback with the Frame rate value (frames per second) and the Play speed (a multiplier of the frame rate).

-In the Onion Skins docker, you can change the opacity for each of the 10 previous and next frames. You can also select a color overlay to distinguish previous and next frames. You can act on the global onion skins opacity with the 0 slider.

-To change the opacity of several onion skins at the same time, press Shift while clicking across the sliders.

-To export your animation, use the menu entry File – Export animation, and select the image format you want for the image sequence.

Have fun animating in Krita, and don’t forget to report any issue you find to help improve the final version ;)

SDN/NFV DevRoom at FOSDEM: Deadline approaching!

We extended the deadline for the SDN/NFV DevRoom at FOSDEM to Wednesday, November 25th recently – and we now have the makings of a great line-up!

To date, I have received proposals about open source VNFs, dataplane acceleration and accelerated virtual switching, an overview of routing on the internet that looks fascinating, open switch design and operating systems, traffic generation and testing, and network overlays.

I am still interested in having a few more NFV focussed presentations, and one or two additional SDN controller projects – and any other topics you might think would tickle our fancy! Just over 24 hours until the deadline.

November 23, 2015

Game Art Quest Kickstarter!

Today an exciting new crowdfunding campaign kicks off! Nathan Lovato, the author of Game Design Quest, wants to create a new series of video tutorials on creating 2D game art with Krita. Nathan is doing this on his own, but the Krita project, through the Krita Foundation, really wants this to happen! Over to Nathan, introducing his campaign:

“There are few learning resources dedicated to 2d game art. With Krita? Close to none. That is why I started working on Game Art Quest. This training will show you the techniques and concepts game artists use in their daily work. If you want to become a better artist, this one is for you.”

“We are developing this project together with the Krita Foundation. This is an opportunity for Krita to reach new users and to spark the interest of the press. However, for this project to come to life, we need your help. A high quality training series requires months of full-time work to create. That is why we are crowdfunding it on Kickstarter.”

“But who the heck am I to teach you game art? I’m Nathan, a professional game designer and tutor. I am the author of Game Design Quest, a YouTube channel filled with tutorials about game creation. Every Thursday, I release a new video. And I’ve done so since the start of the year, on top of my regular work. Over the months, my passion for open source technologies grew stronger. I discovered Krita 2.9 and felt really impressed by it. Krita deserves more attention.”

“Long story short, Game Art Quest is live on Kickstarter. And its existence depends on you!”

“Even if you can’t afford to pledge, share the word on social networks! This would help immensely. Also, this campaign is not only supporting the production of the premium series. It will allow me to keep offering you free tutorials for the months to come. And for the whole duration of the campaign, you’re getting 2 tutorials every single week!”

Check out Nathan’s campaign:


Interview with Christopher Stewart


Could you tell us something about yourself?

My name is Christopher, and I am an illustrator living in Northern California. When I’m not in a 2d mindset I like to sculpt with Zbrush and Maya. Some of my interests include Antarctica, Hapkido and racing planes of the 1930s.

Do you paint professionally, as a hobby artist, or both?

I have been working professionally for quite some time. I have worked for clients such as Ubisoft, Shaquille O’Neal and Universal Studios. I’m always looking for new and interesting work.

What genre(s) do you work in?

SF, Fantasy, and Comic Book/ Sequential art. This is where the foundation of my work lies – these genres have always been an inspiration to me ever since I was a kid.

Whose work inspires you most — who are your role models as an artist?

Wow, what a tough question! So many great artists out there… Brom, definitely; N.C. Wyeth, George Perez, and Alphonse Mucha. Recently I have revisited the background stylists of Disney with their immersive environments.

How and when did you get to try digital painting for the first time?

About 9 years ago. Until then my work was predominantly traditional. I wanted to try new mediums, and I thought digital painting would be a great area to explore.

What makes you choose digital over traditional painting?

Time and space.

Alterations and color adjustments can be done quickly for a given digital piece.

The physicality of traditional media brings different challenges, and the solutions usually take longer to accomplish.

Digital painting also doesn’t take up a lot of space, unlike even a few decent-sized stretched canvases…

How did you find out about Krita?

I had tried Painter X and CS and they were unsatisfying, so I was looking for a paint program. Krita was recommended by a long-time friend who liked the program, and I was hooked.

What was your first impression?

It was very intuitive. It had a UI that I had very few difficulties with.

What do you love about Krita?

I really, really liked the responsiveness of the brushes. With other applications I was experiencing a “flatness” between the tablet I use and the results I wanted on screen; Krita’s brushes just feel more supple. The ability to customize the interface and brushes was also a huge plus.

What do you think needs improvement in Krita? Is there anything that really annoys you?

I haven’t been using Krita very long (less than 6 months) but I would like to be able to save and import/export color history as a file within an open Krita document.

What sets Krita apart from the other tools that you use?

When a company makes an application as powerful as Krita available for free, it’s a statement about how confident they are that artists will love it. And judging from the enthusiastic and knowledgeable people in the forums, they not only love it, they want others to be able to love it and use it too. Both developing and experienced artists need to be able to evaluate new tools easily. Access to those tools should never be so prohibitively costly as to turn them away. Krita doesn’t get in the way of talent being explored; it supports it.

What techniques and brushes do you prefer to use?

I use a lot of the default brushes, especially the Bristle brushes, with a semi-transparent texture as a final layer to add a plein-air look. I use some of David Revoy’s brushes, specifically the Splatter brushes. I recently made a new custom brush that I tried out on my most recent illustration.

Where can people see more of your work?

You can reach me through my website!

Anything else you’d like to share?

Thank you so much for the interview and a special thanks to the developers and community that make Krita work!

November 22, 2015

Call to translators

Dear translators,

We plan to release Stellarium 0.14.1 on the first day of next month.

This is a bugfix release, with a few small fixes and a few features ported from version 0.15.0. In the meantime, translators can improve the translation of version 0.14.0 and fix some mistakes in the translations. If you can assist with translation into any of the 134 languages which Stellarium supports, please go to Launchpad Translations and help us out:

If required, we can postpone the release by a few days.

Thank you!

Ivan Maryadaraman

Ivan Maryadaraman, 2015: One village, two families; first love, then grudge, revenge, blood feud, feuds of every other flavour, and the father’s demise. Time races onward. The hero grows up a pauper. The villain grows up to become filthy rich. The villain’s daughter grows up beautiful. Naturally, her brothers have to be gym-built musclemen – and so they are, two of them. For reasons you would never expect (even though they have turned up in Dileep films four times already), he returns to the village […]

Ennum Eppozhum

Ennum Eppozhum, 2015: Sathyan Anthikad, who casts his films in the same format every time, set aside many of his usual ingredients, ‘forgot to draw a storyboard’, and then forgot to call cut – that film is Ennum Eppozhum. That he forgot to call cut is true: the film stretches out like a long, overstretched elastic waistband – only there’s no hole left to put a leg through. A Sathyan Anthikad film that doesn’t wield that deadly weapon called the flashback, has no connection to Tamil Nadu, and doesn’t show off a village’s greenery is, in a way, […]

November 20, 2015

Kubernetes from the ground up

I really loved reading Git from the bottom up when I was learning Git, which starts by showing how all the pieces fit together. Starting with the basics and gradually working towards the big picture is a great way to understand any complex piece of technology.

Recently I’ve been working with Kubernetes, a fantastic cluster manager. Like Git it is tremendously powerful, but the learning curve can be quite steep.

But there is hope. Kamal Marhubi has written a great series of articles that take the same approach: start from the basic building blocks and build up from those.

Currently available:

Highly recommended.



Industry benchmark SPEC 2.0 uses Blender

SPEC is the Standard Performance Evaluation Corporation, the industry-leading provider of performance-testing benchmark suites. The newly released SPECwpc V2.0 benchmark measures all key aspects of workstation performance based on diverse professional applications.

The SPECwpc benchmark will be used by vendors to optimize performance and publicize benchmark results for different vertical market segments. Users can employ the benchmark for buying and configuration decisions specific to their industries. SPECwpc V2.0 runs under the 64-bit versions of Microsoft Windows 7 SP1 and Windows 8.1 SP1.

New in SPECwpc V2.0 is the prominent inclusion of two free/open source software projects: Blender and LuxRender.



For more information: the official SPEC announcement.


November 17, 2015

Ubuntu Unstable Repository and our Release Candidates

Following is a public service announcement from Pascal de Bruijn, the maintainer of the Ubuntu PPAs.

As most of you know, my darktable-unstable PPA was serving as a pre-release repository for our stable maintenance tree, as it usually does. Now as master has settled down, and we're slowly gearing up for a 2.0 release, I'll do pre-release (release candidate) builds for darktable 2.0 there.

On my darktable-unstable PPA I will support Ubuntu Trusty (14.04, the latest Long Term Support release) as always. Temporarily I'll support Ubuntu Wily (15.10, the latest plain release) as well, at least until we have a final 2.0 stable release. Once we have a final 2.0 stable release I will support all Ubuntu versions (still) supported by Canonical at that time via my darktable-release PPA as usual.

In general updates on my darktable-unstable PPA should be expected to be fairly erratic, completely depending on the number and significance of changes being made in git master. That said, I expect that it will probably average out at once a week or so.

If you find any issues with these darktable release candidates please do report them to our bug tracker.

November 16, 2015

second release candidate for darktable 2.0

we're proud to announce the second release candidate in the new feature release of darktable, 2.0~rc2.

as always, please don't use the autogenerated tarball provided by github, but only our tar.xz.

the release notes and relevant downloads can be found attached to this git tag:
please only use our provided packages ("darktable-2.0.rc2.*" tar.xz and dmg) not the auto-created tarballs from github ("Source code", zip and tar.gz). the latter are just git snapshots and will not work! here are the direct links to tar.xz and dmg:

the checksums are:

$ sha256sum darktable-2.0~rc2.tar.xz 
9349eaf45f6aa4682a7c7d3bb8721b55ad9d643cc9bd6036cb82c7654ad7d1b1  darktable-2.0~rc2.tar.xz
$ sha256sum darktable-2.0~rc2.dmg 
f343a3291642be1688b60e6dc98930bdb559fc5022e32544dcbe35a38aed6c6d  darktable-2.0~rc2.dmg

packages for individual platforms and distros will follow shortly.

for your convenience, robert hutton collected build instructions for quite a few distros in our wiki:

the changes from rc1 include many minor bugfixes, such as:

  • high iso fix for exif data of some cameras
  • various macintosh fixes (fullscreen)
  • fixed a deadlock
  • updated translations

and the preliminary changelog as compared to the 1.6.x series can be found below.

when updating from the currently stable 1.6.x series, please bear in mind that your edits will be preserved during this process, but it will not be possible to downgrade from 2.0 to 1.6.x any more. be careful if you need darktable for production work!

happy 2.0~rc2 everyone :)

  • darktable has been ported to gtk-3.0
  • new thumbnail cache replaces mipmap cache (much improved speed, less crashiness)
  • added print mode
  • reworked screen color management (softproof, gamut check etc.)
  • removed dependency on libraw
  • removed dependency on libsquish (solves patent issues as a side effect)
  • unbundled pugixml, osm-gps-map and colord-gtk
  • text watermarks
  • color reconstruction module
  • raw black/white point module
  • delete/trash feature
  • addition to shadows&highlights
  • more proper Kelvin temperature, fine-tuning preset interpolation in WB iop
  • noiseprofiles are in external JSON file now
  • monochrome raw demosaicing (not sure whether it will stay for release, like Deflicker, but hopefully it will stay)
  • aspect ratios for crop&rotate can be added to conf (ae36f035e1496b8b8befeb74ce81edf3be588801)
  • navigating lighttable with arrow keys and space/enter
  • pdf export -- some changes might happen there still
  • brush size/hardness/opacity have key accels
  • the facebook login procedure is a little different now
  • export can upscale
  • we no longer drop history entries above the selected one when leaving dr or switching images
  • text/font/color in watermarks
  • image information now supports gps altitude
  • allow adding tone- and basecurve nodes with ctrl-click
  • new "mode" parameter in the export panel
  • high quality export now downsamples before watermark and frame to guarantee consistent results
  • lua scripts can now add UI elements to the lighttable view (buttons, sliders etc...)
  • a new repository for external lua scripts was started.

November 14, 2015

fwupd and DFU

For quite a long time fwupd has supported updating the system ‘BIOS’ using the UpdateCapsule UEFI mechanism. This open specification allows vendors to provide a single update suitable for Windows and Linux, and the mechanism for applying it is basically the same for all vendors. Although there are only a few systems in the wild supporting capsule updates, a lot of vendors are planning new models next year, and a few of the major ones have been trialing the LVFS service for quite a while too. With capsule updates, fwupd and the LVFS we now have a compelling story for how to distribute and securely install system BIOS updates automatically.

It’s not such a rosy story for USB devices. In theory, everything should be using the DFU specification, which has been endorsed by the USB consortium, but for a number of reasons quite a few vendors don’t use this. I’m guilty as charged for the ColorHug devices, as I didn’t know of the existence of DFU when designing the hardware. For ColorHug I just implemented a vendor-specific HID bootloader with a few custom commands, as so many other vendors have done; it works well, but every vendor does things a slightly different way, which means having vendor-specific update tools and fairly random firmware file formats.

With DFU, what’s supposed to happen is there are two modes for the device, a normal application runtime which is doing whatever the device is supposed to be doing, and another DFU mode which is really just an EEPROM programmer. By ‘detaching’ the application firmware using a special interface you can program the device and then return to normal operation.
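To make the detach step concrete, here is a minimal sketch using libusb-1.0; the request number and semantics come from the DFU 1.1 spec, while the interface number and timeouts are placeholder assumptions (this is not fwupd’s or libdfu’s actual code):

/* Sketch: ask a DFU-capable device to detach its application firmware.
 * Per DFU 1.1, DFU_DETACH is class request 0 sent to the DFU interface,
 * with wValue holding the detach timeout in milliseconds. */
#include <libusb.h>

#define DFU_REQUEST_DETACH 0x00

static int
dfu_detach (libusb_device_handle *handle)
{
  return libusb_control_transfer (handle,
                                  0x21,               /* host-to-device | class | interface */
                                  DFU_REQUEST_DETACH, /* bRequest */
                                  1000,               /* wValue: detach timeout (ms), an assumption */
                                  0,                  /* wIndex: DFU interface number, an assumption */
                                  NULL, 0,            /* no data stage */
                                  5000);              /* USB transfer timeout (ms) */
}

After a successful detach (and, depending on the device’s attributes, a bus reset), the device re-enumerates in DFU mode, where the actual download/upload requests do the programming.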

So, what to do? For fwupd I want to ask vendors of removable hardware to implement DFU so that we don’t need to write code for each device type in fwupd. To make this a compelling prospect I’ve spent a good chunk of time last week:

  • Creating a GObjectIntrospectable and cancellable host-side library called libdfu
  • Writing a reference GPLv3+ device-side implementation for a commonly used USB stack for PIC microcontrollers
  • Writing the interface code in fwupd to support DFU files wrapped in .cab files for automatic deployment

At the moment libdfu supports reading and writing raw, DFU and DfuSe file types, and supports reading and writing to DFU 1.1 devices. I’ve not yet implemented writing to ST devices (a special protocol extension invented by STMicroelectronics), although that’s only because I’m waiting for someone to lend me a device with a STM32F107 included (e.g. DSO Nano). I’ve hopefully made the code flexible enough to make this possible without breaking API, although the libdfu library is currently private to fwupd until it’s had some proper review. You can of course use the dependable dfu-util tool to flash firmware, but this wasn’t suitable for use inside fwupd for various reasons.

Putting my money where my mouth is, I’ve converted the (not-yet-released) ColorHug+ bootloader and firmware to use DFU; excluding all the time I spent writing the m-stack patch and the libdfu support in fwupd it only took a couple of hours to build and test. Thanks to Christoph Brill, I’ll soon be getting some more hardware (a Neo FreeRunner) to verify this new firmware update mechanism on a real device with multiple implemented DFU interfaces. If anyone else has any DFU-capable hardware (especially Arduino-style devices) I’d be glad of any donations.

Once all this new code has settled down I’m going to be re-emailing a lot of the vendors who were unwilling to write vendor-specific code in fwupd. I’m trying to make the barrier to automatic updates on Linux as low as possible.

Comments welcome.

November 11, 2015

evolution of seccomp

I’m excited to see other people thinking about userspace-to-kernel attack surface reduction ideas. Theo de Raadt recently published slides describing Pledge. This uses the same ideas that seccomp implements, but with less granularity. seccomp works at the individual syscall level and, in addition to killing processes, allows for signaling, tracing, and errno spoofing. As de Raadt mentions, Pledge could be implemented with seccomp very easily: libseccomp would just categorize syscalls.
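As a rough sketch of that idea, a pledge("stdio")-style category could be built from individual syscall rules with libseccomp; note that the tiny syscall set below is my own illustration, not OpenBSD’s actual category:

/* Sketch: a pledge("stdio")-like policy via libseccomp (link with -lseccomp).
 * The allowed syscall list is illustrative only. */
#include <seccomp.h>
#include <stdlib.h>

static void
pledge_stdio_sketch (void)
{
  /* Any syscall not explicitly allowed below kills the process. */
  scmp_filter_ctx ctx = seccomp_init (SCMP_ACT_KILL);
  if (ctx == NULL)
    abort ();

  const int allowed[] = { SCMP_SYS (read), SCMP_SYS (write),
                          SCMP_SYS (brk),  SCMP_SYS (exit_group) };
  for (unsigned int i = 0; i < sizeof allowed / sizeof allowed[0]; i++)
    if (seccomp_rule_add (ctx, SCMP_ACT_ALLOW, allowed[i], 0) < 0)
      abort ();

  if (seccomp_load (ctx) < 0) /* compile and install the BPF filter */
    abort ();
  seccomp_release (ctx);      /* the loaded filter stays active */
}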

I don’t really understand the presentation’s mention of “Optional Security”, though. Pledge, like seccomp, is an opt-in feature. Nothing in the kernel refuses to run “unpledged” programs. I assume his point was that when it gets ubiquitously built into programs (like stack protector), it’s effectively not optional (which is alluded to later as “comprehensive applicability ~= mandatory mitigation”). Regardless, this sensible (though optional) design gets me back to his slide on seccomp, which seems to have a number of misunderstandings:

  • “A Turing complete eBPF program watches your program” – Strictly speaking, seccomp is implemented using a subset of BPF, not eBPF. And since BPF (and eBPF) programs are guaranteed to halt, it makes seccomp filters not Turing complete.
  • “Who watches the watcher?” – I don’t even understand this. It’s in the kernel. The kernel watches your program. Just like always. If this is a question of BPF program verification, there is literally a program verifier that checks various properties of the BPF program.
  • “seccomp program is stored elsewhere” – This, with the next statement, is just totally misunderstood. Programs using seccomp define their program in their own code. It’s used the same way as the Pledge examples are shown doing.
  • “Easy to get desyncronized either program is updated” – As above, this just isn’t the case. The only place where this might be true is when using seccomp on programs that were not written natively with seccomp. In that case, yes, desync is possible. But that’s one of the advantages of seccomp’s design: a program launcher (like minijail or systemd) can declare a seccomp filter for a program that hasn’t yet been ported to use one natively.
  • “eBPF watcher has no real idea what the program under observation is doing…” – I don’t understand this statement. I don’t see how Pledge would “have a real idea” either: they’re both doing filtering. If we get AI out of our syscall filters, we’re in serious trouble. :)

OpenBSD has some interesting advantages in the syscall filtering department, especially around sockets. Right now, it’s hard for Linux syscall filtering to understand why a given socket is being used. Something like SOCK_DNS seems like it could be quite handy.

Another nice feature of Pledge is the path whitelist feature. As it’s still under development, I hope they expand this to include more things than just paths. Argument inspection is a weak point for seccomp, but under Linux, most of the arguments are ultimately exposed to the LSM layer. Last year I experimented with creating a “seccomp LSM” for path matching where programs could declare whitelists, similar to standard LSMs.

So, yes, Linux “could match this API on seccomp”. It’d just take some extensions to libseccomp to implement pledge(), as I described at the top. With OpenBSD doing a bunch of analysis work on common programs, it’d be excellent to see this usable on Linux too. So far on Linux, only a few programs (e.g. Chrome, vsftpd) have bothered to do this using seccomp, and it could be argued that this is ultimately due to how fine grained it is.

© 2015, Kees Cook. This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 License.

Remembrance Day 2015

I’m definitely emotionally vulnerable due to newborn-induced sleep deprivation, but this drawing that my five-year-old daughter brought home from school today actually made me cry:

Remembrance Day 2015 - Je me souviens, by Olivia, age 5

November 09, 2015

Interview with Bruno Fernandez

Could you tell us something about yourself?

My name is Bruno Fernandez. I’m 41 years old and I live in Buenos Aires, Argentina. I work as a sysadmin in a financial company in Argentina. Besides that, I’m an illustrator who works in different graphic media in my country and abroad.

I have a beautiful family and two children; they are called Agustina and Dante.

Do you paint professionally, as a hobby artist, or both?

I have been working professionally for ten years, but I have always worked on bringing a professional vision and genuineness to every line, every colour. The word ‘hobby’ sometimes removes the sense of sincerity from what I consider a passion.

What genre(s) do you work in?

One of the most recognizable genres I focus on is editorial illustration close to surrealism. However, as for other aspects of my work, I could also mention children’s illustration.

Whose work inspires you most — who are your role models as an artist?

Those are difficult questions because what inspires me does not always guarantee similar results. I usually observe and this observation generates the necessity of looking for lines, style, colour, emphasis in texture or feeling transmutation.

However, I like many artists like Carlos Alonso, Edvard Munch, Klimt, Egon Schiele, Viviana Bilotti, Poly Bernatene, Enrique Breccia, Quique Alcatena, Frank Frazetta, Joaquin Sorolla, Maria Wernicke, and so on. Some of them are not famous but I am able to find enriching details that help me to stay on the course I want to move.

All in all, I cannot forget my children: their freshness and flow without conditioning. They always remind me of the child I used to be and the future I imagined.

Bruno Fernandez 1

How and when did you get to try digital painting for the first time?

My first time was frustrating. I remember trying to do something with Gimp, one of the first versions available for Linux. It wasn’t worth the trouble, because I always felt more comfortable using traditional media: pencil, acrylics and a piece of paper.

What makes you choose digital over traditional painting?

I wanted to find applications that fulfilled the same expectations I had with physical tools. Thus, Mypaint was the first application I tried.

Bruno Fernandez 2

How did you find out about Krita? What do you love about  Krita?

I have been working in system administration on Linux systems for ten years, and I have always given each available application a chance, considering not only my sysadmin job but also my creative side. So, after using Mypaint, I found that Krita provided a world full of possibilities. I also found artists like David Revoy who exemplified the professional possibilities of the application.

All in all, Krita is my favourite application because of its potential and resources. Krita covers all my expectations without needing proprietary applications like Adobe Photoshop or Adobe Illustrator.

If you had to pick one favourite of all your work done in Krita so far, what would it be, and why?

I am always excited when I finish a picture. In my latest work, I try to see my own transition and observe the aspects I have overcome. I think “Campo de rosas” represents the state of my art at the moment. As regards resources, it is important to mention paintbrushes by Pablo Cazorla and David Revoy.

Bruno Fernandez Campo de Rosas

Where can people see more of your work?

You can find out more about me on my website, where you can find links to all my social networks: Tumblr, DeviantArt and Facebook.

Anything else you’d like to share?

Special thanks to the Krita team for this magnificent tool and for the opportunity of showing my works.

November 06, 2015

Call For Content: Blenderart Magazine issue #48

We are ready to start gathering up tutorials, making-of articles and images for Issue # 48 of Blenderart Magazine.

The theme for this issue is “Time Flies: 10 years of Blenderart Magazine”

Blenderart Magazine is 10 years old and we are going to celebrate by asking for projects, images and animations that you have done in the last 10 years. The older the better. What is your oldest project? Are you brave enough to share it with us?

Looking for:

*Old projects

*Old work-arounds that have been rendered obsolete by improvements to Blender

*”Do You Remember?” Articles: memories about how it used to be, the problems you encountered and how you solved them

*warning: lack of submissions could result in an entire issue of strange sculpting experiments, half-completed models and a gallery filled with random bad sketches by yours truly…. :P …… goes off to start filling sketchbook with hundreds of stick figures, just in case. :P


Send in your articles to sandra
Subject: “Article submission Issue # 48 [your article name]”

Gallery Images

As usual you can also submit your best renders based on the theme of the issue. The theme of this issue is “Time Flies: 10 years of Blenderart Magazine”. Please note that if an entry does not match the theme it will not be published.

Send in your entries for gallery to gaurav
Subject: “Gallery submission Issue # 48”

Note: Images should be at most 1024px wide.

Last date for submissions: December 5, 2015.

Good luck!
Blenderart Team

Gadget reviews

Not that I'm really running after more gadgets, but sometimes, there is a need that could only be soothed through new hardware.

Bluetooth UE roll

Got this for my wife, to play music when staying out on the quays of the Rhône, in the kitchen (from a phone or computer), or when she’s at the photo lab.

It works well with iOS, MacOS X and Linux. It’s very easy to use: whether it’s paired or connected is completely obvious, and charging doesn’t need specific cables (USB!).

I'll need to borrow it to add battery reporting for those devices though. You can find a full review on Ars Technica.

Sugru (!)

Not a gadget per se, but I bought some, used it to fix up a bunch of cables, repair some knickknacks, and do some DIY. Highly recommended, especially given the current price of their starter packs.

15-pin to USB Joystick adapter

It's apparently from Ckeyin, but you'll find the exact same box from other vendors. Made my old Gravis joystick work, in the hope that I can make it work with DOSBox and my 20-year old copy of X-Wing vs. Tie Fighter.

Microsoft Surface ARC Mouse

That one was given to me, for testing, works well with Linux. Again, we'll need to do some work to report the battery. I only ever use it when travelling, as the batteries last for absolute ages.

Logitech K750 keyboard

Bought this nearly two years ago, and this is one of my best buys. My desk is close to a window, so it's wireless but I never need to change the batteries or think about charging it. GNOME also supports showing the battery status in the Power panel.

Logitech T650 touchpad

Got this one on sale (17€), to replace my Logitech trackball (one of its buttons broke...). It works great, and can even get you shell gestures when run in Wayland. I’m certainly happy to have one less cable running across my desk, and it reuses the same dongle as the keyboard above.

If you use more than one device, you might be interested in this bug to make it easier to support multiple Logitech "Unifying" devices.

ClicLite charger

Got this from a design shop in Berlin. It should probably have been cheaper than what I paid for it, but it's certainly pretty useful. Charges up my phone by about 20%, it's small, and charges up at the same time as my keyboard (above).

Dell S2340T

Bought about 2 years ago, to replace the monitor I had in an all-in-one (Lenovo all-in-ones, never buy that junk).

Nowadays, the resolution would probably be considered a bit on the low side, and the touchscreen mesh would show for hardcore photography work. It's good enough for videos though and the speaker reaches my sitting position.

It's only been possible to use the USB cable for graphics for a couple of months, and it's probably not what you want to lower CPU usage on your machine, but it works for Fedora with this RPM I made. Talk to me if you can help get it into RPMFusion.

Shame about the huge power brick, but a little bonus for the builtin Ethernet adapter.

Surface 3

This is probably the biggest ticket item. Again, I didn't pay full price for it, thanks to coupons, rewards, and all. The work to getting Linux and GNOME to play well with it is still ongoing, and rather slow.

I won't comment too much on Windows either, but rather on what it should be like once Linux runs on it.

I really enjoy the industrial design, maybe even the slanted edges, but one has to wonder why they made the USB power adapter not sit flush with the edge when plugged in.

I've used it a couple of times (under Windows, sigh) to read Pocket as I do on my iPad 1 (yes, the first one), or stream videos to the TV using Flash, without the tablet getting hot, or too slow either. I also like the fact that there's a real USB(-A) port that's separate from the charging port. The micro SD card port is nicely placed under the kickstand, hard enough to reach to avoid it escaping the tablet when lugged around.

The keyboard, given the thickness of it, and the constraints of using it as a cover, is good enough for light use, when travelling for example, and the layout isn't as awful as on, say, a Thinkpad Carbon X1 2nd generation. The touchpad is a bit on the small side though it would have been hard to make it any bigger given the cover's dimensions.

I would however recommend getting a Surface Pro if you want things to work right now (or at least soon). The one-before-last version, the Surface Pro 3, is probably a good target.

November 05, 2015

Krita 2.9.9 Released

The ninth semi-monthly bug fix release of Krita is out! Upgrade now to get the following fixes and features:


  • Show a message when trying to use the freehand brush tool on a vector layer
  • Add a ctrl-m shortcut for calling up the Color Curves filter dialog. Patch by Raghavendra Kamath. Thanks!
  • Improve performance by not updating the image when adding empty layers and masks.


  • Fix typing in the artistic text tool. A regression in 2.9.8 made it impossible to type letters that were also used as global shortcuts. This is now fixed.
  • Don’t crash when opening an ODG file created in Inkscape. The files are not displayed correctly, though, and we need to figure out what the issue is.
  • Fix the gaussian blur filter: another 2.9.8 regression where applying a gaussian blur filter would cause the right and bottom edge to become semi-transparent.
  • Fix calculating available memory on OSX. Thanks to René J.V. Bertin for the patch!
  • When duplicating layers, duplicate the channel flags so the new layers are alpha locked if the original layers were alpha locked.
  • Fix a number of hard to find crashes in the undo system and the compositions docker.
  • Another exiv2-related jpeg saving fix.
  • Add a new dark pass-through icon.

Go to the Download page to get the freshest Krita! (And don’t forget to check out Scott’s book, or Animtim’s latest training DVD either!)

Krita Next

The next version of Krita will be 3.0 and we’re definitely getting there! There is a lot of development going on fixing issues with shortcuts, issues with the opengl canvas, issues with icons… And making packages. Ubuntu Linux users can already use the Krita 3.0 Unstable packages in the Lime repository, and we’re working on Windows and OSX packages.

Here’s a demo by Wolthera:

Winners Selected from Giveaway

Written by Scott Petrovic

And the giveaway is over! I want to thank everyone for entering and showing your support for Krita. The amount of comments and love that is being shown for Krita is out of this world. With the 400+ entries, there were over 20,000 words that were written. The developers spend a lot of time helping people with issues related to Krita, graphics drivers, or tablets. It is refreshing to see that a lot of people are enjoying Krita the way it currently is.

Now for the winners…

  1. John Hattan
  2. AJ2600
  3. Waru
  4. Sam M.
  5. Otxoa

Congratulations! I have your email addresses and will be contacting you shortly. I ordered the copies last week but they haven’t arrived yet. I will sign and ship them off as soon as I can.

Any Other Way to Get Free Copies?

I lose at pretty much all giveaways that I enter like this. I also know that for some of you, a large reason you are using Krita is because it is free. This was your only shot. Paying for a book of any type is out of reach at the moment, no matter what the cost.

For those of you who really want the education and cannot afford the book, there might be another way to get it while supporting Krita. Did you know that many libraries will get you a book for free if you just ask them for it? I cannot speak for most countries, but I know this works in the USA. They don’t charge you for anything. I have done this recently with other books. Some library websites have a request form through which you can ask for books. If you fill that out, they usually respond and let you know when/if it comes in.

Show Your Support

It is exciting for us volunteers to see that Krita is making a difference in people’s lives. When people share their work in things like the monthly drawing challenge, it shows us that people are using and enjoying the software. If you have any skill sets that you would like to volunteer for, feel free to get in contact with us through the chatroom or forum.  Even beta testing helps new releases go smoother. There are plenty of ways to help Krita and keep it moving forward.

__attribute__((cleanup)), mixed declarations and code, and goto.

One of the cool features of recent GLib is g_autoptr() and g_autofree. It’s liberating to be able to write:

g_autofree char *filename = g_strdup_printf("%s/%d.txt", dir, count);

And be sure that will be freed no matter how your function returns. But as I started to use it, I realized that I wasn’t very sure about some details about the behavior, especially when combined with mixing declarations and code as allowed by C99.

Internally g_autofree uses __attribute__((cleanup)), which is supported by GCC and clang. The definition of g_autofree is basically:

static inline void
g_autoptr_cleanup_generic_gfree (void *p)
{
  void **pp = (void**)p;
  g_free (*pp);
}

#define g_autofree __attribute__((cleanup(g_autoptr_cleanup_generic_gfree)))

Look at the following examples:

int count1(int arg)
{
  g_autofree char *str;

  if (arg < 0)
    return -1;

  str = g_strdup_printf("%d", arg);

  return strlen(str);
}

int count2(int arg)
{
  if (arg < 0)
    return -1;

  g_autofree char *str = g_strdup_printf("%d", arg);

  return strlen(str);
}

int count3(int arg)
{
  if (arg < 0)
    goto out;

  g_autofree char *str = g_strdup_printf("%d", arg);

  return strlen(str);

out:
  return -1;
}

int count4(int arg)
{
  if (arg < 0)
    goto out;

  {
    g_autofree char *str = g_strdup_printf("%d", arg);

    return strlen(str);
  }

out:
  return 0;
}

Which of these do you think work as intended, and which ones are buggy? (I’m not recommending this as a way of counting the digits in a number – the example is artificial.)

count1() is pretty clearly buggy – the cleanup function will run in the error path and try to free an uninitialized string. Slightly more subtly, count3() is also buggy – because the goto jumps over the initialization. But count2() and count4() work as intended.

To understand why this is the case, it’s worth looking at how __attribute__((cleanup)) is described in the GCC manual – all it says is “the ‘cleanup’ attribute runs a function when the variable goes out of scope.” I first thought that this was a completely insufficient definition – not complete enough to allow figuring out what was supposed to happen in the above cases, but thinking about it a bit, it’s actually a precise definition.

To recall, the scope of a variable in C is from the point of the declaration of the variable to the end of the enclosing block. What the definition is saying is that any time a variable is in scope, and then goes out of scope, there is an implicit call to the cleanup function.

In the early return in count1() and at the return that is jumped to in count3(), the variable ‘str’ is in scope, so the cleanup function will be called, even though the variable is not initialized in either case. In the corresponding places in count2() and count4() the variable ‘str’ is not in scope, so the cleanup function will not be called.

The coding style takeaways from this are 1) don’t use the g_auto* attributes on a variable that is not initialized at the time of definition, and 2) be very careful if combining goto with g_auto*.
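As a minimal sketch of takeaway 1 (count_safe is a hypothetical name, not one of the examples above): initializing the variable at the point of declaration, even just to NULL, makes the implicit cleanup call safe on every path, since g_free(NULL) is a no-op.

#include <glib.h>
#include <string.h>

int count_safe(int arg)
{
  g_autofree char *str = NULL; /* cleanup can now run on any path: g_free(NULL) is a no-op */

  if (arg < 0)
    return -1;

  str = g_strdup_printf("%d", arg);

  return strlen(str);
}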

It should be noted that GCC is quite good at warning about it if you get it wrong, but it’s still better to understand the rules and get it right from the start.

November 04, 2015

first release candidate for darktable 2.0

We're proud to announce the first release candidate in the new feature release of darktable, 2.0~rc1.

The release notes and relevant downloads can be found attached to this git tag:
Please only use our provided packages ("darktable-2.0.rc1.*" tar.xz and dmg) not the auto-created tarballs from GitHub ("Source code", zip and tar.gz). The latter are just git snapshots and will not work! Here are the direct links to tar.xz and dmg:

$ sha256sum darktable-2.0.rc1.tar.xz
$ sha256sum darktable-2.0.rc1.dmg 

Packages for individual platforms and distros will follow shortly.

For your convenience, these are the ubuntu/debian packages required to build the source:

$ sudo apt-get build-dep darktable && sudo apt-get install libgtk-3-dev libpugixml-dev libcolord-gtk-dev libosmgpsmap-1.0-0-dev libcups2-dev

And the preliminary changelog can be found below.

When updating from the currently stable 1.6.x series, please bear in mind that your edits will be preserved during this process, but it will not be possible to downgrade from 2.0 to 1.6.x any more. Be careful if you need darktable for production work!

Happy 2.0~rc1 everyone :)

  • darktable has been ported to gtk-3.0
  • new thumbnail cache replaces mipmap cache (much improved speed, less crashiness)
  • added print mode
  • reworked screen color management (softproof, gamut check etc.)
  • text watermarks
  • color reconstruction module
  • raw black/white point module
  • delete/trash feature
  • addition to shadows&highlights
  • more proper Kelvin temperature, fine-tuning preset interpolation in the white balance module
  • noiseprofiles are in external JSON file now
  • monochrome raw demosaicing (not sure whether it will stay for release, like Deflicker, but hopefully it will stay)
  • aspect ratios for crop&rotate can be added to conf (ae36f03)
  • navigating lighttable with arrow keys and space/enter
  • pdf export – some changes might happen there still
  • brush size/hardness/opacity have key accels
  • the facebook login procedure is a little different now
  • export can upscale
  • we no longer drop history entries above the selected one when leaving darkroom or switching images
  • text/font/color in watermarks
  • image information now supports GPS altitude
  • allow adding tone- and basecurve nodes with ctrl-click
  • we renamed mipmaps to thumbnails in the preferences
  • new “mode” parameter in the export panel
  • high quality export now downsamples before watermark and frame to guarantee consistent results
  • Lua scripts can now add UI elements to the lighttable view (buttons, sliders etc.)
  • a new repository for external Lua scripts was started.

November 03, 2015

Mathilde Ampe – Automotive design with Blender


As an automotive company, we used to do all the digital work in one piece of software, Alias, which is mostly dedicated to industrial design. However, we knew that using this software in the early stages of creation was very time-consuming. The questions raised were how to speed up the process and how to make our digital life easier; one answer was to use different software during those early stages.

That’s how we got into Blender. In a car, creating a digital model of a seat takes particularly long because of the soft materials used and the criteria involved. I was between interviews when I was asked to model, in Blender, a seat of my choice as a test. Based on pictures, I modelled a front seat in 4 hours, when my future manager Pierre-Paul Andriani expected me to release this data in a day or two. That was the first step of our journey into Blender.



Then, step by step or, should I say, proof of concept by proof of concept, we implemented Blender into the process. We took advantage of a new project to try new possibilities. We first rebuilt the exterior of the project based on a scan, then we gave this data to the engineers. In the first stages, engineers were used to receiving the scans, which are incredibly heavy. Getting some lighter data changed their life.

As I was the only one able to use Blender, I taught my team how to use it. We are now four modellers able to work in Blender. Despite the differences between surface modelling and polygonal modelling and their logic of construction, the guys learned very quickly how to use it.

Because our project needed us to create a range of cars, we used Blender to create it. Based on the rebuilt model and with two or three modellers, we created nine iterations of the base car in five days. This type of work would have taken us between ten and fifteen days with Alias. That was definitely the exercise which showed the design studio the huge advantage of using Blender in the early stages of a project. “Early form studies don’t need to be as precise as NURBS models. We use Blender as a tool for management to sign off on the design volumes. Once package decisions are made we can move on to the next steps in Alias,” said Pierre-Paul.

Since then we keep trying new tasks. We milled an exterior with Blender data, and we created a model from scratch based on a sketch and a defined wheelbase. The designer can see modifications in real time instead of having to wait a couple of hours. Between the first test and full implementation, only 4 months went by. We are also experimenting with rendering and animations, and with how to create a new process for these two tasks.

Mathilde Ampe

Digital Modeller – Design

Tata Motors European Technical Centre


SDN/NFV DevRoom at FOSDEM 2016

We are pleased to announce the Call for Participation in the FOSDEM 2016 Software Defined Networking and Network Functions Virtualization DevRoom!

Important dates:

  • Nov 18: Deadline for submissions
  • Dec 1: Speakers notified of acceptance
  • Dec 5: Schedule published

This year the DevRoom topics will cover two distinct fields:

  • Software Defined Networking (SDN), covering virtual switching, open source SDN controllers, virtual routing
  • Network Functions Virtualization (NFV), covering open source network functions, NFV management and orchestration tools, and topics related to the creation of an open source NFV platform

We are now inviting proposals for talks about Free/Libre/Open Source Software on the topics of SDN and NFV. This is an exciting and growing field, and FOSDEM gives an opportunity to reach a unique audience of very knowledgeable and highly technical free and open source software activists.

Topics accepted include, but are not limited to:


  • SDN controllers – OpenDaylight, OpenContrail, ONOS, Midonet, OVN, OpenStack Neutron, Calico, IOvisor, …
  • Dataplane processing: DPDK, OpenDataplane, netdev, netfilter, ClickRouter
  • Virtual switches: Open vSwitch, Snabb Switch, VDE, Lagopus
  • Open network protocols: OpenFlow, NETCONF, OpenLISP, eBPF, P4, Quagga


  • Management and Orchestration (MANO): Deployment and management of network functions, policy enforcement, virtual network functions definition – Cloudify, OpenMANO, Tacker, …
  • Open source network functions: Clearwater IMS, FreeSWITCH, OpenSIPS, …
  • NFV platform features: Service Function Chaining, fault management, dataplane acceleration, …

Talks should be aimed at a technical audience, but should not assume that attendees are already familiar with your project or how it solves a general problem. Talk proposals can be very specific solutions to a problem, or can be higher level project overviews for lesser known projects. Please include the following information when submitting a proposal:

  • Your name
  • The title of your talk (please be descriptive, as titles will be listed with around 250 others from other projects)
  • Short abstract of one or two paragraphs
  • Short bio (with photo)

The deadline for submissions is November 18th, 2015. FOSDEM will be held on the weekend of January 30th-31st 2016 and the SDN/NFV DevRoom will take place on Sunday, January 31st 2016. Please use the following website to submit your proposals: (you do not need to create a new Pentabarf account if you already have one from past years). You can also join the devroom’s mailing list, which is the official communication channel for the DevRoom: network-devroom at (subscription page:

The Networking DevRoom 2016 Organization Team

Master builds for OSX and Windows, an update

So I spent three full days trying to make working builds for OSX and Windows. Mostly OSX, with a side-dish of Windows. Here's a short update. I'm using this git repository as a build system. It's basically a set of cmake extern projects, one for each dependency. It's still a mess: there are definitions for dependencies we no longer need, like glew.

Both on Windows and on OSX, I set up a development tree with this repo, a build directory for the dependencies, an install directory, a download directory and a second build directory for doing Krita development.
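For reference, the tree looks something like this (a sketch; the directory names are my own convention, nothing mandates them, and the repository URL is the one linked above):

$ mkdir -p ~/dev/{deps-build,inst,downloads,krita-build}
$ cd ~/dev
$ git clone <kritadeposx repository url>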

I'm using Qt 5.6 alpha, by the way, compiled to exclude dbus and some other things.


On OSX, there were some weirdnesses: OpenColorIO seems hardcoded to want mypatch as a patch command, not just on Windows, but everywhere... That needs patching, of course, or symlinking patch to mypatch.
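If you'd rather symlink than patch, something like this should work (assuming /usr/local/bin is in the PATH the build uses):

$ sudo ln -s "$(command -v patch)" /usr/local/bin/mypatch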

Eigen3 doesn't want to build because it needs a dart file for some test setup which we don't want to build. Patch in the cmake project.

Qt's macdeployqt needs patching as well, the patch is in the cmake project. After building Qt with -rpath, it became necessary to manually set the rpath on desktop2json: as built by kcoreaddons, it won't run because it cannot find Qt.

Finally, I managed to build everything including Krita. In order to run Krita, it's necessary to use macdeployqt to deploy all plugins, libraries and frameworks to the app bundle, and then manually use install_name_tool to add @executable_path/../Frameworks to the rpaths of the executable.
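Roughly, the final deployment steps look like this (a sketch; the bundle name krita.app is my assumption, and paths will differ per setup):

$ macdeployqt krita.app
$ install_name_tool -add_rpath @executable_path/../Frameworks krita.app/Contents/MacOS/krita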

But... Somehow, macdeployqt refuses to deploy the QtNetwork framework out of all Qt frameworks it deploys to the bundle. No idea why, yet, I had to stop debugging that because it was bedtime... More next weekend, but it is progress.


On Windows, I use the same kritadeposx repo: the name is wrong. When everything works, I want to add the externals definition to krita's repo. In any case, with some coaxing, I got most things to build. Almost.

Qt was a bit of a problem: QtDeclarative just doesn't build with Visual Studio 2015. Not sure why, for now I didn't need that module.

Then it turned out that ki18n cannot find the gettext executable. I could bull past that by commenting out the line where it looks for it, but then the same thing happens when trying to configure Krita. Why this happens needs more investigation.

At that point the laptop overheated and shut down and I wasn't motivated to start it up again, so again more next weekend... With hopefully more progress.

November 02, 2015

News from the World of Tomorrow

News from the World of Tomorrow

And more awesome updates!

Some awesome updates from the community and activity over on the forums! People have been busy doing some really neat things (that really never fail to astound me). The level of expertise we have floating around on so many topics is quite inspiring.

darktable 2.0 Release Candidate

Towards a Better darktable!

A nice Halloween weekend gift for the F/OSS photo community from darktable: a first Release Candidate for a 2.0 release is now available!

Houz made the announcement on the forums this past weekend and includes some caveats. (Edits will be preserved when upgrading, but it won’t be possible to downgrade back to 1.6.x.)

Preliminary notes from houz (and Github):

  • darktable has been ported to gtk-3.0
  • new thumbnail cache replaces mipmap cache (much improved speed, less crashiness)
  • added print mode
  • reworked screen color management (softproof, gamut check etc.)
  • text watermarks
  • color reconstruction module
  • raw black/white point module
  • delete/trash feature
  • addition to shadows&highlights
  • more proper Kelvin temperature, fine-tuning preset interpolation in WB iop
  • noiseprofiles are in external JSON file now
  • monochrome raw demosaicing (not sure whether it will stay for release, like Deflicker, but hopefully it will stay)
  • aspect ratios for crop&rotate can be added to conf (ae36f03)
  • navigating lighttable with arrow keys and space/enter
  • pdf export — some changes might happen there still
  • brush size/hardness/opacity have key accels
  • the facebook login procedure is a little different now
  • export can upscale
  • we no longer drop history entries above the selected one when leaving dr or switching images
  • text/font/color in watermarks
  • image information now supports gps altitude
  • allow adding tone- and basecurve nodes with ctrl-click
  • we renamed mipmaps to thumbnails in the preferences
  • new “mode” parameter in the export panel
  • high quality export now downsamples before watermark and frame to guarantee consistent results
  • lua scripts can now add UI elements to the lighttable view (buttons, sliders etc…)
  • a new repository for external lua scripts was started.

G’MIC 1.6.7

Because apparently David Tschumperlé doesn’t sleep, a new release of G’MIC was recently announced as well! This release includes a really neat new patch-based texture resynthesizer that David has been playing with for a while now.

G'MIC syntexturize patch: re-synthesizing an input texture to an output of arbitrary size.

It will build an output texture of arbitrary size based on an input texture (and can result in some neat looking peppers apparently).

Speaking of G’MIC…

G’MIC for Adobe After Effects and Premiere Pro

Yes, I know it’s Adobe. Still, I can’t help but think that this might be an awesome way to introduce some people to the amazing work being done by so many F/OSS creators.

Tobias Fleischer announced in this post that he has managed to get G’MIC working with After Effects and Premiere Pro. Even some of the more intensive filters like skeleton and Rodilius appear to be working fine (if a bit sluggishly)!

Adobe After Effects G'MIC


PhotoFlow 0.2.3

You might remember PhotoFlow as the project that creator Andrea Ferrero used when writing his Blended Panorama Tutorial a few months ago. What you might not realize is that Andrea has also been working at a furious pace improving PhotoFlow (indeed, it feels like every few days he is announcing new improvements - almost as fast as G’MIC!).

Example of PhotoFlow perspective correction (original and corrected).

His latest release was announced a few days ago as 0.2.3. He’s incorporated some nice new improvements in this version:

  • the addition of the LMMSE demosaicing method, directly derived from the algorithm implemented in RawTherapee
  • an impulse noise (also known as salt & pepper) reduction tool, again derived from RawTherapee. It effectively reduces isolated bright and dark pixels.
  • a perspective correction tool, derived from darktable. It can simultaneously correct horizontal and vertical perspective as well as tilting, and works interactively.

Head on over to the PhotoFlow Blog to check things out!

LightZone 4.1.3 Released

We don’t hear as often from folks using LightZone, but that doesn’t mean they’re not working on things! In fact, Doug Pardee just stopped by the forums a while ago to announce a new release is available, 4.1.3. (Bonus fun - read that topic to see the Revised BSD License go flying right over my head!)

Head over to their announcement to see what they’re up to.

Rapid Photo Downloader

We also had the developer of Rapid Photo Downloader, Damon Lynch, stop by the forums to solicit feedback from users just the other day. A nice discussion ensued and is well worth reading (or even contributing to!).

Damon is working hard on the next release of RPD (apparently the biggest update since the project’s inception in 2007!), so go show some support and provide some feedback for him.

RawTherapee Forum

RawTherapee Logo

The RawTherapee team is testing out having a forum over here on discuss as well (we welcomed the G’MIC community a little while ago). This is currently an alternate forum for the project (which may become the official forum in the future). The category is quiet as we only just set it up, so drop by and say hello!

Speaking of RawTherapee…

Lede Image

I want to thank Morgan Hardwood ( for providing us a wonderful view of Röstånga, Sweden as a background image on the main page.

Röstånga by Morgan Hardwood

New Krita Book Release and Giveaway!


Krita contributor Scott Petrovic has released his new book Digital Painting with Krita 2.9. This is the first book on Krita in English! At over 230 pages long, the book is packed with useful information on how Krita works!

The book includes a lot of illustrations and examples to help explain concepts. The book is available in print or ebook format. Some of the profits from the book will go back to the Krita Foundation. This will help us continue to fix bugs and add even more features. And of course the awesome cover by Tyson Tan makes getting the book a snap decision!

Scott has been involved in the Krita community for a long time. He maintains this very website. Apart from that, and writing this book, he finds time to do development. Scott plays a role in the user interaction design with things like the new animation system and the layers docker coming in Krita 3.1. Read on for a short interview!

Check out the table of contents and the first two chapters.

You can get your copy from most retailers such as Amazon and Barnes & Noble. A full list of locations can be seen here. The Amazon ebook is DRM-Free.

If you buy through Amazon using these links, Amazon will send some money to the Krita Foundation:

Book Giveaway

Along with the book release, Scott is doing a book giveaway right now! 5 copies of the book will be autographed and given away through a lottery system. The book giveaway is running from now until November 4, 2015 at the end of the day.

You can head over to his site and enter to win your copy.

About Scott Petrovic

An author interview this time!

How did you get into the Krita community? You’re doing a lot of things, from website work to development, but what’s your favourite?

In 2013, I was spending time assisting with the open source application Blender. Blender was my first experience contributing to an open source project and the Linux world. I was a bit nervous with how everything worked. They use things that were foreign to me, like IRC and mailing lists, for most of their communication.

It was great to work with the Blender developers. Many thanks go to Brecht Van Lommel and Campbell Barton for their guidance. The Blender developers had a good step-by-step guide on how to build for Windows, so it was an easy transition for me without having to install Linux. That experience made a big impact on how I view the open source community in general. I did a blog post about my experience outlining some of the things I learned.

While working with Blender, I stumbled upon Krita. I was reading an article on the pre-production of one of their short movies and Krita was mentioned. Krita sounded  interesting and fun so I started checking it out. The more I used the program, the more I fell in love with it. Having a bit of open source experience under my belt, I began to look for an opportunity to help.

It was less than a week later when Boudewijn Rempt put out a “call for help” for the website redesign. Because Krita is mostly run by a small group of volunteers, I felt I had a large amount of control and direction with the site. That freedom and small-scale workflow made me feel like I could make a big difference.

How long did you take to write the book? Can you give us some highlights encountered when writing? What was the hardest bit?

I started writing the book near the end of 2014, so it would be about a year. I had no idea what I was really getting into when I started writing. I don’t really know other authors or hang out in writing groups. I just felt that Krita is a great program, and it needed to have a stronger voice. The community and developers that are behind Krita are great all-around. They have an amazing drive, attitude, and professionalism that I want to surround myself with every day.

With all the writing I did, I thought the most exciting part was getting feedback from people like Wolthera, David Revoy, and my editor Karen. It was really the first time anyone else looked at my writing. When you start to write large bodies of text like a book, you start to lose the ability to judge your writing ability. It all looks like a bunch of notes that you write down from research and talking with people. The feedback I received really put the writing in perspective and allowed my creative juices to start flowing again.

The hardest part of writing a non-fiction book for me was clarity. I really struggled at times knowing if something was too technical, or too simple, for the reader. This book is designed for artists, so I spent a lot of time trying to explain things and give examples to make things easier. Many of the illustrations and examples in the book are not heavily rendered, but have only the information needed to help you understand the concept. Teaching core concepts was always the focus.

The other difficult part was keeping up with changes. Krita releases a new version almost every month with fixes and new features. This made it difficult for the writing to stay up-to-date. When I was almost done, we decided to update all the icons in the application. I had a lot of images in my book that showed icons, so you can imagine the rework that was needed there. I ended up modifying my writing process to get the book as current as possible.

What’s your favorite Krita feature?

I would say the popup palette is my favorite feature. When I am creating artwork, I like to hide the entire user interface. This involves using Canvas Only Mode (Tab key shortcut). I guess I could argue that Canvas Only mode is my favorite feature as well. I don’t like distractions when I am being creative. I think panels, menus, and dockers all put a mental strain on you. You aren’t aware of it, but your brain manually filters out that information when it is on the screen. The popup palette allows me to keep working in Canvas Only mode. I can change brushes, colors, and tags without having to see all the menus again.

Who’s your favorite Krita artist?

I would probably say a tie between David Revoy and Tyson Tan. They both do great work in Krita, give education on their processes, and help give feedback during development. What makes them so talented is that they have strong technical and artistic abilities. When I was younger I thought great artwork was about technical ability alone. How pretty and realistic can you make that girl, or how dramatic can you make your action scene. While those skills are important, I believe artists should strive to tell a story and make a deeper connection with people. David and Tyson have helped me improve my own art with their educational material. They have made a connection with me that goes beyond just looking at their art.

October 31, 2015

Stellarium 0.13.3 has been released!

The procedure entry point "_Z17qt_message_output9QtMsgTypeRK18QMessageLogContextRK7QString" could not be found in the DLL "Qt5Core.dll".
Is Windows XP no longer supported with 13.3?

October 30, 2015

HDMI presentation setup on Linux, Part II: problems and tips

In Part I of HDMI Presentation Setup on Linux, I covered the basics of getting video and audio working over HDMI. Now I want to cover some finer-grained details: some problems I had, and ways to make it easier to enable HDMI when you need it.

Testing follies, delays, and screen blinking/flashing woes

While I was initially trying to get this working, I was using my own short sound clip (one of the alerts I use for IRC) and it wasn't working. Then I tried the test I showed in part I, $ aplay -D plughw:0,3 /usr/share/sounds/alsa/Front_Center.wav and that worked fine. Tried my sound clip again -- nothing. I noticed that my clip was mono and 8-bit while the ALSA sample was stereo and 16-bit, and I wasted a lot of time in web searches on why HDMI would play one and not the other.

Eventually I figured out that the reason my short clip wasn't playing was that there's a delay when switching on HDMI sound, and the first second or two of any audio may be skipped. I found lots of complaints about people missing the first few seconds of sound over HDMI, so this problem is quite common, and I haven't found a solution.

So if you're giving a talk where you need to play short clips -- for instance, a talk on bird calls -- be aware of this. I'm probably going to make a clip of a few seconds of silence, so I can play silence before every short clip to make sure I'm fully switched over to HDMI before the clip starts: aplay -D plughw:0,3 silence.wav osprey.wav
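If you don't have a silence clip handy, sox can generate one (assuming sox is installed; any audio editor can do the same):

$ sox -n -r 48000 -c 2 silence.wav trim 0.0 2.0

That gives two seconds of stereo 48 kHz silence, matching the rate and channel count used in the .asoundrc below.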

Another problem, probably related, when first starting an audio file: the screen blinks briefly off then on again, then blinks again a little while after the clip ends. ("Flicker" turns out to be a better term to use when web searching, though I just see a single blink, not continued flickering.) It's possible this is something about my home TV, and I will have to try it with another monitor somewhere to see if it's universal. It sounds like kernel bug 51421: Enabling HDMI sound makes HDMI video flicker, but that bug was marked resolved in 2012 and I'm seeing this in 2015 on Debian Jessie.

Making HDMI the sound default

What a pain, to have to remember to add -D plughw:0,3 every time you play a sound. And what do you do for other programs that don't have that argument?

Fortunately, you can make HDMI your default sound output. Create a file in your home directory called .asoundrc with this in it (you may be able to edit this down -- I didn't try) and then all audio will go to HDMI:

pcm.dmixer {
  type dmix
  ipc_key 1024
  ipc_key_add_uid false
  ipc_perm 0660
  slave {
    pcm "hw:0,3"
    rate 48000
    channels 2
    period_time 0
    period_size 1024
    buffer_time 0
    buffer_size 4096
  }
}
pcm.!default {
  type plug
  slave.pcm "dmixer"
}
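With that file in place, audio should go to HDMI without any -D flag; a quick check, using the same ALSA sample as before:

$ aplay /usr/share/sounds/alsa/Front_Center.wav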

Great! But what about after you disconnect? Audio will still be going to HDMI ... in other words, nowhere. So rename that file:

$ mv .asoundrc asoundrc-hdmi
Then when you connect to HDMI, you can copy it back:
$ cp asoundrc-hdmi .asoundrc 

What a pain, you say again! This should happen automatically!

That's possible, but tricky: you have to set up udev rules and scripts. See this Arch Linux discussion on HDMI audio output switching automatically for the gory details. I haven't bothered, since this is something I'll do only rarely, when I want to give one of those multimedia presentations I sometimes contemplate but never actually give. So for me, it's not worth fighting with udev when, by the time I actually need HDMI audio, the udev syntax probably will have changed again.

Aliases to make switching easy

But when I finally do break down and design a multimedia presentation, I'm not going to be wanting to do all this fiddling in the presentation room right before the talk. I want to set up aliases to make it easy.

There are two things that need to be done in that case: make HDMI output the default, and make sure it's unmuted.

Muting can be done automatically with amixer. First run amixer with no arguments to find out the channel name (it gives a lot of output, but look through the "Simple mixer control" lines, or speed that up with amixer | grep control).

Once you know the channel name (IEC958 on my laptop), you can run: amixer sset IEC958 unmute. The rest of the alias is just shell hackery to create a file called .asoundrc with the right stuff in it, and to save the old .asoundrc before overwriting it. My alias in .zshrc is set up so that I can say hdmisound on or hdmisound off (with no arguments, it assumes on), and it looks like this:

# Send all audio output to HDMI.
# Usage: hdmisound [on|off], default is on.
hdmisound() {
    if [[ $1 == 'off' ]]; then
        # Going back to normal: stash the HDMI config and mute the channel.
        if [[ -f ~/.asoundrc ]]; then
            mv ~/.asoundrc ~/.asoundrc.hdmi
        fi
        amixer sset IEC958 mute
    else
        # Going to HDMI: save any existing config, write the HDMI one, unmute.
        if [[ -f ~/.asoundrc ]]; then
            mv ~/.asoundrc ~/.asoundrc.nohdmi
        fi
        cat >> ~/.asoundrc <<EOF
pcm.dmixer {
  type dmix
  ipc_key 1024
  ipc_key_add_uid false
  ipc_perm 0660
  slave {
    pcm "hw:0,3"
    rate 48000
    channels 2
    period_time 0
    period_size 1024
    buffer_time 0
    buffer_size 4096
  }
}
pcm.!default {
  type plug
  slave.pcm "dmixer"
}
EOF
        amixer sset IEC958 unmute
    fi
}

Of course, I could put all that .asoundrc content into a file and just copy/rename it each time. But then I have another file I need to make sure is in place on every laptop; I decided I'd rather make the alias self-contained in my .zshrc.

C.H.I.P. flashing on Fedora

You might have heard of the C.H.I.P., the $9 computer. After contributing to their Kickstarter, and with no intent of hacking on more kernel code than absolutely necessary, I requested the "final" devices, when chumps like me can read loads of docs and get accessories for it easily.

Turns out that our old friend the Realtek 8723BS chip is the Wi-Fi/Bluetooth chip in the nano computer. NextThingCo got in touch and sent me a couple of early devices (as they did to the "Kernel hacker" backers), their plan being to upstream all the drivers and downstream hacks into the upstream kernel.

Before being able to hack on the kernel driver though, we'll need to get some software on it, and find a way to access it. The docs website has instructions on how to flash the device using Ubuntu, but we don't use that here.

You'll need a C.H.I.P., a jumper cable, and the USB cable you usually use for charging your phone/tablet/e-book reader.

First, let's install a few necessary packages:

dnf install -y sunxi-tools uboot-tools python3-pyserial moserial

You might need other things, like git and gcc, but I kind of expect you to already have that installed if you're software hacking. You will probably also need to get sunxi-tools from Koji to get a new enough version that will support the C.H.I.P.

Get your jumper cable out, and make the connection as per the NextThingCo docs. I've copied the photo from the docs to keep this guide stand-alone.

Let's install the tools, modified to work with Fedora's newer, more upstream version of sunxi-tools.

$ git clone
$ cd CHIP-tools
$ make
$ sudo ./ -d

If you've followed the instructions, you haven't plugged in the USB cable yet. Plug in the USB cable now, to the micro USB power supply on one end, and to your computer on the other.

You should see the little "OK" after the "waiting for fel" message:

== upload the SPL to SRAM and execute it ==
waiting for fel........OK

At this point, you can unplug the jumper cable, something not mentioned in the original docs. If you don't do that, when the device reboots, it will reboot in flashing mode again, and we obviously don't want that.

At this point, you'll just need to wait a while. It will verify the installation when done, and turn off the device. Unplug, replug, and launch moserial as root. You should be able to access the C.H.I.P. through /dev/ttyACM0 with a baudrate of 115200. The root password is "chip".
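If you prefer a plain terminal client over moserial, screen works too (assuming it's installed):

$ sudo screen /dev/ttyACM0 115200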

Obligatory screenshot of our new computer:

Next step, testing out our cleaned up Realtek driver, Fedora on the C.H.I.P., and plenty more.

October 27, 2015

HDMI presentation setup on Linux, video and audio: Part I

For about a decade now I've been happily connecting to projectors to give talks. xrandr --output VGA1 --mode 1024x768 switches on the laptop's external VGA port, and xrandr --auto turns it off again after I'm disconnected. No fuss.

But increasingly, local venues are eschewing video projectors and instead using big-screen TVs, some of which offer only an HDMI port, no VGA. I thought I'd better figure out how to present a talk over HDMI now, so I'll be ready when I need to know.

Fortunately, my newest laptop does have an HDMI port. But in case it ever goes on the fritz and I have to use an older laptop, I discovered you can buy VGA to HDMI adaptors rather cheaply (about $10) on ebay. I bought one of those, tested it on my TV at home and it at least worked there. Be careful when shopping: you want to make sure you're getting something that takes VGA in and outputs HDMI, rather than the reverse. Ebay descriptions aren't always 100% clear on that, but if you check the gender of the connector in the photo and make sure it's right to plug into the socket on your laptop, you should be all right.

Once you're plugged in (whether via an adaptor, or native HDMI built into your laptop), connecting is easy, just like connecting with VGA:

xrandr --output HDMI1 --mode 1024x768

Of course, you can modify the resolution as you see fit. I plan to continue to design my presentations for a 1024x768 resolution for the foreseeable future. Since my laptop screen is 1366 pixels wide, I can use the remaining 342-pixel-wide swath for my speaker notes and leave them invisible to the audience.

But for GIMP presentations, I'll probably want to use the full width of my laptop screen. --mode 1366x768 didn't work -- that resolution wasn't available -- but running xrandr with no arguments got me a list of available resolutions, which included 1360x768. That worked fine and is what I'll use for GIMP talks and other live demos where I want more screen space.
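To recap the xrandr incantations (the output name HDMI1 comes from xrandr's own listing and may differ, e.g. HDMI-1 on some drivers, so check first):

$ xrandr                                    # no arguments: lists outputs and available modes
$ xrandr --output HDMI1 --mode 1360x768     # full-width mode for live demos
$ xrandr --auto                             # revert to the laptop screen after unplugging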

Sound over HDMI

My Toastmasters club had a tech session where a few of us tried out the new monitor in our meeting room to make sure we could use it. One person was playing a video with sound. I've never used sound in a talk, but I've always wanted to find an excuse to try it. Alas, it didn't "just work" -- xrandr's video settings have nothing to do with ALSA's audio settings. So I had to wait until I got home so I could do web searches and piece together the answer.

First, run aplay -l , which should show something like this:

$ aplay -l
**** List of PLAYBACK Hardware Devices ****
card 0: Intel [HDA Intel], device 0: STAC92xx Analog [STAC92xx Analog]
  Subdevices: 0/1
  Subdevice #0: subdevice #0
card 0: Intel [HDA Intel], device 3: HDMI 0 [HDMI 0]
  Subdevices: 1/1
  Subdevice #0: subdevice #0

Find the device number for the HDMI device: in this case, it's 3 (which seems to be common on Intel chipsets).

Now you can run a test:

$ aplay -D plughw:0,3 /usr/share/sounds/alsa/Front_Center.wav
Playing WAVE '/usr/share/sounds/alsa/Front_Center.wav' : Signed 16 bit Little Endian, Rate 48000 Hz, Mono
If you don't hear anything, don't worry: the HDMI channel is probably muted if you've never used it before. Run either alsamixer or alsamixergui.

[alsamixergui with HDMI muted] [alsamixer] Now find the channel representing your HDMI connection. (HDMI must be plugged in for this to work.) In alsamixer, it's called S/PDIF; in alsamixergui, it's called IEC958. If you look up either of those terms, Wikipedia will tell you that S/PDIF is the Sony/Philips Digital Interconnect Format, a data protocol and a set of physical specifications. Those physical specifications appear to have nothing to do with video, and use connectors that are nothing like HDMI. So it doesn't make much sense. Just remember that if you see IEC958 or S/PDIF in ALSA, that's probably your HDMI channel.

In the alsamixergui screenshot, IEC958 is muted: you can tell because the little speaker icon at the top of the column is bright white. If it were unmuted, the speaker icon would be grey like most of the others. Yes, this seems backward. It's Linux audio: get used to obscure user interfaces.

In the alsamixer screenshot, the mutes are at the bottom of each column, and MM indicates a channel is muted (like the Beep channel in the screenshot). S/PDIF is not muted here, though it appears to be at zero volume. (The 00 doesn't tell you it's at zero volume; 00 means it's not muted. What did I say about Linux audio?) ALSA apparently doesn't let you adjust the volume of HDMI output: presumably they expect that your HDMI monitor will have its own volume control. If your S/PDIF is muted, you can use your right-arrow key to arrow over to the S/PDIF channel, then type m to toggle muting. You can exit alsamixer with Ctrl-C (Q and Ctrl-Q don't work).

Now try that aplay -D command again and see if it works. With any luck, it will (loudly).

A couple of other tests you might want to try:
$ speaker-test -t sine -f 440 -c 2 -s 1 -D hw:0,3
plays a sine wave, and

$ speaker-test -c 2 -r 48000 -D hw:0,3
runs a general speaker test sequence.

In Part II of Linux HDMI Presentations, I'll cover some problems I had, and how to write an alias to make it easy to turn HDMI audio on and off.

October 26, 2015

Interview with Laura

Monkey Girl v2(resized)

Could you tell us something about yourself?

My name is Laura, and I currently live in Calgary, Alberta. Aside from 2D art, I model/sculpt with Blender, Maya, and ZBrush. I enjoy running and board sports, and I love science and cats!

Do you paint professionally, as a hobby artist, or both?

For the moment, I only paint/illustrate as a hobby or for free, but in the future I hope to make my hobby my profession once I gain more experience and skill. I hope to apply my 2D/3D skills to the entertainment and medical industry.

What genre(s) do you work in?

My favorite genre would be cartoon art. Childhood cartoons, such as Scooby-Doo, Tom and Jerry, and The Looney Tunes, as well as manga/anime have had a big influence on my art. I tend to like cartoon art along with comedy, adventure, fantasy, and sci-fi genres, very much like Disney’s and Pixar’s films. I sometimes also enjoy painting more realistic or semi-realistic portraits.

Whose work inspires you most — who are your role models as an artist?

By far, works from the people at Disney, Pixar, Dreamworks and the like have been my inspiration. In terms of specific people, this is a bit tough. Here are a few I can think of at the moment: I really admire Wenqing Yan (Yuumei) on DeviantArt for the powerful messages in her art; David Revoy’s works are absolutely gorgeous as well (I actually recently discovered his comic “Pepper & Carrot” from Krita interviews!); and Kurt Papstein is also a big inspiration – his creature/alien sculpts are out of this world!

How and when did you get to try digital painting for the first time?

I got Photoshop Elements when I was maybe 8 years old or so, and started painting almost exclusively digitally from that point on. I switched from PS Elements to Gimp after some time, and then just about 2-3 years ago I found Krita!

What makes you choose digital over traditional painting?

I don’t have much experience in traditional painting, so I’m going to talk exclusively about digital painting. Nonetheless, if there was a style I could instantly master, it would be realism in traditional painting. When I was a kid, I came across PS Elements and things just progressed from there, so I never felt like trying my hand at traditional painting. What I like about digital painting is that I can fix mistakes very easily without risking ruining my work: just Ctrl-Z, or erase or paint over part of the work that needs fixing, or use the transform and assistant tools. In my case, I have found in digital painting the freedom to explore possibilities and to let the work evolve into something I didn’t plan or expect to make. I love the fact that I’m able to experiment with so many different tools, brushes, and special effects. Another thing I really love about digital painting is the abundance of tutorials and communities available for learning new techniques, getting advice, and getting your work easily critiqued by someone. (And on the plus side you can make friends.)

How did you find out about Krita?

I was using Gimp on Linux and looking for a better painting alternative and I came across Krita.

What was your first impression?

I first noticed the UI. I really loved how simple it is, which makes it easy to work with. The second thing that really struck me was the brushes.

What do you love about Krita?

Intuitive UI and wonderful brushes. I love the level of customization that Krita’s UI has; I’m still learning new ways I can customize the UI to suit my work flow. Krita’s brushes are my favorite part. They feel so natural. There’s also a huge variety of brushes and textures available to create all kinds of effects. I also really love the shortcuts (I’m a big fan of shortcuts, makes for quick work flow) – the ‘m’ key to quickly mirror, the ‘e’ key to quickly erase, the ‘/’ to switch between two presets (which I often do), and the transform tools are really wicked (especially the perspective tool). And of course, Krita is free! With plenty of improved features with every new update!

What do you think needs improvement in Krita? Is there anything that really annoys you?

I haven’t had the chance to use Krita enough yet to really figure out its problems… A lot of the problems I used to have, like lagging brushes, random crashes, extremely slow start-up, and other problems of the like have been solved (a big thank you for that!).

Some of the things bothering me are speed and big files. Krita tends to get a little slow when I’m working with many layers or using large brushes and textures. Saving big files is a bit scary because the program almost crashes (‘stops responding’ as Windows likes to put it, and unfortunately I have to use Windows because of Maya and ZBrush). One other thing that I’m not particularly fond of is the warp tool. I actually really love this tool, very handy. I often need to use it to adjust proportions and fix little mistakes, but I find it lacking in ability compared to PS. The deformation (warping) isn’t as ‘smooth’ as I’ve seen PS’s warping tool work.

That’s about all I can think of right now. As I learn more about the tools and options available with Krita, I’ll be sure to give my opinion in the Krita forums! :)

What sets Krita apart from the other tools that you use?

One word: free. It’s free software (and I couldn’t be more grateful for that), but it doesn’t lack in quality at all. The support for Krita and the effort the developers put into continuously improving this software are wonderful; you don’t see this level of support and dedication with a lot of corporate software.

If you had to pick one favourite of all your work done in Krita so far, what would it be, and why?

I tend to pick my latest works as my favorites, since I feel that each new piece is an improvement from older ones. I think Spirit of Nature is probably my favorite. I’ve spent the most time on this piece and attempted to produce something of higher quality. I learned a lot of new things while painting this piece – new painting techniques and how to use different brushes, including custom made ones. It goes without saying there’s a lot to improve on, but being my biggest work I’d like to think of it as a new milestone to beat with my next piece of work.
Spirit of Nature(resized)

What techniques and brushes did you use in it?

I almost exclusively used one of David Revoy’s brush kits. I had some fun using various brushes. For example, I used the wet and bristle brushes for skin, clothing and the eagle, the rake brush for the hair, and textured and non-textured smudge brushes to blend. I used Krita’s default pressure airbrush tool for softer and smoother shadows, and hard brushes for highlights. I also used textures and some custom-made brushes. To add glowing/shining effects I used the glow and blur brushes. Because I wanted the Native American woman to be highly symmetrical, I used the horizontal mirror mode when I drew the initial sketch.

Where can people see more of your work?

I recently made a DeviantArt account and am most active there:

Anything else you’d like to share?

Thank you very much for the interview and a BIG thank you to the Krita developers!!

October 24, 2015

Stellarium 0.14.0

After six months of development, the Stellarium development team is proud to announce the release of version 0.14.0 of Stellarium.

Version 0.14.0 brings a big leap forward in astronomical accuracy for historical applications:
- Precession now follows the IAU2006 model in the long-time version from Vondrák et al. 2011.
- Nutation is applied (IAU2000B solution). Given that nobody has observed it without a telescope and the model does not give limits of applicability, we limit its application to the years 1500..2500.
- Application of DeltaT has been simplified and made a bit more intuitive.
We now dare to add another coordinate system: Ecliptic coordinates of date.
We can therefore now show that planetary positions given by the commonly used solution VSOP87 are applicable to -4000..+8000 only, and that use outside this range will give somewhat artificial results. There is more to follow in future versions.

The other big addition is a greatly improved collection of DSO data with lots of possibilities in a new GUI tab to select for object type and/or catalog. In total, 15 catalogs are now built-in!

Also the Meteor Shower, Satellites, Telescope Control and 3D Sceneries plugins have been improved.

Landscapes can have switchable labels, so you can e.g. indicate mountain peaks.

In total 83 bugs and wishlist items were fixed or at least decided.

A platform-specific change for Windows: OpenGL binding is now dynamic. That means, there are no more separate OpenGL/ANGLE/MESA downloads, but after installation you will have separate commands in the start menu which force ANGLE or MESA modes.

A huge thanks to our community whose contributions help to make Stellarium better!

Full list of changes:
- Added accurate calculations of the ecliptic obliquity. Finally we have good precession! (LP: #512086, #1126981, #1240070, #1282558, #1444323)
- Added calculations of the nutation. Now we optionally have IAU-2000B nutation. (Applied between 1500..2500 only.)
- Added new DSO catalog and related features (LP: #1453731, #1432243, #1285175, #1237165, #1189983, #1153804, #1127633, #1106765, #1106761, #957083)
- Added Iridium flares to Satellites plugin (LP: #1106782)
- Added tool for prediction of Iridium flares to Satellites plugin
- Added AstroCalc - a tool for calculating planetary phenomena (LP: #1181702)
- Added list of interesting double stars in Search Tool
- Added list of interesting variable stars in Search Tool
- Added cross-identification data for HIP stars (SAO and HD numbers)
- Added sinusoidal projection
- Added draw DSO symbols with different colors
- Added labels to the landscapes (gazetteer support)
- Added a new behaviour for drawing the orbit of the selected planet - drawing the orbits of the 'hierarchical group' (parent and all their children) of the selected celestial body (disabled by default) (LP: #1071457).
- Added an option to lower the horizon, e.g. for a widescreen cylindrical view with mostly sky and ground on lower screen border (LP: #1299063).
- Added various timeshift-by-year commands (LP: #1478670)
- Added Moon's phases info
- Added support of SOCKS5 proxies (LP: #1499974)
- Rewriting the Meteor module and the Meteor Showers plugin (LP: #1471143)
- Updated skycultures (LP: #1497818)
- Updated Hungarian and Brazilian Portuguese translations of landscapes and skycultures
- Updated data for Solar System Observer
- Updated color scheme for constellations
- Updated documentation
- Updated QZip stuff
- Updated TUI plugin
- Updated data for Satellites plugin
- Updated Clang support
- Updated Pluto and Charon textures
- Updated GCVS catalog
- Updated ssystem.ini (LP: #1240791)
- Using a better formula for CCD FOV calculation in Oculars plugin (LP: #1440330)
- Updated on-screen control panel in Oculars plugin (LP: #1467286)
- Updated list of detected operating systems (LP: #1504910)
- Fixed shortkeys conflict (New shortkeys for Scenery 3D plugin - LP: #1449767)
- Fixed aliasing issue with GCC in star data unpacking
- Fixed documentation for core.wait() function (LP: #1450870)
- Fixed perspective projection issue (LP: #724868)
- Fixed crash on activation of AngleMeasure in debug mode (LP: #1455839)
- Fixed crash in Sinusoidal Projection (LP: #1472555)
- Fixed issue for update bottom bar position when we toggle the GUI (LP: #1409023)
- Fixed weird view of the Moon in Oculars plugin when Moon's scaling is enabled
- Fixed non-realistic mode drawing of artificial satellites
- Fixed availability plugins for the scripting engine (LP: #1468986)
- Fixed potential memory bug in Meteor class
- Fixed wrong spectral types for some stars (LP: #1429530)
- Fixed style of progress bar (LP: #1419895)
- Fixed tooltip colors in night mode (LP: #1300801)
- Fixed visibility of selected object info in night mode (LP: #1487096)
- Fixed behaviour of waitFor function (scripting engine) (LP: #1051858)
- Fixed resize event handler on Windows (LP: #1490265)
- Fixed regression bug of key movements
- Fixed magnitude of the Sun during solar eclipse (LP: #1485245)
- Fixed obvious omission in Observability plugin after having separated JD/JDE (LP: #1499161)
- Fixed night mode on retina display (LP: #1458109)
- Fixed unusual meteor shower display (LP: #1506126)
- Enable autosaving the state by default for Compass Marks plugin (LP: #1410987)
- Added escape counter to allow exiting loop in solving Kepler's equation (LP: #1465112)
- Added limitation magnitude for solar system objects (LP: #1409543)
- Added extinctive reddening on the horizon for planets, esp. visible for Sun/Moon/bright planets (LP: #1448621)
- Added tooltip for 'Select single constellation' option (LP: #1498022)
- Avoid entering locations with leading whitespace (LP: #1083731)
- Avoid rendering of sunlight and moonlight when show of Solar System objects is disabled (LP: #1499699)
- Change visibleSkyArea limitation to allow landscapes with "holes" in the ground. (LP: #1469407)
- Improved Delta T feature (LP: #1380242)
- Improved the ad-hoc visibility formula for LDN and LBN
- Improved the sighting opportunities for artificial satellite passes (LP: #907318)
- Changed behaviour of coloring of orbits of artificial satellites: we use a gray color for the parts of orbits where the satellite will be invisible (LP: #914751)
- Improved coloring and rendering artificial satellites (include their labels) in the Earth shadow for both modes (LP: #1436954)
- Improved landscape and lightscape brightness with/without atmosphere. Now it even considers landscape opacity for no-atmosphere conditions.
- Reduce Milky Way brightness in moonlight
- Enhancement of features in Telescope Control plugin (LP: #1469450)
- Removed all references to QDeclarative
- Removed meaningless extincted magnitude data in infostring if objects are below horizon
- Restore drawing corona of the Sun during total solar eclipse (LP: #1435065)

October 23, 2015

Krita on OSX

Ever since our first kickstarter in 2014, we've been building every release of Krita for OSX. The initial work to make that possible took two weeks of full-time hacking. We did the work because Krita on OSX was a stretch goal and we wanted to show that it was possible to bring Krita to OSX, not because we thought two weeks would be enough to create a polished port. After all, the port to Windows took the best part of a year. We didn't make the stretch goal, and the port to OSX got stuck.

And that shows. We build Krita on a mid-2011 Mac Mini that runs Mavericks. That has a couple of consequences: Krita doesn't run well on anything but Mavericks, and the hack-build-test cycle takes about half an hour. That means that fixing bugs is nigh on impossible. Some things have been broken since the start, like OpenGL, other things broke along the way, like proper tablet support, loading and saving jpeg files. And more.

Still, though we didn't make the stretch goal, the demand for Krita on OSX is there: we're getting about half a dozen queries a week about Krita on OSX. So, what should be the next steps?

Step one: define the goals. That's easy. Krita on OSX should run on all versions of OSX that are supported by Qt5, be a good citizen, that is, come as an app bundle in a disk image, save settings to the usual locations, use the regular temporary file location and provide the same feature set and performance as on Windows and Linux. (Which might be difficult because of problems with Apple's OpenGL implementation: Apple wants developers to use their platform-specific alternative.)

Step two: estimate the effort needed. With the Qt5/Kf5 port of Krita, estimation is more difficult because only now people are creating the first Kf5-based applications on OSX, and are running into new and interesting problems. Things like finding resources in standard locations: for Krita 2.x, we had to hack KDE's standard paths implementation quite severely. The main issues are: OpenGL, tablet support, standard paths, bundle creation, platform integration and optimization.

My best-effort estimation is three to four months of nearly full-time work. That's similar to the Qt5 port, and comes to about 12,000 to 16,000 euros. Add in a decently fast Mac, and we're looking at, roughly, an investment of 15,000 to 19,000 euros. Add a bit for unexpected events, and let's say, 20,000 euros. A lot of money, but actually quite cheap for a development effort of this kind. It's more than the Krita Foundation has available, though...

There is a secondary consideration, borne from experience with the Kf5/Qt5 port: if it will take three months of development time, then that development time is not spent on other things, like bug fixes or kickstarter features. That will have an immediate impact on the project! The port has already made the bug list grow horribly long, because work on the port meant less work on bug fixes.

If we decide to do it, step three then must be: do it... There are a couple of possibilities.

The first: run a kickstarter campaign to get the money. Wolthera and I actually started setting one up. But we're just not sure whether a kickstarter campaign is going to succeed, and to fail would be really bad. It would reflect badly on Krita as a wider project and might jeopardize the 2016 kickstarter campaign to fund the next round of feature improvements. It might even cannibalize the 2016 campaign. We're not sure how likely that is, though, because we're not sure the campaigns would be targeting the same user group. Right now, our campaigns are supported in equal parts by free software enthusiasts and by artists. We're not reaching the OSX community, because Krita isn't ready on OSX, but conversely, we don't know how to reach the OSX community. We don't even know whether the OSX community can be involved enough to reach a funding level of at least 15,000 euros.

That makes starting a kickstarter campaign (which is in itself two months of full-time work) a really dicey proposition. Even cutting the goal up into tranches of 5,000 euros for the basic port (and a new Mac) and then stretch goals of 2,500 euros seemed chancy to us. Plus, if we get stuck at 5,000 euros, there really is not enough money to do a decent port.

The second possibility: fund it out of pocket, and try to get the investment back. That could be done by making Krita for OSX exclusively available on Steam, or by a possible increase in donations because we can now reach the OSX user community. The first option could be scotched by someone taking our work and making Krita available gratis on OSX. That would be totally OK of course: Krita is free and open source. Nothing says we have to give our binaries away, but on the other hand, nothing can stop anyone else from giving our binaries away. Or making their own, once the hard work is done. The second possibility, increased donations, is a kind of gamble. It might work out, or it might not...

The third possibility: fund the development out of pocket, but take a longer period to work on it. Get a Mac and devote, say, two weeks of initial work, and then invest a day a week to OSX. Slice up the week. A bit like I'm now doing four days a week of paid non-krita development to fill up my empty bank account, one day a week of porting and one day a week of stable-version bug fixing.

The final possibility is to do nothing. Maybe a capable OSX-loving developer will come around and start doing the work out of love for Krita. But I'm not sanguine about that happening, since we've seen four or five people trying to build Krita on OSX, and all but two failed. The first attempt was using MacPorts, which doesn't lead to an installable app bundle, and the second attempt was the one done for the 2014 Kickstarter.

Which brings us full-circle to the question: what now?


The default wallpapers are part of GNOME’s visual identity. Ever since GNOME 3 was released, regardless of the release version, you can tell a stock GNOME desktop from afar. Unlike most Linux distributions, we don’t change the wallpaper thematically from release to release; there is a strong focus on continuity.

Adwaita 3.18 Day

While both Android and Windows are going analog, we’re not that hipster. If you follow my journal, you probably wouldn’t be shocked to hear I mainly use Blender to create the wallpapers. In the past Inkscape took a major part in the execution, but its incomplete implementation of gradients leads to dramatic color banding in the subtle gradients we need for the wallpapers. I used to tediously compensate for this in GIMP, using noisify filters while working in high bit depth and then reducing color using GEGL’s magical color reduction operation that Hans Peter Jensen wrote a while back. It allows you to choose various dithering algorithms when lowering the bit depth.

However, thanks to Cycles, we get the noise for free :) Actually, noise is one of the things I spend hours and hours waiting for Cycles to clean up, iteration after iteration. But it does help with color banding.

Blender rendering the night variant of the 3.20 Adwaita wallpaper (work in progress).

In my work I have always focused on execution. Many artists spend a great deal of time constructing a solid concept and have everything thought out. But unless the result is well executed, the whole thing falls apart. GNOME Wallpapers are really just stripes and triangles. But it’s the detail, the light play, the sharpness, not too much high density information that make it all work.

First iterations of the GNOME 3.20 variants are beginning to land in the gnome-backgrounds module. Check it out.




Lock Screen

October 22, 2015

Fedora Design Team Update

Fedora Design Team Logo

I’ve been posting these to the design-team mailing list lately but thought it might be good to blog, too.

Meeting Summary

We had a meeting today. In attendance were myself, ryanlerch, gnokii,
mleonova, riecatnor, sam08, mbriza, garrett, and tatica.

Meetbot links:

Here’s a quick run-through of what we discussed:

Presentation to Council

Our presentation to the Fedora council is going to be at 5 PM UTC / 12 pm EST
(post DST) in #fedora-meeting on Monday, November 2. It will be an IRC
meeting and anyone is welcome to come.

The topics I’ll present to the council are covered pretty well in this
summary from last meeting:

The main change in the message I think is that swag distribution /
production varies by geo.

For the FAD we would like to bring up to them, it is looking like July
2016 in Boston would be a good time/location. It is close to Flock the
next month in EMEA, but EMEA in particular is expensive for our current
team makeup.

New Meeting Time

Our new meeting time is going to shift next meeting due to daylight
savings time. Today the meeting was 1200 UTC / 8 AM EDT… next meeting
(Thursday, November 5) will be at 1300 UTC / 8 AM EST. If you have a
daylight savings shift, the time won’t change for you; if you don’t, the
meeting will be one hour later. Here’s a breakdown of the new time for
future meetings:

  • 1300 UTC
  • 5 AM PST (US + Canada/Pacific)
  • 8 AM EST
  • 8:30 AM (Caracas)
  • 3 PM (Europe / Berlin + Rome)
  • 4 PM (Moscow)
  • 6:30 PM (Kolkata)
  • 8 PM (Phnom Penh)
  • 9 PM (Kuala Lumpur)
  • 11 PM (Queensland AU)

New Meeting Chairs

I’m not going to be around from about mid November-December until
February or March, so ryanlerch and gnokii volunteered to chair the
meetings when I’m out.

Completed Tickets

Ticket 373: Production of swag for F22

mleonova completed this one with sticker designs for the 3 Fedora editions.

Ticket 360: Fedora LiveUSB Icons


mleonova completed this one with icon designs for all of the various
Fedora images. There is some great new logo artwork in this ticket worth
checking out!

Ticket 398: AsciiBinder Logo

mleonova completed this one. It’s an excellent logo design worth taking
a look at. :)

Ticket 393: Cover for Getting Started with Fedora Guide

mleonova completed this one. She added a photo of the finished printed
book to the ticket so you can see how it came out.

Open Tickets

Ticket 401: Review: Updated cheat cube to use DNF instead of yum

This one is waiting on an update from Ankur; pinged him in the ticket.

Ticket 347:
Fedora Cloud, Server, and Workstation Stack Exchange community promotion

This one is waiting for feedback from mattdm; pinged him in the ticket.

Ticket 199: Interface and Usability refresh for

anuradhaw gave us an update and pointed to her code; puiterwijk put some fixes in (the code was written against a newer version of askbot) and pushed her code to stage, and you can see it here:

There are some minor issues that I think need to be addressed before we can push to prod. anuradhaw indicated she’s studying for exams but will have more time to come back to this project afterwards; we’ll wait for her to finish her exams before trying to address the issues.

Ticket 402: Need to scale down hackergotchi in planet rss feed css

This ticket needed an owner so ryanlerch grabbed it. Thanks, Ryan!

Ticket 210: Update fedoracommunity maps with Russia in EMEA

We thought we were finished with this one but it turns out there are
some issues. ryanlerch kindly picked it back up to fix.

Ticket 279: Design for “This week in Fedora”

This ticket needed an update. We discussed this design at the Flock
design clinic; ryanlerch created a repo on and added a mockup
at the event that he committed then; he also has newer work he did
post-Flock that he will upload later. I also have a mock from flock I
never uploaded so I’ll upload that too.

Ticket 367: Need logo design for the Fedora Security Team(FST)

This ticket is so close to being done, the logo just needs a few tiny
tweaks. Pinged Yogi on it.

Ticket 350: Anaconda banner template

Yogi completed the template work; we just need it to be posted to the
wiki. Yogi is going to work on this. (He contacted me after the meeting)

In-Progress / Feedback Tickets

Ticket 403: Icons for FAS 3

riecatnor asked for feedback on her work for this one:

– she designed three updated options for the group icon based on
feedback from before, and the 3rd one was the favorite.

– there’s an icon used in the dark blue bar in the upper left of this
screen; the group icon should hopefully work there as well:

– in context, the users icon stands out a bit in the full screen mockup
because the fedora logo isn’t square. we suggested riecatnor modify the
mockup to add a square border along the outside of the icon to see if
that fixes the issue

Ticket 407: Logo for

mleonova asked for feedback on this logo design. the favorites were the
upper left and lower left versions; the designs were well-liked :)

Ticket 404: F23 Release Posters


sam08 created these awesome designs. We don’t know if the reporter used
them, but they are all quite nice. I asked sam08 to post the SVGs and
we’ll close the ticket and advertise them more widely to the Fedora
community to use.

Non-free software can mean unexpected surprises

I went to a night sky photography talk on Tuesday. The presenter gave some tips on camera lenses and exposures, then showed a raw image and prepared to demonstrate how to process it to bring out the details.

His slides disappeared, the screen went blank, and then ... nothing. He wrestled with his laptop for a while. Finally he said "Looks like I'm going to need a network connection", left the podium and headed out the door to find someone to help him with that.

I'm not sure what the networking issue was: the nature center has open wi-fi, but you know how it is during talks: if anything can possibly go wrong with networking, it will, which is why a good speaker tries not to rely on it. And I'm not blaming this speaker, who had clearly done plenty of preparation and thought he had everything lined up.

Eventually they got the network connection, and he connected to Adobe. It turns out the problem was that Adobe Photoshop is now cloud-based. Even if you have a local copy of the software, it insists on checking in with Adobe at least every 30 days. At least, that's the theory. But he had used the software on that laptop earlier that same day, and thought he was safe. But that wasn't good enough, and Photoshop picked the worst possible time -- a talk in front of a large audience -- to decide it needed to check in before letting him do anything.

Someone sitting near me muttered "I'd been thinking about buying that, but now I don't think I will." Someone else told me afterward that Photoshop is now entirely cloud-based; older versions still work, but if you buy Photoshop now, your only option is a cloud version that may decide ... at the least opportune moment ... that you can't use your software any more.

I'm so glad I use Free software like GIMP. Not that things can't go wrong giving a GIMP talk, of course. Unexpected problems or bugs can arise with any software, and you take that risk any time you give a live demo.

But at least with Free, open source software like GIMP, you know you own the software and it's not suddenly going to refuse to run without a license check. That sort of freedom is what makes the difference between free as in beer, and Free as in speech.

You can practice your demo carefully before the talk to guard against most bugs and glitches; but all the practice in the world won't guard against software that won't start.

I talked to the club president afterward and offered to give a GIMP talk to the club some time soon, when their schedule allows.

October 20, 2015

darktable 1.6.9 released

We are happy to announce that darktable 1.6.9 has been released.

The release notes and relevant downloads can be found attached to this git tag:
Please only use our provided packages ("darktable-1.6.9.*" tar.xz and dmg), not the auto-created tarballs from github ("Source code", zip and tar.gz). The latter are just git snapshots and will not work! Here are the direct links to tar.xz and dmg:

This will likely be the last maintenance release in our 1.6 series.

$ sha256sum darktable-1.6.9.tar.xz
$ sha256sum darktable-1.6.9.dmg


  • don't build with external lua 5.3 or higher (darktable MUST be built with
    lua 5.2)
  • format datetime locale-dependent (and try to handle timezones better)
  • fix various minor memory leaks
  • use sRGB as display profile on all versions of OS X; fixes monitor profile
    being applied twice


New camera support (newly added camera support should be considered
experimental for the time being):

  • Olympus E-M10 Mk2
  • Canon G3 X
  • Canon PowerShot SX60 HS
  • Sony A7R II
  • Fuji X-A2
  • Panasonic FZ1000 bad pixel detection
  • alias Panasonic TZ70/ZS50 to the TZ71
  • improve Samsung NX1/NX500 support (handle 12bit modes)
  • don't load broken Kodak kdc files

white balance presets

  • Olympus E-M10 Mk2
  • Canon PowerShot SX60 HS
  • Canon PowerShot G7 X
  • Sony A7R II
  • Sony A7 II
  • Sony RX100M4
  • Sony RX10
  • Nikon 1 J5


  • Nikon D3300
  • Canon PowerShot S120

translations

  • Swedish (small updates)

October 15, 2015

Viewer for email attachments in Office formats

I seem to have fallen into a nest of Mac users whose idea of email is a text part, an HTML part, plus two or three or seven attachments (no exaggeration!) in an unholy combination of .DOC, .DOCX, .PPT and other Microsoft Office formats, plus .PDF.

Converting to text in mutt

As a mutt user who generally reads all email as plaintext, normally my reaction to a mess like that would be "Thanks, but no thanks". But this is an organization that does a lot of good work despite their file format habits, and I want to help.

In mutt, HTML mail attachments are easy. This pair of entries in ~/.mailcap takes care of them:

text/html; firefox 'file://%s'; nametemplate=%s.html
text/html; lynx -dump %s; nametemplate=%s.html; copiousoutput

Then in .muttrc, I have:

auto_view text/html
alternative_order text/plain text

If a message has a text/plain part, mutt shows that. If it has text/html but no text/plain, it looks for the "copiousoutput" mailcap entry, runs the HTML part through lynx (or I could use links or w3m) and displays that automatically. If, reading the message in lynx, it looks to me like the message has complex formatting that really needs a browser, I can go to mutt's attachments screen and display the attachment in firefox using the other mailcap entry.

Word attachments are not quite so easy, especially when there are a lot of them. The straightforward way is to save each one to a file, then run LibreOffice on each file, but that's slow and tedious and leaves a lot of temporary files behind. For simple documents, converting to plaintext is usually good enough to get the gist of the attachments. These .mailcap entries can do that:

application/msword; catdoc %s; copiousoutput
application/vnd.openxmlformats-officedocument.wordprocessingml.document; docx2txt %s -; copiousoutput

Alternatives to catdoc include wvText and antiword.
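
It's worth sanity-checking a converter from the shell before wiring it into .mailcap (a sketch; the filename is made up):

# preview what mutt's copiousoutput view will show for a .doc attachment
catdoc report.doc | less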

But none of them work so well when you're cross-referencing five different attachments, or for documents where color and formatting make a difference, like mail from someone who doesn't know how to get their mailer to include quoted text, and instead distinguishes their comments from the text they're replying to by making their new comments green (ugh!). For those, you really do need a graphical window.

I decided what I really wanted (aside from people not sending me these crazy emails in the first place!) was to view all the attachments as tabs in a new window. And the obvious way to do that is to convert them to formats Firefox can read.

Converting to HTML

I'd used wvHtml to convert .doc files to HTML, and it does a decent job and is fairly fast, but it can't handle .docx. (People who send Office formats seem to distribute their files fairly evenly between DOC and DOCX. You'd think they'd use the same format for everything they wrote, but apparently not.) It turns out LibreOffice has a command-line conversion program, unoconv, that can handle any format LibreOffice can handle. It's a lot slower than wvHtml but it does a pretty good job, and it can handle .ppt (PowerPoint) files too.
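
A conversion run looks something like this (a sketch with made-up filenames; unoconv's -f option selects the output format, and exact behaviour may vary by version):

# unoconv drives a LibreOffice instance behind the scenes, which is
# why it's slow to start; each call writes report.html, slides.html etc.
unoconv -f html report.docx
unoconv -f html slides.ppt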

For PDF files, I tried using pdftohtml, but it doesn't always do so well, and it's hard to get it to produce a single HTML file rather than a directory of separate page files. And about three quarters of PDF files sent through email turn out to be PDF in name only: they're actually collections of images of single pages, wrapped together as a PDF file. (Mostly, when I see a PDF like that I just skip it and try to get the information elsewhere. But I wanted my program at least to be able to show what's in the document, and let the user choose whether to skip it.) In the end, I decided to open a firefox tab and let Firefox's built-in PDF reader show the file, though popping up separate mupdf windows is also an option.
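
For reference, the kind of invocation I tried (a sketch; -s and -noframes are poppler pdftohtml flags meant to produce one merged, frameless file, but check your version's man page):

# try for a single self-contained HTML file instead of a directory of pages
pdftohtml -s -noframes attachment.pdf attachment.html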

I wanted to show the HTML part of the email, too. Sometimes there's formatting there (like the aforementioned people whose idea of quoting messages is to type their replies in a different color), but there can also be embedded images. Extracting the images and showing them in a browser window is a bit tricky, but it's a problem I'd already solved a couple of years ago: Viewing HTML mail messages from Mutt (or other command-line mailers).

Showing it all in a new Firefox window

So that accounted for all the formats I needed to handle. The final trick was the firefox window. Since some of these conversions, especially unoconv, are quite slow, I wanted to pop up a window right away with a "converting, please wait..." message. Initially, I used a javascript: URL, running the command:

firefox -new-window "javascript:document.writeln('<br><h1>Translating documents, please wait ...</h1>');"

I didn't want to rely on Javascript, though. A data: URL, which I hadn't used before, can do the same thing without javascript:

firefox -new-window "data:text/html,<br><br><h1>Translating documents, please wait ...</h1>"

I wanted the first attachment to replace the contents of that same window as soon as it was ready, and subsequent attachments to open new tabs in that window. But it turned out that firefox is inconsistent about what -new-window and -new-tab do; there's no guarantee that -new-tab will show up in the same window you recently popped up with -new-window, and running just firefox URL might open in either the new window or the old, in a new tab or not, or might not open at all. And things got even more complicated after I decided that I should use -private-window to open these attachments in private browsing mode.

In the end, the only way firefox would behave in a repeatable, predictable way was to use -private-window for everything. The first call pops up the private window, and each new call opens a new tab in the private window. If you want two separate windows for two different mail messages, you're out of luck: you can't have two different private windows. I decided I could live with that; if it eventually starts to bother me, I can always give up on Firefox and write a little python-webkit wrapper to do what I need.

Using a file redirect instead

But that still left me with no way to replace the contents of the "Please wait..." window with useful content. Someone on #firefox came up with a clever idea: write the content to a page with a meta redirect.

So initially, I create a file pleasewait.html that includes the header:

<meta http-equiv="refresh" content="2;URL=pleasewait.html">

(plus other HTML, charset information, etc. as needed). The meta refresh means Firefox will reload the file every two seconds. When the first converted file is ready, I just change the header to redirect to URL=first_converted_file.html. Meanwhile, I can be opening the other documents in additional tabs.
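
The header swap itself can be a one-liner (a sketch with hypothetical filenames):

# point the self-refreshing page at the first finished conversion
sed -i 's|URL=pleasewait.html|URL=first_converted_file.html|' pleasewait.html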

Finally, I added the command to my .muttrc. When I'm viewing a message in either the index or pager screen, F10 will call the script and decode all the attachments.

macro index <F10> "<pipe-message>~/bin/viewmailattachments\n" "View all attachments in browser"
macro pager <F10> "<pipe-message>~/bin/viewmailattachments\n" "View all attachments in browser"

Whew! It was trickier than I thought it would be. But I find I'm using it quite a bit, and it takes a lot of the pain out of those attachment-full emails.

The script is available at: viewmailattachments on GitHub.

My calendar for the next week

These are all actual items on my calendar for the next week (mostly for my kids, as you’ll gather):

  • Teddy bear clinic
  • School pictures
  • Swimming lessons
  • Piano lessons
  • Step dancing lessons
  • School “Spookcatular”
  • Have a baby

I am truly living the dream.

Which Laptop?

All the old hardware I kept from my KO GmbH days is, well, old, and dying. The Thinkpad's hinges are breaking, the Dell XPS 12 has a really small screen and is too slow for development work, and the Thinkstation desktop machine has been throwing compiler segfaults for a year now. I've got a bunch of Intel Software Development Platforms, which are interesting laptops but have no battery life. And the Surface Pro 3 is a test device, not suited to develop on either. Even the Dell monitor is slowly losing what little contrast it had.

But what to buy?

I need a good keyboard, a good, largish hi-dpi screen (to check whether Krita handles that okay), at least 16GB of memory and a big, fast disk. I have multiple checkouts of Krita, I build million-plus-line C++ projects all day long, I run virtual machines, and, well, Krita likes lots of memory as well.

I could buy a Mac... It would help with porting Krita to OSX. But it would also mean using a Mac. I've done that before, and I didn't like it. The keyboard shortcuts are all wrong, the window manager is anemic and the whole platform goes out of its way to patronize its users. Plus, Macbooks don't have separate home, end, page up and page down keys, and there still isn't even a backspace key. And, expensive as Macbook Pro Retinas are, they don't even come with a touch screen, which is a convenience I've come to really appreciate. And the processors available now are a generation out of date.

I could buy a Dell. An XPS 15, or a Precision. I would have a really good screen, an up-to-date processor and 16GB of memory. So far, so good. But all these workhorses have the same keyboard as the XPS 12, which means no Home, no End, no PageUp, no PageDown. Look, dear laptop manufacturers, I'm editing all day long. I need to zoom through my text. That needs those four keys!

I could buy a Lenovo. No, scratch that. Lenovo has squandered whatever goodwill it had by dropping build quality year over year. Every new Thinkpad is worse than the previous generation. The keyboards have all the keys, though. But the screens are often really dim, with really low contrast. And those breaking hinges... And, except for a gaming laptop, there's no configuration with more than 8GB of memory and no Hi-DPI screens. Even the X1 Carbon doesn't seem to go to 16GB! If it did, I might still be tempted, despite the hinges, because at least it's got the Home, End, PgUp and PgDn keys.

I could wait and buy a Surface Book. It might be a bit small, but it has most of the keys (weirdly enough no Ins key, which I actually use a lot), and the screen's aspect ratio is pretty good. I'm just worried that, being so thin, it won't be able to stand up to all the compiling I'd be doing. On the other hand, it's got a pen, which is pretty useful for me. No word on when it will become available, though...

So, what I need is Lenovo's keyboard, Dell's processor, screen and memory, Microsoft's pen, and the ability to run OSX for porting Krita...

October 14, 2015

Krita 2.9.8

The eighth bug-fix release of Krita 2.9! We're still fixing bugs and adding improvements, but a lot of work has gone into the Kickstarter goals and the Krita 3.0 porting work, too. Ubuntu Linux users can use the "krita-lod-unstable" packages from the Krita Lime repository to test-drive the first version of the animation support and the "LOD" performance improvements. Check the LOD option in the View menu, and many brushes and other features will perform much better on large images!

But for day-to-day work, please update to Krita 2.9.8! There are some important fixes to the Photoshop-style Layer Styles feature and to the OpenEXR, TIFF, PNG and JPEG import/export filters.

  • Improve performance when adding new layers. (A blank new layer doesn’t need to make Krita update
    the entire image)
  • Fix the pass-through icons so there are dark and light variants, and make some other icons smaller
  • BUG:353261: Make rotation terminology consistent in the rotate image and rotate layer plugin
  • BUG:353248: Prevent a crash when using some types of graphics tablets
  • BUG:352916: Fix a crash in the cage transform worker
  • Improve rendering speed when some layers are invisible
  • Fix a crash when using shape/vector layers
  • BUG:352734: Fix saving single-layer EXR files
  • BUG:352983: Load the layers in a multi-layer EXR file in the right order
  • BUG:352734: Support loading and saving EXR files that have both layers and top-level channels
  • BUG:310359: Fix loading and saving of L*a*b TIFF images
  • Add a Save Profile checkbox to the TIFF and JPG export filters: you can now save TIFF, JPG and PNG images without an embedded profile.
  • BUG:352845: Store the smoothing options only once
  • Fix Photoshop-style layer styles that use random noise
  • Improve the performance of Photoshop-style Layer styles.


Call to translators and testers: discussion

Qt 5.5 brought dynamic OpenGL initialisation for Windows, which means that a single download should work on every supported system, and we don't need separate ANGLE/MESA downloads.

But we don't have enough old computers to test this. If you have a system (likely from around 2006-2010) which reports only OpenGL 2.1 and/or has shown strange behaviour with the 0.13 series, you can try the new start links with --angle-mode and --mesa-mode. On modern PCs with up-to-date drivers that support OpenGL 3 and later, all links work (but only the default OpenGL link is needed). On an Intel GMA4500 (which supports OpenGL 2.1 only), OpenGL causes the fewest problems, ANGLE shows real visible defects in d3d11 mode (missing menu buttons), and --mesa-mode causes a truly unexpected crash on startup with V0.13.90.3-64bit, although we had expected Mesa to work on every Windows PC. If you face problems, there are some more options you can test with --angle-mode, controlled by the QT_ANGLE_PLATFORM environment variable. Supported values are:

d3d11: Use Direct3D 11
d3d9: Use Direct3D 9
warp: Use the Direct3D 11 software rasterizer
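
On Windows, picking a backend for a test run might look like this from a Command Prompt (a sketch; the executable name and the --angle-mode flag spelling are assumptions based on the start links described above):

rem force ANGLE's Direct3D 9 backend for this run
set QT_ANGLE_PLATFORM=d3d9
stellarium.exe --angle-mode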

We would welcome reports (LOGFILES!) about systems which do not report OpenGL 3 compatibility but still work "somewhat", like that GMA4500 system, and reports on whether any of the three options helps when running --angle-mode, or whether at least --mesa-mode works.

The most wanted report would be the one that brings a solution to the question of why --mesa-mode fails on startup on at least one PC. Mesa mode was regarded as the last resort that should have worked everywhere.

SVG animation

I haven't written a post in quite a while, so I decided to document my failure to come up with a viable alternative to the translatable animations we use in the Getting Started documentation. Let's start with what's wrong with the current approach. Despite being far more maintainable than a screencast, it's still a major hassle to keep the videos in sync with the evolving designs, and every translation requires a re-render of all the frames, which quickly grows into gigabytes per language.

Czech version of one of the Getting Started videos

If you're interested in seeing how these were produced, see the Behind the Scenes of Getting Started video.

The animations themselves aren't super complex. Basic transforms (translation, scale and rotation) and opacity are all that's needed. And because we are using translatable SVGs in Mallard, it was time to look into SVG animation. There are numerous options for animating SVG, which already hinted that none of them would work properly for my use case. I hate being right.


I'll start with the one I like least: the inline garbage approach, SMIL. Every attribute of an SVG element is animatable, and creating a global sequence this way by hand is close to impossible. Its capabilities do include a few extras, like animating an object along a path, but in general I cannot imagine editing this by hand. Incorporating Inkscape into the workflow seemed feasible at first: Inkscape will not touch XML it doesn't know about, so it will not clean out any of the animation markup when you save. The xlink namespace definition for animating along a path seems to have worked, but I can't figure out some weird offsets. Groups usually get matrix transforms as soon as you reposition them. It may all boil down to Inkscape using its own coordinate system, I don't know. I haven't succeeded in bolting animation onto an Inkscape-generated SVG.

About as complex a SMIL animation as I can produce :)


Much more appealing was the concept of using CSS animation. We do a lot of transitions and some animation in gtk+, so it would have been great to reuse the same technology here. While CSS transitions are spot on, animation with a sense of a global timeline is not really a web use case: there, an animation is usually an individual transform that happens after an event has been triggered. Creating a sequence of various objects animating on a global timeline is pretty awkward, especially if you want to loop the whole animation infinitely. The only tools at your disposal are either a time offset or relative-time keyframes that keep all objects' animations the same length.

CSS-based animation of #cursor1, with a JS playback reset button that doesn't work. ;)

I also ran into Firefox and Webkit interpreting transform-origin differently.

.run-animation {
  transform-origin: top left;
  animation: cursor-move 2s ease 1s forwards,
             fade-in 1s linear 0s,
             cursor-click .25s ease 3s alternate 2;
}

@keyframes fade-in {
  from { opacity: 0; }
  to   { opacity: 1; }
}

@keyframes cursor-move {
  from { opacity: 1; }
  to   { opacity: 1; transform: translate(100px,-100px); }
}

@keyframes cursor-click {
  from { transform: translate(100px,-100px) scale(1); }
  to   { transform: translate(100px,-100px) scale(.5); }
}

The above CSS uses animation-delay. It might be possible to have all keyframes last the same time and use relative keyframing for timing (duplicating the same keyframe to "hold"). I can't imagine retiming or generally modifying an existing animation hand-constructed with CSS keyframes, though. A visual tool with a timeline would be necessary.


There are many JS-based frameworks to aid creating and animating SVG documents in realtime, but none of them seems to help with creating a global, complex animation using assets created in Inkscape. I looked at Google Webdesigner next.

Google Webdesigner

Google Webdesigner has all the necessary visual tools, like property keyframing and a global timeline. Sadly, it produces a rather non-self-contained jumble of HTML, JS and CSS files, and I didn't figure out a way that could be brought into Mallard.

In the end, even though the animations don't seem to be that complex, maintaining them by hand doesn't seem very doable. A visual editor is required. If Google Webdesigner can be taught to produce a standalone SVG, or Mallard taught to use iframes, I'm all ears. Any pointers to a similar tool are also welcome.

Feature Freeze darktable 2.0

Dear all,

Yesterday we entered the feature freeze stage for the upcoming darktable 2.0 feature release: no more new features will be allowed in. The coming months will be used to stabilize and fine-tune the code base.

As usual we don't make any definite statements about a release schedule, but we suggest you stay tuned towards the end of the year.

the dt team


October 12, 2015

Star ratings in GNOME Software

A long time ago, GNOME Software used to show star ratings as popularity next to the application, using the fedora-tagger application. This wasn't a good idea for several reasons:

  • People can’t agree on a scale. Is an otherwise flawless application with one translation issue 5 stars or 4? Is a useful computational fluid dynamics application that crashes on startup but can be run manually on the command line 1 star or 3 stars?
  • It only worked on Fedora, and there was no real policy on how to share the data, or on the privacy implications of clicking a star
  • People could “game” the ratings system; for example, hardcore KDE users could go through all the GNOME apps and give them one star. We then limited rating to applications you have installed, but it was really a cat-and-mouse game.

So, let's go two steps back. What is the star rating trying to convey to the user? When I look at a star rating, I want to see a number of stars proportional to how awesome the application is to me. The rest of this blog tries to define awesomeness.

As part of the AppStream generation process we explode various parts of the distro binary package and try to build metadata by merging various sources together, for example AppData, desktop files and icons. As part of this we also have access to the finished binary and libraries, and so we can run tools on them to get a metric of awesomeness. So far, the metrics of awesomeness (hereon known as "kudos") are:

  • AppMenu — has an application menu in line with the GNOME 3 HIG
  • HiDpiIcon — installs a 128×128 or larger application icon
  • HighContrast — installs high-contrast icons for visually impaired users
  • ModernToolkit — uses a modern toolkit like GTK 3 or Qt 5
  • Notifications — registers desktop notifications
  • SearchProvider — provides a search provider for GNOME Shell or KDE Plasma
  • UserDocs — provides user documentation

These attempt to define how tightly the application is integrated with the platform, which is usually a pretty good metric of awesomeness. Of course, some applications, like Blender, are an island in terms of integration but are awesome nonetheless. We still need new ideas for this, so suggestions are very much welcome.

There are some other “run-time” kudos used as well. These are not encoded by the builder as they require some user information or are too specific to GNOME Software. These include:

  • FeaturedRecommended — someone on the GNOME Software design team chose to feature this
  • HasKeywords — there are keywords in the desktop file used for searching
  • HasScreenshots — more than one screenshot is supplied
  • MyLanguage — has a populated translation in my locale, or a locale fallback
  • PerfectScreenshots — screenshots are perfectly sized, in 16:9 aspect
  • Popular — lots of people have downloaded this (only available on Fedora)
  • RecentRelease — there has been an upstream release in the last year

When added together, the number of stars corresponds roughly to the number of kudos the application has.

You can verify the kudos your application is getting by doing something like:

killall gnome-software
gnome-software --verbose

and then navigating to the details for an application; you'll see on the console:

 id-kind:         desktop
 state:           available
 id:              blender.desktop
 kudo:            recent-release
 kudo:            featured-recommended
 kudo:            has-screenshots
 kudo:            popular
 kudo-percentage: 60

Comments (as always) are welcome, as are new ideas on how to test for awesomeness.

Interview with Pierre Geier


Could you tell us something about yourself?

There isn't really much to say. I'm Pierre Geier, just a 28-year-old German guy who codes at work and draws at home. I don't have an art degree or some fancy certificate; actually, I've never gone to an art school. Most of my skills came from just drawing and listening to people who know what they're doing.

Do you paint professionally, as a hobby artist, or both?

Right now I paint just because I like it. I've had some commissions in the past, but I can't pay my bills with them, and I think I'm not yet ready for that kind of business. Getting money for doing what you love sounds nice at first, but I don't really like the fact that there are people who are going to tell me what I should draw, like art directors.

What genre(s) do you work in?

Almost entirely portraits of women, just like the old masters. I tried some space art, which was pretty nice, but I'm going to stick with portraits for now.


Whose work inspires you most — who are your role models as an artist?

David Revoy is one of my role models; he is one of the reasons why I started to paint digitally. Back in 2013 I saw one of his time-lapse videos, "Lezard". He did that crazy stuff with Alchemy and suddenly had something to work with. So I bought a cheap Aiptek and started to draw with MyPaint, and all my drawings were awful. I switched back to traditional for a couple of months and eventually got back to digital. I read a lot of books, watched a lot of tutorials and started to listen to people who know what they are doing, like Matt from. My whole mindset about drawing was just wrong. My standards were too high; I had almost no knowledge about volume, light, shade and surfaces, and yet wanted to create awesome art. I started to draw basic props like mugs, cans or just textured balls. Then I discovered gesture drawing and with it some new techniques like "drawing with the bean".

What makes you choose digital over traditional painting?

If you draw digitally you tend to take more risks, because you know you can make mistakes without destroying your whole image. And I just like the fact that you can waste a lot of “high quality” paper and colour without paying a single cent for it.


How did you find out about Krita?

If I remember correctly I've known about Krita since 2005, I guess, when I used KDE and there was this office suite and a drawing program, which I never used. Until early 2015 I used only MyPaint and GIMP, and I've been using Krita since April 2015.

What was your first impression?

I really had trouble making it work, because "vc" was detecting my CPU wrongly. I was able to fix that by editing the qmake macro files. So my first impression was: this is so much slower than GIMP and MyPaint. Earlier, Krita was just too complicated for me: it has so many layer modes, I had no idea what to do with all those brushes, and all my colours looked like mud (hello, gamma-corrected colour profiles).

What do you love about Krita?

David's brushes, and layers like the filter layer. And of course the overall workspace; I love having my own colour palette next to my image at all times.

What do you think needs improvement in Krita? Is there anything that really annoys you?

Krita has this reference browser; the idea behind it is nice, but I never use it. I wish that little preview image would act more like a canvas. Beginners have a really hard time getting used to Krita; I think it would help a lot if Krita could switch to a minimal interface when it is started for the first time. New Krita users will probably never need that many layer modes. And the macro recording: I would love to use that to record time-lapse videos, but sadly it doesn't save information like stylus pressure.

What sets Krita apart from the other tools that you use?

The brushes and the brush engine, colour management and the filter-layer.

If you had to pick one favourite of all your work done in Krita so far, what would it be, and why?

My recent one, which I made for Nellwyn; I had so much fun drawing it. It's the one showing a rat on the girl's shoulder.

What techniques and brushes did you use in it?

The coffee technique: grab a cup or ten and start drawing. I block in a lot, and rarely use classic line art. I think it's called chiaroscuro; you can find it in the book "The Artist's Complete Guide to Drawing the Head" by William L. Maughan. I almost never use the airbrush brushes in Krita, because they destroy skin structure and make people look like plastic dolls. I used brush kit 7 by David Revoy for this. Detail and the digital sketch brush are my best friends.


Where can people see more of your work?

I upload a lot of work in progress on Facebook: or on my personal site:

Anything else you’d like to share?

Keep drawing, don’t set your standards too high. Listen to people who know what they are doing. You don’t need an art school. Draw every day.


October 11, 2015

How to get X output redirection back

X stopped working after my last Debian update.

Rather than run a login manager, I typically log in on the console. Then in my .zlogin file, I have:

if [[ $(tty) == /dev/tty1 ]]; then
  # do various things first, then:
  startx -- -dumbSched >& $HOME/.xsession-errors
fi

Ignore -dumbSched for now; it's a fix for a timing problem openbox has when bringing up initial windows. The relevant part here is that I redirect both standard output and standard error to a file named .xsession-errors. That means that if I run GIMP or firefox or any other program from a menu, and later decide I need to see their output to look for error messages, all I have to do is check that file.

But as of my last update, that no longer works. Plain startx, without the output redirection, works fine. But with the redirect, X pauses for five or ten seconds, then exits, giving me my prompt back but with messed-up terminal settings, so I have to type reset before I do anything else.

Of course, I checked that .xsession-errors file for errors, and also the ~/.local/share/xorg/Xorg.0.log file it referred me to (which is where X stores its log now that it's no longer running as root). It seems the problem is this:

Fatal server error:
(EE) xf86OpenConsole: VT_ACTIVATE failed: Operation not permitted

That wasn't illuminating, but at least it gave me a useful search keyword.

I found a fair number of people on the web having the same problem. It's related to the recent Xorg change that makes it possible to run Xorg as a regular user, not root. Not that running as a user should have anything to do with capturing standard output and error. But apparently Xorg running as a user is dependent on what sort of virtual terminal it was run from; and the way it determines the controlling terminal, apparently, is by checking stderr (and maybe also stdout).

Here's a slightly longer description of what it's doing, from the ever useful Arch Linux forums.

I'm fairly sure there are better ways of determining a process's controlling terminal than using stderr. For instance, a casual web search turned up ctermid; or you could do checks on /dev/tty. There are probably other ways.

The Arch Linux thread linked above, and quite a few others, suggest adding the server option -keeptty when starting X. The Xorg manual isn't encouraging about this as a solution:

Prevent the server from detaching its initial controlling terminal. This option is only useful when debugging the server. Not all platforms support (or can use) this option.

But it does work.
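
In my case that means changing the startx line in .zlogin to pass the extra server option (a sketch based on my snippet above):

# -keeptty makes rootless Xorg keep its controlling terminal, so
# redirecting stdout and stderr no longer kills the server
startx -- -dumbSched -keeptty >& $HOME/.xsession-errors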

I found several bugs filed already on the redirection problem. Freedesktop has a bug report on it, but it's more than a year old and has no comments or activity: Freedesktop bug 82732: rootless X doesn't start if stderr redirected.

Redhat has a bug report: Xorg without root rights breaks by streams redirection, and supposedly added a fix way back in January in their package version xorg-x11-xinit-1.3.4-3.fc21 ... though it looks like their fix is simply to enable -keeptty automatically, which is better than nothing but doesn't seem ideal. Still, it does suggest that it's probably not harmful to use that workaround and ignore what the Xorg man page says.

Debian didn't seem to have a bug filed on the problem yet (not terribly surprising, since they only enabled it in unstable a few days ago), so I used reportbug to attempt to file one. I would link to it here if Debian had a bug system that allowed searching for bugs (they do have a page entitled "BTS Search", but it gives "Internal Server Error", and the alternate Google Groups bug search doesn't find my bug), or if their bug reporting system acknowledged new bugs by emailing the submitter the bug number. In truth I strongly suspect that reportbug is a no-op and doesn't do anything with the emailed report.

But I'm not sure the Debian bug matters, since the real bug is Xorg's, and it doesn't look like they're very interested in the problem. So anyone who wants to be able to access the output of programs running under X probably needs to use -keeptty for the foreseeable future.

Update: the bug acknowledgement came in six hours later. It's bug 801529.

October 10, 2015

October Development News: Krita moves to a new repository

Lots of things are happening! Let’s start with the most important part: Krita is no longer part of the Calligra source code. Krita 2.9 will still be developed inside Calligra, and we expect to do several more releases of Krita 2.9 with bug fixes and performance improvements. In fact, we expect to be releasing Krita 2.9 regularly until Krita 3.0 is done.

Krita 3.0 is now being developed in its own repository. Clone the source code with:

git clone git://

and push to

The next step will be setting up phabricator, reviewboard, the repo viewer, the github mirror, the translation system and so on.

In the meantime, Krita 2.9.8 is building, and we hope to have a release ready by Monday. It’s a bit delayed, because of the Qt World Summit taking up some time.


Coding on all the Kickstarter features is going on apace: the Level of Detail performance feature is nearly done, and Ubuntu Linux users can test it already by installing the krita-lod-unstable packages from the Krita Lime repository. We're working on Windows packages as well. Having reached this point, Dmitry is now working on the animation feature, following up on the work Jouni did during the Google Summer of Code. It's mostly improving and polishing the user interface now, possibly followed by a performance optimization phase.

Qt5 Port

October Kiki, by Wolthera

Krita 3.0, that is to say the Qt5 port of Krita, has come far enough that it's possible to actually do some drawing and sketching with it. The Qt5 developers, especially Shawn Rutledge, did a really good job of improving Qt's support for tablets, but... there are bugs in the XCB library that need fixing. This means that tablet handling on Linux isn't perfect yet, and we haven't done a lot of testing on Windows yet. And the porting caused a lot of regressions: small bugs, visual issues, performance regressions... All of those need fixing over the next couple of months!

October 09, 2015

Qt World Summit 2015

I hadn't been able to attend Qt Dev Days since the old Nokia days... My last one was when they handed out the blue N9s. KDE, as a sponsor/partner of the Qt World Summit, had access to a number of passes and was going to have a booth, so I applied for one. I like doing booth duty and I think I'm fairly good at getting people to listen to our spiel. Here's a report from the trenches!

Monday is training day, and the KDE team didn't have passes for the trainings. Besides, it was probably more instructive to sit in the hack room and hack anyway. With about ten KDE hackers around, the room was full and stuffy, but on the other hand, it was like a mini sprint. At the end of the day, I had the kxmlgui framework in a state where I could use it for Krita again without needing to fork it.

Very gratifying was the recognition KDE got on Tuesday, during the intro talk and the keynote by Lars Knoll. We know we're doing a good job stress-testing Qt, and a good job as a community helping Qt develop and grow, and it was good to see that recognized.

Not so awesome was the need another keynote speaker felt to connect with his audience by making a little "I won't tell my wife" joke. Not terribly far over the edge, perhaps, but unpleasant nonetheless. When can we have a tech conference where speakers don't assume that their audience consists of heterosexual, white, middle-aged married men? I happen to be one of them, of course...

I didn't attend many presentations. Of the talks I did attend, both Giuseppe D'Angelo's "Integrating OpenGL with Qt Quick 2" and Olivier Goffart's "Using namespace std:" stood out: the presentations were clear, and the information went deep enough that I could learn something new. Olivier's way of engaging with the audience worked really well.

The real meat of the QtWS was working the booth, though. We had a good presentation: a nice blue tablecloth, two good posters (we need OS logos on them next year; most people thought KDE was Linux-only), a presentation running on a big screen, and videos (Plasma Phone, Calligra Gemini, Krita) running in a loop on a nice convertible Windows 8 laptop, together with some software, like Krita Gemini and Marble, to show people that KDE's Frameworks are a truly tested technology you can use to create serious real-world products. Here's a picture Dan took:

That was a story that worked: almost everyone I asked "do you know KDE?" answered "Yes, I even used to use it". So I'd go on to explain that KDE is more than the desktop they used to use: there's this set of frameworks, full of tried, tested and reliable technology. A few examples later, I'd point them at the inqlude website. KDE really doesn't have a website that 'sells' Frameworks, which is a problem; Inqlude worked, though. I could also reassure some people that writing a new desktop application with QWidgets isn't betting on a dead technology, by showing off Krita Gemini: QWidgets and QML in the same application, with a seamless switch. All in all, we reached a fair number of interested people, and we had a story that appealed and got through.

Wednesday evening my feet ached and the arm I had broken just before aKademy was very painful as well, but I was pretty satisfied. Plus, I had a stack of Kiki postcards, and pretty much everyone I handed one to smiled, no matter how tired they were!

One cannot visit Berlin and skip seeing one of the museums. That's my story, and I'm sticking to it. This time I went to the Gemäldegalerie; usually I visit the Bode Museum, which has some incredible sculpture. The Gemäldegalerie was showing a special exhibition around the painter of Adobe Illustrator's famous splash screen. The exhibition was a tad disappointing, though.

Botticelli's work was shown together with 19th- and 20th-century works inspired by him. Some of those works were really interesting, some were boring. Warhol's Venus on an Amiga 1000 is much less interesting than the Amiga 1000 itself. Other works were more interesting, like those by Antonio Donghi or Evelyn de Morgan. But that's fine: not everything needs to be riveting. More of a problem was the presentation of Botticelli's own works: a boring, long row of paintings grouped by subject, as if the exhibition designer was fresh out of inspiration. The central room, with the sole two works signed by Botticelli, was flooded in a red light that made it impossible to see anything.

Anyway, after the exhibition followed a four-kilometer walk through the galleries, with so many great paintings that a certain visual indigestion was inevitable. But I'll go again, and perhaps again. This might be my favorite for now, with the red-haired girl helping her grandmother, and the dancing pair:

Monthly Drawing Challenge

The September drawing challenge was a lot of fun, of course! But for October, we have a really good topic for you: “Tenderness”, chosen by Muses author Ramon Miranda. Rise to the challenge and get drawing!

Blenderart Mag Issue #47 now available

Welcome to Issue #47, “What’s your Passion?”

Welcome to our 10th Anniversary Issue, where we look at "What's Your Passion". Following your artistic passions helps you grow and improves your artistic skills. While some of the things you explore may not seem to connect to your previous artistic endeavors, be assured they will add to them one way or another. So here is a great opportunity to see how others follow their passions, and maybe jump-start a few of your own.

Table of Contents:

123D Tutorial
Exploring Character modeling
Interview with Reynante M
Ancient Beast Game Project
New Method for Subdivision
Interview with Ton Roosendaal

And Lots More…

October 04, 2015

Aligning images to make an animation (or an image stack)

For the animations I made from the lunar eclipse last week, the hard part was aligning all the images so the moon (or, in the case of the moonrise image, the hillside) was in the same position in every frame.

This is a problem that comes up a lot in astrophotography, where multiple images are stacked for a variety of reasons: to increase contrast, to increase detail, or to average a series of images, as well as for animations like the ones I was making this time. And of course animations can be fun in any context, not just astrophotography.

In the tutorial that follows, clicking on the images will show a full sized screenshot with more detail.

Load all the images as layers in a single GIMP image

The first thing I did was load up all the images as layers in a single image: File->Open as Layers..., then navigate to where the images are and use shift-click to select all the filenames I wanted.

[Upper layer 50% opaque to align two layers]

Work on two layers at once

By clicking on the "eyeball" icon in the Layers dialog, I could adjust which layers were visible. For each pair of layers, I made the top layer about 50% opaque by dragging the opacity slider (it's not important that it be exactly at 50%, as long as you can see both images).

Then use the Move tool to drag the top image on top of the bottom image.

But it's hard to tell when they're exactly aligned

"Drag the top image on top of the bottom image": easy to say, hard to do. When the images are dim and red like that, and half of the image is nearly invisible, it's very hard to tell when they're exactly aligned.


Use a Contrast display filter

What helped was a Contrast filter. View->Display Filters... and in the dialog that pops up, click on Contrast, and click on the right arrow to move it to Active Filters.

The Contrast filter changes the colors so that dim red moon is fully visible, and it's much easier to tell when the layers are approximately on top of each other.


Use Difference mode for the final fine-tuning

Even with the Contrast filter, though, it's hard to see when the images are exactly on top of each other. When you have them within a few pixels, get rid of the contrast filter (you can keep the dialog up but disable the filter by un-checking its checkbox in Active Filters). Then, in the Layers dialog, slide the top layer's Opacity back to 100%, go to the Mode selector and set the layer's mode to Difference.

In Difference mode, you only see differences between the two layers. So if your alignment is off by a few pixels, it'll be much easier to see. Even in a case like an eclipse where the moon's appearance is changing from frame to frame as the earth's shadow moves across it, you can still get the best alignment by making the Difference between the two layers as small as you can.

Use the Move tool and the keyboard: left, right, up and down arrows move your layer by one pixel at a time. Pick a direction, hit the arrow key a couple of times and see how the difference changes. If it got bigger, use the opposite arrow key to go back the other way.

When you get to where there's almost no difference between the two layers, you're done. Change Mode back to Normal, make sure Opacity is at 100%, then move on to the next layer in the stack.

It's still a lot of work. I'd love to find a program that looks for circular or partially-circular shapes in successive images and does the alignment automatically. Someone on GIMP suggested I might be able to write something using OpenCV, which has circle-finding primitives (I've written briefly before about SimpleCV, a wrapper that makes OpenCV easy to use from Python). But doing the alignment by hand in GIMP, while somewhat tedious, didn't take as long as I expected once I got the hang of using the Contrast display filter along with Opacity and Difference mode.

Creating the animation

Once you have your layers, how do you turn them into an animation?

The obvious solution, which I originally intended to use, is to save as GIF and check the "animated" box. I tried that -- and discovered that the color errors you get when converting an image to indexed make a beautiful red lunar eclipse look absolutely awful.

So I threw together a Javascript script to animate images by loading a series of JPEGs. That meant that I needed to export all the layers from my GIMP image to separate JPG files.

GIMP doesn't have a built-in way to export all of an image's layers to separate new images. But that's an easy plug-in to write, and a web search found lots of plug-ins already written to do that job.

The one I ended up using was Lie Ryan's Python script in How to save different layers of a design in separate files; though a couple of others looked promising (I didn't try them), such as gimp-plugin-export-layers and save_all_layers.scm.

You can see the final animation here: Lunar eclipse of September 27, 2015: Animations.

October 03, 2015

The Art of Language Invention

"Careful now. I must say this right. Upe, I have killed your husband. There's a gold hairpin in his chest. If you need more soulprice, ask me. Upe, bing keng ... No, that doesn't start right. Wait, I should say, 'Upe, bing wisyeye keng birikyisde... And then say sorry. What's sorry in this stupid language? Don't know. Bing biyititba. I had to... She can have the gold hairpin, and the other one, that should be enough. I hope she didn't really love him."

This is a tiny fragment from the novel I was writing when I started hacking on Krita... I finished the last chapter last year, and added a new last chapter this year. The context? Yidenir, one of the protagonists, an apprentice sorcerer, is left alone by her master in a Barushlani camp, where she lives among the women, in the inner courtyard. When she learns she has been abandoned, she goes to the men's side of the tent, argues with the warlord and, to make sure he understands she's a sorcerer, kills his right-hand man by ramming one of her hairpins into his chest. Then she goes back and tries to figure out how to tell that henchman's wife that she has killed her husband. A couple of weeks isn't long enough to learn Den Barush, as Barushlani is called in Denden (where 'barush' is a form of the word for 'mountain').

Together with the novel, I wrote parts of a grammar of Barushlani. I had written a special application to collect language data, called Kura, and a system that used docbook, fop and python to combine language data and descriptive text into a single grammar. I was a serious conlanger. Heck, I was a serious linguist, having had an article published in Linguistics of the Tibeto-Burman Area.

But conlanging is how I started. I hadn't read Tolkien (much; the local library only had Volume II of The Lord of the Rings, in a Dutch translation), and I didn't know it was possible to invent a language. But around 1981 I started learning French, English and German, and with French came a grammar: a book that put down the rules of a language in an orderly way, very attractively too, I thought. And my mind was fizzing with this invented world, full of semi-hemi-demi-somewhat humans that I was sculpting in wax. And drawing. And trying to figure out the music of. My people needed a language!

So I started working on Denden. It's no coincidence that Denden has pretty much no logical phonology: over the years I found I had gotten sentimentally attached to words I invented early on, so while the grammar was easy to rewrite and make more interesting, the words had to stay. More or less.

Then I started studying Chinese, found some like-minded people, like Irina, and founded the Society for Linguafiction (conlang wasn't a word back then). I got into a row with the Leyden Esperantist Marc van Oostendorp, who felt that languages should only be invented from idealistic motives, not aesthetic ones. And I got into a memorable discussion in a second-hand bookshop when a philosopher told me smugly that I might have imagined I had invented a language, but that I was wrong, because a) you cannot invent a language and b) an invented language is not a language.

I got into the community centered around the CONLANG mailing list. I did a couple of relays, a couple of translations, and then I started getting ambitious about my world: I started working on the first two novels. And then, of course, I got side-tracked a little, first by the rec.arts.sf.composition usenet group, where people could discuss their writing, and later on by Krita.

These days, when we need words and names for our long-running RPG campaign, we use Nepali for Aumen Sith and Persian for Iss-Peran. Only Valdyas and Velihas have proper native languages. The shame!!

And apart from RPGs and now and then writing a bit of fiction, I had more or less forgotten about my conlanging. The source code for Kura seems to be lost; I need to check some old CD-Rs, but I'm not very hopeful. The setup I used to build the grammars is pretty much unreconstructable, and the word-processor documents that hold my oldest data don't load correctly anymore. (I did some very weird hacks back then, including using a hex editor to make a Denden translation of WordPerfect 4.2.)

Until today, when young whippersnapper David J. Peterson's book arrived, entitled "The Art of Language Invention". Everything came back... The attempt to make sense of Yaguello's Les Fous du Langage (crap, but there wasn't much else). Trying to convince other people that no, I wasn't crazy; trying to explain to auxlangers that, yes, doing this for fun was a valid use of my time. The Tolkienian sensation of having sixteen drafts of a dictionary and no longer knowing which version is correct. And, though it's not in David's book... Telling your lover in her or your own language that you love her, and writing erotic poetry in that language, too. Marrying at the town hall wearing t-shirts printed with indecent texts in different conlangs, each white front with black letters shouting defiance at the frock-coated marriage registrar. (I don't believe in civil marriage.)

Reading the book made me realize that, of course, the internet has changed what it means to be a conlanger. We started out with literally stenciled fanzines, swapping fanzine for fanzine, before moving on to actual copiers. Quietly not telling my Nepali/Hayu/Dumi/Limbu/comparative linguistics teacher what I was actually assembling the library of Cambridge books on language (the red and green series!) for.

Linguistically, David's book doesn't have much to offer me, of course. I adapted Mark Rosenfelder's Perl scripts to create a diachronically logical system of sound changes so I could generate the Barushlani vocabulary. I know, or maybe knew, about phonology, morphology, syntax and semantics. I made my first fonts with Corel Draw in the early nineties. I had to hack around to get IPA into Word 2. But it was a fun read, and it brought back some good memories.

And also some pet peeves... Dothraki! I'm not a Game of Thrones fan; I long for a nice, fun, cosy fantasy series where not everyone wants to kill, rape and enslave everyone else. I found the books unreadable and the television series unwatchable. And... Dothraki. David explains how he used the words and names the author had sprinkled around the text as the basis for the language. Good job on his side. But those words! Martin's concept of "exotic language" basically boils down to "India is pretty exotic!" It reads like random gleanings from the Linguistic Survey of India, or rather, from those stories in the Boy's Own Library that deal with Hindoostan. Which is, no doubt, where the 'double' vowels come from. Khaleesi's ee is the same ee as in Victorian spellings of baksheesh and so on. Harumph.

BUT if the connection with the television series helps sell this book and gets more people having fun conlanging, then it's all worth it! I'm going to see if I can revive that Perl script, and maybe make a nice language for the people living in the lowlands west of the mountain range that shelters Broi, the capital of Emperor Rordal, or maybe finally do something about Vustlani, the language of his wife, Chazalla.

Let's go back to Yidenir, doing the laundry with poor disfigured Tsoy... Tsoy wants to sing!

"Yidenir, ngaimyibge?" Another fierce scowl.

"What did you say? -- do I sing? Er..." Yidenir was silent for a moment. Was this girl making fun of her? Or was she just trying to be friendly?

"Sadrabam aimyibgyi ingyot. Aimyibgyi ruysing ho," Tsoy explained patiently.

"Er, singing, is good, er allowed? when doing laundry? Oh, yes, I can sing... Denden only, is that all right? Er, aimyipkyi denden?"


"All right, then... Teach you a bit of Denden, too? Ngsahe Denden bingyop?" Yidenir offered.

Call to translators and testers

We plan to release Stellarium 0.14.0 in the final week of October.

There are many new strings to translate in this release, because of the many changes in sky cultures, landscapes and the application itself. If you can assist with translation into any of the 134 languages which Stellarium supports, please go to Launchpad Translations and help us out:

Testing the new features will also be very helpful in preparing the release.

Thank you!

October 01, 2015

Lunar eclipse animations

[Eclipsed moon rising] The lunar eclipse on Sunday was gorgeous. The moon rose already in eclipse, and was high in the sky by the time totality turned the moon a nice satisfying deep red.

I took my usual slipshod approach to astrophotography. I had my 90mm f/5.6 Maksutov lens set up on the patio with the camera attached, and I took a shot whenever it seemed like things had changed significantly, adjusting the exposure if the review image looked like it might be under- or overexposed, and occasionally attempting to refocus. The rest of the time I spent socializing with friends, trading views through other telescopes and binoculars, and enjoying an apple tart a la mode.

So the images I ended up with aren't all they could be -- not as sharply focused as I'd like (I never have figured out a good way of focusing the Rebel on astronomy images) and rather grainy.

Still, I took enough images to be able to put together a couple of animations: one of the lovely moonrise over the mountains, and one of the sequence of the eclipse through totality.

Since the 90mm Mak was on a fixed tripod, the moon drifted through the field and I had to adjust it periodically as it drifted out. So the main trick to making animations was aligning all the moon images. I haven't found an automated way of doing that, alas, but I did come up with some useful GIMP techniques, which I'm in the process of writing up as a tutorial.

Once I got the images all aligned as layers in a GIMP image, I saved them as an animated GIF -- and immediately discovered that the color error you get when converting to an indexed GIF image loses all the beauty of those red colors. Ick!

So instead, I wrote a little Javascript animation function that loads images one by one at fixed intervals. That worked a lot better than the GIF animation, plus it lets me add a Start/Stop button.

You can view the animations (or the source for the javascript animation function) here: Lunar eclipse animations

Secrets of Krita: the Third Krita Training DVD

Comics with Krita author Timothée Giet is back with his second training DVD: Secrets of Krita, a collection of videos containing 100 lessons about the most important things to know when using Krita. In 10 chapters, you will discover, with clear examples, all the essential and hidden features that make Krita so powerful and awesome! The DVD is in English, with English subtitles.

Secrets of Krita – DVD (€29,95)

Secrets of Krita – Download (€29,95)

Table of Contents

04-Display Colors
06-Canvas-Only Mode
07-Canvas Input
08-Other Shortcuts
10-Advanced Color Selector

2-Generic Brush Settings
01-Popup Palette
02-Toolbar Shortcuts
03-Dirty Presets
05-Precision Setting
06-Soft Brush
07-Build-Up And Wash
08-Dynamic Settings
09-Lock Setting
10-Smoothing mode

3-Specific Brush Settings
01-Pixel Brush: Color Dynamics
02-Pixel Brush: Pixel Art Presets
03-Color Smudge Brush: Overlay Mode
04-Sketch Brush: How It Works
05-Bristle Brush: Ink Depletion
06-Shape Brush: Speed And Displace
07-Spray Brush: Shapes And Dynamics
08-Hatching Brush: Hatching Options
09-Clone Brush: Shortcuts And Modes
10-Deform Brush: Deformation modes

01-Background Modes
02-Layer Groups
03-Inherit Alpha
04-Erase Mode
05-Filter Layer And Mask
06-Layer Conversion
07-Split Alpha
08-Split Layer
09-File Layer
10-Layer Color space

01-Selection Operations
03-Selection View
04-Global Selection Mask
05-Local Selection Mask
06-Selection Painting
07-Select Opaque
08-Contiguous Selection
09-Vector Selection
10-Convert Selection

01-Crop Tool
02-Pseudo-Infinite Canvas
03-Transform Tool
04-Transform Tool – Free Transform
05-Transform Tool – Perspective
06-Transform Tool – Warp
07-Transform Tool – Cage
08-Transform Tool – Liquify
09-Transform A Group
10-Recursive Transform

01-Assistant Magnetism
02-Vanishing Point
06-Concentric Ellipse
07-Parallel Ruler
09-Infinite Ruler
10-Fish Eye Point

01-Filter Presets
02-Dodge And Burn
04-Index Colors
05-Color To Alpha
06-Alpha Curve
07-Color Transfer
09-Layer Styles

9-Vector Tools
01-Vector Drawing
02-Vector Editing
03-Stacked Shapes
05-Stroke Shapes
07-Artistic Text
08-Multiline Text
09-Pattern Editing
10-Gradient Editing

01-Mirror view
02-Wrap Around mode
03-Mirror Painting
05-Save Incremental
06-Save Group Layers
08-Task Sets
09-Color Selectors
10-Command Line

September 28, 2015

Interview with Anusha Bhanded


Could you tell us something about yourself?

My name is Ana, I live in India and I love doing digital art, at least as a beginner. I’m 13 years old.

Do you paint professionally, as a hobby artist, or both?

Well, I’m just a hobby artist, can’t say “artist” but I like art.

Whose work inspires you most — who are your role models as an artist?

Actually my role model, not totally as an artist, but it's Scott Cawthon and team, the creator of FNAF. I like the art he did in the game and the effects. Also Markus Persson and team, the pixel artist.

What makes you choose digital over traditional painting?

As per my thinking I am kind of good at painting and art stuff, even my parents and friends say so, and I loved to do everything I could do on any gadget. I had this bored feeling with a pencil and a paper, so I started digital painting! It's fun to use Krita!


What do you love about Krita?

As I loved to do digital painting I surfed the internet for good apps. All of them were great but they were not free… well, I ended up with Krita! My first favourite thing about Krita is that it’s free! That’s good because there are so many young artists out there who deserve to use any free available programs as good as Krita. Krita has TONS of awesome brushes and you can use a variety of them!

How did you find out about Krita?

One day I was surfing on the internet for a good painting tool. Most people said paint tool SAI was the best. I even tried to download the cracked version but that did not work, and I ended up using an awesome program called Krita! =D

September 27, 2015

Make a series of contrasting colors with Python

[PyTopo with contrasting color track logs] Every now and then I need to create a series of contrasting colors. For instance, in my mapping app PyTopo, when displaying several track logs at once, I want them to be different colors so it's easy to tell which track is which.

Of course, I could make a list of five or ten different colors and cycle through the list. But I hate doing work that a computer could do for me.

Choosing random RGB (red, green and blue) values for the colors, though, doesn't work so well. Sometimes you end up getting two similar colors together. Other times, you get colors that just don't work well, because they're so light they look white, or so dark they look black, or so unsaturated they look like shades of grey.

What does work well is converting to the HSV color space: hue, saturation and value. Hue is a measure of the color -- that it's red, or blue, or yellow green, or orangeish, or a reddish purple. Saturation measures how intense the color is: is it a bright, vivid red or a washed-out red? Value tells you how light or dark it is: is it so pale it's almost white, so dark it's almost black, or somewhere in between? (A related model, called HSL, substitutes Lightness for Value, but the concept is similar.)

[GIMP color chooser] If you're not familiar with HSV, you can get a good feel for it by playing with GIMP's color chooser (which pops up when you click the black Foreground or white Background color swatch in GIMP's toolbox). The vertical rainbow bar selects Hue. Once you have a hue, dragging up or down in the square changes Saturation; dragging right or left changes Value. You can also change one at a time by dragging the H, S or V sliders at the upper right of the dialog.

Why does this matter? Because once you've chosen a saturation and value, or at least ensured that saturation is fairly high and value is somewhere in the middle of its range, you can cycle through hues and be assured that you'll get colors that are fairly different each time. If you had a red last time, this time it'll be a green, or yellow, or blue, depending on how much you change the hue.

How does this work programmatically?

PyTopo uses Python-GTK, so I need a function that takes a gtk.gdk.Color and chooses a new, contrasting Color. Fortunately, gtk.gdk.Color already has hue, saturation and value built in. Color.hue is a floating-point number between 0 and 1, so I just have to choose how much to jump. Like this:

def contrasting_color(color):
    '''Returns a gtk.gdk.Color of similar saturation and value
       to the color passed in, but a contrasting hue.
       gtk.gdk.Color objects have a hue between 0 and 1.
    '''
    if not color:
        # In PyTopo this runs as a method; fall back to its default color.
        return self.first_track_color

    # How much to jump in hue:
    jump = .37

    # Wrap around the hue circle if the jump takes us past 1:
    return gtk.gdk.color_from_hsv((color.hue + jump) % 1.0,
                                  color.saturation,
                                  color.value)

What if you're not using Python-GTK?

No problem. The first time I used this technique, I was generating Javascript code for a company's analytics web page. Python's colorsys module works fine for converting red, green, blue triples to HSV (or a variety of other colorspaces), which you can then use in whatever graphics package you prefer.
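
For instance, here's a minimal sketch of the same hue-jumping trick using only the standard library (the jump, saturation and value below are just illustrative choices, not anything from PyTopo):

import colorsys

# Generate n RGB colors (0-1 floats) with well-separated hues by
# jumping around the hue circle at fixed saturation and value.
def contrasting_colors(n, jump=0.37, saturation=0.9, value=0.8):
    hue = 0.0
    colors = []
    for _ in range(n):
        colors.append(colorsys.hsv_to_rgb(hue, saturation, value))
        hue = (hue + jump) % 1.0
    return colors

# Print them as HTML-style hex strings:
for r, g, b in contrasting_colors(6):
    print('#%02x%02x%02x' % (int(r * 255), int(g * 255), int(b * 255)))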

September 25, 2015

Philips Wireless, modernised

I've wanted a stand-alone radio in my office for a long time. I've been using a small portable radio, but it ate batteries quickly (probably a 4-pack of AAs for a bit less than a work week's worth of listening), changing stations was cumbersome (hello, FM dials) and the speaker was a bit teeny.

A couple of years back, I had a Raspberry Pi-based computer on pre-order (the Kano, highly recommended for kids, and beginners) through a crowd-funding site. So I scoured « brocantes » (imagine a mix of car boot sale and antiques fair, in France, with people emptying their attics) in search of a shell for my small computer. A whole lot of nothing until my wife came back from a week-end at a friend's with this:

Photo from Radio Historia

A Philips Octode Super 522A, from 1934, when SKUs were as superlative-laden and impenetrable as they are today.

Let's DIY

I started by removing the internal parts of the radio, without actually turning it on. When you get such old electronics, they need to be checked thoroughly before being plugged in, and as I know nothing about tube radios, I preferred not to. And FM didn't exist when this came out, so I'm not sure what I would have been able to do with it anyway.

Roomy, and dirty. The original speaker was removed, the front buttons didn't have anything holding them any more, and the nice backlit screen went away as well.

To replace the speaker, I did quite a lot of research, looking for speakers designed to be embedded, rather than getting a boxed speaker that I would need to extricate from its container. Visaton makes speakers that can be integrated into ceilings, vehicles, etc. That also allowed me to choose one that had a good enough range, and would fit into the one hole in my case.

To replace the screen, I settled on an OLED screen that I knew would work without too much work with the Raspberry Pi, a small Adafruit SSD1306. It needed only a small amount of soldering, which was within my skill level.

It worked, it worked!

Hey, soldering is easy. But because of the size of the speaker I selected and the output power of the RPi, I needed an amp. The Velleman MK190 kit was cheap (€10), and should have been able to work with the 5V USB power supply I planned to use. Except that the schematics are really not good enough for an electronics beginner. I spent a couple of afternoons verifying, checking the Internet for alternate instructions, and re-doing the solder points, to no avail.

'Sup Tiga!

After so much wasted time, I got a cheap car amp with a power supply. You can probably find one cheaper.

Finally, I got another Raspberry Pi and an SD card, so that the Kano, with its super wireless keyboard, could find a better home (it went to my godson, who seemed to enjoy the early game of Pong, and being a wizard).

Putting it all together

We'll need to hold everything together. I got a bit of help from somebody with a Dremel tool for the piece of wood that holds the speaker, and another piece that sticks three stove bolts out of the front, to hold the original tuning, mode and volume buttons.

A real joiner

I fast-forwarded the machine by a couple of years with a « Philips » figure-of-8 plug at the back, so the machine's electrics would be well separated from the outside.

Screws into the side panel for the amp, blu-tack to hold the OLED screen for now, RPi on a few leftover bits of wood.


My first attempt at getting something that I could control on this small computer was lcdgrilo. Unfortunately, I would have had to write a Web UI for it (remember, my buttons are just stuck on, for now at least), and probably port the SSD1306 OLED screen's driver from Python, so it was not a good fit.

There's no proper Fedora support for Raspberry Pis, and while one can use a nearly stock Debian with a few additional firmware files on Raspberry Pis, Fedora chose not to support that slightly older SoC at all, which is obviously disappointing for somebody working on Fedora as a day job.

Looking around for other radio retrofits (there are plenty of quality ones on the Internet) and for various connected-speaker backends, I found PiMusicBox. It's a Debian variant with Mopidy built in, and a very easy initial setup: edit a settings file on the SD card image, boot, and access the interface via a browser. Tada!

Once I had tested playback, I lowered the amp's volume to nearly zero, raised the web UI's volume to the maximum, and then raised the amp's volume to the maximum bearable for the speaker. As I won't be able to access the amp's dial, we'll have this software-only solution.

Wrapping up

I probably spent a longer time looking for software and hardware than actually making my connected radio, but it was an enjoyable couple of afternoons of work, and the software side isn't quite finished.

First, in terms of hardware support, I'll need to make this OLED screen work; how lazy of me. The audio setup currently only feeds the right speaker, and I'd like both radio and AirPlay streams to be downmixed into it.

Secondly, Mopidy supports plugins to extend its sources, and it uses GStreamer, so it would be a good fit for Grilo, making it easier for Mopidy users to add sources through Lua.

Do note that the Raspberry Pi I used is a B+ model. For B models, it's recommended to use a separate DAC because of the bad audio quality, even if the B+ isn't that much better. Testing out the HDMI output with an HDMI-to-VGA+jack adapter might be a way to cut costs as well.

Possible improvements could include making the front-facing dials work (that's going to be a tough one), or adding RFID support, so I can wave items in front of it to turn it off, or play a particular radio.

In all, this radio cost me:
- 10 € for the radio case itself
- 36.50 € for the Raspberry Pi and SD card (I already had spare power supplies, and a supported Wi-Fi dongle)
- 26.50 € for the OLED screen plus various cables
- 20 € for the speaker
- 18 € for the amp
- 21 € for various cables, bolts, planks of wood, etc.

I might also count the 14 € for the soldering iron, the 10 € for the Velleman amp, and about 10 € for adapters, cables, and supplies I didn't end up using.

So between 130 and 150 €, and a number of afternoons, but at the end, a very flexible piece of hardware that didn't really stretch my miniaturisation skills, and a completely unique piece of furniture.

In the future, I plan on playing with making my own 3-button keyboard, and making a remote speaker to plug in the living room's 5.1 amp with a C.H.I.P computer.

Happy hacking!

September 24, 2015

Done Porting!

Technically, we're done porting Krita to Qt5 and KDE Frameworks 5. That is to say, everything builds, links and Krita runs, and there are no dependencies on deprecated libraries or classes anymore. In the process, the majority of Calligra's libraries and plugins were also ported. It was not an easy process, and if there hadn't been sponsorship available for the porting work, it would not have happened. Not yet, in any case. It's something I've heard from other KDE project maintainers, too: without sponsorship to work on the port full-time, projects might have died.

Krita wouldn't have died, but looking back at the previous month's work, I wonder how I didn't go crazy, in a loud way. I spent four, five days a week on porting and fixing the porting documentation, and then one or two days on trying to keep the bug count down for the 2.9 branch. As Kalle noted, porting isn't hard: it's very mechanical work that, despite all the scripts, still needs to be done by a human, one who can make judgement calls -- and one who isn't afraid of making mistakes. Lots of mistakes: it's unavoidable. Most of them seem to be fixed now, though. It's like running a racecourse in blinkers.

So, what were the hardest problems?

The winners, ex aequo, are KStandardDirs to QStandardPaths and KUrl to QUrl.

The latter is weird because, actually, we shouldn't be using QUrl at all. The reason KUrl was used in KOffice, now Calligra, is for handling network-transparent file access. That's something I do use in Kate or KWrite, when writing blogs (my blog system is a bunch of nineties Perl scripts), but which I am sure not a single Krita user is actually using. It's too slow and dangerous, with big files, to depend on; it's superseded by Dropbox, OneDrive, Google Drive, ownCloud and the rise of the cheap NAS. Not to mention that only Plasma Desktop users have access to it, because on all other platforms we use native file dialogs, which don't give access to remote locations. All the QUrls we use get created from local files and end up being translated to local filenames.

KStandardDirs is more interesting. KStandardDirs was actually two things in one: a way to figure out the paths where the system and the application can store stuff like configuration files, and a way to build a kind of resources database. You'd define a resource type, say "brush", and add a bunch of locations where brushes can be found. For instance, Krita looks for brushes in its own brushes folder, but also in the shared 'create project' brushes folder, and it could even look in GIMP's brushes folder.

The resources part isn't part of QStandardPaths, but is used really heavily in Calligra. The central place where we load resources, KoResourceServer, just couldn't be ported to QStandardPaths: we'd have to duplicate the code for every resource type. But there's no problem that cannot be solved with another layer of indirection and a lump of putty, so I created a KoResourcePaths class that can handle the resource aliases. I'm not totally convinced I ironed out all the bugs, but Krita starts and all resources are being loaded.
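
To make the alias idea concrete, here is a toy sketch, in Python for brevity and emphatically not Krita's actual C++ API, of the kind of lookup such a class performs:

import os

# One resource type maps to several relative search folders, which are
# then looked up under every standard data root.
class ResourcePaths:
    def __init__(self):
        self.aliases = {}   # resource type -> list of relative folders

    def add_resource_type(self, rtype, *folders):
        self.aliases.setdefault(rtype, []).extend(folders)

    def find_all(self, rtype, roots):
        '''Return every existing file for a resource type.'''
        hits = []
        for root in roots:
            for folder in self.aliases.get(rtype, []):
                path = os.path.join(root, folder)
                if os.path.isdir(path):
                    hits.extend(os.path.join(path, name)
                                for name in os.listdir(path))
        return hits

paths = ResourcePaths()
paths.add_resource_type("brush",
                        "krita/brushes", "create/brushes", "gimp/brushes")
print(paths.find_all("brush",
                     ["/usr/share", os.path.expanduser("~/.local/share")]))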

There were a few more classes that were deprecated, the foremost being KDialog. There are only a hundred or so places in Calligra where that class was used, and here the best solution seemed to be to fork KDialog into KoDialog. Problem solved -- and honestly, I don't see why the class had to be deprecated in the first place.

Now that all the basic porting has been done, it's time to figure out what is broken, and why. Here's a short list:

  • Loading icons. Right now, I need to patch out the use of the icon cache to load icons. But in any case I am still considering putting the icons in the executable as resources, because that makes packaging on Windows much easier.
  • Qt5's SVG loader had trouble with our svg icons; that was fixed by cleaning up the icons.
  • OpenGL was a huge task, needing a nearly complete rewrite -- it works now on my development system, but I'm not sure about others.
  • Qt5's tablet support is much better, but now that we use that instead of our own tablet support, we've lost some performance (work is ongoing) and some things have changed meaning, which means that the scratchpad and the popup palette are broken for tablet users.
  • In general, the user interface feels sluggish: even things like the preferences dialog's widgets are slow to react.

And when all that is fixed, there is more to do: make new build environments for Windows (hopefully we can start using MSVC 2015 now) and OSX, see about dropping superfluous dependencies on things like DBus, and then...

Testing, testing and testing!

But I am getting confident that Krita 3.0 could be something we can let people try and test this year. And here is, for your delectation, a delectable screenshot:

Vote in the Kiki Drawing Challenge Contest

This month, we ran a special edition of the monthly drawing contest: draw Kiki, and get your work on a Kickstarter t-shirt! The contest has now drawn to a close, and it's time to vote.

So… Vote for your favorite Kiki!

Here's, as a teaser, Tyson Tan's entry, entered hors concours:


(Which, incidentally, also is going to be the splash screen for the next release.)

September 23, 2015

GNOME 3.18, here we go

As I'm known to do, a focus on the little things I worked on during the development cycle of the just-released GNOME 3.18.

Hardware support

The accelerometer support in GNOME now uses iio-sensor-proxy. This daemon also now supports ambient light sensors, which Richard used to implement the automatic brightness adjustment, and compasses, which are used in GeoClue and gnome-maps.

In kernel-land, I've fixed the detection of some Bosch accelerometers and added support for another Kionix one, as used in some tablets.

I've also added quirks for out-of-the-box touchscreen support on some cheaper tablets using the goodix driver, and started reviewing a number of patches for that same touchscreen.

With Larry Finger, of Realtek kernel drivers fame, we've carried on cleaning up the Realtek 8723BS driver used in the majority of Windows-compatible tablets, in the Endless computer, and even in the $9 C.H.I.P. Linux computer.

Bluetooth UI changes

The Bluetooth panel now has better « empty states », explaining how to get Bluetooth working again when a hardware killswitch is used, or when it's been turned off by hand. We've also made receiving files through OBEX Push easier, and built it into the Bluetooth panel, so that you won't forget to turn it off when done, and won't have trouble finding it, as is the case for settings that aren't used often.


GNOME Videos has seen some work, mostly in the stabilisation and bug-fixing department; most of those fixes also landed in the 3.16 version.

We've also been laying the groundwork in grilo for writing ever less code in C for plugin sources. Grilo Lua plugins can now use gnome-online-accounts to access keys for specific accounts, which we've used to re-implement the Pocket videos plugin, as well as the cover art plugin.

All those changes should allow implementing OwnCloud support in gnome-music in GNOME 3.20.

My favourite GNOME 3.18 features

You can call them features, or bug fixes, but the overall improvements in the Wayland and touchpad/touchscreen support are pretty exciting. Do try them out when you get a GNOME 3.18 installation, and file bugs; it's coming soon!

Talking of bug fixes, this one means that I don't need to put in my password by hand when I want to access work-related resources. Connect to the VPN, and I'm authenticated to Kerberos.

I've also got a particular attachment to the GeoClue GPS support through phones. This allows us to have more accurate geolocation support than any other desktop environment around.

A few for later

The LibreOfficeKit support that will be coming to gnome-documents will help us get support for EPUBs in gnome-books, as it will make it easier to plug in previewers other than the Evince widget.

Victor Toso has also been working through my Grilo bugs to allow us to implement a preview page when opening videos. Work has already started on that, so fingers crossed for GNOME 3.20!

Pirituba services center

This is a project we did for a small services center in São Paulo. The ground floor hosts three big stores, with different ceiling heights depending on their position on the site and their entrance level, and two upper floors of office space, open plan and dividable according to future needs. This project is...

September 22, 2015

Average Book Covers and a New (official) GIMP Website (maybe)

A little while back I had a big streak of averaging anything I could get my hands on. I am still working on a couple of larger averaging projects (here's a small sneak peek - guess the movie?):

I'm trying out visualizing a movie by mean-averaging all of its cuts. Turns out movies have way more cuts than I thought - so it might be a while until I finish this one... :)

On the other hand, here's something neat that was recently finished...

JungleBook: Simple Kindle Ebook Cover Analysis

Jason van Gumster just posted this morning about a neat project he'd been toying with, along similar lines to the Netflix Top 50 Covers by Genre, but taken to a deeper level. He's written code to average the top 50 ebook covers on Amazon by genre:

Top 50 Kindle Covers by Jason van Gumster

By itself this is really pretty (to me - not sure if anyone else likes these things as much as I do), but Jason takes it further by providing some analysis and commentary on the resulting images in the context of how ebooks sell and appeal visually to people.
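
If you're curious, the averaging technique itself is simple. Here's a minimal sketch of mean-averaging a set of images with NumPy and Pillow (the filenames and size are made-up values, not Jason's actual code):

import numpy as np
from PIL import Image

# Accumulate the covers in floating point, then divide by the number
# of images to get the per-pixel mean.
def average_images(paths, size=(300, 450)):
    acc = np.zeros((size[1], size[0], 3), dtype=np.float64)
    for p in paths:
        img = Image.open(p).convert('RGB').resize(size)
        acc += np.asarray(img, dtype=np.float64)
    acc /= len(paths)
    return Image.fromarray(acc.astype(np.uint8))

average_images(['cover1.jpg', 'cover2.jpg', 'cover3.jpg']).save('average.png')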

I highly recommend you visit Jason's post and read the whole thing (it's not too long). It's really neat!

The GIMP Website

I had this note on my to-do list for ages to tinker with the GIMP website. I finally got off my butt and started a couple of weeks ago. I did a quick mockup to get a feel for the overall direction I wanted to head:

I've been hacking at it for a couple of weeks now and I kind of like how it's turning out. I'm still in the process of migrating old site content and making sure that legacy URIs aren't going to change. It may end up being a new site for GIMP. It also may not, so please don't hold your breath... :)

Here's where I am at the moment for a front page:

static GIMP page

Yes, that image is a link. It will lead you to the page as I build it. See? It's like a prize for people who bother to read to the end! Feel free to hit me up with ideas or if you want to donate any artwork for the new page while I build it. I can't promise that I'll use anything anyone sends me, but if I do I will be sure to properly attribute it! (Please consider a permissive license if you decide to send me something.)

September 21, 2015

The meaning of "fetid"; Albireo; and musings on variations in sensory perception

[Fetid marigold, which actually smells wonderfully minty] The street for a substantial radius around my mailbox has a wonderful, strong minty smell. The smell is coming from a clump of modest little yellow flowers.

They're apparently Dyssodia papposa, whose common name is "fetid marigold". It's in the sunflower family, Asteraceae, not related to Lamiaceae, the mints.

"Fetid", of course, means "Having an offensive smell; stinking". When I google for fetid marigold, I find quotes like "This plant is so abundant, and exhales an odor so unpleasant as to sicken the traveler over the western prairies of Illinois, in autumn." And nobody says it smells like mint -- at least, googling for the plant and "mint" or "minty" gets nothing.

But Dave and I both find the smell very minty and pleasant, and so do most of the other local people I queried. What's going on?

[Fetid goosefoot] Another local plant which turns strikingly red in autumn has an even worse name: fetid goosefoot. On a recent hike, several of us made a point of smelling it. Sure enough: everybody except one found it minty and pleasant. But one person on the hike said "Eeeeew!"

It's amazing how people's sensory perception can vary. Everybody knows how people's taste varies: some people perceive broccoli and cabbage as bitter while others love the taste. Some people can't taste lobster and crab at all and find Parmesan cheese unpleasant.

And then there's color vision. Every amateur astronomer who's worked public star parties knows about Albireo. Also known as beta Cygni, Albireo is the head of the constellation of the swan, or the foot of the Northern Cross. In a telescope, it's a double star, and a special type of double: what's known as a "color double", two stars which are very different colors from each other.

Most non-astronomers probably don't think of stars having colors. Mostly, color isn't obvious when you're looking at things at night: you're using your rods, the cells in your retina that are sensitive to dim light, not your cones, which provide color vision but need a fair amount of light to work right.

But when you have two things right next to each other that are different colors, the contrast becomes more obvious. Sort of.

[Albireo, from Jefffisher10 on Wikimedia Commons] Point a telescope at Albireo at a public star party and ask the next ten people what two colors they see. You'll get at least six, more likely eight, different answers. I've heard blue and red, blue and gold, red and gold, red and white, pink and blue ... and white and white (some people can't see the colors at all).

Officially, the bright component is actually a close binary, too close to resolve as separate stars. The components are Aa (magnitude 3.18, spectral type K2II) and Ac (magnitude 5.82, spectral type B8). (There doesn't seem to be an Albireo Ab.) Officially that makes Albireo A's combined color yellow or amber. The dimmer component, Albireo B, is magnitude 5.09 and spectral type B8Ve: officially it's blue.

But that doesn't make the rest of the observers wrong. Color vision is a funny thing, and it's a lot more individual than most people think. Especially in dim light, at the limits of perception. I'm sure I'll continue to ask that question when I show Albireo in my telescope, fascinated with the range of answers.

In case you're wondering, I see Albireo's components as salmon-pink and pale blue. I enjoy broccoli and lobster but find bell peppers bitter. And I love the minty smell of plants that a few people, apparently, find "fetid".

WebKitGTK+ 2.10

HTTP Disk Cache

WebKitGTK+ already had an HTTP disk cache implementation, simply using SoupCache, but Apple introduced a new cross-platform implementation to WebKit (just a few bits needed a platform-specific implementation), so we decided to switch to it. This new cache has a lot of advantages over the SoupCache approach:

  • It’s fully integrated in the WebKit loading process, sharing some logic with the memory cache too.
  • It’s more efficient in terms of speed (the cache is in the Network Process, but only the file descriptor is sent to the Web Process, which mmaps the file) and disk usage (resource body and headers are stored in separate files on disk, using hard links for the body so that different resources with exactly the same contents are only stored once).
  • It’s also more robust thanks to the lack of an index. The synchronization between the index and the actual contents has always been a headache in SoupCache, with many resources leaked on disk, resources cached twice, etc.

The new disk cache is only used by the Network Process, so when using the shared secondary process model, SoupCache will still be used in the Web Process.

New inspector UI

The Web Inspector UI has been redesigned; you can see some of the differences in this screenshot:


For more details, see this post on the Safari blog.


IndexedDB

This was one of the few regressions we still had compared to WebKit1. When we switched to WebKit2 we lost IndexedDB support, but it's now back in 2.10. It uses its own new process, the DatabaseProcess, to perform all database operations.


Performance

WebKitGTK+ 2.8 improved the overall performance thanks to the use of the bmalloc memory allocator. In 2.10 the overall performance has improved again, this time thanks to a new implementation of the locking primitives. All uses of mutex/condition have been replaced by a new implementation. You can see more details in the email Filip sent to webkit-dev or in the very detailed commit messages.

Screen Saver inhibitor

It's more and more common to use the web browser to watch long videos in fullscreen mode, and it's quite annoying when the screen saver decides to "save" your screen every x minutes during the video. WebKitGTK+ 2.10 uses the ScreenSaver DBus service to inhibit the screen saver while a video is playing in fullscreen mode.

Font matching for strong aliases

WebKit’s font matching algorithm has improved, and now allows replacing fonts with metric-compatible equivalents. For example, sites that specify Arial will now get Liberation Sans, rather than your system’s default sans font (usually DejaVu). This makes text appear better on many pages, since some fonts require more space than others. The new algorithm is based on code from Skia that we expect will be used by Chrome in the future.

Improve image quality when using newer versions of cairo/pixman

The poor downscaling quality of cairo/pixman is a well-known issue that was finally fixed in Cairo 1.14. However, we were not taking advantage of it in WebKit, even when using a recent enough version of cairo. The reason is that we were using the CAIRO_FILTER_BILINEAR filter, which was not affected by the cairo changes. So, we just switched to CAIRO_FILTER_GOOD, which uses the BILINEAR filter in previous versions of Cairo (keeping backwards compatibility), and a box filter for downscaling in newer versions. This drastically improves the image quality of downscaled images with minimal impact on performance.
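
For reference, the same switch in application code looks like this with pycairo (a sketch; the filenames and scale factor are just placeholders):

import cairo

# Downscale an image using FILTER_GOOD, which in cairo >= 1.14
# selects a box filter for downscaling.
src = cairo.ImageSurface.create_from_png('input.png')
scale = 0.25
dst = cairo.ImageSurface(cairo.FORMAT_ARGB32,
                         int(src.get_width() * scale),
                         int(src.get_height() * scale))
cr = cairo.Context(dst)
cr.scale(scale, scale)
cr.set_source_surface(src, 0, 0)
cr.get_source().set_filter(cairo.FILTER_GOOD)
cr.paint()
dst.write_to_png('output.png')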


Editor API

The lack of editor capabilities from the API point of view was blocking the migration to WebKit2 for some applications like Evolution. In 2.10 we have started to add the required API to ensure not only that the migration is possible for any application using a WebView in editable mode, but also that it will be more convenient to use.

So, for example, to monitor the state of the editor associated with a WebView, 2.10 provides a new class, WebKitEditorState, which for now allows you to monitor the typing attributes. With WebKit1 you had to connect to the selection-changed signal and use the DOM bindings API to manually query the typing attributes. This is quite useful for updating the state of the editing buttons in an editor toolbar, for example: you just need to connect to WebKitEditorState::notify::typing-attributes and update the UI accordingly. For now the typing attributes are the only thing you can monitor from the UI process API, but we will add more information when needed, like the current cursor position, for example.
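
From Python, assuming the PyGObject bindings for WebKit2, a sketch of this could look like the following (the callback body is of course up to the application):

import gi
gi.require_version('Gtk', '3.0')
gi.require_version('WebKit2', '4.0')
from gi.repository import Gtk, WebKit2

# Watch the typing attributes at the cursor to keep a toolbar in sync.
view = WebKit2.WebView()
view.set_editable(True)

def on_typing_attributes(state, pspec):
    attrs = state.get_typing_attributes()
    bold = bool(attrs & WebKit2.EditorTypingAttributes.BOLD)
    print("bold at cursor:", bold)   # update the Bold toggle here

editor_state = view.get_editor_state()
editor_state.connect('notify::typing-attributes', on_typing_attributes)

window = Gtk.Window()
window.connect('destroy', Gtk.main_quit)
window.add(view)
window.show_all()
view.load_html('<p>Edit me</p>', None)
Gtk.main()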

Having WebKitEditorState doesn't mean we don't need a selection-changed signal that we can monitor to query the DOM ourselves. But since in WebKit2 the DOM lives in the Web Process, the selection-changed signal has been added to the Web Extensions API. A new class, WebKitWebEditor, has been added to represent the web editor associated with a WebKitWebPage; it can be obtained with webkit_web_page_get_editor(), and it is this new class that provides the selection-changed signal. So, you can connect to the signal and use the DOM API the same way it was done in WebKit1.

Some of the editor commands require an argument; for example, the command to insert an image requires the image source URL. But both the WebKit1 and WebKit2 APIs only provided methods to run editor commands without any argument. This means that, once again, to implement something like insert-image or insert-link, you had to use the DOM bindings to create and insert the new elements in the correct place. WebKitGTK+ 2.10 provides webkit_web_view_execute_editing_command_with_argument() to make this a lot more convenient.
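
Continuing the PyGObject sketch above, and assuming the bindings expose the predefined command names, inserting an image at the cursor becomes a one-liner (the URL is just an illustrative value):

# WEBKIT_EDITING_COMMAND_INSERT_IMAGE in C; the argument is the image URL.
view.execute_editing_command_with_argument(
    WebKit2.EDITING_COMMAND_INSERT_IMAGE,
    'file:///tmp/picture.png')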

You can test all these features using the new editor mode of MiniBrowser; simply run it with the -e command line option and no arguments.


Website data

When browsing the web, websites are allowed to store data on the client side. It could be a cache, like the HTTP disk cache, or data required by web features like offline applications, local storage, IndexedDB, WebSQL, etc. All that data is currently stored in different directories, and not all of those can be configured by the user. The new WebKitWebsiteDataManager class in 2.10 allows you to configure all those directories, either using a common base cache/data directory or providing a specific directory for every kind of data stored. It's not mandatory to use it, though; the default values are compatible with the ones previously used.
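
For instance, with the PyGObject bindings, a browser could keep everything under one data directory and one cache directory (the paths here are just examples):

import os
import gi
gi.require_version('WebKit2', '4.0')
from gi.repository import WebKit2

# Store every kind of website data under a common base directory pair.
manager = WebKit2.WebsiteDataManager(
    base_data_directory=os.path.expanduser('~/.local/share/mybrowser'),
    base_cache_directory=os.path.expanduser('~/.cache/mybrowser'))
context = WebKit2.WebContext.new_with_website_data_manager(manager)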

This gives the user more control over the browsing data stored on the client side, but in future versions we plan to add support for actually handling the data, so that you will be able to query and delete the data stored by a particular security domain.

Web Processes limit

WebKitGTK+ currently supports two process models: the single shared secondary process and the multiple secondary processes. When using the latter, a new web process is created for every new web view. When there are a lot of web views created at the same time, the resources required to create all those processes could be too much on some systems. To improve that a bit, 2.10 adds webkit_web_context_set_web_process_count_limit(), to set the maximum number of web processes that can be created at the same time.

This new API can also be used to implement a slightly different version of the shared single process model. By using the multiple secondary process model with a limit of 1 web process, you still have a single shared web process, but using the multi-process mechanism, which means networking will happen in the Network Process, among other things. So, if you use the shared secondary process model in your application, and unless your application only loads local resources, we recommend switching to the multiple process model and using the limit, to benefit from Network Process features like the new disk cache. Epiphany already does this for the secondary process model and web apps.
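
Continuing with the PyGObject bindings, that recipe is only a few lines (a sketch, assuming the same imports as above):

# One shared web process, but through the multi-process machinery, so
# networking (and the new disk cache) lives in the Network Process.
context = WebKit2.WebContext.get_default()
context.set_process_model(
    WebKit2.ProcessModel.MULTIPLE_SECONDARY_PROCESSES)
context.set_web_process_count_limit(1)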

Missing media plugins installation permission request

When you try to play media and the media backend doesn't find the plugins/codecs required to play it, the missing-plugin installation mechanism starts the package installer to allow the user to find and install the required plugins/codecs. This used to happen in the Web Process, with no way for the user to avoid it. WebKitGTK+ 2.10 provides a new WebKitPermissionRequest implementation that allows the user to block the request and prevent the installer from being invoked.

September 19, 2015

Danit Peleg – 3D Printing a Fashion Collection



By: Danit Peleg

In September 2014 I started working on my graduate collection for my Fashion Design degree at Shenkar. This year, I decided to work with 3D printing, which I barely knew anything about. I wanted to check if it’d be possible to create an entire garment using technology accessible to anyone. So I embarked on my 3D printing journey, without really knowing what the end result would be.

The first piece I focused on was the “LIBERTE” jacket. I modeled the jacket using software called Blender and produced 3D files; I could now start to experiment with different materials and printers.

Together with the amazing teams at TechFactoryPlus and XLN, we experimented with different printers (Makerbot, Prusa, and finally Witbox) and materials (e.g. PLA, soft PLA).

The breakthrough came when I was introduced to FilaFlex, a new kind of filament; it’s strong, yet very flexible. Using FilaFlex and the Witbox printer, I was finally able to print my red jacket.

Once I figured out how to print textiles, I was on my way to creating a full collection. It would take more than 2000 hours to print (every A4-sized sheet of textile took at least 20 hours to print), so I had to step up my printer game to a full-fledged “3D-printing farm” at home.
I would like to thank Yaniv Gershony, a 3D designer who volunteered to help me throughout the past 9 months. He took my designs and transformed them into 3D models. He’s extremely talented and an expert Blender user. Here’s his website:

And we used the following Blender add-ons during the process: Export paper model, Sverchok, Mesh lint, Booltool.









September 17, 2015

Portrait Lighting Cheat Sheets

Portrait Lighting Cheat Sheets

Blender to the Rescue!

Many moons ago I had written about acquiring a YN-560 speedlight for playing around with off-camera lighting. At the time I wanted to experiment with how different modifiers might be used in a portrait setting. Unfortunately, these were lighting modifiers that I didn’t own yet.

I wasn’t going to let that slow me down, though!

If you want to skip the how and why to get straight to the cheat sheets, click here.

Infinite Realities had released a full 3D scan by Lee Perry-Smith of his head that was graciously licensed under a Creative Commons Attribution 3.0 Unported License. For reference, here is a link to the object file and textures (80MB) and the displacement maps (65MB) from the Infinite Realities website.

What I did was bring the high-resolution scan and displacement maps into Blender and manually create my lights with modifiers in virtual space. Then I could simply render what a particular light/modifier would look like with a realistic person being lit in any way I wanted.

Blender View Lighting Setup

This leads to all sorts of neat freedom to experiment with things to see how they might come out. Here’s another look at the lede image:

Blender Lighting Samples Various lighting setups test in Blender.

I had originally intended to make a nice bundled application that would allow someone to try all sorts of different lighting setups, but my skills in Blender only go so far. My skills at convincing others to help me didn’t go very far either. :)

So, if you’re ok with navigating around Blender already, feel free to check out my original blog post to download the .blend file and give it a try! Jimmy Gunawan even took it further and modified the .blend to work with Blender Cycles rendering as well.

With the power to create a lighting visualization of any scenario I then had to see if there was something cool I could make for others to use…

The Lighting Cheat Sheets

I couldn’t help but generate some lighting cheat sheets for others to use as a reference. I’ve seen some different ones around, but I took advantage of having the most patient model in the world to do this with. :)

These were generated by rotating a 20” (virtual) softbox in a circle around the subject at 3 different elevations (0°, 30°, and 60°).
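
For the curious, the generation boils down to orbiting a light around the model and rendering a frame at each stop. Here's a rough sketch of the idea using Blender 2.8+'s Python API (the radius, head position and light size are made-up values; my actual .blend does this with more care):

import math
import bpy

subject = (0.0, 0.0, 1.6)   # approximate head position
radius = 2.0

# An area light standing in for the softbox:
bpy.ops.object.light_add(type='AREA')
light = bpy.context.object
light.data.size = 0.5

# Aim the light at an empty placed on the subject:
bpy.ops.object.empty_add(location=subject)
target = bpy.context.object
track = light.constraints.new(type='TRACK_TO')
track.target = target
track.track_axis = 'TRACK_NEGATIVE_Z'
track.up_axis = 'UP_Y'

# Orbit at three elevations, rendering a frame at each stop:
for elev in (0, 30, 60):
    for azim in range(0, 360, 30):
        a, e = math.radians(azim), math.radians(elev)
        light.location = (subject[0] + radius * math.cos(e) * math.cos(a),
                          subject[1] + radius * math.cos(e) * math.sin(a),
                          subject[2] + radius * math.sin(e))
        bpy.context.scene.render.filepath = "//softbox_%d_%d.png" % (elev, azim)
        bpy.ops.render.render(write_still=True)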

Click the caption title for a link to the full resolution files:

Blender Lighting Setup 0 degrees Softbox 0° Portrait Lighting Cheat Sheet Reference
by Pat David (cba)
Blender Lighting Setup 30 degrees Softbox 30° Portrait Lighting Cheat Sheet Reference
by Pat David (cba)
Blender Lighting Setup 60 degrees Softbox 60° Portrait Lighting Cheat Sheet Reference
by Pat David (cba)

Hopefully these might prove useful as a reference for some folks. Share them, print them out, tape them to your lighting setups! :) I wonder if we could get some cool folks from the community to make something neat with them?

Image processing made easier with a powerful math expression evaluator.

Warning: This post contains personal thoughts about my research work in image processing. I'll discuss some of the issues I'm facing as an active developer of two open-source image processing frameworks (namely CImg and G'MIC), so keep in mind this will be a bit self-centered. There's a good chance you'll find all this really boring if you're not a developer of image processing software yourself (and maybe even if you are). Anyhow, feel free to share your impressions after reading!

1. Context and issues

In imaging science, image processing is processing of images using mathematical operations by using any form of signal processing for which the input is an image, such as a photograph or video frame.

That's what Wikipedia says about image processing. Selecting and ordering those mathematical operations is what actually defines algorithms, and implementing ready-to-use and interesting image processing algorithms is one of my goals, as well as making them available for interested users afterwards.

After all those years (>10) as a researcher in the image processing field, I can say (like most of my colleagues) that I've already implemented a lot of these different algorithms, mostly in C++ as far as I'm concerned. To be more precise, an important part of my work is actually to design (and hopefully publish) my own image processing methods. Most of the time, of course, my trials end up as clunky, ineffective and slow operators which teach me nothing except that the approach is not good enough to be worth following. Someone who says everything he tries works right the first time is a liar. Step by step, I try to refine/optimize my prototypes, or sometimes even take a completely different direction. Quickly, you realize that it is crucial in this job not to waste time when prototyping algorithms, because the rate of success is in fact very low.


Don't waste your time, on any occasion! (photo by (OvO), under CC-by-nc-sa.)

That's actually one of the reasons why I started the G'MIC project. It was primarily designed as a helper to create and run custom image processing pipelines quickly (from the shell, basically). It saves me time, every day. But the world of image processing algorithms is broad, and sometimes you need to experiment with very low-level routines working at the pixel scale, trying such weird and unexpected stuff that none of the "usual" image processing algorithms you already have in your toolbox can be used as-is, or only in such a roundabout way that it's hard to even think of using them adequately. In a word, your pixel-level algorithm won't be expressed as a simple pipeline (or graph, if you want to call it so) of macro-scale image processing operators. That's the case, for instance, with most of the well-known patch-based image processing algorithms (e.g. Non-Local Means, or PatchMatch and its many variants), where each pixel value of the resulting image is computed from (a lot of) other pixel values whose spatial locations are sometimes not evenly distributed (but not random either!).

Until now, when I was trying to implement this kind of algorithm, I resigned myself to going back to C++: it's a language I feel comfortable with, and I'm sure the result will run fast enough most of the time. Indeed, computation time is often a bottleneck in image processing. Some of my colleagues use scripting languages such as Matlab or Python for algorithm prototyping, but they often need tricks to avoid writing explicit code loops, or at least need to write some fast C/C++ modules that will be compiled and run from those higher-level interfaces, to ensure they get something fast enough (even for prototyping; I'm definitely not talking about optimized production code here!).


But I'm not really satisfied with my C++ solution: generally, I end up with several small pieces of C++ source I need to compile and maintain. I can hardly re-use them in a bigger pipeline, or redistribute them as clean packages, without a lot of extra work, because they are just intended to be prototypes: they often have only basic command-line interfaces and thus cannot be directly integrated into bigger, user-friendly image processing frameworks. Making a prototype algorithm really usable by others requires at least wrapping it as a plug-in/module for [..copy the name of your favorite image processing tool or language here..]. This generally represents a lot of boring coding work that may require even more time and effort than writing the algorithm itself! And I don't even talk about maintenance. If you've ever tried to maintain a 10-year-old C++ prototype code, lost in one of your sub-sub-sub-sub-folders in your $HOME, you know what I mean. I'd definitely prefer a simpler solution that lets me spend more time on writing the algorithm itself than on packaging it or making it usable. After all, the primary purpose of my work is to create cool algorithms, not really to code user interfaces for them. On the other hand, I am a scientist, and I'm also happy to share my discoveries with users (and possibly get feedback from them!). How to make those prototyped algorithms usable without spending too much time on making them usable? :)

Ideally, I'd like something that could nicely integrate into G'MIC (my favorite framework for doing image processing stuff, of course :) ), even if, at the end, those algorithms run a bit slower than they would in C++. One could suggest making them Octave or Scilab scripts/modules, but I'm the developer of G'MIC, so of course I'd prefer a solution that helps extend my own project.

So finally, how could I code prototypes for new algorithms working at a pixel level and make them readily available in G'MIC? This question has worried me for a long time.

2. Algorithm code viewed as a complex math expression

In G’MIC, the closest thing to what I was looking for, is the command

-fill 'expression'

This command fills each pixel of a given image with the value evaluated from a “mathematical expression”. A mathematical expression being a quite vague concept, it appears you can already write some complex formulas. For instance, typing this on the command line:

$ gmic 400,400,1,3 -fill "X=x-w/2; Y=y-h/2; R=sqrt(X^2+Y^2); a=atan2(Y,X); if(R<=180,255*abs(cos(c+200*(x/w-0.5)*(y/h-0.5))),850*(a%(0.1*(c+1))))"

creates this weird-looking 400×400 color image (I advise you to put on sunglasses):


Fig.1. One synthetic color image obtained by one application of the G’MIC command -fill.

Of course, the specified expression can refer to pixels of an existing input image. And so, it can modify the pixels of an image as well, as in the following example:

$ gmic leno.png -fill "(abs(i(x+1,y)-i(x-1,y)))^0.25"

which computes gamma-corrected differences of neighboring pixels along the X-axis, as shown below:


Fig.2.1. Original image leno.png


Fig.2.2. Result of the -fill command described above.

(As an aside, let me tell you I’ve recently received e-mails and messages from people who claim that using the image of our beloved Lena to illustrate an article or a blog post is “sexist” (someone even used the term “pornographic”…). I invite you reading the Lena story page if you don’t know why we commonly use this image. As I don’t want to hurt the over-sensibility of these people, I’ll be using a slight variation I’ve made by mixing a photograph of the blessed Jay Leno with the usual image of Lena. Let me call this the Leno image and everyone will be happy (but seriously, get a life!)).

So, as you can imagine, the command -fill already allows me to do a lot of complex and exotic things on images at a pixel level. Technically speaking, it uses the embedded math parser I’ve written for the CImg Library, a C++ open-source image processing library I’ve been developing since 1999 (and on which G’MIC is based). This math parser is quite small (around 1500 lines of C++ code) and quite fast as well when applied to a whole image. That’s mainly because:

1. It uses parallelization (thanks to the use of OpenMP directives) to evaluate expressions on blocks of image pixels in a multi-threaded way.

2. Before being evaluated, the given math expression is pre-compiled on-the-fly by CImg into a sequence of bytecodes. Then the evaluation procedure (which is done for the whole image, pixel by pixel) only requires interpreting that bytecode sequence, which is way faster than re-parsing the input mathematical expression itself (see the toy illustration below).
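
To see why point 2 matters, here is a toy Python illustration of the parse-once, evaluate-many principle (this has nothing to do with CImg's actual bytecode format; it only shows the idea):

import numpy as np

# Parse/compile the expression once...
expr = "255 * abs((x / w - 0.5) * (y / h - 0.5))"
code = compile(expr, "<expr>", "eval")

# ...then evaluate the compiled form for every pixel, instead of
# re-parsing the expression string a million times.
w, h = 400, 400
img = np.empty((h, w))
for y in range(h):
    for x in range(w):
        img[y, x] = eval(code, {"abs": abs, "x": x, "y": y, "w": w, "h": h})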

Anyway, I thought the complexity of the pixel-level algorithms I’d like to implement was much higher than just the evaluation of a mathematical formula. But wait... what is missing, actually? Not much more than loops and updatable variables... I already had variables (though non-updatable) and conditionals; only loops were really missing. That looks like something I could try adding during my summer holidays, doesn’t it? 😉 So that is where my efforts were focused during these last weeks: I’ve added new functions to the CImg math parser that allow users to write their own loops in mathematical expressions, namely the functions dowhile(expr,_cond), whiledo(cond,expr) and for(init,cond,expr_end,_expr_body). Of course, it also made me review and re-implement large parts of the math parser code, and I took the opportunity to optimize the whole thing. A new version of the math parser was made available with the G’MIC release at the end of August. I’m still working on this expression evaluator in CImg, and new improvements and optimizations are ready for the upcoming version of G’MIC (soon to be released).

3. A toy example: Julia fractals

So, what can we do now with these new math features in G’MIC? Let me illustrate this with a toy example. The following custom G’MIC command renders a Julia fractal. To test it, just copy/paste the following lines into a regular text file user.gmic:

julia_expr :
  1024,1024
  -fill "
    zr = -1.2 + 2.4*x/w;
    zi = -1.2 + 2.4*y/h;
    for (iter = 0, zr^2+zi^2<=4 && iter<256, ++iter,
      t = zr^2 - zi^2 + 0.4;
      (zi *= 2*zr) += 0.2;
      zr = t
    );
    iter"
  -map 7

and invoke the new command -julia_expr it defines by typing this in the terminal:

$ gmic user.gmic -julia_expr

Then, you’ll get this 1024×1024 color image:

Rendering of the Julia fractal set only from filling an image with a math expression.

Fig.3. Rendering of a Julia fractal only by filling an image with a complex math expression, containing an iteration loop.

As you see, this custom user command -julia_expr is very short and is mainly based on the invocation of the -fill command of G’MIC. But the coolest thing of all appears when we look at the rendering time of that function. The timing measurement was performed on an ASUS laptop with a dual-core HT i7 at 2GHz. This is what I get:

Edit: This post was edited on 09/20/2015 to reflect new timings, due to math parser optimizations done after this article was initially posted.

$ gmic user.gmic -tic -julia_expr -toc
[gmic]-0./ Start G'MIC interpreter.
[gmic]-0./ Input custom command file 'user.gmic' (added 1 command, total 6195).
[gmic]-0./ Initialize timer.
[gmic]-0./julia_expr/ Input black image at position 0 (1 image 1024x1024x1x1).
[gmic]-1./julia_expr/ Fill image [0] with expression ' zr = -1.2 + 2.4*x/w; zi = -1.2 + 2.4*(...)( zi *= 2*zr) += 0.2; zr = t ); iter '.
[gmic]-1./julia_expr/ Map cube color LUT on image [0], with dirichlet boundary conditions.
[gmic]-1./ Elapsed time : 0.631 s.

Less than 0.7 second to fill a 1024×1024 image where each of the 1,048,576 pixels may require up to 256 iterations of a computation loop? Definitely not bad for prototyped code written in 5 minutes that does not require compilation to run! Note that all my CPU cores were active during the computation. Trying the same G’MIC code on my machine at work (a powerful 4× 3-core HT Xeon @ 2.6 GHz) renders the same image in only 0.176 second!

But of course, one could say:

Why not use the native G’MIC command -mandelbrot instead (here, “native” means hard-coded as a C++ function)? It is probably way faster!

Let me compare my previous code with the following G’MIC invocation (which renders exactly the same image):

$ gmic 1024,1024 -tic -mandelbrot -1.2,-1.2,1.2,1.2,256,1,0.4,0.2 -map 7 -toc
[gmic]-0./ Start G'MIC interpreter.
[gmic]-0./ Input black image at position 0 (1 image 1024x1024x1x1).
[gmic]-1./ Initialize timer.
[gmic]-1./ Draw julia fractal on image [0], from complex area (-1.2,-1.2)-(1.2,1.2) with c0 = (0.4,0.2) and 256 iterations.
[gmic]-1./ Map cube color LUT on image [0], with dirichlet boundary conditions.
[gmic]-1./ Elapsed time : 0.055 s.

That’s about 12 times faster than the previous command -julia_expr run on my laptop, indeed! A bit reassuring to know that C++ compiled to assembly code is faster than CImg’s home-made bytecode compiled on the fly 😉

But here is the point: suppose now I want to slightly modify the rendering of the fractal, i.e. I no longer want to display the maximum iteration number for each pixel (variable iter), but the latest value of the variable zi before the divergence test occurs. Look how simple it is to create a slightly modified command -julia_expr2 that does exactly what I want. I have full control over what the function does at the pixel level:

julia_expr2 :
  1024,1024
  -fill "
    zr = -1.2 + 2.4*x/w;
    zi = -1.2 + 2.4*y/h;
    for (iter = 0, zr^2+zi^2<=4 && iter<256, ++iter,
      t = zr^2 - zi^2 + 0.4;
      (zi *= 2*zr) += 0.2;
      zr = t
    );
    zi"
  -normalize 0,255 -map 7

and this modified algorithm renders the image below (still in about 0.7 second, of course):

Fig.4. Slightly modified version of the Julia fractal by displaying another variable zi in the rendering algorithm.

Without these new loop features introduced in the math parser, I would have been forced to do one of these two things in G’MIC to get the same result:

  1. Either add some new options to the native command -mandelbrot to allow this new type of visualization. This basically means writing new pieces of C++ code, compiling a new version of G’MIC with these added features, then packaging and releasing it to make it available for everyone. Even though I already have some decent release scripts, this implies a lot of extra work and packaging time; the user cannot get the new feature within a few minutes (if you’ve already used the filter update mechanism of the G’MIC plug-in for GIMP, you know what I mean). And that’s without mentioning all the possibilities I couldn’t think of (but that a user will obviously need one day :) ) when adding such new display options to a native command like -mandelbrot.
  2. Or write a new G’MIC custom script able to compute the same kind of result. This would indeed be the best way to make it available to other people quickly. But here, as the algorithm is very specific and works at the pixel level, writing it as a pipeline of macro-operators is quite a pain. It means I would have to use 3 nested -repeat…-done loops (the loop commands used in G’MIC pipelines), and it would probably take ages to render, as a G’MIC pipeline is always purely interpreted, without any pre-compilation step. Even with multi-threading, it would have been a nightmare to compute.

In fact, the quite long math expression used in command -julia_expr2 defines one complex algorithm as a whole, and we know it will be pre-compiled into a sequence of bytecodes by CImg before being evaluated for each of the 1024×1024 = 1,048,576 pixels composing the image. Of course, this is not as fast as a native C++ implementation of the same command, but at the same time we gain so much flexibility and genericity in what we can do that this disadvantage is easily forgiven. And the processing time stays reasonable. For fast algorithm prototyping, this feature is incredibly nice! I’m no longer forced to unsheathe my C++ compiler every time I want to experiment with a very specific image processing algorithm working at the pixel level.

4. A more serious example: the “Non-Local Means”

The Non-Local Means is a quite famous patch-based denoising/smoothing algorithm in image processing, introduced in 2005 by A. Buades (beware, his home page contains images of Lena; please do not click if you are too sensitive!). I won’t go into all the implementation details, as several different methods have been proposed in the literature just for implementing it. But one of the simplest (and slowest) techniques requires 4 nested loops per image pixel. What a good opportunity to try writing this “slow” algorithm using the G’MIC -fill function! It took me less than 10 minutes, to be honest:

nlmeans_expr : -check "${1=10}>0 && isint(${2=3}) && $2>0 && isint(${3=1}) && $3>0"
  sigma=$1  # Denoising strength.
  hl=$2     # Lookup half-size.
  hp=$3     # Patch half-size.
  -fill "
    value = 0;
    sum_weights = 0;
    for (q = -"$hl", q<="$hl", ++q,
      for (p = -"$hl", p<="$hl", ++p,
        diff = 0;
        for (s = -"$hp", s<="$hp", ++s,
          for (r = -"$hp", r<="$hp", ++r,
            diff += (i(x+p+r,y+q+s) - i(x+r,y+s))^2
          )
        );
        weight = exp(-diff/(2*"$sigma")^2);
        value += weight*i(x+p,y+q);
        sum_weights += weight
      )
    );
    value/(1e-5 + sum_weights)"

Now, let’s test it on a noisy version of the Leno image:

$ gmic user.gmic leno.png -noise 20 -c 0,255 -tic --nlmeans_expr 35,3,1 -toc
[gmic]-0./ Start G'MIC interpreter.
[gmic]-0./ Input custom command file 'user.gmic' (added 1 command, total 6195).
[gmic]-0./ Input file 'leno.png' at position 0 (1 image 512x512x1x3).
[gmic]-1./ Add gaussian noise to image [0], with standard deviation 20.
[gmic]-1./ Cut image [0] in range [0,255].
[gmic]-1./ Initialize timer.
[gmic]-1./nlmeans_expr/ Set local variable sigma='35'.
[gmic]-1./nlmeans_expr/ Set local variable hl='3'.
[gmic]-1./nlmeans_expr/ Set local variable hp='1'.
[gmic]-1./nlmeans_expr/ Fill image [0] with expression ' value=0; sum_weights=0; for(q = -3,q<(...)ight ) ); value/(1e-5 + sum_weights) '.
[gmic]-2./ Elapsed time : 3.156 s.

which results in these two images displayed on screen: the noisy version (left) and the denoised one using the Non-Local Means algorithm (right). Of course, the timing may differ from one machine to another. I find my 3-second run here quite decent (tested on my powerful PC at the lab); it still takes less than 20 seconds on my laptop. A crop of the results is presented below. The initial Leno image is a 512×512 RGB image, and the timing was measured for processing the whole image, of course.


Fig.5.1. Crop of a noisy version of the Leno image, degraded with gaussian noise, std=20.


Fig.5.2. Denoised version using the NL-means algorithm (custom command -nlmeans_expr).

Here again, you could argue that the native G’MIC command -denoise does the same thing and runs faster. It does, definitely. Jérôme Boulanger (a very active G’MIC contributor) has written a nice custom command -nlmeans that implements the NL-means with a smarter algorithm (avoiding the need for 4 nested loops per pixel) and runs even faster (it is already available in the plug-in for GIMP). But that’s not the point. What I show here is that I’m now able to do some (relatively fast) prototyping of algorithms working at the pixel level in G’MIC, without having to write and compile C++ code. Best of all is the integration: if an algorithm appears to be interesting/effective enough, I can add it to the G’MIC standard library in a few minutes, and quickly create a filter for the GIMP plug-in as well. Put it this way: probably 5 minutes after I’ve finished writing the first version of the algorithm, plug-in users are able to get it and use it on their own images (and give positive/negative feedback to help with future improvements). That’s what I call smooth, quick and painless integration! And that is exactly the kind of algorithm I couldn’t implement before as a custom G’MIC command running at a decent speed.

To me, it clearly opens exciting perspectives for quickly prototyping and integrating new custom image processing algorithms into G’MIC in the future!

5. The Vector Painting filter

In fact, this happened earlier than expected. I was recently able to add one of my latest image filters (named Vector Painting) to the G’MIC plug-in for GIMP. It was somewhat unexpected, because I was just doing some debugging to improve the CImg math expression evaluator. Briefly, suppose you want to determine, for each pixel of an image, the discrete spatial orientation of the maximal value variation, with an angular precision of 45°: for each pixel, centered in a 3×3 neighborhood, I want to estimate which pixel of the neighborhood differs most from the center pixel (measured as the squared difference between the two pixel values). To make things simpler, I’ve considered doing this on the image luminance only instead of using all the RGB color channels. At the end, I transform each pixel value into a label (an integer in range [1,8]) that represents one of the possible 45°-orientations of the plane. That is typically the kind of problem that requires custom loops working at the pixel level, so something I couldn’t do easily before the loop feature was introduced in my math parser (or I would have done the prototype in C++).

The solution to this problem was surprisingly easy to write. Here again, it didn’t take much more than 5 minutes of work:

foo :
  -fill "dmax = -1; nmax = 0;
         for (n = 0, ++n<=8,
           p = arg(n,-1,0,1,-1,1,-1,0,1);
           q = arg(n,-1,-1,-1,0,0,1,1,1);
           d = (j(p,q,0,0,0,1)-i)^2;
           if (d>dmax, dmax = d; nmax = n, nmax)
         )"

And if we apply this new custom command -foo to our Leno image,

$ gmic user.gmic leno.png -foo

We get this result (after re-normalization of the label image to range [0,255]). Keep in mind that each pixel of the resulting image is an integer label originally in range [1,8]. And by the way, the computation time is ridiculously low here (178 ms for this 512×512 image).


Fig.6. Each pixel of the Leno image is replaced by a label saying about which of its 3×3 neighbors is the most different from the central pixel.

It actually looks a bit ugly. But that’s not surprising: the original image contains noise, so flat regions show a lot of small random variations, and the labels you get in those regions are noisy as well. Now, what happens if we blur the image before computing the labels? That should regularize the resulting image of labels as well. Indeed:

$ gmic user.gmic leno.png --blur 1% -foo

returns this:


Fig.7. Each pixel of the blurred Leno image is replaced by a label saying about which of its 3×3 neighbors is the most different.

That’s interesting! Blurring the input image creates larger regions of constant labels, i.e. regions where the orientation of the maximal pixel variation is the same. And the original image contours keep appearing as natural frontiers of these labelled regions. A natural idea, then, is to replace each connected region by the average color it overlays in the original color image. In G’MIC, this can be done easily with the command -blend shapeaverage:

$ gmic user.gmic leno.png --blur 1% -foo[-1] -blend shapeaverage

And what we get at the end is a nice piecewise-constant abstraction of our initial image. Looks like a “vector painting”, no? 😉


Fig.8. Result of the “shape average” blending between the original Leno image, and its map of labels, as obtained with command -foo.

As you may imagine, changing the amplitude of the blurring makes the result more or less abstract. Having this, it didn’t take much time to create a filter that can be run directly from the G’MIC plug-in interface for GIMP. Here is the exact code I wrote to integrate my initial algorithm prototype into G’MIC and make it usable by everyone. It was done in less than 5 minutes, really:

#@gimp Vector painting : gimp_vector_painting, gimp_vector_painting_preview(1)
#@gimp : Details = float(9,0,10)
#@gimp : sep = separator(), Preview type = choice("Full","Forward horizontal","Forward vertical","Backward horizontal","Backward vertical","Duplicate horizontal","Duplicate vertical")
#@gimp : sep = separator(), note = note("<small>Author: <i>David Tschumperl&#233;</i>.\nLatest update: <i>08/25/2015</i>.</small>")
gimp_vector_painting :
  -repeat $! -l[$>]
    --luminance -b[-1] {10-$1}%,1,1
    -f[-1] "dmax = -1; nmax = 0;
            for (n = 0, ++n<=8,
              p = arg(n,-1,0,1,-1,1,-1,0,1);
              q = arg(n,-1,-1,-1,0,0,1,1,1);
              d = (j(p,q,0,0,0,1)-i)^2;
              if (d>dmax, dmax = d; nmax = n,nmax)
    -blend shapeaverage
  -endl -done

gimp_vector_painting_preview :
  -gimp_split_preview "-gimp_vector_painting $*",$-1

Here is the resulting filter, as it can be seen in the G’MIC plug-in for GIMP, just after I pushed it to the G’MIC standard library:


Fig.9. The G’MIC plug-in for GIMP, running the “Vector Painting” filter.

Here again, that is how I think things should be done: 1. I create a quick algorithm prototype to transform an image into something else. 2. I decide that the algorithm is cool enough to be shared. 3. I add a few lines to make it available immediately in the G’MIC image processing framework. What a time saver compared to doing all of this in C++!

6. Comparison with ImageMagick’s -fx operator

While working on the improvements to my math expression evaluator in CImg, I wondered whether what I was doing didn’t already exist in ImageMagick. Indeed, ImageMagick is one of the most well-established open-source image processing frameworks, and I was almost sure they had already coped with the kind of questions I had for G’MIC. And of course, they had :)

So, they have a special operator -fx expression in convert that seems to be equivalent to what the G’MIC command -fill expression does. And yes, they have probably had it for years, long before G’MIC even existed. But I admit I almost completely stopped using the ImageMagick tools when I started developing my own C++ image processing library CImg, years ago. All the information you need to use this -fx operator in convert can be found on this documentation page, with even more examples on this page. Reading these pages was very instructive: I noticed some interesting functions and notations they have in their expression parser that I didn’t have in mine (so I’ve added some of them to my latest version of CImg!). I was also particularly interested in this quote from their pages:

As people developed new types of image operations, they usually prototype it using a slow “-fx” operator first. When they have it worked out that ‘method’ is then converted into a new fast built-in operator in the ImageMagick Core library. Users are welcome to contribute their own “-fx” expressions (or other defined functions) that they feel would be a useful addition to IM, but which are not yet covered by other image operators, if they can be handled by one of the above generalized operators, it should be reasonably easy to add it.(…). What is really needed at this time is a FX expression compiler, that will pre-interpret the expression into a tighter and faster executable form. Someone was going to look into this but has since disappeared.

So it seems their -fx operator is quite slow, as it re-parses the specified math expression for each image pixel. And when someone writes an interesting operator with -fx, they are willing to convert it into C code and integrate it as a new built-in operator directly in the core ImageMagick library. It seems they don’t really mind adding new native hard-coded operators into IM, maybe even for very specific/unusual operators (at least they don’t mention it). That’s interesting, because that is precisely what I’m trying to avoid in G’MIC. My impression is that it’s often acceptable to be less efficient if the code we have to write for adding a feature is smaller, easier to maintain/upgrade, and does not require releasing a new version to make that particular feature available. Personally, I’d always prefer to write a G’MIC custom command (i.e. a script that I can directly put in the G’MIC standard library) when possible, instead of adding the same feature as a new “native” built-in command (in C++). But maybe their -fx operator was so slow that it was cumbersome to use in practice? I had to try!

And I’m a bit sorry to say this, but yes, it’s quite slow (and I have tested this on my pretty fast machine with 12 HT cores at 2.6 GHz). The ImageMagick -fx operator is able to use multiple cores, which is clearly a good thing, but even so it is cumbersome to use on reasonably big images with complex math expressions. In a sense, that reassures me about the usefulness of having developed my own math expression compiler in CImg: the pre-compilation step of the math expression into a shorter bytecode sequence seems to be almost mandatory. I’ve done a quick timing comparison for some simple image effects that can be achieved similarly with both expression evaluators of G’MIC and ImageMagick. Most of the examples below were actually taken from the -fx documentation pages. I divide and multiply my image values by 255 in the G’MIC examples below because the ImageMagick formulas assume that pixel RGB values are defined in range [0,1]. These tests were done with a high-resolution 3072×2048 RGB input image (of a motorbike). I’ve checked that the ImageMagick and G’MIC invocations render the same images.

# Test1: Apply a sigmoid contrast function on the image colors.

$ time convert motorbike.jpg -fx "(1.0/(1.0+exp(10.0*(0.5-u)))-0.006693)*1.0092503" im_sigmo.jpg

real	0m9.033s
user	3m18.527s
sys	0m2.604s

$ time gmic -verbose - motorbike.jpg -/ 255 -fill "(1.0/(1.0+exp(10.0*(0.5-i)))-0.006693)*1.0092503" -* 255 -o gmic_sigmo.jpg,75

real    0m0.474s
user    0m3.183s
sys     0m0.111s

# Test2: Create a radial gradient from scratch.
$ time convert -size 3072x2048 canvas: -fx "Xi=i-w/2; Yj=j-h/2; 1.2*(0.5-hypot(Xi,Yj)/70.0)+0.5" im_radial.jpg

real	0m29.895s
user	8m11.320s
sys	2m59.184s

$ time gmic -verbose - 3072,2048 -fill "Xi=x-w/2; Yj=y-h/2; 1.2*(0.5-hypot(Xi,Yj)/70.0)+0.5" -cut 0,1 -* 255 -o gmic_radial.jpg

real    0m0.234s
user    0m0.990s
sys     0m0.045s

# Test3: Create a keftales pattern gradient from scratch.
$ time convert -size 3072x2048 xc: -channel G -fx  'sin((i-w/2)*(j-h/2)/w)/2+.5' im_gradient.jpg

real	0m2.951s
user	1m2.310s
sys	0m0.853s

$ time gmic -verbose - 3072,2048 -fill "sin((x-w/2)*(y-h/2)/w)/2+.5" -* 255 -o gmic_gradient.jpg

real    0m0.302s
user    0m1.164s
sys     0m0.061s

# Test4: Compute mirrored image along the X-axis.
$ time convert motorbike.jpg -fx 'p{w-i-1,j}' im_mirror.jpg 2>&1

real	0m4.409s
user	1m33.702s
sys	0m1.254s

$ time gmic -verbose - motorbike.jpg -fill "i(w-x-1,y)" -o gmic_mirror.jpg

real    0m0.495s
user    0m1.367s
sys     0m0.106s

The pre-compilation of the math expressions clearly makes a difference!

I would be really interested to compare the expression evaluators on more complex expressions, such as the one I used to compute Julia fractals with G’MIC. I don’t have deep knowledge of the ImageMagick syntax, so I don’t know what the equivalent command line would be. If you have any idea how to do that, please let me know! I’d also be interested to get an idea of how Matlab performs on the same kind of equations.

7. Conclusion and perspectives

What do I conclude from all of this? Well, I’m actually pretty excited by what the latest version of my expression evaluator integrated in G’MIC / CImg can finally do. It runs at a decent speed, at least compared to the one used in ImageMagick (which is definitely a reference project for image processing). I also had the idea of comparing it with GraphicsMagick, but I must admit I didn’t find the same -fx operator there, nor anything similar (maybe you could teach me how it works in GraphicsMagick?).

I’ve already been able to propose one (simple) artistic filter that I find interesting (Vector Painting), and I’m very confident that these improvements to the math expression evaluator will open a lot of new possibilities for G’MIC: for the design of new filters for everyone, of course, but also to make my algorithm prototyping work easier and faster.

Could it be the beginning of a new boost for G’MIC? What do you think?

September 16, 2015

An Inkscape SVG Filter Tutorial — Part 2

Part 1 introduced SVG filter primitives and demonstrated the creation of a Fabric filter effect. Part 2 shows various ways to colorize the fabric. It ends with an example of using the techniques learned here to draw part of a bag of coffee beans.

Dying the Fabric

Our fabric at this point is white. We can give it color in a variety of ways. We could have started off with a colorized pattern, but that would not allow us to change the color so easily. And as this is a tutorial on using filters, let’s look at ways the color can be changed using filter primitives.

Coloring with the Flood, Blend, and Composite Primitives

We can use the Flood filter primitive to create a sheet of solid color and then use the Blend filter primitive to combine it with the fabric. The resulting image bleeds into the background. We’ll use the Composite filter primitive to auto-clip the background.

The Flood Filter Primitive

Add the Flood filter primitive to the filter chain by selecting Flood and clicking on the Add Effect button. The fabric will turn solid black. Like the Turbulence filter primitive, the Flood filter primitive takes no inputs but simply fills the filter region with a solid color. Black is the default flood color. You can change the color by clicking on the color sample next to Flood Color: in the dialog. Change the color however you wish. Leave the Opacity at one.

The Blend Filter Primitive

Next add the Blend filter primitive. The drawing will be unchanged. Connect the Blend input to the last Displacement Map. The fabric should appear on top of the flood fill. This is expected as the default blending mode is Normal which simply draws the second image over the first. Use the drop-down menu to change the Mode to Multiply. This results in the lighter areas of the fabric taking on the flood color.

The output of the filter chain after blending.

Try experimenting with the other blending modes.

The Composite Filter Primitive

The flood fill leaks into the background. This can be removed by clipping the image to the fabric area using the Composite filter primitive. Add the Composite filter primitive to the filter chain. The resulting image is again unchanged. Connect the second input of the Composite filter primitive to the last Displacement Map filter primitive. Still the image remains unchanged. Now change the Operator type to In. This dictates that the image should be clipped to the area that is “In” the image created by the second Displacement Map filter primitive.
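Put together, the three primitives of this section correspond roughly to the following SVG (a sketch: the result names are mine, “displaced” stands for the output of the last Displacement Map, and the flood color is the RGB value used below, 205, 185, 107):

<feFlood flood-color="#cdb96b" flood-opacity="1" result="flood"/>
<feBlend in="flood" in2="displaced" mode="multiply" result="blend"/>
<feComposite in="blend" in2="displaced" operator="in"/>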

The Filter Effect dialog after adding and adjusting the Flood, Blend, and Composite filter primitives.

The output of the filter after compositing.

Coloring the Fabric with the Component Transfer Filter Primitive

The Component Transfer filter primitive maps, pixel by pixel, the colors from an input image to different colors in an output image. Each “component” (Red, Green, Blue, and Alpha) is mapped independently. The method for mapping is determined by the Type; each Type has its own attributes. We’ll use the Linear and Identity mappings.

Identity
The output component has the same value as the input component.
Linear
The output component is equal to: intercept + input × slope. This is identical to the Identity type if the intercept is zero and the slope is one.

Replace the Flood, Blend, and Composite filter primitives in the above filter chain by the Component Transfer filter primitive. (To delete a filter primitive, right-click on the filter primitive name and select Delete in the menu that appears.) The just-removed three-primitive filter chain mapped black to black and white to the flood color. We can duplicate this by setting the Red, Green, and Blue component transfer types to Linear (keeping the Alpha component type set to Identity). The condition that black maps to black requires that the Intercept values all be set to zero. The condition that white maps to the flood color dictates the slopes. The RGB values for the flood color used above are 205, 185, 107 on a scale where 255 is the maximum value. These translate to 0.80, 0.73, 0.42 on a scale where the maximum value is one. Since an input value of 1.0 for the red component must result in an output of 0.80 (and similarly for green and blue), these values are the required slopes.
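In raw SVG, the resulting primitive looks roughly like this (a sketch; the slope values are the ones computed above):

<feComponentTransfer>
  <feFuncR type="linear" slope="0.80" intercept="0"/>
  <feFuncG type="linear" slope="0.73" intercept="0"/>
  <feFuncB type="linear" slope="0.42" intercept="0"/>
  <feFuncA type="identity"/>
</feComponentTransfer>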

Graph of input vs. output for the red, green, and blue channels.

Graph of the transfer functions.

The Filter Effect dialog after adding and adjusting the Component Transfer filter primitive.

The output of the filter after adding and adjusting the Component Transfer filter primitive.

Now suppose we want the fabric to be more subtle. We can change the mapping so that for each component, zero is mapped to half the maximum value. In this case we have the following values (RGB): Intercepts: 0.40, 0.36, 0.21 and Slopes: 0.40, 0.37, 0.21. See the following figure:

Graph of input vs. output for the red, green, and blue channels.

Graph of the transfer functions where the darkest value is half the lightest value.

The Filter Effect dialog after adding and adjusting the Component Transfer filter primitive.

The output of the filter after adjusting the Component Transfer filter primitive so the darkest areas have half the component values of the lightest.

Coloring the Fabric with the Color Matrix Filter Primitive

This filter primitive, unlike the Component Transfer, can intermix the color components. It does not, however, offer the fine control over the transfer curves that the Component Transfer filter primitive has. There are several Types in this filter primitive. The Saturate, Hue Rotate, and Luminance to Alpha types are shortcuts for the more generic Matrix type. We need to use the Matrix type to match the results of the previous filters.

First replace the Component Transfer filter primitive by the Color Matrix filter primitive. After adding the new primitive, the fabric may disappear; that is a bug in Inkscape. Click on the matrix in the Filter Dialog and the fabric should reappear. The initial matrix is the Identity matrix (ones on the diagonal), which does not change the image.

The rows in the matrix control the output of, from top to bottom, the Red, Green, Blue, and Alpha channels. The columns correspond to the input, again in the same Red, Green, Blue, and Alpha order. The last column allows one to enter a constant offset for the row. For example, one can make a green object red by changing the top row to “0 1 0 0 0”, which means that the Red channel output is 0×R + 1×G + 0×B + 0×A + 0, where R, G, B, and A are the input values for the Red, Green, Blue, and Alpha channels respectively (on a scale of zero to one).
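As SVG, that green-to-red example would read something like this (a sketch; every row other than the top one is left at its identity value):

<feColorMatrix type="matrix"
               values="0 1 0 0 0
                       0 1 0 0 0
                       0 0 1 0 0
                       0 0 0 1 0"/>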

To change the values in the matrix, click first on a row of numbers to select the row and then click on a numeric entry in the row. The following figures show the values needed to match the fabric samples above.

The Filter Effect dialog after adding and adjusting the Color Matrix filter primitive to match the first (high contrast) fabric sample above.

The Filter Effect dialog after adding and adjusting the Color Matrix filter primitive to match the second (lower contrast) fabric sample above.

Coloring the Fabric Using the Fill Color and the Tile Filter Primitive

In an ideal world, a fabric filter would just take as input the color of an object and use that to blend with a pattern. SVG filters do have the ability to do this: one would read in a pattern tile using the Image filter primitive and then tile the pattern using the Tile filter primitive. But the Tile filter primitive is the one filter primitive that Inkscape hasn’t implemented. And while more convenient, this method would still lack the fine control over color that the above methods have.
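For reference, the tile-based approach would look roughly like this in SVG (a sketch; the file name and tile size are made-up placeholders):

<feImage xlink:href="weave-tile.png" width="10" height="10" result="tile"/>
<feTile in="tile" result="tiled"/>
<feBlend in="SourceGraphic" in2="tiled" mode="multiply"/>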

The output of a filter using the Tile primitive. The two rectangles differ only in Fill color. Renders correctly in Chrome, incorrectly in Firefox and Inkscape.

Putting it All Together

Let’s do something with the fabric! We could stencil some text on the fabric to make it look like part of a bag of coffee beans. The best way to do this is to break the filter up into two separate filters. The first will distort the weave (using the first Turbulence and Displacement Map pair) and color the fabric, while the second will add a gentle wave to both the fabric and text (using the second Turbulence and Displacement Map pair). The text is given its own filter to take away the sharp edges and also give it a bit of irregularity independent of the weave. The text could be blended on top of the fabric by giving it an opacity of less than one. A better effect can be achieved, however, by using the new mix-blend-mode property. Inkscape can render this property but does not yet have a GUI to set it. Firefox supports this property and Chrome should soon (if it doesn’t already). I’ve used the mix-blend-mode value of multiply by adding the property to the text’s style attribute with the XML editor, as shown below. The fabric and text are then grouped together before applying the “wave” filter to the group.
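After editing, the text element then contains something like this (a sketch; the coordinates, font properties, and text are placeholders):

<text x="20" y="60" style="font-size:40px;mix-blend-mode:multiply">COFFEE</text>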

Part of a bag of coffee beans. Three filters are used. The first to distort the weave and give color to the fabric, the second to slightly blur and distort the text, and the third to take the blended together fabric and text and give them both a gentle wave.

Note: it is possible to put the text in the “defs” section and use the Image filter primitive to import the text into a filter so that the blending can be done with the Blend filter primitive. This isn’t easy to do in Inkscape, and Firefox seems to have problems rendering it.

I hope you enjoyed this tutorial. Please leave comments and questions!

A section of a bag of coffee beans.

A PNG image just for Google+ which doesn’t support SVG images.

Hacking / Customizing a Kobo Touch ebook reader: Part II, Python

I wrote last week about tweaking a Kobo e-reader's sqlite database by hand.

But who wants to remember all the table names and type out those queries? I sure don't. So I wrote a Python wrapper that makes it much easier to interact with the Kobo databases.

Happily, Python already has a module called sqlite3. So all I had to do was come up with an API that included the calls I typically wanted -- list all the books, list all the shelves, figure out which books are on which shelves, and so forth.

The result was kobo_utils.py, which includes a main function that can list books, shelves, or shelf contents.

You can initialize kobo_utils like this:

import kobo_utils

koboDB = kobo_utils.KoboDB("/path/where/your/kobo/is/mounted")
koboDB.connect()

connect() throws an exception if it can't find the .sqlite file.

Then you can list books, list shelf names, or see which books are on which shelves (via print_shelf). For example, to list the shelf names:

shelves = koboDB.get_dlist("Shelf", selectors=[ "Name" ])
for shelf in shelves:
    print shelf["Name"]
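Out of curiosity, here is a minimal sketch of how such a wrapper can be built on the standard sqlite3 module (a hypothetical reconstruction for illustration, not the actual kobo_utils code; only the KoboReader.sqlite location and the get_dlist() call shown above come from the posts):

import os
import sqlite3

class KoboDB:
    """Hypothetical sketch of a Kobo database wrapper."""
    def __init__(self, mountpoint):
        self.mountpoint = mountpoint
        self.conn = None

    def connect(self):
        # The Kobo keeps its database in .kobo/KoboReader.sqlite;
        # throw an exception if it can't be found.
        dbpath = os.path.join(self.mountpoint, ".kobo", "KoboReader.sqlite")
        if not os.path.exists(dbpath):
            raise IOError("No Kobo database found at " + dbpath)
        self.conn = sqlite3.connect(dbpath)

    def get_dlist(self, tablename, selectors=None):
        # Return each row of the table as a dict keyed by column name.
        cols = ", ".join(selectors) if selectors else "*"
        cursor = self.conn.execute("SELECT %s FROM %s" % (cols, tablename))
        colnames = [desc[0] for desc in cursor.description]
        return [dict(zip(colnames, row)) for row in cursor.fetchall()]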

What I really wanted, though, was a way to organize my library, taking the tags in each of my epub books and assigning them to an appropriate shelf on the Kobo, creating new shelves as needed. Using kobo_utils plus the Python epub library I'd already written, that ended up being quite straightforward: shelves_by_tag.

September 15, 2015

Fedora Atomic Logo Idea

The Fedora Cloud Working Group recently decided that in Fedora 24 (or perhaps a bit further out, depending on how the tooling/process can support it) the Atomic version of Fedora is going to be the primary focus of the working group. (Background discussion on their list is available too.)

This has an effect on the Fedora website, as the Fedora Cloud edition shifts from a buffet of roughly equally-positioned cloud- and container-related images to a more focused set of images optimized for container hosting (using Atomic), plus a set of more clearly ancillary images that are also useful for cloud/container deployment of Fedora but aren’t based on the Atomic platform. We need to position these images accordingly on the website to match the new model.

Matthew Miller and I discussed how the Cloud WG decision might affect the website and ideas for how we could update the website to suit Fedora 24. One idea for how we could do this:

  • Consider replacing the “Cloud” edition slot on the front of with a Fedora “Atomic” edition brand.
  • Convert to focus instead solely on Atomic (maybe redoing the URL to
  • Build out a separate cloud image resource site (similar to, which is focused on all ARM-related builds across Fedora) with the full set of Cloud Base images (and maybe Fedora Docker containers too?). Then, pull these into the edition site via a link in the “Other Downloads” section.

Anyhow, this is just an idea. The Atomic brand is already a pretty strong one, so I think trying to force something like “Fedora Atomic” under the current cloud logomark might miss the opportunity for Fedora to build on the brand recognition Atomic already has upstream. The question is: is that even possible? Luckily, I think the answer might be yes :)

The current Fedora Cloud logo.

The Atomic upstream logo.

I poked around a little bit with the Atomic logo (I believe tigert created the original awesome logo!), thickened it up and rounded out the lines a little bit so I could use it as a mask on the purple Fedora triangle texture in the same way the original cloud mark is used in the current cloud logo. I think it looks pretty cool; here it is in the context of the other Fedora Edition logos:

Fedora Atomic logo idea

I was kind of worried they wouldn’t hang together as a set, especially since the three logomarks here had been so close (Cloud’s mark was a 90-degree-rotated Server mark, and Workstation is Server with the top two bars merged to make a display), but in practice it looks like this is really not a concern.

On the to-do list next are mockups for how a potential new cloud.fpo site might look, as well as an updated (or as the case might be.) I started poking at mocking up a cloud.fpo site for the base cloud images and other cloud goodies, but will probably need to iterate on that with the Cloud WG list to get it right.

Ideas? Feedback? Comments are open of course :)

Fedora Developer Website Design

Fedora Developer Logo

For the past few weeks I have been working on mockups and the HTML/CSS for a new Fedora website, the Fedora Developer portal (likely to eventually live at developer.fedoraproject.org). The goal of the site is to provide resources and information to developers building things on Fedora (not primarily developers contributing to Fedora itself.)

A bunch of folks have been contributing content to the site, and Adam Šamalík and Petr Hracek set up the initial first-cut prototype, configuring Jekyll to generate the site and building out its basic framework. The prototype was shared with the Fedora Environment and Stacks Working Group mailing list, and after some feedback and iteration on it, Petr asked me to take a look at the overall UX / design of the site. So that’s how I came to be involved here. :)

Competitive Analysis and Sitemap

First, to better understand the space this site is in, I took a look at various developer sites for all sorts of OS platforms and took in the sorts of information they provided and how they organized it. I looked at:

  • Red Hat Developers – main nav is streamlined – solutions, products, downloads. Also has community, events, blogs. Large banner for features. Has a membership / join process. Front page directory of technologies under the solutions nav item.
  • Microsoft Windows Dev Center – main nav is a bit cluttered; core features seem to be developer docs, downloadable tools, code samples, community. Has a “get started” beginners’ guide. Features blog posts, videos, feature highlights. Has a log in.
  • Android Developers – main nav is “design | develop | distribute.” Features the SDK, a code sample library, and videos. Has case studies. Also has blog and support links.
  • Apple Developer – main nav is “platforms | Resources | Programs | Support | Member Center.” Has a membership program of some sort with log in. Front page is not so useful – solely promo banners for random things; full footer that extrapolates more on what’s under each of the main nav items. Resources page has a nice directory with categorized breakdown of all the resources on the site (seems like it’d make a better front page, honestly.) Includes forums, docs, and videos.
  • Ubuntu Developer – cluttered main nav, nav items contain Ubuntu-specific jargon – e.g. not sure what ‘scopes’ or ‘core’ actually are or why I’d care, has a community, has a log in. Has blog and latest events highlights but they are identical. Similar to Apple, actually useful information is pushed down to the full header at the very bottom of the page – including SDK, tutorials, get started guide, how to publish apps, community.

One thing that was common to all of these sites was the developer.*.com URL. ( does redirect to the URL.) I think because of this, seems like it would match the broader platform developer URL pattern out there.

Another thing they seemed to all have in common was directories of technologies – frameworks, platforms, languages, tools, etc. – affiliated with their own platform. Many focused on deployment too – mostly to app stores, but deployment nonetheless.

Looking at the main structure of the site in the initial prototype, I felt it was honestly a pretty good organizational structure given the other developer sites out there. I wanted to tweak some of the wording of the headers (to make them action-oriented) and had some suggestions for additional content pieces that could be developed, but for the most part the prototype had a solid structure. I drew up a sitemap in Inkscape to help visualize it:

Suggested sitemap for developer.fedoraproject.org. Click on the image above to view the SVG source in git.


With confidence in the site information architecture / basic structure of the content, I then started mocking up what it could look like. Some things I considered while drawing this out:

  • The visual challenge here is to give the site its own feel, but also make sure it feels like a part of the Fedora ‘family,’ and has a visual design that really feels related to the other new Fedora sites we have like,, and
  • There should probably be a rotating banner feature area where features and events could be called out on the front page and maybe even on subpages. I don’t like the page full of promos that is the Apple Developer front page – it comes off as a bit disorganized IMHO – so rotating banners seemed preferable to avoid banners taking over the whole front page.
  • The main content of the website is mostly a series of simple reference guides about the platforms, frameworks, and languages in Fedora, which I understand will be kept updated and added to regularly. I think reference material can appear as rather static and perhaps stale, but I think the site should definitely come across as being updated and “living,” so featuring regularly updated content like blog posts could help with that.

So here’s what I came up with, taking some article content from to fill in the blog post areas –

Mockup for the front page of developer.fedoraproject.org. Click on the image to view the mockup source and other mockups in git.

A few notes about the design here:

  • To keep this looking like it’s part of the Fedora family, I employed a number of elements. I stuck with a white background horizontal branding/nav bar along the top and a fat horizontal banner below that. It also has the common Fedora websites footer (which may need some additions / edits) and a mostly white, blue, and gray color palette. The base font is Open Sans, as it is for the other sites we’ve released in the past year. The site still has its own feel, though; there are some little tweaks here and there to achieve this. For example, I ended up modifying the front page design such that the top banner runs all the way to the very top margin, and recedes only on sub-pages to show the white horizontal background in the top header.
  • There is, as planned, a rotating banner feature area. The example in the mockups features DevAssistant, and the team already has other feature banners planned.
  • Blog content in the form of two featured posts as well as a recent blog headlines listing hopefully will inject a more active / current sense to the overall site.
  • I made mockups for each of the major sections of the site and picked out some CC-BY licensed photography roughly related to each corresponding section in its title banner up top.


Petr had also asked if I’d be able to provide the CSS, images, and icons for the site once the mockups were done. So I decided, why not? The framework he and Adam used to set up the site was a static site generator I was not familiar with – Ruby-based Jekyll, also used by GitHub Pages – and I thought it might be fun to learn more about it.


If you check out the tree for the website implementation, you’ll see a bunch of basic HTML files as well as markdown (*.md) files (the latter mostly in the content repo, which gets set up as a subdirectory under the website tree when you check the project out.) Jekyll lets you break down pages of the site into reusable chunks (e.g., header, footer, etc.), and it also lets you design different layouts that you can link to different pieces of content.

Whether any given page or chunk of content you’re working on is a *.md file or an *.html file, Jekyll has a thing at the top of each file called ‘front matter’ where you can configure the page (e.g., set which layout gets applied to it) or even define variables. This is where I set the longer titles for each page/section, as well as the descriptions for the sections that get placed in the title banner area, as in the example below.
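For instance, the front matter of a section page might look something like this (the layout name and text here are hypothetical placeholders, not the site’s actual content):

---
layout: default
title: "Languages & Databases"
description: "Resources for building with your favorite language on Fedora."
---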

Bizarre Jekyll Issue

Insert random interlude here :)

So I ran into a crazy, probably obscure issue with Jekyll during this process – due to my being a newbie and misunderstanding how it worked, yes, but Jekyll did happily build and spew out the site without complaint or pointing out the issue, so perhaps this little warning might help someone else. (Red Hatters Alex Wood and Jason Rist were really helpful in trying to help me debug this crazy issue. The intarwebs just totally failed on this one.)

I was trying to use page variables to implement the title banners at the top of every page – I needed to know which page I was on in order to display the correct page title and description. The variables were just spitting out blank nothingness in the built pages. It turns out the issue was that I was using *.md files that had the same base name as *.html files, with some variables set in one file and some in the other, and Jekyll seemed to just blank them all out when it encountered more than one file with the same base file name. I was able to fix the problem by merging the files into one (I stuck with HTML and deleted the *.md files).

Here it is

So the implemented site design is in place in the repo now, and I’ve handed the work back off to the team to tweak, hook up functionality, and finish up content development. There is a development build of the website at, but I’m pretty sure it’s behind the work I finished last week, since I see a lot of issues I know I fixed in the code. It’s something to poke around, though, and use to get a feel for the site.

Want to Help?

we need your help!

If you poked around the development version of the site, you might have noticed there’s quite a bit more content needed. This is something you can help with!

The team has put together a contribution guide, and the core site content is basically formatted in Markdown syntax (Here’s a neat markdown tutorial.) More information about how to contribute is on the front page of the content repo in GitHub.

Thoughts? Ideas? Need help contributing? Hit me up in the comments :)

Mixed use, Itu, Brazil

This project gathers on a single site three different uses: a residential building, a hotel, and a convention center. The client being a real-estate investor, it was required, as usual in that context, to build as much as possible, in other words to use the maximum construction area permitted by law. This...

September 14, 2015

Interview with Lucas Ribeiro


Could you tell us something about yourself?

Hi, I am a 24-year-old Brazilian artist who lives in Sao Paulo, married and the eldest of three brothers. Watching my mother make a lot of pencil portraits when I was a child inspired me to do the same years later, since I saw it was not impossible to learn how to draw. I started to draw with pencils when I was 13, but nothing serious until I reached the age of 20, when I began to learn digital painting and watercolor and to improve my drawing skills (self-taught). Now I have worked on book covers, character design, a mascot for the government of Sao Paulo, and recently even graphic design. I mainly use Krita, but previously used GIMP, MyPaint, ArtRage, Sketchbook Pro, SAI… Krita fits my needs better than any of them.

Do you paint professionally, as a hobby artist, or both?

I’m starting to do more freelance jobs. So I’m combining my hobby with my profession, which is a blessing. So, it is both.

What genre(s) do you work in?

I’m very eclectic, but I have to say that fantasy art and the cartoon style with a more realistic approach, like the concept art of Pixar and Dreamworks, are my favourites, and I plan to dedicate myself more to these styles.

Whose work inspires you most — who are your role models as an artist?

Well, this list is very, very large. I have to say that movies and books inspire me a lot: Lord of the Rings, Star Wars and the Disney animated movies. Inspiration can come from anywhere at any time: a song, a trip. But speaking about artists, I can’t fail to mention David Revoy and Ramon Miranda for doing excellent work with open source tools.

How and when did you get to try digital painting for the first time?

Well, I think that was with MS Paint Brush in the 90’s. Even though I was using a mouse, I was a happy child doing some ugly stuff. But when I started to draw seriously, I heard of GIMP Paint Studio and gave it a try. After that I started to try different tools.

What makes you choose digital over traditional painting?

Actually I draw a lot with pencils, pen, ink and watercolor. But digital painting gives you endless possibilities for combinations and experiments without any cost (both in money and in time).

How did you find out about Krita?

I was looking for tips and resources for painting with GIMP, until I found out that David Revoy was using Krita to make the free “Pepper & Carrot” webcomic, which is awesome. When I looked at the pictures, I was impressed.

What was your first impression?

The brushes feel very natural, almost like the real world. The way the colour blends is very unique; there was no comparison with Photoshop in that, for example. The experience of painting with Krita was really natural and smooth, even though on my old laptop it lagged a little bit in previous versions of Krita.

What do you love about Krita?

In the first place: the brush engines and transform tools. I think they are the best on the market at the moment. The brush editor is very intuitive and powerful too.

What do you think needs improvement in Krita? Is there anything that really annoys you?

Maybe some speed improvements. When I’m using many layers at high resolution, I feel it.

What sets Krita apart from the other tools that you use?

The way that the brushes feel. There is no comparison with other painting tools. It’s very natural; I feel I am really painting and not just using a digital tool.

If you had to pick one favourite of all your work done in Krita so far, what would it be, and why?

Every day I make studies or a new illustration. But I think I would choose the “Gangster Pug”. I used a lot of wet brushes, which is very similar to painting with watercolor in the real world. It’s basically the same workflow.

What techniques and brushes did you use in it?

Wet brushes, and airbrush with blending modes like Multiply and Overlay. The Muses and David Revoy’s V6 brushpack are what I use most.

Where can people see more of your work?

Soon I’ll have a new website and portfolio. But right now, people can see my work at Behance and Facebook. I invite everyone to visit me at these links, especially because 90% of my work is done in Krita now. For things like graphic design I use Inkscape or Blender.

My page:

Anything else you’d like to share?

You can add me on facebook ( or send me an email ( and share your thoughts. If you have not used Krita yet, try it. I think it’s the best tool on the market at the moment, and it’s really a production machine, whether you’re interested in VFX painting, illustration, comics and concept art, or just in painting and sketching.

September 11, 2015

An Inkscape SVG Filter Tutorial — Part 1

Part 1 introduces SVG filter primitives and demonstrates the creation of a Fabric filter effect. Part 2 shows various ways to colorize the fabric.


SVG filters allow bitmap-type manipulations inside a vector format. Scalability is preserved by pushing the bitmap processing to the SVG renderer at the point when the final screen resolution is known. SVG filters are very powerful, so powerful in fact that they have been moved out of SVG and into a separate CSS specification so that they can also be applied to HTML content. This power comes at a price: SVG filters can be difficult to construct. For example, a simple drop-shadow filter consists of three connected filter primitives, as shown in this SVG code:
<filter id="DropShadow">
  <feOffset in="SourceAlpha" dx="2" dy="2" result="offset"/>  ❶
  <feGaussianBlur in="offset" stdDeviation="2" result="blur"/>  ❷
  <feBlend in="SourceGraphic" in2="blur" mode="normal"/>  ❸
  1. Offset filter primtive: Create an image using the text alpha (SourceAlpha) and shift it down and right two pixels. Results in shifted black text.
  2. Gaussian Blur filter primitive: Blur result of previous step (“offset”).
  3. Blend filter primtive: Render the original image (SourceGraphic) over the result of the previous step (“blur”).
Some sample text!

A drop shadow applied to text.

Inkscape contains a Filter Dialog that can be used to construct filters. Here is the dialog showing the above drop-shadow filter effect:

Filter dialog showing the three filter primitives and how they are connected.

The Inkscape Filter Dialog showing a drop-shadow filter effect. The dialog shows the filter primitives and how their inputs (left-pointing triangles) are connected (black lines). It also contains controls for setting the various filter primitive attributes.

There can be more than one way to construct the same filter effect. For example, the order of the offset and blur primitives can be swapped without changing the result:

Some more text!

An alternative drop-shadow filter applied to text.

Inkscape contains over 200 canned filter effects, many of which have adjustable parameters. But sometimes none of them will do exactly what you want. In that case you can construct your own filter effect. It’s not as hard as it first seems once you understand some of the basic filter primitives.

A Fabric Filter

This tutorial creates a basic filter that can be applied to a pattern to create realistic fabric. It will introduce several very useful filter primitives that are fundamental to most of Inkscape’s canned filter effects.

Creating a Pattern

To begin with, we need a pattern that forms the basis of the weave of the fabric. I’ve constructed a simple pattern consisting of four rectangles, two for the horizontal threads and two for the vertical threads, and applied a linear gradient to give them a 3D look. One can certainly do better, but as the pattern tile is quite small, one need not go overboard. Once you have drawn all the pattern parts, select them and then use Object->Pattern->Objects to Pattern to convert them to a pattern. The new pattern will then be available in the Pattern drop-down menu that appears when the Pattern icon is highlighted on the Fill tab of the Fill and Stroke dialog.

The pattern consisting of four rectangles with linear gradients simulating a small section of the fabric weave.

The pattern (shown scaled up).

Next, apply the fabric pattern to an object to create simple fabric.

The pattern applied to a large rectangle.

Adding Blur

The pattern looks like a brick wall. It’s too harsh for fabric. We can soften the edges by applying a little blur, via the Gaussian Blur filter primitive. Open the Filter Editor dialog (Filters->Filter Editor). Click on the New button to create a new, empty filter. A new filter with the name “filter1” should be created. You can double-click on the name to give the filter a custom name. Apply the filter to the fabric piece by selecting the piece and then checking the box next to the filter name. Your piece of fabric will disappear; don’t worry, we need to add a filter primitive to get it to show back up. To add a blur filter primitive, select Gaussian Blur in the drop-down menu next to Add Effect and then click the Add Effect button. The fabric should now be visible with the blur effect applied. You can change the amount of blur with the slider next to Standard Deviation; a value of 0.5 seems to be about right.
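In SVG terms, this step adds a single primitive (a sketch, using the value chosen above):

<feGaussianBlur in="SourceGraphic" stdDeviation="0.5" result="blur"/>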

The Filter Effect dialog after applying a small amount of blur.

Note how the input to the Gaussian Blur primitive (triangle next to “Gaussian Blur”) is linked (under Connections) to the Source Graphic.

A small amount of blur applied to the fabric.

Distorting the Threads

The pattern is still too rigid; the threads in real fabric are not so regular looking. We need to add some random distortions. To do so, we’ll link up two different filter primitives. The first, Turbulence, will generate random noise. This noise will be used as input to a Displacement Map filter primitive, where pixels are shifted based on the value of the input.

The Turbulence Filter Primitive

Add a Turbulence filter primitive to the filter chain by selecting Turbulence from the drop-down menu next to the Add Effect button, then click on the button. You should see a rectangular region filled with small random dots. There are a couple of things to note. The first is that the rectangle will be bigger than your initial object. This is normal: the filter region is enlarged by 10% on each side, and the Turbulence filter fills this region. This is done on purpose, as some filter primitives draw outside the object (e.g. the Gaussian Blur and Offset primitives). You can set the boundary of the filter region under the Filter General Settings tab. The default 10% works for most filters. You don’t want the region to be too large, as it affects the time needed to render the filter. The second thing to note is that the Turbulence filter primitive has no inputs, despite what is shown in the Filter Editor dialog.

There are a number of parameters to control the generation of the noise:

Type
There are two values: Turbulence and Fractal Noise. The difference between the two is somewhat technical so I won’t go into it here. (See the Turbulence Filter Primitive section in my guide book.)
Base Frequency
This parameter controls the granularity of the noise. The value roughly corresponds to the inverse of the length in pixels of the fluctuations. (Note that the default value of ‘0’ is a special case and doesn’t follow this rule.)
Octaves
The number of octaves used in creating the turbulence. For each additional octave, a new contribution is added to the turbulence with the frequency doubled and the contribution halved compared to the preceding octave. It is usually not useful to use a value above three or four.
Seed
The seed for the pseudo-random number generator used to create the turbulence. Normally one doesn’t need to change this value.

One can guess that variations in the threads are on the order of the distance between adjacent threads. For the pattern used here, the vertical threads are 6 pixels apart. This gives a base frequency of about 0.17 (i.e. 1/6). The value of Type should be changed to Fractal Noise. (Both Type values give good visual results but the Turbulence value leads to a shift of the image down and to the right for technical reasons.) Here is the resulting dialog:

Filter Dialog image.

The Filter Effect dialog after adding the Turbulence filter primitive.

And here is the resulting image:

The output of the first turbulence filter primitive.

The output of the filter chain which is at this point the output of the Turbulence filter primitive.

The Displacement Map Filter Primitive

Now we need to add the Displacement Map filter primitive, which will take both the output of the Gaussian Blur and the Turbulence filter primitives as inputs. Select Displacement Map from the drop-down menu and then click on the Add Effect button. Note that both inputs to the Displacement Map filter primitive are set to the last filter primitive in the filter chain. We’ll need to drag the top one to the Gaussian Blur filter primitive. (Start the drag in the little triangle at the right of the filter primitive in the list.) Again, the image doesn’t change. We’ll need to make one more change, but first here are the parameters for the Displacement Map filter primitive:

Scale
The scale factor is used to determine how far pixels should be shifted. The magnitude of the shift is the channel value of the displacement map (rescaled from the 0-to-1 range by subtracting 0.5) multiplied by this value.
X displacement
Determines which component (red, green, blue, alpha) should be used from the input map to control the x displacement.
Y displacement
Determines which component (red, green, blue, alpha) should be used from the input map to control the y displacement.

For our purpose, any values of X displacement and Y displacement are equally valid as all channels contain the same type of pseudo-random noise. To actually see a shift, one must set a non-zero scale factor. A value of about six seems to give a good effect.

Filter Dialog image.

The Filter Effect dialog after adding and adjusting the Displacement Map filter primitive.

And here is the resulting image:

The fabric after applying the Displacement Map filter primitive.

The output of the filter chain after adding and adjusting the Displacement Map filter primitive.

Distorting the Fabric

Fabric rarely lies flat unless stretched, and even then it is hard to make the threads lie straight and parallel. We can add a random wave to the fabric by adding another Turbulence and Displacement Map pair, but this time using a lower Base Frequency. Repeat the instructions above to add the two filter primitives, but this time connect the top input of the new Displacement Map to the previous Displacement Map. Set the Base Frequency to a value of 0.01. Set the Type to Fractal Noise. Set the Scale to ten.
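
For reference, the complete filter chain built in this tutorial corresponds to SVG markup along these lines (attribute values taken from the steps above; the channel selectors are arbitrary, as noted earlier, and Inkscape’s generated markup will differ in the details):

  <filter id="fabric">
    <feGaussianBlur in="SourceGraphic" stdDeviation="0.5" result="blur"/>
    <feTurbulence type="fractalNoise" baseFrequency="0.17" result="noise1"/>
    <feDisplacementMap in="blur" in2="noise1" scale="6"
                       xChannelSelector="R" yChannelSelector="G" result="threads"/>
    <feTurbulence type="fractalNoise" baseFrequency="0.01" result="noise2"/>
    <feDisplacementMap in="threads" in2="noise2" scale="10"
                       xChannelSelector="R" yChannelSelector="G"/>
  </filter>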

Filter Dialog image.

The Filter Effect dialog after adding and adjusting the second Turbulence and Displacement Map filter primitives.

And here is the resulting image:

The final fabric image.

The output of the filter chain after distorting the fabric.

Of course, the pattern and filter can be applied to an arbitrary shape:

The pattern and filter applied to a blob.

The pattern and filter applied to a cloth patch.


We have constructed a basic Fabric filter but there is plenty of room for improvement. In the next part we’ll look at ways to add color to the fabric.

A section of a bag of coffee beans.

A PNG image just for Google+ which doesn’t support SVG images.

The blooms of summer, and weeds that aren't weeds

[Wildflowers on the Quemazon trail] One of the adjustments we've had to make in moving to New Mexico is getting used to the backward (compared to California) weather. Like, rain in summer!

Not only is rain much more pleasant in summer, as a dramatic thundershower that cools you off on a hot day instead of a constant cold drizzle in winter (yes, I know that by now Californians need a lot more of that cold drizzle! But it's still not very pleasant being out in it). Summer rain has another unexpected effect: flowers all summer, a constantly changing series of them.

Right now the purple asters are just starting up, while skyrocket gilia and the last of the red penstemons add a note of scarlet to a huge array of yellow flowers of all shapes and sizes. Here's the vista that greeted us on a hike last weekend on the Quemazon trail.

Down in the piñon-juniper where we live, things aren't usually quite so colorful; we lack many red blooms, though we have just as many purple asters as they do up on the hill, plus lots of pale trumpets (a lovely pale violet gilia) and Cowpen daisy, a type of yellow sunflower.

But the real surprise is a plant with a modest name: snakeweed. It has other names, but they're no better: matchbrush, broomweed. It grows everywhere, and most of the year it just looks like a clump of bunchgrass.

[Snakeweed in bloom] Then come September, especially in a rainy year like this one, and all that snakeweed suddenly bursts into a glorious carpet of gold.

We have plenty of other weeds -- learning how to identify Russian thistle (tumbleweed), kochia and amaranth when they're young, so we can pull them up before they go to seed and spread farther, has launched me on a project of an Invasive Plants page for the nature center (we should be ready to make that public soon).

But snakeweed, despite the name, is a welcome guest in our yard, and it lifts my spirits to walk through it on a September evening.

By the way, if anyone in Los Alamos reads this blog, Dave and I are giving our first planetarium show at the nature center tomorrow (that's Friday) afternoon. Unlike most PEEC planetarium shows, it's free! Which is probably just as well since it's our debut. If you want to come see us, the info is here: Night Sky Fiesta Planetarium Show.

September 08, 2015

Softness and Superresolution

Softness and Superresolution

Experimenting and Clarifying

A small update on how things are progressing (hint: well!) and some neat things the community is playing with.

I have been quiet these past few weeks because I decided I didn’t have enough to do and thought a rebuild/redesign of the GIMP website would be fun, apparently. Well, it is fun and something that couldn’t hurt to do. So I stepped up to help out.

A Question of Softness

There was a thread recently on a certain large social network in a group dedicated to off-camera flash. The thread was started by someone with the comment:

The most important thing you can do with your speed light is to put some rib [sic] stop sail cloth over the speed light to soften the light.

Which just about gave me an aneurysm (those that know me and lighting can probably understand why). Despite some sound explanations about why this won’t work to “soften” the light, there was a bit of back and forth about it. To make matters worse, even after over 100 comments, nobody bothered to just go out and shoot some sample images to see it for themselves.

So I finally went out and shot some to illustrate and I figured they would be more fun if they were shared (I did actually post these on our forum).

I quickly set up a lightstand with a YN560 on it, pointed at my garden statue. I then took a shot with bare flash, one with diffusion material pulled over the flash head, and one with a 20” DIY softbox attached.

Here’s what the setup looked like with the softbox in place:

Soft Light Test - Softbox Setup Simple light test setup (with a DIY softbox in place).

Remember, this was done to demonstrate that simply placing some diffusion fabric over the head of a speedlight does nothing to “soften” the resulting light:

Softness test image bare flash Bare flash result. Click to compare with diffusion material.

This shows clearly that diffusion material over the flash head does nothing to affect the “softness” of the resulting light.

For a comparison, here is the same shot with the softbox being used:

Softness test image softbox Same image with the softbox in place. Click to compare with diffusion material.

I also created some crops to help illustrate the difference up close:

Softness test crop #1 Click to compare: Bare Flash With Diffusion With Softbox
Softness test crop #2 Click to compare: Bare Flash With Diffusion With Softbox

Hopefully this demonstration can help put to rest any notion of softening a light through close-set diffusion material (at not-close flash-to-subject distances). At the end of the day, the “softness” quality of a light is a function of the apparent size of the light source relative to the subject. (The sun is the biggest light source I know of, but it’s so far away that its quality is quite harsh.)

A Question of Scaling

On discuss, member Mica asked an awesome question about what our workflows are for adding resolution (upsizing) to an image. There were a bunch of great suggestions from the community.

One suggestion I wanted to talk about briefly, as I thought it was interesting from a technical perspective.

Both Hasselblad and Olympus announced, not too long ago, the ability to drastically increase the resolution of images in their cameras using a “sensor-shift” technology: the sensor shifts by a pixel or so while the camera shoots multiple frames, and the results are then combined into a much larger megapixel image (200MP in the case of the Hasselblad, and 40MP in the Olympus).

It turns out we can do the same thing manually by burst-shooting a series of images while handholding the camera (the subtle movement of our hands while shooting provides the requisite “shift” of the sensor). Then we simply upscale the images, align them, and average the results to get a higher-resolution result.

The basic workflow uses Hugin’s align_image_stack, ImageMagick’s mogrify, and a G’MIC mean-blend script to achieve the results.

  1. Shoot a bunch of handheld images in burst mode (if available).
  2. Develop raw files if that’s what you shot.
  3. Scale images up to 4x resolution (200% in width and height). Straight nearest-neighbor type of upscale is fine.
    • In your directory of images, create a new sub-directory called resized.
    • In your directory of images, run mogrify -scale 200% -format tif -path ./resized *.jpg if you use jpg’s, otherwise change as needed. This will create a directory full of upscaled images.
  4. Align the images using Hugin’s align_image_stack script.
    • In the resized directory, run /path/to/align_image_stack -a OUT file1.tif file2.tif ... fileX.tif The -a OUT option will prefix all your new images with OUT.
    • I move all of the OUT* files to a new sub-directory called aligned.
  5. In the aligned directory, you now only need to mean average all of the images together.
    • Using Imagemagick: convert OUTfile*.tif -evaluate-sequence mean output.bmp
    • Using G’MIC: gmic video-avg.gmic -avg \" *.tif \" -o output.bmp
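
Strung together, the steps above amount to just a handful of commands. A rough sketch, assuming JPG input and that mogrify, align_image_stack, and convert are all on your PATH (adjust extensions and paths to your own setup):

  mkdir resized
  mogrify -scale 200% -format tif -path ./resized *.jpg
  cd resized
  align_image_stack -a OUT *.tif
  mkdir aligned && mv OUT*.tif aligned && cd aligned
  convert OUT*.tif -evaluate-sequence mean output.bmp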

I used 7 burst capture images from an iPhone 6+ (default resolution 3264x2448). This is the test image:

Superresolution test image Sample image, red boxes show 100% crop areas.

Here is a 100% crop of the first area:

100% crop of the base image, straight upscale.
100% crop, super resolution process result.

The second area crop:

100% crop of the base image, straight upscale.
100% crop, super resolution process result.

Obviously this doesn’t replace the ability to have that many raw pixels available in a single exposure, but if the subject is relatively static this method can do quite well to help increase the resolution. As with any mean/median blending technique, a nice side-effect of the process is great noise reduction as well…

Not sure if this warrants a full article post, but I may consider it for later.

September 07, 2015

GUADEC Gothenburg

The GUADEC posts have settled by now, which is why it’s time for me to post another one. I hope those of you who were lucky enough to visit the beautiful, but expensive, city of Gothenburg will enjoy this little 4K edit of the moments I captured on my pocket camera.

GUADEC Gothenburg at 4K

And if you did, check out some of the photos too. I’ve stopped counting how many GUADECs I’ve attended, but it’s always great to meet up with all the creative minds and the new student blood that makes GNOME happen. Thanks to all of you, and especially to this year’s organizers! They did a stellar job.

DSC01363 DSC01140 DSC01233

September 04, 2015

Kickstarter Drawing Challenge!

Every month, we’ve got a drawing challenge on the Krita forum, where you can paint and draw a set subject and discuss your work with others. But this month’s challenge is special. The subject is our mascot Kiki, and the winner’s drawing will be on the t-shirts sent out as rewards for the kickstarter backers! And the winner will get a t-shirt as well, of course. So, connect your drawing tablet, fire up the latest Krita, get drawing and share the results on the forum!


Hacking / Customizing a Kobo Touch ebook reader: Part I, sqlite

I've been enjoying reading my new Kobo Touch quite a lot. The screen is crisp, clear and quite a bit whiter than my old Nook; the form factor is great, it's reasonably responsive (though there are a few places on the screen where I have to tap harder than other places to get it to turn the page), and I'm happy with the choice of fonts.

But as I mentioned in my previous Kobo article, there were a few tweaks I wanted to make; and I was very happy with how easy it was to tweak, compared to the Nook. Here's how.

Mount the Kobo

When you plug the Kobo in to USB, it automatically shows up as a USB-Storage device once you tap "Connect" on the Kobo -- or as two storage devices, if you have an SD card inserted.

Like the Nook, the Kobo's storage devices show up without partitions. For instance, on Linux, they might be /dev/sdb and /dev/sdc, rather than /dev/sdb1 and /dev/sdc1. That means they also don't present UUIDs until after they're already mounted, so it's hard to make an entry for them in /etc/fstab if you're the sort of dinosaur (like I am) who prefers that to automounters.

Instead, you can use the entry in /dev/disk/by-id. So fstab entries, if you're inclined to make them, might look like:

/dev/disk/by-id/usb-Kobo_eReader-3.16.0_N905K138254971:0 /kobo   vfat user,noauto,exec,fmask=133,shortname=lower 0 0
/dev/disk/by-id/usb-Kobo_eReader-3.16.0_N905K138254971:1 /kobosd vfat user,noauto,exec,fmask=133,shortname=lower 0 0
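
With entries like those in place (and the /kobo and /kobosd mount points created), mounting after tapping "Connect" is just:

  mount /kobo

The user option in the fstab entries is what lets you do this without root.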

One other complication, for me, was that the Kobo is one of a few devices that don't work through my USB2 powered hub. Initially I thought the Kobo wasn't working, until I tried a cable plugged directly into my computer. I have no idea what controls which devices work through the hub and which ones don't. (The Kobo also doesn't give any indication when it's plugged in to a wall charger.)

The sqlite database

Once the Kobo is mounted, ls -a will show a directory named .kobo. That's where all the good stuff is: in particular, KoboReader.sqlite, the device's database, and Kobo/Kobo eReader.conf, a human-readable configuration file.

Browse through Kobo/Kobo eReader.conf for your own amusement, but the remainder of this article will be about KoboReader.sqlite.

I hadn't used sqlite before, and I'm certainly no SQL expert. But a little web searching and experimentation taught me what I needed to know.

First, make a local copy of KoboReader.sqlite, so you don't risk overwriting something important during your experimentation. The Kobo is apparently good at regenerating data it needs, but you might lose information on books you're reading.

To explore the database manually, run: sqlite3 KoboReader.sqlite

Some useful queries

Here are some useful sqlite commands, which you can generalize to whatever you want to search for on your own Kobo. Every query (not .tables) must end with a semicolon.

Show all tables in the database:

.tables
The most important ones, at least to me, are content (all your books), Shelf (a list of your shelves/collections), and ShelfContent (the table that assigns books to shelves).

Show all column names in a table:

PRAGMA table_info(content);
There are a lot of columns in content, so try PRAGMA table_info(Shelf); to see a much simpler table.

Show the names of all your shelves/collections:

SELECT Name FROM Shelf;
Show everything in a table:

SELECT * FROM Shelf;
Show all books assigned to shelves, and which shelves they're on:

SELECT ShelfName,ContentId FROM ShelfContent;
ContentId can be a URL to a sideloaded book, like file:///mnt/sd/TheWitchesOfKarres.epub, or a UUID like de98dbf6-e798-4de2-91fc-4be2723d952f for books from the Kobo store.

Show all books you have installed:

SELECT Title,Attribution,ContentID FROM content WHERE BookTitle is null ORDER BY Title;
One peculiarity of Kobo's database: each book has lots of entries, apparently one for each chapter. The entries for chapters have the chapter name as Title, and the book title as BookTitle. The entry for the book as a whole has BookTitle empty, and the book title as Title. For example, I have file:///mnt/sd/hamlet.epub sideloaded:
sqlite> SELECT Title,BookTitle from content WHERE ContentID LIKE "%hamlet%";
ACT I.|Hamlet
Scene II. Elsinore. A room of state in the Castle.|Hamlet
Scene III. A room in Polonius's house.|Hamlet
Scene IV. The platform.|Hamlet
Scene V. A more remote part of the Castle.|Hamlet
Act II.|Hamlet
  [ ... and so on ... ]
ACT V.|Hamlet
Scene II. A hall in the Castle.|Hamlet
Each of these entries has Title set to the name of the chapter (an act in the play) and BookTitle set to Hamlet, except for the final entry, which has Title set to Hamlet and BookTitle set to nothing. That's why you need that query WHERE BookTitle is null if you just want a list of your books.

Show all books by an author:

SELECT Title,Attribution,ContentID FROM content WHERE BookTitle is null
AND Attribution LIKE "%twain%" ORDER BY Title;
Attribution is where the author's name goes. LIKE %% searches are case insensitive.
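
Since shelves are just rows in these tables, you can even assign a book to a shelf by hand. A hypothetical sketch (I haven't verified exactly which housekeeping columns the firmware expects, so treat the column list and the values as assumptions, and experiment on your local copy of the database first):

  INSERT INTO ShelfContent (ShelfName, ContentId, DateModified, _IsDeleted, _IsSynced)
    VALUES ('SciFi', 'file:///mnt/sd/TheWitchesOfKarres.epub',
            '2015-09-03T12:00:00Z', 'false', 'false');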

Of course, it's a lot handier to have a program that knows these queries so you don't have to type them in every time (especially since the sqlite3 app has no history or proper command-line editing). But this has gotten long enough, so I'll write about that separately.

September 03, 2015

About FreeCAD, architecture and workflows

Quite some time I didn't post about FreeCAD, but it doesn't mean things have been dead there. One of the important things we've been busy with in the past weeks is the transfer of the FreeCAD code and release files to GitHub. Sourceforge, where all this was hosted before, is unfortunately giving worrying signs of...

Updating the Shop!

Today we finally updated the offerings in the Krita webshop. The Comics with Krita training DVD by Timothée Giet is available as a free download using bittorrent, but if you want to support the Krita development, you can now download it directly for just €9,95. It’s still a really valuable resource, discussing not just Krita’s user interface, but also the technical details of creating comic book panels and even going to print.

We’ve also now got the USB sticks that were rewards in last year’s kickstarter for sale. By default, you get Comics with Krita, the Muses DVD and Krita 2.9.2 for Windows and OSX, as well as some brushes and other resources. That’s €34,95. For five euros more, I’ll put the latest Krita builds and the latest brush packs on it before sending it out. That’s a manual process at the moment since we release so often that it’s impossible to order the USB sticks from the supplier with the right version pre-loaded!

Because you can now get the Muses DVD, Comics with Krita and Krita itself on a USB Stick, we’ve reduced the price of the Muses DVD to €24,95! You can select either the download or the physical DVD, the price is the same.

And check out the very nice black tote bags and cool mugs as well!

All prices include shipping; V.A.T. is added only in the Netherlands.


September 02, 2015

Krita 2.9.7 Released!

Two months of bug fixing, feature implementing, Google-Summer-of-Code-sweating, it’s time for a new release! Krita 2.9.7 is special, because it’s the last 2.9 release that will have new features. We’ll be releasing regular bug fix releases, but from now on, all feature development focuses on Krita 3.0. But 2.9.7 is packed! There are new features, a host of bug fixes, the Windows builds have been updated with OpenEXR 2.2. New icons give Krita a fresh new look, updated brushes improve performance, memory handling is improved… Let’s first look at some highlights:

New Features:

Tangent Normal Brush Engine

As is traditional, in September, we release the first Google Summer of Code results. Wolthera’s Tangent Normal Brush engine has already been merged!


It’s a specialized feature, for drawing normal maps, as used in 3d engines and games. Check out the introduction video:

There were four parts to the project:

  • The Tangent Normal Brush Engine. (You need a tilt-enabled tablet stylus to use it!)
  • The bumpmap filter now accepts normal map input
  • A whole new Normalize filter
  • And a new cursor option: the Tilt Cursor

Fresh New Icons

We’ve got a whole new, carefully assembled icon set. All icons are tuned so they work equally well with light and dark themes. And it’s now also possible to choose the size of the icons in the toolbox.


If you’ve got a high-dpi screen, make them big, if you’re on a netbook, make them small! All it takes is a right-click on the toolbox.


And to round out the improvements to the toolbox, the tooltips now show the keyboard shortcuts you can use to activate a tool and you can show and hide the toolbox from the docker menu.

Improvements to the Wrap-around mode

Everyone who does seamless textures loves Krita’s unique wraparound mode. And now we’ve fixed two limitations: you can pick colors from anywhere, not just the original central image, and you can also fill from anywhere!


New Color Space Selector

Wolthera also added a new dialog for picking the color profile: the Color Profile browser. If you just want to draw without worries, Krita’s default will work for you, of course. But if you are curious, or want to go deeper into color management, or have advanced needs, then this browser dialog gives you all the details you need to make an informed choice!

Krita ships with a large set of carefully tuned ICC profiles created by Elle Stone. Her extensive notes on when one might prefer to use one or the other are included in the new color profile browser.

Compatibility with the rest of the world

We improved compatibility with Gimp: Krita can now load group layers, load XCF v3 files and, finally, load XCF files on Windows, too. Photoshop PSD support always gets attention. We made it possible to load 16-bit/channel CMYK and Grayscale images and ZIP-compressed PSD files, and we improved saving images with a single layer that has transparency to PSD.

Right-click to undo last path point

You can now right-click in the middle of creating a path to undo the last point.

More things…

  • The freehand tools’ Stabilizer mode has a new ‘Scalable smoothness’ feature.
  • You can now merge down Selection Masks
  • We already had shortcuts to fill your layer or selection with the foreground or background color or the current pattern at 100% opacity. If you press Shift in addition to the shortcut, the currently set painting opacity will be used.
  • We improved the assistants. You can now use the Shift key to add horizontal snapping to the handles of the straight-line assistants. The same shortcut will snap the third handle of the ellipse assistant to provide perfect circles.
  • Another assistant improvement: there is now a checkbox for assistant snapping that makes snapping happen only to the first snapped-to assistant. This removes snapping issues on infinite assistants while keeping the ability to snap to chained assistants when the checkbox is unticked.
  • Several brushes were replaced with optimized versions: Basic_tip_default, Basic_tip_soft, Basic_wet, Block_basic, Block_bristles, Block_tilt, Ink_brush_25, Ink_gpen_10, Ink_gpen_25 now are much more responsive.
  • There is a new and mathematically robust normal map combination blending mode.
  • Slow down cursor outline updates for randomized brushes: when painting with a brush with fuzzy rotation, the outline looked really noisy before, now it’s smoother and easier to look at.
  • You can now convert any selection into a vector shape!
  • We already had a trim image to layer size option, but we added the converse: Trim to Image Size, for when your layers are bigger than your image (which easily happens when moving, rotating and so on).
  • The dodge and burn filter got optimized
  • Fixes to the Merge Layer functionality: you can use Ctrl+E to merge multiple selected layers, you can merge multiple selected layers with layer styles and merging of clone layers together with their sources will no longer break Krita.
  • The Color to Alpha filter now works correctly  with 16 bits floating point per channel color models.
  • We added a few more new shortcuts: scale image to new size using CTRL+ALT+I, resize canvas with CTRL+ALT+C, create group layer with CTRL+G, and feather selection with SHIFT+F6.

Fixed Bugs:

We resolved more than 150 bugs for this release. Here’s a highlight of the most important bug fixes! Some important fixes have to do with loading bundles. This is now more robust, but you might have problems with older bundle files. We also redesigned the Clone and Stamp brush creation dialogs. Look for the buttons in the predefined brush-tip tab of the brush editor. There are also performance optimizations, memory leak fixes and more:

  1. BUG:351599 Fix abr (photoshop) brush loading
  2. BUG:343615 Remember the toolbar visibility state when switching to canvas-only
  3. BUG:338839 Do not let the wheel zoom if there are modifiers pressed
  4. BUG:347500 Fix active layer activation mask
  5. Remove misleading error message after saving fails
  6. BUG:350289 Prevent Krita from loading incomplete assistants.
  7. BUG:350960 Add ctrl-shift-s as default shortcut for “Save As” on Windows.
  8. Fix the Bristle brush presets
  9. Fix use normal map checkbox in the bumpmap filter UI.
  10. Fix loading the system-set monitor profile when using colord
  11. When converting between linear light sRGB and gamma corrected sRGB, automatically uncheck the “Optimize” checkbox in the colorspace conversion dialog.
  12. BUG:351488 Do not share textures when that’s not possible. This fixes showing the same image in two windows on two differently profiled monitors.
  13. BUG:351488 Update the display profile when moving screens. Now Krita will check whether you moved your window to another monitor, and if it detects you did that, recalculate the color correction if needed.
  14. Update the display profile after changing the settings — you no longer need to restart Krita after changing the color management settings.
  15. BUG:351664 Disable the layerbox if there is no open image, fixing a crash that could happen if you right-clicked on the layerbox before opening an image.
  16. BUG:351548 Make the transform tool work with Pass Through group layers
  17. BUG:351560 Make sure a default KoColor is black and transparent (fixes the default color settings for color fill layers)
  18. Lots of memory leak fixes
  19. BUG:351497 Blacklist “photoshop:DateCreated” when saving. Photoshop adds a broken metadata line to JPG images that gave trouble when saving an image that contained a JPG created in Photoshop as a layer to Krita’s native file format.
  20. Ask for a profile when loading 16 bits PNG images, since Krita assumes linear light is default for 16 bits per channel RGB images.
  21. Improve the performance of most color correction filters
  22. BUG:350498 Work around encoding issues in kzip: images with a Japanese name now load correctly again.
  23. BUG:348099 Better error messages when exporting to PNG.
  24. BUG:349571 Disable the opacity setting for the shape brush. It hasn’t worked for about six years now.
  25. Improve the Image Recovery dialog by adding some explanations.
  26. BUG:321361 Load resources from nested directories
  27. Do not use a huge amount of memory to save the pre-rendered image to OpenRaster or KRA files.
  28. BUG:351298 Fix saving CMYK JPEG’s correctly and do not crash saving 16 bit CMYK to JPEG
  29. BUG:351195 Fix slowdown when activating “Isolate Layer” mode
  30. Fix loading of selection masks
  31. BUG:345560 Don’t add the files you select when creating a File Layer  to the recent files list.
  32. BUG:351224 Fix crash when activating Pass-through mode for a group with transparency mask
  33. BUG:347798 Don’t truncate fractional brush sizes on eraser switch
  34. Don’t add new layers to a locked group layer
  35. Transform invisible layers if they are part of the group
  36. BUG:345619 Allow Drag & Drop of masks
  37. Fix the Fill Layer dialog to show the correct options
  38. BUG:344490 Make the luma options in the color selector settings translatable.
  39. BUG:351193 Don’t hang when isolating a layer during a stroke
  40. BUG:349621 Palette docker: Avoid showing a horizontal scrollbar
  41. Many fixes and a UI redesign for the Stamp and Clipboard brush creation dialogs
  42. BUG:351185 Make it possible to select layers in a pass-through group using the R shortcut.
  43. Don’t stop loading a bundle when a wrong manifest entry is found
  44. BUG:349333 fix inherit alpha on fill layers
  45. BUG:351005 Don’t crash on closing krita if the filter manager is open
  46. BUG:347285: Open the Krita Manual on F1 on all platforms
  47. BUG:341899 Workaround for Surface Pro 3 Eraser
  48. BUG:350588 Fix a crash when the PSD file type is not recognized by the system
  49. BUG:350280 Fix a hangup when pressing ‘v’ and ‘b’ in the brush tool simultaneously
  50. BUG:350280 Fix a crash in the line tool.
  51. BUG:350507 Fix crash when loading a transform mask with a non-affine transform


August 31, 2015

Freaky Details (Calvin Hollywood)

Freaky Details (Calvin Hollywood)

Replicating Calvin Hollywood's Freaky Details in GIMP

German photographer/digital artist/photoshop trainer Calvin Hollywood has a rather unique style to his photography. It’s a sort of edgy, gritty, hyper-realistic result, almost a blend between illustration and photography.

Calvin Hollywood Examples

As part of one of his courses, he talks about a technique for accentuating details in an image that he calls “Freaky Details”.

Here is Calvin describing this technique using Photoshop:

In my meandering around different retouching tutorials I came across it a while ago, and wanted to replicate the results in GIMP if possible. There were a couple of problems that I ran into for replicating the exact same workflow:

  1. Lack of a “Vivid Light” layer blend mode in GIMP
  2. Lack of a “Surface Blur” in GIMP

Those problems have been rectified (and I have more patience these days to figure out what exactly was going on), so let’s see what it takes to replicate this effect in GIMP!

Replicating Freaky Details


The only extra thing you’ll need to be able to replicate this effect is G’MIC for GIMP.

You don’t technically need G’MIC to make this work, but the process of manually creating a Vivid Light layer is tedious and error-prone in GIMP right now. Also, you won’t have access to G’MIC’s Bilateral Blur for smoothing. And, seriously, it’s G’MIC - you should have it anyway for all the other cool stuff it does!

Summary of Steps

Here’s the summary of steps we are about to walk through to create this effect in GIMP:

  1. Duplicate the background layer.
  2. Invert the colors of the top layer.
  3. Apply “Surface Blur” to top layer.
  4. Set top layer blend mode to “Vivid Light”.
  5. New layer from visible.
  6. Set layer blend mode of new layer to “Overlay”, hide intermediate layer.

There are just a couple of small things to point out though, so keep reading to be aware of them!

Detailed Steps

I’m going to walk through each step to make sure it’s clear, but first we need an image to work with!

As usual, I’m off to Flickr Creative Commons to search for a CC licensed image to illustrate this with. I found an awesome portrait taken by the U.S. National Guard/Staff Sergeant Christopher Muncy:

New York National Guard, on Flickr New York National Guard by U.S. National Guard/Staff Sergeant Christopher Muncy on Flickr (cb).
Airman First Class Anthony Pisano, a firefighter with the New York National Guard’s 106th Civil Engineering Squadron, 106th Rescue Wing conducts a daily equipment test during a major snowstorm on February 17, 2015.
(New York Air National Guard / Staff Sergeant Christopher S Muncy / released)

This is a great image to test the effect, and to hopefully bring out the details and gritty-ness of the portrait.

1./2. Duplicate background layer, and invert colors

So, duplicate your base image layer (Background in my example).

Layer → Duplicate

I will usually name the duplicate layer something descriptive, like “Temp” ;).

Next we’ll just invert the colors on this “Temp” layer.

Colors → Invert

So right now, we should be looking at this on our canvas:

GIMP Freaky Details Inverted Image The inverted duplicate of the base layer.
GIMP Freaky Details Inverted Image Layers What the Layers dialog should look like.

Now that we’ve got our inverted “Temp” layer, we just need to apply a little blur.

3. Apply “Surface Blur” to Temp Layer

There’s a couple of different ways you could approach this. Calvin Hollywood’s tutorial explicitly calls for a Photoshop Surface Blur. I think part of the reason to use a Surface Blur vs. Gaussian Blur is to cut down on any halos that will occur along edges of high contrast.

There are three main methods of blurring this layer that you could use:

  1. Straight Gaussian Blur (easiest/fastest, but may halo - worst results)

    Filters → Blur → Gaussian Blur

  2. Selective Gaussian Blur (closer to true “Surface Blur”)

    Filters → Blur → Selective Gaussian Blur

  3. G’MIC’s Smooth [bilateral] (closest to true “Surface Blur”)

    Filters → G’MIC → Repair → Smooth [bilateral]

I’ll leave it as an exercise for the reader to try some different methods and choose one they like. (At this point I personally pretty much just always use G’MIC’s Smooth [bilateral] - this produces the best results by far).

For the Gaussian Blurs, I’ve had good luck with radius values around 20% - 30% of an image dimension. As the blur radius increases, you’ll be acting more on larger local contrasts (as opposed to smaller details) and run the risk of halos. So just keep an eye on that.

So, let’s try applying some G’MIC Bilateral Smoothing to the “Temp” layer and see how it looks!

Run the command:

Filters → G’MIC → Repair → Smooth [bilateral]

GIMP Freaky Details G'MIC Bilateral Filter The values I used in this example for Spatial/Value Variance.

The values you want to fiddle with are the Spatial Variance and Value Variance (25 and 20 respectively in my example). You can see the values I tried for this walkthrough, but I encourage you to experiment a bit on your own as well!

Now we should see our canvas look like this:

GIMP Freaky Details G'MIC Bilateral Filter Result Our “Temp” layer after applying G’MIC Smoothing [bilateral]
GIMP Freaky Details Inverted Image Layers Layers should still look like this.

Now we just need to blend the “Temp” layer with the base background layer using a “Vivid Light” blending mode…

4./5. Set Temp Layer Blend Mode to Vivid Light & New Layer

Now we need to blend the “Temp” layer with the Background layer using a “Vivid Light” blending mode. Lucky for me, I’m friendly with the G’MIC devs, so I asked nicely, and David Tschumperlé added this blend mode for me.
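
If you’re curious what Vivid Light actually computes (or want to try approximating it by hand), it’s usually defined per channel as Color Burn for the lower half of the blend values and Color Dodge for the upper half. A rough sketch in Python, assuming values normalized to 0..1 (this is the textbook formula, not necessarily bit-identical to G’MIC’s implementation):

  def vivid_light(base, blend):
      # Lower half: Color Burn with the blend value doubled.
      if blend <= 0.5:
          b = max(2.0 * blend, 1e-6)            # guard against division by zero
          return max(0.0, 1.0 - (1.0 - base) / b)
      # Upper half: Color Dodge with the blend value rescaled to 0..1.
      b = max(2.0 * (1.0 - blend), 1e-6)
      return min(1.0, base / b)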

So, again we start up G’MIC:

Filters → G’MIC → Layers → Blend [standard] - Mode: Vivid Light

GIMP Freaky Details Vivid Light Blending G’MIC Vivid Light blending mode, pay attention to Input/Output!

Pay careful attention to the Input/Output portion of the dialog. You’ll want to set the Input Layers to All visibles so it picks up the Temp and Background layers. You’ll also probably want to set the Output to New layer(s).

When it’s done, you’re going to be staring at a very strange looking layer, for sure:

GIMP Freaky Details Vivid Light Blend Mode Well, sure it looks weird out of context…
GIMP Freaky Details Vivid Light Blend Mode Layers The layers should now look like this.

Now all that’s left is to hide the “Temp” layer, and set the new Vivid Light result layer to Overlay layer blending mode…

6. Set Vivid Light Result to Overlay, Hide Temp Layer

We’re just about done. Go ahead and hide the “Temp” layer from view (we won’t need it anymore - you could delete it as well if you wanted to).

Finally, set the G’MIC Vivid Light layer output to Overlay layer blend mode:

GIMP Freaky Details Final Blend Mode Layers Set the resulting G’MIC output layer to Overlay blend mode.

The results we should be seeing will have enhanced details and contrasts, and should look like this (mouseover to compare the original image):

GIMP Freaky Details Final Our final results (whew!)
(click to compare to original)

This technique will emphasize any noise in an image so there may be some masking and selective application required for a good final effect.


This is not an effect for everyone. I can’t stress that enough. It’s also not an effect for every image. But if you find an image it works well on, I think it can really do some interesting things. It can definitely bring out a very dramatic, gritty effect (it works well with nice hard rim lighting and textures).

The original image used for this article is another great example of one that works well with this technique:

GIMP Freaky Details Alternate Final After a Call by Mark Shaiken on Flickr. (cbna)

I had muted the colors in this image before applying some Portra-esque color curves to the final result.

Finally, a BIG THANK YOU to David Tschumperlé for taking the time to add a Vivid Light blend mode in G’MIC.

Try the method out and let me know what you think or how it works out for you! And as always, if you found this useful in any way, please share it, pin it, like it, or whatever you kids do these days…

This tutorial was originally published here.

Interview with Brian Delano

big small and me 4 and 5 sample

Could you tell us something about yourself?

My name is Brian Delano. I’m a musician, writer, futurist, entrepreneur and artist living in Austin, Texas. I don’t feel I’m necessarily phenomenal at any of these things, but I’m sort of taking an approach of throwing titles at my ego and seeing which ones stick and sprout.

Do you paint professionally, as a hobby artist, or both?

I’m more or less a hobby artist. I’ve made a few sales of watercolors here and there and have had my pieces in a few shows around town, but, so far, the vast majority of my art career exists as optimistic speculation between my ears.

What genre(s) do you work in?

I mostly create abstract art. I’ve been messing around with web comic ideas a bit, but that’s pretty young on my “stuff I wanna do” list. Recently, I’ve been working diligently on illustrating a children’s book series that I’ve been conceptualizing for a few years.

Whose work inspires you most — who are your role models as an artist?

Ann Druyan & Carl Sagan, Craig & Connie Minowa, Darren Waterston, Cy Twombly, Theodor Seuss Geisel, Pendelton Ward, Shel Silverstein and many others.

How and when did you get to try digital painting for the first time?

My first exposure to creating digital art was through the mid-nineties art program Kid Pix. It was in most every school’s computer lab and I thought it was mind-blowingly fun. I just recently got a printout from one of my first digital paintings from this era (I think I was around 8 or so when I made it) and I still like it. It was a UFO destroying a beach house by shooting lightning at it.

What makes you choose digital over traditional painting?

Don’t get me wrong, traditional (I call it analog :-P) art is first and foremost in my heart, but when investment in materials and time is compared between the two mediums, there’s no competition. If I’m trying to make something where I’m prototyping and moving elements around within an image while testing different color schemes and textures, digital is absolutely the way to go.

How did you find out about Krita?

I was looking for an open source alternative to some of the big name software that’s currently out for digital art. I had already been using GiMP and was fairly happy with what it offered in competition with Photoshop, but I needed something that was more friendly towards digital painting, with less emphasis on imaging. Every combination of words in searches and numerous scans through message boards all pointed me to Krita.

What was your first impression?

To be honest, I was a little overwhelmed with the vast set of options Krita has to offer in default brushes and customization. After a few experimental sessions, some video tutorials, and a healthy amount of reading through the manual, I felt much more confident in my approach to creating with Krita.

What do you love about Krita?

If I have a concept or a direction I want to take a piece, even if it seems wildly unorthodox, there’s a way to do it in Krita. I was recently trying to make some unique-looking trees and thought to myself “I wish I could make the leafy part look like rainbow tinfoil…” I messed around with the textures, found a default one that looked great for tinfoil, made a bunch of texture circles with primary-colored brush outlines, selected all opaque on the layer, added a layer below it, filled in the selected space with a rainbow gradient, lowered the opacity a bit on the original tinfoil circle layer, and bam! What I had imagined was suddenly a (digital) reality!

What do you think needs improvement in Krita? Is there anything that really annoys you?

Once in a while, if I’m really pushing the program and my computer, Krita will seem to get lost for a few seconds and become non responsive. Every new release seems to lessen this issue, though, and I’m pretty confident that it won’t even be an issue as development continues.

What sets Krita apart from the other tools that you use?

Krita feels like an artist’s program, created by artists who program. Too many other tools feel like they were created by programmers and misinterpreted focus group data to cater to artists’ needs that they don’t fully understand. I know that’s a little vague, but once you’ve tried enough different programs and then come to Krita, you’ll more than likely see what I mean.

If you had to pick one favourite of all your work done in Krita so far, what would it be, and why?

I’m currently illustrating a children’s book series that I’ve written which addresses the size and scope of everything and how it relates to the human experience. I’m calling the series “BIG, small & Me”, I’m hoping to independently publish the first book in the fall and see where it goes. I’m not going to cure cancer or invent faster than lightspeed propulsion, but if I can inspire the child that one day will do something this great, or even greater, then I will consider my life a success.

big small and me cover sample

What techniques and brushes did you use in it?

I’ve been starting out sketching scenes with the pencil brushes, then creating separate layers for inking. Once I have the elements of the image divided up by ink, I turn off my sketch layers, select the shapes made by the ink layers and then fill blocks on third layers. When I have the basic colors of the different elements completed in this manner, I turn off my ink lines and create fourth and fifth layers for texturing and detailing each element. There are different tweaks and experimental patches in each page I’ve done, but this is my basic mode of operation in Krita.

Where can people see more of your work?

I have a few images and a blog up at, and will hopefully be doing much more with that site  pretty soon. I’m still in the youngish phase of most of the projects I’m working on so self promotion will most likely be ramping up over the next few months. I’m hoping to set up a kickstarter towards the end of the year for a first pressing of BIG, small & Me, but until then most of my finished work will end up on either or my facebook page.

Anything else you’d like to share?

It’s ventures like Krita that give me hope for the future of creativity. I am so thankful that there are craftspeople in the world so dedicated to creating such a superior tool for digital art.

August 26, 2015

Switching to a Kobo e-reader

For several years I've kept a rooted Nook Touch for reading ebooks. But recently it's become tough to use. Newer epub books no longer work on any version of FBReader still available for the Nook's ancient Android 2.1, and the Nook's built-in reader has some fatal flaws: most notably that there's no way to browse books by subject tag, and it's painfully slow to navigate a library of 250 books when you have to start from the As and page slowly forward, six books at a time, to get to T.

The Kobo Touch

But with my Nook unusable, I borrowed Dave's Kobo Touch to see how it compared. I like the hardware: same screen size as the Nook, but a little brighter and sharper, with a smaller bezel around it, and a spring-loaded power button in a place where it won't get pressed accidentally when it's packed in a suitcase -- the Nook was always coming on while in its case, and I didn't find out until I pulled it out to read before bed and discovered the battery was too low.

The Kobo worked quite nicely as a reader, though it had a few of the same problems as the Nook. They both insist on justifying both left and right margins (Kobo has a preference for that, but it doesn't work in any book I tried). More important is the lack of subject tags. The Kobo has a "Shelves" option, called "Collections" in some versions, but adding books to shelves manually is tedious if you have a lot of books. (But see below.)

It also shared another Nook problem: it shows overall progress in the book, but not how far you are from the next chapter break. There's a choice to show either book progress or chapter progress, but not both; and chapter progress only works for books in Kobo's special "kepub" format (I'll write separately about that). I miss FBReader's progress bar that shows both book and chapter progress, and I can't fathom why that's not considered a necessary feature for any e-reader.

But mostly, Kobo's reader was better than the Nook's. Bookmarks weren't perfect, but they basically worked, and I didn't even have to spend half an hour reading the manual to use them (like I did with the Nook). The font selection was great, and the library navigation had one great advantage over the Nook: a slider so you can go from A to T quickly.

I liked the Kobo a lot, and promptly ordered one of my own.

It's not all perfect

There were a few disadvantages. Although the Kobo had a lot more granularity in its line spacing and margin settings, the smallest settings were still a lot less tight than I wanted. The Nook only offered a few settings but the smallest setting was pretty good.

Also, the Kobo can only see books at the top level of its microSD card. No subdirectories, which means that I can't use a program like rsync to keep the Kobo in sync with my ebooks directory on my computer. Not that big a deal, just a minor annoyance.

More important was the subject tagging, which is really needed in a big library. It was pretty clear Shelves/Collections were what I needed; but how could I get all my books into shelves without laboriously adding them all one by one on a slow e-ink screen?

It turns out Kobo's architecture makes it pretty easy to fix these problems.

Customizing Kobo

While the rooted Nook community has been stagnant for years -- it was a cute proof of concept that, in the end, no one cared about enough to try to maintain it -- Kobo readers are a lot easier to hack, and there's a thriving Kobo community on MobileReads which has been trading tips and patches over the years -- apparently with Kobo's blessing.

The biggest key to Kobo's customizability is that you can mount it as a USB storage device, and one of the files it exposes is the device's database (an sqlite file). That means that well-supported programs like Calibre can update shelves/collections on a Kobo, access its book list, and do other nifty tricks; and if you want more, you can write your own scripts, or even access the database by hand.

I'll write separately about some Python scripts I've written to display the database and add books to shelves, and I'll just say here that the process was remarkably straightforward and much easier than I usually expect when learning to access a new device.

There's lots of other customizing you can do. There are ways of installing alternative readers on the Kobo, or installing Python so you can write your own reader. I expected to want that, but so far the built-in reader seems good enough.

You can also patch the OS. Kobo updates are distributed as tarballs of binaries, and there's a very well designed, documented and supported (by users, not by Kobo) patching script distributed on MobileReads for each new Kobo release. I applied a few patches and was impressed by how easy it was. And now I have tight line spacing and margins, a slightly changed page number display at the bottom of the screen (still only chapter or book, not both), and a search that defaults to my local book collection rather than the Kobo store.

Stores and DRM

Oh, about the Kobo store. I haven't tried it yet, so I can't report on that. From what I read, it's pretty good as e-bookstores go, and a lot of Nook and Sony users apparently prefer to buy from Kobo. But like most e-bookstores, the Kobo store uses DRM, which makes it a pain (and is why I probably won't be using it much).

They use Adobe's DRM, and at least Adobe's Digital Editions app works in Wine under Linux. Amazon's app no longer does, and in case you're wondering why I didn't consider a Kindle, that's part of it. Amazon has a bad reputation for removing rights to previously purchased ebooks (as well as for spying on their customers' reading habits), and I've experienced it personally more than once.

Not only can I no longer use the Kindle app under Wine, but Amazon no longer lets me re-download the few Kindle books I've purchased in the past. I remember when my mother used to use the Kindle app on Android regularly; every few weeks all her books would disappear and she'd have to get on the phone again to Amazon to beg to have them back. It just isn't worth the hassle. Besides, Kindles can't read public library books (those are mostly EPUBs with Adobe DRM); and a Kindle would require converting my whole EPUB library to MOBI. I don't see any up side, and a lot of down side.

The Adobe scheme used by Kobo and Nook is better, but I still plan to avoid books with DRM as much as possible. It's not the stores' fault, and I hope Kobo does well, because they look like a good company. It's the publishers who insist on DRM. We can only hope that some day they come to their senses, like music publishers finally did with MP3 versus DRMed music. A few publishers have dropped DRM already, and if we readers avoid buying DRMed ebooks, maybe the message will eventually get through.

August 25, 2015

Funding Krita

Even Free software needs to be funded. Apart from being very collectible, money is really useful: it can buy transportation so contributors can meet, accommodation so they can sleep, time so they can code, write documentation, create icons and other graphics, hardware to test and develop the software on.

With that in mind, KDE is running a fund raiser to fund developer sprints, Synfig is running a fund raiser to fund a full-time developer and Krita… We’re actually trying to make funded development sustainable. Blender is already doing that, of course.

Funding development is a delicate balancing act, though. When we started doing sponsorship for full-time development on Krita, there were some people concerned that paying some community members for development would disenchant others, the ones who didn’t get any of the money. Even Google Summer of Code already raised that question. And there are examples of companies hiring away all community members, killing the project in the process.

Right now, our experience shows that it hasn’t been a problem. That’s partly because we have always been very clear about why we were doing the funding: Lukas had the choice between working on Krita and doing some boring web development work, and his goal was fixing bugs and performance issues, things nobody had time for, back then. Dmitry was going to leave university and needed a job, and we definitely didn’t want to lose him for the project.

In the end, people need food, and every line of code that’s written for Krita is one line more. And those lines translate to increased development speed, which leads to a more interesting project, which leads to more contributors. It’s a virtuous circle. And there’s still so much we can do to make Krita better!

So, what are we currently doing to fund Krita development, and what are our goals, and what would be the associated budget?

Right now, we are:

  • Selling merchandise: this doesn’t work. We’ve tried dedicated webshops, selling tote bags and mugs and things, but total sales were under a hundred euros, which makes it not worth the hassle.
  • Selling training DVD’s: Ramon Miranda’s Muses DVD is still a big success. Physical copies and downloads are priced the same. There’ll be a new DVD, called “Secrets of Krita”, by Timothée Giet this year, and this week, we’ll start selling USB sticks (credit-card shaped) with the training DVD’s and a portable version of Krita for Windows and OSX and maybe even Linux.
  • The Krita Development Fund. It comes in two flavors. For big fans of Krita, there’s the development fund for individual users. You decide how much a month you can spare for Krita, and set up an automatic payment profile with Paypal or a direct bank transfer. The business development fund has a minimum amount of 50 euros/month and gives access to the CentOS builds we make.
  • Individual donations. This depends a lot on how much we do publicity-wise, and there are really big donations now and then which makes it hard to figure out what to count on, from month to month, but the amounts are significant. Every individual donor gets a hand-written email saying thank-you.
  • We are also selling Krita on Steam. We’ve got a problem here: the Gemini variant of Krita, with the switchable tablet/desktop GUI, got broken with the 2.9 release. But Steam users also get regular new builds of the 2.9 desktop version. Stuart is helping us here, but we need to work harder to interact with our community on Steam!
  • And we do one or two big crowd-funding campaigns. Our yearly kickstarters. They take about two full-time months to prepare, and you can’t skimp on preparation because then you’ll lose out in the end, and they take significant work to fulfil all the rewards. Reward fulfilment is actually something we pay a volunteer a small gratuity to do. We are considering doing a second kickstarter this year, to give me an income, with as goal producing a finished, polished OSX port of Krita. The 2015 kickstarter campaign brought in 27,471.78 euros, but we still need to buy and send out the rewards, which are estimated at an approximate cost of 5,000 euros.
  • Patreon. I’ve started a patreon, but I’m not sure what to offer prospective patrons, so it isn’t up and running yet.
  • Bug bounties. The problem here is that the amount of money people think is reasonable for fixing a bug is wildly unrealistic, even for a project that is as cheap to develop as Krita. Realistically, you have to count on 250 euros for a day of work. I’ve sent out a couple of quotations, but… if adding support for loading group layers from XCF files already takes three days, that’s 750 euros, and most people simply cannot bear the price of a bug fix individually.

So, let’s do the sums for the first eight months of 2015:

  • PayPal (merchandise, training materials, development fund, Kickstarter-through-PayPal and smaller individual donations): 8,902.04 euros
  • Bank transfers (the big individual donations usually arrive directly at our bank account, including a one-time donation to sponsor the port of Krita to Qt5): 15,589.00 euros
  • Steam: 5,150.97 euros
  • Kickstarter: 27,471.78 euros
  • Total: 57,113.79 euros

So, the Krita Foundation’s current yearly budget is roughly 65,000 euros, which is enough to employ Dmitry full-time and me part-time. The first goal really is to make sure I can work on Krita full-time again. Since KO broke down, that’s been hard, and I’ve spent five months on the really exciting Plasma Phone project for Blue Systems. That was a wonderful experience, but it had a direct influence on the speed of Krita development, both code-wise and in terms of growing the userbase and keeping people involved.

What we have also tried is approaching VFX and game studios, selling support and custom development. This isn’t a big success yet, and that puzzles me. All these studios are on Linux. All their software, except for their 2D painting application, is on Linux. They want to use Krita, on Linux. And every time we are in contact with a studio, they tell us they want Krita. Except there’s some feature missing, something that needs improving… And then we make a very modest quote, one that doesn’t come near what custom development should cost, and silence is the result.

Developing Krita is actually really cheap. We don’t have any overhead: no management, no office, modest hardware needs. With 5,000 euros we can fund one full-time developer for one month, with something to spare for hardware, sprints and other costs, like the license for the administration software, stamps and envelopes. The first goal would be to double our budget, so we can have two full-time developers, but in the end I would like to be able to fund four to five full-time developers, including me. At 5,000 euros per developer-month, that means we’re looking at a yearly budget of roughly 300,000 euros. With that budget, we’d surpass every existing 2D painting application, and it’s about what Adobe or Corel would need to budget for a single developer per year!

Taking it from here, what are the next steps? I still think that without the direct involvement of people and organizations who want to use Krita in a commercial, professional setting, we cannot reach the target budget. I’m too much of a tech geek to figure out how to reach out and convince people that supporting Krita would be a winning proposition (there’s a reason KO failed, and that is that we were horrible at sales). Answers on a post-card, please!

August 24, 2015

Self-generated metadata with LVFS

This weekend I finished the penultimate feature for the LVFS. Until now, when uploading firmware there was up to a 24-hour delay before the new firmware would appear in the metadata. This was because a cronjob on my home server downloaded the files from the LVFS site every night, ran appstream-builder on them locally and then uploaded the metadata back to the site. Not awesome at all.

Actually generating the metadata on the OpenShift instance was impossible until today: because libgcab and libappstream-glib are not available on the RHEL 6.2 instance I’m using, I had to re-implement two things in Python:

  • Reading and writing Microsoft cabinet archives
  • Reading MetaInfo files and writing compressed AppStream XML

The two helper libraries (only really implementing the parts required, but patches welcome) are python-cabarchive and python-appstream. I’m not awesome at Python, so feedback (in the form of pull requests) is welcome.
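To give a feel for the flow, here is a minimal sketch of how such metadata generation could look. This is not the actual LVFS code: the CabArchive interface shown is an assumption about what python-cabarchive might expose, and only gzip and xml.etree come from the standard library.

    # A minimal sketch, not the real LVFS code. The CabArchive interface
    # here (parse on construction, dict-like access to the contained
    # files) is an assumption, not a documented python-cabarchive API.
    import gzip
    import xml.etree.ElementTree as ET

    from cabarchive import CabArchive  # assumed import

    def build_metadata(cab_blobs):
        """Build gzip-compressed AppStream XML from a list of .cab buffers."""
        store = ET.Element('components', version='0.9')
        for blob in cab_blobs:
            arc = CabArchive(blob)              # assumed: parses the cabinet
            for name, data in arc.items():      # assumed: filename -> bytes
                if name.endswith('.metainfo.xml'):
                    # A real implementation would also validate the component
                    # and add the download location and checksum of the .cab.
                    store.append(ET.fromstring(data))
        return gzip.compress(ET.tostring(store, encoding='utf-8'))

    # The result is what clients would then fetch as e.g. firmware.xml.gz.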

This means I’m nearly okay to be hit by a bus. Well, nearly; the final feature is to collect statistics about how many people are downloading each firmware file, and possibly to collect data on how many failures and successes there have been when actually applying the firmware. This is actually quite tricky to do without causing privacy issues or double counting. I’ll do some more thinking and then write up a proposal; ideas welcome.
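As a purely hypothetical illustration of one direction this could take (my sketch, not a decided design): repeat downloads could be de-duplicated with a salted, rotated hash of the client address, so raw IP addresses never need to be stored.

    # Hypothetical sketch only, not a decided design: de-duplicate download
    # counts per day by hashing a salted client identifier, so no raw IP
    # addresses are ever kept.
    import hashlib

    def client_token(ip_addr, day, salt):
        """Return an anonymous per-client, per-day token."""
        raw = '{}:{}:{}'.format(salt, day, ip_addr).encode('utf-8')
        return hashlib.sha256(raw).hexdigest()

    seen_today = set()  # cleared, and the salt rotated, once a day

    def count_download(counters, firmware_id, ip_addr, day, salt):
        """Count one download, ignoring repeats from the same client that day."""
        key = (firmware_id, client_token(ip_addr, day, salt))
        if key not in seen_today:
            seen_today.add(key)
            counters[firmware_id] = counters.get(firmware_id, 0) + 1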

August 22, 2015

2015 KDE Sprints Fundraiser

Krita is a part of the KDE Community. Without KDE, Krita wouldn’t exist, and KDE still supports Krita in many different ways. KDE is a world-wide community of people and projects creating free software, ranging from applications like Krita, digiKam and Kdenlive, to educational software like GCompris, to desktop and mobile phone software.

Krita not only uses the foundations developed by KDE and its developers all around the world; KDE also hosts the website, the forums, everything needed for development. And the people working on all this need to meet from time to time to discuss future directions, make decisions about technology and work together on all the software that KDE communities create. As with Krita, most of the work on KDE is done by volunteers!

KDE wants to support those volunteers with travel grants and accommodation support, and for that, KDE is raising funds right now: getting developers, artists, documentation writers and users all together in one place to work on creating awesome free software! And there is a big sprint coming up soon: the Randa Meetings. From the 6th to the 13th of September, more than 50 people will meet in Randa, Switzerland to work, discuss, decide, document, write, eat and sleep under one and the same roof.

It’s a very effective meeting: in 2011 the KDE Frameworks 5 project was started there, rejuvenating and modernizing the KDE development platform. Krita is currently being ported to Frameworks. Last year, Kdenlive received special attention, reinvigorating the project as part of the KDE community. Krita artist Timothée Giet worked on GCompris, another new KDE project. This year, the focus is on bringing KDE software to touch devices: tablets, phones and laptops with touch screens.

Let’s help KDE bring people together!

August 21, 2015

Embargoed firmware updates in LVFS

For the last couple of days I’ve been working with a large vendor adding new functionality to the LVFS to support their specific workflow.

[Screenshot: the new firmware target options in the LVFS web interface, 2015-08-21]

The new embargo target allows vendors to test the automatic update functionality using a secret vendor-specific URL set in /etc/fwupd.conf, without releasing the firmware to the general public until the hardware has been announced.
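To make that concrete, such a setting might look something like the snippet below; the key name and the URL are illustrative placeholders, not values from a real vendor setup.

    # /etc/fwupd.conf -- illustrative only; both the key name and the URL
    # are placeholder guesses, not a real vendor's secret values
    [fwupd]
    DownloadURI=https://secure-lvfs.example.com/downloads/firmware-VENDOR-SECRET.xml.gz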

Updates normally go through these stages: Private → Embargoed → Testing → Stable, although LVFS users with the QA capability can skip these as required. The screenshot also shows that we’re unpacking the .cab file and parsing the MetaInfo file server-side (in Python), which gives us much richer detail about the firmware.
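For illustration only (this is not the LVFS source), the promotion ladder could be modelled like this, with only QA users allowed to jump more than one stage at a time:

    # Hypothetical model of the target ladder, not LVFS code: it encodes
    # the Private -> Embargoed -> Testing -> Stable ordering and the rule
    # that only QA users may skip stages.
    from enum import IntEnum

    class Target(IntEnum):
        PRIVATE = 0
        EMBARGO = 1
        TESTING = 2
        STABLE = 3

    def promote(current, requested, has_qa_capability=False):
        """Return the new target, enforcing one-step moves for non-QA users."""
        if requested <= current:
            raise ValueError('can only promote forwards')
        if not has_qa_capability and requested != current + 1:
            raise ValueError('only QA users may skip stages')
        return Target(requested)

    # promote(Target.EMBARGO, Target.STABLE) raises, while
    # promote(Target.EMBARGO, Target.STABLE, has_qa_capability=True) works.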