December 08, 2018

darktable 2.6.0rc1 released

we’re proud to announce the second release candidate for the upcoming 2.6 series of darktable, 2.6.0rc1!

the github release is here:

as always, please don’t use the autogenerated tarball provided by github, but only our tar.xz. the checksums are:

$ sha256sum darktable-2.6.0rc1.tar.xz
202bb53e924429aec74cd0a864b3d6a5c4d57b54547ef858bbd253116b909d22 darktable-2.6.0rc1.tar.xz
$ sha256sum darktable-2.6.0.rc1.dmg
c4ba0b929ae66904ae4e9fb97e67607bf1cf97f36a17c58e4b20624795c5e759 darktable-2.6.0.rc1.dmg
$ sha256sum darktable-2.6.0rc1.exe
808196a826eafe6ce2d913482ec4f60de60a4b061d934ee9e810e5bd8e602456 darktable-2.6.0rc1.exe
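
As an aside, a downloaded file can be checked against a published hash with sha256sum in check mode; here is a minimal sketch using a stand-in file (the filename and contents are illustrative, not the real release artifacts):

```shell
# Create a stand-in file and record its hash, as a release page would.
printf 'example payload\n' > example.tar.xz
published=$(sha256sum example.tar.xz | awk '{print $1}')

# Verification: feed "<hash>  <filename>" to sha256sum -c.
# Two spaces between hash and filename is the canonical format.
echo "$published  example.tar.xz" | sha256sum -c -
# prints: example.tar.xz: OK
```

sha256sum -c exits non-zero on a mismatch, so this also works in scripts.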

when updating from the currently stable 2.4.x series, please bear in mind that your edits will be preserved during this process, but it will not be possible to downgrade from 2.6 to 2.4.x any more.

Important note: to make sure that darktable can keep on supporting the raw file format for your camera, please read this post on how/what raw samples you can contribute to ensure that we have the full raw sample set for your camera under CC0 license!

  • Over 1600 commits to darktable+rawspeed since 2.4
  • 260+ pull requests handled
  • 250+ issues closed
  • Updated user manual is coming soon™

The Big Ones

  • new retouch module, allowing edits based on image frequency layers
  • new filmic module, which can replace the base curve and the shadows and highlights modules
  • new module to handle duplicates in the darkroom, with the ability to add a title, create a standard or virgin duplicate, delete a duplicate, and quickly compare against a duplicate
  • new logarithmic controls for the tone curve
  • new mode for the unbreak profile module
  • mask previews now let you adjust size and hardness before placing a mask
  • the cropped area can now be changed in the perspective correction module
  • the mask blur has been complemented with a guided filter for fine-tuning (works in both RGB and Lab color spaces)
  • the color balance module has two new modes based on ProPhotoRGB and HSL
  • experimental support for the PPC64le architecture (OpenCL support needs to be disabled with -DUSE_OPENCL=OFF)

New Features And Changes

  • search from the map view is now fixed
  • visual rework of the lighttable (color label, image kind, local copy)
  • an option makes it possible to display some image information directly on the thumbnails
  • add optional scrollbars on lighttable, or lighttable and darkroom
  • the opacity of each mask in the clone module can now be adjusted
  • lightroom import module supports the creator, rights, title, description and publisher information.
  • enhance TurboPrint support by displaying the dialogue with all possible options
  • new sort filter based on the image’s aspect
  • new sort filter based on the image’s shutter speed
  • new sort filter based on the image’s group
  • new sort filter based on a personalized sorting order (drag&drop on the lighttable view)
  • collection based on the local copy status
  • group image number displayed on the collection module
  • new zoom levels at 50%, 400%, 800% and 1600%
  • better support for monochrome RAW
  • add contextual help pointing to darktable’s manual
  • better copy/paste support for multiple instances
  • add support for renaming the module instances
  • add frequency based adjustment for the RAW denoise module
  • add frequency based adjustment for the denoise profile module
  • all widgets should be themable via CSS now
  • add support for configuring the module layout
  • different way to select hierarchical tags in the collection module (only the actual parent tag, all children or the parent and children)
  • better handling of grouped images by allowing stars and color labels to be set for the whole group
  • make it possible to apply a preset to a new module instance using the middle click
  • new script to migrate collection from Capture One Pro

Bug fixes

  • Fix the color pickers behavior in all modules
  • Fix liquify tools switching
  • Many more bugs got fixed



Changed Dependencies

  • CMake 3.4 is now required
  • In order to compile darktable you now need at least gcc-5.0+/clang-3.9+
  • Minimal clang version was bumped from 3.4+ to 3.9+
  • Packagers are advised to pass -DRAWSPEED_ENABLE_LTO=ON to CMake to enable partial LTO.

RawSpeed changes

  • GoPro ‘.GPR’ raws are now supported via new, fast ‘VC-5’ parallel decompressor
  • Panasonic’s new raw compression (‘.RW2’, GH5s, G9 cameras) is now supported via new fast, parallel ‘Panasonic V5’ decompressor
  • Panasonic’s old (also ‘.RW2’) raw decompressor got rewritten, re-parallelized
  • Phase One (‘.IIQ’) decompressor got parallelized
  • Nikon NEF ‘lossy after split’ raw support was recovered
  • Phase One (‘.IIQ’) Quadrant Correction is now supported
  • Olympus High-Res (uncompressed) raw support
  • Lots and lots of maintenance, sanitization, cleanups, and small rewrites/refactoring
  • NOTE: Canon ‘.CR3’ raws are NOT supported as of yet.

Camera support, compared to 2.4.0

Base Support

  • Canon EOS 1500D
  • Canon EOS 2000D
  • Canon EOS Rebel T7
  • Canon EOS 3000D
  • Canon EOS 4000D
  • Canon EOS Rebel T100
  • Canon EOS 5D Mark IV (sRaw1, sRaw2)
  • Canon EOS 5DS (sRaw1, sRaw2)
  • Canon EOS 5DS R (sRaw1, sRaw2)
  • Canon PowerShot G1 X Mark III
  • Fujifilm X-A5
  • Fujifilm X-H1 (compressed)
  • Fujifilm X-T100
  • Fujifilm X-T3 (compressed)
  • GoPro FUSION (dng)
  • GoPro HERO5 Black (dng)
  • GoPro HERO6 Black (dng)
  • GoPro HERO7 Black (dng)
  • Hasselblad CFV-50
  • Hasselblad H5D-40
  • Hasselblad H5D-50c
  • Kodak DCS Pro 14nx
  • Kodak DCS520C
  • Kodak DCS760C
  • Kodak EOS DCS 3
  • Nikon COOLPIX P1000 (12bit-uncompressed)
  • Nikon D2Xs (12bit-compressed, 12bit-uncompressed)
  • Nikon D3500 (12bit-compressed)
  • Nikon Z 6 (except uncompressed raws)
  • Nikon Z 7 (except 14-bit uncompressed raw)
  • Olympus E-PL8
  • Olympus E-PL9
  • Olympus SH-2
  • Panasonic DC-FZ80 (4:3)
  • Panasonic DC-G9 (4:3)
  • Panasonic DC-GH5S (4:3, 3:2, 16:9, 1:1)
  • Panasonic DC-GX9 (4:3)
  • Panasonic DC-LX100M2 (4:3, 1:1, 16:9, 3:2)
  • Panasonic DC-TZ200 (3:2)
  • Panasonic DC-TZ202 (3:2)
  • Panasonic DMC-FZ2000 (3:2)
  • Panasonic DMC-FZ2500 (3:2)
  • Panasonic DMC-FZ35 (3:2, 16:9)
  • Panasonic DMC-FZ38 (3:2, 16:9)
  • Panasonic DMC-GX7MK2 (4:3)
  • Panasonic DMC-ZS100 (3:2)
  • Paralenz Dive Camera (chdk)
  • Pentax 645Z
  • Pentax K-1 Mark II
  • Pentax KP
  • Phase One P65+
  • Sjcam SJ6 LEGEND (chdk-b, chdk-c)
  • Sony DSC-HX99
  • Sony DSC-RX0
  • Sony DSC-RX100M5A
  • Sony DSC-RX10M4
  • Sony DSC-RX1RM2
  • Sony ILCE-7M3

White Balance Presets

  • Canon EOS M100
  • Leaf Credo 40
  • Nikon D3400
  • Nikon D5600
  • Nikon D7500
  • Nikon D850
  • Nikon Z 6
  • Olympus E-M10 Mark III
  • Olympus E-M1MarkII
  • Panasonic DC-G9
  • Panasonic DC-GX9
  • Panasonic DMC-FZ300
  • Sony DSC-RX0
  • Sony ILCE-6500
  • Sony ILCE-7M3
  • Sony ILCE-7RM3

Noise Profiles

  • Canon EOS 200D
  • Canon EOS Kiss X9
  • Canon EOS Rebel SL2
  • Canon EOS 750D
  • Canon EOS Kiss X8i
  • Canon EOS Rebel T6i
  • Canon EOS 760D
  • Canon EOS 8000D
  • Canon EOS Rebel T6s
  • Canon EOS 77D
  • Canon EOS 9000D
  • Canon EOS 800D
  • Canon EOS Kiss X9i
  • Canon EOS Rebel T7i
  • Canon EOS M100
  • Canon EOS M6
  • Canon PowerShot G1 X Mark II
  • Canon PowerShot G1 X Mark III
  • Canon PowerShot G9 X
  • Fujifilm X-T3
  • Fujifilm X100F
  • Nikon COOLPIX B700
  • Nikon D5600
  • Nikon D7500
  • Nikon D850
  • Olympus E-M10 Mark III
  • Olympus TG-5
  • Panasonic DC-G9
  • Panasonic DC-GX9
  • Panasonic DMC-FZ35
  • Panasonic DMC-FZ38
  • Panasonic DMC-GF6
  • Panasonic DMC-LX10
  • Panasonic DMC-LX15
  • Panasonic DMC-LX9
  • Panasonic DMC-TZ70
  • Panasonic DMC-TZ71
  • Panasonic DMC-ZS50
  • Pentax K-01
  • Pentax KP
  • Samsung NX1
  • Sony DSC-RX100M4
  • Sony DSC-RX10M3
  • Sony ILCE-7M3


Translations

  • Afrikaans
  • Czech
  • German
  • Finnish
  • French
  • Galician
  • Hebrew
  • Hungarian
  • Italian
  • Norwegian Bokmål
  • Nepali
  • Dutch
  • Portuguese
  • Romanian
  • Russian
  • Slovenian
  • Albanian
  • Thai
  • Chinese

December 07, 2018

AMI joins the LVFS

American Megatrends Inc. may not be a company you’ve heard of, unless perhaps you like reading early-boot BIOS messages. AMI is the world’s largest BIOS firmware vendor, supplying firmware and tools to customers such as Asus, Clevo, Intel, AMD and many others. If you’ve heard of a vendor using Aptio for firmware updates, that means it’s from them. AMI has been testing the LVFS, UpdateCapsule and fwupd for a few months and is now fully compatible. They are updating their whitepapers for customers explaining the process of generating a capsule, using the ESRT, and generating deliverables for the LVFS.

This means “LVFS Support” becomes a first-class citizen alongside Windows Update for the motherboard manufacturers. The support should trickle down to the resellers, so vendors using Clevo motherboards, like Tuxedo, get LVFS support almost for free, although it will take a bit of time to reach the smaller OEMs.

Also, expect another large vendor announcement soon. It’s the one quite a few people have been waiting for.

December 05, 2018

Ton Roosendaal and Blender to receive ASIFA-Hollywood Ub Iwerks Award

The International Animated Film Society, ASIFA-Hollywood, will grant Ton Roosendaal and the Blender Open Source Software the Ub Iwerks Award at the upcoming 46th Annual Annie Awards™. As part of the juried awards category, the Ub Iwerks Award recognizes the technical advancements that make a significant impact on the art or industry of animation.

It will be an incredible honour for Ton and for the Blender project to accept this award in February 2019! This is the first time this industry award has been granted to a free/open source software project.

About the Award

Named after the man who is credited for giving shape, movement, and personality to Mickey Mouse, the Ub Iwerks Award was created and given to individuals or companies for technical advancements that make a significant impact on the art or industry of animation.

Past recipients of this award include Dr. Ed Catmull for his breakthrough technologies at Pixar, Scott Johnston for his innovative work in Looney Tunes: Back in Action, Digital Domain, Inc. for their groundbreaking innovations in Titanic, and Eric Daniels for the development of the ‘Deep Canvas’ process for Walt Disney’s Tarzan.

December 04, 2018

Fedora Design Team Meeting, 4 Nollaig 2018

Fedora Design Team Logo

Today we had a Fedora Design Team meeting. Here’s what went down (meetbot link).

Freenode / Matrix Bridging Issues

Tango Internet Group Chat, CC0 from

About half of the team members who participated today connected via Matrix rather than directly through IRC. Unfortunately, we noticed an issue with bridging between these two networks today – both sides could see IRC comments, but comments from the Matrix side weren’t getting sent to IRC. ctmartin recognized the issue from another Fedora channel and figured out that adding +v to the channel members using Matrix would fix the issue. I am not sure if this is All Fixed Now or is going to be an ongoing Thing. But that is why our meeting started late today.

If anybody has ideas on how to resolve this in a permanent way, I would very much appreciate your advice!

Fedora 30 Artwork

CC BY-SA 3.0, wikimedia commons "A Fresnel lens exhibited in the Musée national de la Marine"

For five Fedora releases now, the design team has used a famous scientist / mathematician / technologist as the inspiration for the release artwork. We do this based on an alphabetical system; Fedora 30 is slated to honor a person whose name begins with an “F.” Gnokii manages this process, and he has already set up and tallied the results of the design team-specific vote, in which we chose from the following:

  • Federico Faggin (microprocessor)
  • Rosalind Franklin (DNA helix)
  • Sandford Fleming (Universal Standard Time)
  • Augustin-Jean Fresnel (fresnel lens)

As gnokii announced on our team mailing list, the inspiration for the Fedora 30 artwork will be Augustin-Jean Fresnel. He also gathered the following set of inspirational images, all revolving around the design of the Fresnel lens. We agreed in the meeting that the lens would make a good central focus / concept for the artwork, whether as a depiction of the lens itself or some form of study of the diffraction pattern (and “thin-film” rainbow effect) that inspired its invention:

The action item we got out of this discussion is that we need to meet separately, in a remote hackfest if you will, to work on the F30 artwork (as we typically do each release). This will take place in #fedora-design on IRC (or Fedora Design on Matrix). If you are interested in participating, here is the scheduling poll to organize a time for this event:

Exploring a Fedora logo refresh

For the past few weeks we have been working with mattdm on exploring what a refresh of the Fedora logo might look like. This work has been ongoing in design ticket #620. There are a few issues such a refresh would aim to address – if you’ve ever worked with the current Fedora logo yourself, these should be pretty familiar (copy-pasta-ed from the ticket):

  • It doesn’t work well at small sizes
  • It doesn’t work at all in a single color
  • It’s hard to work with on a dark background
  • The “voice” bubble means it’s hard to center visually in designs
  • The Fedora wordmark is based on a non-open-source font
  • The “a” in the wordmark is easily mistaken for an o
  • The horizontal wordmark + logo with the “floated” trailing logo is challenging to work with
  • It gets confused with the Facebook logo

The general approach here is a light touch, and not an overhaul. Below are some of the leading concepts / experiments thus far:

The next step we discussed is to create something like “style tiles” for each concept, so we can better understand how each would play in context: how it would look with our fonts and color palette, and which design elements would go with it. That process may surface issues in the design of each concept which we’ll need to address.

After that, we’ll open it up to broad community input – maybe a formal survey and/or some mini IRC or video chat focus group sessions – to gather feedback, see which concept the broader community prefers, and see if there are tweaks / adjustments we can make to iterate based on what we hear.

This is something we’ll continue to work on for the next few months. If you have feedback on the assets so far, please feel free to leave it in the comments here, but be nice please 🙂 and note this is still early stages.

Are you new to Fedora Design? Would you like to join?

This little ticket popped up in our triage during the meeting today, and is a good one for you to grab. It has a LibreOffice template you can use, or simply draw from for inspiration. Note the base font should be Overpass (free font, downloadable at

If that’s not your speed, we have a couple of other newbie tickets in our queue, check them out and feel free to grab one that piques your interest!

Fedora Podcast Website Design

terezahl, the Fedora Design team intern, has been working on a website design for the Fedora Podcast that x3mboy has created. She showed us a snapshot of her work-in-progress, and we gave her some feedback. Overall, it looks great, and we’re excited to see where it goes 🙂

That’s it folks!

If you are interested in participating in the Fedora 30 Artwork IRC Hackfest, please vote for a timeslot here, ASAP 🙂

Flatpaks in Fedora – now live


I’m pleased to announce that we now have full initial support for Flatpak creation in Fedora infrastructure: Flatpaks can be built as containers, pushed to testing and stable via Bodhi, and installed by users through the command line, GNOME Software, or KDE Discover.

The goal of this work has been to enable creating Flatpaks from Fedora packages on Fedora infrastructure – this will expand the set of Flatpaks that are available to all Flatpak users, provide a runtime that gets updates as bugs and security fixes appear in Fedora, and provide Fedora users, especially on Fedora Silverblue, with an out-of-the-box set of Flatpak applications enabled by default.

At a technical level, a very brief summary of the approach we’ve taken: we take Fedora RPMs, rebuild them with prefix=/app using the Fedora modularity framework, and then, using the same container build service we use for server-side containers, create Flatpaks as OCI images. Flatpak has been extended to know how to browse, download, and install Flatpaks packaged as OCI images from a container registry. See my talk at DevConf.CZ last year for a slightly longer introduction to what we are doing.

Right now, there are only a few applications in the registry, but we will work to build up the set of applications over the next few months, and hopefully by the time that Fedora 30 comes out in the spring, will have something that will be genuinely useful for Fedora users and that can be enabled by default in Fedora Workstation.

Special thanks to Clement Verna, Randy Barlow, Kevin Fenzi, and the rest of the Fedora infrastructure team for a lot of help in making the Fedora deployment happen, as well as to the OSBS team and Alex Larsson!

Using it

From the command line, either add the stable remote:

flatpak remote-add fedora oci+

Or add the testing remote:

flatpak remote-add fedora-testing oci+

There is no point in adding both, since all Flatpaks in the stable remote are also in the testing remote. (The plan is that eventually most users will simply use the stable remote, and there will be links in Bodhi to make it easy to install a single application from testing.)

Note that some fixes were needed to make OCI remotes work properly system-wide. These were backported to the Fedora Flatpak-1.0.6 packages, but are only in master upstream, so if you aren’t using Fedora, you should add the remote per-user instead.

Creating Flatpaks

If you are a Fedora packager and want to create a Flatpak of a graphical application you maintain, or want to help out creating Flatpaks in general, see the packager documentation. There is also a list of applications that are easy to package as Flatpaks.

Future work

One thing that should make things easier for packagers is the flatpak common module – by depending on this module, different Flatpaks can share the same binary builds of common libraries. This is particularly important for libraries that take a long time to build, or when an application needs to bundle a large set of libraries (think KDE or TeX). The current flatpak-common is a prototype, and there needs to be some thought given to the policies and tools for updating it.

Automatic rebuilds of Flatpaks are essential: when a fix is added to a library that an application bundles, the module and Flatpak should automatically be rebuilt without the maintainer of the Flatpak having to know that there was something to do – and then the maintainer should be notified, either with a link to a build failure, or, more hopefully, with a link to a Bodhi update that was automatically filed. Adding Flatpak support to Freshmaker and deploying it for Fedora is probably the right course here – though not a small task.

Another thing that I hope to address in the near term is signatures. Right now, authenticity is checked by downloading a master index that contains hashes for the latest versions of applications. But having signatures on the images would add further protection against tampering, and, depending on how they were implemented, could allow things like third-party signatures added by an organization’s IT department. There’s quite a bit of complexity here, because there are multiple competing signature frameworks to coordinate: not just Flatpak’s native signatures, but multiple different ways of signing container images.

A more long-term goal is to create a way to download updates to Flatpak container images as deltas, so that every update is not a full download. Reusing OCI images for Fedora Flatpaks has strong advantages for creating a common ecosystem between server applications and desktop applications, but on the server side, reducing bandwidth usage for server updates was usually not an important consideration, so the distribution strategy is simply to download everything from scratch. Hopefully work here could be shared between Flatpaks and the server usage of OCI images.

December 03, 2018

darktable 2.6.0rc0 released

we’re proud to announce the first release candidate for the upcoming 2.6 series of darktable, 2.6.0rc0!

the github release is here:

as always, please don’t use the autogenerated tarball provided by github, but only our tar.xz. the checksums are:

$ sha256sum darktable-2.6.0rc0.tar.xz
5317f6353a1811ffc1e4c06fb983db5cd0bcfdccd6d8f595f470a3536424658f darktable-2.6.0rc0.tar.xz
$ sha256sum darktable-2.6.0rc0.dmg
??? darktable-2.6.0rc0.dmg
$ sha256sum darktable-2.6.0rc0.exe
??? darktable-2.6.0rc0.exe

when updating from the currently stable 2.4.x series, please bear in mind that your edits will be preserved during this process, but it will not be possible to downgrade from 2.6 to 2.4.x any more.

Important note: to make sure that darktable can keep on supporting the raw file format for your camera, please read this post on how/what raw samples you can contribute to ensure that we have the full raw sample set for your camera under CC0 license!

  • Over 1600 commits to darktable+rawspeed since 2.4
  • 260+ pull requests handled
  • 250+ issues closed
  • Updated user manual is coming soon™

The Big Ones

  • new retouch module, allowing edits based on image frequency layers
  • new filmic module, which can replace the base curve and the shadows and highlights modules
  • new module to handle duplicates in the darkroom, with the ability to add a title, create a standard or virgin duplicate, delete a duplicate, and quickly compare against a duplicate
  • new logarithmic controls for the tone curve
  • new mode for the unbreak profile module
  • mask previews now let you adjust size and hardness before placing a mask
  • the cropped area can now be changed in the perspective correction module
  • the mask blur has been complemented with a guided filter for fine-tuning (works in both RGB and Lab color spaces)
  • the color balance module has two new modes based on ProPhotoRGB and HSL
  • experimental support for the PPC64le architecture (OpenCL support needs to be disabled with -DUSE_OPENCL=OFF)

New Features And Changes

  • search from the map view is now fixed
  • visual rework of the lighttable (color label, image kind, local copy)
  • an option makes it possible to display some image information directly on the thumbnails
  • add optional scrollbars on lighttable, or lighttable and darkroom
  • the opacity of each mask in the clone module can now be adjusted
  • lightroom import module supports the creator, rights, title, description and publisher information.
  • enhance TurboPrint support by displaying the dialogue with all possible options
  • new sort filter based on the image’s aspect
  • new sort filter based on the image’s shutter speed
  • new sort filter based on the image’s group
  • new sort filter based on a personalized sorting order (drag&drop on the lighttable view)
  • collection based on the local copy status
  • group image number displayed on the collection module
  • new zoom levels at 50%, 400%, 800% and 1600%
  • better support for monochrome RAW
  • add contextual help pointing to darktable’s manual
  • better copy/paste support for multiple instances
  • add support for renaming the module instances
  • add frequency based adjustment for the RAW denoise module
  • add frequency based adjustment for the denoise profile module
  • all widgets should be themable via CSS now
  • add support for configuring the module layout
  • different way to select hierarchical tags in the collection module (only the actual parent tag, all children or the parent and children)
  • better handling of grouped images by allowing stars and color labels to be set for the whole group
  • make it possible to apply a preset to a new module instance using the middle click
  • new script to migrate collection from Capture One Pro

Bug fixes

  • Fix the color pickers behavior in all modules
  • Fix liquify tools switching
  • Many more bugs got fixed



Changed Dependencies

  • CMake 3.4 is now required
  • In order to compile darktable you now need at least gcc-5.0+/clang-3.9+
  • Minimal clang version was bumped from 3.4+ to 3.9+
  • Packagers are advised to pass -DRAWSPEED_ENABLE_LTO=ON to CMake to enable partial LTO.

RawSpeed changes

  • GoPro ‘.GPR’ raws are now supported via new, fast ‘VC-5’ parallel decompressor
  • Panasonic’s new raw compression (‘.RW2’, GH5s, G9 cameras) is now supported via new fast, parallel ‘Panasonic V5’ decompressor
  • Panasonic’s old (also ‘.RW2’) raw decompressor got rewritten, re-parallelized
  • Phase One (‘.IIQ’) decompressor got parallelized
  • Nikon NEF ‘lossy after split’ raw support was recovered
  • Phase One (‘.IIQ’) Quadrant Correction is now supported
  • Olympus High-Res (uncompressed) raw support
  • Lots and lots of maintenance, sanitization, cleanups, and small rewrites/refactoring
  • NOTE: Canon ‘.CR3’ raws are NOT supported as of yet.

Camera support, compared to 2.4.0

Base Support

  • Canon EOS 1500D
  • Canon EOS 2000D
  • Canon EOS Rebel T7
  • Canon EOS 3000D
  • Canon EOS 4000D
  • Canon EOS Rebel T100
  • Canon EOS 5D Mark IV (sRaw1, sRaw2)
  • Canon EOS 5DS (sRaw1, sRaw2)
  • Canon EOS 5DS R (sRaw1, sRaw2)
  • Canon PowerShot G1 X Mark III
  • Fujifilm X-A5
  • Fujifilm X-H1 (compressed)
  • Fujifilm X-T100
  • Fujifilm X-T3 (compressed)
  • GoPro FUSION (dng)
  • GoPro HERO5 Black (dng)
  • GoPro HERO6 Black (dng)
  • GoPro HERO7 Black (dng)
  • Hasselblad CFV-50
  • Hasselblad H5D-40
  • Hasselblad H5D-50c
  • Kodak DCS Pro 14nx
  • Kodak DCS520C
  • Kodak DCS760C
  • Kodak EOS DCS 3
  • Nikon COOLPIX P1000 (12bit-uncompressed)
  • Nikon D2Xs (12bit-compressed, 12bit-uncompressed)
  • Nikon Z 6 (except uncompressed raws)
  • Nikon Z 7 (except 14-bit uncompressed raw)
  • Olympus E-PL8
  • Olympus E-PL9
  • Olympus SH-2
  • Panasonic DC-FZ80 (4:3)
  • Panasonic DC-G9 (4:3)
  • Panasonic DC-GH5S (4:3, 3:2, 16:9, 1:1)
  • Panasonic DC-GX9 (4:3)
  • Panasonic DC-TZ200 (3:2)
  • Panasonic DC-TZ202 (3:2)
  • Panasonic DMC-FZ2000 (3:2)
  • Panasonic DMC-FZ2500 (3:2)
  • Panasonic DMC-FZ35 (3:2, 16:9)
  • Panasonic DMC-FZ38 (3:2, 16:9)
  • Panasonic DMC-GX7MK2 (4:3)
  • Panasonic DMC-ZS100 (3:2)
  • Paralenz Dive Camera (chdk)
  • Pentax 645Z
  • Pentax K-1 Mark II
  • Pentax KP
  • Phase One P65+
  • Sjcam SJ6 LEGEND (chdk-b, chdk-c)
  • Sony DSC-RX0
  • Sony DSC-RX100M5A
  • Sony DSC-RX10M4
  • Sony DSC-RX1RM2
  • Sony ILCE-7M3

White Balance Presets

  • Canon EOS M100
  • Leaf Credo 40
  • Nikon D3400
  • Nikon D5600
  • Nikon D7500
  • Nikon D850
  • Olympus E-M10 Mark III
  • Olympus E-M1MarkII
  • Panasonic DC-G9
  • Panasonic DC-GX9
  • Panasonic DMC-FZ300
  • Sony DSC-RX0
  • Sony ILCE-6500
  • Sony ILCE-7M3
  • Sony ILCE-7RM3

Noise Profiles

  • Canon EOS 200D
  • Canon EOS Kiss X9
  • Canon EOS Rebel SL2
  • Canon EOS 750D
  • Canon EOS Kiss X8i
  • Canon EOS Rebel T6i
  • Canon EOS 760D
  • Canon EOS 8000D
  • Canon EOS Rebel T6s
  • Canon EOS 77D
  • Canon EOS 9000D
  • Canon EOS 800D
  • Canon EOS Kiss X9i
  • Canon EOS Rebel T7i
  • Canon EOS M100
  • Canon EOS M6
  • Canon PowerShot G1 X Mark II
  • Canon PowerShot G1 X Mark III
  • Canon PowerShot G9 X
  • Fujifilm X-T3
  • Fujifilm X100F
  • Nikon COOLPIX B700
  • Nikon D5600
  • Nikon D7500
  • Nikon D850
  • Olympus E-M10 Mark III
  • Olympus TG-5
  • Panasonic DC-G9
  • Panasonic DC-GX9
  • Panasonic DMC-FZ35
  • Panasonic DMC-FZ38
  • Panasonic DMC-GF6
  • Panasonic DMC-LX10
  • Panasonic DMC-LX15
  • Panasonic DMC-LX9
  • Panasonic DMC-TZ70
  • Panasonic DMC-TZ71
  • Panasonic DMC-ZS50
  • Pentax K-01
  • Pentax KP
  • Samsung NX1
  • Sony DSC-RX100M4
  • Sony DSC-RX10M3
  • Sony ILCE-7M3


Translations

  • Afrikaans
  • Czech
  • German
  • Finnish
  • French
  • Galician
  • Hebrew
  • Hungarian
  • Italian
  • Norwegian Bokmål
  • Nepali
  • Dutch
  • Portuguese
  • Romanian
  • Russian
  • Slovenian
  • Albanian
  • Thai
  • Chinese

“Start request repeated too quickly”

If one of your units is not running any more and you find this in your journal: 

● getmail.service - getmail
Loaded: loaded (/home/rjoost/.config/systemd/user/getmail.service; enabled; vendor preset: enabled)
Active: failed (Result: start-limit-hit) since Thu 2018-11-29 18:42:17 AEST; 3s ago
Process: 20142 ExecStart=/usr/bin/getmail --idle=INBOX (code=exited, status=0/SUCCESS)
Main PID: 20142 (code=exited, status=0/SUCCESS)

Nov 29 18:42:17 bali systemd[3109]: getmail.service: Service hold-off time over, scheduling restart.
Nov 29 18:42:17 bali systemd[3109]: getmail.service: Scheduled restart job, restart counter is at 5.
Nov 29 18:42:17 bali systemd[3109]: Stopped getmail.
Nov 29 18:42:17 bali systemd[3109]: getmail.service: Start request repeated too quickly.
Nov 29 18:42:17 bali systemd[3109]: getmail.service: Failed with result 'start-limit-hit'.
Nov 29 18:42:17 bali systemd[3109]: Failed to start getmail.

it might be because your command really does exit immediately, and you may want to run the command manually to verify whether that’s the case. Also check that you indeed have the unit configured with

Restart=always
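
For reference, here is a minimal sketch of what such a user unit might look like. The ExecStart path and --idle option mirror the journal output above; the RestartSec and StartLimit* values are illustrative assumptions, not taken from the original unit:

```ini
# ~/.config/systemd/user/getmail.service (sketch; limits are illustrative)
[Unit]
Description=getmail
# Give up with "start-limit-hit" after 5 start attempts within 10 minutes.
StartLimitIntervalSec=600
StartLimitBurst=5

[Service]
ExecStart=/usr/bin/getmail --idle=INBOX
Restart=always
# Pause between restarts so a briefly-failing command does not
# exhaust the start limit immediately.
RestartSec=30

[Install]
WantedBy=default.target
```

After editing the unit, run `systemctl --user daemon-reload` so the user manager picks up the change.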


If you’re sure it really does not restart too quickly, you can reset the counter with:

$ systemctl --user reset-failed getmail.service

Further information can be found in the systemd.unit(5) and systemd.service(5) man pages.

FreeCAD BIM development news - November 2018

Hi all, This is the November edition of our monthly report about the development of BIM tools for FreeCAD. As you know already if you read last month's report, we are (increasingly) busy preparing FreeCAD for its next 0.18 release, which is scheduled to happen before the end of this month/year. So far so good, we...

November 30, 2018

Who’s eating your corners?

Back in the early 90s, Greco’s marketing team asked the age-old question: “If you’re not eating square pizza, who’s eating your corners!?”

A variation on the same question came to mind this morning when looking at the search home page:

Screenshot of

First, the version of the Google home page you see is likely different from the one I see. I’m logged in to a Google account. I’m also in Canada. Who knows what else determines what Google shows me.

Notice the four corners of that screenshot. Year over year, they seem to be filling with a creeping set of links and features. There’s an inscrutable grid of nine squares, a circle with a bell in it, and links – more and more links.

To be fair, it’s still a relatively simple and clear page. The search is the obvious focus – something that’s not as easy to maintain as it may sound. The secondary focus seems to be showing how Google’s artificial intelligence efforts are helping people. OK.

Even with this relative austerity of interface elements, I can’t help but see these creeping links and features as a metaphor for the muddying of focus with which any company operating at the scope and scale of Google must contend.

It’s also worth noting that almost every link on that page (About, Store, Images, Privacy, Advertising, etc.) takes you to a different website with a completely different navigation and interface structure.

Oh, and at risk of being “that person”, I mostly use DuckDuckGo for search these days.

Blender 2.80-beta announcement

Nov 29, 2018

As of today, the Blender 2.8 project officially entered the Beta phase. That means all the major features are in place, and the Blender developers start focusing on bug fixes and polishing features based on user feedback.

Blender 2.80 Beta is available for download as a continuously updating build, getting more stable day by day.

We do not recommend using Beta versions in production; there are still many bugs and data loss is possible. The exact release date for Blender 2.80 final is unknown at this point, but we estimate at least 4 months before things are fully stable.

Add-on authors can now start updating their code for Blender 2.8. There may still be some smaller API changes, particularly if user feedback requires us to adjust new features. However, in general, we will try to keep breaking changes limited.

The bug tracker is open for any bug reports. We expect there will be many reports, so it may take a while for a developer to get to yours. As usual, please carefully follow the bug reporting instructions to make the process as efficient as possible.

silverorange is hiring a quality assurance analyst – is it you?

The Web agency I have been helping to build for the past nineteen years is looking to hire a Quality Assurance Analyst:

[…] Over the nineteen years of growing silverorange we’ve focused as much on quality of life, openness, empathy, and a wonderful work environment, as we have on our dedication to building great user-focused systems.

Our team benefits from those with over a decade of shared experience and has only become stronger with the eight amazing people who’ve joined in the last four years.

We’re looking for a junior-to-intermediate Quality Assurance Analyst to join our team in order to help us improve our testing and QA infrastructure.

As an applicant, if you have either a programming background and a real passion for testing, or you are an experienced QA tester looking to build your technical skills, this position is for you. We will provide mentoring, support, and learning opportunities to help you expand your skills.

Your first task will be writing end-to-end test cases for a fantastic long-term client.

Applications close January 7th, 2019, at which point we’ll be in touch with only those people we shortlist for interviews. This position is available immediately and we will work with you to get you started as soon as possible. […]

silverorange Quality Assurance Analyst job listing

The full job listing explains the position and the company well. If you know someone who might be interested, please let them know. We’re particularly interested in getting the word out to those in under-represented communities.

It is a truly great team to work with.

November 27, 2018

Ode to the people who write taglines for stories

When looking for a brief distraction, I still have web-muscle-memory that takes me back to the site. Though it has changed completely from its time as the post-Slashdot/pre-Reddit community-powered news site, it’s still kind of fun.

The best part of the site, by far, is the tiny little taglines posted above each story headline. Somewhere, there is a person or group of people working away tirelessly to tag each story with a little quip that I always enjoy.

Here are a few examples from today’s front page:

On the story, The Quest To Beat ‘Super Mario Bros. 3’ As Fast As Possible… Without Warping, the tagline reads, TANOOKI SUIT UP.

On the story, We’re Losing Our Marbles Over This 11,000-Marble-Deep Marble Run, the tagline reads, THE ONLY THING TO SPHERE IS SPHERE ITSELF.

On the story, Huge Dog And Mini Horse Are The Cutest Best Friend Pair You’ll See All Week, the tagline reads simply, GIDDY UP.

Sometimes they editorialize, as on the story Why It’s Time We All Became Climate Change Optimists, where the tagline reads, WE’RE ALL GONNA DIE ANYWAY.

Sometimes they go terribly punny, as on the story Chinese Scientist Claims He’s Created World’s First Gene-Edited Babies, where the tagline reads, GATTACA-N YOU BELIEVE IT?

There are times when you can almost see them throwing up their hands in defeat, as on the story Here’s How Long It’d Take You To Poop A Lego, where the tagline reads only OKAY.

Thanks, tagline writers. We see you.

The Krita Question and Answers Site!

With the help of the awesome KDE sysadmin team, Scott Petrovic has created the Krita Question and Answers site.

In the past couple of years, Krita has become more and more popular. With over a million downloads a year, there are now so many users that it’s become impossible for the developers to answer every question. The forum, bugzilla, reddit, twitter — there are too many places where people ask questions that have often been answered before. Nobody reads a plain old FAQ anymore, after all.

The new site is a place where it’s simple to find out if your question has been asked before, simple to ask a question, and simple to answer a question. It’s a central place where, we hope, Krita users will get together and help each other, like a Stack Overflow site.


  • If you don’t know how Krita works, are new to Krita, or new to digital painting, read the manual.
  • If you’ve got a question about using Krita, use the new Question and Answers site.
  • If you’ve got a development proposal, art you want to share, want to discuss a plugin or a script you’re working on or want to share a tutorial or a tip, use the forum, or go to reddit, whichever you like best.
  • If you have found a bug in Krita, read the guidelines on reporting a bug, and report the bug.

Keep in mind that there are many hundreds of thousands of users, and only a few developers, so help each other as much as you can!

November 25, 2018

2018 PlayRaw Calendar


Chris creates a new calendar for the community

Last year I got an amazing surprise in the mail. It was an awesome calendar of a handpicked selection of results from the year’s PlayRaw images.

Chris (@chris) put together another fantastic calendar for this year (while juggling kids, too) and it’s too nice to not have a post about it!

Play Raw Calendar 2019 Yep, that’s the back side.
Monkey Business by Dimitrios Psychogios (cba)

It was a really awesome surprise to receive my calendar last year - and I wish I would have planned a little better to be able to grab a photo of the calendar hanging in my office (it’s my work desk calendar - it never fails to remind me that there are more fun things in life than work - also that I need to up my processing game… ).

This year Chris has done it again by assembling a wonderfully curated collection of images and edits from the various Play Raws that were posted this year. I’ve plagiarized his post on the forums to put together this post and get some more publicity for his time and effort!

If you get a moment, please thank Chris for his work putting this together!

You can download the PDF: 2018 Play Raw Calendar

Here are the images he chose for the calendar and the edits he included:

month | image title                                       | photographer    | editor            | license
0     | Monkey Business                                   | jinxos          | andrayverysame    | CC BY-SA
1     | Glaciers, Birds, and Seals at Jökulsárlón/Iceland | BayerSe         | McCap             | CC BY-NC-SA
2     | Shooting Into the Sun                             | davidvj         | Adlatus           | CC BY-SA
3     | The Rail Bridge, North Queensferry                | Brian_Innes     | Jean-Marc_Digne   | CC BY-SA
4     | Sunset sea                                        | Thanatomanic    | sls141            | CC BY-NC-SA
5     | Vulcan stone sunset                               | asn             | kazah7            | CC BY-NC-SA
6     | Venise la sérénissime                             | sguyader        | Thomas_Do         | CC BY-NC-SA
7     | Dockland side view at night                       | gRuGo           | CriticalConundrum | CC BY-NC-SA
8     | Eating cicchetti with ghosts in Venezia           | sguyader        | msd               | CC BY-NC-SA
9     | maritime museum                                   | wiegemalt       | yteaot            | CC BY-SA
10    | Alfred’s Vision                                   | jinxos          | msd               | CC BY-SA
11    | Crescent Moon through silhouetted fern fronds     | martin.scharnke | gRuGo             | CC BY-NC-SA
12    | Everything frozen                                 | asn             | McCap             | CC BY-NC-SA

A preview (also shamelessly lifted from Chris’s forum post):


These Play Raws are a ton of fun, and it is one of the great aspects of having such a generous community that people share their images and allow everyone to practice and play. I am constantly humbled by the amazing work our community produces and shares with everyone.

Thank you to everyone who shared images and participated in processing (and shared how you achieved your results)! I have really learned some neat things based on others’ work and look forward to even more opportunities to play (pun intended).

Fun side note: the Play Raws are actually something that began on the old RawTherapee forums. When they moved their official forums here with us it was one of those awesome things I’m glad they brought over with them (the people were pretty great too… :)).

November 22, 2018

Giving More Thanks


For an awesome community

It is a yearly tradition for us to post something giving thanks around this holiday. I think it’s because this community has become such a large part of our lives. Also, I think it helps to remind ourselves once in a while of the good things that happen to us. So in that spirit…

Financial Supporters

We are lucky enough (for now) to not have huge costs, but they are costs nonetheless. We have been very fortunate that so many of you have stepped up to help pay those costs.

The Goliath of Givers

For the last several years, Dimitrios Psychogios has graciously covered our server expenses (and then some). On behalf of the community, thank you so much! You keep the servers up and running. Your generosity will cover infrastructure costs for the year and give us room to grow as the community does.

We also have some awesome folks who support us through monthly donations (which are nice because we can plan better if we need to). Together they cover the costs of data storage + transfer in/out of Amazon AWS S3 storage (basically the storage and transfer of all of the attachments and files in the forums). So thank you, you cool friends, you make the cogs turn:

  • Jonas Wagner
  • elGordo
  • Chris
  • Christian
  • Claes
  • Thias
  • Stephan Vidi
  • ukbanko
  • Bill Z
  • Damon Hudac
  • Luka Stojanovic (a multi-year contributor!)
  • Moises Mata
  • WoodShop Artisans
  • Barrie Minney (He’s a long time monthly contributor!)
  • Mica

It is so amazing not to have to worry about finding the capital to support our growing community; we just expand things as necessary. It is super great.

If you’d like to join them in supporting the site financially, check out the support page.


As of today, we have 3135 users, so we’ve continued to grow at a very good rate! Welcome to all the new users.

As you can see from our discuss stats, we’re approaching 500k page views per month:

PIXLS.US monthly stats

And our yearly community health is very positive:

PIXLS.US yearly stats


This year we added the gphoto project to our list of supported applications! gPhoto is an awesome library for interfacing with your camera. It is used by darktable and entangle to allow you to shoot with your camera attached to your laptop or other device. We’re thrilled that they’ve joined us on the forums!


Natron is a compositing application, mostly used for 3D/video work. The main developer was looking to give the project more of a community focus, so of course we were happy to provide them their own spot in the forum for their users to communicate and collaborate.


For another year, @darix continues to keep our stuff up and running! Do you ever notice outages? No?! Me neither, and that is due to his daily diligence. We can’t thank him enough for his dedication to our community.

patdavid or Pat David

The originator of it all, thank you for the initial push to create this community where we are not divided by which application we use. And for your continued good will towards everyone here, your welcoming spirit, and passion. We’d never have done it without you! And for all the great things to come!

All of You

The community is the sum of its parts + all the extra love that comes from all of you! Thank you so much for continuing to stick around, sharing your knowledge, and spreading the word about this great community. It keeps me motivated, creative, and challenged, and for that I am very thankful.

Krita Updated on Steam

We have finally figured out how to update Krita on Steam to the latest version. We’re really sorry for the long delays, but we plan to keep Krita up to date from now on.

That means that Krita Gemini has been replaced by the regular desktop Krita: no matter where you get Krita (Steam, the Windows Store or, you get the same experience. Getting Krita on Steam helps support Krita’s full-time development, and from now on gets you updates whenever we create a new version.

We also have some new plans, like supporting macOS and Linux through Steam.

Don’t name your software after an implementation detail

Beware of software that includes the name of its programming language in its name. This betrays a focus on implementation details over the experience of using the software. Of course, exceptions abound.

November 20, 2018

Krita Fall 2018 Sprint Results: HDR support for Krita and Qt!

In October we held a Krita developers' sprint in Deventer. One of my goals for the sprint was to start implementing High Dynamic Range (HDR) display support for Krita. Now almost a month has passed and I am finally ready to publish some preliminary results of what I started during the sprint.

The funny thing is, before the sprint I had never even seen what an HDR picture looks like! People talked about it, shops listed displays with HDR support, documentation mentioned it, but what was all this buzz about? My original understanding was "Krita passes 16-bit color to OpenGL, so we should already be ready for that". In Deventer I managed to play with Boud's display, which is one of the few certified HDR displays supporting 1000 nits of brightness, and found out that my original understanding was entirely wrong :)

Over the last couple of years the computer display industry has changed significantly. For almost 30 years people expected their monitors to look like normal paper: displays were calibrated to look like a sheet of white paper illuminated by a strictly defined source of light (D65).

Now, with the appearance of HDR technology, things have changed. Displays don't try to emulate paper; they try to resemble real sources of light! For example, a modern display can show a sun beam piercing through a window not just as "a paper photo of a beam", but as a real source of light shining directly into your eye! Basically, the displays now have LEDs that can shine as bright as the real sun, so why not use them? :) (Well, the display is only 1000 nits, and the sun 1.6 billion nits, but it's still very bright.)

If you look at the original EXR file you will see how the window "shines" from your screen, as if it were real.
By itself, the idea of having a display that can send a sun-strength beam into your eye might not be a lot of fun, but the side effects of the technology are quite neat.

In the first place, displays supporting HDR do not work in the standard sRGB color space! Instead they use Rec. 2020, a color space widely used in cinematography. It has different primary colors for the "green" and "red" channels, which means it can encode many more variations of greenish and reddish colors.

In the second place, instead of using traditional exponential gamma correction, they use the Perceptual Quantizer (PQ), which not only extends the dynamic range to sun-bright values, but also allows encoding of very dark values that are not available in usual sRGB.

Finally, all HDR displays transfer data in 10-bit mode! Even if one doesn't need real HDR features, having a 10-bit pipeline can improve both painters' and designers' workflow a lot!
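To make the transfer function concrete, here is a minimal Python sketch of the PQ (SMPTE ST 2084) encode/decode curves. The constants are the published ST 2084 values; the function names are my own and are not taken from Krita or Qt.

```python
# PQ (SMPTE ST 2084) transfer curves. Luminance is normalized so
# that 1.0 corresponds to the PQ peak of 10000 nits.
M1 = 2610 / 16384        # ~0.1593
M2 = 2523 / 4096 * 128   # ~78.8438
C1 = 3424 / 4096         # ~0.8359
C2 = 2413 / 4096 * 32    # ~18.8516
C3 = 2392 / 4096 * 32    # ~18.6875

def pq_encode(y):
    """Linear luminance (0..1, 1.0 = 10000 nits) -> PQ signal (0..1)."""
    yp = y ** M1
    return ((C1 + C2 * yp) / (1 + C3 * yp)) ** M2

def pq_decode(v):
    """PQ signal (0..1) -> linear luminance (0..1, 1.0 = 10000 nits)."""
    vp = v ** (1 / M2)
    return (max(vp - C1, 0.0) / (C2 - C3 * vp)) ** (1 / M1)

# A 100-nit "paper white" lands near the middle of the signal range,
# leaving the upper code values for much brighter highlights.
print(round(pq_encode(100 / 10000), 3))
```

Note how roughly half of the code values are reserved for luminances above a typical paper white; that headroom is exactly what lets an HDR display encode both very dark shadows and sun-bright highlights in 10 bits.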

Technical details

From the developer's point of view, the current state of HDR technology is a bit of a mess. It's really early days. The only platform where HDR is supported at the moment is Windows 10 (via DirectX).

Neither Linux nor OSX can handle the hardware in HDR mode currently. Therefore all the text below will be related to Windows-only case.

When the user switches the display into HDR mode, the OS automatically starts to communicate with it in p2020-pq mode. Obviously, all the colors that normal applications render in sRGB will be automatically converted. That is, if an application wants to render an image directly in p2020-pq, it should create a special "framebuffer" object (swap chain), set its colorspace to p2020-pq and ensure that all the internal textures have correct color space and bit depth.

In general, to ensure that the window is rendered in HDR mode, one should do the following:

  1. Create a DXGI swap chain with 10-bit or 16-bit pixel format
  2. Set the color space of that swap chain to either p2020-pq (for 10-bit mode) or scRGB (for 16-bit mode).
  3. Make sure all the intermediate textures/surfaces are rendered in 10/16-bit mode (to avoid loss of precision)
  4. Since the GUI is usually rendered on the same swap chain, one should also ensure that the GUI is converted from sRGB into the destination color space (either p2020-pq or scRGB)
In Krita we use Qt to render everything, including our OpenGL canvas widget. I had to go deep into Qt's code to find out that Qt unconditionally uses 8-bit color space for rendering windows. Therefore, even though Krita passes 16-bit textures to the system, the data is still converted into 8-bits somewhere in Qt/OpenGL. So I had to hack Qt significantly...

Making Qt and Angle support HDR

Here comes the most interesting part. We use Qt to access the system's OpenGL implementation. Traditionally, Qt would forward all our requests to the OpenGL implementation of the GPU driver, but... The problem is that quite a lot of OpenGL drivers on Windows are of "suboptimal quality". The quality of the drivers is so "questionable", that people from the Google Chromium project even wrote a special library that converts OpenGL calls into DirectX API calls and use it instead of calling OpenGL directly. The library is called Angle. And, yes, Qt also uses Angle.

Below is a sketch showing the relation between all the libraries and APIs. As you can see, Angle provides two interfaces: EGL for creating windows, initializing surfaces, and configuring displays, and traditional OpenGL for the rendering itself.

To allow switching of the surface's colorspace I had to hack Angle's EGL interface and, basically, implement three extensions for it:

After that I had to patch QSurfaceFormat to support all these color spaces (previously, it supported sRGB only). So now, if you configure the default format before creating a QGuiApplication object, the frame buffer object (swap chain) will have it! :)

// set main frame buffer color space to scRGB
QSurfaceFormat fmt;
// ... skipped ...
QSurfaceFormat::setDefaultFormat(fmt);

// create the app (also initializes OpenGL implementation
// if compiled dynamically)
QApplication app(argc, argv);
return app.exec();

I have implemented a preliminary demo app that uses a patched Qt and shows an EXR image in HDR mode. Please check out the source code here:

Demo application itself:

Patched version of QtBase (based on Qt 5.11.2):

The application and Qt's patch are still work-in-progress, but I would really love to hear some feedback and ideas from KDE and Qt community about it. I would really love to push this code upstream to Qt/Angle when it is finished! :)

List of things to be done

  1. Color-convert Qt's internal GUI textures. Qt renders the GUI (like buttons, windows and text boxes) in CPU memory (in the sRGB color space), then uploads it into an OpenGL texture and renders that into a frame buffer. Obviously, when the frame buffer is tagged with a non-sRGB color space, the result is incorrect: the GUI becomes much brighter or darker than expected. I need to somehow mark all Qt's internal textures (surfaces?) with a color space tag, but I don't yet know how... Does anybody know how?
  2. Qt should also check if the extensions are actually supported by the implementation, e.g. when Qt uses a driver-provided implementation of OpenGL. Right now it tries to use the extensions without checking :)
  3. The most difficult problem: OpenGL does not provide any means of finding out whether a combination of frame buffer format and color space is actually supported by the hardware! The conventional way to check is to create an OpenGL frame buffer and set the desired color space; if the calls do not fail with an error, the mode is supported. But that doesn't fit the way Qt selects formats! Qt expects one to call the static QSurfaceFormat::setDefaultFormat() once and then proceed to creating the application. If the creation fails, there is no way to recover! Does anybody have an idea how that could be done in the least hackish way?

I would really love to hear your questions and feedback on this topic! :)

November 15, 2018

Create lens calibration data for lensfun


Adding support for your lens


All photographic lenses exhibit several types of errors. Three of them can be corrected in software almost losslessly: distortion, transverse chromatic aberration (TCA), and vignetting. The Lensfun library provides code to do these corrections. Lensfun is not used by the photographer directly; instead, it is used by raw photo development software such as darktable or RawTherapee. For example, if you import a RAW file into darktable, darktable detects the lens model, focal length, aperture and focal distance used for the picture, and then calls Lensfun to automatically correct the photograph.

Photo with lens distortion Figure 1: 16mm lens showing distortion (click on the image to show the distortion corrected image)

Lensfun uses a database to know all the parameters needed to do the lens corrections. This database is filled by photographers like you, who took time to calibrate their lenses and to submit their findings back to the Lensfun project. If you’re lucky, your lens models are already included. If not, please use this tutorial to do the calibration and contribute your results.

Let us assume your lens isn’t covered by Lensfun yet, or the corrections are either not good enough or incomplete. The following sections explain how to take pictures for calibration and how to create an entry of your own. Ideally you provide information for all three errors, but if you only need distortion, that is fine too.

Checking if your lens is already supported

Before you start to calibrate new lenses or report missing cameras, please check the lens database first! The list is updated daily. If your lens is already supported, then everything is fine and you just have to update your database.

If the lens is not supported, or doesn’t provide all corrections, you can add the missing data by following this tutorial.

Taking pictures

Before we start you need to take a lot of images for the three errors we are able to correct. This section will explain how to take them and what you need to pay attention to.

For all pictures you should use a tripod, turn off all image correction, and disable image stabilization both in the camera and in the lens itself! Also make sure that all High Dynamic Range (HDR) or Dynamic Range Optimizer (DRO) features are turned off. All those options could mess up your calibration.

Distortion
For distortion you need to take pictures of a building with several parallel straight lines. You need at least two lines: one should be at the top of the image (nearly touching the top of the frame) and the other at about a third down from the first line. The following example demonstrates this.

Photo with lens distortion Figure 2: Parking house with straight lines

The lines must be perfectly straight and aligned. You can twist and rotate the camera, but the lines must have no imperfections. A common mistake is using tiles or bricks: to your eye they may look “straight”, but they will cause calibration defects. The best buildings turn out to be parking garages (UK: multi-storey car parks) or modern glass buildings like fruit-technology stores.

For a fixed focal length lens, you will only require one image. For a zoom lens it is recommended to take 5 to N pictures, where N is the maximum focal length minus the minimum focal length. You must take an image at the minimum focal length and at the maximum focal length. You can move (step backward or forward) between shots to keep the 1/3rd rule above consistent.

You should shoot at your lens’s sharpest aperture; this is often f/8 to f/11. Set up your camera on a tripod. Shoot at the lowest ISO (without extended values); this will be 100 or 200. Disable any in-body lens corrections. Every vendor has a different name for this feature (Fuji calls it lens modulation optimizer, for example). Check your camera manual and menus.

Chromatic aberrations (TCA)

For TCA images, look for a large object with sharp high-contrast edges throughout the image. Preferably the edges should be black–white, but anything close to that is sufficient. Make sure that you have hard edges from the center all the way to one of the edges of the frame. The best buildings for this have dark windows with white or gray frames.

Here are some example pictures:

Photo with grey framed windows Figure 4: Building with gray framed windows

You should take your pictures from at least 8 meters away. For zoom lenses, take pictures at the same focal lengths as for distortion (5 to N). Make sure to capture really sharp photos using at least f/8. Best use aperture priority, f/8 and ISO 100 on a tripod to avoid any color noise.

You can use e.g. a streetview service to find the right building in your town (big buildings, dark windows with white or grey frames).

Vignetting
To create pictures for vignetting you need a diffuser in front of the lens. This may be translucent milk glass, or white plastic foil on glass. Whatever you use, it must be opaque enough that nothing can be seen through it, yet transparent enough that light can pass through it. It must not be thicker than 3 mm and shouldn’t have a noticeable texture. It must be perfectly flush with the lens front, it mustn’t be bent, and it must be illuminated homogeneously.

I ordered a piece of acrylic glass: opal white (milky), smoothly polished, 78% translucency, 3 mm thick, 20 x 20 cm, which costs about 8 Euro on Amazon.

However, white plastic foil taped to a piece of ordinary glass (for stability) might be enough, as long as the plastic doesn’t have any texture.

I normally wait for a cloudy day with no sun, so the sky is homogeneously lit. Put the camera on a tripod and point it at the sky. Put the glass directly on the lens (remove any filters). In some places, where the sunlight is different, you may need to shoot indoors. You should experiment to make sure your images are evenly lit (except for the vignetting, obviously).

Photo showing a camera with milky glass Figure 6: Camera setup to take pictures for vignetting correction
Photo showing lens vignetting Figure 7: Image showing vignetting of a wide angle lens at 16mm

Make sure that no corrections are applied by the camera (some models do this even for RAWs). Set the camera to aperture priority and the lowest real ISO (this is normally 100 or 200, don’t use extended ISO values).

Switch to manual focus and focus to infinity. This is the most critical step!

For zoom lenses, you need to take pictures at five different focal lengths. Five are enough because the values in between are interpolated. For a prime lens you only need pictures at its single focal length.

Take the pictures as RAW at the fastest aperture (e.g. f/2.8), at three more stopped-down apertures in 1 EV steps, and also at the smallest aperture (e.g. f/22.0). These are often marked on your lens’s aperture ring, or on your electronic display.

If you have, for example, a 16-35mm lens with apertures f/2.8 - f/22, you need to take pictures at 16mm, 20mm, 24mm, 28mm and 35mm focal length (remember, you require the minimum and maximum zoom values). For each of those focal lengths you need to take five pictures at f/2.8, f/4.0, f/5.6, f/8.0 and f/22.0. This makes 25 pictures in total.

For a 50mm prime lens with f/1.4 - f/16 you need to take 5 pictures at f/1.4, f/2.0, f/2.8, f/4.0, and f/16.0.
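The aperture series above follows mechanically from the fact that full stops are spaced a factor of √2 apart (1 EV each). Here is a small Python sketch of that rule; the function name is my own and is not part of any lensfun tool.

```python
# Standard full f-stops, each 1 EV (a factor of sqrt(2)) apart.
FULL_STOPS = [1.0, 1.4, 2.0, 2.8, 4.0, 5.6, 8.0, 11.0, 16.0, 22.0, 32.0]

def vignetting_apertures(fastest, slowest):
    """Fastest aperture, three more in 1 EV steps, plus the slowest."""
    # Snap the fastest aperture to the nearest standard full stop.
    i = min(range(len(FULL_STOPS)), key=lambda k: abs(FULL_STOPS[k] - fastest))
    series = FULL_STOPS[i:i + 4]          # fastest + three stops down
    if slowest not in series:
        series.append(slowest)            # plus the smallest aperture
    return series

print(vignetting_apertures(2.8, 22.0))  # [2.8, 4.0, 5.6, 8.0, 22.0]
print(vignetting_apertures(1.4, 16.0))  # [1.4, 2.0, 2.8, 4.0, 16.0]
```

The two printed lists reproduce the 16-35mm f/2.8 and 50mm f/1.4 shot plans from the text.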

Vignetting correction for the professionals

The following steps are for really fine-grained vignetting corrections. The gain in accuracy is very small! It probably only makes sense for prime lenses used for portrait or macro photography. However, this is not required; the above is absolutely enough.

Lensfun is able to correct vignetting depending on focal distance. Thus, you can achieve a bit more accuracy by shooting at different focal distances. This means you will have to take pictures at 4 different focal distances.

First, focus on the near point (the near point is the closest distance that can be brought into focus). The next focal distances are the near point multiplied by 2 and by 6, and finally focus at infinity.

Example: for an 85mm prime lens with the near point at 0.8 m, you have to take pictures at 0.8 m, 1.6 m, 4.8 m and infinity.
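The same rule in code form (the factors 1, 2 and 6 come straight from the text; the function name is my own):

```python
def vignetting_distances(near_point_m):
    """Focal distances for the fine-grained vignetting shots:
    the near point, twice and six times the near point, and infinity."""
    return [near_point_m, 2 * near_point_m, 6 * near_point_m, float("inf")]

# The 85mm example with a 0.8 m near point.
print(vignetting_distances(0.8))
```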

Create calibration data

There are two ways to perform the calibration.

Lensfun allows you to upload your data to the project, and they’ll do the processing work for you. They’ll also review your images to make sure they are correctly taken.

Or you can do it yourself with the lens calibration script from the lensfun project.

The script needs the following dependencies to be installed on your system:

You can download the lens calibration script HERE or get it as a package for the major distributions HERE.

Once you have downloaded the tool create a folder for your lens calibration data, change to the directory and run:

$ init
The following directory structure has been created in the local directory

1. distortion - Put RAW files created for distortion in here
2. tca        - Put chromatic aberration RAW files in here
3. vignetting - Put RAW files to calculate vignetting in here

Follow the instructions and copy your raw files into the corresponding directories.

Vignetting correction for the professionals

For each focal distance at which you captured pictures, you have to create a folder.

Let’s pick up the example from above. For an 85mm prime lens we took pictures at 0.8 m, 1.6 m, 4.8 m and infinity. For this lens you would have to create the following folder structure in the vignetting directory:


The folder inf is for the focal distance at infinity.


Once you have copied the files into place, it is time to generate the pictures (tif files) for the distortion calculations. You can do this with the ‘distortion’ option:

$ distortion
Running distortion corrections ...
Converting distortion/_7M32376.ARW to distortion/exported/_7M32376.tif ... DONE
A template has been created for distortion corrections as lenses.conf.

Once the tif files have been created, you can start Hugin.

Torsten Bronger created a screencast giving an overview of the distortion process in Hugin. He uses an old Hugin version in the video; the following section of this tutorial explains how to do it with Hugin 2018. If you want, you can watch the screencast first here (Vimeo).

When you start Hugin for the first time, the window you get should look like Figure 8.

Hugin start screen Figure 8: Hugin start screen

First, select Interface -> Expert on the menu bar to switch to Expert mode. You will get a window which should look like Figure 9.

Hugin expert mode Figure 9: Hugin expert mode

Once in Expert mode, click on Add images (Figure 10) and load the first tiff from the distortion/exported folder.

Hugin add image Figure 10: Adding images and setting the focal length and crop factor

By default the lens type should be set to Normal (rectilinear), which is right for standard lenses. Make sure that the focal length is correct and set the Focal length multiplier, which is the crop factor of your camera. For full frame bodies this value should be 1. If you have a crop camera, you need to set the correct crop value, which you can find in the camera’s specifications. Next click on the Control Points tab (Figure 11).

Hugin control points tab Figure 11: The control points tab

This is the tab where you set the control points, so that we can tell the software which straight lines we are interested in. In this tab make sure that auto fine-tune is disabled, auto add is enabled and auto-estimate is disabled! Once that is the case, zoom the image to 200% (you can also do this by pressing ‘2’ on the keyboard).

In the zoomed images you have to start at the top edges. On the left page go to the top left corner, and on the right page to the top right corner. The first straight line, running from left to right, should be visible. Select the first control point on the left edge of the picture on the left page and on the right edge on the right page (Figure 12).

Adding control points Figure 12: Setting the first two control points for the line to add

IMPORTANT: Once you have the first control point selected in both images, select Add new Line in the mode dropdown menu! This will add the two control points as line 3. Now continue adding corresponding control points in both pictures till you’re in the middle on both sides.

Tip: The easiest and fastest way is to set control points in the middle at the tiling line. This reduces the required mouse movements.

Now zoom out by pressing ‘0’ and check if everything has been added correctly (Figure 13).

Control points for line 3 Figure 13: Control points for line 3

While you are zoomed out, find a line which is about a third of the way into the image from the top, and repeat adding a line. Zoom to 200% again, select the first control points and again use Add new Line, which will result in line 4 (Figure 14)!

Control points for line 3 and 4 Figure 14: Control points for line 3 and line 4

Zoom out by pressing ‘0’ and check that you have two lines, line 3 and line 4. Now move on to the Stitcher tab (Figure 15).

Selecting the projection Figure 15: The stitcher tab, select the correct projection here.

In the Stitcher tab you need to select the correct Projection for your lens. This is Rectilinear for standard lenses. Once done switch to the Photos tab (Figure 16).

Optimizer tab Figure 16: Enable the Optimizer tab.

At the bottom under Optimize select Custom parameters for Geometric. This will add an Optimizer tab. Switch to it once it appears (Figure 17).

Optimizer: Select a b c Figure 17: Optimizer tab: Select a b c for barrel distortion correction

Select the ‘a’, ‘b’ and ‘c’ lens parameters and click on Optimize now!. Accept the calculation with yes. Now the values for ‘a’, ‘b’ and ‘c’ will change (Figure 18).

Optimizer: Calculated a b c Figure 18: Calculated distortion correction ‘a’, ‘b’ and ‘c’.

The calculated correction values for ‘a’, ‘b’ and ‘c’ shown in this tab need to be added to lenses.conf. Open the file and fill out the missing options. Here is an example:


[FE 85mm F1.4 GM]
maker = Sony
mount = Sony E
cropfactor = 1.0
aspect_ratio = 3:2
type = normal
  • maker should be the lens manufacturer, e.g. Sony
  • mount is the mount system for the lens; check the lensfun database
  • cropfactor is 1.0 for full frame cameras; if you have a crop camera, find out the correct crop factor for it
  • aspect_ratio is the aspect ratio of the pictures, which is normally 3:2
  • type is the type of the lens, e.g. ‘normal’ for standard rectilinear lenses. Other values are: stereographic, equisolid, panoramic or fisheye.

If you have e.g. an 85mm lens, there should be an entry for the focal length which is set to: 0.0, 0.0, 0.0. You need to replace these values in the lenses.conf for your focal length with the calculated corrections from the Optimizer tab (Figure 18).

[FE 85mm F1.4 GM]
maker = Sony
mount = Sony E
cropfactor = 1.0
aspect_ratio = 3:2
type = normal
distortion(85mm) = 0.002, 0.001, -0.009

But I don’t want to do distortion corrections!

No problem. If you want to skip this step, you can create the lenses.conf manually. It should look like the following example:

[lens model]
maker =
mount =
cropfactor = 1.0
aspect_ratio = 3:2
type = normal

The section name is the lens model. You can find it out by running:

exiv2 -g LensModel -pt <raw image file>

The other options are:

  • maker should be the lens manufacturer, e.g. Sony
  • mount is the mount system for the lens; check the lensfun database
  • cropfactor is 1.0 for full frame cameras; if you have a crop camera, find out the correct crop factor for it
  • aspect_ratio is the aspect ratio of the pictures, which is normally 3:2
  • type is the type of the lens, e.g. ‘normal’ for standard rectilinear lenses. Other values are: stereographic, equisolid, panoramic or fisheye.


You can skip this step if you don’t want to do TCA corrections.

This step is fully automatic, all you have to do is to run the following command and wait:

$ lens_calibrate tca
Running TCA corrections for tca/exported/_7M32375.ppm ... DONE

However, it is possible to calculate more complex TCA corrections. For this you need to run the step with an additional command line argument, like this:

$ lens_calibrate --complex-tca tca
Running TCA corrections for tca/exported/_7M32375.ppm ... DONE


You can skip this step if you don’t want to do vignetting corrections.

Calculating the vignetting corrections is also a very simple step. All you have to do is run the following command and wait:

$ lens_calibrate vignetting

Generating the XML

To get corrections for lensfun you need a lenses.conf with the required options filled out (maker, mount, cropfactor, aspect_ratio, type), and at least one of the correction steps done. Once you have this, you can generate the XML file which can be consumed by lensfun with the following command:

$ lens_calibrate generate_xml
Generating lensfun.xml

You can redo this step as many times as you want, and you can simply rerun it if you add an additional correction.

Using the lensfun.xml

If you want to test whether the calibration you created works, you can copy the generated lensfun.xml file to the local lensfun config folder in your home directory:

cp lensfun.xml ~/.local/share/lensfun

Make sure your camera is recognized by lensfun; otherwise you need to add an entry for it to the lensfun.xml file too.
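On a fresh system the target folder may not exist yet, in which case the copy will fail. A small sketch that creates the folder first and only copies the file if it is present:

```shell
# Create the per-user lensfun data folder if it does not exist yet,
# then copy the generated calibration file into it.
mkdir -p ~/.local/share/lensfun
if [ -f lensfun.xml ]; then
    cp lensfun.xml ~/.local/share/lensfun/
fi
```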

Contributing your lensfun.xml

To add lens data to the project please open a bug at:

For the subject use:

Calibration data for <lens model>

And for the description just use:

Please add the attached lens data to the lensfun data base.

Attach lensfun.xml to the bugreport.


Feedback for this article is very welcome. If you’re a lensfun developer and read this please contact me. I would like to contribute the script to lensfun and further improve the article. I still have unanswered questions.

November 14, 2018

Shop update! Digital Atelier and a new USB-Card

We promised Digital Atelier would be available in the Krita Shop after the successful finish of the Fundraiser. A bit later than expected, we’ve updated the shop with the Digital Atelier brush preset bundle and tutorial download:

There are fifty great brush presets, more than thirty brush tips, twenty paper textures and almost two hours of in-depth video tutorial, walking you through the process of creating new brush presets.

Digital Atelier sells for 39,95 euros, ex VAT.

And we’ve also created a new USB-card, with the newest stable version of Krita for all OSes. Includes Comics with Krita, Muses, Secrets of Krita and Animate with Krita tutorial packs.

It’s in the credit-card format again, which makes it easy to carry Krita with you wherever you go! Besides, this format lets us use the new Krita 4 splash screen by Tyson Tan.

The usb-card is available in two versions: as-is, for €29,95 and (manually) updated to the latest version of Krita for €34,95.

November 13, 2018

A brief(ish) history of mouse cursors

Screenshot from Michiel De Boer's history of mouse cursors video.

This video tour of the history of mouse cursors by Michiel de Boer is oddly delightful. Apple or Microsoft should give Michiel a bucket of money to make their cursors better.

Adding an optional install duration to LVFS firmware

We’ve just added an optional feature to fwupd and the LVFS that some people might find useful: The firmware update process can now tell the user how long in seconds the update is going to take.

This means that users can know that a dock update might take 5 minutes, and so they can start the update process before they go to lunch. A UEFI update will require multiple reboots and take 45 minutes to apply, and so the user will only apply the update at the end of the day rather than losing access to their computer for nearly an hour.

If you want to use this feature there are currently three ways to assign the duration to the update:

  • Changing the value on the LVFS admin console — the component update panel now has an extra input field to enter the
    duration in
  • Adding an install_duration attribute to the <release> element, for instance:
    <release version="3.0.2" date="2018-11-09" install_duration="120">
  • Adding a ‘quirk’ to fwupd, for instance:
    InstallDuration = 40
For updates requiring a reboot the install duration should include the time to POST the system both before and after the update has run, but it can be approximate. Only users running very new versions of fwupd and gnome-software will be shown the install duration, and older versions will be unchanged as the new property will just be ignored. It’s therefore safe to include in all versions of firmware without adding a dependency on a specific fwupd version.

CMYKTool 0.1.5 Released

Just in time for the New Year, CMYKTool 0.1.5 is released.  Highlights since the 0.1.4 release include:

  • Better support for greyscale images, and an embedded greyscale profile will no longer cause an error.
  • Optional dithering of images when saving in 8-bit formats.  This can help if you have an image with very smooth gradients that end up visibly contoured once the image is saved.  (But note that it won't help get rid of contouring in the original image!)
  • DeviceLink profiles can now be exported for use in other programs.
  • The Win32 build now bundles a couple of CMYK profiles from the openicc-data package, which can be found here.

Download: Source code - .tar.gz

Download: Ubuntu PPA - ppa:amr/blackfiveimaging

Download: Windows Binary - .zip

Please note: the Windows build requires an existing GTK+ installation - the easiest way to get this is to install either Inkscape or GIMP version 2.6 or earlier.


November 12, 2018

More fun with libxmlb

A few days ago I cut the 0.1.4 release of libxmlb, which is significant because it includes the last three features I needed in gnome-software to achieve the same search results as appstream-glib.

The first is something most users of database libraries will be familiar with: Bound variables. The idea is you prepare a query which is parsed into opcodes, and then at a later time you assign one of the ? opcode values to an actual integer or string. This is much faster as you do not have to re-parse the predicate, and also means you avoid failing in incomprehensible ways if the user searches for nonsense like ]@attr. Borrowing from SQL, the syntax should be familiar:

g_autoptr(XbQuery) query = xb_query_new (silo, "components/component/id[text()=?]/..", &error);
xb_query_bind_str (query, 0, "gimp.desktop", &error);

The second feature makes the caller jump through some hoops, but hoops that make things faster: Indexed queries. As it might be apparent to some, libxmlb stores all the text in a big deduplicated string table after the tree structure is defined. That means if you do <component component="component">component</component> then we only store just one string! When we actually set up an object to check a specific node for a predicate (for instance, text()='fubar') we actually do strcmp("fubar", "component") internally, which in most cases is very fast…

Unless you do it 10 million times…

Using indexed strings tells the XbMachine processing the predicate to first check if fubar exists in the string table, and if it doesn’t, the predicate can’t possibly match and is skipped. If it does exist, we know the integer position in the string table, and so when we compare the strings we can just check two uint32_t’s which is quite a lot faster, especially on ARM for some reason. In the case of fwupd, it is searching for a specific GUID when returning hardware results. Using an indexed query takes the per-device query time from 3.17ms to about 0.33ms – which if you have a large number of connected updatable devices makes a big difference to the user experience. As using the indexed queries can have a negative impact and requires extra code it is probably only useful in a handful of cases. In case you do need this feature, this is the code you would use:

xb_silo_query_build_index (silo, "component/id", NULL, &error); // the cdata
xb_silo_query_build_index (silo, "component", "type", &error); // the @type attr
g_autoptr(XbNode) n = xb_silo_query_first (silo, "component/id[text()=$'test.firmware']", &error);

The indexing is denoted by $'' rather than the normal pair of single quotes. If there is something more standard to denote this kind of thing, please let me know and I’ll switch to that instead.

The third feature is stemming, which means you can search for “gaming mouse” and still get results that mention games, game and Gaming. This is also how you can search for words like Kongreßstraße, which matches kongressstrasse. In an ideal world stemming would be computationally free, but if we are comparing millions of records each call to libstemmer sure adds up. Adding the stem() XPath operator took a few minutes, but making it usable took up a whole weekend.
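As a toy illustration of the idea (this is not the Snowball/libstemmer algorithm that is actually used, just a naive suffix stripper), stemming maps inflected forms to a shared stem before comparing:

```python
# Toy suffix-stripping stemmer, for illustration only.
def toy_stem(word):
    word = word.lower()
    for suffix in ("ing", "es", "s", "e"):
        # Only strip if a reasonable stem remains.
        if word.endswith(suffix) and len(word) > len(suffix) + 2:
            return word[:-len(suffix)]
    return word

# "Gaming", "games" and "game" all reduce to the same stem.
print(toy_stem("Gaming"), toy_stem("games"), toy_stem("game"))  # gam gam gam
```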

The query we wanted to run would be of the form id[text()~=stem('?')] but the stem() would be called millions of times on the very same string for each comparison. To fix this, and to make other XPath operators faster I implemented an opcode rewriting optimisation pass to the XbMachine parser. This means if you call lower-case(text())==lower-case('GIMP.DESKTOP') we only call the UTF-8 strlower function N+1 times, rather than 2N times. For lower-case() the performance increase is slight, but for stem it actually makes the feature usable in gnome-software. The opcode rewriting optimisation pass is kinda dumb in how it works (“lets try all combinations!”), but works with all of the registered methods, and makes all existing queries faster for almost free.
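The core idea of the optimisation pass can be sketched like this (a simplified Python model, not libxmlb's actual C code): when a function opcode is applied to a literal argument, fold it into a new literal at parse time, so the work is done once rather than once per compared record:

```python
# Simplified model of constant-folding in an opcode stream.
# Each opcode is a (kind, value) tuple; stack-machine order: argument first.
def fold_constants(opcodes):
    out = []
    for kind, value in opcodes:
        # A function applied to a literal can be evaluated right now.
        if kind == "func" and value == "lower-case" and out and out[-1][0] == "literal":
            _, lit = out.pop()
            out.append(("literal", lit.lower()))
        else:
            out.append((kind, value))
    return out

ops = [("literal", "GIMP.DESKTOP"), ("func", "lower-case")]
print(fold_constants(ops))  # [('literal', 'gimp.desktop')]
```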

One common question I’ve had is if libxmlb is supposed to obsolete appstream-glib, and the answer is “it depends”. If you’re creating or building AppStream metadata, or performing any AppStream-specific validation then stick to the appstream-glib or appstream-builder libraries. If you just want to read AppStream metadata you can use either, but if you can stomach a binary blob of rewritten metadata stored somewhere, libxmlb is going to be a couple of orders of magnitude faster and use a ton less memory.

If you’re thinking of using libxmlb in your project send me an email and I’m happy to add more documentation where required. At the moment libxmlb does everything I need for fwupd and gnome-software and so apart from bugfixes I think it’s basically “done”, which should make my manager somewhat happier. Comments welcome.

Second Edition of “Dessin et peinture numérique avec Krita” published!

Last month French publisher D-Booker released the 2nd edition of Timothée Giet’s book “Dessin et peinture numérique avec Krita”.

The first edition was written for Krita 2.9.11, almost three years ago. A lot of things have changed since then! So Timothée has completely updated this new edition for Krita version 4.1. There are also a number of notes about the new features in Krita 4.

Moreover, D-Booker again worked on updating and improving the French translation of Krita! Thanks again to D-Booker for their contribution.

You can order this book directly from the publisher’s website. There is both a digital edition (pdf or epub) as well as a paper edition.

Interview with HoldXtoRevive

Could you tell us something about yourself?

I’m Brigette, but I mainly go by my online handle of HoldXtoRevive. I’m from the UK and mostly known as a fanartist.

Do you paint professionally, as a hobby artist, or both?

I have had a few commissions but outside that I would call myself a hobbyist. I would love to work professionally at some point.

What genre(s) do you work in?

I do semi-realistic sci-fi art. Most recently I have been drawing character portraits inspired by the Art Nouveau style; the majority of it has been fanart of a few different sci-fi games.

Whose work inspires you most — who are your role models as an artist?

It’s hard to list them all really. Top of the list would be my other half, RedSkittlez, who is an amazing concept and character artist, also my friends Blazbaros, SilverBones and many more that would cause this to go on for too long.

Outside of my friends I would say Charles Walton, Pete Mohrbacher and Valentina Remenar to name a few.

How and when did you get to try digital painting for the first time?

About 4 years ago I downloaded GIMP as I wanted to get back into art after not drawing for about 15 years. I got a simple drawing tablet soon after and things just progressed from there.

What makes you choose digital over traditional painting?

The flexibility and practicality of it. Whilst I would love to try traditional media, acquiring, maintaining and storing supplies is not easy for me.

How did you find out about Krita?

My partner was looking at alternatives to Photoshop and came across it via a YouTube video. He recommended that I try it out.

What was your first impression?

How clean the UI is and how easy all of the tools were to find, and the fun I had messing with the brushes.

What do you love about Krita?

The fact it was really easy to get to grips with, yet I can tell there is more I can get from it. Also the autosave.

What do you think needs improvement in Krita? Is there anything that really annoys you?

I would like a brightness/contrast slider alongside the curve for ease of use. It would also be nice if the adjustment windows would not close when the autosave kicks in.

What sets Krita apart from the other tools that you use?

I have not used many programs before I came across Krita. But the thing that jumped out at me was the ease of use, and it had everything I wanted in an art program; I know that if I want to try animation I do not need to go and find another program.

If you had to pick one favourite of all your work done in Krita so far, what would it be, and why?

That is hard to say, but in a pinch I would say the one titled “Saladin’s White Wolf”. I was really happy with how the background came out; it was also the one picked out and promoted by Bungie on their Twitter.

What techniques and brushes did you use in it?

For the most part I use a multiply layer over flats for shading. My main brushes are just the basic tip (gaussian), basic wet soft and the soft smudge brush.

Where can people see more of your work?

Over on my DeviantArt page:
And my twitter:

Anything else you’d like to share?

I’m bad with words but I want to show appreciation to the Krita crew for making this wonderful program and to everyone who has supported and encouraged me.

November 08, 2018

French Krita book – 2nd edition

(Post in french, english version below)

Le mois dernier est sorti la seconde édition de mon livre “Dessin et peinture numérique avec Krita”. Je viens juste de recevoir quelques copies, le moment est donc parfait pour en parler rapidement sur mon blog.

J’avais écrit la première édition pour la version 2.9.11 de Krita, il y a bientôt trois ans. Beaucoup de choses ont changées depuis, j’ai donc mis à jour cette seconde édition pour la version 4.1.1, et ajouté quelques notes supplémentaires concernant de nouvelles fonctions.

Aussi, mon éditeur a de nouveau travaillé sur la mise à jour et l’amélioration de la traduction Française de Krita. Merci encore aux éditions D-Booker pour leur contribution 🙂

Vous pouvez commander ce livre directement sur le site de l’éditeur, au format papier ou numérique.



Last month the 2nd edition of my book “Dessin et peinture numérique avec Krita” was released. I just received a few copies, so now is the time to write a little about it.

I wrote the first edition for Krita 2.9.11, almost three years ago. A lot of things have changed, so I updated this second edition for Krita version 4.1.1, and added a few notes about some new features.

Also, my publisher worked again on updating and improving the French translation of Krita. Thanks again to D-Booker edition for their contribution 🙂

You can order this book directly from the publisher website, printed or digital edition.

November 07, 2018

GIMP 2.10.8 Released

Though the updated GIMP release policy allows cool new features in micro releases, we also take pride in the stability of our software (so that you can edit images feeling that your work is safe).

In this spirit, GIMP 2.10.8 is mostly the result of dozens of bug fixes and optimizations.

Stable GIMP, Wilber and Co. Wilber and Co. strip, by Aryeom and Jehan, 2013

Notable improvements

In particular, the chunk size of image projections is now determined dynamically depending on processing speed, allowing better responsiveness of GIMP on less powerful machines, whereas processing will be faster on more powerful ones.

Moreover various tools have been added to generate performance logs, which will allow us to optimize GIMP even more in the future. As with most recent optimizations of GIMP, these are the results of Ell’s awesomeness. Thanks Ell!

In the meantime, various bugs have been fixed in wavelet-decompose, the new vertical text feature (including text along path), selection tools, and more. On Windows, we also improved RawTherapee detection (for RawTherapee 5.5 and over), working in sync with the developers of this very nice RAW processing software. And many, many more fixes, here and there…

The Save dialog also got a bit of retouching: it now shows more prominently the features preventing backward compatibility (in case you wish to send images to someone using an older version of GIMP). Of course, we want to stress that we absolutely recommend always using the latest version of GIMP. But life is what it is, so we know that sometimes you have no choice. Now it will be easier to make your XCF backward compatible (which means, of course, that some new features must not be used).

Compatibility issues in Save dialog Save dialog shows compatibility issues when applicable

Thanks to Ell, the Gradient tool now supports multi-color hard-edge gradient fills. This feature is available as a new Step gradient-segment blending mode. This creates a hard-edge transition between the two adjacent color stops at the midpoint.

Step blending in gradient fills Newly added Step blending in gradient fills

On the usability end of things, all transform tools now apply changes when you save or export/overwrite an image without pressing Enter first to confirm changes. Ell also fixed the color of selected text which wasn’t very visible when e.g. renaming a layer.

CIE xyY support

Thanks to Elle Stone, GIMP now features initial support for color readouts in the CIE xyY color space. You can see these values in the Info window of the Color Picker tool and in the Sample Points dock. Most of the related code went into the babl library.

Much like CIE LAB, this color space is a derivative of CIE XYZ. The Y channel separates luminance information from chromaticity information in the x and y channels. You might be (unknowingly) familiar with this color space if you have ever looked at a horseshoe diagram of an ICC profile.

CIE xyY is useful to explore various color-related topics like the Abney effect. See this thread for an example of what people do with this kind of information.

Improved GIMP experience on macOS

Our new macOS contributor, Alex Samorukov, has been very hard at work improving the macOS/OSX package, debugging and patching GIMP, GEGL, and the gtk-osx project.

Some of the macOS specific bugs he fixed are artifacts while zooming, the window focus bug in plug-ins, and non-functional support for some non-Wacom tablets. Jehan, Ell, and Øyvind actively participated in fixing these and other macOS issues.

We also thank CircleCI for providing their infrastructure to us free of charge. This helps us automatically build GIMP for macOS.

That said, please keep in mind that we have very few developers for macOS and Windows. If you want GIMP to be well supported on your operating system of choice, we do welcome new contributors!

Also, see the NEWS file for more information on the new GIMP release, and the commit history for even more details.

Around GIMP

GEGL and babl

The babl library got an important fix that directly affects GIMP users: the color of transparent pixels is now preserved during conversion to premultiplied alpha. This means all transform and deformation operations now maintain color for fully transparent pixels, making unerase and curves manipulation of alpha channel more reliable.

On the GEGL side, a new buffer iterator API was added (GIMP code has been ported to this improved interface as well). Additionally, a new GEGL_TILE_COPY command was added to backends to make buffer duplication/copies more efficient.

Recently, Øyvind Kolås has been working again on multispectral/hyperspectral processing in GEGL, which happens to be the groundwork for CMYK processing. These are therefore the first steps towards better CMYK support in GIMP! We hope that anyone who wants to see this happening will support Øyvind on Patreon!

GIMP in Université de Cergy-Pontoise

Aryeom, well known around here for being the director of the ZeMarmot movie, a skilled illustrator, and a contributor to GIMP, gave a graphics course with GIMP as a guest teacher for nearly a week at the Université de Cergy-Pontoise in France in mid-October.

She taught to two classes: a computer graphics class and a 3D heritage one, focusing on digital illustration for the former and retouching for the latter.

Students being taught computer graphics with GIMP Aryeom and her students in University of Cergy-Pontoise

This is a good hint that GIMP is getting more recognition as it now gets taught in universities. Students were very happy overall, and we could conclude by quoting one of them at the end of a 3-day course:

I didn’t know that GIMP was the Blender for 2D; now this is one more software in my toolbox!

We remind you that you can also support Aryeom’s work on Patreon, on Tipeee or by other means!

Flatpak statistics

Although Flathub does not (yet) provide any public statistics for packages, an internal source told us that there have been over 214,000 downloads of GIMP since Flathub came into existence (October 2017). This is more than 500 downloads a day, and by far the most downloaded application there!

Flathub is a new kind of application repository for GNU/Linux, so of course these numbers are not representative of all downloads. In particular, we don’t have statistics for Windows and macOS. Even for Linux, every distribution out there makes its own package of GIMP.

So this is a small share, and a nice one at that, of the full usage of GIMP around the globe!

GIF is dead? Long live WebP!

The GIF format is the only animated image format which is visible in any web browser, making it the de-facto format for basic animation on the web, despite terrible quality (256 colors!), binary transparency (no partial transparency), and not so good compression.

Well, this may change! A few days ago, WebP reached support in most major browsers (Mozilla Firefox, Google Chrome, Microsoft Edge, Opera), when a 2-year-old feature request for Mozilla Firefox got closed as “FIXED”. This will be available in Firefox 65.

Therefore, we sure hope web platforms will take this new format into consideration, and that everyone will stop creating GIF images now that there are actual alternatives in most browsers!

And last but not least, we remind everyone that GIMP has already had WebP support since GIMP 2.10.0!

If you see this text (instead of an animation), your browser does not support WebP yet! A WebP animation (done in GIMP), by Aryeom, featuring ZeMarmot and a penguin.

Disclaimer: the GIMP team is neutral towards formats. We are aware of other animated image formats, such as APNG or MNG, and wish them all the best as well! We would also be very happy to support them in GIMP, if contributors show up with working patches.

What’s next

We’ve been running late with this release, so we haven’t included some of the improvements available in the main development branch of GIMP. And there are even more changes coming!

Here is what you can expect in GIMP 2.10.10 when it’s out.

  • ACES RRT display filter that can be used in scene-referred imaging workflows. Technically, it’s a luminance-only approximation of the ACES filmic HDR-to-SDR proofing mapping originally written in The Baking Lab project.
  • Space invasion: essentially you can now take an image that’s originally in e.g. ProPhotoRGB, process it in the CIE LAB color space, and the resulting image will be in ProPhotoRGB again, with all color data correctly mapped to the original space / ICC profile. This is a complicated topic; we’ll talk more about it when it’s time to release 2.10.10.

Another new feature we expect to merge to a public branch soon is smart colorization based on the original implementation in the ever-popular GMIC filter.

Given quickly approaching winter holidays and all the busy time that comes with it, we can’t 100% guarantee another stable release this year, but we’ll do our best to keep’em coming regularly!


We wish you a lot of fun with GIMP, as it becomes more stable every day!

November 05, 2018

FreeCAD BIM development news - October 2018

Hi all, High time for a new article about what I've been doing this month with FreeCAD related to BIM development. Sorry for being late! This month again, there are not many new features, basically because 1) I've been to the Google Summer of Code mentors summit at Google, in San Francisco Bay Area, and 2)...

November 02, 2018

Intro to UX design for the ChRIS Project – Part 1

What is ChRIS?

Something I’ve been working on for a while now at Red Hat is a project we’re collaborating on with Boston Children’s Hospital, the Massachusetts Open Cloud (MOC), and Boston University. It’s called the ChRIS Research Integration Service or just “ChRIS”.

Rudolph Pienaar (Boston Children’s), Ata Turk (MOC), and Dan McPherson (Red Hat) gave a pretty detailed talk about ChRIS at the Red Hat Summit this past summer. A video of the full presentation is available, and it’s a great overview of why ChRIS is an important project, what it does, and how it works. To summarize the plot: ChRIS is an open source project that provides a cloud-based computing platform for the processing and sharing of medical imaging within and across hospitals and other sites.

There’s a number of problems ChRIS seeks to solve that I’m pretty passionate about:

  • Using technology in new ways for good.Where would we all be if we could divert just a little bit of the resources we in the tech community collectively put towards analyzing the habits of humans and delivering advertising content to them? ChRIS applies cloud computing, container, and big data analysis towards good – helping researchers better understand medical conditions!
  • Making open source and free software technology usable and accessible to a larger population of users.A goal of ChRIS is to make accessible new tools that can be used in image processing but require a high level of technical expertise to even get up and running. ChRIS has a plugin system is container-based, providing a standardized way of running a diverse array of image processing applications. Creating a ChRIS plugin involves containerizing these tools and making them available via the ChRIS platform. (Resources on how to create a ChRIS plugin are available here.)We are working on a “ChRIS Store” web application to allow plugin developers to share their ready-to-go ChRIS plugins with ChRIS users so they can find and use these tools easily.
  • Giving users control of their data. One of the driving reasons for ChRIS’ creation was to allow hospitals to own and control their own data without needing to hand it over to industry. How do you apply the latest cloud-based rapid data processing technology without giving your data to one of the big cloud companies? ChRIS has been built to interface with cloud providers such as the Massachusetts Open Cloud that have consortium-based data governance, allowing users to control their own data.

I want to emphasize the cloud-based computing piece here because it’s important – ChRIS allows you to run image processing tools at scale in the cloud, so elaborate image processing that typically takes days, weeks, or months to complete could be finished in minutes. For a patient, this could enable a huge positive shift in their care – rather than having to wait for days to get back the results of an imaging procedure (like an MRI), they could be consulted by their doctor and make decisions about their care that same day. The ChRIS project works with developers who build image processing tools and helps them modify and package those tools so they can be parallelized to run across multiple computing nodes in order to gain those incredible speed increases. ChRIS as deployed today makes use of the Massachusetts Open Cloud for its compute resources; it’s a great resource, at a scale that many image processing developers previously never had access to.
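The speed-up comes from splitting the work. As a loose sketch of the idea (nothing here is ChRIS code – the filter, names, and sizes are made up for illustration), a volume can be cut into slices and each slice handed to a separate worker:

```python
from concurrent.futures import ThreadPoolExecutor

import numpy as np

def process_slice(slice_2d):
    """Stand-in for one step of an image-processing tool (a trivial smoothing filter)."""
    out = slice_2d.copy()
    # Average each pixel with its right-hand neighbour -- a placeholder computation.
    out[:, :-1] = (slice_2d[:, :-1] + slice_2d[:, 1:]) / 2.0
    return out

def process_volume(volume):
    """Split a 3-D volume into 2-D slices and process them in parallel.

    Worker threads stand in here for the compute nodes ChRIS would dispatch to.
    """
    with ThreadPoolExecutor() as pool:
        return np.stack(list(pool.map(process_slice, volume)))

scan = np.random.rand(64, 128, 128)   # 64 slices, each 128 x 128
print(process_volume(scan).shape)     # (64, 128, 128)
```

In the real system the “workers” are containerized plugins dispatched to cloud compute nodes rather than local threads, but the slice-level decomposition is the same idea.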


A diagram showing a data source at left with images in it. The images move right into a ChRIS block, from where they are passed further right into compute environments on the right. Within the compute environment block at the right, there are individual compute nodes, each taking an input image passed from ChRIS, pushing it through a plugin from the ChRIS store, and creating an output. The outputs are pushed back to ChRIS. On top of ChRIS are several sibling blocks - the ChRIS UI (red), the Radiology Viewer (yellow), and a '...' block (blue) to represent other front ends that could run on top.

I have some – but little – experience with OpenShift as a user, and no experience with OpenStack or image processing development. UX design, though – that I can do. I approached Dan McPherson to see if there was any way I could help with the ChRIS project on the UX front, and as it turned out, yes!

In fact, there are a lot of interesting UX problems around ChRIS – some of them, I am sure, analogous to other platforms / systems, but some are a bit more unusual! Let’s break down the human interface components of ChRIS, represented by the red, yellow, and blue components at the top of the diagram above:

The diagram above is a bit of a remix of the diagram Rudolph walks through at this point in the presentation; basically, what I have added here are the UI / front end components on the top. A must-see, though, is the demo Rudolph gave showing both of these user interfaces (the radiology viewer and the ChRIS UI) in action:

During the demo you’ll see some back and forth between two different UIs. We’ll start by talking about the radiology viewer.

Radiology Viewer (and, what do we mean by images?)

Today, let’s talk about the radiology viewer first (I’ll call it “Rav”). It’s the yellow component in the diagram above. Rav is a front end that can run on top of ChRIS and allows you to explore medical images, in particular MRIs. You can check out a live version of the viewer, which does not include the library component, here:

By walking through the UX considerations of this kind of tool, we’ll also talk about some properties of the type of images ChRIS is meant to work with. This will help, I hope, to demonstrate the broader problem space of providing a user experience around medical imaging data.

Rav might be used by a researcher to explore MRI images. There are two main tasks they’ll perform using this interface: locating the images they want to work with, then viewing and manipulating those images.

User tasks: Locate images to work with

A PACS (Picture Archiving and Communication System) server is what many medical institutions use to store medical imaging data. It’s basically the ‘data source’ in the diagram at the top of this post. End users may need to retrieve the images they’d like to work with in Rav from a PACS server – this involves using some metadata about the image(s), such as record number, date, etc., to find the images, then adding them to a selection of images to work with. The PACS server itself needs to be configured as well (but hopefully that will be set up for users by an admin).

One thing to note about a PACS server is that you can assume it has a substantial number of images on it, so this first step of finding images and filtering by metadata is important so users don’t have to sift through a mountain of irrelevant data. The other thing to note: PACS is a type of storage, which, depending on implementation, may suffer from some of the UX issues inherent in storage.
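As a toy illustration of that metadata-first workflow (the records and fields below are hypothetical – real PACS queries use DICOM attributes over a query/retrieve protocol), filtering a catalog down to a working selection might look like:

```python
from datetime import date

# Hypothetical metadata records of the kind a PACS query might return.
studies = [
    {"record": "A-1001", "modality": "MR", "body_part": "HEAD", "date": date(2018, 3, 14)},
    {"record": "A-1002", "modality": "CT", "body_part": "CHEST", "date": date(2018, 5, 2)},
    {"record": "A-1003", "modality": "MR", "body_part": "HEAD", "date": date(2018, 9, 21)},
]

def find_studies(records, **criteria):
    """Return only the records whose metadata matches every given criterion."""
    return [r for r in records if all(r.get(k) == v for k, v in criteria.items())]

selection = find_studies(studies, modality="MR", body_part="HEAD")
print([r["record"] for r in selection])   # ['A-1001', 'A-1003']
```

The “Library” tab described below is essentially a UI over this kind of filter-then-select step.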

Below is a rough mockup showing how this interface might look. Note the interface has been split into two main tabs in this mockup – “Library” and “Explore.” The “Library” tab here is devoted to the location of images for building a selection to work with.

User Task: View and configure selected images

Once you have a set of images to work with, you need to actually examine them. To work with them, though, you have to understand what you’re looking at. First of all, one thing that can be hard to remember when looking at 2D representations of images like MRIs – these are images of the same object along three different axes. From one scan, there may be hundreds of individual images that together represent a single object. It’s a bit more complex than your typical 3D view, where you can represent an object from, say, a top, side, and front shot – you’ve got images that actually move through the inside of the object, so there’s kind of a fourth dimension going on.

With that in mind, there are a few types of image sets to be aware of:

Reference vs. Patient
  • Normative / Atlas – These are not images for the patient(s) at hand. These are images that serve as a reference for what the part of the body under study is expected to look like.
  • Patient – These are images that are being examined. They may need to be compared to the normative / atlas images to see if there are differences.
Registered vs. Unregistered
  • Unregistered images are standalone – they are basically the images positioned / aligned as they came from the imaging device.
  • Registered images have been manipulated to align with another image or images via a common coordinate system – scaled, rotated, re-positioned, etc. to line up with each other so they may be compared. A common operation would be to align a patient scan with a reference scan to be able to identify different structures in the patient scan as they were mapped out in the reference.
Processed vs. Unprocessed
  • You may have a set of images that are of the same exact patient, but some versions of them are the output of an image processing tool.
  • For example, the output may have been run through a tractography tool and look something like this.
  • Another example, the output may have been segmented using a tool (e.g., using image processing techniques to add metadata to the images to – for example – denote which areas are bone and which are tissue) and look something like this.
  • Yet another example – the output could be a mesh of a brain in 3D space. (More on meshes.)
  • The type of output the viewer is working with can dictate what needs to be shown in the UI to be able to understand the data.
Other Properties
  • You may have multiple images sets of the same patient taken at different times. Maybe you are tracking whether or not an area is healing or if a structure is growing over time.
  • You may have reference images or patient images taken at particular ages – structures in the body change over time based on age, so when choosing a reference / studying a set of images you need some awareness of the age of the references to be sure they are relevant to the patient / study at hand.
  • Each image has three main anatomical planes along which it may be viewed in 2D – sagittal (side-side), coronal / frontal (front-back), and transverse / axial (top-bottom).

Once a user understands these properties of the image sets sufficiently, they arrange them in a grid-based layout on what I’ll call the viewing table in the center. Once you have an image set ‘on the table,’ you can use a mouse scroll wheel or the play button to step through the slices along the axis the images were taken in. This sounds more complex than it is – imagine a deck of playing cards. If you’re looking at a set of images of a head from a sagittal view, the top card in the deck might show the person’s right ear, the 2nd card might show their right eye in cross-section, the 3rd card might show their nose in cross-section, the 4th card might show their left eye in cross-section, the 5th card might show their left ear… so on and so forth. Rinse and repeat for front-to-back, and top-to-bottom.
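The deck-of-cards model maps neatly onto an array: a scan is a 3-D volume, and each anatomical plane is just a different axis to index along. A minimal sketch (axis ordering and sizes here are assumptions for illustration, not any real file format):

```python
import numpy as np

# A toy "scan": a 3-D volume indexed (sagittal, coronal, axial).
volume = np.arange(4 * 5 * 6).reshape(4, 5, 6)

PLANES = {"sagittal": 0, "coronal": 1, "axial": 2}

def slice_at(vol, plane, index):
    """Extract one 2-D slice -- one 'card in the deck' -- along an anatomical plane."""
    return np.take(vol, index, axis=PLANES[plane])

# Scrolling the mouse wheel is just stepping the index along the chosen axis.
for i in range(3):
    print(i, slice_at(volume, "sagittal", i).shape)   # each sagittal slice is 5 x 6
```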

You can link two images together (for example, a patient image that is registered to a normative image) so that as you step along the axis the images were taken in a given image set, the linked image (perhaps a reference image) also steps along, so you can go slice-by-slice through two or more images at the same time and compare at that level.
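Linked stepping can be sketched as two stacks sharing one index (a simplified model – a real viewer links images through the common coordinate system established by registration):

```python
class LinkedViewer:
    """Two registered image stacks that step through their slices in lockstep."""

    def __init__(self, patient_slices, reference_slices):
        # Registered stacks are assumed here to share a slice count.
        assert len(patient_slices) == len(reference_slices)
        self.patient = patient_slices
        self.reference = reference_slices
        self.index = 0

    def step(self, delta=1):
        """Advance both stacks together, clamping at the ends of the deck."""
        self.index = max(0, min(self.index + delta, len(self.patient) - 1))
        return self.patient[self.index], self.reference[self.index]

viewer = LinkedViewer(["p0", "p1", "p2"], ["r0", "r1", "r2"])
print(viewer.step())      # ('p1', 'r1')
print(viewer.step(5))     # clamped at the end: ('p2', 'r2')
```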

Below is a mockup I made last fall with some suggestions for the pre-existing UI, with some of these ideas in mind (some of which I learned about in the back-and-forth discussion afterwards. 🙂)

A little more information about Rav’s development

Rav as a codebase right now isn’t in active development. It was written using a framework called Polymer, but due to various technical considerations, the team decided the road ahead will involve rewriting the viewer application in React.

An important component used in the viewer that continues to be developed is called amijs. This is the specific component that allows viewing of the image files in the Rav interface.

In terms of UX design, a future version of Rav will likely be implemented using the UX designs we worked on for Rav as it exists today. There is a UX issue queue for Rav in the general ChRIS design repo; Rav-specific issues are tagged. You can look through those issues to see some interesting discussions around the UX for this tool.

What’s next?

I’m hoping to become a regular blogger again. 🙂 I am planning to do another blog post in this series, and it will focus on the main UI of ChRIS itself (the red block in the diagram at the top of this post.) Specifically, I’ll go through some ideas I have for the concept model of the ChRIS UI, which is honestly not complete.

After that, I plan to do another post in the series about the ChRIS store UI, which my colleague Joe Caiani is working on now, with designs created by Shania Ambros, my UX intern this past summer.

Questions, ideas, and feedback most welcome in the comments section!

October 31, 2018

Pipewire Hackfest 2018

Good morning from Edinburgh, where the breakfast contains haggis, and the charity shops have some interesting finds.

My main goal in attending this hackfest was to discuss Pipewire integration in the desktop, and how it will eventually replace PulseAudio as the audio daemon.

The main problems GNOME has had over the years with PulseAudio relate mostly to how PulseAudio was a black box when it came to its routing policy. What happens when you plug an HDMI cable into your laptop? Or turn on your Bluetooth headset? I've heard the stories of folks with highly mobile workstations having to constantly visit the Sound settings panel.

PulseAudio has policy scattered in a number of places (do a "git grep routing" inside the sources to see that): some of it is in the device manager, and the modules themselves can set priorities for their outputs and inputs. But there's nothing that takes all of that information in and makes a decision based on the hardware that's plugged in and the applications currently in use.

For Pipewire, the policy decisions would be split off from the main daemon. Pipewire, as it gains PulseAudio compatibility layers, will grow a default/example policy engine that will try to replicate PulseAudio's behaviour. At the very least, that will mean that Pipewire won't regress compared to PulseAudio, and might even be able to take better decisions in the short term.

For GNOME, we still wanted to take control of that part of the experience, and make our own policy decisions. It's very possible that this engine will end up being featureful and generic enough that it will be used by more than just GNOME, or even become the default Pipewire one, but it's far too early to make that particular decision.

In the meantime, we wanted the GNOME policies not to be written in C, which is difficult for power users to experiment with, especially for edge use cases. We could have started writing a configuration language, but it would have been too specific, and there are plenty of embeddable languages around. It was also a good opportunity for me to finally write the helper library I've been meaning to write for years, based on my favourite embedded language, Lua.

So I'm introducing Anatole. The goal of the project is to make it trivial to write chunks of programs in Lua, while the core of your project is written in C (we might even be able to embed it in Python or Javascript, once introspection support is added).

It's still in the very early days, and unusable for anything as of yet, but progress should be pretty swift. The code is mostly based on Victor Toso's incredible "Lua factory" plugin in Grilo. I'm hoping that, once finished, I won't have to remember on which end of the stack I need to push stuff for Lua to do something with it. ;)
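The pattern Anatole is after – a compiled host delegating its policy decisions to small scripts that can be edited without rebuilding – can be sketched in Python standing in for the C/Lua pair (none of this is Anatole's actual API; the device list and function name are invented for illustration):

```python
# A policy "chunk" loaded by the host at runtime. Editing this string is the
# analogue of editing a Lua policy file: no recompilation of the host needed.
POLICY_SCRIPT = """
def choose_output(devices):
    # Prefer HDMI when plugged in, then Bluetooth, then built-in speakers.
    for kind in ("hdmi", "bluetooth", "builtin"):
        for d in devices:
            if d["kind"] == kind:
                return d["name"]
    return None
"""

namespace = {}
exec(POLICY_SCRIPT, namespace)          # "load" the embedded policy chunk

devices = [{"kind": "builtin", "name": "Speakers"},
           {"kind": "hdmi", "name": "HDMI / DisplayPort 1"}]
print(namespace["choose_output"](devices))   # HDMI / DisplayPort 1
```

The host keeps the plumbing (device enumeration, actually switching streams) while the part users most want to tweak lives in the script.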

October 30, 2018

Using the LVFS to influence procurement decisions

The National Cyber Security Centre (part of GCHQ, the UK version of the NSA) wrote a nice article on using the LVFS to influence procurement decisions. It’s probably also worth noting that the two biggest OEMs making consumer hardware now require all their ODMs to support firmware updates on the LVFS. More and more mega-corporations also have “supports the LVFS” as a requirement for procurement.

The LVFS is slowly and carefully moving to the Linux Foundation, so expect more outreach and announcements soon.

October 29, 2018

Interview with Autumn Lansing

Could you tell us something about yourself?

I’m a comics artist and web developer from Oklahoma City. I’ve been creating comics and artwork for most of my life, though I didn’t seriously start making comics until several years ago. I’ve also worked variously as a writer, editor, layout artist, concert promoter, and art gallery director, among other things.

Do you paint professionally, as a hobby artist, or both?

At this point in time it’s mostly something I do for enjoyment, though I occasionally create artwork for someone else. I’d love to make comics my career, but so far that possibility hasn’t presented itself.

What genre(s) do you work in?

I mostly create comic art in the science fiction and/or horror genres, often with a lot of humor in it. My style ranges a bit. While I sometimes enjoy working in a colorful painterly manner, lately I’ve been happier working in grayscale with mostly flat colors. I find it less distracting. It allows me to focus on linework, which I feel is my strongest point. Sometimes color just gets in the way. I also really like how grayscale looks.

Whose work inspires you most — who are your role models as an artist?

Jaime Hernandez has probably inspired me more than any other comic artist. His work in Love and Rockets strongly influenced both my design and my storytelling. Christopher Baldwin (Spacetrawler, Anna Galactic) has also been a big inspiration. His science fiction comics prompted me to start making my own. Aaron Diaz (Dresden Codak) and John Allison (Scary Go Round, Bad Machinery) have also been influential to a degree. And I certainly can’t leave out fellow Krita user David Revoy. His tutorials helped me learn Krita and shaped how I approach digital art.

How and when did you get to try digital painting for the first time?

When I decided to make comics again in 2011, I initially went the more traditional path of ink and paper, but I quickly saw the advantages of going fully digital, so I bought an Intuos tablet and started learning, using Gimp and Inkscape. I actually got quite good in Inkscape, and most of my earliest digital work was done in it.

What makes you choose digital over traditional painting?

As a comics artist, digital wins hands down over traditional methods. Being able to rearrange things in a panel and quickly change dialogue makes life much easier, especially for me, as I’m always fiddling with things.

How did you find out about Krita?

I took a short break from comics a few years ago, and when I finally decided to start back at it again, I looked around to see if anything new was available other than Gimp and Inkscape. Being a Linux user, my choices are limited in the area of digital painting programs, and I was surprised to learn about Krita. It seemed like a good alternative, so I gave it a try. This was during the 2.x versions. I was pleasantly surprised and decided to stick with it.

What was your first impression?

The first thing that made me love Krita was its focus on drawing and painting, unlike Gimp, which tries to be everything for everyone. The fact that Krita was tailored for what I wanted to do made it more pleasant to work with once I learned how to use it. I always seemed to be fighting Gimp or Inkscape, and Krita felt very friendly in contrast.

What do you love about Krita?

So many things! Especially with the 4.x branch. One of the most important new features for me, though, has been sessions, as I work on multiple pages at a time. Before sessions, I used to keep Krita open for days at a time just so I wouldn’t have to reopen and reposition everything each time I started it. It all happens automatically now, and that’s really saved me some time.

What do you think needs improvement in Krita? Is there anything that really annoys you?

I’d really like to see text improved. I appreciate the complexity involved in adding custom-built vector capabilities to Krita, but the text functionality that was announced a few years ago still lacks a lot of polish. Even using it for short bits of text, like signs or sound effects, can end up being frustrating, and it’s not useful for dialogue at all, or at least not for the way that I work, as there’s no wrap-around flow. I still use Inkscape for text.


What sets Krita apart from the other tools that you use?

At the moment, I don’t really use any other tools for drawing. I use Inkscape and GIMP for various other art related tasks, though, and neither of them are as easy to work with as Krita. Inkscape seems quite buggy, and GIMP just doesn’t have the project management abilities that Krita does.

If you had to pick one favourite of all your work done in Krita so far, what would it be, and why?

If I had to choose one panel, it would probably be the panel from Jane Smith, Inc. where all four of the primary characters appear together for the first time. I really like the composition. Each character’s personality is on full display, and you can easily see their relationship to each other in their words, expressions and body language.

What techniques and brushes did you use in it?

For flat grayscale work, I keep things very simple. I usually work on an entire chapter or scene at once, so I storyboard the pages first in Krita using a blue pencil and text files created in Inkscape. I then do rough sketches of each panel with a custom pencil, and once I’m happy with how everything is falling together, I go back and clean up the rough sketches.

I use two different brushes to ink: a 12px brush for panel borders and a 5px brush for everything else. I like to hand-ink as much as possible, to give more of an alt-comics feel, rather than using svg for frames and balloons. I also do most of my detail work in the inking phase. All inking is done in two layers: a layer for the frames, and one for everything inside.

For coloring, I mostly use the fill tool and a couple of custom brushes — one large brush for places that need hand coloring and one for detail work. Again, keeping it simple, coloring is all done in one layer.

Where can people see more of your work?

I post comics on my website. I’m working on a couple of different projects at the moment, and I put up new pages from time to time. I don’t keep to any schedule. I consider myself a traditional comics artist and not a web comics creator. My work isn’t designed to be read one page at a time, so I only post when I complete scenes or chapters.

Anything else you’d like to share?

I’d like to encourage any comics artist who hasn’t tried Krita to do so. You might be pleasantly surprised at what you find. It’s very possible to make good quality comics with Krita. Don’t let the fact that it’s FOSS scare you away. And I’d also like to thank the Krita dev team for putting up with all my weird bug reports over the years! You guys do a great job.

How to Extend a Moonrise

Last night, as we drove home from the Pumpkin Glow -- one of Los Alamos's best annual events, a night exhibition of dozens of carved pumpkins all together in one place -- I noticed a glow on the horizon right around Truchas Peak and wondered if the moon was going to rise that far north.

Sure enough, I saw the first sliver of the moon poking over the peak as we passed the airport. "We may get an extended moonrise tonight", I said, realizing that as the moon rose, we'd be descending the "Main Hill Road", as that section of NM 502 is locally known, so we'd get lower with respect to the mountains even as the moon got higher. Which would win?

As it turns out, neither. The change of angle during the descent down the Main Hill Road exactly matches the rate of moonrise, so the size of the moon's sliver stayed almost exactly the same during the whole descent, until we got down to the "Y" where a nearby mesa blocked our view entirely. By the time we could see the moon again, it was just freeing itself of the mountains.
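A rough way to see why the two effects can cancel (all quantities here are symbolic – nothing was measured): if the ridge is a horizontal distance $d$ away and a peak of height $H$ is seen from elevation $h$, the ridge sits at an apparent altitude of roughly

\[
\theta \;\approx\; \frac{H - h}{d} \qquad \text{(small-angle approximation)},
\]

so descending at a vertical rate $|\dot h|$ makes the ridge appear to rise at about $|\dot h|/d$ radians per unit time. The moon, meanwhile, gains altitude at no more than the rotation rate of the Earth, about $15^{\circ}$ per hour (less when its path is oblique to the horizon). The sliver stays constant whenever the two rates match:

\[
\frac{|\dot h|}{d} \;\approx\; \omega_{\text{moon}}.
\]

A steep descent toward distant mountains can plausibly satisfy this for a few minutes at a time.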

Neat! Made me think of The Little Prince: his home asteroid B-612 (no, that's not a real asteroid designation) was small enough that by moving his chair, he could watch sunset over and over again. I'm a sucker for moonrises -- and now I know how I can make them last longer!



Begin. You can have lots of different points in just a paragraph nevertheless that they must all be certainly in regards to the subject of this paragraph. Be certain to look at My research completed In the event that you’d want to see additional topics.

The Honest to Goodness Truth on Compare and Contrast Essay Examples

The job of contrasting and comparing phenomena might be demanding a single. When the essential notions may not be analyzed by you Simply citing the similarities and differences isn’t enough. Besides the difference of their own meaning, a single particular struggles to track down any gaps between each equally.

Compare and Contrast Essay Examples Secrets

Be sure you’re currently evaluating each negative if buy essay paper that is the case. The risk can’t be prediction and also measured. In such case, you are going to have topic and you also will get to execute just a tiny amount of studying to find two different subjects to both examine and comparison.

The Most Popular Compare and Contrast Essay Examples

Guarantee you talk with your lecturer and fellow students until you pick 1. Like a senior school student, you shouldn’t pick since you may possibly be stuck on the way an interest which is complicated. They will need to make sure they understand the topic before they start off the doc.

Professors will own a list of certain job requirements that will want to get met. Sites tend to be inclined because they can result in additional marital relationships based on Islam to be restricted. Even the kind of speech given during america will be the speech that is informative.

What You Don’t Know About Compare and Contrast Essay Examples

So that you’ll be able to write a important that is inspirational article, even ask the assistance of authorities and sometimes you might need to take into account several crucial pondering illustrations. There’s a perfect example of this kind of topics under. The first method would be to concentrate on a issue and clarify its benefits and disadvantages, then proceed to another location topic.

If you should be writing a newspaper that is able to allow one to compare and contrast something, inch matter you prefer to centre about the launching structure of your paper since it is going to contain pointers and tips which could guide your reader to everything your matter is all about. It’s mandatory that you research the theme. After this, you cannot track down an area.

The Secret to Compare and Contrast Essay Examples

Still another matter is your own audience. Assist from professional writers can help save you a plenty of period. There’s another common error.

The Little-Known Secrets to Compare and Contrast Essay Examples

If you would like to buy your online make certain you receive it from the seller that is the most proper. Both tablets and Faculties possess a couple down sides at exactly the exact same time plus added benefits. Whenever there is absolutely no delegated textbook concerning the mission you may start out with a subject search and also a search in an library.

The Hidden Truth About Compare and Contrast Essay Examples

Provided that as you are ready to remain on trail of the goal reachable, i.e. receiving marvelous grades, trying to keep a sensible program and studying like mad, you will be simply nice. All players have the possibility to industry and bat. You conserve money, at will save of travelling you’d have to do the number.

Those things’ titles are recorded throughout the very top. The pattern’s collection is dependent upon the magnitude of the specific article. The design of these notions ought to be in purchase.

The Pitfall of Buy Essays Cheap

The Pitfall research paper writers of Buy Essays Cheap

The Fight Against Buy Essays Cheap

Like a means it really is far better to purchase term newspapers from sites. You understand an qualitative and honest on line essay writing agency can provide assistance it’s time to set up your purchase. Now you donat desire to have a excellent article for your sociology class but then begin browsing for an additional internet site when you are in need of some research paper from the discipline of history.

There are particular professors or teacher who might ask for a sort of phrase Annotated Bibliography Templates and the professor needs to be consulted in regards to exactly the precise identical. It really is important to keep in mind that we keep your custom designed after the delivery you can be sure that your writing wo be formatted to get a learner. Re vision Policy At many situations you get the reasonably priced essay writing you’ve requested from the draft.

Whenever you purchase documents you don’t even need to fairly share with you your entire title or faculty details. The tools out there for essay serve the exact aim of one-of-a-kind heights of the curricula. It is somewhat easy to select the least expensive essay writing assistance merely by comparing the prices offered by numerous personalized essay writing service companies.

For somebody who’s not experienced essay producing a 5-page essay that written may take a minimum of one day or more. You may possibly make certain your story comes with a catchy and brilliant opening when you purchase storyline essays. But narrative essays should have an email you would like to ship to a reader.

The thing is that you don’t will want to get all your essays independently because you have the ability touse our article producing any moment. Be sure you tell all your friends just what a support it will be and what’s the ideal place to get inexpensive essays, After getting your completed essay. Locating a essay writing agency has gotten really challenging.

You also might too communicate straight to produce certain the work appears the manner in which you will require. Faculty essay on the internet is the easiest approach to truly have a high-value essay using effort. You are ready to purchase informative article.

University work might be challenging, there are subjects who have many topics, also it will become challenging to carry on to maintain a watch out. Pupils consider because you can locate a great deal of web businesses which assert to furnish college students with superior essays which can be 30,, the perfect way to obtain universities of quality. You may possibly be totally the student on globe.

May place a bidding to aid you. Simple like that, so until you start searching for someone, be certain write my essay, think about doing it yourself, you may discover that it’s quite straightforward and intriguing actions to take. What’s more, it has to be connected towards this matter.

Make sure which you’re going to find yourself a superb worth on the riches and you picking a trustworthy essay au thor. It normally suggests that we shall certainly help you finish your writing mission yet fast you require it to be attained Since you’ve presently understood. Especially if composing a essay, to how you’re very likely to take it within the 20, intend supplied by educator.

Do everything you can to read reviews and to participate with customers who’ve been around earlier. Our pros may stick to all of your instructions thoroughly to be able to compose well suited for your own requirements, an customized mission. It is going to never frustrate you.

Ideas, Formulas and Shortcuts for Buy Essays Cheap

The cost of your document will be formed dependent on caliber grade, the deadline, and also wide range of newspaper you will be needing. You’re able to dictate elements of newspapers you’re assigned to create way far too. Updates in the newspaper which means you are able to monitor it.

Get the Scoop on Buy Essays Cheap Before You’re Too Late

The issue is that services do not offer one of the top quality paper that you expected and also control a lot of money. They’re going to deliver papers and the quality plus the internet sites for affordable services which people’ve reviewed are all excellent in a range of ways you desire. In the event that you avail our documents supplier that is economical you are going to get the best substantial quality material on really price.

The Benefits of Buy Essays Cheap

In case you might have any difficulties with writing of your essay get hold of us and we’ll provide writing support of special forms for affordable rates. The composition thesis that is additional needs to be a schedule of assault for. On average, your portfolio essay may be unorganized.

Free essay sockets are an incredible supply of analysis info. Start with the introduction on your own thesis and explain what it’s that you’re inclined to be re searching at the write-up. You really needs counter-intuitive essay to stay away in the cookie cutter article internet sites that are free of charge.

Regardless of what is said about with an informative article author to compose my article the very fact remains there are a lot of benefits of buying your papers. What resulted was an article. It’s crucial to grasp the gap between the two terms When picking out on out a word papers writing service.

Compare and Contrast Essay Examples Exposed

Compare and Contrast Essay Examples Exposed

Definitions of Compare and Contrast Essay Examples

Additionally, pressure the campaign and time required to publish articles is a bit lower than this which is demanded whenever you have to create over a chalkboard. Both textbooks and the tablets possess benefits and a few disadvantages at research paper writing precisely the time. Now you’re alert about the dangers of applying a compare and contrast essay sample, along with all the simple actuality that wanting to find someone is many situations every time consuming process from itself, you might be asking yourself whether it is the habit written composition service may help you.

You should select a structure which is acceptable for college as well as your class. Both big and small universities can provide all the chances to pupils to realize educational objectives. School is entirely at no cost and mandatory.

Professors will own. Producing an Essay was never simple nor could it be, and in the event you go on action just like an simple endeavor, there’s never a certainty you’ll get the grade that is precise that you simply want to get. Even the kind of speech given throughout america is your speech that is enlightening.

The Secret to Compare and Contrast Essay Examples

Another thing is your own audience. Typically cats don’t desire to sit down with you, and a couple of them dislike being held. There’s another common mistake.

There are lots of aspects that have to get considered in case you may love to decide on a workable and fascinating informative article matter. It might be necessary to talk with different people to comprehend the compare and contrast essay topics. You may even hunt for hints for compare and contrast essay writing that you simply are able to discover on line.

When at all possible, come across an appealing subject about that you may possibly produce.

In order to fully grasp how exactly to begin with a quote and contrast essay somebody should know the newspaper needs. The topic for a quote and contrast essay will probably be put by this issue or course there is a student enrolled. Contrast and Evaluate essays are extended in faculty libraries.

You don’t wish to confuse the reader, so therefore it’s best in case you utilize it throughout your essay and also then select a format that is single. This product may be used by you for an idea to work on your essay that is personal. A contrast essay format needs to to be the format.

The Do’s and Don’ts of Compare and Contrast Essay Examples

At an identical time which you could well be some one that can start out off an essay of one’s thoughts absolutely totally free of issue’s peak, a lot of people discover that it’s more easy to sit down and write out a summary before start. The thing we’ve got in common is the ability to have benefits for matters we’ve realized. Perhaps one of the most typical errors neglecting the other and is currently supplying too many details.

You might wind overly worried In the event you compose an essay with no help, and you also could knock out lots of time. Certainly, internet shopping needs to be carried out with care. A decision is quite stern.

Want to Know More About Compare and Contrast Essay Examples?

The essential guideline in writing paragraphs is always to have an individual notion in most paragraph. Additionally, it ought to have. Your first paragraph should begin with a powerful guide offer any required background advice and end using a thesis statement that is transparent.

The means to start the method of writing review contrast documents is always to think about point the subject or essence of each among the 2 subjects eassay writter service of study. Whenever you’re writing a fantastic conclusion paragraph, then you should consider the important point make sure it truly is involved and you wish to become over. If you don’t get it done correctly to get rid of an essay to be given a high score Even though it might appear to be that conclusion isn’t a very substantial part that your paper you might still get rid of points.

Choosing Compare and Contrast Essay Examples

Choose the desktop information you want to include from the debut. Networking essay topics or inspiration to create your advertising isn’t catchy to find. To compose a paper you’ve must understand howto pick your theme correctly and put it to use to make a more practical outline.

You may make use of the themes or you may simply make a decision to publish concerning one. It really is naturally your subjects ought to be more on and precise point. It is difficult to fully grasp when to discontinue when you are currently working to speak on a topic.

For instance, you may aid the reader visit a meaningful connection between both the subjects. In summary, in spite of the simple fact there really are always a distinct similarities among both nations’ civilizations, moreover, there are many variances. Whenever you comparison mathematics troubles or societal phenomena analyze every one of them and also you need to clearly show their gaps.

Be certain you’re currently evaluating each and every negative quite if that’s the case. Too much of meals and also consuming wholesome food is planning to contribute to disease our own body some times discover that it’s extremely hard to put up with. 1 main difference between both cultures is family worth.

The Truth About Compare and Contrast Essay Examples

The Truth About Compare and Contrast Essay Examples

There are several aspects that have to get considered in the event you would like to decide on a workable and intriguing informative article subject. You’ll find numerous compare and contrast essay subjects, along with a variety of them are not simple to perform. It’s normal to be assigned to compose essays in every area of research, but perhaps not just when choosing a composition class.

Introduction has a thesis that shows the essay’s aim. So, students need to do plenty of study for a way to produce comparison contrast essays. School essays really are simple to produce and also should adhere to the identical newspaper structure like every other additional writing mission that is academic.

Essays might be burdensome for learners since they might need that pupils comprehend just two issues in thickness to write. They should understand the specific prerequisites of an assignment. Numerous students may have to make comparison essays.

You’ll locate a better idea for what format you wish to compose your own essay by utilizing evaluate and comparison essay cases . You may utilize this product to get a guide to utilize your essay that is private. You will find plenty of formats you’re going to be able to follow along with .

It’s vital to choose a minumum of one example and generate a paragraph with all an counterargument too. Just about every paragraph needs to own a topic sentence. The large part of time this type of quick paragraph wont be fully developed as it ought to be.

You will be steered by the 8 steps through the method of writing. You can have a number of points however that they must be clearly in regards to the fundamental subject of this paragraph. You’ve must learn how to compose excellent sentences to keep keep your writing organized throughout revision stages and your first drafting.

The function of contrasting and comparing phenomena may be challenging one. Merely mentioning the similarities and differences isn’t enough if one could perhaps not test the ideas. You have to clearly show their gaps and also analyze every one of these Once you comparison social occurrences or mathematics issues.

It’s possible for you to pick out one predicated in your field of individual and study pursuits. The hazard can not be predict and measured. 1 principal gap between cultures is household values.

Whatever They Told You About Compare and Contrast Essay Examples Is Dead Wrong…And Here’s Why

You get hold of fellow students and your lecturer before you select a single. College can be a time for college students to concentrate. They choose the things that they have heard to finish the draft.

Lincoln desire through, and following the Civil War was assumed to maintain federal unity. Composing support is critical to get each individual. Address given through the duration of america’s type could be the speech that is informative.

What You Need to Do About Compare and Contrast Essay Examples Starting in the Next 10 Minutes

Choosing the themes that are appropriate might have a while in case that you don’t possess a set of sample issues confronting you. Catch the reader’s attention There are a great deal of tips and methods to aid you in capturing a reader’s interest rate. There are several sorts of themes for you personally.

If you’re creating a newspaper which can allow one to review something, inch issue you prefer to centre on the opening structure of one’s paper since it will consist of tips and pointers which will direct your reader to exactly everything your subject is all about. You have to research the topic to pick three asserts. It really is hard to comprehend when to discontinue Whenever you’re working to speak on a specific topic.

The following cause is that they’re loosely understood. Typically cats don’t desire to sit with you, as well as a couple of these dislike staying hauled. There’s another typical mistake.

Furthermore, pressure the effort and time necessary to write content onto the whiteboard is a bit lower than that which is demanded if you need to write over a chalkboard. Both Faculties and the tablets possess benefits along with a couple drawbacks at the time. How will an on-line service agents are well prepared to earn some thing first and plagiarism-free.

The Appeal of Compare and Contrast Essay Examples

So long since you’re prepared to remain on trail of their goal accessible, i.e. obtaining amazing grades, keeping a wise program and studying like crazy, you’re going to be just fine. All gamers got an possibility to bat and area. You conserve income will save of travelling you’d have to do the number.

You purchasing an essay online may possibly wind up stressed In the event you compose a composition with no support, and you also could remove plenty of time. Online shopping needs to be completed together with caution Write a article suggesting a trip it would be the very best selection for the family members and that you wish to take.

Compare and Contrast Essay Examples Exposed

Compare and Contrast Essay Examples Exposed

The New Fuss About Compare and Contrast Essay Examples

Overall, essays conclusion examples mentioned should offer you a bit of inspiration to get the paper. Again, both contrast and compare that they could cover almost any subject and may pop up in a wide assortment of niche locations. Essays need to get written entirely to compose a brand new one.

Every bit of producing, whether it really is an essay that was official along with a journal entry, needs to be typed and typed.

Unfortunately, you would like to prevent baseball games on the article. Students could possibly be asked to compose essays from virtually every part of analysis. Quite a few students may have to make comparison essays for various classes.

A clearer idea about what structure you prefer to compose your composition can be found by you by utilizing review and contrast essay cases writing. There is additionally a good example assess essay on the subject of communication technology. You can find a great deal of formats to select from when considering how exactly to compose your composition.

If you get a book review composition, comparing and contrasting any two chief characters or issues easily sorts out then the very issue of the best way in which. It had been simple to establish what the article was all around. Today, let us assess essays decision instances to receive a more practical in sight relating to this.

A compare and contrast essay will nevertheless has to acquire a great introduction and conclusion. You ought to consider the point be certain it is integrated and you wish to become over Whenever you’re writing an decision paragraph. For those who don’t take action correctly to finish an article to obtain a score though it might appear to be that completion isn’t a considerable part your document you might still shed cherished points.

The end should indicate there are differences and similarities between the 2 nations. Review a way to come across the comparison and similarity method to find the gap. Whenever you comparison science difficulties or societal phenomena you need to show their differences and also analyze just about every among them.

Our writers follow strict guidelines to obtain skilled and industrial good results. Some instructors prefer that you just write about the gaps between two matters, though some would prefer you to concentrate on explaining the similarities too. You may also pick a issue that is single and become started practicing.

Writing is a skill that somebody will learn. You shouldn’t select an interest which is complicated since you might be stuck on the manner. They take the things that they will have learned to complete the draft.

Lincoln’s sole desire during, and observing the Civil War has been supposed to keep unity. Support is only vital to find each individual. The instance is.

Choose the desktop information that you want to add up from the introduction. Catch the reader’s interest a good offer are of ways and suggestions to assist you . There are a number of forms of topics that write an essay for me you experienced .

Before you begin it is critical to select topics which you know properly. It truly is naturally your subjects ought to be more specific and on point. After that, an ideal area can’t be located by you.

The Secret to Compare and Contrast Essay Examples

At the usa that the land of option there means for an individual is enormous. Assistance from expert writers will save you a plenty of time. Moreover, there’s the simple fact of the manner.

The future could look in order because it’s still to reach. Around the flip side, college enables one to just take possession of one’s time, tasks and that you would like to eventually become Even a checklist may make it feasible to keep harmony.

At the same point you can be somebody that may begin an essay off of one’s head of issue’s summit, a great deal of folks find it simpler to sit and write out a summary before start. There’s how in which the car makes you feel as like a driver’s fact. Start with saying that the entire idea you’ve researched setting from the thing you’re speaking, subsequently generate a transition and then put up the entire listing of matters you prefer to defend about the issue.

Those things’ names are listed throughout the top. Very similar to the type of the paper, the tutorial is going to be listed as a means to reproduce it. Decide the way they are similar and they’re different In the event you have got just three items to either compare or contrast.

Getting the Best Plagiarism Free Check

Getting the Best Plagiarism Free Check

You may now also discover the author who is finishing the work without any waiting to this idea help supervisors are going to receive a grasp of him rather than you personally. The following matter is using graphics which are also large. Understand that the trade could possibly be available in time was that your time frame.

Plagiarism Free Check – Dead or Alive?

In the event that you would want to provide a kind of all services you can not sell those products at the shop such as a supermarket store. Therefore, the internet project assistances supply an extremely fair cost for the project to the students. You don’t will need to cover for any added expenses.

Acquiring a site hosting plan is important for new organizations. The firm was able posture to come up with a big way to obtain samples, clients choose if they are able to proceed on dealing with them and know and down load that the grade of those authors. Internet Business 1.

Ofcourse there’s an approval rate under each requester, and that means that you find it possible to discover how they should refuse you personally, but there’s no guarantee. We give you wide assortment of providers that lets you readily submit a paper for plagiarism-free online. There has already been a gain in the amount of services that find it, Since the quantity of plagiarism is on the gain.

In spite of the common premise a plagiarism checker that is superior that is high doesn’t need to charge you your whole kingdom however employing a lousy grade you might along with an arm and a leg. Before the exact cash is covered by you, beware. You are going to have the ability to edit your work better once you get a idea of a person’s personal mistakes.

Life, Death, and Plagiarism Free Check

The literature review is able to assist you to compare and contrast what you’re doing at the historic context of this study in addition to the way your research differs or initial from others have done, assisting you to rationalize why you should do this specific analysis (See Reference 2). First we authors that will supply you with services which can be original and free from grammar errors. You become started believing what method to choose from if you prefer to create a remarkable effect on your tutor.

Yoast SEO is beginner friendly, and it requires good care of things that numerous beginners arenat alert to. The above will be the tips on the way to discover traffic to a web site. Composing a dissertation that is personalized can be quite a grueling task due to the process plus it necessitates assistance whatsoever stages.

Utilize our website to provide a complimentary check your scrawling to you, you upload the file copying or redaction sample. It’s potential to produce custom writing the picture spacious with white space on either side. HubPro can be a free expert modifying agency given by HubPages.

The Plagiarism Free Check Cover Up

By using their English writing on the internet is maybe not using. Segment your dissertation which means you can have deadlines and objectives for them. Dissertation assistance in UK which can be found on the internet will be able to help you to get the most acceptable topic for the dissertation.

Pay for article and receive. From today onward, writing essays won’t be an battle. You will be given by our informative article producing company with skilled composition writing companies to college students that are set to come across new thoughts which will aid them within their writing.


October 22, 2018

security things in Linux v4.19

Previously: v4.18.

Linux kernel v4.19 was released today. Here are some security-related things I found interesting:

L1 Terminal Fault (L1TF)

While it seems like ages ago, the fixes for L1TF actually landed at the start of the v4.19 merge window. As with the other speculation flaw fixes, lots of people were involved, and the scope was pretty wide: bare metal machines, virtualized machines, etc. LWN has a great write-up on the L1TF flaw and the kernel’s documentation on L1TF defenses is equally detailed. I like how clean the solution is for bare-metal machines: when a page table entry should be marked invalid, instead of only changing the “Present” flag, it also inverts the address portion so even a speculative lookup ignoring the “Present” flag will land in an unmapped area.
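The inversion trick can be illustrated with a toy model. This is a simplified Python sketch, not the kernel's actual code; the bit layout assumed here (bit 0 for "Present", an address field in bits 12–51) is a simplification of the real x86-64 PTE format:

```python
# Toy model of the bare-metal L1TF defense: when marking a PTE invalid,
# clear the Present bit AND invert the physical-address bits, so a
# speculative load that ignores Present points at unmapped memory.
PRESENT = 1 << 0
PFN_BITS = 40
PFN_MASK = ((1 << PFN_BITS) - 1) << 12   # assumed address field: bits 12..51

def pte_mark_invalid(pte):
    """Clear Present and invert the address portion."""
    return (pte & ~PRESENT) ^ PFN_MASK

def pte_mark_valid(pte):
    """Undo the inversion and set Present again."""
    return (pte ^ PFN_MASK) | PRESENT

pte = (0x1234 << 12) | PRESENT           # a present page at frame 0x1234
inv = pte_mark_invalid(pte)              # address bits no longer 0x1234
```

Because the inversion is its own inverse, marking the entry valid again recovers the original address, so the mitigation costs almost nothing on the normal path.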

protected regular and fifo files

Salvatore Mesoraca implemented an O_CREAT restriction in /tmp directories for FIFOs and regular files. This is similar to the existing symlink restrictions, which take effect in sticky world-writable directories (e.g. /tmp) when the opening user does not match the owner of the existing file (or directory). When a program opens a FIFO or regular file with O_CREAT and this kind of user mismatch, it is treated like it was also opened with O_EXCL: it gets rejected because there is already a file there, and the kernel wants to protect the program from writing possibly sensitive contents to a file owned by a different user. This has become a more common attack vector now that symlink and hardlink races have been eliminated.
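From userspace the effect looks like ordinary O_EXCL semantics. This hedged sketch simulates only the outcome the kernel now enforces (it is not the kernel logic itself, and the filenames are made up): an attacker's pre-created file makes the victim's O_CREAT open fail instead of being written to.

```python
# Simulate the outcome the protection enforces: opening a pre-existing
# file with O_CREAT (treated as O_CREAT|O_EXCL on a user mismatch)
# fails with EEXIST instead of writing into the attacker's file.
import os
import tempfile

with tempfile.TemporaryDirectory() as tmp:       # stand-in for /tmp
    trap = os.path.join(tmp, "victim.log")
    open(trap, "w").close()                      # "attacker" pre-creates it
    try:
        fd = os.open(trap, os.O_CREAT | os.O_EXCL | os.O_WRONLY, 0o600)
        os.close(fd)
        result = "opened"
    except FileExistsError:                      # EEXIST: open rejected
        result = "EEXIST"
```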

syscall register clearing, arm64

One of the ways attackers can influence potential speculative execution flaws in the kernel is to leak information into the kernel via “unused” register contents. Most syscalls take only a few arguments, so all the other calling-convention-defined registers can be cleared instead of just left with whatever contents they had in userspace. As it turns out, clearing registers is very fast. Similar to what was done on x86, Mark Rutland implemented a full register-clearing syscall wrapper on arm64.
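Conceptually the wrapper does something like the following toy Python model. The register names are arm64's argument registers, but the logic shown is only an illustration, not the actual entry assembly:

```python
# Model of syscall-entry register clearing: argument registers the
# syscall actually uses are preserved, every other one is zeroed so
# stale userspace values can't reach speculative gadgets in the kernel.
ARG_REGS = ["x0", "x1", "x2", "x3", "x4", "x5"]

def clear_unused_regs(regs, nargs):
    """Return the register file as seen after the clearing wrapper."""
    return {r: (regs[r] if i < nargs else 0)
            for i, r in enumerate(ARG_REGS)}

user_regs = {"x0": 3, "x1": 0x1000, "x2": 0xDEAD, "x3": 0xBEEF,
             "x4": 0xF00D, "x5": 0xCAFE}
kernel_regs = clear_unused_regs(user_regs, nargs=2)  # e.g. a 2-arg syscall
```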

Variable Length Array removals, part 3

As mentioned in part 1 and part 2, VLAs continue to be removed from the kernel. While CONFIG_THREAD_INFO_IN_TASK and CONFIG_VMAP_STACK cover most issues with stack exhaustion attacks, not all architectures have those features, so getting rid of VLAs makes sure we keep a few classes of flaws out of all kernel architectures and configurations. It’s been a long road, and it’s shaping up to be a 4-part saga with the remaining VLA removals landing in the next kernel. For v4.19, several folks continued to help grind away at the problem: Arnd Bergmann, Kyle Spiers, Laura Abbott, Martin Schwidefsky, Salvatore Mesoraca, and myself.

shift overflow helper
Jason Gunthorpe noticed that while the kernel recently gained add/sub/mul/div helpers to check for arithmetic overflow, we didn’t have anything for shift-left. He added check_shl_overflow() to round out the toolbox and Leon Romanovsky immediately put it to use to solve an overflow in RDMA.
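As a rough userspace analogue (Python, with an explicit bit width standing in for the C type; the real kernel helper stores the result through a pointer and returns true on overflow, so this is only illustrative):

```python
# Sketch of the shift-left overflow check: shifting is "safe" when
# truncating to the type's width and shifting back recovers the input.
def check_shl_overflow(a, s, bits=32):
    """Return (overflowed, truncated_result) for an unsigned a << s."""
    if s >= bits:                         # out-of-range shift always overflows
        return True, 0
    result = (a << s) & ((1 << bits) - 1) # truncate to the type's width
    return (result >> s) != a, result     # lost bits => overflow

bad, _ = check_shl_overflow(0x4000_0000, 2)   # high bits shifted out
ok, val = check_shl_overflow(3, 4)            # fits comfortably
```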

Edit: I forgot to mention this next feature when I first posted:

trusted architecture-supported RNG initialization

The Random Number Generator in the kernel seeds its pools from many entropy sources, including any architecture-specific sources (e.g. x86’s RDRAND). Because many people do not want to trust the architecture-specific source (since its operation cannot be audited), entropy from those sources was not credited toward RNG initialization, which wants to gather “enough” entropy before claiming to be initialized. However, because some systems don’t generate enough entropy at boot time, it was taking a while to gather enough system entropy (e.g. from interrupts) before the RNG became usable, which might block userspace from starting (e.g. systemd wants early entropy). To help these cases, Ted Ts’o introduced a toggle to trust the architecture-specific entropy completely (i.e. the RNG is considered fully initialized as soon as it gets the architecture-specific entropy). To use this, the kernel can be built with CONFIG_RANDOM_TRUST_CPU=y (or booted with “random.trust_cpu=on“).

That’s it for now; thanks for reading. The merge window is open for v4.20! Wish us luck. :)

© 2018, Kees Cook. This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 License.
Creative Commons License

libxmlb now a dependency of fwupd and gnome-software

I’ve just released libxmlb 0.1.3, and merged the branches for fwupd and gnome-software so that it becomes a hard dependency of both projects. A few people have reviewed the libxmlb code, and Mario, Kalev and Robert reviewed the fwupd and gnome-software changes, so I’m pretty confident I’ve not broken anything too important — but more testing is very welcome. GNOME Software’s RSS usage is about 50% of what ships in 3.30.x, and fwupd’s is down by 65%! If you want to ship the upcoming fwupd 1.2.0 or gnome-software 3.31.2 in your distro you’ll need to have libxmlb packaged, or be happy using a meson subproject to download libxmlb during the build of each dependent project.

How to tell sparrows apart

[Sparrow ID page] I was filing an eBird report the other day, dutifully cataloging the first junco of the year and the various other birds that have been hanging around, when a sparrow flew into my binocular field. A chipping sparrow? Probably ... but this one wasn't so clearly marked.

I always have trouble telling the dang sparrows apart. When I open the bird book, I always have to page through dozens of pages of sparrows that are never seen in this county, trying to figure out which one looks most like what I'm seeing.

I used to do that with juncos, but then I made a local copy of a wonderful comparison photo Bob Walker published a couple years ago on the PEEC blog: Bird of the Week – The Dark-eyed Junco. (I also have the same sort of crib sheet for the Raspberry Pi GPIO pins.) Obviously I needed a similar crib sheet for sparrows.

So I collected the best publicly-licensed images I could find on the web, and made Sparrows of Los Alamos County, with comparison images close together so I can check them quickly before the bird flies away.

If you live somewhere else so the Los Alamos County list isn't quite what you need, you're welcome to use the code to make your own version.

October 16, 2018

Campaign 2018: Join the new Development Fund

Today Blender Foundation releases the Blender Development Fund 2.0 – an updated donation system created to empower the community and to ensure the future growth of Blender.

Join the campaign and become a member on

The community as driving force behind Blender development

Since its beginning in 2002, Blender has been a public and community-driven project, where hundreds of people collaborate online with the shared goal of developing a great 3D creation suite, free for everyone.

Contributions to Blender range from new features, to bug fixing, donations, producing documentation and tutorials. The Blender user community always has been a strong force in driving Blender development.

With the start of the Blender Institute and the Open Movie Projects, Blender became more professional and suitable for CG production. This content-driven development model has helped bring Blender to where it is now, embraced by professionals, studios and big players in the CG industry.

With so many people and companies depending on Blender, its future can’t be left to chance. That’s why the Blender Foundation keeps playing an important role – as a neutral and independent force that manages the infrastructure and facilitates the active contributors for the long term.

Goal: support Core Blender Development

The Blender Development Fund was established in 2011, with the following goals:

  • Support developers with grants to work on projects with well-defined objectives
  • Support work on the core infrastructure and support activities such as patch review, triaging and bug fixing
  • Enable the best talents from the developer community to work full-time on generally agreed projects.

Core development means ensuring high-quality support for essential features, which can range from sculpting, texture painting, UV editing, rendering speedups, character rigging tools, boolean operations, modifiers, add-on management, logic editing, video editing, compositor nodes, support for file formats, the outliner, physics baking, etc.

Blender 2.8 Code Quest: a success story

This year’s Code Quest campaign successfully pushed Blender 2.8 to where it is now. The beta is being wrapped up and is close to release.

Having developers work full-time on Blender, preferably under the same roof, is vital for a project as big as Blender. As a result, several bigger decisions could be made for 2.8 – for example, a much better design for the tools system. Everyone loved the near-daily reports and videos we made – they were vibrant and energizing, and they reached out to other developers and users in an unsurpassed way.

Unfortunately the Quest was just for 3 months. Most developers went home or to other tasks, and progress slowed down to a level we weren’t used to anymore. If only we could bring back everyone to work on Blender!

Development Fund 2.0

That’s where the Development Fund comes in. Until now, however, the Fund was hardly promoted, supported only PayPal, and had to be managed manually. Time for an upgrade, and time to implement a feature that has been wanted for many years – the badges system.

Campaign Targets

Currently the (old) Development Fund has 400 subscribers, bringing in 5500 euro per month in fixed income. We will contact all of them to migrate their subscriptions to the new system. We also have bigger Development Fund contributors paying less regular or variable amounts. In total the Development Fund brings in roughly 12k per month.

Ensuring continuity is the primary goal of the Blender Development Fund. We have defined two funding milestones that will allow us to achieve this goal.

  • 25K Euro/month: the main campaign target. With this budget the fund can support 5 full-timers, including a position for smaller projects.
  • 50K Euro/month: the stretch goal. While this might seem an ambitious goal, this was the monthly budget during the Code Quest. We supported 10 full-timers, including a position for docs/videos and a position for smaller projects.

The funding progress toward these goals can be monitored in real time on


All Development Fund grants and supported projects have been published on since 2011. To improve transparency and involvement we will also:

  • post a half-year report on past results and a roadmap proposal for projects to be supported in the coming half year. The reports will be shared on the blog (and mailing lists or the devtalk forum), open for everyone to discuss and give feedback on.
  • spend a small but relevant percentage (5-8%) of the budget on making development projects visible and accessible in general. That means improving communication (sending as well as receiving!), better technical documentation, and attention for onboarding new developers.
  • aim for the widest possible consensus on roadmaps. We expect that with the badges, development fund members have a good way to make sure they’re being heard.

Final decisions on assigning developer grants will be made by the Blender Foundation, verified and confirmed by development project administrators – the five top contributors to Blender.

A voting or polling mechanism is not planned – although the idea is open for discussion and review, especially if roadmaps don’t get wide endorsement.

How to contribute

Individual membership and Community Badges

We offer a Development Fund membership based on a small recurring donation, starting at 5 Euro per month. As a reward, members can get “badges” (tokens that show their support on prominent Blender websites) and a name mention on the website.

Companies and organisations are also welcome to sign up for an individual membership and have their company name and URL mentioned.

Corporate membership

This higher-rated membership level starts at 5k per year and is for organizations that want the option to monitor in more detail the projects that will get funded with their contributions. They will get personal attention from the Blender team for strategic discussions and feedback on the roadmap.

In addition to this, corporate members can get a prominent name and logo mention on and in official publications by the Blender Foundation.

Amsterdam. October 16, 2018
Ton Roosendaal, Blender Foundation chairman
contact: foundation(at)

Krita at the University of La Plata

Sebastian Labi has been invited to present Krita at the free software tools laboratory of the University of La Plata. He will talk about digital illustration and give a hands-on demonstration of how Krita can be used in the field of digital illustration.

SLAD-FBA (Software Libre para Arte y Diseño – Free Software for Art and Design) is a new research and teaching unit at the Faculty of Fine Arts that promotes the knowledge and use of free software in the academic training of the University of La Plata, and how art and free software interact in an academic setting.

The meeting will be on Wednesday October 31st at 14:00.

October 15, 2018

Interview with Sira Argia

Could you tell us something about yourself?

Hi, my name is Sira Argia. I am a digital artist who uses FLOSS (Free/Libre Open Source Software) for work. I come from Indonesia, Sanggau Regency, West Kalimantan.

Do you paint professionally, as a hobby artist, or both?

I don’t think I am a professional in my digital painting work, because there are so many things I need to learn before I achieve something that can be called “professional”. In the beginning it was just a hobby. I remember my first painting was very bad, haha. But now I think my artworks are far better than the first. I believe that “practice makes perfect”.

What genre(s) do you work in?

I call it a “semi-realistic” art style. I usually use an anime/manga or Disney style for the characters’ faces, and a little bit of realistic shading.

Whose work inspires you most — who are your role models as an artist?

David Revoy and Sara Tepes. They inspire and help me a lot with their tutorials.

How and when did you get to try digital painting for the first time?

I started in 2014-2015. I used the traditional method (pencil and paper) and traced the result in Inkscape.

Then, in 2016, I tried digital painting for the first time, because I had just bought my first graphics tablet.

What makes you choose digital over traditional painting?

I am not in a position where I have to choose between digital and traditional painting, because if there were a teacher I would really want to learn both. The reason I am doing digital painting at the moment is that I can find tutorials everywhere on the internet, and it’s easy to practice because I have the monitor and the digital pen for it. Besides, I work as a freelance artist and every client asks for digital raw files.

How did you find out about Krita?

2014 is the year I first started to try Linux on my laptop, and I learned that Windows programs don’t run perfectly on Linux, even using “wine”. My curiosity about Linux and alternative programs led me to Krita. The more time I spent with Linux, the more I fell in love with it. Finally I thought: “I’ll choose Linux as the single OS on my laptop, and Krita as my digital painting program for work, someday after I get my first graphics tablet.”

What was your first impression?

The first version of Krita I tried was 2.9. I followed David Revoy’s blog and YouTube channel to learn from his tutorials about Krita. At the time I thought Krita was very laggy at bigger canvas sizes, but I still used it as my favorite digital painting program because I thought it was a powerful program.

What do you love about Krita?

Powerful and Free Open Source Software.

What do you think needs improvement in Krita? Is there anything that really annoys you?

I hope Krita will not use so much processor power and RAM in the future; it always force-closes when I use many layers on a bigger canvas.

What sets Krita apart from the other tools that you use?

I’ve never used other painting programs for long, except Krita, so I don’t know whether Krita has unique features. For example, other programs have something called a “clipping layer”, but Krita calls it “inherit alpha”.

If you had to pick one favourite of all your work done in Krita so far, what would it be, and why?

Here’s my character, who is called Seira. She is still my favourite, because this was the first time I didn’t pick any palette colors from the internet, and I started to understand light sources in digital painting.

What techniques and brushes did you use in it?

a. Sketch

I usually begin with the stickman; at this point I think about the pose and the composition. Then I start to draw the naked body from the stickman – that’s the part where I have to get the anatomy right. After that, I start to draw the character’s costume. I am not really good at line art, so I leave it as a sketch, because for me it’s just a guide for the next step.

b. Coloring

This is a very important part for me because I need to be clear about the shading, the texture of the materials, the values, etc. For the brushes, the Deevad bundle and Krita’s default brush bundle are more than enough.

Where can people see more of your work?

Anything else you’d like to share?

I just want to give a message to the people who see this. “It’s better to use FLOSS (Free/Libre Open Source Software) like Krita for example, than use proprietary software (even worse if you use the cracked software). It’s bad. Really…”

How not to parent

First few pages of parenting advice book: “Don’t invalidate your child’s feelings.”

My child: “I don’t like this casserole.”

Me: “Yes you do!”

Introducing Blender Community Badges

Today the Blender Foundation introduces Community Badges: a new way for Blender users to display their role and involvement in the online community.

Badges are assigned to users who take part in official initiatives, see the list below, and are managed via Blender ID. This means that every website providing Blender ID authentication will have the possibility to show the badges associated with each user.

The upcoming Blender Community Badges

Available Community Badges

Websites supporting Blender Community Badges

You can check all your badges on your Blender ID account.

Blender Community Badges Overview in a Blender ID account

We believe this will help shaping online conversations, and offer the possibility to long-time contributors and members of the community to proudly show their status.

And so the Fundraiser Ends

Yesterday was the last day of the developers sprint^Wmarathon, and the last day of the fundraiser. We’re all good and knackered here, but the fundraiser ended at a very respectable 26,426 euros! That’s really awesome, thanks everybody!

We’ve already updated the about box for Krita 4.2 with the names or nicknames of everyone who wanted to be mentioned, and here is the final vote tally:

1 – Papercuts 202
2 – Brush Engine 132
3 – Animation 128
6 – Vector Objects and Tools 73
5 – Layers 59
7 – Text 48
10 – Photoshop layer styles 43
4 – Color Management 29
9 – Resource Management and Tagging 20
8 – Shortcuts and Canvas Input 18

Now we’re going to take a short break, and then it’s time to roll up our sleeves and get to work!

October 14, 2018

The Last Day of the Krita Sprint and the Last Day of the Krita Fundraiser

We fully intended to make a post every day to keep everyone posted on what’s happening here in sunny Deventer, the Netherlands.

And that failed because we were just too busy! I’m triaging bugs even as I’m working on this post, too! We are on track to have fixed about a hundred bugs during this sprint. On the other hand, the act of fixing bugs seems to invite people to test, and then to report bugs, so in the past ten days, there were fifty new reports. That all makes for a very hectic sprint indeed!

All through the week, we all touched many different parts of Krita, sharing knowledge and making every one of us more all-round when it comes to Krita’s codebase. Remember, people have been working on this code since 1999. Nobody back then expected we’d have millions of users one day!

At the Krita headquarters, one usually cooks for two or three; cooking for eight was a bit of a challenge, but cook we did, for Wolthera, Irina, Boudewijn, Dmitry, Ivan, Jouni, Emmet and Eoin. Let’s go through the days, menus and bugs!

Saturday: Minestrone

On Saturday, we merged Michael Zhou’s Summer of Code work on improving the palette docker and making it possible to save palettes in your .kra project file. Of course, that meant that all through the week we had to fix issues here and there caused by this merge, but that was to be expected. All through the week, using the nightly builds must have been a roller-coaster experience for our testers!

And we got a lot more done, too: Jouni was demonstrating his clone frames and frames cycle feature, Eoin added a global kinetic scrolling feature, where you can pan pretty much every list with the middle-mouse button. That makes Krita much nicer to use on touchscreens.

Sunday: Pasta with tomato sauce

On Sunday, we really got into our stride. Wolthera updated the user manual with all the new features — if you haven’t checked out the manual, do so, we’re quite proud of it! Emmet fixed the color picker tool, which had a bit of trouble showing the correct value for the alpha channel. There were smaller and larger bug fixes, but a very nice new feature was added as well: new contributor Reptorian, after coding a bunch of new blending modes, sank his teeth into a larger feature: a symmetric difference mode for the selection tools.
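The idea behind a symmetric difference selection mode can be sketched in a few lines of Python. This is purely illustrative (Krita’s actual selections are pixel masks and vector shapes, not Python sets): the result keeps the pixels that are in exactly one of the two selections.

```python
# Illustrative sketch only: model two selections as sets of pixel
# coordinates. Symmetric difference keeps pixels selected in exactly
# one of the two selections.
old_selection = {(0, 0), (0, 1), (1, 0)}   # pixels already selected
new_shape = {(1, 0), (1, 1)}               # pixels in the newly drawn shape

combined = old_selection ^ new_shape       # symmetric difference (XOR)
print(sorted(combined))                    # → [(0, 0), (0, 1), (1, 1)]
```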

This is not the first time that someone with little or no background in C++ has done really useful work for Krita. In fact, it happens all the time. This week we also worked together with Alberto Flores, who was working on his very first patch. And he did make it!

Monday: Moussaka

On Monday, we fixed a nasty little bug where the order in which you select layers in the layerbox would influence the order in which layers would be merged. There were fixes to the animation timeline.

The big feature of Monday was making it possible to import SVG files as a vector layer. Originally, and silly enough, an SVG file would be imported as a raster layer when using the layer/import image as layer function.

Dmitry started working on making Krita’s canvas support display scaling properly: this work would go on all through the week.

Jouni started working on fixing the sound stuttering when playing an animation. That patch is ready to land, and when we released on Thursday, it turned out that this was the thing we got the most questions about.

And we fixed bugs…

Tuesday: Honey-braised pork, sesame beans and cabbage in oyster sauce

On Tuesday we did a bunch of bug triaging and assigned bugs to the people present. Then there were crash fixes, python fixes, ui polish… And we also looked into what would be needed to support HDR monitors, but that’s going to be a long slog!

Wednesday: Stuffed bellpeppers

On Wednesday, Dmitry refined the way users can modify and work with selections: now one should hover over the selection border to start moving a selection. A lot of bugs and crashes were fixed all day long, but we also started preparing for the release itself by backporting all the fixes to the 4.1 branch.

Thursday: Dining out at Da Mario, and a release

This was the day we had to redo the release three times! But we did release in the end, with almost fifty bugs fixed.

But there was also fixing being done!

Eoin fixed a bug in the animation curve editor and bugs in the bristle and smudge brushes. Wolthera fixed a regression in the color selector. Turns out that if you replace a division by two with a multiplication by 0.5, you shouldn’t keep dividing it…

Dmitry also pushed the final set of fixes for using display scaling correctly in Krita. That’s another very, very long-standing bug finally fixed!

And it’s not just people at the sprint who are busy! Mehmet Salih Çalışkan submitted two patches, one for the text tool, one for the brush editor, and today his patches were pushed. Anna Medonosva was fixing issues with the artistic color selector.

In the afternoon, we went out for a walk and met the Deventer sheep herd, as well as the shepherd.

That evening, none of us cooked, we went out to Da Mario instead to have dinner together with one of the more local supporters of Krita.

Friday: Ajam pedis and sambal goreng beans

Emmet fixed a really hard bug where black would sometimes get mixed into the color when smudging. We’re pretty sure there are more bugs like this in the database, but we’d need some testing done to see which bugs those are. It’s one thing that makes the bug database such a mess: we have lots of duplicates, but often the fact that they’re duplicates isn’t apparent at all.

Jouni fixed issues with the G’Mic integration, Boudewijn looked into the problem with saving a group layer with a transparency mask to OpenRaster: that needs updating the OpenRaster standard though. But he didn’t waste the entire day, he also fixed loading line end markers for vector lines on Windows and macOS. Ivan fixed macOS-specific issues in the animation timeline.

Later in the afternoon we discussed options to reduce the mess in Bugzilla a bit by using a helpdesk system and/or a question-and-answer type site. Bugzilla really should only contain bugs, not user support questions!

Saturday: Runner beans and meatballs with mint sauce

Saturday was our day off. All work and no play and all that sort of thing. We visited a nearby town, called Zwolle. It’s a pretty place, a little larger than Deventer, and it has kept one of its town gates, the Sassenpoort. Previously used as the town’s archive, it’s now and then open to the public, and well worth a visit. Zwolle has been barbaric enough to close its local history museum because it wasn’t turning a profit, but there’s still the Fundatie, a modern art museum, which was showing a smallish exhibition of works by sculptors Giacometti and Chadwick.

And in the evening we went back to hacking. Jouni is this close to fixing audio playback for animations: in fact, it’s ready to be merged!

Sunday: Black pudding with beetroot and stewed pears

And today, while Jouni has already gone back to Finland, we’re back at fixing bugs! But this was one loooong sprint, more of a marathon, and we’re getting a bit frazzled now. Still, there are more bugs to be fixed!

The End

Tomorrow, our marathon coders will wend their way homewards. The fundraiser will end, at the very great amount of 25,000 euros (it’s actually more, because people have also been donating directly through Krita’s bank account, and the website doesn’t count that). And we will be fixing bugs and working on achieving that polish and stability that makes all the difference!

Tape Rabbit

[Packing tape rabbit] I had to mail a package recently, and finished up a roll of packing tape.

I hadn't realized before I removed the tape roll from its built-in dispenser that packing tape was dispensed by rabbits.

October 11, 2018

Krita 4.1.5 Released

Coming hot on the heels of Krita 4.1.3, which had an unfortunate accident with the TIFF plugin, we’re releasing Krita 4.1.5 today! There’s a lot more than just that fix, though, since we’re currently celebrating the last week of the Krita Fundraiser by having a very productive development sprint in Deventer, the Netherlands.

There are some nice highlights as well, like much improved support for scaling on hi-dpi or retina screens. But here’s a full list of more than fifty fixes:

  • Associate Krita with .ico files
  • Auto-update the device pixel ratio when changing screens
  • Disable autoscrolling for the pan tool
  • Disable drag & drop in the recent documents list BUG:399397
  • Disable zoom-in/out actions when editing text in rich-text mode BUG:399157
  • Do not add template files to recent documents list BUG:398877
  • Do not allow creating masks on locked layers. BUG:399145
  • Do not close the settings dialog when pressing enter while searching for a shortcut BUG:399116
  • Fill polyline shapes if some fill style was selected BUG:399135
  • Fix Tangent Normal paintop to work with 16 and 32 bit floating point images BUG:398826
  • Fix a blending issue with the color picker when picking a color for the first time BUG:394399
  • Fix a problem with namespaces when loading SVG
  • Fix an assert when right-clicking the animation timeline BUG:399435
  • Fix autohiding the color selector popup
  • Fix canvas scaling in hidpi mode BUG:360541
  • Fix deleting canvas input settings shortcuts BUG:385662
  • Fix loading multiline text with extra mark-up BUG:399227
  • Fix loading of special unicode whitespace characters BUG:392710
  • Fix loading the alpha channel from Photoshop TIFF files BUG:376950
  • Fix missing shortcut from Fill Tool tooltip. BUG:399111
  • Fix projection update after undoing create layer BUG:399575
  • Fix saving layer lock, alpha lock and alpha inheritance. BUG:399513
  • Fix saving the location of audio source files in .kra files
  • Fix selections and transform tool overlay when Mirror Axis is active BUG:395222
  • Fix setting active session after creating a new one
  • Fix showing the color selector popup in hover mode
  • Fix the ctrl-w close window shortcut on Windows BUG:399339
  • Fix the overview docker BUG:396922, BUG:384033
  • Fix the shift-I color selector shortcut
  • Fix unsuccessful restoring of a session blocking Krita from closing BUG:399203
  • Import SVG files as vector layers instead of pixel layers BUG:399166
  • Improve spacing between canvas input setting categories
  • Make Krita merge layers correctly if the order of selecting layers is not top-down. BUG:399146
  • Make it possible to select SVG text tool text that has been moved inside a hidden group and then made visible again BUG:395412
  • Make the color picker pick the alpha channel value correctly. BUG:399169
  • Prevent opening filter dialogs on non-editable layers. BUG:398915
  • Reset the brush preset selection docker on creating a new document BUG:399340
  • Support fractional display scaling factors
  • Update color history after fill BUG:379199



Note for Windows users: if you encounter crashes, please follow these instructions to use the debug symbols so we can figure out where Krita crashes.


(If, for some reason, Firefox thinks it needs to load this as text: to download, right-click on the link.)

When it is updated, you can also use the Krita Lime PPA to install Krita 4.1.5 on Ubuntu and derivatives. We are working on an updated snap.


Note: the touch docker, gmic-qt and python plugins are not available on OSX.

Source code


For all downloads:


The Linux appimage and the source tarball are signed. You can retrieve the public key over https here. The signatures are here (filenames ending in .sig).

Support Krita

Krita is a free and open source project. Please consider supporting the project with donations or by buying training videos or the artbook! With your support, we can keep the core team working on Krita full-time.

October 10, 2018

Great little improvements

There are relatively few great little improvements to desktop computing interfaces in recent years. One that I’ve particularly enjoyed is the searchable menus in macOS.

In any macOS app, open the Help menu and start typing. For example, if you want the Connect to Server… feature in the Finder app, hit Help, start typing “conn…” and use the arrow keys and Enter key to choose the menu item.

This is particularly useful in complex applications, like Photoshop, that can have hundreds of nested menu items.

Searchable menus in macOS

Update: As Antonio points out in the comments, there’s a keyboard shortcut for this (Shift–Command–?). I never knew! Thanks, Antonio.

October 08, 2018

Krita October Sprint Day 1

On Saturday, the first people started to arrive for this autumn’s Krita development sprint. It’s also the last week of the fundraiser: we’re almost at 5 months of bug fixing funded! All in all, 8 people are here: Boudewijn, the maintainer, Dmitry, whose work is being sponsored by the Krita Foundation through this fundraiser, Wolthera, who works on the manual, videos, code, scripting, Ivan, who did the brush vectorization Google Summer of Code project this year, Jouni, who implemented the animation plugin, session management and the reference images tool, Emmet and Eoin who started hacking on Krita a short while ago, and who have worked on the blending color picker and kinetic scrolling.

We already did a ton of work! Wolthera finished up the last few problems in Michael Zhou’s Google Summer of Code rewrite of the palette docker: that’s merged to master, so it’s in the nightly builds for Windows and Linux. We did some pair programming so the text tool now creates new text with the currently selected color.

Jouni got a long way with the implementation of animation clones and cycles: that is, a set of frames can now be “cloned” to appear in several places in your animation:

Then we sat down and distributed bugs to the hackers present, and we got rid of a lot of bugs already (total bugs, new reports, closed, balance):


We’re going to continue to fix bugs for the rest of the week, of course! And we did some experimentation with streaming to Twitch, so tomorrow afternoon, CEST, we’ll do a live stream of bug fixing! We’ll also be answering questions, so if you want to discuss a particular bug with us, join in!


We’ve also got the updated vote tally for you:

1 – Papercuts 164
2 – Brush Engine 103
3 – Animation 88
6 – Vector Objects and Tools 56
5 – Layers 51
7 – Text 36
10 – Photoshop layer styles 28
4 – Color Management 21
9 – Resource Management and Tagging 18
8 – Shortcuts and Canvas Input 12

The only real change is that Resource Management has now dropped below Color Management; for the rest, the order is pretty stable.

And a bonus video

In case you missed it, Wolthera made a cool video showing off gamut masks and the new palette docker, created by two new Krita contributors:


And it’s great to be together, of course! We’ve got people from the US, from Mexico, Russia, Finland and the Netherlands. For three of us, it’s the first Krita sprint they’ve attended. Here are the early birds who were already happily hacking on Sunday morning, without even waiting until after breakfast!

October 07, 2018

Hot tubs, time machine

As always, the latest episode of Heavyweight made me laugh out loud and feel feelings. My favorite detail is host Jonathan Goldstein’s pretentious pluralisation of the “Hot Tub Time Machine” movies as Hot Tubs Time Machine.

Yes Jonathan, we noticed.

Recommended particularly for those who remember and miss the great CBC Radio show, Wiretap.

October 04, 2018

Announcing the first release of libxmlb

Today I did the first 0.1.0 preview release of libxmlb. We’re at the “probably API stable, but no promises” stage. This is the library I introduced a couple of weeks ago, and since then I’ve been porting both fwupd and gnome-software to use it. The former is almost complete, and nearly ready to merge, but the latter is still work in progress with a fair bit of code to write. I did manage to launch gnome-software with libxmlb yesterday, and modulo a bit of brokenness it’s both faster to start (over 800ms faster from cold boot!) and uses an amazing 90Mb less RSS at runtime. I’m planning to merge the libxmlb branch into the unstable branch of fwupd in the next few weeks, so I need volunteers to package up the new hard dep for Debian, Ubuntu and Arch.

The tarball is in the usual place – it’s a simple Meson-built library that doesn’t do anything crazy. I’ve imported and built it already for Fedora, much thanks to Kalev for the super speedy package review.

I guess I should explain how applications are expected to use this library. At its core, there are just five different kinds of objects you need to care about:

  • XbSilo – a deduplicated string pool and a read only node tree. This is typically kept alive for the application lifetime.
  • XbNode – a “Gobject wrapped” immutable node available as a query result from XbSilo.
  • XbBuilder – a “compiler” to build the XbSilo from XbBuilderNode’s or XbBuilderSource’s. This is typically created and destroyed at startup or when the blob needs regenerating.
  • XbBuilderNode – a mutable node that can have a parent, children, attributes and a value
  • XbBuilderSource – a source of data for XbBuilder, e.g. a .xml.gz file or just a raw XML string

The way most applications will use libxmlb is to create a local XbBuilder instance, add some XbBuilderSource’s and then “ensure” a local cache file. The “ensure” process either mmap-loads the binary blob, if all the file mtimes are unchanged, or compiles a blob and saves it to a new file. You can also tell the XbSilo to watch all the sources that it was built with, so that if any files change at runtime the valid property gets set to FALSE and the application can xb_builder_ensure() at a convenient time.
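The mtime-based “ensure” logic is easy to picture in a rough plain-Python sketch. To be clear, `ensure` below is a hypothetical helper, not libxmlb’s actual C API, and it uses pickle where libxmlb uses its mmap-able binary format:

```python
import os
import pickle

def ensure(cache_path, sources, compile_fn):
    """Return a compiled blob, rebuilding only when a source file changed.

    Hypothetical sketch of the caching pattern: compare the recorded
    source mtimes against the current ones; reuse the cache on a match,
    otherwise recompile and rewrite the cache file.
    """
    mtimes = {src: os.path.getmtime(src) for src in sources}
    if os.path.exists(cache_path):
        with open(cache_path, "rb") as f:
            cached = pickle.load(f)
        if cached["mtimes"] == mtimes:
            return cached["blob"]           # fast path: reuse the cache
    blob = compile_fn(sources)              # slow path: recompile
    with open(cache_path, "wb") as f:
        pickle.dump({"mtimes": mtimes, "blob": blob}, f)
    return blob
```

The real library goes further, as described above: the silo can also watch its sources at runtime and flip its valid property, so the application can re-ensure at a convenient moment instead of only checking at startup.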

Once the XbBuilder has been compiled, a XbSilo pops out. With the XbSilo you can query using most common XPath statements – I actually ended up implementing a FORTH-style stack interpreter so we can now do queries like /components/component/id[contains(upper-case(text()),'GIMP')] – I’ll say a bit more on that in a minute. Queries can limit the number of results (for speed), and are deduplicated in a sane way so it’s really quite a simple process to achieve something that would be a lot of C code. It’s possible to directly query an attribute or text value from a node, so the silo doesn’t have to be passed around either.

In the process of porting gnome-software, I had to make libxmlb thread-safe, which required some internal reorganisation. We now have a non-exported XbMachine stack interpreter, and then the XbSilo actually registers the XML-specific methods (like contains()) and functions (like ~=). These get passed some per-method user data, and also some per-query private data that is shared with the node tree, allowing things like [last()] and position()=3 to work. The function callbacks just get passed a query-specific stack, which means you can allow things like comparing “1” to 1.00f. This makes it easy to support more of XPath in the future, or to support something completely application-specific like gnome-software-search() without editing the library.
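To illustrate the FORTH-style approach with a toy sketch (in Python, and not XbMachine’s real implementation): a predicate such as contains(upper-case(text()),'GIMP') can be compiled down to a postfix program that is run against a small value stack, one operation at a time.

```python
def run(program, node_text):
    """Evaluate a postfix (FORTH-style) predicate against one node's text."""
    stack = []
    for op in program:
        if op == "text()":
            stack.append(node_text)          # push the node's text value
        elif op == "upper-case":
            stack.append(stack.pop().upper())
        elif op == "contains":
            needle = stack.pop()
            haystack = stack.pop()
            stack.append(needle in haystack)
        else:
            stack.append(op)                 # anything else is a literal
    return stack.pop()

# contains(upper-case(text()), 'GIMP') in postfix order:
program = ["text()", "upper-case", "GIMP", "contains"]
print(run(program, "gimp"))      # → True
print(run(program, "inkscape"))  # → False
```

The nice property, as noted above, is extensibility: registering one more operation handler adds a new function to the query language without touching the interpreter loop.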

If anyone wants to do any API or code review I’d be super happy to answer any questions. Coverity and valgrind seem happy enough with all the self tests, but that’s no replacement for a human asking tricky questions. Thanks!

Looking forward to Krita 4.2!

Everyone is hard at work, and what will become Krita 4.2 is taking shape already. Today we’re presenting a preview of Krita 4.2. It’s not complete yet, and there ARE bugs. More than in the stable release (we’ll be doing a 4.1.4 after all next week to clear up some more bugs…), and some might make you lose work.

Support Krita! Join the 2018 Fundraiser!

But experiment with it, test it, check it out! There are lots of new things in there, and we’d love your feedback! There are also quite a few things we’re working on right now that aren’t in this preview, but will be in 4.2. And we’ll talk about those, too!

What’s in there already

Masks and Selections. We’ve done a lot of work on improving working with masks and selections. Some of that work has already landed in Krita 4.1.3, but there’s more to come, ranging from making it easier to paint selection masks, to a new way of moving and transforming selections, to performance improvements all round. The “select opaque” function has received new options.

Gamut masks. A much-demanded new feature, gamut masks mask out part of the color selector. A technique described by James Gurney, this helps you to use color in a harmonious way. You can create new masks and edit existing masks right inside Krita. The masks now work with both the artistic and the advanced color selector. Rotating masks over the color selector is possible as well!

Improved performance. We’re always working to make Krita perform better, and there will always be room for improvement. This preview contains Andrey Kamakin’s Google Summer of Code work on the very core of Krita: the tile engine. That is, the bits where all your layers’ pixels are handled. There’s still some fixing to do here, so be warned that if you paint with really big brushes, you may experience crashes. At the same time, this preview also contains more of Ivan Yossi’s Summer of Code work: the creation of brush masks now uses your CPU’s vectorization instructions, and that also increases performance! Dmitry also worked hard on improving the performance of the Layer Styles feature, especially for the Stroke Layer Style. The rendering became more correct at the same time. And fill layers have become much faster, too!

Keep up to date! The welcome screen can now show the latest news about Krita. It’s off by default, since to bring you the news we have to connect to the Krita website. This is only on Linux for now, since we’re still working on setting up the necessary libraries for encryption on Windows and macOS.

Colored Assistants. It’s now possible to give your painting assistants individual colors, and that color is saved and restored when you save and load your .kra project file.

Activate transform tool on pasting. When pasting something in Krita a new layer is created. Most of the time you’ll want to move the pasted layer around or transform it. There’s now a setting in the preferences dialog that, when checked, will make Krita automatically select the transform tool.

Improved move tool: undo and redo with the move tool was always a bit wonky… Now it works as expected. Not a big thing, but it should help with everyone’s workflow.

A smoother UI: 4.2 will have lots of small fixes to improve your workflow. That ranges from making it possible to resize the thumbnails in the layer docker to improved interaction with color palettes to making it possible to translate plugins written in Python. There are also new blending modes, with more coming, and the G’Mic plugin has been updated to the latest version.

Lots of bug fixes. We’re already at nearly 200 bug fixes for 4.2, and that number will only increase.

What we’re working on

But we’re not done yet! We intend to release Krita 4.2 this year, in December, but we haven’t gone into feature freeze yet. This is a little taste of what may still be coming in 4.2!

This isn’t in the preview yet, but here are a couple of things that our UX expert, Scott, is working on. First, the brush editor is being redesigned. As you can see from the video, it’s being condensed, and that’s because we also want to make it possible to dock it as a panel in one of the dock areas of Krita, or have it floating free, possibly on another monitor.

Then, the text tool‘s UI is being revamped. While Dmitry is working on making the text tool more reliable, Scott is working on making it nicer to use:

Michael’s Summer of Code work hasn’t been merged yet, but we fully intend to do that before the release. This will improve the stability of working with color palettes and make it possible to save palettes in your .kra Krita project file.

Another area where work is ongoing is resource management: that is, loading, working with, tagging, saving and sharing things like brush presets, brush tips, gradients or patterns. This is a big project, but when it’s done Krita will start faster, use less memory, and a lot of little niggles and bugs with handling resources will be gone.


Have fun with the 4.2 preview!




(If, for some reason, Firefox thinks it needs to load this as text: to download, right-click on the link.)


Note: the touch docker, gmic-qt and python plugins are not available on OSX.

Source code


For all downloads:

October 02, 2018

FreeCAD BIM development news - September 2018

Hi folks, Time for one more of our monthly posts about the development of BIM tools for FreeCAD. This month unfortunately, since I was taking some holiday, travelling (and sketching) for the biggest part of the month, I have less new stuff than usual to show. To compensate, I tried a longer and more detailed...

October 01, 2018

Updated Vote Tally!

And here’s an update on voting! We’re almost at 15,000 euros now, or more than four months of solid work. And this is what it looks like you want us to work on!

1 – Papercuts 139
2 – Brush Engines 82
3 – Animation 68
6 – Vector Objects and Tools 44
5 – Layers 42
7 – Text 31
10 – Photoshop layer styles 21
9 – Resource Management and Tagging 16
4 – Color Management 15
8 – Shortcuts and Canvas Input 10

Last time we tallied the votes, the papercuts were on top as well. Probably because it’s the default choice, but still. Today, Animation is in third place, and brush engines in second. But Layers has dropped one place, in favour of Vector Objects and Tools. And where Resource Management and Tagging was at the bottom last time, this time it has climbed up to eighth place.

This week, we also plan to bring out a preview release of Krita 4.2. We don’t have everything in it that we want yet, like the updated resource handling, but there’s already plenty to play with!

Interview with João Garcia

Could you tell us something about yourself?

My name is João Garcia and I’m an illustrator hailing from Brazil, more specifically, from the city of Florianópolis in the southern part of the country. I graduated in Design in the Universidade Federal de Santa Catarina with a focus on illustration and animation.

Do you paint professionally, as a hobby artist, or both?

I’d say both. I work mainly with game art but I like to explore new ideas with illustration in my spare time.

What genre(s) do you work in?

I don’t think I have a set genre I exclusively work in, but I tend to more modern stuff involving technology or the future. I had a cyberpunk streak a couple of months ago hahah.

I rarely do fantasy or medieval works, but, hey, maybe in the future I’ll work in that genre.

Whose work inspires you most — who are your role models as an artist?

Thanks to Instagram and ArtStation I just can’t stop finding new inspiring art. At the moment my inspirations are: Jacob Hankinson, Ashley Wood, Kudaman, Faraz Shanyar, Ilya Kuvshinov, Anthony MacBain and many more.

How and when did you get to try digital painting for the first time?

I first tried my hand at digital art when I entered college around 2009. There was this shiny thing called a tablet that I didn’t know existed, but I knew I immediately wanted to use it.

What makes you choose digital over traditional painting?

The possibilities. That being said I learned tons from practicing the traditional way.

How did you find out about Krita?

I simply wanted to find alternatives to Photoshop, to be honest. I tried GIMP first and it didn’t suit my workflow and then, luckily, I found Krita.

What was your first impression?

It took a day or two to get used to the software, but I quickly got the hang of it.

What do you love about Krita?

The fact that it is specifically made for digital artists. It becomes so intuitive after a while. That, its selection of brushes and, well, the fact that it’s open-source. It amazes me how much Krita suits digital art.

What do you think needs improvement in Krita? Is there anything that really annoys you?

Maybe the fact that it seems to stutter with high resolution files. Or that may just be my not-so-great notebook.

What sets Krita apart from the other tools that you use?

As I said before, the fact that it is made exclusively with the digital artist in mind. I mean, that new reference image tool is great!

If you had to pick one favourite of all your work done in Krita so far, what would it be, and why?

“Museu Nacional”. It was a way of expressing my sadness at losing such a national treasure.

What techniques and brushes did you use in it?

Sketching 2 Chrome Large, Bristles 2 Flat Rough, Watercolor Texture, Airbrush Soft and Blender Rake, basically.

Where can people see more of your work?

I’m at ArtStation, Instagram and Mastodon as @joaogarciart. Also, my website:

Anything else you’d like to share?

Yeah. Krita developers and the whole team: Please keep up your great work!

September 29, 2018

Armband Exxess [sic] Max

Here’s a stupid product idea for you.

It’s a case for the iPhone XS Max that is compatible with two Apple Watch wristbands, so you can wear your phone strapped to your forearm.

I figure it could be at least $299, since you’ve already had to spend at least $1,197 and as much as $2,577 (with options maxed) on your phone and two wristbands.

I’d call it the Armband Exxess [sic] Max for iPhone XS Max. If it existed.

Here’s an artist’s rendering:

September 27, 2018

Reluctant heroism

What makes the testimony of a survivor of sexual assault so heroic is that they shouldn’t have had to be a hero. As a society, we owe a great debt.

Krita 4.1.3 Released

Today we’re releasing the latest version of Krita! In the middle of our 2018 fundraiser campaign, we’ve found the time to prepare Krita 4.1.3. There are about a hundred fixes, so it’s a pretty important release and we urge everyone to update! Please join the 2018 fundraiser as well, so we can continue to fix bugs!

Now you might be wondering where Krita 4.1.2 went to… The answer is that we had 4.1.2 prepared, but the Windows builds were broken because of a change in the build infrastructure. While we were fixing that, Dmitry made a couple of bug fixes we really wanted to release immediately, like an issue where multiline centered text would be squashed into a single line when you’d edit the text. So we went and created a new version, 4.1.3!

Krita 4.1.3 is a bug fix release, so that’s the most important thing, but there are also some new things as well.

The first of these is the new welcome screen, by Scott Petrovic. You get some handy links, a list of recently used files, a link to create or open a file and a hint that you can also drag and drop images in the empty window to open them.

Dmitry Kazakov has worked like crazy fixing bugs and improving Krita in the past couple of weeks. One of the things he did was improve Instant Preview mode. Originally funded by our 2015 Kickstarter, Instant Preview works by computing a scaled-down version of the image and displaying that. But with some brushes, it would cause a little delay at the end of a stroke, or some flickering on the canvas: BUG:361448. That’s fixed now, and painting really feels smoother! And for added smoothness, most of Ivan Yossi’s Google Summer of Code work is also included in this release.

We’ve also done work on improving working with selections. Krita’s selections can be defined as vectors or as a pixel mask. If you’re working on a vector selection, the figure tools, like rectangle or ellipse, now add a vector to the selection instead of rasterizing it.

The move tool has been improved so it’s possible to undo the steps you’ve made with the move tool, instead of undo immediately putting the layer back where it originally came from. See BUG:392014.

The bezier curve tools have been improved: there is now an auto-smoothing option. If you select auto-smoothing, the created curve will not be a polygon, but a smooth curve with the type of all the points set to “smooth”. BUG:351787

The final new feature is round corners for the rectangle tool. Whether you’re working on a pixel or a vector layer, you can set round corners for the resulting shape. BUG:335568. You could, of course, already round vector rectangles by editing the shape, but this is easier.

The Comics Project manager, a Python plugin created by Wolthera van Hövell tot Westerflier has seen a ton of improvements, especially when it comes to generating standard-compliant epub and acbf files. On a related note, check out Peruse, the KDE Comic Book Reader. It’s a long list of improvements:

  • Add improved navigation to generated epubs. This adds…
    • Region navigation for panels and balloons, as per epub spec.
    • Navigation that uses the “acbf_title” keyword to create a TOC in both nav.xhtml and ncx
    • A Pagelist in both nav.xhtml and ncx.
  • Ensure generated EPUBs pass EPUB check validation. This involved ensuring that the mimetype gets added first to the zip, as well as some fixes to the author metadata and the NCX pagelist.
  • Fix language nonsense.
  • Fix several issues with the EPUB metadata export.
    • Add MARC-relators for use with the ‘refines’.
    • Add UUID sharing between acbf and epub.
    • Add modified and proper date metadata.
  • Implement “epub_spread”, the primary color ahl meta and more. This also…
    • Makes the balloon localisation more robust.
    • Names all balloons text-areas as that is a bit more accurate
    • Set a sequence on author
    • Adds a ton of documentation everywhere.
  • Make the generated EPUB 3 files pre-paginated. This’ll allow comics to be rendered as part of a spread which should have a nice result.
  • Move Epub to use QDomDocument for generation, split out ncx/opf. This is necessary so we have nicer looking xml files, as well as a bit more room to do proper generation for epub 2/3/3+
  • Update ComicBookInfo and ComicRack generators.
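
One detail from the list above deserves a closer look: EPUB check fails unless the `mimetype` entry is the very first member of the zip, stored without compression. A minimal Python sketch of that container rule (illustrative only — the Comics Project plugin’s actual code differs):

```python
import io
import zipfile

def build_epub_container(files):
    """Return the bytes of an EPUB-style zip: the 'mimetype' entry is written
    first and stored uncompressed, as EPUB check requires."""
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w") as zf:
        # ZipInfo defaults to ZIP_STORED (no compression), which is exactly
        # what we need; writing it first places it first in the archive.
        zf.writestr(zipfile.ZipInfo("mimetype"), "application/epub+zip")
        for name, data in files.items():
            zf.writestr(name, data, compress_type=zipfile.ZIP_DEFLATED)
    return buf.getvalue()

epub = build_epub_container({"META-INF/container.xml": "<container/>"})
print(zipfile.ZipFile(io.BytesIO(epub)).namelist())  # 'mimetype' comes first
```

Readers can then sniff the format at a fixed byte offset without unpacking the archive.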

And here’s the full list of fixed bugs:

Animation

  • Add a workaround for loading broken files with negative frame ids. BUG:395378
  • Delete existing frame files only within exported range BUG:377354
  • Fix a problem with the Insert Hold Frames action. We should also “offset” empty cells to make sure the expanding works correctly. BUG:396848
  • Fix an assert when trying to export a PNG image sequence BUG:398608
  • Fix updates when switching frame on a layer with scalar channel
  • Use user-selected color label for the auto-created animation frames BUG:394072
  • Fix saving of the multiple frames insertion/removal options to the config

Improvements to support for various file formats

  • Fix an assert if an imported SVG file links to non-existent color profile BUG:398576
  • Fix backward compatibility of adjustment curves. Older versions supported fewer adjustable channels, so we can no longer assume the count in configuration data matches exactly. BUG:396625
  • Fix saving layers with layer styles BUG:396224
  • Let Krita save all the kinds of layers into PSD (in rasterized way) BUG:399002
  • PNG Export: convert to rgb, if the image isn’t rgb or gray BUG:398241
  • Remove fax-related tiff options. In fax mode tiff can store only 1 bit per channel images, which Krita doesn’t support. So just remove these options from the GUI BUG:398548

Filters

  • Add a shortcut for the threshold filter BUG:383818
  • Fix Burn filter to work in 16-bit color space BUG:387102
  • Make color difference threshold for color labels higher
  • Restore the shortcut for the invert filters.

Brushes

  • Remove hardcoded brush size limit for the Quick Brush BUG:376085
  • Fix rotation direction when the transformed piece is mirrored. BUG:398928
  • Make Stamp brush preview be scaled correctly CCBUG:399065

Tablets

  • Add a workaround for tablets not reporting tablet events in hover mode BUG:363284

Text

  • Do not reset text style when selecting something in text editor
  • Fix saving line breaks when the text is not left aligned BUG:395769

Reference images tool

  • Fix reference image cache update conditions BUG:397208

Build system

  • Fix build with dcraw 0.19

Canvas

  • Disable pixel grid action if opengl is disabled BUG:388903 Patch by Shingo Ohtsuka, thanks!
  • Fix painting of selection decoration over grids BUG:362662

Fixes to Krita’s Core

  • Fix saving to a dropbox or google drive folder on Windows: a temporary workaround until QTBUG-57299 is fixed; QSaveFile is disabled on Windows.
  • Fix to/fromLab16/Rgb16 methods of the Alpha color space
  • Fix undo in the cloned KisDocument BUG:398730

Layers

  • Automatically avoid conflicts between color labels and system colors
  • Fix cursor jumps in the Layer Properties dialog BUG:398958
  • Fix resetting active tool when moving layers above vector layers BUG:398095
  • Fix selecting of the layer after undoing Flatten Image BUG:398814
  • Fix showing two nodes when converting to a Filter Mask: 1) when converting to a filter mask, we should first remove the source layer, and only after that show the filter selection dialog; 2) also make sure that the operation is rolled back when the user presses the Cancel button
  • Fix updates of Clone Layers when the nodes are updated with subtree walker
  • Fix a spurious assert in layer cloning BUG:398788

Metadata handling

  • Fix a memory access problem in KisExifIO
  • Fix memory access problems in KisExifIo
  • Show metadata in the dublin core page of the metadata editor. The editor plugin is still broken, with dates not working and bools not working, but now at least strings one has entered are shown again. CCBUG:396672

Python scripting

  • Fix a SegFault in LibKis Node mergeDown BUG:397043
  • Fix apidox for Node.position() BUG:393035
  • Add modified() getter to the Document class BUG:397320
  • Add resetCache() Python API to FileLayer BUG:398740
  • Fix memory management of the Filter’s InfoObject BUG:392183
  • Fix setting file path of the file layer through python API BUG:398740
  • Make sure we wait for the filter to be done

Resource handling

  • Fix saving a fixed bundle under the original name

Selections

  • Fix “stroke selection” to work with local selections BUG:398007
  • Fix a crash when moving a vector shape selection when it is an overlay
  • Fix crash when converting a shape selection shape into a shape selection
  • Fix crash when undoing removing of a selection mask
  • Fix rounded rectangular selection to actually work BUG:397806
  • Fix selection default bounds when loading old type of adjustment layers
  • Stroke Selection: don’t try to add a shape just because a layer doesn’t have a paint device BUG:398015

Other tools

  • Fix color picking from reference images. Desaturation now affects the picked color, and reference images are ignored for picking if hidden.
  • Fix connection points on cage transform BUG:396788
  • Fix minor UX issues in the move tool: 1) adds an explicit frame when the move stroke is in progress; 2) Ctrl+Z now cancels the stroke if there is nothing to undo BUG:392014
  • Fix offset in Move Tool in the end of the drag
  • Fix the shift modifier in the Curve Selection Tool. The modifier of the last-clicked point should define the selection mode. For selection tools we just disable the shift+click “path-close” shortcut of the base path tool. BUG:397932
  • Move tool crops the bounding area by exact bounds
  • Reduce aliasing in reference images BUG:396257

Papercuts and UI issues

  • Add the default shortcut for the close action: when opening Krita with an image, the close document shortcut was not available.
  • FEATURE: Add a hidden config option to lock all dockers in place
  • Fix KMainWindow saving incorrect widget settings
  • Fix broken buddy: Scale to New Size’s Alt-F points to Alt-T BUG:396948
  • Fix the http link color in KritaBlender.colors: the links on the startup page of Krita were dark blue, the exact same value as the background, making them hard to read. Switching them to bright cyan improves the situation.
  • Fix loading the template icons
  • Fix the offset dialog giving inaccurate offsets. BUG:397218
  • Make color label selector handle mouse release events CCBUG:394072
  • Remember the last opened settings page in the preferences dialog
  • Remember the last used filter bookmark
  • Remove the shortcut for wraparound mode: It’s still available from the menu and could be put on the toolbar, or people could assign a shortcut, but having it on by default makes life too hard for people who are trying to support our users.
  • Remove the shortcuts from the mdi window’s system menu’s actions. The Close Window action can now have a custom shortcut, and there are no conflicts with hidden actions any more. BUG:398729 BUG:375524 BUG:352205
  • Set color scheme hint for compositor. This is picked up by KWin and sets the palette on the decoration and window frame, ensuring a unified look.
  • Show a canvas message when entering wraparound mode
  • Show the zoom widget when switching documents BUG:398099
  • Use KSqueezedTextLabel for the pattern name in the pattern docker and brush editor BUG:398958
  • sort the colorspace depth combobox



Note for Windows users: if you encounter crashes, please follow these instructions to use the debug symbols so we can figure out where Krita crashes.


(If, for some reason, Firefox thinks it needs to load this as text: to download, right-click on the link.)

When it is updated, you can also use the Krita Lime PPA to install Krita 4.1.3 on Ubuntu and derivatives. We are working on an updated snap.


Note: the touch docker, gmic-qt and python plugins are not available on OSX.

Source code


For all downloads:


The Linux appimage and the source tarball are signed. You can retrieve the public key over https here: . The signatures are here (filenames ending in .sig).

Support Krita

Krita is a free and open source project. Please consider supporting the project with donations or by buying training videos or the artbook! With your support, we can keep the core team working on Krita full-time.

September 26, 2018

Support Andrea Ferrero on Patreon!

Andrea is developing Photo Flow, GIMP AppImage, Hugin AppImage, and more!

Andrea Ferrero, or as we know him, Carmelo_DrRaw, has been contributing to the PIXLS.US community since April of 2015. A self-described developer and photography enthusiast, Andrea is the developer of the PhotoFlow image editor, and is producing AppImages for:

Andrea is the best sort of community member, contributing to six different projects (including his own)! He is always thoughtful in his responses, does his own support for PhotoFlow, and is kind and giving. He has finally started a Patreon page to support all of his hard work. Support him now!

He was also kind enough to answer a few questions for us:

PX: When did you get into photography? What’s your favorite subject matter?

AF: I think I was about 15 when I got my first reflex, and I was immediately fascinated by macro-photography. This is still what I like to do the most, together with taking pictures of my kids. ;-) By the way, you can visit my personal free web gallery on GitHub: (adapted from this project).

It is still a work in progress, but you are welcome to fork it and adapt it to your needs if you find it useful!

PX: What brought you to using and developing Free/Open Source Software?

AF: I started to get interested in programming when I was at university, in the late 90’s. At that time I quickly realized that the easiest way to write and compile my code was to put Linux on my hard drive. Things were not as easy as they are today, but I eventually managed to get it running, and the adventure began.

A bit later I started a scientific career (nothing related to image processing or photography, so I won’t bother with more details about my daily job), and since then I have been a user of Linux-based computing clusters for almost 20 years at the time of writing… A large majority of the software tools I use at work are free and open source, and this has definitely marked my way of thinking and developing.

PX: What are some new/exciting features you develop in Photo Flow?

AF: Currently I am mostly focusing on HDR processing and high-quality Dynamic Range compression - what is also commonly called shadows/highlights compression.

More generally, there is still a lot of work to do on the performance side. The software is already usable and quite stable, but some of the image filters are still a bit too slow for real-time feedback, especially when combined together.

The image exporting module is also currently a work in progress. It is already possible to select either Jpeg or TIFF (at 8, 16, or floating-point 32 bits per channel) as the output format, to resize the image and add some post-resize sharpening, and to select the output ICC profile. What is still missing is a real-time preview of the final result, with a possibility to soft-proof the output profile. The same options need to be included in the batch processor as well.

On a longer term, and if there is some interest from the community, I am thinking about porting the code to Android in a simplified form that would be suitable for tablets and the like. The small memory footprint of the program could be an important advantage on such systems.

PX: What other applications would you like to make an AppImage for? Have you explored Snaps or Flatpaks?

AF: I am currently developing and refining AppImage packages for GIMP, RawTherapee, LuminanceHDR and HDRMerge, in addition to PhotoFlow. All packages are automatically built and deployed through Travis CI, for better reproducibility and increased security. Hugin is the next application that I plan to package as an AppImage.

All the AppImage projects are freely available on GitHub. That’s also the best place for any feedback, bug report, or suggestion.

There is an ongoing discussion with the GIMP developers about the possibility to provide the AppImage as an official download.

In addition to the AppImage packages, I am also working with the RawTherapee developers on cross-compiled Windows packages that are also automatically built on Travis CI. The goal is to help them provide up-to-date packages from the main development branches, so that more users can test them and provide feedback.

I’m also open to any suggestions for additional programs that could be packaged as AppImages, so do not hesitate to express your wishes!

Personally I am a big fan of the AppImage idea, mostly because, unlike Snap or Flatpak packages, it is not bound to any specific distribution or run-time environment. The packager has full control over the contents of the AppImage package, pretty much like macOS bundles.

Moreover, I find the community of developers around the AppImage format very active and open-minded. I am currently collaborating to improve the packaging of GTK applications. For those who are interested in the details, the discussion can be followed here:

A human being finishing Super Mario Bros. in 4m 55s is the four-minute mile of our generation. What a time to be alive.

September 23, 2018

Writing Solar System Simulations with NAIF SPICE and SpiceyPy

Someone asked me about my Javascript Jupiter code, and whether it used PyEphem. It doesn't, of course, because it's Javascript, not Python (I wish there were something as easy as PyEphem for Javascript!); instead it uses code from the book Astronomical Formulae for Calculators by Jean Meeus. His better known Astronomical Algorithms, intended for computers rather than calculators, is actually harder to use for programming, because it is written for BASIC and the algorithms are relatively hard to translate into other languages. Astronomical Formulae for Calculators, on the other hand, concentrates on explaining the algorithms clearly so that you can punch them into a calculator by hand, and that ends up making them fairly easy to implement in a modern computer language as well.
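
To give a flavor of how directly the book's recipes translate into code, here is Meeus's Julian Day formula in Python — a sketch of the textbook algorithm for Gregorian dates, not the code from Javascript Jupiter itself:

```python
from math import floor

def julian_day(year, month, day):
    """Julian Day for a Gregorian calendar date (Meeus's formula).
    `day` may be fractional to encode the time of day."""
    if month <= 2:               # treat Jan/Feb as months 13/14 of the previous year
        year -= 1
        month += 12
    a = floor(year / 100)
    b = 2 - a + floor(a / 4)     # Gregorian leap-year correction
    return (floor(365.25 * (year + 4716)) + floor(30.6001 * (month + 1))
            + day + b - 1524.5)

print(julian_day(2000, 1, 1.5))  # the J2000.0 epoch: 2451545.0
```

The Julian Day is the starting point for nearly every other calculation in the book, which is part of why the calculator-oriented presentation ports so cleanly.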

Anyway, the person asking also mentioned JPL's page HORIZONS Ephemerides page, which I've certainly found useful at times. Years ago, I tried emailing the site maintainer asking if they might consider releasing the code as open source; it seemed like a reasonable request, given that it came from a government agency and didn't involve anything secret. But I never got an answer.

[SpiceyPy example: Cassini's position] But going to that page today, I find that code is now available! What's available is a massive toolkit called SPICE (it's all in capitals, but there's no indication what it might stand for; it comes from NAIF, which is NASA's Navigation and Ancillary Information Facility).

SPICE allows for accurate calculations of all sorts of solar system quantities, from the basic solar system bodies like planets to all of NASA's active and historical public missions. It has bindings for quite a few languages, including C. The official list doesn't include Python, but there's a third-party Python wrapper called SpiceyPy that works fine.

The tricky part of programming with SPICE is that most of the code is hidden away in "kernels" that are specific to the objects and quantities you're calculating. For any given program you'll probably need to download at least four "kernels", maybe more. That wouldn't be a problem except that there's not much help for figuring out which kernels you need and then finding them. There are lots of SPICE examples online but few of them tell you which kernels they need, let alone where to find them.

After wrestling with some of the examples, I learned some tricks for finding kernels, at least enough to get the basic examples working. I've collected what I've learned so far into a new GitHub repository: NAIF SPICE Examples. The README there explains what I know so far about getting kernels; as I learn more, I'll update it.

SPICE isn't easy to use, but it's probably much more accurate than simpler code like PyEphem or my Meeus-based Javascript code, and it can calculate so many more objects. It's definitely something worth knowing about for anyone doing solar system simulations.

Urban sketching in Salvador

The drawings I did during the wonderful national encounter of urban sketchers in Salvador da Bahia, Brazil

September 20, 2018

Speeding up AppStream: mmap’ing XML using libxmlb

AppStream and the related AppData are XML formats that have been adopted by thousands of upstream projects and are being used in about a dozen different client programs. The AppStream metadata shipped in Fedora is currently a huge 13Mb XML file, which with gzip compresses down to a more reasonable 3.6Mb. AppStream is awesome; it provides translations of lots of useful data into basically all languages and includes screenshots for almost everything. GNOME Software is built around AppStream, and we even use a slightly extended version of the same XML format to ship firmware update metadata from the LVFS to fwupd.

XML does have two giant weaknesses. The first is that you have to decompress and then parse the files – which might include all the ~300 tiny AppData files as well as the distro-provided AppStream files, if you want to list installed applications not provided by the distro. Seeking lots of small files isn’t so slow on a SSD, and loading+decompressing a small file is actually quicker than loading an uncompressed larger file. Parsing an XML file typically means you set up some callbacks, which then get called for every start tag, text section, then end tag – so for a 13Mb XML document that’s nested very deeply you have to do a lot of callbacks. This means you have to process the description of GIMP in every language before you can even see if Shotwell exists at all.
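
That callback style looks like this with Python's stdlib expat bindings — a toy document, purely for illustration:

```python
import xml.parsers.expat

xml_doc = b"""<components>
  <component type="desktop"><id>org.gimp.GIMP</id></component>
  <component type="desktop"><id>org.gnome.Shotwell</id></component>
</components>"""

calls = []  # every start tag, text chunk, and end tag lands here

parser = xml.parsers.expat.ParserCreate()
parser.StartElementHandler = lambda name, attrs: calls.append(("start", name))
parser.CharacterDataHandler = lambda data: calls.append(("text", data))
parser.EndElementHandler = lambda name: calls.append(("end", name))
parser.Parse(xml_doc, True)

# To find out whether Shotwell exists, you still receive a callback for
# every node that precedes it -- including all of GIMP's data.
print(len(calls))
```

Scale that up to a deeply nested 13Mb document and the handler-call count, not the I/O, is what you pay for.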

The typical way of parsing XML involves creating a “node tree”. This allows you to treat the XML document as a Document Object Model (DOM), navigating the tree and parsing the contents in an object oriented way. It also means you typically allocate the nodes themselves on the heap, plus copies of all the string data. AsNode in libappstream-glib has a few tricks to reduce RSS usage after parsing, which include:

  • Interning common element names like description, p, ul, li
  • Freeing all the nodes, but retaining all the node data
  • Ignoring node data for languages you don’t understand
  • Reference counting the strings from the nodes into the various appstream-glib GObjects

This still has two drawbacks: we need to store in hot memory all the screenshot URLs of all the apps you’re never going to search for, and we also need to parse all the long translated descriptions just to find out if gimp.desktop is actually installable. Deduplicating strings at runtime takes nontrivial amounts of CPU and means we build a huge hash table that uses nearly as much RSS as we save by deduplicating.
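
The runtime deduplication being criticized here amounts to an interning pool; a simplified model in a few lines of Python (illustrative, not appstream-glib's implementation):

```python
def make_interner():
    """Runtime string deduplication: a hash table mapping each distinct
    string to one shared instance."""
    pool = {}
    def intern(s):
        # setdefault returns the pooled copy if one exists, else stores s
        return pool.setdefault(s, s)
    return intern, pool

intern, pool = make_interner()
a = intern("".join(["descr", "iption"]))   # built at runtime: a fresh object
b = intern("descr" + str("iption"))        # equal content, another fresh object
assert a is b                              # deduplicated to one shared instance
assert len(pool) == 1                      # ...but the pool itself costs RSS
```

Equal strings end up sharing one instance, yet the hash table holding them is itself a large chunk of the resident memory you hoped to save — which is the point being made above.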

On a modern system, parsing ~300 files takes less than a second, and the total RSS is only a few tens of Mb – which is fine, right? Except that on resource-constrained machines it takes 20+ seconds to start, and 40Mb is nearly 10% of the total memory available on the system. We have exactly the same problem with fwupd, where we get one giant file from the LVFS, all of which gets stored in RSS even though you never have the hardware that it matches against. Slow startup of fwupd and gnome-software is one of the reasons they stay resident, and don’t shut down on idle and restart when required.

We can do better.

We do need to keep the source format, but that doesn’t mean we can’t create a managed cache to do some clever things. Traditionally I’ve been quite vocal against squashing structured XML data into databases like sqlite and Xapian, as it’s like pushing a square peg into a round hole, and it forces you to think like a database, doing 10-level nested joins to query some simple thing. What we want to use is something like XPath, where you can query data using the XML structure itself.

We also want to be able to navigate the XML document as if it were a DOM, i.e. be able to jump from one node to its sibling without parsing all the great-great-great-grandchild nodes to get there. This means storing the offset to the sibling in a binary file.
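
The sibling-offset idea can be illustrated with a toy binary node format (invented for this sketch; libxmlb's real layout differs):

```python
import struct

# Toy node record: [u32 next-sibling offset, absolute, 0 = none][u8 name length][name].
# Children are written immediately after their parent, depth-first, so the sibling
# offset is what lets a reader hop over an entire subtree without parsing it.

def serialize(node, out=None):
    """node = ("name", [children]); returns the flat buffer."""
    if out is None:
        out = bytearray()
    name, children = node
    out += struct.pack("<I", 0)                  # placeholder: next-sibling offset
    out += bytes([len(name)]) + name.encode()
    for i, child in enumerate(children):
        child_at = len(out)
        serialize(child, out)
        if i + 1 < len(children):                # patch child's sibling pointer
            struct.pack_into("<I", out, child_at, len(out))
    return bytes(out)

def read_name(buf, off):
    n = buf[off + 4]
    return buf[off + 5:off + 5 + n].decode()

def next_sibling(buf, off):
    (sib,) = struct.unpack_from("<I", buf, off)
    return sib or None

tree = ("root", [("a", [("b", []), ("c", [])]), ("d", [])])
buf = serialize(tree)

# Walk root's direct children without ever touching b or c:
names = []
off = 4 + 1 + len("root")                        # first child starts right after root
while off is not None:
    names.append(read_name(buf, off))
    off = next_sibling(buf, off)
print(names)  # ['a', 'd'] -- hops from 'a' straight over its subtree to 'd'
```

Reading a sibling is one offset lookup, no matter how large the skipped subtree is.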

If we’re creating a cache, we might as well do the string deduplication at creation time once, rather than every time we load the data. This has the added benefit in that we’re converting the string data from variable length strings that you compare using strcmp() to quarks that you can compare just by checking two integers. This is much faster, as any SAT solver will tell you. If we’re storing a string table, we can also store the NUL byte. This seems wasteful at first, but has one huge advantage – you can mmap() the string table. In fact, you can mmap the entire cache. If you order the string table in a sensible way then you store all the related data in one block (e.g. the <id> values) so that you don’t jump all over the cache invalidating almost everything just for a common query. mmap’ing the strings means you can avoid strdup()ing every string just in case; in the case of memory pressure the kernel automatically reclaims the memory, and the next time automatically loads it from disk as required. It’s almost magic.
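
A Python sketch of the mmap'ed string-table trick (simplified, with assumed file names — not libxmlb's on-disk format):

```python
import mmap
import os
import tempfile

# Build a string table: NUL-terminated strings; each string's offset is its "quark".
strings = ["gimp.desktop", "shotwell.desktop", "firefox.desktop"]
table = bytearray()
quark = {}
for s in strings:
    quark[s] = len(table)
    table += s.encode() + b"\0"

path = os.path.join(tempfile.mkdtemp(), "strtab.bin")
with open(path, "wb") as f:
    f.write(table)

# mmap the table: strings are read straight out of the mapping, no strdup().
with open(path, "rb") as f:
    mm = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)
    off = quark["firefox.desktop"]
    print(mm[off:mm.find(b"\0", off)])  # b'firefox.desktop'
    # Comparing two quarks is an integer comparison, not a strcmp():
    assert quark["gimp.desktop"] != quark["firefox.desktop"]
    mm.close()
```

Under memory pressure the kernel can drop the mapped pages and fault them back in from disk later, which is the "almost magic" part.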

I’ve spent the last few days prototyping a library, which is called libxmlb until someone comes up with a better name. I’ve got a test branch of fwupd that I’ve ported from libappstream-glib and I’m happy to say that RSS has reduced from 3Mb (peak 3.61Mb) to 1Mb (peak 1.07Mb) and the startup time has gone from 280ms to 250ms. Unless I’ve missed something drastic I’m going to port gnome-software too, and will expect even bigger savings as the amount of XML is two orders of magnitude larger.

So, how do I use this thing? First, let’s create a baseline doing things the old way:

$ time appstream-util search gimp.desktop
real	0m0.645s
user	0m0.800s
sys	0m0.184s

To create a binary cache:

$ time xb-tool compile appstream.xmlb /usr/share/app-info/xmls/* /usr/share/appdata/* /usr/share/metainfo/*
real	0m0.497s
user	0m0.453s
sys	0m0.028s

$ time xb-tool compile appstream.xmlb /usr/share/app-info/xmls/* /usr/share/appdata/* /usr/share/metainfo/*
real	0m0.016s
user	0m0.004s
sys	0m0.006s

Notice the second time it compiled nearly instantly, as none of the filename or modification timestamps of the sources changed. This is exactly what programs would do every time they are launched.
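
That check amounts to comparing the recorded source filenames and mtimes against the filesystem before recompiling; roughly (a sketch, not libxmlb's code):

```python
import json
import os

def sources_signature(paths):
    """Record filename -> mtime for every source file."""
    return {p: os.stat(p).st_mtime for p in paths}

def cache_is_valid(meta_path, paths):
    """The cache is reusable iff the signature stored at compile time
    still matches what is on disk."""
    try:
        with open(meta_path) as f:
            recorded = json.load(f)
    except OSError:
        return False               # no metadata: must compile
    try:
        return recorded == sources_signature(paths)
    except OSError:
        return False               # a source file disappeared
```

On a mismatch you recompile and store a fresh signature alongside the cache, so the next launch is the fast path again.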

$ df -h appstream.xmlb
4.2M	appstream.xmlb

$ time xb-tool query appstream.xmlb "components/component[@type='desktop']/id[text()='firefox.desktop']"
RESULT: <id>firefox.desktop</id>
RESULT: <id>firefox.desktop</id>
RESULT: <id>firefox.desktop</id>
real	0m0.008s
user	0m0.007s
sys	0m0.002s

8ms includes the time to load the file, search for all the components that match the query and the time to export the XML. You get three results as there’s one AppData file, one entry in the distro AppStream, and an extra one shipped by Fedora to make Firefox featured in gnome-software. You can see the whole XML component of each result by appending /.. to the query. Unlike appstream-glib, libxmlb doesn’t try to merge components – which makes it much less magic, and a whole lot simpler.
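
For comparison, Python's ElementTree also supports only a tiny XPath subset; an equivalent query against plain AppStream-shaped XML looks like this (ElementTree has no `text()=` predicate, so the text match is done in Python):

```python
import xml.etree.ElementTree as ET

appstream = """<components>
  <component type="desktop">
    <id>firefox.desktop</id>
  </component>
  <component type="console">
    <id>htop.desktop</id>
  </component>
</components>"""

root = ET.fromstring(appstream)
# Same shape as the xb-tool query: <id> children of desktop components.
hits = [el for el in root.findall("./component[@type='desktop']/id")
        if el.text == "firefox.desktop"]
print([el.text for el in hits])  # ['firefox.desktop']
```

Handy for sanity-checking what a libxmlb query against the same data should return.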

Some questions answered:

  • Why not just use a GVariant blob?: I did initially, and the cache was huge. The deeply nested structure was packed inefficiently as you have to assume everything is a hash table of a{sv}. It was also slow to load; not much faster than just parsing the XML. It also wasn’t possible to implement the zero-copy XPath queries this way.
  • Is this API and ABI stable?: Not yet; it will be as soon as gnome-software is ported.
  • You implemented XPath in C‽: No, only a tiny subset. See the

Comments welcome.

Let’s Tally Some Votes!

We’re about a week into the campaign, and almost 9000 euros along the path to bug fixing. So we decided to do some preliminary vote tallying! And share the results with you all, of course!

On top is Papercuts, with 84 votes. Is that because it’s the default choice? Or because you are telling us that Krita is fine, it just needs to be that little bit smoother that makes all the difference? If the latter, we won’t disagree, and yesterday Boudewijn fixed one of the things that must have annoyed everyone who wanted to create a custom image: now the channel depths are finally shown in a logical order!

Next, and that’s a bit of a surprise, is Animation with 41 votes. When we first added animation to Krita, we were surprised by the enthusiasm with which it was welcomed. We’ve actually seen, with our own eyes, at a Krita Sprint, work done in Krita for a very promising animated television series!

Coming third, there’s the Brush Engine bugs, with 39 votes. Until we decided that it was time to spend time on stability and polish, we thought that in 2018, we’d work on adding some cool new stuff for brushes. Well, with Digital Atelier, it’s clear that there is a lot more possible with brushes in Krita than we thought — but as you’re telling us, there’s also a lot that should be fixed. The brush engine code dates back to a rewrite in 2006, 2007, with a huge leap made when Lukáš Tvrdý wrote his thesis on Krita’s brush engines. Maybe we’ll have to do some deep work, maybe it really is all just surface bugs. We will find out!

Fourth, bugs with Layer handling. 23 votes. For instance, flickering when layers get updated. Well, Dmitry fixed one bug there on Wednesday already!

Vector Objects and Tools with 20 votes, Text with 15 votes, and Layer Styles with 13 votes (4 less than there are bug reports for Layer Styles…): enough to show people are very much interested in these topics, but it looks like the priority is not that high.

The remaining topics, Color Management, Shortcuts and Canvas Input, Resource Management and Tagging, all get 8 votes. We did fix a shortcuts bug, though… Well, that fix fixed three of them! And resource management is being rewritten in any case — maybe that’s why people don’t need to vote for it!

September 18, 2018

Let’s take this bug, for example…

Krita’s 2018 fund raiser is all about fixing bugs! And we’re fixing bugs already. So, let’s take a non-technical look at a bug Dmitry fixed yesterday. This is the bug: “key sequence ctrl+w ambiguous with photoshop compatible bindings set” And this is the fix.

So, we actually both started looking at the bug at the same time, me being Boudewijn. The issue is, if you use a custom keyboard shortcut scheme that includes a shortcut definition for “close current image”, then a popup would show, saying that the shortcut is ambiguous:

The popup doesn’t tell you where the ambiguous definition is, only that there is one. Hm… Almost everything in Krita that is triggered by a shortcut is an action. And deep down, Qt keeps track of all actions and all shortcuts, but we cannot access that list.

So we went through Krita’s source code. The action for closing an image was really created only once, inside Krita’s code. And, another bug, Krita doesn’t by default assign a shortcut to this action. The default shortcut should be CTRL+W on Linux and Windows, and COMMAND+W on macOS.

Curiously enough, the photoshop-compatible shortcut definitions did assign that shortcut. So, if you’d select that scheme, a shortcut would be set.

Even curiouser, if you don’t select one of those profiles, so Krita doesn’t set a shortcut, the default ctrl+w/command+w shortcut would still work.

Now, that can mean only one thing: Krita’s close-image action is a dummy. It never gets used. Somewhere else, another close-image action is created, but that doesn’t happen inside Krita.

So, Dmitry started digging into Qt’s source code. Parts of Qt are rather old, and the module that makes it possible to show multiple subwindows inside a big window is part of that old code.

void QMdiSubWindowPrivate::createSystemMenu()
{
    ...
    addToSystemMenu(CloseAction, QMdiSubWindow::tr("&Close"), SLOT(close()));
    ...
}

Ah! That’s where another action is created, and a shortcut allocated. Completely outside our control. This bug, which was reported only two days ago, must have been in Krita since version 2.9! So, what we do now is to make sure that the Krita’s own close-image action’s shortcut gets triggered. We do that by making sure Qt’s action only can get triggered if the subwindow’s menu is open.

    /**
     * Qt has a weirdness: it has hardcoded shortcuts added to an action
     * in the window menu. We need to reset the shortcuts for that menu
     * to nothing, otherwise the shortcuts cannot be made configurable.
     * See:
     */
    QMdiSubWindow *subWindow = d->mdiArea->currentSubWindow();
    if (subWindow) {
        QMenu *menu = subWindow->systemMenu();
        if (menu) {
            Q_FOREACH (QAction *action, menu->actions()) {
                action->setShortcut(QKeySequence()); // drop Qt's hardcoded shortcut
            }
        }
    }
That means, for every subwindow we’ve got, we grab the menu. For every entry in the menu, we remove the shortcut. That means that our global Krita close-window shortcut always fires, and that people can select a different shortcut, if they want to.

September 17, 2018

Paris Art School Looking for Krita Teacher

An art school in Paris, France, is looking for a Krita teacher! This is pretty cool, isn’t it? If you’re interested, please mail and we’ll forward your mail to the school!

A freelance Krita teacher to teach basics to art school students, Paris 11e. November 2018.

The course has to be mainly in French (could be half in English).
These are full-day sessions with different student groups (28 students x 5 classes).

  • The teacher has to have “statut auto-entrepreneur” status (French freelance).
  • Pay is around 50 euros per hour (6-hour days)
  • The courses will be on: 5th, 8th, 9th, 12th, 13th, 14th, 15th, 16th November 2018
  • It’s mainly about the general tools of the software, and how to draw and paint with Krita using tablets
  • There are possibilities to work beyond this first schedule, later in 2019

Profile: Game artist, Concept artist, digital painter, illustrator… please send your website if interested!

Cherche formateur auto-entrepreneur pour cours de Krita à des étudiants en Ecole d’Art, Paris 11. Novembre 2018.

Le cours doit être majoritairement donné en français, mais anglais partiel possible.
Jours pleins à l’école avec différents groupes d’étudiants (28 étudiants x 5 classes).

  • le formateur doit avoir un statut auto-entrepreneur
  • rémunération autour de 50 euros l’heure (journées de 6 heures)
  • les cours seront les : 5, 8, 9, 12, 13, 14, 15, 16 novembre 2018
  • il s’agit d’apprendre les outils de bases du logiciel, et surtout les outils de dessin et peinture avec tablette
  • Il y a des possibilités futures pour d’autres dates dans l’année 2019

Profil: Game artist, Concept artist, digital painter, illustrateur… merci d’envoyer votre site si intéressé !

Interview with Alyssa May

Could you tell us something about yourself?

I’m a graduate of the Kansas City Art Institute, but I’ve since moved back to Northwest Arkansas. When I’m not painting or playing video games, I’m enjoying the great outdoors with my wonderful husband, adorable daughter, and our 8-year-old puppy.

Do you paint professionally, as a hobby artist, or both?

I do both! I do freelance work for clients, but I still paint for fun (and for prints).

What genre(s) do you work in?

Primarily fantasy because I love how much room for imagination there is. Nothing is off-limits! While that genre is my favorite, I also do portraiture (mostly pets) and have been dabbling a bit in children’s illustration recently.

Whose work inspires you most — who are your role models as an artist?

Dan dos Santos is my absolute favorite artist. His rendering is gorgeous. His color is masterful. There’s a lot that can be learned just by looking at the paintings he produces, but luckily for me and anyone else who looks up to his work, he even shares some of his techniques and professional experience with the world. He works mostly in traditional media, but the concepts that he discusses are pretty universal. As far as digital artists go, I’m very fond of the work of people like Marta Dahlig and Daarken.

How and when did you get to try digital painting for the first time?

I think college was the first time that I sat down with a tablet and computer and gave the digital painting thing a go. Before that, though, I had dabbled in some other digital image-making techniques, but I was more interested in the traditional stuff like oil paints and charcoal.

What makes you choose digital over traditional painting?

Once I got over that initial transitional hump from traditional to digital I was hooked. There’s no cleanup. You can’t run out of blue paint at just the wrong moment. The undo function is absolute magic. More than anything, though, it’s the control that keeps me working in a digital space. Between layers, history states, and myriad manipulation options, I can experiment without worrying about destroying anything. It’s very freeing and really strips the blank canvas of any of its intimidating qualities.

How did you find out about Krita?

Reddit. It came up in a number of threads about digital painting software. People had a lot of positive things to say about it, so when I felt like it was time to start looking at some Photoshop alternatives to paint in, it was the first one that I tried.

What was your first impression?

I was pleasantly surprised when I launched Krita the first time. The UI was polished and supported high DPI displays. All of the functionality that I was looking to replace from Photoshop CS5 was there—and it had been tuned for painting specifically. I was even able to open up the PSDs I had been working on and transition over to Krita without losing a beat. All of my layers and blending modes were intact and ready to rock. Hard to ask for a smoother switch than that!

What do you love about Krita?

Aside from the pricing and the whole thing being built from the ground up with painting in mind, I think my favorite feature is actually just the ability to configure what shows up in the toolbars as much as you can in Krita. I work on a Surface Pro 4, and being able to have all of the functions I need in the interface itself so that I don’t have to have a keyboard in between me and the painting to keep repetitive functions speedy is so great.

What do you think needs improvement in Krita? Is there anything that really annoys you?

Really, the brush system improvements in the last big update fixed up basically anything I could have hoped for! The only lingering thing for me probably comes down to a personal preference, but when I choose to save out a flat version of my document while working on a KRA, I’d rather the open document default to saving on the KRA on subsequent saves instead of that new JPEG/TIFF/whatever other format I selected for the un-layered copy.

What sets Krita apart from the other tools that you use?

I think the open source nature of it is a big component, but on a daily use basis, the biggest functional difference between Krita and other tools I use (like Photoshop and Illustrator) is that painting is the intended usage. The interface and toolsets are geared entirely toward painting and drawing by default, so doing that feels more natural most of the time.

If you had to pick one favourite of all your work done in Krita so far, what would it be, and why?

I think my favorite Krita painting so far is my most recent—Demon in the Dark. Out of my work produced start to finish in the program, that one had the most detail to play with and I had a lot more fun balancing the composition. Cloth, smoky details, and the female figure are some of the things I enjoy painting most, and that one included all of them!

What techniques and brushes did you use in it?

There weren’t any particularly fancy techniques or brushes used. I like to keep it simple and sketch out my composition, broadly block in color, then progressively refine, layering in and collapsing down chunks of painting until things are reasonably polished and cohesive. To pull it off, I used my staple brushes (hard round and standard blender) for the brunt of the painting, then some more specific texture build-up brushes for hair, clouds, particles, and the like. A time lapse of the whole process is up on my YouTube channel.

Where can people see more of your work?

People can check out my work on my website or my Instagram, @artofalyssamay. They can also find time lapses of my paintings on my YouTube channel!

Anything else you’d like to share?

Just my thanks to the Krita team for making and sharing such a solid program!

September 16, 2018

Printing Two-Sided from the Command Line

The laser printers we bought recently can print on both sides of the page. Nice feature! I've never had access to a printer that can do that before.

But that requires figuring out how to tell the printer to do the right thing. Reading the man page for lp, I spotted the sides option: lp -o sides=two-sided-long-edge. But that doesn't do the whole job when you want more than one copy. Adding -n 2 looked like the way to go, but nope! That gives you one sheet that has page 1 on both sides, and a second sheet that has page 2 on both sides. Because of course that's what a normal person would want. Right.

The real answer, after further research and experimentation, turned out to be the collate=true option:

lp -o sides=two-sided-long-edge -o collate=true -d printername file
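If you print this way often, the options can be wrapped in a small helper. This is just a sketch: duplex_print_cmd, myprinter and report.pdf are illustrative names, not part of CUPS (list your real queue names with lpstat -p).

```shell
# Build a duplex, collated lp invocation. This only assembles the
# command string; drop the echo (or pipe the result to sh) to
# actually submit the job.
duplex_print_cmd() {
    printer="$1"; file="$2"; copies="${3:-1}"
    echo "lp -n $copies -o sides=two-sided-long-edge -o collate=true -d $printer $file"
}

duplex_print_cmd myprinter report.pdf 2
# lp -n 2 -o sides=two-sided-long-edge -o collate=true -d myprinter report.pdf
```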

September 15, 2018

The Krita 2018 Fundraiser Starts: Squash the Bugs!

It’s time for a new Krita fundraiser! Our goal this year is to make it possible for the team to focus on one thing only: stability. Our previous fundraisers were all about features: adding new features, extending existing features. Thanks to your help, Krita has grown at breakneck speed!

Squash the bugs!

This year, we want to take a step back, look at what we’ve achieved, and take stock of what got broken, what didn’t quite make the grade and what got forgotten. In short, we want to fix bugs, make Krita more stable and bring more polish and shine to all those features you all have made possible!

We’re not using Kickstarter this year. Already in 2016, Kickstarter felt like a tired formula. We’re also not going for a fixed amount of funding this year: every 3500 euros funds one month of work, and we’ll spend that time on fixing bugs, improving features and adding polish.

Polish Krita!

As an experiment, Dmitry has just spent about a month on one area of Krita: selections. And now there are only a few issues left with selection handling: the whole area has been enormously improved. Now we want to ask you to make it possible for us to do the same with other important areas in Krita, ranging from papercuts to brush engines, from color management to resource management. We’ve dug through the bugs database, grouped some things together and arrived at a list of ten areas where we feel we can improve Krita a lot.

The list is ordered by the number of reports, but if you support Krita in this fundraiser, you’ll be able to vote for what you think is important! Voting is fun, after all, and we love to hear from you all which things you find the most important.

Practical Stuff

Practically speaking, we’ve kicked out Kickstarter, which means that from the start, you’ll be able to support our fundraiser with credit cards, paypal, bank transfers — even bitcoin! Everyone who donates from 15 September to 15 October will get a vote.

And everyone who donates 50 euros or more will get a free download of Ramon Miranda’s wonderful new brush preset bundle, Digital Atelier. Over fifty of the highest-quality painterly brush presets (oils, pastel, water color) and more than that: two hours of tutorial video explaining the creation process in detail.


Go to the campaign page!

September 12, 2018

Introducing Digital Atelier: a painterly brush preset pack by Ramon Miranda with tutorial videos!

Over the past months, Ramon Miranda, known for his wonderful introduction to digital painting, Muses, has worked on creating a complete new brush preset bundle: Digital Atelier. Not only does it contain over fifty new brush presets, more than thirty new brush tips, and twenty patterns and surfaces; it also includes almost two hours of in-depth tutorial video, walking you through the process of creating new brush presets.

Ramon has gone deep here! The goal was to create painterly brushes: achieving the look and feel of oil paint, pastel or water colors. Ramon did a lot of research and experimentation and it has paid off handsomely:

The complete download is 8 gigabytes!


On Saturday, Krita’s 2018 Squash the Bugs fundraiser will start. Anyone who supports Krita with €50 or more will get a free download! On October 16th, we’ll put Digital Atelier in the Krita shop for €39.95.


Brush pack:

  • Fifty-one new brush presets — check out the reference sheet!
  • Twenty-four oil paint brush presets, of which four are very experimental.
  • Thirteen Pastel brush presets.
  • Fourteen Watercolor brush presets.
  • Thirty-four new PNG and five new SVG brush tips.
  • Twenty 512×512 paper surfaces and patterns.

Tutorial Videos:

  • Introduction: Knowing our tools
  • Oil painting
  • Pastel painting
  • Water Color painting
  • Creating your own Patterns
  • Creating your own Brush tips.

The music is by Kevin MacLeod. The language used in the videos is English.

September 08, 2018

PyBullet and “Sim-to-Real: Learning Agile Locomotion For Quadruped Robots”

PyBullet is receiving regular updates; you can see the latest version here:
Installation and updating are simple:
pip install -U pybullet

Check out the PyBullet Quickstart Guide and clone the github repository for more PyBullet examples and OpenAI Gym environments.

A while ago, our RSS 2018 paper “Sim-to-Real: Learning Agile Locomotion For Quadruped Robots” was accepted! (with Jie Tan, Tingnan Zhang, Erwin Coumans, Atil Iscen, Yunfei Bai, Danijar Hafner, Steven Bohez, Vincent Vanhoucke).

See also the video and paper on Arxiv.

Erwin @ twitter

September 07, 2018

3 Million Firmware Files and Counting…

In the last two years the LVFS has supplied over 3 million firmware files to end users. We now have about two dozen companies uploading firmware, of which 9 are multi-billion dollar companies.

Every month about 200,000 more devices get upgraded and from the reports so far the number of failed updates is less than 0.01% averaged over all firmware types. The number of downloads is going up month-on-month, although we’re no longer growing exponentially, thank goodness. The server load average is 0.18, and we’ve made two changes recently to scale even more for less money: signing files in a 30 minute cron job rather than immediately, and switching from Amazon to BunnyCDN.

The LVFS is mainly run by just one person (me!) and my time is sponsored by the ever-awesome Red Hat. The hardware costs, which recently included random development tools for testing the dfu and nvme plugins, and the server and bandwidth costs are being paid from charitable donations from the community. We’re even cost positive now, so I’m building up a little pot for the next server or CDN upgrade. By pretty much any metric, the LVFS is a huge success, and I’m super grateful to all the people that helped the project grow.

The LVFS does have one weakness: it has a bus factor of one. In other words, if I got splattered by a bus, the LVFS would probably cease to exist in its current form. To further grow the project, and to reduce the dependence on me, we’re going to be moving various parts of the LVFS to the Linux Foundation. This means that there’ll be sysadmins who don’t have to google basic server things, a proper community charter, and access to an actual legal team. From an OEM point of view, nothing much should change, including the most important thing: it’ll continue to be free to use for everyone. The existing server and all the content will be migrated to the Linux Foundation infrastructure. From a user’s point of view, new metadata and firmware will be signed by the Linux Foundation key, rather than my key, although we haven’t decided on a date for the switch-over yet. The LF key has been trusted by fwupd for firmware since 1.0.8 and it’s trivial to backport to older branches if required.

Before anyone gets too excited and starts pointing me at all my existing bugs for my other stuff: I’ll probably still be the person “onboarding” vendors onto the LVFS, and I’m fully expecting to remain the maintainer and core contributor to the lvfs-website code itself — but I certainly should have a bit more time for GNOME Software and color stuff.

In related news, even more vendors are jumping on the LVFS. No public announcements yet, but hopefully soon. A lot of hardware companies will only be comfortable “going public” once the new hardware currently in development is on shelves in stores. So, please raise a glass to the next 3 million downloads!

Last Month in Krita: August 2018

We used to do a weekly development news post… Last Week in Krita. But then we got too busy doing development to maintain that, and that’s kind of a pity. Still, we’d like to share what we’re doing with you all — and not just through the git history! So, let’s try to revive the tradition…

In August, we started preparing for our next big Fund Raiser. Mid-September to mid-October, we’ll be raising funds for the next round of Krita development. The last fund raisers were all about features: this will be all about stability and polish. Zero Bugs, while obviously unattainable, is to be the rallying cry! We’re moving to a new payment provider, to make it possible to donate to Krita with other options than paypal or a direct bank transfer. Credit cards, various national e-payment systems and even bitcoin will become possible. It’s up already on our donation page!

We’ve already made a good start on stability and polish by fixing our unittests — small bits of code that test one or another function of Krita and that we run to see whether new code breaks something. We also fixed almost a hundred bugs. And, of course, the Google Summer of Code came to an end.

But let’s look at some highlights in a bit more detail:

Polish, polish, polish

After porting Krita to Qt5, we were left with dozens of unittests that didn’t work anymore. They wouldn’t build, they would hang, they would give the wrong results or even crash. August was almost too warm to really work on anything too complicated, so all we did was fix stuff. Well, apart from all the other work we’ve been doing, of course…

Improving Selections

Dmitry, one of our core developers, whose work is funded by your donations, has been working for a couple of weeks on improving selections in Krita. Selections in Krita are a bit different from other applications, since we have both local and global selections, and selections can be visualized with marching ants or as a mask. And a selection can consist either of pixels or of vector objects. So, he has worked on things like:

  • Painting on the selection mask in realtime
  • Making it possible to have more than one local selection
  • Adding opaque, intersect opaque and remove opaque from the global selection
  • A selection is automatically created when you select “show global selection”.
  • Conversions between vector and pixel selections
  • Making it possible to modify selections

Rewriting Resource Management

Resources are things you want to use with Krita. Brush tips for instance, or brush presets. Or gradients, or patterns, or workspaces. Some resources come with Krita by default, some you’ll create yourself, some you’ll want to download. Krita’s resource system dates back to the 20th century, or, if you prefer, the previous millennium. It was designed in the days when a 50 pixel brush was BIG, and patterns would be 128 x 128 pixels. These days, people want to use 1000 pixel brushes, 5000 pixel patterns and lots of them.

The original design for handling resources doesn’t scale any more! And besides, after over twenty years of hack work, it’s a mess. Doing a rewrite is almost always a mistake, but… We’re working on rewriting that part of Krita anyway. Boudewijn is making lots of progress, but nothing much that anyone can see, except for the Phabricator task. It’s been something we’ve been planning and discussing for a long time. All bugs with tagging, adding, removing and changing resources should be fixed in one big bang!

That’s the plan at least…  There’s still a way to go!

And then, there were the usual surprises and little extras:

New and unexpected

A pretty amazing new feature is the gamut mask, created by Anna Medonosova. This includes both editing the mask and applying it to the artistic color selector. See James Gurney’s blog post for some background information.

There’s now also a debug log docker, so Windows users don’t have to mess with DebugView again:

(Artwork by Iza Ka)

And there’s more, of course! Reptorian has created a bunch of new blending modes and is now working on a new selection intersection mode. Jouni has been fixing issues in the reference images docker and has implemented clone frames for animation!

September 06, 2018

Design with Difficult Data: Published on A List Apart

I’m excited to have my second article in 17 years published on the website “For People Who Make Websites”, A List Apart: Design with Difficult Data.

Screenshot from A List Apart

Thanks to Ste Grainer for the great editing.

September 05, 2018

Krita’s 2018 Google Summer of Code

This year, we participated in Google Summer of Code with three students: Ivan, Andrey and Michael. Some of the code these awesome students produced is already in Krita 4.1.1, and most of the rest has been merged already, so you can give it a whirl in the latest nightly builds for Windows or Linux. So, let’s go through what’s been achieved this year!

Ivan’s project was all about making brushes faster using vectorization. If that sounds technical, it’s because it is! Basically, your CPU is powerful enough to do a lot of calculations at the same time, as long as it’s the same calculation, but with different numbers. You could feed more than 200 numbers to the CPU, tell it to multiply them all, and it would do that just as fast as multiplying one number. And it just happens that calculating the way a brush looks is more or less just that sort of thing. Of course, there are complications, and Ivan is still busy figuring out how to apply the same logic to the predefined brushes. But here’s a nice image from his blog:

Above: how it was; underneath: what the performance is now.

If Ivan’s project was all about performance, well, so was Andrey‘s project. Andrey has been working on something just as technical and head-achey. Modern CPUs have many cores — some have four, and fake having eight, others have ten and fake having 20 — or even more. And unused silicon is a waste of good sand! Krita has been able to use multiple cores for a long time: it started with Dmitry Kazakov’s summer of code project, back in 2009, which was all about the tile engine. Nine years later, Dmitry mentored Andrey’s work on the tile engine. The tile engine is used to break up the image into smaller tiles, so every tile can be worked on independently.

So when Dmitry, last year, worked on a project to let Krita use more cores and simultaneously found some places where Krita would force cores to wait on each other, he made a note: something needed to be done about that. It’s called locking, and the solution is to get rid of those locks.

So Andrey’s project was all about making the list of tiles function without locks. And that work is done. There are still a few bugs — this stuff is amazingly complicated and tricky, so real testing is really needed. It will all be in Krita 4.2, which should be released by the end of this year. And some of it was already merged to Krita 4.1.1, too. He managed to get some really nice gains:

Michael has been working on something completely different: palette support in Krita. A palette or colorset is a set of colors — what we call a resource. Palette editing was one of the things we shared with the rest of Calligra, back then, or KOffice, even earlier on. The code is complicated and tangled and spread out. Michael’s first task was to bring order to the chaos, and that part has already been merged to Krita 4.1.1.

His next project was to work on the palette docker. His work is detailed in this Phabricator post. There’s still work in progress, but everything that was planned for the Summer of Code project has been done!

And this is the editor:

September 04, 2018

Raspberry Pi Zero as Ethernet Gadget Part 3: An Automated Script

Continuing the discussion of USB networking from a Raspberry Pi Zero or Zero W (Part 1: Configuring an Ethernet Gadget and Part 2: Routing to the Outside World): You've connected your Pi Zero to another Linux computer, which I'll call the gateway computer, via a micro-USB cable. Configuring the Pi end is easy. Configuring the gateway end is easy as long as you know the interface name that corresponds to the gadget.

ip link gave a list of several networking devices; on my laptop right now they include lo, enp3s0, wlp2s0 and enp0s20u1. How do you tell which one is the Pi Gadget? When I tested it on another machine, it showed up as enp0s26u1u1i1. Even aside from my wanting to script it, it's tough for a beginner to guess which interface is the right one.

Try dmesg

Sometimes you can tell by inspecting the output of dmesg | tail. If you run dmesg shortly after you initialized the gadget (for instance, by plugging the USB cable into the gateway computer), you'll see some lines like:

[  639.301065] cdc_ether 3-1:1.0 enp0s20u1: renamed from usb0
[ 9458.218049] usb 3-1: USB disconnect, device number 3
[ 9458.218169] cdc_ether 3-1:1.0 enp0s20u1: unregister 'cdc_ether' usb-0000:00:14.0-1, CDC Ethernet Device
[ 9462.363485] usb 3-1: new high-speed USB device number 4 using xhci_hcd
[ 9462.504635] usb 3-1: New USB device found, idVendor=0525, idProduct=a4a2
[ 9462.504642] usb 3-1: New USB device strings: Mfr=1, Product=2, SerialNumber=0
[ 9462.504647] usb 3-1: Product: RNDIS/Ethernet Gadget
[ 9462.504660] usb 3-1: Manufacturer: Linux 4.14.50+ with 20980000.usb
[ 9462.506242] cdc_ether 3-1:1.0 usb0: register 'cdc_ether' at usb-0000:00:14.0-1, CDC Ethernet Device, f2:df:cf:71:b9:92
[ 9462.523189] cdc_ether 3-1:1.0 enp0s20u1: renamed from usb0

(Aside: whose bright idea was it to rename usb0 to enp0s26u1u1i1, or wlan0 to wlp2s0? I'm curious exactly who finds their task easier with the name enp0s26u1u1i1 than with usb0. It certainly complicated all sorts of network scripts and howtos when the name wlan0 went away.)

Anyway, from inspecting that dmesg output you can probably figure out the name of your gadget interface. But it would be nice to have something more deterministic, something that could be used from a script. My goal was to have a shell function in my .zshrc, so I could type pigadget and have it set everything up automatically. How to do that?

A More Deterministic Way

First, the name starts with en, meaning it's an ethernet interface, as opposed to wi-fi, loopback, or various other types of networking interface. My laptop also has a built-in ethernet interface, enp3s0, as well as lo, the loopback or "localhost" interface, and wlp2s0, the wi-fi chip, the one that used to be called wlan0.

Second, it has a 'u' in the name. USB ethernet interfaces start with en and then add suffixes to enumerate all the hubs involved. So the number of 'u's in the name tells you how many hubs are involved; that enp0s26u1u1i1 I saw on my desktop had two hubs in the way, the computer's internal USB hub plus the external one sitting on my desk.

So if you have no USB ethernet interfaces on your computer, looking for an interface name that starts with 'en' and has at least one 'u' would be enough. But if you have USB ethernet, that won't work so well.

Using the MAC Address

You can get some useful information from the MAC address, called "link/ether" in the ip link output. In this case, it's f2:df:cf:71:b9:92, but -- whoops! -- the next time I rebooted the Pi, it became ba:d9:9c:79:c0:ea. The address turns out to be randomly generated and will be different every time. It is possible to set it to a fixed value, and that thread has some suggestions on how, but I think they're out of date, since they reference a kernel module called g_ether whereas the module on my updated Raspbian Stretch is called cdc_ether. I haven't tried.

Anyway, random or not, the MAC address also has one useful property: the first octet (f2 in my first example) will always have the '2' bit set, as an indicator that it's a "locally administered" MAC address rather than one that's globally unique. See the Wikipedia page on MAC addressing for details on the structure of MAC addresses. Both f2 (11110010 in binary) and ba (10111010 binary) have the 2 (00000010) bit set.

No physical networking device, like a USB ethernet dongle, should have that bit set; physical devices have MAC addresses that indicate what company makes them. For instance, Raspberry Pis with networking, like the Pi 3 or Pi Zero W, have interfaces that start with b8:27:eb. Note the 2 bit isn't set in b8.
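That arithmetic is easy to check straight from the shell, using the example octets above (f2 and ba from the randomly generated gadget addresses, b8 from the Pi's hardware prefix):

```shell
# The 0x2 bit of the first octet marks a locally administered MAC.
# Shell arithmetic accepts C-style hex constants, so we can mask
# each octet directly:
for octet in f2 ba b8; do
    printf '%s & 0x2 = %d\n' "$octet" $(( 0x$octet & 0x2 ))
done
# f2 & 0x2 = 2
# ba & 0x2 = 2
# b8 & 0x2 = 0
```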

Most people won't have any USB ethernet devices connected that have the "locally administered" bit set. So it's a fairly good test for a USB ethernet gadget.

Turning That Into a Shell Script

So how do we package that into a pipeline so the shell -- zsh, bash or whatever -- can check whether that 2 bit is set?

First, use ip -o link to print out information about all network interfaces on the system. But really you only need the ones starting with en and containing a u. Splitting out the u isn't easy at this point -- you can check for it later -- but you can at least limit it to lines that have en after a colon-space. That gives output like:

$ ip -o link | grep ": en"
5: enp3s0:  mtu 1500 qdisc pfifo_fast state DOWN mode DEFAULT group default qlen 1000\    link/ether 74:d0:2b:71:7a:3e brd ff:ff:ff:ff:ff:ff
8: enp0s20u1:  mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000\    link/ether f2:df:cf:71:b9:92 brd ff:ff:ff:ff:ff:ff

Within that, you only need two pieces: the interface name (the second word) and the MAC address (the 17th word). Awk is a good tool for picking particular words out of an output line:

$ ip -o link | grep ': en' | awk '{print $2, $17}'
enp3s0: 74:d0:2b:71:7a:3e
enp0s20u1: f2:df:cf:71:b9:92

The next part is harder: you have to get the shell to loop over those output lines, split them into the interface name and the MAC address, then split off the second character of the MAC address and test it as a hexadecimal number to see if the '2' bit is set. I suspected that this would be the time to give up and write a Python script, but no, it turns out zsh and even bash can test bits:

ip -o link | grep en | awk '{print $2, $17}' | \
    while read -r iff mac; do
        # LON is a numeric variable containing the digit we care about.
        # The "let" is required so LON will be a numeric variable,
        # otherwise it's a string and the bitwise test fails.
        let LON=0x$(echo $mac | sed -e 's/:.*//' -e 's/.//')

        # Is the 2 bit set? Meaning it's a locally administered MAC
        if ((($LON & 0x2) != 0)); then
            echo "Bit is set, $iff is the interface"
        fi
    done

Pretty neat! So now we just need to package it up into a shell function and do something useful with $iff when you find one with the bit set: namely, break out of the loop, call ip a add and ip link set to enable networking to the Raspberry Pi gadget, and enable routing so the Pi will be able to get to networks outside this one. Here's the final function:

# Set up a Linux box to talk to a Pi0 using USB gadget on
pigadget() {
    iface=''

    # (zsh runs the last stage of a pipeline in the current shell,
    # so $iface set inside the loop survives; bash would need a
    # different structure here.)
    ip -o link | grep en | awk '{print $2, $17}' | \
        while read -r iff mac; do
            # LON is a numeric variable containing the digit we care about.
            # The "let" is required so zsh will know it's numeric,
            # otherwise the bitwise test will fail.
            let LON=0x$(echo $mac | sed -e 's/:.*//' -e 's/.//')

            # Is the 2 bit set? Meaning it's a locally administered MAC
            if ((($LON & 0x2) != 0)); then
                iface=$(echo $iff | sed 's/:.*//')
                break
            fi
        done

    if [[ x$iface == x ]]; then
        echo "No locally administered en interface:"
        ip a | egrep '^[0-9]:'
        echo Bailing.
        return
    fi

    # 192.168.7.1/24 is an assumption here: use whatever host-side
    # address matches the network you configured on the Pi.
    sudo ip a add 192.168.7.1/24 dev $iface
    sudo ip link set dev $iface up

    # Enable routing so the gadget can get to the outside world:
    sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'
    sudo iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
}

September 03, 2018

Interview with Danielle Williams

Could you tell us something about yourself?

My name is Danielle Williams. I’m a writer, an illustrator, and a fan of Krita.

Do you paint professionally, as a hobby artist, or both?

I’ve been paid for my work before and I use my art as covers for my ebooks, so technically I guess I am a professional…but I often feel like a hobby artist in my heart because of my desire to improve my skill.

What genre(s) do you work in?

I’m prone to figurative work—portraiture, animals, monsters, that sort of thing—in many styles, but I’m trying to improve and incorporate backgrounds into my art for greater impact. Drawing also helps me visualize my story characters and worlds.

Whose work inspires you most — who are your role models as an artist?

Between an online youth spent browsing Yerf, DeviantArt, Side7, and Elfwood, my love of animation art, and my visual arts degree, I’ve been exposed to a tsunami of creative folks! On the fine arts side: John Singer Sargent, Norman Rockwell, and Cecelia Beaux. I’m discovering more fine artists as I follow James Gurney’s blog.

On the animation art side: I admire the artists behind Earthworm Jim, Oddworld Inhabitants, Glen Keane, Marc Davis, and the other Nine Old Men of Disney Animation.

On the ‘net side: Ursula Vernon, Makani, TheMinttu, Don Seegmiller, Dirk Grundy, DimeSpin, Michelle Czajkowski, Enayla.

… I’m forgetting lotsa folks, ha. No single one is my idol, though. I’m like a magpie—I pick out shinies wherever I see ‘em!

How and when did you get to try digital painting for the first time?

Oh, back when I was…maybe 12 or so? We got a scanner that came with “Micrografx Picture Publisher”, a sort of imitation Photoshop. Around that time I saved my money and bought a Wacom Bamboo. Through online tutorials I taught myself how to use it and my program.

The lesson I learned from using this off-brand product for so long was that it wasn’t the tool that made your art good, but your own practice and use of solid artistic principles. That’s what makes the difference.

What makes you choose digital over traditional painting?

A: I’m cheap. B: I don’t feel like I have any place to safely make a mess. I had fun taking watercolor classes in college and even did an oils class…but it’s different working with staining liquid paints over concrete in a designated area vs. the corner of the apartment you’re renting or at the table in your mom’s Better-Homes-And-Garden-worthy kitchen.

Other people’ve found workarounds, I know, but I’m still not comfortable with it.

Oh, and C: space considerations. Art supplies take up space!

How did you find out about Krita?

Some fifteen years after buying Photoshop CS2, it was finally glitching and refusing to open files on Windows 7, no matter how many trick reinstalls I did. I had to go looking for alternatives, and I just didn’t like the dated interface of the GIMP (though I know artists like DimeSpin can make that program sit up and do flips). I dunno where I got the Krita link, but I’m sure glad I found it!

What was your first impression?

“Oo, pretty interface. Wait, why won’t this brush make a mark?!”

What do you love about Krita?

The cost (see above), the quality, the interface, the compatibility—and the ability to open a window of your painting in LUT mode and watch it update in real time while you paint on the file in color. MAN that is helpful for values!

What do you think needs improvement in Krita? is there anything that really annoys you?

Annoys me? I refuse to be annoyed with free software. I have nothing to complain about. There are two things I miss that my old CS2 used to do, however.

First: I haven’t tried Krita’s text tool since before the last big update, but it wasn’t much use to me in making book covers. I’ve since added Inkscape to my workflow to do my title text, but it still requires some wrangling. I look forward to the day when Krita’s text tools are as good as CS2’s were. Or better!

Second: I miss Photoshop Actions. Actions + photos = a very quick way to make differently-mooded book covers.

Finally, I’m like, ack!!, knowing that I’m only scratching the surface of what Krita can do! I just learned about Liquify this morning. I wish the documentation were more clearly written.

I also wish there were more step-by-step text-with-picture tutorials (like David Revoy’s Getting Started with Krita) about the different features—so you don’t just know that the features exist, but have some idea of what they’re good for.

I don’t like video tutorials as much; I’m definitely a read-the-manual, follow-the-pictures, scroll-back-up-when-you-make-a-mistake sorta gal.

What sets Krita apart from the other tools that you use?

That’s easy! The beautiful modern interface, and the COMMUNITY. The community around Krita is unlike any I’ve ever experienced around an art program! David Revoy, in particular, is a treasure.

If you had to pick one favourite of all your work done in Krita so far, what would it be, and why?

Ooh, that’s a toughie. I’ll pick the cover for my Armello fanfic The Heroes of Houndsmouth. It’s just so dang dramatic, even in greyscale, and I tackled a lot of different things in it, especially textures: teeth, fire, metal scratches, fur.

What techniques and brushes did you use in it?

It’s been a couple years so I’m not exactly sure. I remember using a brush like the current “Ink 1 Precision” or “Ink 2 Fineliner” to do the whiskers and keep the fur texture sharper. I also laid a free metal texture on a layer over the shield, then used a transparency layer (applied to the metal texture layer) to erase the metal where I didn’t want it (such as in the flames). Transparency layers are your friends!

Oh, and I used the LUT management trick (see video link above) while painting in color to make the values *smoochy finger-kiss thing* MWAH! I really love that trick.

Where can people see more of your work?

My ebook cover paintings are in use at my author homepage.

Anything else you’d like to share?

I can play a piano, but never in a million years could I build one. Similarly, while I can use Krita fairly well, it’s the fine folks working their code sorcery behind the scenes that make Krita—and the art I create with it—possible. I salute them!

September 01, 2018

FreeCAD BIM development news - August 2018

Hi there, One month passes bloody fast, doesn't it? So here we are again, for one more report about what I've been coding this month in FreeCAD. Looking at the text below (I'm writing this intro after I wrote the contents) I think we actually have an interesting set of new features. None of this open-source...

August 31, 2018

Raspberry Pi Zero as Ethernet Gadget Part 2: Routing to the Outside World

I wrote some time ago about how to use a Raspberry Pi over USB as an "Ethernet Gadget". It's a handy way to talk to a headless Pi Zero or Zero W if you're somewhere where it doesn't already have a wi-fi network configured.

However, the setup I gave in that article doesn't offer a way for the Pi Zero to talk to the outside world. The Pi is set up to use the machine on the other end of the USB cable for routing and DNS, but that doesn't help if the machine on the other end isn't acting as a router or a DNS host.

A lot of the ethernet gadget tutorials I found online explain how to do this on Mac and Windows, but it was tough to find an example for Linux. The best I found was for Slackware, How to connect to the internet over USB from the Raspberry Pi Zero, which should work on any Linux, not just Slackware.

Let's assume you have the Pi running as a gadget and you can talk to it, as discussed in the previous article, so you've run:

sudo ip a add dev enp0s20u1
sudo ip link set dev enp0s20u1 up
substituting your network number and the interface name that the Pi created on your Linux machine, which you can find in dmesg | tail or ip link. (In Part 3 I'll talk more about how to find the right interface name if it isn't obvious.)

At this point, the network is up and you should be able to ping the Pi with the address you gave it, assuming you used a static IP: ping If that works, you can ssh to it, assuming you've enabled ssh. But from the Pi's end, all it can see is your machine; it can't get out to the wider world.

For that, you need to enable IP forwarding and masquerading:

sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'
sudo iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE

Now the Pi can route to the outside world, but it still doesn't have DNS so it can't get any domain names. To test that, on the gateway machine try pinging some well-known host:

$ ping -c 2
PING ( 56(84) bytes of data.
64 bytes from ( icmp_seq=1 ttl=56 time=78.6 ms
64 bytes from ( icmp_seq=2 ttl=56 time=78.7 ms

--- ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1001ms
rtt min/avg/max/mdev = 78.646/78.678/78.710/0.032 ms

Take the IP address from that -- e.g. -- then go to a shell on the Pi and try ping -c 2, and you should see a response.

DNS with a Public DNS Server

Now all you need is DNS. The easy way is to use one of the free DNS services, like Google's. Edit /etc/resolv.conf and add a line like

and then try pinging some well-known hostname.

If it works, you can make that permanent by editing /etc/resolv.conf, and adding this line:


Otherwise you'll have to do it every time you boot.

Your Own DNS Server

But not everyone wants to use public nameservers like For one thing, there are privacy implications: it means you're telling Google about every site you ever use for any reason.

Fortunately, there's an easy way around that, and you don't even have to figure out how to configure bind/named. On the gateway box, install dnsmasq, available through your distro's repositories. It will use whatever nameserver you're already using on that machine, and relay it to other machines like your Pi that need the information. I didn't need to configure it at all; it worked right out of the box.

In the next article, Part 3: more about those crazy interface names (why is it enp0s20u1 on my laptop but enp0s26u1u1i1 on my desktop?), how to identify which interface is the gadget by using its MAC, and how to put it all together into a shell function so you can set it up with one command.

August 29, 2018

GIMP receives a $100K donation

Earlier this month, GNOME Foundation announced that they received a $400,000 donation from Handshake, of which $100,000 they transferred to GIMP’s account.

We thank both Handshake and GNOME Foundation for the generous donation and will use the money to do a much overdue hardware upgrade for the core team members, organize the next hackfest to bring the team together, and sponsor the next instance of Libre Graphics Meeting.

Handshake is a decentralized, permissionless naming protocol compatible with DNS where every peer is validating and in charge of managing the root zone with the goal of creating an alternative to existing Certificate Authorities. Its purpose is not to replace the DNS protocol, but to replace the root zone file and the root servers with a public commons.

GNOME Foundation is a non-profit organization that furthers the goals of the GNOME Project, helping it to create a free software computing platform for the general public that is designed to be elegant, efficient, and easy to use.

G'MIC 2.3.6

G'MIC 2.3.6

10 Years of Open Source Image Processing!

The IMAGE team of the GREYC laboratory is happy to celebrate the 10th anniversary of G’MIC with you, an open-source (CeCILL), generic and extensible framework for image processing. GREYC is a public research laboratory on digital technology located in Caen, Normandy/France, under the supervision of 3 research institutions: the CNRS (UMR 6072), the University of Caen Normandy and the ENSICAEN engineering school.

G’MIC-Qt G’MIC-Qt, the main user interface of the G’MIC project. 

This celebration gives us the perfect opportunity to announce the release of a new version (2.3.6) of this free software and to share with you a summary of the latest notable changes since our last G’MIC report, published on PIXLS.US in February 2018.


1. Looking back at 10 years of development

G’MIC is a multiplatform framework (GNU/Linux, macOS, Windows…) providing various user interfaces for manipulating generic image data, such as 2D or 3D hyperspectral images or image sequences with float values (thus including “normal” color images). More than 1000 different operators for image processing are included, a number that is extensible at will since users can add their own functions by using the embedded script language.

It was at the end of July 2008 that the first lines of G’MIC code were created (in C++). At that time, I was the main developer involved in CImg, a lightweight open source C++ library for image processing, when I made the following observation:

  • The initial goal of CImg, which was to propose a “minimal” library of functions to help C++ developers to develop image processing algorithms, was broadly achieved; most of the algorithms I considered as essential in image processing were integrated. CImg was initially meant to stay lightweight, so I didn’t want to include new algorithms ad vitam æternam, which would be too heavy or too specific, thus betraying the initial concept of the library.
  • However, this would only cater to a rather small community of people with both C++ knowledge and image processing knowledge! One of the natural evolutions of the project, creating bindings of CImg to other programming languages, didn’t appeal much to me given the lack of interest I had in writing the code. And these potential bindings still only concerned an audience with some development expertise.

My ideas were starting to take shape: I needed to find a way to provide CImg processing features for non-programmers. Why not attempt to build a tool that could be used on the command line (like the famous convert command from Imagemagick)? A first attempt in June 2008 (inrcast, presented on the French news site LinuxFR), while unsuccessful, allowed me to better understand what would be required for this type of tool to easily process images from the command line.

In particular, it occurred to me that conciseness and coherence of the command syntax were the two most important things to build upon. These were the aspects that required the most effort in research and development (the actual image processing features were already implemented in CImg). In the end, the focus on conciseness and coherence took me much further than originally planned, as G’MIC got an interpreter of its own scripting language, and then a JIT compiler for the evaluation of mathematical expressions and image processing algorithms working at the pixel level.

With these ideas, by the end of July 2008, I was happy to announce the first draft of G’MIC. The project was officially up and running!

G’MIC logo Fig. 1.1: Logo of the G’MIC project, libre framework for image processing, and its cute mascot “Gmicky” (illustrated by David Revoy).

A few months later, in January 2009, enriched by my previous development experience on GREYCstoration (a free tool for nonlinear image denoising and interpolation, from which a plug-in was made for GIMP), and in the hopes of reaching an even larger public, I published a G’MIC GTK plug-in for GIMP. This step proved to be a defining moment for the G’MIC project, giving it a significant boost in popularity as seen below (the project was hosted on Sourceforge at the time).

Download statistics Fig.1.2: Monthly downloads statistics of G’MIC, between July 2008 and May 2009 (release of the GIMP plug-in happened in January 2009).

The sudden interest in the plugin from different users of GIMP (photographers, illustrators and other types of artists) was indeed a real launchpad for the project, with the rapid appearance of various contributions and external suggestions (for the code, management of the forums, web pages, writing of tutorials and realization of videos, etc.). The often idealized community effect of free software finally began to take off! Users and developers began to take a closer look at the operation of the original command-line interface and its associated scripting language (which admittedly did not interest many people until that moment!). From there, many of them took the plunge and began to implement new image processing filters in the G’MIC language, which were continuously integrated into the GIMP plugin. Today, these contributions represent almost half of the filters available in the plugin.

Meanwhile, the important and repeated contributions of Sébastien Fourey, colleague of the GREYC IMAGE team (and experienced C++ developer) significantly improved the user experience of G’MIC. Sébastien is indeed at the heart of the main graphical interface development of the project, namely:

  • The G’MIC Online web service (which was later re-organised by GREYC’s Development Department).
  • Free Software ZArt, a graphical interface - based on the _Qt_ library - for the application of G’MIC filters to video sequences (from files or digital camera streams).
  • And above all, at the end of 2016, Sébastien tackled a complete rewrite of the G’MIC plugin for GIMP in a more generic form called G’MIC-Qt. This component, also based on the _Qt_ library (as the name suggests), is a single plugin that works equally well with both GIMP and Krita, two of the leading free applications for photo retouching/editing and digital painting. G’MIC-Qt has now completely supplanted the original GTK plugin thanks to its many features: built-in filter search engine, better preview, superior interactivity, etc. Today it is the most successful interface of the G’MIC project and we hope to be able to offer it in the future for other host applications (contact us if you are interested in this subject!).
Interfaces graphiques de G’MIC Fig.1.3: Different graphical interfaces of the G’MIC project, developed by Sébastien Fourey: G’MIC-Qt, G’MIC Online and ZArt.

The purpose of this article is not to go into too much detail about the history of the project. Suffice it to say that we have not really had time to become bored in the last ten years!

Today, Sébastien and I are the two primary maintainers of the G’MIC project (Sébastien mainly for the interface aspects, myself for the development and improvement of filters and the core development), in addition to our main professional activity (research and teaching/supervision).

Let’s face it, managing a free project like G’MIC takes a considerable amount of time, despite its modest size (~120k lines of code). But the original goal has been achieved: thousands of non-programming users have the opportunity to freely and easily use our image processing algorithms in many different areas: image editing, photo manipulation, illustration and digital painting, video processing, scientific illustration, procedural generation, glitch art

The milestone of 3.5 million total downloads was exceeded last year, with a current average of about 400 daily downloads from the official website (figures have been steadily declining in recent years as G’MIC is becoming more commonly downloaded and installed via alternative external sources).

It is sometimes difficult to keep a steady pace of development and the motivation that has to go with it, but we persisted, thinking back to the happy users who from time to time share their enthusiasm for the project!

Obviously we can’t name all the individual contributors to G’MIC whom we would like to thank, and with whom we’ve enjoyed exchanging during these ten years, but our heart is with them! Let’s also thank the GREYC laboratory and INS2I institute of CNRS for their strong support for this free project. A big thank you also to all the community of PIXLS.US who did a great job supporting the project (hosting the forum and publishing our articles on G’MIC).

But let’s stop reminiscing and get down to business: new features since our last article about the release of version 2.2!

2. Automatic illumination of flat-colored drawings

G’MIC recently gained a quite impressive new filter named « Illuminate 2D shape », the objective of which is to automatically add lit zones and clean shadows to flat-colored 2D drawings, in order to give a 3D appearance.

First, the user provides an object to illuminate, in the form of an image on a transparent background (typically a drawing of a character or animal). By analyzing the shape and content of the image, G’MIC then tries to deduce a concordant 3D elevation map (“bumpmap”). The elevation map obtained is obviously not exact, since a 2D drawing colored in solid areas does not contain explicit information about an associated 3D structure! From the estimated 3D elevations it is easy to deduce a map of normals (“normalmap”), which is used in turn to generate an illumination layer associated with the drawing (following a Phong shading model).
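To make that final step concrete: once a normal map has been estimated, the Phong model computes each pixel's brightness from the angles between the surface normal, the light direction, and the viewer. The sketch below is a minimal pure-Python illustration of the shading model only; the function name, default coefficients and view direction are assumptions for the example, not G’MIC’s actual code:

```python
import math

def normalize(v):
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def phong_intensity(normal, light_dir, view_dir=(0.0, 0.0, 1.0),
                    ambient=0.1, diffuse=0.7, specular=0.2, shininess=16):
    """Classic Phong shading for a single pixel.

    `normal` is the per-pixel unit normal estimated from the drawing,
    `light_dir` points from the surface towards the light source.
    Returns an intensity clamped to [0, 1]."""
    n = normalize(normal)
    l = normalize(light_dir)
    v = normalize(view_dir)
    ndotl = max(0.0, sum(a * b for a, b in zip(n, l)))
    # Reflect the light direction about the normal: r = 2*(n.l)*n - l
    r = tuple(2.0 * ndotl * a - b for a, b in zip(n, l))
    rdotv = max(0.0, sum(a * b for a, b in zip(r, v)))
    i = ambient + diffuse * ndotl + specular * (rdotv ** shininess)
    return min(1.0, i)

# A flat region facing the viewer, lit head-on, then lit from the side:
print(phong_intensity((0, 0, 1), (0, 0, 1)))   # fully lit
print(phong_intensity((0, 0, 1), (1, 0, 0)))   # ambient term only
```

Evaluated once per pixel of the estimated normal map, this produces an illumination layer of the kind the filter composites over the drawing.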

Illuminate 2D shape Fig. 2.1: G’MIC’s “Illuminate 2D shape“ filter in action, demonstrating automatic shading of a beetle drawing (shaded result on the right).

This new filter is very flexible and allows the user to have a fairly fine control over the lighting parameters (position and light source rendering type) and estimation of the 3D elevation. In addition the filter gives the artist the opportunity to rework the generated illumination layer, or even directly modify the elevation maps and estimated 3D normals. The figure below illustrates the process as a whole; using the solid colored beetle image (top left), the filter fully automatically estimates an associated 3D normal map (top right). This allows it to generate renditions based on the drawing (bottom row) with two different rendering styles: smooth and quantized.

Normalmap estimation Fig. 2.2: The process pipeline of the G’MIC “Illuminate 2D shape“ filter involves the estimation of a 3D normal map to generate the automatic illumination of a drawing.

Despite the difficulty inherent in the problem of converting a 2D image into 3D elevation information, the algorithm used is surprisingly effective in a good many cases. The estimation of the 3D elevation map obtained is sufficiently consistent to automatically generate plausible 2D drawing illuminations, as illustrated by the two examples below - obtained in just a few clicks!

Shading example 1 Shading example 2 Fig. 2.3: Two examples of completely automatic shading of 2D drawings, generated by G’MIC

Of course, the estimated 3D elevation map does not always match what one might want. Fear not, the filter allows the user to provide “guides” in the form of an additional layer composed of colored lines, giving more precise information to the algorithm about the structure of the drawing to be analyzed. The figure below illustrates the usefulness of these guides for illuminating a drawing of a hand (top left); the automatic illumination (top right) does not account for information in the lines of the hand. Including these few lines in an additional layer of “guides” (in red, bottom left) helps the algorithm to illuminate the drawing more satisfactorily.

Using additional guides Fig. 2.4: Using a layer of “guides” to improve the automatic illumination rendering generated by G’MIC.

If we analyze more precisely the differences obtained between estimated 3D elevation maps with and without guides (illustrated below as symmetrical 3D objects), there is no comparison: we go from a very round boxing glove to a much more detailed 3D hand estimation!

Estimated 3D elevations with and without guides Fig. 2.5: Estimated 3D elevations for the preceding drawing of a hand, with and without the use of “guides”.

Finally, note that this filter also has an interactive preview mode, allowing the user to move the light source (with the mouse) and have a preview of the drawing illuminated in real time. By modifying the position parameters of the light source, it is thus possible to obtain the type of animations below in a very short time, which gives a fairly accurate idea of the 3D structure estimated by the algorithm from the original drawing.

light animation Fig. 2.6: Modification of the position of the light source and associated illumination renderings, calculated automatically by G’MIC.

A video showing the various ways of editing the illumination allowed by this filter can be seen here. The hope is that this new feature of G’MIC will let artists accelerate the illumination and shading stage of their future drawings!

3. Stereographic projection

In a completely different genre, we have also added a filter implementing stereographic projection, suitably named “Stereographic projection“. This type of cartographic projection makes it possible to map image data defined on a plane onto a sphere. It should be noted that this is the usual projection used to generate images of “mini-planets” from equirectangular panoramas, like the one illustrated in the figure below.

equirectangular panorama Fig. 3.1: Example of equirectangular panorama (created by Alexandre Duret-Lutz).

If we launch the G’MIC plugin with this panorama and select the filter “Stereographic projection“, we get:

Filter 'Stereographic projection' Fig. 3.2: The “Stereographic projection“ filter of G’MIC in action using the plugin for GIMP or Krita.

The filter allows precise adjustments of the projection center, the rotation angle, and the radius of the sphere, all interactively displayed directly on the preview window (we will come back to this later). In a few clicks, and after applying the filter, we get the desired “mini-planet”:

Mini-planet Fig. 3.3: “Mini-planet” obtained after stereographic projection.

It is also intriguing to note that simply by reversing the vertical axis of the images, we transform a “mini-planet” into a “maxi-tunnel”!

Max-tunnel Fig. 3.4: “Maxi-tunnel” obtained by inversion of the vertical axis then stereographic projection.

Again, we made this short video which shows this filter used in practice. Note that G’MIC already had a similar filter (called “Sphere“), which could be used for the creation of “mini-planets”, but with a type of projection less suitable than the stereographic projection now available.
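For the curious, the coordinate mapping behind such “mini-planets” can be sketched in a few lines: each output pixel is sent through an inverse stereographic projection onto the unit sphere, and the resulting longitude/latitude index into the equirectangular panorama. The pure-Python sketch below is illustrative only; the function name, the projection pole and the row/column conventions are assumptions, not G’MIC’s implementation:

```python
import math

def miniplanet_lookup(x, y, pano_w, pano_h, radius=1.0):
    """Map an output-pixel position (x, y), with (0, 0) at the image
    centre, to (column, row) coordinates in an equirectangular panorama
    whose top row is the zenith and bottom row is the nadir.

    Inverse stereographic projection from the north pole: the plane
    point (x, y) lands on the unit sphere at (X, Y, Z), and the sphere
    point is then read back as longitude/latitude."""
    px, py = x / radius, y / radius
    r2 = px * px + py * py
    X = 2.0 * px / (1.0 + r2)
    Y = 2.0 * py / (1.0 + r2)
    Z = (r2 - 1.0) / (r2 + 1.0)
    lon = math.atan2(Y, X)            # in (-pi, pi]
    lat = math.asin(Z)                # in [-pi/2, pi/2]
    col = (lon + math.pi) / (2.0 * math.pi) * (pano_w - 1)
    row = (math.pi / 2.0 - lat) / math.pi * (pano_h - 1)
    return col, row

# The centre of the "mini-planet" samples the bottom row of the
# panorama (the ground the photographer stood on):
print(miniplanet_lookup(0.0, 0.0, 4000, 2000))
```

Flipping the sign of `y` before the lookup gives the “maxi-tunnel” variant described above.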

4. Even more possibilities for color manipulation

Manipulating the colors of images is a recurring occupation among photographers and illustrators, and G’MIC already had several dozen filters for this particular activity - grouped in a dedicated category (soberly named “Colors“!). This category is still growing, with two new filters having recently appeared:

  • The “CLUT from after-before layers“ filter tries to model the color transformation performed between two images. For example, suppose we have the following pair of images:
Image pair Fig. 4.1: Pair of images where an unknown colorimetric transformation has been applied to the top image to obtain the bottom one.

Problem: we do not remember at all how we went from the original image to the modified image, but we would like to apply the same process to another image. Well, no more worries, call G’MIC to the rescue! The filter in question will seek to model the color modification in the form of a HaldCLUT, which happens to be a classic way to represent any colorimetric transformation.

Filter 'CLUT from after-before layers' Fig. 4.2: The filter models the color transformation between two images as a HaldCLUT.

The HaldCLUT generated by the filter can be saved and re-applied on other images, with the desired property that the application of the HaldCLUT on the original image produces the target model image originally used to learn the transformation. From there, we are able to apply an equivalent color change to any other image:

HaldCLUT applied on another image Fig. 4.3: The estimated color transformation in the form of HaldCLUT is re-applied to another image.

This filter makes it possible in the end to create HaldCLUT “by example”, and could therefore interest many photographers (in particular those who distribute compilations of HaldCLUT files, freely or otherwise!).

  • A second color manipulation filter, named “Mixer [PCA]“ was also recently integrated into G’MIC. It acts as a classic color channel mixer, but rather than working in a predefined color space (like sRGB, HSV, Lab…), it acts on the “natural” color space of the input image, obtained by principal component analysis (PCA) of its RGB colors. Thus each image will be associated with a different color space. For example, if we take the “lion” image below and look at the distribution of its colors in the RGB cube (right image), we see that the main axis of color variation is defined by a straight line from dark orange to light beige (axis symbolized by the red arrow in the figure).
PCA of RGB colors Fig. 4.4: Distribution of colors from the “lion” image in the RGB cube, and associated main axes (colorized in red, green and blue).

The secondary axis of variation (green arrow) goes from blue to orange, and the tertiary axis (blue arrow) from green to pink. It is these axes of variation (rather than the RGB axes) that will define the color basis used in this channel mix filter.

Filter 'Mixer [PCA]' Fig. 4.5: The “Mixer [PCA]“ filter is a channel mixer acting on the axes of “natural” color variations of the image.

It would be wrong to suggest that it is always better to consider the color basis obtained by PCA for the mixing of channels, and this new filter is obviously not intended to be the “ultimate” mixer that would replace all others. It simply exists as an alternative to the usual tools for mixing color channels, an alternative whose results proved to be quite interesting in tests of several images used during the development of this filter. It does no harm to try in any case…
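Returning to the “CLUT from after-before layers” idea, the principle of learning a color transformation “by example” can be illustrated with a toy sketch. G’MIC fits a proper HaldCLUT (a regular, interpolated sampling of RGB space); the version below merely records observed before→after pairs and falls back to the nearest recorded color, purely to show the idea. All names and data here are invented for the example:

```python
def learn_color_map(before, after):
    """Learn a colour transformation "by example" from matching pixel
    lists `before` and `after` (lists of (r, g, b) tuples).

    A real HaldCLUT discretises RGB space on a regular grid; this toy
    version simply records every observed (input -> output) pair."""
    return dict(zip(before, after))

def apply_color_map(cmap, image):
    """Apply the learnt map to `image`, falling back to the nearest
    recorded colour (squared-distance search) for unseen values."""
    known = list(cmap)
    out = []
    for px in image:
        if px in cmap:
            out.append(cmap[px])
        else:
            nearest = min(known, key=lambda k: sum((a - b) ** 2
                                                   for a, b in zip(k, px)))
            out.append(cmap[nearest])
    return out

# Toy "after" image: the "before" colours with a warm cast applied.
before = [(0, 0, 0), (128, 128, 128), (255, 255, 255)]
after = [(20, 10, 0), (148, 138, 100), (255, 245, 220)]
cmap = learn_color_map(before, after)
print(apply_color_map(cmap, [(130, 130, 130)]))  # snaps to the grey's mapping
```

As in the filter, re-applying the learnt map to the original image reproduces the target image exactly, and any other image can then receive an equivalent color change.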

5. Filter mishmash

This section is about a few other filters improved or included lately in G’MIC which deserve to be talked about, without dwelling too much on them.

  • Filter “Local processing” applies a color normalization or equalization process on local image neighborhoods (with possible overlap). This is an additional filter for making details pop in under- or over-exposed photographs, but it may create strong and unpleasant halo artefacts with non-optimal parameters.

    Filter 'Local processing' Fig. 5.1: The new filter “Local processing” enhances details and contrast in under or over-exposed photographs.
  • If you think that the number of layer blending modes available in GIMP or Krita is not enough, and dream about defining your own blending mode formula, then the recent improvement of the G’MIC filter « Blend [standard] » will please you! This filter now gets a new option « Custom formula » allowing the user to specify their own mathematical formula when blending two layers together. All of your blending wishes become possible!

    Filter 'Blend (standard)'' Fig. 5.2: The “Blend [standard]“ filter now allows definition of mathematical formulas for layer merging.
  • Also note the complete re-implementation of the nice “Sketch“ filter, which had existed for several years but could be a little slow on large images. The new implementation is much faster, taking advantage of multi-core processing when possible.

    Filter 'Sketch' Fig. 5.3: The “Sketch“ filter has been re-implemented and now exploits all available compute cores.
  • A large amount of work has also gone into the re-implementation of the “Mandelbrot - Julia sets“ filter, since the navigation interface has been entirely redesigned, making exploration of the Mandelbrot set much more comfortable (as illustrated by this video). New options for choosing colors have also appeared.

    Filter 'Mandelbrot - Julia sets' Fig. 5.4: The “Mandelbrot - Julia sets“ filter and its new navigation interface in the complex space.
  • In addition, the “Polygonize [Delaunay]“ filter that generates polygonized renderings of color images has a new rendering mode, using linearly interpolated colors in the Delaunay triangles produced.

    Filter 'Polygonize (Delaunay)' Fig. 5.5: The different rendering modes of the “Polygonize [Delaunay]“ filter.
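As an aside, the spirit of the « Custom formula » blending option can be mimicked in a few lines of Python: evaluate a user-supplied expression once per pixel, with the two layers’ values bound to variables. The names i0/i1 and the whole eval-based approach below are assumptions chosen for this sketch, not G’MIC’s actual formula syntax:

```python
def blend(formula, layer_a, layer_b):
    """Blend two greyscale "layers" (lists of floats in [0, 1]) with a
    user-supplied per-pixel formula.

    The formula sees the bottom pixel as `i0` and the top pixel as
    `i1` (names invented for this sketch). Note: eval of user input is
    acceptable for a local toy, never for untrusted input."""
    code = compile(formula, "<blend>", "eval")
    out = []
    for i0, i1 in zip(layer_a, layer_b):
        v = eval(code, {"__builtins__": {}},
                 {"i0": i0, "i1": i1, "min": min, "max": max, "abs": abs})
        out.append(min(1.0, max(0.0, v)))  # clamp result to [0, 1]
    return out

bottom = [0.2, 0.5, 0.9]
top = [0.8, 0.5, 0.1]
print(blend("(i0 + i1) / 2", bottom, top))   # average blend
print(blend("abs(i0 - i1)", bottom, top))    # difference blend
```

Any expression over the two pixel values defines a new blending mode, which is exactly the freedom the plug-in option offers.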

6. Other important highlights

6.1. Improvements of the plug-in

Of course, the new features in G’MIC are not limited to just image processing filters! For instance, a lot of work has been done on the graphical interface of the plug-in G’MIC-Qt for GIMP and Krita:

  • Filters of the plug-in are now allowed to define a new parameter type point(), which displays as a small colored circle over the preview window. The user can drag this circle and move it with the mouse. As a result this gives the preview widget a completely new type of user interaction, which is no small thing! A lot of filters now use this feature, making them more pleasant and intuitive to use (look at this video for some examples). The animation below shows, for instance, how these new interactive points have been used in the filter « Stereographic projection » described in the previous sections.
Interactive preview window Fig. 6.1: The preview window of the G’MIC-Qt plug-in gets new user interaction abilities.
  • In addition, introducing these interactive points has made it possible to improve the split preview modes, available in many filters to display the « before / after » views side by side when setting the filter parameters in the plug-in. It is now possible to move this « before / after » separator, as illustrated by the animation below. Two new splitting modes (« Checkered » and « Inverse checkered ») have also been included alongside it.
Interactive preview division Fig. 6.2: The division modes of the preview now have a moveable “before / after” boundary.

A lot of other improvements have been made to the plug-in: the support of the most recent version of GIMP (2.10), of Qt 5.11, improved handling of the error messages displayed over the preview widget, a cleaner designed interface, and other small changes have been made under the hood, which are not necessarily visible but slightly improve the user experience (e.g. an image cache mechanism for the preview widget). In short, that’s pretty good!

6.2. Improvements in the software core

Some new refinements of the G’MIC computational core have been done recently:

  • The “standard library” of the G’MIC script language was given new commands for computing the inverse hyperbolic functions (acosh, asinh and atanh), as well as a command tsp (travelling salesman problem) which estimates an acceptable solution to the well-known Travelling Salesman Problem, for a point cloud of any size and dimension.

    Travelling salesman problem in 2D Fig. 6.3: Estimating the shortest route between hundreds of 2D points, with the G’MIC command tsp.
    Travelling salesman problem in 3D Fig. 6.4: Estimating the shortest route between several colors in the RGB cube (thus in 3D), with the G’MIC command tsp.
  • The demonstration window, which appears when gmic is run without any arguments from the command line, has also been redesigned from scratch.

Demonstration window Fig. 6.5: The new demonstration window of gmic, the command line interface of G’MIC.
  • The embedded JIT compiler used for the evaluation of mathematical expressions has not been left out and was given new functions to draw polygons (function polygon()) and ellipses (function ellipse()) in images. These mathematical expressions can in fact define small programs (with local variables, user-defined functions and control flow). One can for instance easily generate synthetic images from the command line, as shown by the two examples below.

Example 1

$ gmic 400,400,1,3 eval "for (k = 0, k<300, ++k, polygon(3,u([vector10(0),[w,h,w,h,w,h,0.5,255,255,255]])))"


Function 'polygon()' Fig. 6.6: Using the new function polygon() from the G’MIC JIT compiler, to render a synthetic image made of random triangles.

Example 2

$ gmic 400,400,1,3 eval "for (k=0, k<20, ++k, ellipse(w/2,h/2,w/2,w/8,k*360/20,0.1,255))"


Function 'ellipse()' Fig. 6.7: Using the new function ellipse() from the G’MIC JIT compiler, to render a synthetic flower image.
  • Note also that NaN values are now better managed in the core’s calculations, meaning G’MIC maintains coherent behavior even when it has been compiled with the -ffast-math optimisation. Thus, G’MIC can now be flawlessly compiled with the maximum optimization level -Ofast supported by the compiler g++, whereas we were restricted to -O3 before. The improvement in computation speed is clearly visible for some of the offered filters!

6.3. Distribution channels

A lot of changes have also been made to the distribution channels used by the project:

  • First of all, the project web pages (which are now served over secure HTTPS connections by default) have a new image gallery. This gallery shows both filtered image results from G’MIC and the way to reproduce them (from the command line). Note that these gallery pages are automatically generated by a dedicated G’MIC script, which ensures the displayed command syntax is correct.

    Image gallery Fig. 6.8: The new image gallery on the G’MIC web site.

This gallery is split into several sections, depending on the type of processing done (Artistic, Black & White, Deformations, Filtering, etc.). The last section « Code sample » is my personal favorite, as it exhibits small animations (shown as looping animated GIFs) which have been completely generated from scratch by short scripts, written in the G’MIC language. Quite a surprising use of G’MIC that shows its potential for generative art.

Code sample1 Code sample2 Fig. 6.9: Two small GIF animations generated by G’MIC scripts that are visible in the new image gallery.
  • We have also moved the main git source repository of the project to Framagit, while keeping a synchronized mirror on Github at the same place as before (many developers already have a Github account, which makes it easier for them to fork the project and file bug reports).

7. Conclusions and Perspectives

Voilà! Our tour of news (and the last six months of work) on the G’MIC project comes to an end.

We are happy to be celebrating 10 years of the creation and evolution of this Free Software project, and to be able to share all of these advanced image processing techniques with everyone. We hope to continue doing so for many years to come!

Note that next year, we will also be celebrating the 20th anniversary of CImg, the C++ image processing library (started in November 1999) on which the G’MIC project is based, proof that interest in free software is enduring.

As we wait for the next release of G’MIC, don’t hesitate to test the current version. Freely and creatively play with and manipulate your images to your heart’s content!

Thank you, Translators: (ChameleonScales, Pat David)

August 27, 2018

Realtek on the LVFS!

For the last week I’ve been working with Realtek engineers adding USB3 hub firmware support to fwupd. We’re still fleshing out all the details, as we also want to update any devices attached to the hub using i2c – which is more important than it first seems. A lot of “multifunction” dongles or USB3 hubs are actually USB3 hubs with other hardware connected internally. We’re going to be working on updating the HDMI converter firmware next, probably just by dropping a quirk file and adding some standard keys to fwupd. This will let us use the same plugin for any hardware that uses the rts54xx chipset as the base.

Realtek have been really helpful and open about the hardware, which is a refreshing difference from a lot of other hardware companies. I’m hopeful we can get the new plugin in fwupd 1.1.2 although supported hardware won’t be available for a few months yet, which also means there’s no panic getting public firmware on the LVFS. It will mean we get a “works out of the box” experience when the new OEM branded dock/dongle hardware starts showing up.

August 24, 2018

Best joke of the year

This joke from Michelle Wolf’s routine at the White House Correspondents’ Dinner has stuck with me. I can’t explain why I love it so much, but it is great:

“Mike Pence is the kind of guy that brushes his teeth and then drinks orange juice and thinks, ‘Mmm.'”

~ Michelle Wolf, April 2018

Making Sure the Debian Kernel is Up To Date

I try to avoid Grub2 on my Linux machines, for reasons I've discussed before. Even if I run it, I usually block it from auto-updating /boot since that tends to overwrite other operating systems. But on a couple of my Debian machines, that has meant needing to notice when a system update has installed a new kernel, so I can update the relevant boot files. Inevitably, I fail to notice, and end up running an out of date kernel.

But didn't Debian use to have a /boot/vmlinuz that always linked to the latest kernel? That was such a good idea: what happened to that?

I'll get to that. But before I found out, I got sidetracked trying to find a way to check whether my kernel was up-to-date, so I could have it warn me of out-of-date kernels when I log in.

That turned out to be fairly easy using uname and a little shell pipery:

# Is the kernel running behind?
kernelvers=$(uname -a | awk '{ print $3; }')
latestvers=$(cd /boot; ls -1 vmlinuz-* | sort --version-sort | tail -1 | sed 's/vmlinuz-//')
if [[ $kernelvers != $latestvers ]]; then
    echo "======= Running kernel $kernelvers but $latestvers is available"
else
    echo "The kernel is up to date"
fi

I put that in my .login. But meanwhile I discovered that that /boot/vmlinuz link still exists -- it just isn't enabled by default for some strange reason. That, of course, is the right way to make sure you're on the latest kernel, and you can do it with the linux-update-symlinks command.

linux-update-symlinks is called automatically when you install a new kernel -- but by default it updates symlinks in the root directory, /, which isn't much help if you're trying to boot off a separate /boot partition.

But you can configure it to notice your /boot partition. Edit /etc/kernel-img.conf and change link_in_boot to yes:

link_in_boot = yes

Then linux-update-symlinks will automatically update the /boot/vmlinuz link whenever you update the kernel, and whatever bootloader you prefer can point to that image. It also updates /boot/vmlinuz.old to point to the previous kernel in case you can't boot from the new one.

August 23, 2018

Fun with SuperIO

While I’m waiting back for NVMe vendors (already one tentatively onboard!) I’ve started looking at “embedded controller” devices. The EC on your laptop historically used to just control the PS/2 keyboard and mouse, but now does fan control, power management, UARTs, GPIOs, LEDs, SMBUS, and various tasks the main CPU is too important to care about. Vendors issue firmware updates for this kind of device, but normally wrap up the EC update as part of the “BIOS” update as the system firmware and EC work together using various ACPI methods. Some vendors do the EC update out-of-band and so we need to teach fwupd about how to query the EC to get the model and version on that specific hardware. The Linux laptop vendor Tuxedo wants to update the EC and system firmware separately using the LVFS, and helpfully loaned me an InfinityBook Pro 13 that was immediately disassembled and connected to all kinds of exotic external programmers. On first impressions the N131WU seems quick, stable and really well designed internally — I’m sure it would get a 10/10 for repairability.

At the moment I’m just concentrating on SuperIO devices from ITE. If you’re interested what SuperIO chip(s) you have on your machine you can either use superiotool from coreboot-utils or sensors-detect from lm_sensors. If you’ve got a SuperIO device from ITE please post what signature, vendor and model machine you have in the comments and I’ll ask if I need any more information from you. I’m especially interested in vendors that use devices with the signature 0x8587, which seems to be a favourite with the Clevo reference board. Thanks!

Please welcome AKiTiO to the LVFS

Another week, another vendor. This time the vendor is called AKiTiO, a vendor that makes a large number of very nice Thunderbolt peripherals.

Over the last few weeks AKiTiO added support for the Node and Node Lite devices, and I’m sure there’ll be more in the future. It’s been a pleasure working with the engineers and getting them up to speed with uploading to the LVFS.

In other news, Lenovo also added support for the ThinkPad T460 on the LVFS, so get any updates while they’re hot. If you want to try this you’ll have to enable the lvfs-testing remote either using fwupdmgr enable-remote lvfs-testing or using the sources dialog in recent versions of GNOME Software. More Lenovo updates coming soon, and hopefully even more vendor announcements too.

August 22, 2018

SIGGRAPH 2018 Report

View of downtown Vancouver from the north side of the lake. Here hotels were still affordable (yachts not included :).

My annual trip to SIGGRAPH took me to Vancouver again, a place I’ve started to love a lot. A young and vibrant city, with a stunning skyline – and one main drawback: prices have doubled in the past 5 years, Airbnb and hotels alike – housing a crew here for a week would have cost a fortune. It’s one of the reasons we skipped a booth this year.

As usual it was a great visit – my 20th SIGGRAPH in a row even. It’s a place where you see old friends once a year. Aside from all the fun, here is a list of notes I made to sketch an impression for you.

Blender Birds of a Feather, Foundation-Community meeting by Ton Roosendaal

Birds of a Feather

Birds of a Feather events were not part of the main schedule booklet this year. Instead there was just one page in the back with a note “please check online for schedule and locations”. Our event was put in a hotel without any signage to the BoF rooms. It’s clearly being marginalized. I know SIGGRAPH is struggling with the BoF concept. For Open Source projects (about one third of the BoFs) it’s the only way to get an event scheduled there. But there’s also abuse (recruiters, commercial software) and in a sense it became some kind of parallel conference-in-a-conference. I’m curious what the future for this will be.

It’s an old tradition to start the Blender BoF by giving everyone a short moment to introduce themselves, their occupation and what they do with Blender (or want to know). Visitors came again from many different areas, including from Netflix, Amazon, NASA, Microsoft, Sidefx.

In my talk I used a lot of videos (copied from the great Code Quest collection), so I can’t easily share it as slides… basically I had no real news, with one exception: announcing the “Dev Fund 2.0” project. We are working on giving the Development Fund a major boost, especially by giving the members more recognition. Will go live mid September!

Ton Roosendaal presenting at the Birds of a Feather

The 2nd BoF was the “Blender Spotlight” – organized by David Andrade. Members of the audience were invited to step forward and show what they do with Blender.
For the evening David had a nice surprise: he invited me to join the Theory Studios boat-cruise dinner!

Note to self: maybe record the BoF’s next time…

Meetings / sessions

  • Met with 3D Artist Magazine – they’ve been covering Blender very well for years. Made sure we’re well lined up for a Blender 2.8 themed issue.
  • Met with the CEO of Original Force studios China. They’ve been eyeing Blender for a while but haven’t made a move yet. I had two more meetings with him on the days after, also with his CTO. It’s very interesting to hear the point of view of bigger studios (1400 people) towards Blender. It would probably be best to send a couple of good trainers or artists over to them. Conversations are ongoing.
  • Had a nice talk with fellow giant (2 meters) Jos Stam. He was granted the SIGGRAPH academy membership this year. And – he left Autodesk.
  • Nvidia’s keynote was actually a fun watch. CEO Jensen Huang managed to keep the audience involved for 90 minutes. Best trick he used: center everything around one message, “real-time ray-tracing”.
  • The day after the keynote I had an hour-long meeting with 9 (!) Nvidia product managers/engineers. Needless to say, about just one topic: how do we get OptiX and MDL in Blender. It was a productive meeting with a good outcome. Nvidia still has to sign this off internally, so I can’t say more now. :)
  • Blender will become an official member of the Academy Software Foundation! Met with a Linux Foundation representative who is making it happen for us now.
  • I was on a panel, invited by Jon Peddie, at his famous annual Luncheon. The topic was “virtual studios”.

And then there was the tradeshow. Blender was prominently present at the AMD booth.

Mike Pan demoing Blender in the AMD booth theater.

Permanent Blender demo station at the AMD booth, right at the show entrance

  • Met with a representative of Oculus (someone I’ve known for a long time and who switched jobs :). We’re discussing setting up a joint project around VR authoring.
  • Had several good meetings with our AMD friends. Aside from hardware seeds (Threadripper 2 has a special thread balance issue we need to tackle) we also discussed renewing development support. More to follow.
  • Had two meetings with Intel. This was also to check on some details for the Blender Conference Sponsoring (Intel = Main Sponsor!) but I also wanted to warm them up for joining the Development Fund.
  • Had a really cool meeting with Wacom too. They always support Blender very well, whatever is needed we can ask them. Good tablet support is essential for Blender users, especially with the rise of Grease Pencil and 2D/3D animation tools in Blender. Wacom *loves* Grease Pencil, we will work with them on demo files they can share via their own channels (and demo themselves on shows).
  • Met with a guy who used Blender at Lucas Arts in London, for concept development of Jurassic World. Trying to get him to present it at the Blender Conference.
  • Had a long chat with Neil Trevett of Khronos. They’re actually funding Blender development now (glTF exporter). He also looks forward to seeing us move to Vulkan – we can make a project plan for it. And he knows that Apple Metal and Vulkan will get a good compatibility library as well. No worries for the Apple users then!

And then I had notes about Sketchfab (free 3D model importer addon), Lulzbot printers (much improved resolution), Adobe Dimension (heavily dominated by Blender artists), Nimble Collective (interested to help us out with animation/rigging tools), Epic Games (interested to support us), but I think it’s best to leave it with this!

Just one closing note. A couple of years ago I found out that Blender was finally being taken seriously in the industry. In this edition of SIGGRAPH it was the first time I noticed people want to do business with us. A new milestone.

Ton Roosendaal
Chairman Blender Foundation

August 21, 2018

Adventures with NVMe, part 2

A few days ago I asked people to upload their NVMe “cns” data to the LVFS. So far, 908 people did that, and I appreciate each and every submission. I promised I’d share my results, and this is what I’ve found:

Number of vendors implementing the slot 1 read-only (“s1ro”) factory fallback: 10 – this was way less than I hoped. Not all is lost: the number of slots in a device (“nfws”) indicates how many different versions of firmware the drive can hold, just like some wireless broadband cards. The idea is that a bad firmware flash means you can “fall back” to an old version that actually works. It was surprising how many drives didn’t have this feature because they only had one slot in total:

I also wanted to know how many firmware versions there were for a specific model (deduping by removing the capacity string in the model); the idea being that if drives with the same model string all had the same version firmware then the vendor wasn’t supplying firmware updates at all, and might be a lost cause, or have perfect firmware. Vendors don’t usually change shipped firmware on NMVe drives for no reason, and so a vendor having multiple versions of firmware for a given model could indicate a problem or enhancement important enough to re-run all the QA checks:

So, not all bad, but we can’t just assume that trying to flash a firmware is a safe thing to do for all drives. The next, much bigger problem was trying to identify which drives should be flashed with a specific firmware. You’d think this would be a simple problem, where the existing firmware version would be stored in the “fr” firmware revision string and the model name would be stored in the “mn” string. Alas, only Lenovo and Apple store a sane semver like 1.2.3; other vendors seem to encode the firmware revision using as-yet-unknown methods. Unhelpfully, the model name alone isn’t all we need to identify the firmware to flash, as different drives can have different firmware for the laptop OEM without changing the mn or fr. For this I think we need to look into the elusive “vs” vendor-defined block, which was the reason I was asking for the binary dump of the CNS rather than the nvme -H or nvme -o json output. The vendor block isn’t formally defined as part of the NVMe specification and the ODM (and maybe the OEM?) can use this however they want.

Only 137 out of the supplied ~650 NVMe CNS blobs contained vendor data. SK hynix drives contain an interesting-looking string of something like KX0WMJ6KS0760T6G01H0, but I have no idea how to parse that. Seagate has simply 2002. Liteon has a string like TW01345GLOH006BN05SXA04. Some Samsung drives have things like KR0N5WKK0184166K007HB0 and CN08Y4V9SSX0087702TSA0 – the same format as Toshiba CN08D5HTTBE006BEC2K1A0 but it’s weird that the blob is all ASCII – I was somewhat hoping for a packed GUID in the sea of NULs. They do have some common sub-sections, so if you know what these are please let me know!

I’ve built a fwupd plugin that should be able to update firmware on NVMe drives, but it’s 100% untested. I’m going to use the leftover donation money for the LVFS to buy various types of NVMe hardware that I can flash with different firmware images and not cry if all the data gets wiped or the device gets bricked. I’ve already emailed my contact at Samsung and fingers crossed something nice happens. I’ll do the same with Toshiba and Lenovo next week. I’ll also update this blog post next week with the latest numbers, so if you upload your data now it’s still useful.

August 20, 2018

security things in Linux v4.18

Previously: v4.17.

Linux kernel v4.18 was released last week. Here are details on some of the security things I found interesting:

allocation overflow detection helpers
One of the many ways C can be dangerous to use is that it lacks strong primitives to deal with arithmetic overflow. A developer can’t just wrap a series of calculations in a try/catch block to trap any calculations that might overflow (or underflow). Instead, C will happily wrap values back around, causing all kinds of flaws. Some time ago GCC added a set of single-operation helpers that will efficiently detect overflow, so Rasmus Villemoes suggested implementing these (with fallbacks) in the kernel. While it still requires explicit use by developers, it’s much more fool-proof than doing open-coded type-sensitive bounds checking before every calculation. As a first-use of these routines, Matthew Wilcox created wrappers for common size calculations, mainly for use during memory allocations.

removing open-coded multiplication from memory allocation arguments
A common flaw in the kernel is integer overflow during memory allocation size calculations. As mentioned above, C doesn’t provide much in the way of protection, so it’s on the developer to get it right. In an effort to reduce the frequency of these bugs, and inspired by a couple flaws found by Silvio Cesare, I did a first-pass sweep of the kernel to move from open-coded multiplications during memory allocations into either their 2-factor API counterparts (e.g. kmalloc(a * b, GFP...) -> kmalloc_array(a, b, GFP...)), or to use the new overflow-checking helpers (e.g. vmalloc(a * b) -> vmalloc(array_size(a, b))). There’s still lots more work to be done here, since frequently an allocation size will be calculated earlier in a variable rather than in the allocation arguments, and overflows happen in way more places than just memory allocation. Better yet would be to have exceptions raised on overflows where no wrap-around was expected (e.g. Emese Revfy’s size_overflow GCC plugin).

Variable Length Array removals, part 2
As discussed previously, VLAs continue to get removed from the kernel. For v4.18, we continued to get help from a bunch of lovely folks: Andreas Christoforou, Antoine Tenart, Chris Wilson, Gustavo A. R. Silva, Kyle Spiers, Laura Abbott, Salvatore Mesoraca, Stephan Wahren, Thomas Gleixner, Tobin C. Harding, and Tycho Andersen. Almost all the rest of the VLA removals have been queued for v4.19, but it looks like the very last of them (deep in the crypto subsystem) won’t land until v4.20. I’m so looking forward to being able to add -Wvla globally to the kernel build so we can be free from the classes of flaws that VLAs enable, like stack exhaustion and stack guard page jumping. Eliminating VLAs also simplifies the porting work of the stackleak GCC plugin from grsecurity, since it no longer has to hook and check VLA creation.

Kconfig compiler detection
While not strictly a security thing, Masahiro Yamada made giant improvements to the kernel’s Kconfig subsystem so that kernel build configuration now knows what compiler you’re using (among other things) so that configuration is no longer separate from the compiler features. For example, in the past, one could select CONFIG_CC_STACKPROTECTOR_STRONG even if the compiler didn’t support it, and later the build would fail. Or in other cases, configurations would silently down-grade to what was available, potentially leading to confusing kernel images where the compiler would change the meaning of a configuration. Going forward now, configurations that aren’t available to the compiler will simply be unselectable in Kconfig. This makes configuration much more consistent, though in some cases, it makes it harder to discover why some configuration is missing (e.g. CONFIG_GCC_PLUGINS no longer gives you a hint about needing to install the plugin development packages).

That’s it for now! Please let me know if you think I missed anything. Stay tuned for v4.19; the merge window is open. :)

© 2018, Kees Cook. This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 License.
Creative Commons License

Interview with Margarita Gadrat

Could you tell us something about yourself?

Hello! My name is Margarita Gadrat; I was born in Russia and live in France. Drawing has been my favourite activity since my childhood. After some years working as a graphic designer in different companies, I decided to follow my dream and now I’m a freelance illustrator and graphic designer.

Do you paint professionally, as a hobby artist, or both?

Both. Personal paintings for experimenting and improving my technique. Professionally, I’m open to new interesting projects. There is still a lot to learn and this is so much fun!

What genre(s) do you work in?

I like painting nature-inspired subjects, like landscapes and cute animals. And mysterious, dark atmospheres.

Whose work inspires you most — who are your role models as an artist?

Ketka, Ruan Jia, Hellstern, Pete Mohrbacher… I couldn’t list all of them 🙂

I love the works of classical masters too – Sargent, Turner, Ivan Shishkin, Diego Velasquez, Bosch. Aivazovsky’s sea is stunning! And the Pre-Raphaelites art has a magical aura.

How and when did you get to try digital painting for the first time?

10 years ago my husband gave me a Wacom tablet. After trying this tool in Photoshop, I was impressed.

What makes you choose digital over traditional painting?

The creative possibilities you have without buying oils or watercolours. No need to clean your table and materials afterwards! Also, you can easily correct details with filters and Ctrl-Z 😉 You can work fast too, thanks to useful tools: selections, transform tools…

How did you find out about Krita?

My husband, who is into FOSS, told me about Krita.

What was your first impression?

Whoa, it’s so fluid and comfortable! Coming from Photoshop, I wasn’t lost with the general concepts (layers, filters, blending, masks…), but had to take time to understand how it was organized in Krita.

What do you love about Krita?

All those features that help the work process: drawing assistants for perspective, the new reference tool where you can easily arrange your references and place them on your canvas, the freedom of the brush presets. And working with layers in the non-destructive way I love so much. The animation section is great too.

What do you think needs improvement in Krita? Is there anything that really annoys you?

Nothing that really annoys me. Krita is an awesome and complete piece of software! Maybe a couple of little things, but I don’t really use them – like the text tool, which is now getting better and better. And I’d like to be able to move the selection shape not while selecting, but after it is selected.

What sets Krita apart from the other tools that you use?

Krita is really rich software. You can imitate traditional materials, but also experiment with blending to create original results. It permits a fast and quality workflow.

If you had to pick one favourite of all your work done in Krita so far, what would it be, and why?

“Lake house”

This work combines architecture and nature, it was a nice challenge working on the design of the house and the composition.

What techniques and brushes did you use in it?

I painted in Krita with greyscale values, mostly with the default round brush. The default blending brushes make smooth value transitions. After that, I colorized it with color layers and adjusted the levels with a filter layer.

Where can people see more of your work?

My personal site with the illustration and graphic design works (in French):

Anything else you’d like to share?

Thank you for Krita, it’s a wonderful program, working on all the platforms, free, open source and constantly including new features!