July 11, 2018

Welcoming the gPhoto Project to the PIXLS.US community!


Helping the community one project at a time

A major goal of the PIXLS.US effort is to do whatever we can to help developers unburden themselves from administering their projects. We do this, in part, by providing forum hosting, participating in support, providing web design, and doing community outreach. With that in mind, we are excited to welcome the gPhoto Project to our discuss forum!

The Entangle interface, which makes use of libgphoto.

You may not have heard of gPhoto, but there is a high chance that you’ve used the project’s software. At the heart of the project is libgphoto2, a portable library that gives applications access to hundreds of digital cameras. On top of that foundational library is gphoto2, a command-line interface to your camera that supports almost everything the library can do. The library is used in a bunch of awesome photography applications, such as digiKam, darktable, Entangle, and GIMP. There is even a FUSE module, so you can mount your camera storage as a normal filesystem.
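As a taste of the CLI the library enables, here is an illustrative session (a sketch: it assumes gphoto2 and gphotofs are installed and a supported camera is attached, so every command is guarded to fail softly without one):

```shell
# List connected cameras, show camera info, and capture a shot.
gphoto2 --auto-detect || true                 # table of detected cameras
gphoto2 --summary || true                     # model, firmware and storage info
gphoto2 --capture-image-and-download || true  # trigger the shutter, fetch the file

# Via the FUSE module, the camera's storage mounts like a normal filesystem.
mkdir -p ~/camera
gphotofs ~/camera || true
ls ~/camera || true
```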

gPhoto was recruited to the PIXLS.US community when @darix found himself sitting next to gPhoto developer Marcus. Marcus was using darix’s Fuji camera to test its integration into libgphoto, and then the magic happened! Not only will some Fuji models be supported, but our community has grown larger. This is also a reminder that one person can make a huge difference. Thanks, darix!

Welcome, gPhoto, and thank you for the years and years of development!

July 09, 2018

Interview with Andrea Buso

Could you tell us something about yourself?

“I am in the middle of the journey of our life ..” (50 years). I was born in Padua, Italy. Ever since I was a child I have drawn; during primary school I created my first Japanese-robot-style comic (Mazinga). I attended art school. In 1995 I attended computer graphics courses with Adobe products, and later a specialization course at the school of Illustration for Children of Sarmede in Treviso (IT).

I worked as a freelancer in advertising and comics, and I taught traditional and digital painting and drawing in my (now closed) studio, La Casa Blu. Today I continue to draw as a hobby and for some work, since I teach painting in a center for disabled people.

Do you paint professionally, as a hobby artist, or both?

These days I paint both for hobby and for work, even more as a hobby. Teaching in the special-needs center takes me a lot of time, but it also gives me a lot of satisfaction.

What genre(s) do you work in?

I do not have a specific genre; I like to change technique and style, to change, to find new ways, even though my background is in comics. Generally, though, I prefer themes of fiction and fantasy. As you can guess, I love mixing everything I like.

Whose work inspires you most — who are your role models as an artist?

In addition to loving Michelangelo Buonarroti and Caravaggio, I studied Richard Corben, Simon Bisley, Frank Frazetta. Currently I am following Mozart Couto, Ramon Miranda and David Revoy.

How and when did you get to try digital painting for the first time?

In 2000, my brother, a computer programmer, got me to try OpenSuse. I used Gimp, and I felt good because I could draw what I wanted, how I wanted. Since then I have abandoned Windows for Linux and discovered a series of wonderful programs which allow me to work professionally while giving me the advantages of digital.

What makes you choose digital over traditional painting?

In my opinion digital painting is infinite. You can do whatever you want and retrace your steps whenever you want. It offers an infinite number of techniques and tools to create with, techniques and tools that you can also create yourself. The limit is your own imagination.

How did you find out about Krita?

Watching YouTube videos by Mozart Couto, Ramon Miranda and David Revoy, I saw that they used Krita. I did not know what it was, so I did some research on the Internet and found the site. Voilà! Love was born! Today it is my favorite program (and I’m not just saying that to make a good impression on you!).

What was your first impression?

I must say that at the beginning the approach to Krita was a bit difficult. I came from experience with Gimp and Mypaint, software with a gentle learning curve. But in the end I managed to “tame” Krita to my will; now it’s my home.

What do you love about Krita?

Given that there are features of Krita that I don’t know, and maybe never will because they’re not necessary to my painting technique, I love everything about Krita!

Above all, the panel for creating brushes. It’s wonderful; sometimes I spend hours creating brushes which I’ll never use because they don’t make sense, but I create them to see how far Krita can go. I love the possibility of combining raster and vector layers, the ability to change text as I want, and layer styles. Everything in Krita is perfect for my needs.

What do you think needs improvement in Krita? Is there anything that really annoys you?

Improvements to Krita that would be valuable for me would be the implementation of effects such as Clouds, Plasma, etcetera (I mention those of Gimp as an example). Moreover, because I have the habit of adjusting lights and shadows at the end of a work, I miss controls such as exposure and other typically photographic adjustments.

I have nothing negative to say about Krita.

What sets Krita apart from the other tools that you use?

The freedom to manage your work, the potential of the various tools, and the stability of the software are Krita’s most salient features. When I use Krita, I feel free to create without technical limitations from the software. Also, the quality of the brushes is unparalleled; when you print your work you realize how carefully they are made and how real they look.

If you had to pick one favourite of all your work done in Krita so far, what would it be, and why?

North (spirit) is my favorite work done with Krita so far.

In it are my first (serious) self-made brushes. The texture of the paper was created with Krita. There is also my passion for the peoples of the north: my great-grandfather was Swedish. So there is a large part of myself in the drawing, and the drawing represents me.

What techniques and brushes did you use in it?

I used a brush of my own creation and I took inspiration from Mucha (an Art Nouveau painter). The coloring style is similar to cell-shading, while the reflections and the glow of the moonlight were created with the standard FX brushes. The setting is on a magical night.

Where can people see more of your work?

You can see my work on Deviantart: https://audector.deviantart.com/

Anything else you’d like to share?

Thank you for the interview, it was an honor for me. I would invite all creative people to use Krita, but above all to use Linux and KDE. The possibilities for working well and professionally are concrete; there is no longer a gap between open source and Windows. Indeed, with Linux and KDE there is the possibility to work even better. Thanks to you!

July 07, 2018

Script to modify omni.ja for a custom Firefox

A quick followup to my article on Modifying Firefox Files Inside omni.ja:

The steps for modifying the file are fairly easy, but they have to be repeated often.

First there's the problem of Firefox updates: if a new omni.ja is part of the update, then your changes will be overwritten, so you'll have to make them again on the new omni.ja.

But, worse, even aside from updates they don't stay changed. I've had Ctrl-W mysteriously revert back to its old wired-in behavior in the middle of a Firefox session. I'm still not clear how this happens: I speculate that something in Firefox's update mechanism may allow parts of omni.ja to be overridden, even though Mike Kaply, the onetime master of overlays, told me that overlays aren't recommended any more (at least for users, though that doesn't necessarily mean they're not used for updates).

But in any case, you can be browsing merrily along and suddenly one of your changes doesn't work any more, even though the change is still right there in browser/omni.ja. The only fix I've found so far is to download a new Firefox and re-apply the changes. Re-applying them to the current version doesn't work -- they're already there. And it doesn't help to keep the tarball you originally downloaded around so you can re-install it; Firefox updates every week or two, so that version is guaranteed to be out of date.

All this means that it's crazy not to script the omni changes so you can apply them easily with a single command. So here's a shell script that takes the path to the current Firefox, unpacks browser/omni.ja, makes a couple of simple changes and re-packs it. I called it kitfox-patch since I used to call my personally modified Firefox build "Kitfox".

Of course, if your changes are different from mine you'll want to edit the script to change the sed commands.
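The script itself is linked from the post; in outline it does something like the following (a sketch only: the function and variable names are mine, and the sed edit shown is just the Ctrl-W change from the companion article -- substitute your own edits):

```shell
# Sketch of a kitfox-patch-style script: unpack browser/omni.ja, apply
# sed edits, and repack it with the zip flags known to produce a
# working archive.
kitfox_patch() {
    firefox_dir=$1                       # path to the unpacked Firefox
    omni="$firefox_dir/browser/omni.ja"

    work=$(mktemp -d)
    mkdir "$work/unpacked"

    # unzip exits nonzero on Mozilla's nonstandard zip offsets,
    # so tolerate the error and keep going
    ( cd "$work/unpacked" && unzip -q "$omni" ) || true

    # example edit: un-reserve Ctrl-W so user key bindings take effect
    sed -i '/key_close/s/ reserved="true"//' \
        "$work/unpacked/chrome/browser/content/browser/browser.xul"

    # repack with the flags that produce a usable omni.ja
    ( cd "$work/unpacked" && zip -qr9XD "$work/omni.ja" * )

    cp "$work/omni.ja" "$omni"
    rm -rf "$work"
}
```

Usage would be something like `kitfox_patch ~/firefox` right after unpacking a freshly downloaded Firefox tarball.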

I hope eventually to figure out how it is that omni.ja changes stop working, and whether it's an overlay or something else, and whether there's a way to re-apply fixes without having to download a whole new Firefox. If I figure it out I'll report back.

July 03, 2018

GIMP 2.10.4 Released

The latest update of GIMP’s new stable series delivers bugfixes, simple horizon straightening, async fonts loading, fonts tagging, and more new features.

Simple Horizon Straightening

A common use case for the Measure tool is getting GIMP to calculate the angle of rotation when the horizon in a photo is uneven. GIMP now removes the extra step of performing the rotation manually: after measuring the angle, just click the newly added Straighten button in the tool’s settings dialog.

Straightening images in GIMP 2.10.4.

Asynchronous Fonts Loading

Loading all available fonts on start-up can take quite a while, because as soon as you add new fonts or remove existing ones, fontconfig (a third-party utility GIMP uses) has to rebuild the font cache. Windows and macOS users suffered the most from this.

Thanks to Jehan Pagès and Ell, GIMP now loads fonts in a parallel process, which dramatically improves startup time. The caveat is that if you need to use the Text tool immediately, you might have to wait until all fonts have finished loading. GIMP will notify you of that.

Fonts Tagging

Michael Natterer introduced some internal changes to make fonts taggable. The user interface is the same as for brushes, patterns, and gradients.

GIMP doesn’t yet automatically generate any tags from fonts metadata, but this is something we keep on our radar. Ideas and, better yet, patches are welcome!

Dashboard Updates

Ell added several new features to the Dashboard dockable dialog, which helps debug GIMP and GEGL or, for end users, fine-tune the use of cache and swap.

The new Memory group of widgets shows the currently used memory size, the available physical memory size, and the total physical memory size. It can also show the tile-cache size, for comparison against the other memory stats.

Updated Dashboard in GIMP 2.10.4.

Note that the upper-bound of the meter is the physical memory size, so the memory usage may be over 100% when GIMP uses the swap.

The Swap group now features “read” and “written” fields which report the total amount of data read-from/written-to the tile swap, respectively. Additionally, the swap busy indicator has been improved, so that it’s active whenever data has been read-from/written-to the swap during the last sampling interval, rather than at the point of sampling.

PSD Loader Improvements

While we cannot yet support PSD features such as adjustment layers, there is one thing we can do for users who just need a file to render correctly in GIMP. Thanks to Ell, GIMP can now load a “merged”, pre-composited version of the image, which becomes available when a PSD file was saved with the “Maximize Compatibility” option enabled in Photoshop.

This option is currently exposed as an additional file type (“Photoshop image (merged)”), which has to be explicitly selected from the filetype list when opening the image. GIMP will then render the file correctly, but drop certain additional data from the file, such as channels, paths, and guides, while retaining metadata.

Builds for macOS Make a Comeback

Beta builds of GIMP 2.10 for macOS are available now. We haven’t eliminated all issues yet, and we appreciate your feedback.

GEGL and babl

Ell further improved the Recursive Transform operation, allowing multiple transformations to be applied simultaneously. He also fixed the trimming of the tile cache into the swap.

The new Selective Hue-Saturation operation by Miroslav Talasek is now available in the workshop. The idea is that you can choose a hue, then select the width of the hue range around that base hue, then tweak the saturation of all affected pixels.

Øyvind Kolås applied various fixes to the Pixelize operation and added the “needs-alpha” meta-data to Color to Alpha and svg-luminancetoalpha operations. He also added a Threshold setting to the Unsharp Mask filter (now called Sharpen (Unsharp Mask)) to restore and improve the legacy Unsharp Mask implementation from GIMP prior to v2.10.

In babl, Ell introduced various improvements to the babl-palette code, including making the default palette initialization thread-safe. Øyvind Kolås added an R~G~B~ set of spaces (which for all BablSpaces mean use sRGB TRC), definitions of ACEScg and ACES2065-1 spaces, and made various clean-ups. Elle Stone contributed a fix for fixed-to-double conversions.

Ongoing Development

While we spend as much time on bugfixing in 2.10.x as we can, our main goal is to complete the GTK+3 port as soon as possible. There is a side effect of this work: we keep discovering old subpar solutions that frustrate us until we fix them. So there is both GTK+3 porting and refactoring, which means we can’t predict when it’ll be done.

Recently, we also revitalized an outdated subproject called ‘gimp-data-extras’ with the sole purpose of keeping the Alpha-to-Logo scripts that we removed from 2.10 due to poor graphics quality. Since some users miss those scripts, there is now a simple way to get them back: download gimp-data-extras v2.0.4, unpack the archive, and copy all ‘.scm’ files from the ‘scripts’ folder to your local GIMP’s ‘scripts’ folder.

July 02, 2018

Affiliated Vendors on the LVFS

We’re just about to deploy another feature to the LVFS that might be interesting to some of you. First, some nomenclature:

OEM: Original Equipment Manufacturer, the user-known company name on the outside of the device, e.g. Sony, Panasonic, etc
ODM: Original Device Manufacturer, typically making parts for one or more OEMs, e.g. Foxconn, Compal

There are some OEMs where the ODM is the entity responsible for uploading the firmware to the LVFS. The per-device QA is typically done by the OEM rather than the ODM, although it can be both. Before today we didn’t have a good story about how to handle this other than having a “fake” oem_odm@oem.com user account shared by all users at the ODM. A fake account isn’t good design from a security or privacy point of view, so we needed something better.

The LVFS administrator can now mark one vendor as an “affiliate” of another. This gives the ODM permission to upload firmware that is “owned” by the OEM on the LVFS and that appears in the OEM embargo metadata. The OEM QA team is also able to edit the update description, move the firmware to testing and stable (or delete it entirely) as required. The ODM vendor account also doesn’t have to appear in the search results or the vendor table, making it hidden to all users except OEMs.

This also means that if an ODM like Foxconn builds firmware for two different OEMs, it has to specify which vendor should “own” the firmware at upload time. This is achieved with a simple selection widget on the upload page, which is only shown if affiliations have been set up. The ODM is able to manage its user accounts directly, either using local accounts with passwords or ODM-specific OAuth; the latter is the preferred choice, as it means there is only one place to manage credentials.

If anyone needs more information, please just email me or leave a comment below. Thanks!

fwupdate is {nearly} dead; long live fwupd

If the title confuses you, you’re not the only one who’s been confused by the fwupdate and fwupd project names. The latter used the former’s shared library to schedule UEFI updates, and the former also provided the fwup.efi secure-boot-signed binary that actually runs the capsule update.

In Fedora the only users of libfwupdate were fwupd and the fwupdate command-line tool itself. It makes complete sense to absorb the redundant libfwupdate library interface into the uefi plugin in fwupd. Benefits I can see include:

  • fwupd and fwupdate are very similar names; a lot of ODMs and OEMs have been confused, especially the ones not so Linux-savvy.
  • fwupd already depends on efivar for other things, and so there are no additional deps in fwupd.
  • Removal of an artificial library interface, with all the soname and package-induced pain. No matter how small, maintaining any project is a significant use of resources.
  • The CI and translation hooks are already in place for fwupd, and we can use the merging of projects as a chance to write lots of low-level tests for all the various hooks into the system.
  • We don’t need to check for features or versions in fwupd, we can just develop the feature (e.g. the BGRT localised background image) all in one branch without #ifdefs everywhere.
  • We can do cleverer things whilst running as a daemon, for instance uploading the fwup.efi to the ESP as required rather than installing it as part of the distro package.

The last point is important: several distros don’t allow packages to install files on the ESP, and this was blocking fwupdate from being used by them. Also, 95% of the failures reported to the LVFS are from Arch Linux users who didn’t set up the ESP correctly as the wiki says. With this new code we can likely reduce the reported error rate by several orders of magnitude.

Note that fwupd doesn’t actually obsolete fwupdate, as the latter might still be useful if you’re testing capsule updates on something super-embedded that doesn’t ship GLib or D-Bus. We do ship a D-Bus-less fwupdate-compatible command line in /usr/libexec/fwupd/fwupdate if you’re using the old CLI from a shell script. We’re all planning to work on the new integrated fwupd version, but I’m sure there’ll be some sharing of fixes between the projects, as libfwupdate is shipped in a lot of LTS releases like RHEL 7.

All of this new goodness is available in fwupd git master, which will become the new 1.1.0 release, probably available next week. The 1_0_X branch (which depends on libfwupdate) will be maintained for a long time, and is probably the better choice to ship in LTS releases at the moment. Any distros that ship the new 1.1.x fwupd versions will need to ensure that the fwup.efi files are signed properly if they want SecureBoot to work; in most cases just copying over the commands from the fwupdate package is all that is required. I’ll be updating Fedora Rawhide with the new package as soon as it’s released.

Comments welcome.

FreeCAD BIM development news - June 2018

Hi all, time for a new update on the development of BIM tools for FreeCAD. There is some exciting new stuff, most of it things that I've been working on for some time and that are now ready. As always, a big thank you to everybody who helped me this month through Patreon or Liberapay! We are...

June 27, 2018

Krita 4.1.0 Released

Three months after the release of Krita 4.0, we’re releasing Krita 4.1!

This release includes the following major new features:

  • A new reference images tool that replaces the old reference images docker.
  • You can now save and load sessions: the set of images and views on images you were working on
  • You can create multi-monitor workspace layouts
  • An improved workflow for working with animation frames
  • An improved animation timeline display
  • Krita can now handle larger animations by buffering rendered frames to disk
  • The color picker now has a mixing option
  • Improved vanishing point assistant — and assistants can be painted with custom colors
  • Krita’s scripting module can now be built with Python 2
  • The first part of Ivan Yossi’s Google Summer of Code work on improving the performance of brush masks through vectorization is included as well!

And there are a host of bug fixes, of course, and improvements to the rendering performance and more features. Read the full release notes to discover what’s new in Krita 4.1!

Image by RJ Quiralta

Note!

We found a bug where activating the transform tool will cause a crash if you had previously selected the Box filter. If you experience a crash when enabling the transform tool in Krita 4.1.0, go to your kritarc file and remove the line that says “filterId=Box” in the [KisToolTransform] section. Sorry for the inconvenience. We will bring out a bugfix release as soon as possible.
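The removal can be scripted; here is a sketch (the function name is mine, not an official Krita tool; on most Linux systems the kritarc file lives at ~/.config/kritarc, and Krita should be closed first):

```shell
# Delete the "filterId=Box" line, but only inside the [KisToolTransform]
# section, leaving any other section untouched. Back up your kritarc first.
remove_box_filter() {
    kritarc=$1
    # the address range runs from the section header to the next "[" header
    sed -i '/^\[KisToolTransform\]/,/^\[/{/^filterId=Box/d;}' "$kritarc"
}
```

For example: `remove_box_filter ~/.config/kritarc`.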

Download

Windows

Note for Windows users: if you encounter crashes, please follow these instructions to use the debug symbols so we can figure out where Krita crashes.

Linux

(If, for some reason, Firefox thinks it needs to load this as text: to download, right-click on the link.)

When it is updated, you can also use the Krita Lime PPA to install Krita 4.1.0 on Ubuntu and derivatives. We are working on an updated snap.

OSX

Note: the touch docker, gmic-qt and python plugins are not available on OSX.

Source code

md5sum

For all downloads:

Key

The Linux appimage and the source tarball are signed. You can retrieve the public key over https here: 0x58b9596c722ea3bd.asc. The signatures are here (filenames ending in .sig).

Support Krita

Krita is a free and open source project. Please consider supporting the project with donations or by buying training videos or the artbook! With your support, we can keep the core team working on Krita full-time.

June 25, 2018

Interview with Natasa

Could you tell us something about yourself?

Hey, my name is Natasa, I’m a Greek illustrator from Athens currently living in Portugal. My nick is Anastasia_Arjuk. I get all of my inspiration from nature, mythology and people.

Do you paint professionally, as a hobby artist, or both?

I’ve been working on and off professionally: I did some book covers, children’s book illustration and a bit of jewelry design back home. But life happened, and now I’m starting fresh, trying to build something that’s all mine. I’ve never stopped drawing though, and I’m very happy about that.

What genre(s) do you work in?

The picture has to tell a story, that’s all I really look into. Other than that I just pick what feels right each time.

Whose work inspires you most — who are your role models as an artist?

There are so many! Among contemporary artists I’d say Gennady Spirin, Lisbeth Zwerger and Andrew Hem. In digital art, Apterus is excellent in my opinion. I also love Byzantine art, Islamic art and a huge number of old painters, way too many to mention here. Don’t ignore the history of art, folks; you won’t believe the difference it will make to your work.

How and when did you get to try digital painting for the first time?

I actually started in early 2017; I had been working only traditionally before that. I’m still not completely comfortable with it, but I’m getting there.

What makes you choose digital over traditional painting?

For practical reasons really, it’s so much easier to work professionally on digital art. From having more room, to mailing, to everything. I still prefer traditional art for my personal projects though.

How did you find out about Krita?

I was looking on YouTube for Photoshop lessons at the time, and ran into the channel of an artist who was using Krita. The brushwork seemed so creamy and rich, I had to try it out.

What was your first impression?

I loved the minimal UI and it felt very intuitive. Easy to pick up and go.

What do you love about Krita?

First of all, it has an Animation Studio included; I haven’t done 2D animation in years and now I can do it at home, on my PC. Yay! The brush engine is second to none, quite frankly, and yes, I tried more than Krita before I reached that conclusion. I love the mirror tools, the eraser system and that little colour pickup docker where you can attach your favorite brushes as well. Love that little bugger, so practical. Oh, and the pattern tool.

What do you think needs improvement in Krita? Is there anything that really annoys you?

I’d like to be able to lock the entire UI in place, not just the dockers, if possible. To be able to zoom in and out like in Photoshop, with the Z key in combination with the pen. An improved Text tool. Also probably a stronger engine, to handle larger files. Just nitpicking really.

What sets Krita apart from the other tools that you use?

It’s a very professional freeware program. I very much support what that stands for and like I said, amazing amazing brush engine. Coming from traditional media, textures are extremely important for me. Also the animation possibilities.

If you had to pick one favourite of all your work done in Krita so far, what would it be, and why?

I don’t like to dwell on older pieces, since you can see all their mistakes after they’re done, but I’d say Anansi, the Spider. I learned a lot working on that piece.

What techniques and brushes did you use in it?

I just painted it the same way as traditional art: layers of colour on top of each other; new layer, paint, erase at certain spots, rinse and repeat, a bit like a watercolour technique. I wanted the underpainting to be visible in parts. I don’t remember the brushes now, but they were all default brushes from the Paint tab, which I also used as erasers. A little bit of overlay filters and color maps, and voilà. Like I said, Krita is very intuitive.

Where can people see more of your work?

Artstation: https://www.artstation.com/anastasia_arjuk
Behance: https://www.behance.net/Anastasia_Arjuk
Instagram: https://www.instagram.com/anastasia_arjuk/
Twitter: https://twitter.com/Anastasia_Arjuk
Deviant Art: https://anastasia-arjuk.deviantart.com/
YouTube: https://www.youtube.com/channel/UCAy9Hg8ZaV87wqT6GO4kVnw

Anything else you’d like to share?

Thanks for having me first of all and keep up the good work. In all honesty Krita makes a huge difference to people who want to get involved with art but can’t afford (or don’t want to use) the industry standards. So such a professional open source program is a vital help.

June 24, 2018

Modifying Firefox Files Inside Omni.ja

My article on Fixing key bindings in Firefox Quantum by modifying the source tree got attention from several people who offered helpful suggestions via Twitter and email on how to accomplish the same thing using just files in omni.ja, so it could be done without rebuilding the Firefox source. That would be vastly better, especially for people who need to change something like key bindings or browser messages but don't have a souped-up development machine to build the whole browser.

Brian Carpenter had several suggestions and eventually pointed me to an old post by Mike Kaply, Don’t Unpack and Repack omni.ja[r] that said there were better ways to override specific files.

Unfortunately, Mike Kaply responded that that article was written for XUL extensions, which are now obsolete, so the article ought to be removed. That's too bad, because it did sound like a much nicer solution. I looked into trying it anyway, but the instructions it points to for overriding specific files are woefully short on detail about how to map a path inside omni.ja, like chrome://package/type/original-uri.whatever, to a URL, and the single example I could find was so old that the file it referenced didn't exist at the same location any more. After a fruitless half hour or so, I took Mike's warning to heart and decided it wasn't worth wasting more time chasing something that wasn't expected to work anyway. (If someone knows otherwise, please let me know!)

But then Paul Wise offered a solution that actually worked, as an easy to follow sequence of shell commands. (I've changed some of them very slightly.)

$ tar xf ~/Tarballs/firefox-60.0.2.tar.bz2
  # (This creates a "firefox" directory inside the current one.)

$ mkdir omni
$ cd omni

$ unzip -q ../firefox/browser/omni.ja
warning [../firefox-60.0.2/browser/omni.ja]:  34187320 extra bytes at beginning or within zipfile
  (attempting to process anyway)
error [../firefox-60.0.2/browser/omni.ja]:  reported length of central directory is
  -34187320 bytes too long (Atari STZip zipfile?  J.H.Holm ZIPSPLIT 1.1
  zipfile?).  Compensating...
zsh: exit 2     unzip -q ../firefox-60.0.2/browser/omni.ja

$ sed -i 's/or enter address/or just twiddle your thumbs/' chrome/en-US/locale/browser/browser.dtd chrome/en-US/locale/browser/browser.properties

I was a little put off by all the warnings unzip gave, but kept going.

Of course, you can just edit those two files rather than using sed; but the sed command was Paul's way of being very specific about the changes he was suggesting, which I appreciated.

Use these flags to repackage omni.ja:

$ zip -qr9XD ../omni.ja *

I had tried that before (without the q since I like to see what zip and tar commands are doing) and hadn't succeeded. And indeed, when I listed the two files, the new omni.ja I'd just packaged was about a third the size of the original:

$ ls -l ../omni.ja ../firefox/browser/omni.ja
-rw-r--r-- 1 akkana akkana 34469045 Jun  5 12:14 ../firefox/browser/omni.ja
-rw-r--r-- 1 akkana akkana 11828315 Jun 17 10:37 ../omni.ja

But still, it's worth a try:

$ cp ../omni.ja ../firefox/browser/omni.ja

Then run the new Firefox. I have a spare profile I keep around for testing, but Paul's instructions included a nifty way of running with a brand new profile and it's definitely worth knowing:

$ cd ../firefox

$ MOZILLA_DISABLE_PLUGINS=1 ./firefox -safe-mode -no-remote -profile $(mktemp -d tmp-firefox-profile-XXXXXXXXXX) -offline about:blank

Also note the flags like safe-mode and no-remote, plus disabling plugins -- all good ideas when testing something new.

And it worked! When I started up, I got the new message, "Search or just twiddle your thumbs", in the URL bar.

Fixing Ctrl-W

Of course, now I had to test it with my real change. Since I like Paul's way of using sed to specify exactly what changes to make, here's a sed version of my Ctrl-W fix:

$ sed -i '/key_close/s/ reserved="true"//' chrome/browser/content/browser/browser.xul

Then run it. To test Ctrl-W, you need a website that includes a text field you can type in, so -offline isn't an option unless you happen to have a local web page that includes some text fields. Google is an easy way to test ... and you might as well re-use that firefox profile you just made rather than making another one:

$ MOZILLA_DISABLE_PLUGINS=1 ./firefox -safe-mode -no-remote -profile tmp-firefox-profile-* https://google.com

I typed a few words in the google search field that came up, deleted them with Ctrl-W -- all was good! Thanks, Paul! And Brian, and everybody else who sent suggestions.

Why are the sizes so different?

I was still puzzled by that threefold difference in size between the omni.ja I repacked and the original that comes with Firefox. Was something missing? Paul had the key to that too: use zipinfo on both versions of the file to see what differed. Turned out Mozilla's version, after a long file listing, ends with

2650 files, 33947999 bytes uncompressed, 33947999 bytes compressed:  0.0%

while my re-packaged version ends with

2650 files, 33947969 bytes uncompressed, 11307294 bytes compressed:  66.7%

So apparently Mozilla's omni.ja uses no compression at all. It may be that this makes it start up a little faster; but Quantum takes so long to start up that any slight difference in uncompressing omni.ja isn't noticeable to me.

I was able to run through this whole procedure on my poor slow netbook, the one where building Firefox took something like 15 hours ... and in a few minutes I had a working modified Firefox. And with the sed command, this is all scriptable, so it'll be easy to re-do whenever Firefox has a security update. Win!
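Since the whole thing is scriptable, the sed expression itself can be sanity-checked on a stand-in line before touching a real omni.ja. Note the XUL line below is a simplified stand-in, not the actual contents of browser.xul:

```shell
# Try the Ctrl-W fix's sed expression on a simplified stand-in line;
# the real key_close element in browser.xul has more attributes.
printf '<key id="key_close" command="cmd_close" reserved="true"/>\n' > sample.xul
sed -i '/key_close/s/ reserved="true"//' sample.xul
cat sample.xul
# the reserved="true" attribute is gone:
# <key id="key_close" command="cmd_close"/>
```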

Update: I have a simple shell script to do this: Script to modify omni.ja for a custom Firefox.

June 22, 2018

Thomson 8-bit computers, a history

In March 1986, my dad was in the market for a Thomson TO7/70. I have the circled classified ads in “Téo” issue 1 to prove that :)



TO7/70 with its chiclet keyboard and optical pen, courtesy of MO5.com

The “Plan Informatique pour Tous” was in full swing, and Thomson were supplying schools with micro-computers. My dad, as a primary school teacher, needed to know how to operate those computers, and eventually teach them to kids.

The first thing he showed us when he got the computer, on the living room TV, was a game called “Panic” or “Panique” where you controlled a missile, protecting a town from flying saucers that flew across the screen from either side, faster and faster as the game went on. I still haven't been able to locate this game again.

A couple of years later, the TO7/70 was replaced by a TO9, with a floppy disk drive, and my dad used that computer to write a piece of educational software about top-down additions, as part of a training program run by the teachers schools (“Écoles Normales”, renamed to “IUFM” in 1990).

After months of nagging, and some spring cleaning, he found the listings of his educational software, which I've liberated, with his permission. I'm currently still working out how to generate floppy disks that are usable directly in emulators. But here's an early screenshot.


Later on, my dad got an IBM PC compatible, an Olivetti PC/1, on which I'd play a clone of Asteroids for hours, but that's another story. The TO9 got passed down to me, and after spending a full summer doing planning for my hot-dog and chips van business (I was 10 or 11, and I had weird hobbies already), and entering every game from the “102 Programmes pour...” series of books, the TO9 got put to the side at Christmas, replaced by a Sega Master System, using that same handy SCART connector on the Thomson monitor.

But how does this concern you? Well, I worked with RetroManCave on a Minitel episode not too long ago, and he agreed to do a history of the Thomson micro-computers. I did a fair bit of the research and fact-checking, as well as some needed repairs to the (prototype!) hardware I managed to find for the occasion. The result is this first look at the history of Thomson.



Finally, if you fancy diving into the Thomson computers, there will be an episode coming shortly about the MO5E hardware, and some games worth running on it, on the same YouTube channel.

I'm currently working on bringing the “TeoTO8D” emulator to Flathub, for Linux users. When that's ready, grab some games from the DCMOTO archival site, and have some fun!

I'll also be posting some nitty gritty details about Thomson repairs on my Micro Repairs Twitter feed for the more technically inclined among you.

June 21, 2018

First Beta Release of Krita 4.1

Three months after the release of Krita 4.0, we’re releasing the first (and probably only) beta of Krita 4.1, a new feature release! This release includes the following major new features:

  • A new reference images tool that replaces the old reference images docker.
  • You can now save and load sessions: the set of images and views on images you were working on
  • You can create multi-monitor workspace layouts
  • An improved workflow for working with animation frames
  • An improved animation timeline display
  • Krita can now handle larger animations by buffering rendered frames to disk
  • The color picker now has a mixing option
  • Improved vanishing point assistant — and assistants can be painted with custom colors
  • Krita’s scripting module can now be built with Python 2
  • The first part of Ivan Yossi’s Google Summer of Code work on improving the performance of brush masks through vectorization is included as well!

And there’s more. Read the full release notes to discover what’s new in Krita 4.1! With this beta release, the release notes are still a work in progress, though.

Download

Windows

Note for Windows users: if you encounter crashes, please follow these instructions to use the debug symbols so we can figure out where Krita crashes.

Linux

(If, for some reason, Firefox thinks it needs to load this as text: to download, right-click on the link.)

When it is updated, you can also use the Krita Lime PPA to install Krita 4.1.0-beta.2 on Ubuntu and derivatives. We are working on an updated snap.

OSX

Note: the touch docker, gmic-qt and python plugins are not available on OSX.

Source code

md5sum

For all downloads:

Key

The Linux appimage and the source tarball are signed. You can retrieve the public key over https here: 0x58b9596c722ea3bd.asc. The signatures are here (filenames ending in .sig).

Support Krita

Krita is a free and open source project. Please consider supporting the project with donations or by buying training videos or the artbook! With your support, we can keep the core team working on Krita full-time.

June 18, 2018

Practical Printer Profiling with Gutenprint

Some time ago I purchased an Epson Stylus Photo R3000 printer, as I wanted to be able to print at A3 size, and get good quality monochrome prints. For a while I struggled a bit to get good quality color photo output from the R3000 using Gutenprint, as it took me a while to figure out which settings proved best for generating and applying ICC profiles.

Sidenote: if you happen to have an R3000 as well and you want to get good results using Gutenprint, you can get some of my profiles here. Not all of these profiles have been practically tested, so obviously your mileage may vary.

Gutenprint’s documentation clearly indicates that you should use the “Uncorrected” Color Correction mode, which is very much good advice, as we need deterministic output to be able to generate and apply our ICC profiles in a consistent manner. What kinda threw me off is that the “Uncorrected” Color Correction mode produces linear gamma output, which in practice means very dark output that the ICC profile will need to correct for. While this is a valid approach, it generally means you need to generate the profile from more color patches, which means using more ink and paper for each profile you generate. A more practical approach is to set Composite Gamma to a value of 1.0, which gamma-corrects the output to look more perceptually natural. Consequently the ICC profile has less to correct for, and thus can be generated from fewer color patches, using less ink and paper.

Keep in mind that a printer profile is only valid for a particular combination of Printer, Ink set, Paper, Driver and Settings. Therefore you should document all these facets while generating a profile. This can be as simple as including a similarly named plain text file with each profile you create, for example:

Filename ............: epson_r3000_tecco_photo_matte_230.icc
MD5 Sum .............: 056d6c22ea51104b5e52de8632bd77d4

Paper Type ..........: Tecco Photo Matte 230

Printer Model .......: Epson Stylus Photo R3000
Printer Ink .........: Epson UltraChrome K3 with Vivid Magenta
Printer Firmware ....: AS25E3 (09/11/14)
Printer Driver ......: Gutenprint 5.2.13 (Ubuntu 18.04 LTS)

Color Model .........: RGB
Color Precision .....: Normal
Media Type ..........: Archival Matte Paper
Print Quality .......: Super Photo
Resolution ..........: 1440x1440 DPI
Ink Set .............: Matte Black
Ink Type ............: Standard
Quality Enhancement .: None
Color Correction ....: Uncorrected
Image Type ..........: Photograph
Dither Algorithm ....: EvenTone
Composite Gamma .....: 1.0

You’ll note I’m not using the maximum 5760×2880 resolution Gutenprint supports for this printer: the quality increase seems almost negligible, it slows down printing immensely, and it might also increase ink consumption with little to show for it.

While the Matte Black (MK) ink set and Archival Matte Paper media type works very well for matte papers, you should probably use Photo Black (PK) ink set and Premium Glossy Photo Paper media type for glossy or Premium Semigloss Photo Paper for pearl, satin & lustre media types.

The following profiling procedure uses only a single sheet of A4 paper, with very decent results. You can use multiple pages by increasing the patch count, but the increase in effective output quality will likely be underwhelming; your mileage may vary, of course.

To proceed you’ll need a spectrophotometer (a colorimeter won’t suffice) supported by ArgyllCMS, such as the X-Rite Color Munki Photo.

To install ArgyllCMS and other relevant tools on Debian (or one of its derivatives like Ubuntu):

apt-get install argyll liblcms2-utils imagemagick

First we’ll need to generate a set of color patches (we’re including a neutral grey axis, so the profile can more effectively neutralize Epson’s warm tone grey inks):

targen -v -d 3 -G -g 14 -f 210 myprofile
printtarg -v -i CM -h -R 42 -t 360 -M 6 -p A4 myprofile

This results in a TIF file, which you need to print at whatever settings you want to use the profile at. Make sure you let the print dry (and outgas) for an hour at the very least. After the print has dried we’ll need to start measuring the patches using our spectrophotometer:

chartread -v -H myprofile

Once all the patches have been read, we’re ready to generate the actual profile.

colprof -v -D "Tecco Photo Matte 230 for Epson R3000" \
           -C "Copyright 2018 Your Name Here" \
           -Zm -Zr -qm -nc \
           -S /usr/share/color/argyll/ref/sRGB.icm \
           -cmt -dpp myprofile

Note: if you’re generating a profile for a glossy or lustre paper type, remove the -Zm from the colprof command line.

Evaluating Your Profile

After generating a custom print profile we can evaluate the profile using xicclu:

xicclu -g -fb -ir myprofile.icc

Looking at the graph above, there are a few things of note. You’ll notice the graph doesn’t touch the lower right corner, which represents the profile’s black point; keep in mind that the blackest black any printer can print still reflects some light, and thus isn’t perfectly black, i.e. 0.

Another point of interest is the curvature of the lines. If the graph bows significantly toward the upper right, the media type you have chosen for your profile is causing Gutenprint to put down more ink than the paper you’re using can take; conversely, if the graph bows significantly toward the lower left, Gutenprint is putting down less ink than the paper can take. While a profile will compensate for either, a profile that has to compensate too strongly may cause banding artifacts in rare cases, especially with an 8-bit workflow. While I haven’t had a case yet where I needed to, you can use the Density control to adjust the amount of ink put on paper.

Visualizing Printer Gamut

To visualize the effective gamut of your profile you can generate a 3D Lab colorspace graph using iccgamut, which you can view with any modern web browser:

iccgamut -v -w -n myprofile.icc
xdg-open myprofile.x3d.htm

Comparing Gamuts

To compare the gamut of our new custom print profile against a standard working colorspace like sRGB follow these steps:

cp /usr/share/color/argyll/ref/sRGB.icm .
iccgamut -v sRGB.icm
iccgamut -v myprofile.icc
viewgam -i -n myprofile.gam sRGB.gam srgb_myprofile
Intersecting volume = 406219.5 cubic units
'epson_r3000_hema_matt_coated_photo_paper_235.gam' volume = 464977.8 cubic units, intersect = 87.36%
'sRGB.gam' volume = 899097.5 cubic units, intersect = 45.18%
xdg-open srgb_myprofile.x3d.htm

From the above output we can conclude that our custom print profile covers about 45% of sRGB, meaning the printer has a gamut that is much smaller than sRGB. However we can also see that sRGB in turn covers about 87% of our custom print profile, which means that 13% of our custom print profile gamut is actually beyond the gamut of sRGB.
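The percentages viewgam reports are simply the intersecting volume divided by each gamut’s own volume; a quick awk sketch reproduces the figures above from the reported volumes:

```shell
# Reproduce viewgam's coverage percentages from the volumes it printed.
awk 'BEGIN {
  intersect = 406219.5   # intersecting volume (cubic units)
  printer   = 464977.8   # custom print profile gamut volume
  srgb      = 899097.5   # sRGB gamut volume
  printf "print profile covered by sRGB: %.2f%%\n", 100 * intersect / printer
  printf "sRGB covered by print profile: %.2f%%\n", 100 * intersect / srgb
}'
# print profile covered by sRGB: 87.36%
# sRGB covered by print profile: 45.18%
```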

This is where gamut mapping comes in: the declared rendering intent determines how colors outside of the shared gamut are handled.

While a Relative Colorimetric rendering intent limits your prints to the shared area, effectively giving you the smallest practical gamut, it will however offer you the best color accuracy.

A Perceptual rendering intent will scale down colors from an area where a working space profile has a larger gamut (the other 55% of sRGB) into a smaller gamut.

A Saturation rendering intent will also scale up colors from an area where a working space profile has a smaller gamut into a larger gamut (the 13% of our custom print profile).

Manually Preparing Prints using liblcms2-utils

To test your profile, I suggest getting a good test image, like for example from SmugMug, and applying your new profile, using either Perceptual gamut mapping or Relative Colorimetric gamut mapping with Black Point Compensation respectively:

jpgicc -v -o printer.icc -t 0    -q 95 original.jpg print.jpg
jpgicc -v -o printer.icc -t 1 -b -q 95 original.jpg print.jpg

When you open either of the print-corrected images, you’ll most likely find they both look awful on your computer’s display, but keep in mind, this is because the images are correcting for printer, driver, ink & paper behavior. If you actually print either image, the printed image should look fairly close to the original image on your computer’s display (presuming you have your display set up properly and calibrated as well).

Manually Preparing Prints using ImageMagick

A more sophisticated way to prepare real images for printing is to use (for example) ImageMagick. The examples below illustrate how you can use ImageMagick to scale an image to a fixed resolution (360 DPI) for a given paper size and add print sharpening (this is why having a known static resolution is important; otherwise the sharpening would give inconsistent results across different images). We then add a thin black border and a larger but equidistant (presuming a 3:2 image) white border, and finally convert the image to our custom print profile:

A4 paper

convert -profile /usr/share/color/argyll/ref/sRGB.icm \
        -resize 2466^ -density 360 -unsharp 2x2+1+0 \
        -bordercolor black -border 28x28 -bordercolor white -border 227x227 \
        -black-point-compensation -intent relative -profile myprofile.icc \
        -strip -sampling-factor 1x1 -quality 95 original.jpg print.jpg
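As a sanity check on these numbers (my own arithmetic, not from the ImageMagick docs): at 360 DPI an A4 page (210 × 297 mm) is 2976 × 4209 pixels, and the resize width plus the black and white borders adds up exactly to the page width, 2466 + 2×28 + 2×227 = 2976:

```shell
# A4 page size in pixels at 360 DPI (25.4 mm per inch, integer arithmetic).
dpi=360
paper_w_mm=210
paper_h_mm=297
echo "$(( paper_w_mm * dpi * 100 / 2540 ))x$(( paper_h_mm * dpi * 100 / 2540 ))"
# -> 2976x4209

# resize width + 2 x black border + 2 x white border = full page width
echo $(( 2466 + 2*28 + 2*227 ))
# -> 2976
```

The same check reproduces the A3 values below (3487 + 2×28 + 2×333 = 4209, i.e. 297 mm at 360 DPI) and the A3+ values (4320 + 2×28 + 2×152 = 4680, i.e. 13 in at 360 DPI).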

A3 paper

convert -profile /usr/share/color/argyll/ref/sRGB.icm \
        -resize 3487^ -density 360 -unsharp 2x2+1+0 \
        -bordercolor black -border 28x28 -bordercolor white -border 333x333 \
        -black-point-compensation -intent relative -profile myprofile.icc \
        -strip -sampling-factor 1x1 -quality 95 original.jpg print.jpg

A3+ paper

convert -profile /usr/share/color/argyll/ref/sRGB.icm \
        -resize 4320^ -density 360 -unsharp 2x2+1+0 \
        -bordercolor black -border 28x28 -bordercolor white -border 152x152 \
        -black-point-compensation -intent relative -profile myprofile.icc \
        -strip -sampling-factor 1x1 -quality 95 original.jpg print.jpg

Automatically Preparing Prints via colord

While the above method gives you a lot of control over how images are prepared for printing, you may also want to use a profile for printing on plain paper, where the input is the output of any random application, as opposed to a raster image file that can easily be preprocessed.

Via colord you can assign a printer an ICC profile that will be automatically applied through cups-filters (pdftoraster). Keep in mind that this profile can only be changed through colormgr (or another colord frontend, like GNOME Control Center) and, sadly, not through an application’s print dialog. To avoid messing with driver settings too much, I would suggest duplicating your printer in CUPS, for example:

  • a printer instance for plain paper prints (with an ICC profile assigned through colord)
  • a printer instance for matte color photographic prints (without a profile assigned through colord)
  • a printer instance for (semi)glossy color photographic prints
  • a printer instance for matte black and white photographic prints (likely without a need for a profile at all).
  • a printer instance for (semi)glossy black and white photographic prints (likely without a need for a profile at all).

One caveat of having a printer duplicated in CUPS is that it essentially also creates multiple print queues. If you send prints to multiple separate queues, you get a race condition where it’s anybody’s guess which queue delivers the next print to your single physical printer, which may result in prints coming out in a different order than you sent them. But my guess is that this disadvantage will hardly be noticeable for most people, and very tolerable to most who would notice it.

One thing to keep in mind is that pdftoraster applies an ICC profile by default using a Perceptual rendering intent, which means that out-of-gamut colors in a source image are scaled to fit inside the print profile’s gamut. Fundamentally the Perceptual rendering intent trades some color accuracy to keep gradients intact, which is most often a fairly sensible thing to do. Given this tidbit of information, and the fact that pdftoraster assumes sRGB input (unless explicitly told otherwise), I’d like to emphasize the importance of passing the -S parameter with an sRGB profile to colprof when generating print profiles for use on Linux.

To assign an ICC profile to be applied automatically by cups-filters:

sudo cp navigator_colour_documents_120.icc /var/lib/colord/icc/navigator_colour_documents_120.icc
colormgr import-profile /var/lib/colord/icc/navigator_colour_documents_120.icc
colormgr find-profile-by-filename /var/lib/colord/icc/navigator_colour_documents_120.icc
colormgr get-devices-by-kind printer
colormgr device-add-profile \
         /org/freedesktop/ColorManager/devices/cups_EPSON_Epson_Stylus_Photo_R3000 \
         /org/freedesktop/ColorManager/profiles/icc_c43e7ce085212ba8f85ae634085ecfd3

More on Gutenprint media types

In contrast to commercial printer drivers, Gutenprint gives us the opportunity to peek under the covers and find out more about the different media types it supports for your printer. First, look up your printer’s model number:

$ grep 'R3000' /usr/share/gutenprint/5.2/xml/printers.xml 
<printer ... name="Epson Stylus Photo R3000" driver="escp2-r3000" ... model="115" ...

Then find the relevant media definition file:

$ grep 'media src' /usr/share/gutenprint/5.2/xml/escp2/model/model_115.xml 
<media src="escp2/media/f360_ultrachrome_k3v.xml"/>

Finally you can dig through the relevant media type definitions, where the Density parameter is of particular interest:

$ less /usr/share/gutenprint/5.2/xml/escp2/media/f360_ultrachrome_k3v.xml
<paper ... text="Plain Paper" ... PreferredInkset="ultra3matte">
  <ink ... name="ultra3matte" text="UltraChrome Matte Black">
    <parameter type="float" name="Density">0.720000</parameter>
<paper ... text="Archival Matte Paper" ... PreferredInkset="ultra3matte">
  <ink ... name="ultra3matte" text="UltraChrome Matte Black">
    <parameter type="float" name="Density">0.920000</parameter>
<paper ... text="Premium Glossy Photo Paper" ... PreferredInkset="ultra3photo">
  <ink ... name="ultra3photo" text="UltraChrome Photo Black">
    <parameter type="float" name="Density">0.720000</parameter>
<paper ... text="Premium Semigloss Photo Paper" ... PreferredInkset="ultra3photo">
  <ink ... name="ultra3photo" text="UltraChrome Photo Black">
    <parameter type="float" name="Density">0.720000</parameter>
<paper ... text="Photo Paper" ... PreferredInkset="ultra3photo">
  <ink ... name="ultra3photo" text="UltraChrome Photo Black">
    <parameter type="float" name="Density">1.000000</parameter> 

Dedicated Grey Neutralization Profile

As mentioned earlier, the Epson R3000 uses warm tone grey inks, which results in very pleasant true black & white images, without any color inks used, at least when Gutenprint is told to print in Greyscale mode.

If, unlike me, you don’t like the warm tone effect, applying the ICC profile we generated should mostly neutralize it, though possibly not perfectly. That is fine for neutral areas in color prints, but may be less satisfactory for proper black & white prints.

While I haven’t done any particular testing on this issue, you may want to consider creating a second profile dedicated to and tuned for grey neutralization; just follow the normal profiling procedure with the following target generation command instead:

targen -v -d 3 -G -g 64 -f 210 -c previous_color_profile.icc -A 1.0 -N 1.0 grey_neutral_profile

Obviously you’ll need to use this particular profile in RGB color mode, even though your end goal may be monochrome, given that the profile needs to use color inks to compensate for the warm tone grey inks.

YouTube Blocks Blender Videos Worldwide

Thursday June 21 2018, by Ton Roosendaal

Last night all videos came back (except the one from Andrew Price, which is still blocked in the USA). According to another person at Youtube, we now *do* have to sign the other agreement as well. You can read it here.

I’m not sure if we should accept this. We will look into it further.

Wednesday 17h, June 20 2018, by Ton Roosendaal

Our videos still don’t play.

Wednesday 10.30h, June 20 2018, by Ton Roosendaal

Last night the Youtube Support team contacted Francesco Siddi by phone. As we understand it now, it’s a mix of coincidences, bad UIs, wrong error messages, ignorant support desks, and our non-standard decision not to monetize a popular Youtube channel.

The coincidence is that Youtube is rolling out their subscription system in Europe (and Netherlands). This subscription system will allow users to stream music and enjoy Youtube ad-free. They updated terms and conditions for it and need to get monetized channel owners to approve that. Coincidentally our channel was set to allow monetization.

The bad UI was that the ‘please accept the new terms’ button was only visible in the new Youtube “Content Manager” account, which I was not aware of and which is not active when you log in to Youtube with the Foundation account to manage videos. The channel was also set to monetization mode, which has no option to reset it. To make us even more confused, yesterday the system generated the wrong agreement to be signed.

(Image: after logging in to the Foundation account, the menu “Switch Accounts” – shows the option to login as “Content Manager”).

Because we had not accepted the new terms, the wrong error message was to put all videos in “Not available in your country” mode, which usually signals a copyright issue. Something similar happened to Andrew Price’s video last year, which (according to our new contact) was because of a trademark dispute, but that was never made explicit to us.

All support desk people we contacted (since December last year) couldn’t find out what was wrong. They didn’t know that not accepting ‘terms and conditions’ could be causing this. Until yesterday they thought there was a technical error.

After reviewing the new terms and conditions (which basically amount to accepting the subscription system), I decided to accept them. According to the new Youtube contact, our channel would then be back in a few hours.

Just while writing this, the video thumbnails appeared to be back! They don’t play yet.

Tuesday (afternoon) 19 June 2018, by Ton Roosendaal

We are doing a PeerTube test on video.blender.org. It is running on one of our own servers, in a European datacenter. Just click around and have some fun. We’re curious to see how it holds up!

Tuesday 19 June 2018, by Ton Roosendaal

Last night we received a contract from Google. You can read it here. It’s six pages of legal talk, but the gist of the agreement appears to be about Blender Foundation accepting to monetize content on its Youtube channel.

However, BF already has an ad-free Youtube account since 2008. We have monetizing disabled, but it looks like Google is going to change this policy. For example, we now see a new section on our channel settings page: “Monetization enabled”.

However, the actual advertisement option is disabled in the advanced settings:

Now there’s another issue. Last year we were notified by US Youtube visitors that a very popular Blender Conference talk wasn’t visible for them – the talk Andrew Price gave in 2016, “The 7 Habits of Highly Effective Artists”. It had over a million views already.

With our channel reaching > 100k subscribers, we have special priority support. So we contacted them to ask what was wrong. After a couple of mails back and forth, the reply was as follows (22 dec 2017):

Thanks for your continued support and patience.

I’ve received an update from our experts stating that you need to enable ads for your video. Once you enable, your video will be available in the USA.

If there’s anything else you’d need help with, please feel free to write back to us anytime as we are available 24/7 to take care of every partner’s concerns.

Appreciate your understanding and thanks for being our valuable partner. Have an amazing day!

Which was quite a surprising statement for us. My reply therefore was (22 dec 2017):

I’m chairman of the Blender Foundation. We choose to use a 100% ad-free channel for our work, to emphasize our public benefit and non-profit goals.

According to your answer we are being forced to enable advertising now.
I would like to know where this new Youtube policy has been published and made official.

We then got a reply like this every other month:

Please allow me some time to work with specialists on your issue. I’ll investigate further and will reach back to you with an update at the earliest possible.

Appreciate your patience and understanding in the interim.

Just last week, June 12, I mailed them again to ask for the status of this issue. The reply was:

I completely understand your predicament. Apologies for the unusual delay in hearing back from the Policy team. I’ve escalated this issue for further investigation and assistance. Kindly bear with us while we get this fixed.

Appreciate your understanding in this regard.

And then on June 15th the entire channel went black.
To us it is still unclear what is going on. It could be related to Youtube’s new “subscription” system. It could also be just a human error or a bug; our refusal to monetize videos on a massively popular channel isn’t common.
However, it remains a fair and relevant question to Google: do you allow ad-free channels without monetization? Stay tuned!

Monday 18 June 2018, by Francesco Siddi

For the past few days all Blender videos on the OFFICIAL BLENDER CHANNEL have been blocked worldwide without explanation. We are working with YouTube to resolve the issue, but the support has been less than stellar. In the meantime you can find most of the videos on cloud.blender.org.

June 17, 2018

Blender at Annecy 2018

The Blender team is back from the Annecy International Animation Film Festival 2018 and MIFA, the industry marketplace which takes place during the festival. Annecy is a major international event for over 11,000 animation industry professionals, and having a Blender presence there was an extremely rewarding experience.


The MIFA 2018

The entrance of the MIFA, at the Hotel Imperial

Hundreds of people stopped by the Blender booth and were amazed by the upcoming Blender 2.8 feature videos, the Blender Open Movie reels, the Hero showcase and the live set of Grease Pencil demos prepared by Daniel M. Lara. Breaking down production files step-by-step was a crowd pleaser and got an impressive number of compliments, good feedback and follow-up requests.

Demo setup

Daniel M. Lara showcasing Grease Pencil

While two years ago our presence was more focused on the upcoming Agent 327 film project, this year's clearer focus on software led to active outreach from studios currently using Blender in their production pipelines. In France alone, there are dozens of small and medium studios using Blender to produce film and TV series. These companies are often looking for artists and professional trainers, and have expressed positive remarks about the Blender Network and the BFCT initiatives.

Café des Arts

Café des Arts is where the festival happens at night

Overall, this experience confirmed the growing appreciation and adoption of Blender as an integral part of production pipelines. This is made possible thanks to the Blender development team and the Blender community, which is often seen as one of the main reasons for switching tools.

A shout out to Pablo, Hjalti and Daniel for the great work at the booth. Keeping the show running 10 hours a day for 4 consecutive days was no joke :)

Until next year!
Francesco

The Annecy 2018 Team


June 14, 2018

security things in Linux v4.17

Previously: v4.16.

Linux kernel v4.17 was released last week, and here are some of the security things I think are interesting:

Jailhouse hypervisor

Jan Kiszka landed Jailhouse hypervisor support, which uses static partitioning (i.e. no resource over-committing), where the root “cell” spawns new jails by shrinking its own CPU/memory/etc resources and hands them over to the new jail. There’s a nice write-up of the hypervisor on LWN from 2014.

Sparc ADI

Khalid Aziz landed the userspace support for Sparc Application Data Integrity (ADI or SSM: Silicon Secured Memory), which is the hardware memory coloring (tagging) feature in Sparc M7. I’d love to see this extended into the kernel itself, as it would kill linear overflows between allocations, since the base pointer being used is tagged to belong to only a certain allocation (sized to a multiple of cache lines). Any attempt to increment beyond, into memory with a different tag, raises an exception. Enrico Perla has some great write-ups on using ADI in allocators and a comparison of ADI to Intel’s MPX.

new kernel stacks cleared on fork

It was possible that old memory contents would live in a new process’s kernel stack. While normally not visible, “uninitialized” memory read flaws or read overflows could expose these contents (especially stuff “deeper” in the stack that may never get overwritten for the life of the process). To avoid this, I made sure that new stacks were always zeroed. Oddly, this “priming” of the cache appeared to actually improve performance, though it was mostly in the noise.

MAP_FIXED_NOREPLACE

As part of further defense in depth against attacks like Stack Clash, Michal Hocko created MAP_FIXED_NOREPLACE. The regular MAP_FIXED has a subtle behavior not normally noticed (but used by some, so it couldn’t just be fixed): it will replace any overlapping portion of a pre-existing mapping. This means the kernel would silently overlap the stack into mmap or text regions, since MAP_FIXED was being used to build a new process’s memory layout. Instead, MAP_FIXED_NOREPLACE has all the features of MAP_FIXED without the replacement behavior: it will fail if a pre-existing mapping overlaps with the newly requested one. The ELF loader has been switched to use MAP_FIXED_NOREPLACE, and it’s available to userspace too, for similar use-cases.

pin stack limit during exec

I used a big hammer and pinned the RLIMIT_STACK values during exec. There were multiple methods to change the limit (through at least setrlimit() and prlimit()), and there were multiple places the limit got used to make decisions, so it seemed best to just pin the values for the life of the exec so no games could get played with them. Too much assumed the value wasn’t changing, so better to make that assumption actually true. Hopefully this is the last of the fixes for these bad interactions between stack limits and memory layouts during exec (which have all been defensive measures against flaws like Stack Clash).
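The limit being pinned is the ordinary RLIMIT_STACK resource limit. A quick way to inspect it from userspace (illustrative only; the pinning itself lives in the kernel's exec path):

```python
import resource

# RLIMIT_STACK is the (soft, hard) pair the kernel consults when laying
# out a new process's stack during exec. The change described above
# snapshots these values for the duration of exec, so setrlimit() or
# prlimit() calls can no longer change them out from under the layout code.
soft, hard = resource.getrlimit(resource.RLIMIT_STACK)
print("RLIMIT_STACK soft:", soft, "hard:", hard)

# Re-applying the same limits is always permitted; only raising the
# hard limit requires privilege.
resource.setrlimit(resource.RLIMIT_STACK, (soft, hard))
assert resource.getrlimit(resource.RLIMIT_STACK) == (soft, hard)
```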

Variable Length Array removals start

Following some discussion over Alexander Popov’s ongoing port of the stackleak GCC plugin, Linus declared that Variable Length Arrays (VLAs) should be eliminated from the kernel entirely. This is great because it kills several stack exhaustion attacks, including weird stuff like stepping over guard pages with giant stack allocations. However, with several hundred uses in the kernel, this wasn’t going to be an easy job. Thankfully, a whole bunch of people stepped up to help out: Gustavo A. R. Silva, Himanshu Jha, Joern Engel, Kyle Spiers, Laura Abbott, Lorenzo Bianconi, Nikolay Borisov, Salvatore Mesoraca, Stephen Kitt, Takashi Iwai, Tobin C. Harding, and Tycho Andersen. With Linus Torvalds and Martin Uecker, I also helped rewrite the max() macro to eliminate false positives seen by the -Wvla compiler option. Overall, about 1/3rd of the VLA instances were solved for v4.17, with many more coming for v4.18. I’m hoping we’ll have entirely eliminated VLAs by the time v4.19 ships.

That’s it for now! Please let me know if you think I missed anything. Stay tuned for v4.18; the merge window is open. :)

© 2018, Kees Cook. This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 License.
Creative Commons License

The Font BoF at Libre Graphics Meeting 2018

(Call it What’s New in Open Fonts, № 003 if you must. At least this one didn’t take as long as the LibrePlanet one to get pushed out into the street.)

Libre Graphics Meeting (LGM) is the annual shindig for all FOSS creative-graphics-and-design projects and users. This year’s incarnation was held in Sevilla, Spain at the end of April. I barely got there in time, having been en route from Typo Labs in Berlin a week earlier.

I put a “typography BoF” onto the schedule, and a dozen-ish people turned up. BoFs vary wildly across instances; this one featured a lot of developers…

Help

…which is good. I got some feedback to a question I posed in my talk: where is the “right” place for a font package to integrate into a Linux/FOSS help system? I could see that working within an application (e.g., GIMP ingests any font-help files and makes them available via the GIMP help tool) or through the standard desktop “help.”

Currently, font packages don’t hook into the help system. At best, some file with information might get stuffed somewhere in /usr/share/doc/, but you’ll never get told about that. But they certainly could hook in properly, to provide quick info on the languages, styles, and features they support. Or, for newer formats like variable fonts or multi-layer color fonts, to provide info on the axes, layers, and color palettes. I doubt anyone could use Bungee without consulting its documentation first.

But a big question here (particularly for feature support) is what’s the distinction between this “help” documentation and the UI demo strings that Matthias is working on showing in the GTK+ font explorer (as discussed in the LibrePlanet post).

The short answer is that “demo strings” show you “what you will get” (and do so at a glance); help documentation tells you the “why this is implemented” … and, by extension, makes it possible to search for a use case. For example: SIL Awami implements a set of features that do Persian stylistic swashes etc.

You could show someone the affected character strings, but knowing the intent behind them is key, and that’s a help issue. E.g., you might open the help box and search for “Persian fonts” and you’d find the Awami help entry related to it. That same query wouldn’t work by just searching for the Arabic letters that happen to get the swash-variant treatment.

Anyway, GIMP already has hooks in place to register additions to its built-in help system; that’s how GIMP plug-ins get supported by the help framework. So, in theory, the same hooks could be used for a font package to let GIMP know that help docs are available. Inkscape doesn’t currently have this, but Tavmjong Bah noted that it wouldn’t be difficult to add. Scribus does not seem to have an entry point.

In any case, after just a couple of minutes it seemed clear that putting such help documentation into every application is the wrong approach (in addition to not currently being possible). It ought to be system wide. For desktop users, that likely means hooking into the GNOME or KDE help frameworks.

Styles and other human-centric metadata

The group (or is it birds?) also talked about font management. One of the goals of the “low hanging fruit” improvements to font packaging that I’ve been cheerleading for the past few months is that better package-level features will make it possible for *many* application types and/or utilities to query, explore, and help the user make font decisions. So you don’t get just the full-blown FontMatrix-style manager, but you might also get richer font exploration built into translation tools, and you might get a nice “help me find a replacement for the missing font in this document” extension for LibreOffice, etc. Maybe you could even get font identification.

Someone brought up the popular idea that users want to be able to search their font library via stylistic attributes. This is definitely true; the tricky part is that, historically, it’s proven to be virtually impossible to devise a stylistic-classification scheme that (a) works for more than just one narrow slice of the world’s typefaces and (b) works for more than just a small subset of users. PANOSE is one such system that hasn’t taken the world over yet; another guy on Medium threw Machine Learning at the Google Fonts library and came up with his own classification set … although it’s not clear that he intends to make that data set accessible to anybody else.

And, even with a schema, you’d still have to go classify and tag all of the actual fonts. It’d be a lot of additional metadata to track; one person suggested that Fontbakery could include a check for it — someone else commented that you’d really want to track whether or not a human being had given those tags a QA/sanity check.

Next, you’d have to figure out where to store that stylistic metadata. Someone asked whether or not fontconfig ought to offer an interface to request fonts by style. Putting that a little more abstractly, let’s say that you have stylistic tags for all of your fonts (however those tags are generated); should they get stored in the font binary? In fonts.conf? Interestingly enough, a lot of old FontMatrix users still have their FontMatrix tags on their systems (or so they say), so supporting those tags is kind of a de-facto “tagging solution” for new development projects to target. Style tags (however they’re structured) are just a subset of user-defined tags anyway: people are certainly going to want to amend, adjust, overwrite, and edit tags until they’re useful at a personal level.
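To make the storage question concrete, here's a toy sidecar-file approach; everything in it (the *.tags.json convention, the function names) is hypothetical, not any existing tool's format:

```python
import json
import os
import tempfile

def tags_path(font_path):
    # Hypothetical convention: a JSON sidecar next to the font binary.
    return font_path + ".tags.json"

def save_tags(font_path, tags):
    with open(tags_path(font_path), "w") as f:
        json.dump(sorted(set(tags)), f)

def load_tags(font_path):
    try:
        with open(tags_path(font_path)) as f:
            return set(json.load(f))
    except FileNotFoundError:
        return set()

# Users can amend, overwrite, and extend tags freely; style tags are
# just one subset of generic user-defined tags.
d = tempfile.mkdtemp()
font = os.path.join(d, "Awami.ttf")
save_tags(font, ["calligraphic", "Nastaliq", "display"])
save_tags(font, load_tags(font) | {"favourite"})
assert "favourite" in load_tags(font)
```

The trade-off the discussion identified shows up immediately: a sidecar like this survives without touching the font binary, but it's a second file to lose.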

Of course, the question of where to store font metadata is a recurring issue. It’s also come up in Matthias Clasen’s work on implementing smart-font–feature previews for GTK+ and in the AppStream project <font>-object discussion. Storing everything in the binary is nice ‘n’ compact, but it means that info is only accessible where the binary is — not, for example, when you’re scrolling through a package manager trying to find a font you want to install. Storing everything in a sidecar file helps that problem, but it means you’ve got two files to lose instead of one. And, of course, we all die a little on the inside whenever we see “XML”.

Dave Crossland pointed out one really interesting tidbit here: the OpenType STAT table, which was ostensibly created in conjunction with the variable-fonts features, can be used in every single-master, non-variable font, too. There, it can store valuable metadata about where each individual font file sits on the standard axes of variation within a font family (like the weight, width, and optical size in relation to other font files). I wrote a brief article about STAT in 2016 for LWN if you want more detail. It would be a good thing to add to existing open fonts, certainly. Subsequently, Marc Foley at Google has started adding STAT tables to Google Fonts families; it started off as a manual process, but the hope is that getting the workflow worked out will lead to proper tooling down the line.

Last but not least, the Inkscape team indicated that they’re interested in expanding their font chooser with an above-the-family–level selector; something that lets the user narrow down the fonts to choose from in high-level categories like “text”, “handwriting”, “web”, and so on. That, too, requires tracking some stylistic information for each font.

The unanswered question remains: whose job should it be to fix or extend this sort of metadata in existing open fonts? Particularly those that haven’t been updated in a while. Does this work rise to the level of taking over maintainership of the upstream source? That could be controversial, or at least complicated.

Specimens, collections, and more

We also revisited the question of specimen creation. We discussed several existing tools for creating specimens; when considering the problem of specimen creation for the larger libre font universe, however, the problem is more one of time × people-power. Some ideas were floated, including volunteer sprints, drives conducted in conjunction with Open Source Design, and design teachers encouraging/assigning/forcing(?)/cajoling students to work on specimens as a project. It would also certainly help matters to have several specimen templates of different varieties to help interested contributors get started.

Other topics included curated collections of open fonts. This is one way to get over the information-overload problem, and it has come up before; this time it was Brendan Howell who broached the subject. Similar ideas do seem to work for Adobe Typekit. Curated collections could be done at the Open Font Library, but it would require reworking the site. That might be possible, but it’s a bit unclear at the moment how interested Fabricatorz (who maintains the site) would be in revisiting the project in such a disruptive way — much less devoting a big chunk of “company time” to it. More discussion needed.

I also raised the question of reverse-engineering the binary, proprietary VFB font-source format (from older versions of FontLab). Quite a few OFL-licensed fonts have source available in VFB format only. Even if the design of the outlines is never touched again, this makes them hard to debug or rebuild. It’s worse for binary-only fonts, of course, but extending a VFB font is not particularly doable in free software.

The VFB format is deprecated, now replaced by VFC (which has not been widely adopted by OFL projects, so far). FontLab has released a freeware CLI utility that converts VFB smoothly to UFO, an open format. While VFB could perhaps be reverse-engineered by (say) the Document Liberation Project, that might be unnecessary work: batch converting all OFL-VFB fonts once and publishing the UFO source may suffice. It would be a static resource, but could be helpful to future contributors.

It probably goes without saying, but, just in case, the BoF attendees did find more than a few potential uses for shoehorning blockchain technology into fonts. Track versioning of font releases! Track the exact permission set of each font license sold! Solve all your problems! None of these may be practical, but don’t let that stand in the way of raking in heaps of VC money before the Bitcoin bubble crashes.

That about wraps it up. There is an Etherpad with notes from the session, but I’m not quite sure I’ve finished cleaning it up yet (for formatting and to reflect what was actually said versus what may have been added by later edits). I’ll append that once it’s done. For my part, I’m looking forward to GUADEC in a few weeks, where there will inevitably be even more excitement to report back on. Stay tuned.

Firefox Quantum: Fixing Ctrl W (or other key bindings)

When I first tried switching to Firefox Quantum, the regression that bothered me most was Ctrl-W, which I use everywhere as word erase (try it -- you'll get addicted, like I am). Ctrl-W deletes words in the URL bar; but if you type Ctrl-W in a text field on a website, like when editing a bug report or a "Contact" form, it closes the current tab, losing everything you've just typed. It's always worked in Firefox in the past; this is a new problem with Quantum, and after losing a page of typing for about the 20th time, I was ready to give up and find another browser.

A web search found plenty of people online asking about key bindings like Ctrl-W, but apparently since the deprecation of XUL and XBL extensions, Quantum no longer offers any way to change or even just to disable its built-in key bindings.

I wasted a few days chasing a solution inspired by this clever way of remapping keys only for certain windows using xdotool getactivewindow; I even went so far as to write a Python script that intercepts keystrokes, determines the application for the window where the key was typed, and remaps it if the application and keystroke match a list of keys to be remapped. So if Ctrl-W is typed in a Firefox window, Firefox will instead receive Alt-Backspace. (Why not just type Alt-Backspace, you ask? Because it's much harder to type, can't be typed from the home position, and isn't in the same place on every keyboard the way W is.)
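The core of such a script is just a lookup keyed on the focused window; a minimal sketch (the remap table, function names, and the xprop parsing are mine — actual key grabbing and synthesis would go through xdotool or the XTEST extension):

```python
import subprocess

# (application class, keystroke) -> replacement keystroke to synthesize.
REMAP = {
    ("Firefox", "ctrl+w"): "alt+BackSpace",
}

def active_window_class():
    # Ask X which window has focus; requires xdotool and a running X session.
    wid = subprocess.check_output(["xdotool", "getactivewindow"]).strip()
    out = subprocess.check_output(["xprop", "-id", wid, "WM_CLASS"])
    # WM_CLASS(STRING) = "Navigator", "Firefox"  ->  "Firefox"
    return out.decode().rsplit('"', 2)[-2]

def translate(app_class, keystroke):
    # Return the keystroke to deliver: remapped for matching apps,
    # passed through unchanged for everything else.
    return REMAP.get((app_class, keystroke), keystroke)

assert translate("Firefox", "ctrl+w") == "alt+BackSpace"
assert translate("xterm", "ctrl+w") == "ctrl+w"
```

As the next paragraph explains, the pass-through case is exactly where this scheme falls apart under Openbox.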

But sadly, that approach didn't work because it turned out my window manager, Openbox, acts on programmatically-generated key bindings as well as ones that are actually typed. If I type a Ctrl-W and it's in Firefox, that's fine: my Python program sees it, generates an Alt-Backspace and everything is groovy. But if I type a Ctrl-W in any other application, the program doesn't need to change it, so it generates a Ctrl-W, which Openbox sees and calls the program again, and you have an infinite loop. I couldn't find any way around this. And admittedly, it's a horrible hack having a program intercept every keystroke. So I needed to fix Firefox somehow.

But after spending days searching for a way to customize Firefox's keys, to no avail, I came to the conclusion that the only way was to modify the source code and rebuild Firefox from source.

Ironically, one of the snags I hit in building it was that I'd named my key remapper "pykey.py", and it was still in my PYTHONPATH; it turns out the Firefox build also has a module called pykey.py and mine was interfering. But eventually I got the build working.

Firefox Key Bindings

I was lucky: building was the only hard part, because a very helpful person on Mozilla's #introduction IRC channel pointed me toward the solution, saving me hours of debugging. Edit browser/base/content/browser-sets.inc around line 240 and remove reserved="true" from key_closeWindow. It turned out I needed to remove reserved="true" from the adjacent key_close line as well.

Another file that's related, but more general, is nsXBLWindowKeyHandler.cpp around line 832; but I didn't need that since the simpler fix worked.

Transferring omni.ja -- or Not

In theory, since browser-sets.inc isn't compiled C++, it seems like you should be able to make this fix without building the whole source tree. In an actual Firefox release, browser-sets.inc is part of omni.ja, and indeed if you unpack omni.ja you'll see the key_closeWindow and key_close lines. So it seems like you ought to be able to regenerate omni.ja without rebuilding all the C++ code.

Unfortunately, in practice omni.ja is more complicated than that. Although you can unzip it and edit the files, if you zip it back up, Firefox doesn't see it as valid. I guess that's why they renamed it .ja: long ago it used to be omni.jar and, like other .jar files, was a standard zip archive that you could edit. But the new .ja file isn't documented anywhere I could find, and all the web discussions I found on how to re-create it amounted to "it's complicated, you probably don't want to try".
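The naive round-trip is easy to express with Python's zipfile module, and it produces exactly the kind of valid-but-rejected archive described above; the member path and contents below are a stand-in, not the real omni.ja layout:

```python
import os
import tempfile
import zipfile

work = tempfile.mkdtemp()
archive = os.path.join(work, "omni-like.ja")

# Build a toy archive standing in for omni.ja.
with zipfile.ZipFile(archive, "w", zipfile.ZIP_DEFLATED) as z:
    z.writestr("chrome/browser-sets.inc", 'key_closeWindow reserved="true"')

# Unpacking works fine -- this much matches what you see with unzip.
with zipfile.ZipFile(archive) as z:
    names = z.namelist()
    data = z.read(names[0]).decode()

# Editing and re-zipping yields a perfectly valid *zip*, just not a
# valid omni.ja: Firefox expects its own optimized layout, which plain
# zip tools don't reproduce.
with zipfile.ZipFile(archive, "w", zipfile.ZIP_DEFLATED) as z:
    z.writestr(names[0], data.replace(' reserved="true"', ""))

with zipfile.ZipFile(archive) as z:
    assert "reserved" not in z.read(names[0]).decode()
```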

And you'd think that I could take the omni.ja file from my desktop machine, where I built Firefox, and copy it to my laptop, replacing the omni.ja file from a released copy of Firefox. But no -- somehow, it isn't seen, and the old key bindings are still active. They must be duplicated somewhere else, and I haven't figured out where.

It sure would be nice to have a way to transfer an omni.ja. Building Firefox on my laptop takes nearly a full day (though hopefully rebuilding after pulling minor security updates won't be quite so bad). If anyone knows of a way, please let me know!

June 13, 2018

Krita 4.0.4 released!

Today the Krita team releases Krita 4.0.4, a bug fix release of Krita 4.0.0. This is the last bugfix release for Krita 4.0.

Here is the list of bug fixes in Krita 4.0.4:

  • OpenColorIO now works on macOS
  • Fix artefacts when painting with a pixel brush on a transparency mask (BUG:394438)
  • Fix a race condition when using generator layers
  • Fix a crash when editing a transform mask (BUG:395224)
  • Add preset memory to the Ten Brushes Script, to make switching back and forth between brush presets smoother.
  • Improve the performance of the stroke layer style (BUG:361130, BUG:390985)
  • Do not allow nesting of .kra files: using a .kra file with embedded file layers as a file layer would break on loading.
  • Keep the alpha channel when applying the threshold filter (BUG:394235)
  • Do not use the name of the bundle file as a tag automatically (BUG:394345)
  • Fix selecting colors when using the python palette docker script (BUG:394705)
  • Restore the last used colors on starting Krita, not when creating a new view (BUG:394816)
  • Allow creating a layer group if the currently selected node is a mask (BUG:394832)
  • Show the correct opacity in the segment gradient editor (BUG:394887)
  • Remove the obsolete shortcuts for the old text and artistic text tool (BUG:393508)
  • Allow setting the multibrush angle in fractions
  • Improve performance of the OpenGL canvas, especially on macOS
  • Fix painting of pass-through group layers in isolated mode (BUG:394437)
  • Improve performance of loading OpenEXR files (patch by Jeroen Hoolmans)
  • Autosaving will now happen even if Krita is kept very busy
  • Improve loading of the default language
  • Fix color picking when double-clicking (BUG:394396)
  • Fix inconsistent frame numbering when calling FFmpeg (BUG:389045)
  • Fix channel swizzling problem on macOS, where in 16 and 32 bits floating point channel depths red and blue would be swapped
  • Fix accepting touch events with recent Qt versions
  • Fix integration with the Breeze theme: Krita no longer tries to create widgets in threads (BUG:392190)
  • Fix the batch mode flag when loading images from Python
  • Load the system color profiles on Windows and macOS.
  • Fix a crash on macOS (BUG:394068)

Download

Windows

Note for Windows users: if you encounter crashes, please follow these instructions to use the debug symbols so we can figure out where Krita crashes.

Linux

(If, for some reason, Firefox thinks it needs to load this as text: to download, right-click on the link.)

When it is updated, you can also use the Krita Lime PPA to install Krita 4.0.4 on Ubuntu and derivatives. We are working on an updated snap.

OSX

Note: the touch docker, gmic-qt and python plugins are not available on OSX.

Source code

md5sum

For all downloads:

Key

The Linux appimage and the source tarball are signed. You can retrieve the public key over https here:
0x58b9596c722ea3bd.asc
. The signatures are here (filenames ending in .sig).

Support Krita

Krita is a free and open source project. Please consider supporting the project with donations or by buying training videos or the artbook! With your support, we can keep the core team working on Krita full-time.

June 12, 2018

Fingerprint reader support, the second coming

Fingerprint readers are more and more common on Windows laptops, and hardware makers would really like to not have to make a separate SKU without the fingerprint reader just for Linux, if that fingerprint reader is unsupported there.

The original makers of those fingerprint readers just need to send patches to the libfprint Bugzilla, I hear you say, and the problem's solved!

But it turns out it's pretty difficult to write those new drivers, and those patches, without insight into how the internals of libfprint work, and what all those internal, undocumented APIs mean.

Most of the drivers already present in libfprint are the results of reverse engineering, which means that none of them is a best-of-breed example of a driver, with all the unknown values and magic numbers.

Let's try to fix all this!

Step 1: fail faster

When you're writing a driver, the last thing you want is to have to wait for your compilation to fail. We ported libfprint to meson and shaved off a significant amount of time from a successful compilation. We also reduced the number of places where new drivers need to be declared to be added to the compilation.

Step 2: make it clearer

While doxygen is nice because it requires very little scaffolding to generate API documentation, the output is also not up to the level we expect. We ported the documentation to gtk-doc, which has a more readable page layout, easy support for cross-references, and gives us more control over how introductory paragraphs are laid out. See the before and after for yourselves.

Step 3: fail elsewhere

You created your patch locally, tested it out, and it's ready to go! But you don't know about git-bz, and you ended up attaching a patch file which you uploaded. Except you uploaded the wrong patch. Or the patch with the right name but from the wrong directory. Or you know git-bz but used the wrong commit id and uploaded another unrelated patch. This is all a bit too much.

We migrated our bugs and repository for both libfprint and fprintd to Freedesktop.org's GitLab. Merge Requests are automatically built, and discussions are easier to follow!

Step 4: show it to me

Now that we have spiffy documentation and unified bugs, patches, and sources under one roof, we need to modernise our website. We used GitLab's CI/CD integration to generate our website from sources, including creating API documentation and listing supported devices from git master, to reduce the need to search the sources for that information.

Step 5: simplify

This process has started, but isn't finished yet. We're slowly splitting up the internal API between "internal internal" (what the library uses to work internally) and "internal for drivers" which we eventually hope to document to make writing drivers easier. This is partially done, but will need a lot more work in the coming months.

TL;DR: We migrated libfprint to meson, gtk-doc, GitLab, added a CI, and are writing docs for driver authors, everything's on the website!

What’s Worse?

  1. Getting your glasses smushed against your face.
  2. Having your earbuds ripped out of your ear when the cord catches on a doorknob.

June 11, 2018

Interview with Zoe Badini

Could you tell us something about yourself?

Hi, I’m Zoe and I live in Italy. Aside from painting I love cooking and spending my time outdoors, preferably snorkeling in the sea.

Do you paint professionally, as a hobby artist, or both?

I’m just now starting to take my first steps professionally after many years of painting as a hobby.

What genre(s) do you work in?

I love to imagine worlds and stories for my paintings, so most of what I’ve done is related to fantasy illustration and some concept art. I also do portraiture occasionally.

Whose work inspires you most — who are your role models as an artist?

There are way too many to mention. I try to learn as much as I can from other artists, so there are a lot of people I look up to. There are a few I often watch on YouTube, Twitch, or other platforms; I learned a lot from their videos: Clint Cearley, Marco Bucci, Suzanne Helmigh, David Revoy.

How and when did you get to try digital painting for the first time?

I was used to traditional drawing, then a few years ago I saw some beautiful digital illustrations and was curious to try my hand at it, there was this old graphic tablet at my parents’ house, so I tried it. What I made was atrocious, but it didn’t discourage me!

What makes you choose digital over traditional painting?

Working digitally I feel like a wizard, with a touch of my wand I have a huge array of tools at my disposal: different techniques, effects, trying out ideas and discarding them freely if they don’t work out. It’s also a big space saver!

How did you find out about Krita?

I had heard it mentioned a couple of times, then I posted a painting on reddit and a user recommended Krita to me, I was a bit uncertain because I was used to my setup, my brushes and so on… But the seed was planted, in the span of a few months I was using Krita exclusively and I never went back.

What was your first impression?

I was understandably a bit lost and watched a few tutorials, but I found the program intuitive and easy to navigate.

What do you love about Krita?

Its accessibility and completeness: there’s everything I may need to paint at a professional level and it’s easy to find and figure out. Krita also comes with a very nice selection of brushes right out of the box.

What do you think needs improvement in Krita? Is there anything that really annoys you?

Nothing really annoys me, as for improvements I wanted to say the text tool, but I know you’re working on it and it was already improved in 4.0.

What sets Krita apart from the other tools that you use?

As I said it’s professional and easy to use, I feel like it’s made for me. It’s also free, which is great for people just starting out.

If you had to pick one favourite of all your work done in Krita so far, what would it be, and why?

My favourite is always one of the latest I did, just because I get better over time. In this case it’s “Big Game Hunt”.

What techniques and brushes did you use in it?

Nothing particular in terms of technique, my brushes come from the Krita presets, my own experiments and a lot of bundles I gathered from the internet over time.

Where can people see more of your work?

Artstation: https://www.artstation.com/zoebadini
Twitter: https://twitter.com/ZoeBadini

Anything else you’d like to share?

I want to thank the Krita team for making great software, and I encourage people to try it; you won’t be disappointed. If you use it and like it, consider donating to help fund the project!

June 09, 2018

Building Firefox for ALSA (non PulseAudio) Sound

I did the work to build my own Firefox primarily to fix a couple of serious regressions that couldn't be fixed any other way. I'll start with the one that's probably more common (at least, there are many people complaining about it in many different web forums): the fact that Firefox won't play sound on Linux machines that don't use PulseAudio.

There's a bug with a long discussion of the problem, Bug 1345661 - PulseAudio requirement breaks Firefox on ALSA-only systems, and the discussion in the bug links to another discussion of the Firefox/PulseAudio problem. Some comments in those discussions suggest that some near-future version of Firefox may restore ALSA sound for non-Pulse systems; but most of those comments are six months old, yet it's still not fixed in the version Mozilla is distributing now.

In theory, ALSA sound is easy to enable. Build options in Firefox are controlled through a file called mozconfig. Create that file at the top level of your build directory, then add to it:

ac_add_options --enable-alsa
ac_add_options --disable-pulseaudio

You can see other options with ./configure --help

Of course, like everything else in the computer world, there were complications. When I typed mach build, I got:

Assertion failed in _parse_loader_output:
Traceback (most recent call last):
  File "/home/akkana/outsrc/gecko-dev/python/mozbuild/mozbuild/mozconfig.py", line 260, in read_mozconfig
    parsed = self._parse_loader_output(output)
  File "/home/akkana/outsrc/gecko-dev/python/mozbuild/mozbuild/mozconfig.py", line 375, in _parse_loader_output
    assert not in_variable
AssertionError
Error loading mozconfig: /home/akkana/outsrc/gecko-dev/mozconfig

Evaluation of your mozconfig produced unexpected output.  This could be
triggered by a command inside your mozconfig failing or producing some warnings
or error messages. Please change your mozconfig to not error and/or to catch
errors in executed commands.

mozconfig output:

------BEGIN_ENV_BEFORE_SOURCE
... followed by a many-page dump of all my environment variables, twice.

It turned out that was coming from line 449 of python/mozbuild/mozbuild/mozconfig.py:

    # Lines with a quote not ending in a quote are multi-line.
    if has_quote and not value.endswith("'"):
        in_variable = name
        current.append(value)
        continue
    else:
        value = value[:-1] if has_quote else value

I'm guessing this was added because some Mozilla developer sets a multi-line environment variable that has a quote in it but doesn't end with a quote. Or something. Anyway, some fairly specific case. I, on the other hand, have a different specific case: a short environment variable that includes one or more single quotes, and the test for their specific case breaks my build.

(In case you're curious why I have quotes in an environment variable: The prompt-setting code in my .zshrc includes a variable called PRIMES. In a login shell, this is set to the empty string, but in subshells, I add ' for each level of shell under the login shell. So my regular prompt might be (hostname)-, but if I run a subshell to test something, the prompt will be (hostname')-, a subshell inside that will be (hostname'')-, and so on. It's a reminder that I'm still in a subshell and need to exit when I'm done testing. In theory, I could do that with SHLVL, but SHLVL doesn't care about login shells, so my normal shells inside X are all SHLVL=2 while shells on a console or from an ssh are SHLVL=1, so if I used SHLVL I'd have to have some special case code to deal with that.

Also, of course I could use a character other than a single-quote. But in the thirty or so years I've used this, Firefox is the first program that's ever had a problem with it. And apparently I'm not the first one to have a problem with this: bug 1455065 was apparently someone else with the same problem. Maybe that will show up in the release branch eventually.)
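For the record, the heuristic is easy to reproduce in isolation; looks_multiline() below is my reconstruction of the quoted logic, not the actual mozconfig.py code path:

```python
def looks_multiline(value):
    # Mimics the snippet quoted earlier: any value containing a quote
    # that does not *end* with one is treated as the start of a
    # multi-line variable, putting the parser into "in_variable" mode.
    has_quote = "'" in value
    return has_quote and not value.endswith("'")

# The multi-line case the heuristic was presumably written for:
assert looks_multiline("'first line of a multi-line value")

# ...but a short prompt string with embedded quotes, like the PRIMES
# scheme described above, also matches and derails the parser:
assert looks_multiline("(hostname'')-")
```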

Anyway, disabling that line fixed the problem:

    # Lines with a quote not ending in a quote are multi-line.
    if False and has_quote and not value.endswith("'"):

and after that, mach build succeeded, I built a new Firefox, and lo and behold! I can play sound in YouTube videos and on Xeno-Canto again, without needing an additional browser.

June 06, 2018

Updating Wacom Firmware In Linux

I’ve been working with Wacom engineers for a few months now, adding support for the custom update protocol used in various tablet devices they build. The new wacomhid plugin will be included in the soon-to-be released fwupd 1.0.8 and will allow you to safely update the bluetooth, touch and main firmware of devices that support the HID protocol. Wacom is planning a new device that will ship with LVFS support out-of-the-box.

My retail device now has a 0.05″ SWI debugging header installed…

In other news, we now build both flatpak and snap versions of the standalone fwupdtool tool that can be used to update all kinds of hardware on distributions that won’t (or can’t) easily update the system version of fwupd. This lets you easily, for example, install the Logitech unifying security updates when running older versions of RHEL using flatpak and update the Dell Thunderbolt controller on Ubuntu 16.04 using snapd. Neither bundle installs the daemon or fwupdmgr by design, and both require running as root (and outside the sandbox) for obvious reasons. I’ll upload the flatpak to flathub when fwupd and all the deps have had stable releases. Until then, my flatpak bundle is available here.

Working with the Wacom engineers has been a pleasure, and the hardware is designed really well. The next graphics tablet you buy can now be 100% supported in Linux. More announcements soon.

darktable 2.4.4 released

we’re proud to announce the fourth bugfix release for the 2.4 series of darktable, 2.4.4!

the github release is here: https://github.com/darktable-org/darktable/releases/tag/release-2.4.4.

as always, please don’t use the autogenerated tarball provided by github, but only our tar.xz. the checksums are:

$ sha256sum darktable-2.4.4.tar.xz
964320b8c9ffef680fa0407a6ca16ed5136ad1f449572876e262764e78acb04d darktable-2.4.4.tar.xz
$ sha256sum darktable-2.4.4.dmg
9324562c98a52346fa77314103a5874eb89bd576cdbc21fc19cb5d8dfaba307a darktable-2.4.4.dmg
$ sha256sum darktable-2.4.4-win64.exe
3763d681de4faa515049daf3dae62ee21812e8c6c206ea7a246a36c0341eca8c darktable-2.4.4-win64.exe
$ sha256sum darktable-2.4.4-win64.zip
5dba3423b0889c69f723e378564e084878b20baf3996c349bfc9736bed815067 darktable-2.4.4-win64.zip
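
For scripted downloads, the same check can be done with Python's standard library; this is just a convenience sketch equivalent to running sha256sum by hand:

```python
import hashlib

def sha256_of(path, chunk_size=1 << 20):
    """Stream a file through SHA-256 without loading it into memory at once."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Compare against the published value, e.g. for the tar.xz:
# sha256_of("darktable-2.4.4.tar.xz") should equal
# "964320b8c9ffef680fa0407a6ca16ed5136ad1f449572876e262764e78acb04d"
```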

when updating from the currently stable 2.2.x series, please bear in mind that your edits will be preserved during this process, but it will not be possible to downgrade from 2.4 to 2.2.x any more.

Important note: to make sure that darktable can keep on supporting the raw file format for your camera, please read this post on how/what raw samples you can contribute to ensure that we have the full raw sample set for your camera under CC0 license!

and the changelog as compared to 2.4.3 can be found below.

New Features

  • Added 50% zoom option in darkroom mode to the navigation dropdown
  • perspective correction: usability improvement – allow setting the radius when (de)selecting lines

Bugfixes

  • Fix selecting drives in the import dialog on Windows by bundling a patched glib
  • Add some space between checkbox and label in color picker
  • OpenCL: better readability of debug output on memory usage
  • Levels: catch an edge case where float != int
  • Fix the alignment in a tooltip in lens correction
  • Local contrast: Reset strength slider to 120% when double clicked
  • Drop unused clone masks when loading xmp files
  • Remove all sub masks when clearing cloning masks
  • darktable-cltest: do not print summary statistics on OpenCL usage
  • Perspective correction: take aspect parameter into account when judging on neutral settings
  • Haze removal: fix tiled processing
  • Fix install on Windows due to GraphicsMagick’s versioned filenames
  • PPM: Handle byte order when loading files
  • Fix #12165: Don’t try to show dialog without gui
  • Fix an out-of-bounds memory access
  • Tools: Fix typo in darktable-gen-noiseprofile that made it unusable
  • MacOS package: point gettext to correct localedir

Camera support, compared to 2.4.2

Warning: support for Nikon NEF ‘lossy after split’ raws was unintentionally broken due to the lack of such samples. Please see this post for more details. If you have affected raws, please contribute samples!

White Balance Presets

  • Sony ILCE-6500

Noise Profiles

  • Canon EOS 800D
  • Canon EOS Kiss X9i
  • Canon EOS Rebel T7i
  • Nikon COOLPIX B700
  • Nikon D5600
  • Olympus TG-5

Updated translations

  • German
  • Russian

June 04, 2018

Krita Sprint: long fight with jaggy lines on OSX

Two weeks ago we had a very nice and motivating sprint in Deventer, where many members of the Krita team gathered in one place and met each other. Boud has already written a good post about it, so I will try to avoid repetitions and only tell a saga of my main goal for this sprint… fix OSX tablet problems!

Jagged lines caused by OSX input events compression: main symptom – they disappear as soon as one disables openGL

Tablet events compression

Since the very first release of Krita on OSX we’ve had a weird problem. When the user painted too quickly, the strokes became jagged, or as we call it “bent”. The problem happened because tablet events coming from the stylus were being lost somewhere on their way from the driver to Krita.

I should say that this problem has already happened in Krita multiple times on Linux and Windows. In most cases it was caused by a Qt update that introduced/activated “input events compression”: a special feature of Qt that drops extra tablet/mouse move events if the application becomes too slow to process them in time. This feature is necessary for normal, non-painting applications, which do not expect so many tablet move events and would simply drown in them. The main symptom of such compression is that the “jagged lines” almost disappear when you disable the openGL canvas, and it was reported that on OSX this symptom was also present. I had already fixed such compression problems multiple times on other systems, so I was heading to the sprint in quite an optimistic mood…

But I became less optimistic when I arrived at the sprint and checked Qt’s sources: there was no events compression implemented for OSX! I was a bit shocked, but it was so. Tests proved that all events that reached Qt were successfully delivered to Krita. That was a bit unexpected. It looked like OSX itself dropped the events if the application’s event loop didn’t fetch them from the queue in time (I still think that is the case).

So we couldn’t do anything with this compression: it happened somewhere inside the operating system or driver. The only way out was to make the main Krita GUI thread more responsive, but there was another thing… openGL!

Prevent openGL from blocking Krita’s GUI thread

The main symptom of the compression problem was related to the fact that sometimes openGL needs quite a bit of time to upload updated textures or render the canvas. Greatly simplified, our rendering pipeline looked like this:

  1. Image is updated by brush
  2. GUI thread uploads the textures to GPU using glTexImage2D or glTexSubImage2D
  3. GUI thread calls QOpenGLWidget::update() to start new rendering cycle
  4. Qt calls QOpenGLWidget::paintGL(), where we generate mipmaps for the updated textures and render them on screen.

This pipeline worked equally well on all platforms except OSX. If we ran it on OSX, Krita would render the textures with corrupted mipmaps. A long time ago, when we first found this issue, we couldn’t understand why it happened and just added a dirty hack to work around the problem: we added glFinish() between uploading the textures and rendering. It solved the problem of corrupted mipmaps, but it made the rendering loop slower. We never understood why it was needed, but it somehow fixed the problem, and the OSX-specific pipeline started to look like this:

  1. Update the image
  2. Upload textures
  3. Call glFinish() /* VEEERY SLOOOW */
  4. Call QOpenGLWidget::update()
  5. Generate mipmaps and render the textures

We profiled Krita with apitrace and it became obvious that this glFinish() really was the problem. It blocks the event loop for long periods, making OSX drop input events. So we had to remove it; but why was it needed at all? OpenGL guarantees that all GPU calls are executed in chronological order, so why did they become reordered?

I spent almost two days at the sprint trying to find out why this glFinish() was needed, and two more days after returning home. I even thought it was a bug in OSX’s implementation of the openGL protocol… but the explanation was much simpler.

It turned out that we used two separate openGL contexts: one (Qt’s) that uploaded the textures, and another (QOpenGLWidget’s) that rendered the image. These contexts were shared, so we thought they were equivalent, but they are not. Yes, they share all the resources, but the way they process GPU command queues is undefined. On Linux and Windows they seem to share the command queue, so the commands were executed sequentially; but on OSX the queues were separate, so the commands became reordered and we got corrupted mipmaps…

In real life our pipeline looked like this:

  1. [openGL context 1] Update the image
  2. [openGL context 1] Upload textures
  3. [openGL context 2] Call QOpenGLWidget::update()
  4. [openGL context 2] Generate mipmaps and render the textures /* renders corrupted mipmaps, because uploading is not yet finished */

So we just had to move the uploading into the correct openGL context and the bug went away. The patch is now in master and is going to be released in Krita 4.0.4!
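The reordering can be illustrated with a toy model (plain Python, purely illustrative, not Krita's actual code): two independently drained command queues can execute mipmap generation before the upload it depends on, while a single queue cannot:

```python
from collections import deque

# Purely illustrative model: a "texture" and two GPU commands touching it.
texture = {"data": "v1", "mipmaps": None}

def upload():                        # context 1's command
    texture["data"] = "v2"

def gen_mipmaps():                   # context 2's command
    texture["mipmaps"] = "mips(%s)" % texture["data"]

# Two shared-but-separate contexts: each has its own queue, and nothing
# defines the order in which the two queues are drained.
ctx1, ctx2 = deque([upload]), deque([gen_mipmaps])
while ctx2:
    ctx2.popleft()()                 # context 2 happens to run first...
while ctx1:
    ctx1.popleft()()
assert texture["mipmaps"] == "mips(v1)"   # mipmaps of the stale texture

# The fix: issue upload and mipmap generation in the same context,
# whose single queue guarantees chronological order.
texture = {"data": "v1", "mipmaps": None}
ctx = deque([upload, gen_mipmaps])
while ctx:
    ctx.popleft()()
assert texture["mipmaps"] == "mips(v2)"   # mipmaps match the new data
```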

The moral of the story

Always take care which openGL context you use for accessing the GPU. If you are not inside QOpenGLWidget::paintGL(), the context might be a random one!

PS:
Of course, this patch hasn’t fixed the tablet problem completely. Compression still happens somewhere deep inside OSX, but it became almost impossible to notice! 🙂

PPS:
The 2018 Krita sprint was sponsored by KDE e.V. (travel) and the Krita Foundation (accommodation and food).

PPPS:

Apple is deprecating OpenGL…

June 03, 2018

Building Firefox Quantum

With Firefox Quantum, Mozilla has moved away from letting users configure the browser the way they like. If I was going to switch to Quantum as my everyday browser, there were several problems I needed to fix first -- and they all now require modifying the source code, then building the whole browser from scratch.

I'll write separately about fixing the specific problems; but first I had to build Firefox. Although I was a Firefox developer way back in the day, the development environment has changed completely since then, so I might as well have been starting from scratch.

Setting up a Firefox build

I started with Mozilla's Linux build preparation page. There's a script called bootstrap.py that's amazingly comprehensive. It will check what's installed on your machine and install what's needed for a Firefox build -- and believe me, there are a lot of dependencies. Don't take the "quick" part of the "quick and easy" comment at the beginning of the script too seriously; I think on my machine, which already has a fairly full set of build tools, the script was downloading additional dependencies for 45 minutes or so. But it was indeed fairly easy: the script asks lots of questions about optional dependencies, and usually has suggestions, which I mostly followed.

Eventually bootstrap.py finishes loading the dependencies and gets to the point of wanting to check out the mozilla-unified repository, and that's where I got into trouble.

The script wants to check out the bleeding edge tip of Mozilla development. That's what you want if you're a developer checking in to the project. What I wanted was a copy of the currently released Firefox, but with a chance to make my own customizations. And that turns out to be difficult.

Getting a copy of the release tree

In theory, once you've checked out mozilla-unified with Mercurial, assuming you let bootstrap.py enable the recommended "firefoxtree" hg extension (which I did), you can switch to the release branch with:

hg pull release
hg up -c release

That didn't work for me: I tried it numerous times over the course of the day, and every time it died with "abort: HTTP request error (incomplete response; expected 5328 bytes got 2672)" after "adding manifests" when it started "adding file changes".

That sent me on a long quest aided by someone in Mozilla's helpful #introduction channel, where they help people with build issues. You might think it would be a common thing to want to build a copy of the released version of Firefox, and update it when a new release comes out. But apparently not; although #introduction is a friendly and helpful channel, everyone seemed baffled as to why hg up didn't work and what the possible alternatives might be.

Bundles and Artifacts

Eventually someone pointed me to the long list of "bundle" tarballs and advised me on how to get a release tarball there. I actually did that, and (skipping ahead briefly) it built and ran; but I later discovered that "bundles" aren't actually hg repositories and can't be updated. So once you've downloaded your 3 gigabytes or so of Mozilla stuff and built it, it's only good for a week or two until the next Mozilla release, when you're now hopelessly out of date and have to download a whole nuther bundle. Bundles definitely aren't the answer, and they aren't well supported or documented either. I recommend staying away from them.

I should also mention "artifact builds". These sound like a great idea: a lot of the source is already built for you, so you just build a little bit of it. However, artifact builds are only available for a few platforms and branches. If your OS differs in any way from whoever made the artifact build, or if you're requesting a branch, you're likely to waste a lot of time (like I did) downloading stuff only to get mysterious error messages. And even if it works, you won't be able to update it to keep on top of security fixes. Doesn't seem like a good bet.

GitHub to the rescue

Okay, so Mercurial's branch switching doesn't work. But it turns out you don't have to use Mercurial. There's a GitHub mirror for Firefox called gecko-dev, and after cloning it you can use normal git commands to switch branches:

git clone https://github.com/mozilla/gecko-dev.git
cd gecko-dev/
git checkout -t origin/release

You can verify you're on the right branch with git branch -vv, or if you want to list all branches and their remotes, git branch -avv.

Finally: a Firefox release branch that you can actually update!

Building Firefox

Once you have a source tree, you can use the all-powerful mach script to build the current release of Firefox:

./mach build

Of course that takes forever -- hours and hours, depending on how fast your machine is.

Running your New Firefox

The build, after it finishes, helpfully tells you to test it with ./mach run, which runs your newly-built firefox with a special profile, so it doesn't interfere with your running build. It also prints:

For more information on what to do now, see https://developer.mozilla.org/docs/Developer_Guide/So_You_Just_Built_Firefox

Great! Except there's no information there on how to package or run your build -- it's just a placeholder page asking people to contribute to the page.

It turns out that obj-whatever/dist/bin is the directory that corresponds to the tarball you download from Mozilla, and you can run /path/to/mozilla-release/obj-whatever/dist/bin/firefox from anywhere.

I tried filing a bug report asking to have a sub-page created explaining how to run a newly built Firefox, but it hasn't gotten any response. Maybe I'll just edit the "So You Just Built" page.

Incidentally, my gecko-dev build takes 16G of disk space, of which 9.3G is things it built, which are helpfully segregated in obj-x86_64-pc-linux-gnu.

June 02, 2018

Not just Krita at the 2018 Krita Sprint

At the 2018 Krita Sprint we had a special guest: Valeriy Malov, the maintainer of the Plasma Wacom tablet settings module. We’ve asked him to write about his experience at the sprint, so over to him!

Hello,

This is my Krita 2018 sprint report, and a general report / pre-release announcement for the new version of wacomtablet.

Krita 2018 sprint

A couple of weeks ago I attended the Krita sprint; it was both a fun and a productive event. Boudewijn, Timothee and Raghukamath tested the git version of wacomtablet on their computers. I also tested a handful of Wacom devices with the KCM, got some user input, and made a few fixes:

  • Calibration of Cintiq devices should be severely improved. Previously the KCM didn’t account for Cintiq’s unusual sensor coordinate system, which doesn’t start at (0,0) because the device sensors are larger than the built-in screen.
  • Support for devices that report separate USB ID for touch sensor has been added. It’s not great (yet), because it still might require user intervention. If you have a device that is listed twice in the KCM, please run Wacom Tablet Finder utility and mark pen/touch parts of the device accordingly.
  • A few other mapping/calibration improvements: a lock-proportions button, the calibration screen now opens on the screen the tablet is mapped to, and there’s now an option to manually fine-tune calibration values without digging into configuration files (this one was requested at the sprint, but was added only post-sprint).
  • A couple of minor bugs fixed: touch should now follow overall tablet rotation, hotkeys should not repeat themselves anymore.
  • The general consensus was that the KCM’s UI has some usability issues. I’m going to ask the Krita team for some help with that, but this is postponed until the 3.1.0 release. There’s also an open question about having good default settings.
  • I’ve tested rudimentary LED support. It works, provided that the system has been configured to allow normal users access to the Wacom LED API (probably through udev rules). No OLED support yet, but it uses the same API. Basically, this is something to work on; it should be easy to fix, but unfortunately not a priority because not many devices have LEDs/OLEDs.

I want to thank Boudewijn and Irina for hosting the sprint, and Krita Foundation and KDE e.V. for sponsoring the event. Without them those issues probably wouldn’t get fixed anytime soon.

On new release and testing

There has also been a major change since 3.0.0: libwacom support. This should increase the number of devices we support out of the box. However, it’s only partial for now (no LED support yet, no multiple-USB-ID devices yet, and libwacom-supplied button schemes don’t fit very well in the current UI). It also requires libwacom 0.29 for devices with quirky buttons (you can still build with an older libwacom, but it will be much less useful). So don’t throw away “Wacom Tablet Finder” yet.

Another small change is that logging has been ported to QLoggingCategory, which means that to enable debug logs you need to run kdebugsettings and look for “wacom”.

With all these changes I’m going to make a 3.1.0 branch soon, which means that a release should happen this month. The most important bug fixes since 3.0.0 are hard to backport, so most likely there will be no 3.0.1, sorry. There will be no beta release either (Neon Dev Unstable, Arch and Gentoo already provide git builds for testing).

Known issues:

  • No Wayland support. Like, at all, no ETA. I’m ready to cooperate with someone who wants to implement tablet support in KWin wayland, but I can’t work on it myself anytime soon.
  • Automated rotation tracking most likely won’t work on multi-screen setup. This is a Qt bug.
  • Calibration window can enter “drag” mode when touched/dragged by pen. This is quite annoying but it shouldn’t affect calibration results. This is a KDE feature (you can disable it in widget style settings) which I don’t know how to circumvent yet.

There’s also a handful of issues that are kept open for now, but after release of 3.1.0 I’ll eventually close some of them as I consider them fixed, unless anyone confirms otherwise:

  • Bug 334520 – Calibration fails on Tablet PC if external screen is connected and tablet mapped to internal screen (should be fixed in git/3.1.0)
  • Bug 336748 – Calibrate doesn’t work very good on cintiq13 (should be fixed in git/3.1.0)
  • Bug 322918 – Problem with calibrating wacom cintiq 13HD (should be fixed in git/3.1.0)
  • Bug 327952 – wacom module is not working for calibrating a Cintiq 21ux (should be fixed in git/3.1.0)
  • Bug 364043 – Intuos Pro cannot generate settings profiles, cannot configure buttons.
  • Bug 343666 – Device ‘Wacom Bamboo One M Pen’ is not in wacom_devicelist, not able to configure using tablet configuration (should be fixed in git/3.1.0)
  • Bug 339138 – Tablet screen mapping resets after KDE restart
  • Bug 325520 – Dell latitude xt2: touchscreen inverted when rotate to portrait (should be fixed in git/3.1.0)

Full list of open bugs/wishes here.

Do not hesitate to open a bug if you encounter an issue. If no new issues surface after the 3.1.0 release, usability improvements are probably the next priority for the project.

On packaging

This is a sort of very important topic which I can’t do much about directly. Currently, Wacom support in KDE is an optional component, so if the Tablet section is missing from the Input Devices settings, you need to install it. The package usually goes by the name wacomtablet or kcm-wacomtablet. You can check whether your distribution packages it here and here. As far as I know, only KDE Neon, Arch (+derivatives) and Gentoo provide an up-to-date package for wacomtablet right now. Kubuntu has it too, but it’s hidden in an experimental PPA. If you’re using something else, your options are:

  • Building from source and installing as README.md instructs, which I really don’t want people doing for a bunch of reasons.
  • Asking someone else (preferably your distribution’s KDE team) to package it. This is usually done via distribution’s bugtracker or support forums.

Unfortunately due to how the project is structured (it’s just a bunch of plugins), I don’t think I can build an AppImage for everyone to use. So the best way to get it in your distribution is letting distribution maintainers know that it exists and you need it to be packaged.

June 01, 2018

FreeCAD BIM development news - May 2018

Hi there, Time for a new update on BIM development in FreeCAD. Since last month saw the release of version 0.17, we now have our hands free to start working again on new features! There is quite a lot of new stuff this month, as usual now, spread between the Arch and BIM workbenches. For who...

May 31, 2018

Trying Firefox Variants: From Firefox ESR to Pale Moon to Quantum

For the last year or so the Firefox development team has been making life ever harder for users. First they broke all the old extensions that were based on XUL and XBL, so a lot of customizations no longer worked. Then they made PulseAudio mandatory on Linux (bug 1345661), so on systems like mine that don't run Pulse, there's no way to get sound in a web page. Forget YouTube or XenoCanto unless you keep another browser around for that purpose.

For those reasons I'd been avoiding the Firefox upgrade, sticking to Debian's firefox-esr ("Extended Support Release"). But when Debian updated firefox-esr to Firefox 56 ESR late last year, performance became unusable. Like half a minute between when you hit Page Down and when the page actually scrolls. It was time to switch browsers.

Pale Moon

I'd been hearing about the Firefox variant Pale Moon. It's a fork of an older Firefox, supposedly with an emphasis on openness and configurability.

I installed the Debian palemoon package. Performance was fine, similar to Firefox before the tragic firefox-56. It was missing a few things -- no built-in PDF viewer or Reader mode -- but I don't use Reader mode that often, and the built-in PDF viewer is an annoyance at least as often as it's a help. (In Firefox it's fairly random about when it kicks in anyway, so I'm never sure whether I'll get the PDF viewer or a Save-as prompt on any given PDF link).

For form and password autofill, for some reason Pale Moon doesn't fill out fields until you type the first letter. For instance, if I had an account with name "myname" and a stored password, when I loaded the page, both fields would be empty, as if there's nothing stored for that page. But typing an 'm' in the username field makes both username and password fields fill in. This isn't something Firefox ever did and I don't particularly like it, but it isn't a major problem.

Then there were some minor irritations, like the fact that profiles were stored in a folder named ~/.moonchild\ productions/ -- super long so it messed up directory listings, and with a space in the middle. PaleMoon was also very insistent about using new tabs for everything, including URLs launched from other programs -- there doesn't seem to be any way to get it to open URLs in the active tab.

I used it as my main browser for several months, and it basically worked. But the irritations started to get to me, and I started considering other options. The final kicker was when I saw Pale Moon bug 86, in which, as far as I can tell, someone working on PaleMoon for OpenBSD tries to use system libraries instead of PaleMoon's patched libraries, and is attacked for it in the bug. Reading the exchange made me want to avoid PaleMoon for two reasons. First, the rudeness: a toxic community that doesn't treat contributors well isn't likely to last long or to have the resources to keep on top of bug and security fixes. Second, the technical question: if Pale Moon's code is so quirky that it can't use standard system libraries and needs a bunch of custom-patched libraries, what does that say about how maintainable it will be in the long term?

Firefox Quantum

Much has been made in the technical press of the latest Firefox, called "Quantum", and its supposed speed. I was a bit dubious of that: it's easy to make your program seem fast after you force everybody into a few years of working with a program that's degraded its performance by an order of magnitude, like Firefox had. After firefox 56, anything would seem fast.

Still, maybe it would at least be fast enough to be usable. But I had trepidations too. What about all those extensions that don't work any more? What about sound not working? Could I live with that?

Debian has no current firefox package, so I downloaded the tarball from mozilla.org, unpacked it, made a new firefox profile and ran it.

Initial startup performance is terrible -- it takes forever to bring up the first window, and I often get a "Firefox seems slow to start up" message at the bottom of the screen, with a link to a page of a bunch of completely irrelevant hints. Still, I typically only start Firefox once a day. Once it's up, performance is a bit laggy but a lot better than firefox-esr 56 was, certainly usable.

I was able to find replacements for most of the really important extensions (the ones that control things like cookies and javascript). But sound, as predicted, didn't work. And there were several other, worse regressions from older Firefox versions.

As it turned out, the only way to make Firefox Quantum usable for me was to build a custom version where I could fix the regressions. To keep articles from being way too long, I'll write about all those issues separately: how to build Firefox, how to fix broken key bindings, and how to fix the PulseAudio problem.

Funding Krita: 2017

We decided at the last sprint to publish a yearly report on Krita’s income and outlay. We did that in 2015 and 2016. 2017 has been over some time now, so let’s discuss last year’s finances a bit. Last year was weird, of course, and that’s clearly visible from the results: we ended the year € 9.211,84 poorer than we started.

Because of the troubles, we had to split sales and commercial work off from the Krita Foundation. We did have a “company” ready — Boudewijn Rempt Software, which was created when our maintainer was trying to fund his work on Krita through doing totally unrelated freelance jobs, after KO GmbH went bust. That company is now handling sales of art books, dvd’s and so on, as well as doing commercial support for Krita. So the “Sales” number is only for the first quarter of 2017.

We wouldn’t have survived 2017 as a project if two individuals hadn’t generously supported both Dmitry’s and Boudewijn’s work on Krita for several months. That is also not reflected in these numbers: that was handled directly, not through the Krita Foundation. And since Boudewijn, having been badly burned out on his consultancy gig in 2016, couldn’t manage combining working on Krita with a day job anymore, the remainder of his work on Krita in 2017 was sponsored by his savings, which is also not reflected in these numbers. If it were, the amount of money spent on development would be double what is in the books.

Loans were made to Boudewijn and have been repaid in 2018, when the income from the Windows Store started coming in. In 2017 we also produced the 2016 Art Book, which was rather expensive, and very expensive to send out. We still have a lot of copies left, too. Donations are donations we made to people who did things that were useful for Krita as a project, while the post “volunteers” represents money we give under Dutch tax law to people who do an inordinate amount of volunteer work for Krita.

In 2018, we are doing reasonably well. We have done some interesting paid projects for Intel, like optimizing Krita for many-core systems, creating window session management and a reference images tool. The Windows Store sales basically now fund Boudewijn to work full-time on Krita. That money goes to Boudewijn Rempt Software. We have an average of 2000 euros a month in donations to the Krita Foundation, which goes some way to funding Dmitry’s full-time work. Currently, we have 87 subscribers to the Development Fund, and that number is growing. We plan to have a fundraiser again in September.

May 30, 2018

GIMP has moved to Gitlab

Along with the GEGL and babl libraries, GIMP has moved to a new collaborative programming infrastructure based on Gitlab and hosted by GNOME. The new URLs are:

On the end-user side, this mostly means an improved bug reporting experience. The submission is easier to fill in, and we provide two templates — one for bug reports and one for feature requests.

New issue form on Gitlab New issue form on Gitlab.

For developers, it means simplified contribution, as you can simply fork the GIMP repository, commit changes, and send a merge request. Please note that while we accept merge requests, we only do that in cases when patches can be fast-forwarded. That means you need to rebase your fork on the master branch (we’ll see if we can do merge requests for the ‘gimp-2-10’ branch).

In the meantime, work continues in both ‘master’ branch (GTK+3) porting and the ‘gimp-2-10’ branch. Most notably, Ell and Jehan Pagès have been improving the user-perceivable time it takes GIMP to load fonts by adding the asynchronous loading of resources on startup.

What it means is that font loading does not block startup anymore, but if you have a lot of fonts and you want to immediately use the Text tool, you might have to wait.

The API is general rather than fonts-specific and can be further used to add the background loading of brushes, palettes, patterns etc.

May 28, 2018

Interview with Răzvan Rădulescu

Could you tell us something about yourself?

Hi! My name’s Răzvan Rădulescu, I’m from Romania. I’ve had an interest in drawing since I was little. Unfortunately Romania is one of those countries that can crush creativity at a very early stage. At the time I was also interested in computers and started learning programming by myself, and finally ended up doing physics in college. About three years ago I started playing with the idea of digital drawing and painting. I spent the first two years painting different things on and off to get the hang of it, but about a year ago I decided to think about this path as more than just a hobby.

Do you paint professionally, as a hobby artist, or both?

I guess the answer is that I’m in-between. I’m finally in a position to start working on art projects; I will elaborate a bit more on that later.

What genre(s) do you work in?

I’m interested in everything and do a lot of experimentation, as you see in my artwork it’s pretty much “all over the place”.

Whose work inspires you most — who are your role models as an artist?

Since I started very late in my life with digital painting, I am not influenced by well known masters, but being attracted to concept art/freelance type of work I have my selection of modern artists which I look up to: Sparth, Piotr Jabłoński, Sergey Kolesov, Sinix, Simon Stålenhag, Viktor Bykov, to name a few. They all have very different painting styles and techniques so that explains why my own art is “all over the place”, I’ve been trying to understand their work process and integrate it in my own.

How and when did you get to try digital painting for the first time?

It must have been about 12 years ago, when I first played with Photoshop, but I didn’t pursue it at the time, it was a very short experiment, a couple of months, no more.

What makes you choose digital over traditional painting?

The usual suspects: cleanliness over messiness, power of layers, easy adjustments and modifications, FX and so on and so forth.

How did you find out about Krita?

I think it was by mistake, I was searching for GIMP related stuff and someone must have mentioned Krita in a forum or something like that.

What was your first impression?

At the time I tried it, I was coming from GIMP Paint Studio, not having touched Photoshop in years, and I honestly believed that GIMP Paint Studio was as good as it could get. I was pleasantly surprised to find Krita; for painting I thought it was awesome. I was really impressed by the tools and the “hidden” gems. I’m the type of guy that tries everything and looks at every detail, so I quickly found the G’MIC plugin, the assistant tools, clone layers etcetera, and I’m still barely scratching the surface. From what I’ve seen, people really don’t know about these features or they don’t really use them, but I like touching every corner of it; even if I don’t end up using the features, I still keep them in mind just in case. With the addition of Python scripting, the feature list for Krita as a FOSS alternative is simply amazing.

What do you love about Krita?

I like the fact that it’s a real alternative for “industry” standard software like Photoshop, Corel Paint and so on. I am a huge fan of the FOSS philosophy and initiative, so Krita is very important to me and, I think, to the world in general. Krita is quickly becoming the Blender of the 2D art world. People are slow to adopt these alternatives because of familiar workflows and for historical reasons, but people just starting off have no reason not to at least try them, and I believe with time they will become a core part of the professional artist’s toolset. A year ago there was almost no mention of Krita, but with the release of v4 I think people are finally starting to take notice of it. I can see this in the LevelUp Facebook group (a very well known and important group for concept artists all over the world – https://www.facebook.com/groups/levelup.livestream) where I’m a moderator: now and then there’s the occasional mention of Krita, so I know for a fact that more people are watching the development with anticipation.

What do you think needs improvement in Krita? Is there anything that really annoys you?

Hm, it’s tough to say without experimenting with other software to use as a reference. If there’s one thing that annoys me, it’s that there are some lingering bugs that have been around since forever – I’m looking at you, “random transform crash” and “color sliders bug”. In terms of improvement, I think the tag/brush menu system needs an update, but I know it’s on the roadmap so it will be taken care of eventually. I would probably have a better answer if I knew other painting software.

What sets Krita apart from the other tools that you use?

As far as digital painting, I don’t use other tools so there’s nothing much to say here. Generally speaking Krita is apart due to it being the only real FOSS alternative that can push a shift in mindset.

If you had to pick one favourite of all your work done in Krita so far, what would it be, and why?

It has to be this one; it’s my favourite because it’s the first successful one, and it all started with a rock study, trying to understand how Piotr Jabłoński applies color and atmosphere in his work. It’s nothing special in terms of design but I like the overall feel of it.

What techniques and brushes did you use in it?

Ah memories… I made it after starting to work on my RZV Krita Brushpack (which can be downloaded for free at my website – https://razcore.art/). I didn’t like the fuzziness and quality of the patterns of the default set so that prompted me to work on my own set. Having said that, I’ve tried the latest nightly build of Krita v4 and the new default set is simply awesome. I know this stuff can be pretty subjective, but from my point of view the latest brush set, thanks to David Revoy, is now years ahead of the default set in v3. It forced me to rethink my own brush set for which I’ll be releasing an update after Krita v4 official release. I think it will be quite a nice addition, I’ve only kept the really different brushes that make a difference.

As for the technique, for this painting it was quite the natural approach, I used more of a traditional style technique, with few layers, I’ve only really used layers for overlaps so that I don’t have to worry about moving things, like the floating rocks in front of the central island. One final set of layers was used for placing the lights, like you see on the tree, and that’s pretty much it.

Where can people see more of your work?

Some of my paintings can be found at https://razvanc-r.deviantart.com, but I plan to move my more advanced and successful ones to https://razcore.artstation.com/ (which is empty for now) and my own site https://razcore.art/.

Other places people can find me at:

– YouTube: https://www.youtube.com/channel/UC6iuu2ajEK2GiMc2TFWkEqw – I started this channel just a couple of months ago so it’s still very unpopular and it’s quite experimental. I’m trying to get back to it and create mini-courses of sorts using free online resources I know of. There are quite a few good places to learn digital art for free but it takes a very long time to find them because they’re scattered all over the place. One of my objectives is to explore them together with the viewers so all suggestions and comments are welcome. I had to take a bit of a break from the LLT (Let’s Learn Together) playlist project but once things settle down I’ll try to pick it up again.

– Mastodon: https://mastodon.art/@razcore

– Twitter: https://twitter.com/razcore_art

– Instagram: https://www.instagram.com/razcore.art

Anything else you’d like to share?

I think this is a good place to elaborate on the project I mentioned at the beginning. I’m currently in a collaboration with the CEO of Boot.AI and we’re interested in exploring the idea for a future society through design illustrations and podcasts to engage people to think and share opinions on these subjects. It’s still at a very early stage of development, but one thing we’d like to work on is a sort of design and illustration course tackling these subjects with the help of FOSS projects such as Krita, Blender, Natron, to name a few. For these we’re preparing the project at https://boot.ai/seed – an illustrated experimental city society.

 

Krita Manual Updated

Kiki, the Krita mascot, is on the foreground, leaning on a book with the title "Krita" and the Krita logo. In her right hand, she's holding up a stylus, and she's facing the audience with a big grin. On the background a giant sphinx is sitting, hunched over to look at Kiki.

Over the past month or two, we’ve been really busy with the manual. Our manual always has been a source of pride for the project: it’s extensive, detailed, up-to-date and contains a lot of in-depth information not just on Krita, but on digital painting in general. The manual was implemented as a mediawiki site. A wiki makes it easy to create and edit pages, but it turned out to be hard to have translations available or to generate an off-line version like a PDF or epub version of the manual.

Enter Sphinx. We’ve ported the entire manual to Sphinx. You can find the source here:

https://cgit.kde.org/websites/docs-krita-org.git/

Every time someone pushes a commit to the repository, the manual gets rebuilt!

The manual itself is in its old location:

https://docs.krita.org

All old links to pages have changed though! But all information is available, and the table of contents and search box work perfectly fine.

And you can get the manual as a 1000+ page epub, too!

https://docs.krita.org/en/epub/KritaManual.epub

Huge thanks go out to Scott, Wolthera, Raghukamath, Timothee and especially Ben who have made this happen!

May 27, 2018

Faking Javascript <body onload=""> in Wordpress

After I'd switched from the Google Maps API to Leaflet to get my trail map working on my own website, the next step was to move it to the Nature Center's website to replace the broken Google Maps version.

PEEC, unfortunately for me, uses Wordpress (on the theory that this makes it easier for volunteers and non-technical staff to add content). I am not a Wordpress person at all; to me, systems like Wordpress and Drupal mostly add obstacles that mean standard HTML doesn't work right and has to be modified in nonstandard ways. This was a case in point.

The Leaflet library for displaying maps relies on calling an initialization function when the body of the page is loaded:

<body onLoad="javascript:init_trailmap();">

But in a Wordpress website, the <body> tag comes from Wordpress, so you can't edit it to add an onload.

A web search found lots of people wanting body onloads, and they had found all sorts of elaborate ruses to get around the problem. Most of the solutions seemed like they involved editing site-wide Wordpress files to add special case behavior depending on the page name. That sounded brittle, especially on a site where I'm not the Wordpress administrator: would I have to figure this out all over again every time Wordpress got upgraded?

But I found a trick in a Stack Overflow discussion, Adding onload to body, that included a tricky bit of code. There's a javascript function to add an onload to the tag; then that javascript is wrapped inside a PHP function. Then, if I'm reading it correctly, the PHP function registers itself with Wordpress so it will be called when the Wordpress footer is added; at that point the PHP runs, which adds the javascript to the body tag in time for the onload event to call the Javascript. Yikes!

But it worked. Here's what I ended up with, in the PHP page that Wordpress was already calling for the page:

<?php
/* Wordpress doesn't give you access to the <body> tag to add a call
 * to init_trailmap(). This is a workaround to dynamically add that tag.
 */
function add_onload() {
?>

<script type="text/javascript">
  document.getElementsByTagName('body')[0].onload = init_trailmap;
</script>

<?php
}

add_action( 'wp_footer', 'add_onload' );
?>

Complicated, but it's a nice trick; and it let us switch to Leaflet and get the PEEC interactive Los Alamos area trail map working again.

May 24, 2018

Google Maps API No Longer Free?

A while ago I wrote an interactive trail map page for the PEEC nature center website. At the time, I wanted to use an open library, like OpenLayers or Leaflet; but there were no good sources of satellite/aerial map tiles at the time. The only one I found didn't work because they had a big blank area anywhere near LANL -- maybe because of the restricted airspace around the Lab. Anyway, I figured people would want a satellite option, so I used Google Maps instead despite its much more frustrating API.

This week we've been working on converting the website to https. Most things went surprisingly smoothly (though we had a lot more absolute URLs in our pages and databases than we'd realized). But when we got through, I discovered the trail map was broken. I'm still not clear why, but somehow the change from http to https made Google's API stop working. In trying to fix the problem, I discovered that Google's map API may soon cease to be free:

New pricing and product changes will go into effect starting June 11, 2018. For more information, check out the Guide for Existing Users.

That has a button for "Transition Tool" which, when you click it, won't tell you anything about the new pricing structure until you've already set up a billing account. Um ... no thanks, Google.

Googling for google maps api billing led to a page headed "Pricing that scales to fit your needs", which has an elaborate pricing structure listing a whole bunch of variants (I have no idea which of these I was using), of which the first $200/month is free. But since they insist on setting up a billing account, I'd probably have to give them a credit card number -- which one? My personal credit card, for a page that isn't even on my site? Does the nonprofit nature center even have a credit card? How many of these API calls is their site likely to get in a month, and what are the chances of going over the limit?

It all rubbed me the wrong way, especially given the context: "Your trail maps page that real people actually use has broken without warning, and will be held hostage until you give us a credit card number". This is what one gets for using a supposedly free (as in beer) library that's not Free open source software.

So I replaced Google with the excellent open source Leaflet library, which, as a bonus, has much better documentation than Google Maps. (It's not that Google's documentation is poorly written; it's that they keep changing their APIs, but there's no way to tell the dozen or so different APIs apart because they're all just called "Maps", so when you search for documentation you're almost guaranteed to get something that stopped working six years ago -- but the documentation is still there making it look like it's still valid.) And I was happy to discover that, in the time since I originally set up the trailmap page, some open providers of aerial/satellite map tiles have appeared. So we can use open source and have a satellite view.

Our trail map is back online with Leaflet, and with any luck, this time it will keep working. PEEC Los Alamos Area Trail Map.

May 22, 2018

Downloading all the Books in a Humble Bundle

Humble Bundle has a great bundle going right now (for another 15 minutes -- sorry, I meant to post this earlier) on books by Nebula-winning science fiction authors, including some old favorites of mine, and a few I'd been meaning to read.

I like Humble Bundle a lot, but one thing about them I don't like: they make it very difficult to download books, insisting that you click on every single link (and then do whatever "Download this link / yes, really download, to this directory" dance your browser insists on) rather than offering a sane option like a tarball or zip file. I guess part of their business model includes wanting their customers to get RSI. This has apparently been a problem for quite some time; a web search found lots of discussions of ways of automating the downloads, most of which apparently no longer work (none of the ones I tried did).

But a wizard friend on IRC quickly came up with a solution: some javascript you can paste into Firefox's console. She started with a quickie function that fetched all but a few of the files, but then modified it for better error checking and the ability to get different formats.

In Firefox, open the web console (Tools/Web Developer/Web Console) and paste this in the single-line javascript text field at the bottom.

// How many seconds to delay between downloads.
var delay = 1000;
// whether to use window.location or window.open
// window.open is more convenient, but may be popup-blocked
var window_open = false;
// the filetypes to look for, in order of preference.
// Make sure your browser won't try to preview these filetypes.
var filetypes = ['epub', 'mobi', 'pdf'];

var downloads = document.getElementsByClassName('download-buttons');
var i = 0;
var success = 0;

function download() {
  var children = downloads[i].children;
  var hrefs = {};
  for (var j = 0; j < children.length; j++) {
    var href = children[j].getElementsByClassName('a')[0].href;
    for (var k = 0; k < filetypes.length; k++) {
      if (href.includes(filetypes[k])) {
        hrefs[filetypes[k]] = href;
        console.log('Found ' + filetypes[k] + ': ' + href);
      }
    }
  }
  var href = undefined;
  for (var k = 0; k < filetypes.length; k++) {
    if (hrefs[filetypes[k]] != undefined) {
      href = hrefs[filetypes[k]];
      break;
    }
  }
  if (href != undefined) {
    console.log('Downloading: ' + href);
    if (window_open) {
      window.open(href);
    } else {
      window.location = href;
    }
    success++;
  }
  i++;
  console.log(i + '/' + downloads.length + '; ' + success + ' successes.');
  if (i < downloads.length) {
    window.setTimeout(download, delay);
  }
}
download();

If you have "Always ask where to save files" checked in Preferences/General, you'll still get a download dialog for each book (but at least you don't have to click; you can hit return for each one). Even if this is your preference, you might want to consider changing it before downloading a bunch of Humble books.

Anyway, pretty cool! Takes the sting out of bundles, especially big ones like this 42-book collection.

Back from Krita Sprint 2018

Hi,
Yesterday I came back from 3.5 days of Krita Sprint in Deventer. Even though nowadays I have less time for Krita because of my work on GCompris, I keep following what is happening and help where I can, especially on icons, and a few other selected topics. And it’s always very nice to meet my old friends from the team, and the new ones! 🙂

A lot of things were discussed and done, and plans have been set for the next steps.
I was in the discussions for the next fundraiser, the Bugzilla policies, the next release, the resources management rewrite, and defining and ordering the priorities for the unfinished tasks.

I made a start on the French translation of the new manual that is coming soon, mostly porting the existing translation of the FAQ and completing it. Also about the manual: I gave a little idea to Wolthera, who was looking at reducing the size of the PNG images; the result is almost half the size, around 60MB for 1000 pages, not bad 😉

I discussed with Valeriy, the new maintainer of kcm-wacomtablet, about some little missing feature I would like to have, and built the git version to test on Mageia 6. Great progress already, and more goodies to come!

As we decided to make layer names in default document templates translatable, we defined a list of translatable keywords to use for layer names in those default templates. The list was made by most artists present there (me, Deevad, Wolthera, Raghukamath and Bollebib).

Also I helped Raghukamath, who was fighting with his bluish laptop screen, to properly calibrate it on his Linux system, and he was very happy with the result.

Many thanks to Boudewijn and Irina who organised and hosted the sprint in their house, to the Krita Foundation for the accommodation and food, and to KDE e.V. for the travel support that made it possible to gather contributors from many different countries.

You can find more info about this sprint on the Krita website:

Krita 2018 Sprint Report

Krita 2018 Sprint Report

This weekend, Krita developers and artists from all around the world came to the sleepy provincial town of Deventer to buy cheese — er, I mean, to discuss all things Krita related and do some good, hard work! After all, the best cheese shop in the Netherlands is located in Deventer. As are the Krita Foundation headquarters! We started on Thursday, and today the last people are leaving.

Image by David Revoy

Events like these are very important: bringing people together, not just for serious discussions and hacking, but for lunch and dinner and rambling walks makes interaction much easier when we’ve gone back to our IRC channel, #krita. We didn’t have a big sprint in 2017; the last big one was in 2016.

So… What did we do? We first had a long meeting where we discussed the following topics:

  • 2018 Fund Raiser. We currently receive about €2000 a month in donations and have about eighty development subscribers. This is pretty awesome, and goes a long way towards funding Dmitry’s work. But we still want to go back to having a yearly fund raiser! We aim for September. Fund raisers are always a fun and energizing way to get together with our community. However, Kickstarter is out: it’s a bit of a tired formula. Instead we want to figure out how to make this more of a festival or a celebration. This time the fund raiser won’t have feature development as a target, because…
  • This year’s focus: zarro bugs. That’s what Bugzilla used to tell you if your search didn’t find a single bug. Over the past couple of years we’ve implemented a lot of features, ported Krita to Qt5 and in general produced astonishing amounts of code. But not everything is done, and we’ve got way too many open bug reports, way too many failing unittests, way too many embarrassing hits in PVS-Studio, way too many features that aren’t completely done yet — so our goal for this year is to work on that.
  • Unfinished business: We identified a number of places where we have unfinished business that we need to get back to. We asked the artists present to rank those topics, and this is the result:
    • Boudewijn will work on:
      • Fix resource management (https://phabricator.kde.org/T379).
      • Shortcut and canvas input unification and related bugs
      • Improved G’Mic integration
    • Dmitry will work on:
      • Masks and selections
      • Improving the text layout engine, for OpenType support, vertical text, more SVG2 text features.
      • SVG leftovers: support for filters and patterns, winding mode and grouping
      • Layer styles leftovers
    • Jouni will work on animation left-overs:
      • Frame cycles and cloning
      • Transform mask interpolation curves
    • Wolthera will work on:
      • Collecting information about missing scripting API
      • Color grading filters
  • Releases. We intend to release Krita 4.1.0 June 20th. We also want to continue doing monthly bug-fix releases. We’ve asked the KDE system administrators whether we can have nightly builds of the stable branch so people can test the bug fix releases before we actually release them. Krita 4.1 will have lots of animation features, animation cache swapping, session management and the reference images tool — and more!

We also discussed the resource management fixing plan, worked really hard on making the OpenGL canvas work even smoother, especially on macOS, where it currently isn’t that smooth, added ffmpeg to the Windows installer, fixed translation issues, improved autosave reliability, fixed animation related bugs and implemented support for a cross-channel curves filter for color grading. And at the same time, people who weren’t present worked on improving OpenEXR file loading (it’s multi-threaded now, among other things), fixed issues with the color picker and made that code simpler and added even more improvements to the animation timeline!

And that’s not all, because Wolthera, Timothee and Raghukamath also finished porting our manual to Sphinx, so we can generate off-line documentation and support translations of the manual. The manual is over 1000 pages long!

There were three people who hadn’t attended a sprint before: artist Raghukamath, ace Windows developer Alwin Wong and Valeriy Malov, the maintainer of the KDE Plasma desktop tablet settings utility, who improved support for Cintiq-like devices during the weekend.

And of course, there was time for walks, buying cheese, having lunch at our regular place, De Rode Kater, and on Sunday the sun even started shining! And now back to coding!

Image by David Revoy.

The 2018 Krita sprint was sponsored by KDE e.V. (travel) and the Krita Foundation (accommodation and food).

May 19, 2018

GIMP 2.10.2 Released

It’s barely been a month since we released GIMP 2.10.0, and the first bugfix version 2.10.2 is already there! Its main purpose is fixing the various bugs and issues which were to be expected after the 2.10.0 release.

In total, 44 bugs have been fixed in less than a month!

We have also relaxed our policy for new features, and this is the first time we apply it, shipping new features in a stable micro release! How cool is that?

For a complete list of changes please see NEWS.

New features

Added support for HEIF image format

This release brings HEIF image support, both for loading and export!

Thanks to Dirk Farin for the HEIF plug-in.

New filters

Two new filters have been added, based on GEGL operations:

Spherize filter to wrap an image around a spherical cap, based on the gegl:spherize operation.

Spherize filter in GIMP 2.10.2.
Original image CC-BY-SA by Aryeom Han.

Recursive Transform filter to create a Droste effect, based on the gegl:recursive-transform operation.

Recursive Transform filter in GIMP 2.10.2, with a custom on-canvas interface.
Original image CC-BY by Philipp Haegi.

Noteworthy improvements

Better single-window screenshots on Windows

While the screenshot plug-in was already better in GIMP 2.10.0, we had a few issues with single-window screenshots on Windows when the target window was hidden behind other windows, partly off-screen, or when display scaling was activated.

All these issues have been fixed by our new contributor Gil Eliyahu.

Histogram computation improved

GIMP now calculates histograms in separate threads which eliminates some UI freezes. This has been implemented with some new internal APIs which may be reused later for other cases.

Working with third-parties

Packagers: set your bug tracker address

As you know, we now have a debug dialog which may pop-up when crashes occur with debug information. This dialog opens our bug tracker in a browser.

We realized that we get a lot of bugs from third-party builds, and a significant part of the bugs are package-specific. In order to relieve that burden a bit (because we are a very small team), we would appreciate if packagers could make a first triaging of bugs, reporting to us what looks like actual GIMP bugs, and taking care of their own packaging issues themselves.

This is why our configure script now has the --with-bug-report-url option, allowing you to set your own bug tracker web URL. This way, when people click the “Open Bug Tracker” button it will open the package bug tracker instead.

XCF-reader developers: format is documented

Since 2006, our working format, XCF, has been documented thanks to the initial contribution of Henning Makholm. We have recently updated this document to integrate all the changes to the format introduced with the GIMP 2.10.0 release.

Any third-party applications wishing to read XCF files can refer to this updated documentation. The git log view may actually be more interesting since you can more easily spot the changes and new features which have been documented recently.

Keep in mind that XCF is not meant to be an interchange format (unlike for instance OpenRaster) and this document is not a “specification”. The XCF reference is the code itself. Nevertheless we are happy to help third-party applications, and if you spot any errors or issues within this document feel free to open a bug report so we can fix it.

GIMP 3 is already on its way…

While GIMP 2.10.0 was still hot and barely released, our developers started working on GIMP 3. One of the main tasks is cleaning out the many deprecated pieces of code and data, as well as code made useless by the switch to GTK+ 3.x.

The deletion is really going full-speed with more than 200 commits made in less than a month on the gtk3-port git branch and with 5 times more lines deleted than inserted in the last few weeks.

Delete delete delete… exterminate!

Exterminate (GTK+2)! Michael Natterer and Jehan portrayed by Aryeom.
The picture is actually missing Simon Budig, a long-time contributor who made a big comeback on the GTK+3 port with dozens of commits!

May 14, 2018

Plotting the Jet Stream, or Other Winds, with ECMWF Data

I've been trying to learn more about weather from a friend who used to work in the field -- in particular, New Mexico's notoriously windy spring. One of the reasons behind our spring winds relates to the location of the jet stream. But I couldn't find many good references showing how the jet stream moves throughout the year. So I decided to try to plot it myself -- if I could find the data. Getting weather data can be surprisingly hard.

In my search, I stumbled across Geert Barentsen's excellent Annual variations in the jet stream (video). It wasn't quite what I wanted -- it shows the position of the jet stream in December in successive years -- but the important thing is that he provides a Python script on GitHub that shows how he produced his beautiful animation.

[Sample jet stream image]

Well -- mostly. It turns out his data sources are no longer available, and he didn't go into a lot of detail on where he got his data, only saying that it was from the ECMWF ERA re-analysis model (with a link that's now 404). That led me on a merry chase through the ECMWF website trying to figure out which part of which database I needed. ECMWF has lots of publicly available databases (and even more) and they even have Python libraries to access them; and they even have a lot of documentation, but somehow none of the documentation addresses questions like which database includes which variables and how to find and fetch the data you're after, and a lot of the sample code doesn't actually work. I ended up using the "ERA Interim, Daily" dataset and requesting data for only specific times and only the variables and pressure levels I was interested in. It's a great source of data once you figure out how to request it.

Sign up for an ECMWF API Key

Access ECMWF Public Datasets (there's also Access MARS and I'm not sure what the difference is), which has links you can click on to register for an API key.

Once you get the email with your initial password, log in using the URL in the email, and change the password. That gave me a "next" button that, when I clicked it, took me to a page warning me that the page was obsolete and I should update whatever bookmark I had used to get there. That page also doesn't offer a link to the new page where you can get your key details, so go here: Your API key. The API Key page gives you some lines you can paste into ~/.ecmwfapirc.
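The file is plain JSON; the lines the API Key page gives you look like this (the values below are placeholders, not a real key):

```json
{
    "url"   : "https://api.ecmwf.int/v1",
    "key"   : "0123456789abcdef0123456789abcdef",
    "email" : "you@example.com"
}
```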

You'll also have to accept the license terms for the databases you want to use.

Install the Python API

That sets you up to use the ECMWF API. They have a Web API and a Python library, plus some other Python packages, but after struggling with a bunch of Magics tutorial examples that mostly crashed or couldn't find data, I decided I was better off sticking to the basic Python downloader API and plotting the results with Matplotlib.

The Python data-fetching API works well. To install it, activate your preferred Python virtualenv or whatever you use for pip packages, then run the pip command shown at Web API Downloads (under "Click here to see the installation/update instructions..."). As always with pip packages, you'll have to decide on a Python version (they support both 2 and 3) and whether to use a virtualenv, the much-disrecommended sudo pip, pip3, etc. I used pip3 in a virtualenv and it worked fine.

Specify a dataset and parameters

That's great, but how do you know which dataset you want to load?

There doesn't seem to be anything that just lists which datasets have which variables. The only way I found is to go to the Web API page for a particular dataset to see the form where you can request different variables. For instance, I ended up using the "interim-full-daily" database, where you can choose date ranges and lists of parameters. There are more choices in the sidebar: for instance, clicking on "Pressure levels" lets you choose from a list of barometric pressures ranging from 1000 all the way down to 1. No units are specified, but they're millibars, also known as hectoPascals (hPa): 1000 is more or less the pressure at ground level, 250 is roughly where the jet stream is, and Los Alamos is roughly at 775 hPa (you can find charts of pressure vs. altitude on the web).
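Those pressure-to-altitude figures are easy to sanity-check with the standard-atmosphere barometric formula -- a generic approximation, nothing ECMWF-specific:

```python
def pressure_to_altitude_m(p_hpa):
    """Approximate altitude in meters for a given pressure level (hPa),
    using the International Standard Atmosphere barometric formula."""
    return 44330.0 * (1.0 - (p_hpa / 1013.25) ** (1.0 / 5.255))

# 775 hPa works out to roughly 2.2 km, close to Los Alamos' elevation,
# and 250 hPa to roughly 10 km, a typical jet stream altitude.
```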

When you go to any of the Web API pages, it will show you a dialog suggesting you read about Data retrieval efficiency, which you should definitely do if you're expecting to request a lot of data, then click on the details for the database you're using to find out how data is grouped in "tape files". For instance, in the ERA-interim database, tapes are grouped by date, so if you're requesting multiple parameters for multiple months, request all the parameters for a given month together, rather than making one request for level 250, another request for level 1000, etc.

Once you've checked the boxes for the data you want, you can fetch the data via the web interface, or click on "View the MARS request" to get parameters you can plug into a Python script.

If you choose the Python script option as I did, you can start with the basic data retrieval example. Use the second example, the one that uses 'format' : "netcdf", which will (eventually) give you a file ending in .nc.
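
A minimal retrieval script, adapted from that example, might look like the sketch below. The dates, levels, and parameter codes here are illustrative placeholders (the real values come from the "View the MARS request" button; 131.128/132.128 should be the U/V wind components in ERA-interim, but verify against your own MARS request). The script only contacts the server if the ecmwfapi package and your ~/.ecmwfapirc credentials are in place:

```python
def build_request(date_range, levels, params, target):
    """Assemble a MARS request dict for the interim-full-daily dataset.

    The keyword names match what "View the MARS request" produces;
    the specific values used here are placeholders.
    """
    return {
        "class": "ei",
        "dataset": "interim",
        "date": date_range,          # e.g. "2015-04-01/to/2015-04-30"
        "levtype": "pl",             # pressure levels
        "levelist": levels,          # e.g. "250/775/1000"
        "param": params,             # GRIB parameter codes from the web form
        "step": "0",
        "stream": "oper",
        "time": "12:00:00",
        "type": "an",
        "format": "netcdf",          # ask for a .nc file instead of GRIB
        "target": target,            # output filename
    }

if __name__ == "__main__":
    req = build_request("2015-04-01/to/2015-04-30",
                        "250/775/1000", "131.128/132.128", "april2015.nc")
    try:
        from ecmwfapi import ECMWFDataServer
        ECMWFDataServer().retrieve(req)
    except ImportError:
        print("ecmwfapi not installed; request would be:", req)
```

Keeping the request in one dict also makes it easy to follow the tape-file advice above: one request per month with all parameters and levels together.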

Requesting a specific area

You can request only a limited area,

"area": "75/-20/10/60",
but they're not very forthcoming on the syntax of that, and it's particularly confusing since "75/-20/10/60" supposedly means "Europe". It's hard to figure out how those numbers, read as latitudes and longitudes, correspond to Europe, which doesn't go down to 10 degrees latitude, let alone -20 degrees. The Post-processing keywords page gives more information: it's North/West/South/East, which still makes no sense for Europe, until you expand the Area examples tab on that page and find out that by "Europe" they mean Europe plus Saudi Arabia and most of North Africa.
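
Since the order is easy to get backwards, a tiny helper that builds the string from named values is worth having (my own convenience function, not part of the ECMWF API):

```python
def mars_area(north, west, south, east):
    """Format a bounding box as the MARS "area" keyword: North/West/South/East.

    ECMWF's "Europe" preset, 75/-20/10/60, reaches from 75N down to 10N
    and from 20W east to 60E -- i.e. it includes North Africa and Arabia.
    """
    if not (-90 <= south <= north <= 90):
        raise ValueError("latitudes must satisfy -90 <= south <= north <= 90")
    return "%g/%g/%g/%g" % (north, west, south, east)

print(mars_area(75, -20, 10, 60))   # 75/-20/10/60
```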

Using the data: What's in it?

Once you have the data file, assuming you requested data in netcdf format, you can parse the .nc file with the netCDF4 Python module -- available as the Debian package "python3-netcdf4", or via pip:

import netCDF4

data = netCDF4.Dataset('filename.nc')

But what's in that Dataset? Try running the preceding two lines in the interactive Python shell, then:

>>> for key in data.variables:
...   print(key)
... 
longitude
latitude
level
time
w
vo
u
v

You can find out more about a parameter, like its units, type, and shape (array dimensions). Let's look at "level":

>>> data['level']
<class 'netCDF4._netCDF4.Variable'>
int32 level(level)
    units: millibars
    long_name: pressure_level
unlimited dimensions: 
current shape = (3,)
filling on, default _FillValue of -2147483647 used

>>> data['level'][:]
array([ 250,  775, 1000], dtype=int32)

>>> type(data['level'][:])
<class 'numpy.ndarray'>

Level has shape (3,): it's a one-dimensional array with three elements: 250, 775 and 1000, the three levels I requested from the web API (and in my Python script). The units are millibars.

More complicated variables

How about something more complicated? u and v are the two components of wind speed.

>>> data['u']
<class 'netCDF4._netCDF4.Variable'>
int16 u(time, level, latitude, longitude)
    scale_factor: 0.002161405503194121
    add_offset: 30.095301438361684
    _FillValue: -32767
    missing_value: -32767
    units: m s**-1
    long_name: U component of wind
    standard_name: eastward_wind
unlimited dimensions: time
current shape = (30, 3, 241, 480)
filling on

u (v is the same) has a shape of (30, 3, 241, 480): it's a 4-dimensional array. Why? Looking at the numbers in the shape gives a clue. The second dimension has 3 rows: they correspond to the three levels, because there's a wind speed at every level. The first dimension has 30 rows: it corresponds to the dates I requested (the month of April 2015). I can verify that:

>>> data['time'].shape
(30,)

Sure enough, there are 30 times, so that's what the first dimension of u and v corresponds to. The other dimensions, presumably, are latitude and longitude. Let's check that:

>>> data['longitude'].shape
(480,)
>>> data['latitude'].shape
(241,)

Sure enough! So, although it would be nice if it actually told you which dimension corresponded with which parameter, you can probably figure it out. If you're not sure, print the shapes of all the variables and work out which dimensions correspond to what:

>>> for key in data.variables:
...   print(key, data[key].shape)
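
Once you know the axis order is (time, level, latitude, longitude), pulling out a 2-D grid to plot is a single indexing step. Here's the idea with a synthetic numpy array of the same shape (the real values would come from data['u'][:] and data['v'][:]):

```python
import numpy as np

# Stand-ins for data['u'][:] and data['v'][:]: (time, level, lat, lon)
rng = np.random.default_rng(0)
u = rng.standard_normal((30, 3, 241, 480))
v = rng.standard_normal((30, 3, 241, 480))

day = 14       # index into the time dimension
jet_level = 0  # index 0 is 250 hPa, where the jet stream lives

# 2-D (latitude x longitude) grid of wind speed at that time and level,
# ready to hand to matplotlib's contourf or pcolormesh
speed = np.hypot(u[day, jet_level], v[day, jet_level])
print(speed.shape)   # (241, 480)
```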

Iterating over times

data['time'] has all the times for which you have data (30 data points for my initial test of the days in April 2015). The easiest way to plot anything is to iterate over those values:

    timeunits = JSdata.data['time'].units
    cal = JSdata.data['time'].calendar
    for i, t in enumerate(JSdata.data['time']):
        thedate = netCDF4.num2date(t, units=timeunits, calendar=cal)

Then you can use thedate like a datetime, calling thedate.strftime or whatever you need.
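
For the standard calendar, what num2date does is simple arithmetic: ERA-interim stores time as "hours since 1900-01-01 00:00:0.0", so you can sanity-check a value by hand with only the standard library (a sketch; for other units strings or calendars, trust num2date):

```python
from datetime import datetime, timedelta

def hours_since_1900(hours):
    """Convert an ERA-interim time value ("hours since 1900-01-01",
    standard calendar) to a datetime -- the same arithmetic
    netCDF4.num2date performs for that units string."""
    return datetime(1900, 1, 1) + timedelta(hours=float(hours))

print(hours_since_1900(24))   # 1900-01-02 00:00:00
```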

So that's how to access your data. All that's left is to plot it -- and in this case I had Geert Barentsen's script to start with, so I just modified it a little to work with slightly changed data format, and then added some argument parsing and runtime options.

Converting to Video

I already wrote about how to take the still images the program produces and turn them into a video: Making Videos (that work in Firefox) from a Series of Images.

However, it turns out ffmpeg can't handle files that are named with timestamps, like jetstream-2017-06-14-250.png. It can only handle one sequential integer. So I thought, what if I removed the dashes from the name, and used names like jetstream-20170614-250.png with %8d? No dice: ffmpeg also has the limitation that the integer can have at most four digits.

So I had to rename my images. A shell command works: I ran this in zsh but I think it should work in bash too.

cd outdir
mkdir moviedir

i=1
for fil in *.png; do
  newname=$(printf "%04d.png" $i)
  ln -s ../$fil moviedir/$newname
  i=$((i+1))
done

ffmpeg -i moviedir/%04d.png -filter:v "setpts=2.5*PTS" -pix_fmt yuv420p jetstream.mp4

The -filter:v "setpts=2.5*PTS" controls the delay between frames -- I'm not clear on the units, but larger numbers have more delay, and I think it's a multiplier, so this is 2.5 times slower than the default.
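
The renaming loop is also easy to do in Python if you'd rather not shell out. This sketch symlinks every .png in a directory into moviedir/ under sequential zero-padded names, in sorted (i.e. chronological) order:

```python
import os
from glob import glob

def sequence_for_ffmpeg(srcdir=".", moviedir="moviedir"):
    """Symlink timestamp-named frames as 0001.png, 0002.png, ...
    so ffmpeg's %04d pattern can find them, preserving sort order."""
    os.makedirs(moviedir, exist_ok=True)
    frames = sorted(glob(os.path.join(srcdir, "*.png")))
    for i, fil in enumerate(frames, start=1):
        os.symlink(os.path.abspath(fil),
                   os.path.join(moviedir, "%04d.png" % i))

if __name__ == "__main__":
    sequence_for_ffmpeg()
```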

When I uploaded the video to YouTube, I got a warning, "Your videos will process faster if you encode into a streamable file format." I then spent half a day trying to find a combination of ffmpeg arguments that avoided that warning, and eventually gave up. As far as I can tell, the warning only affects the 20 seconds or so of processing that happens after the 5-10 minutes it takes to upload the video, so I'm not sure it's terribly important.

Results

Here's a video of the jet stream from 2012 to early 2018, and an earlier effort with a much longer 6.0x delay.

And here's the script, updated from the original Barentsen script and with a bunch of command-line options to let you plot different collections of data: jetstream.py on GitHub.

Fontstuff at LibrePlanet 2018

I’m going to try and capture some thoughts from recent conferences, since otherwise I fear that so much information gets lost in the fog.

* (If you want to think of it this way, consider this post “What’s New in Open Fonts, № 002”)

I went to LibrePlanet a few weeks ago, for the first time. One of the best outcomes from that trip (apart from seeing friends) was the hallway track.

[FYI, I was happy to see that LWN had some contributors on hand to provide coverage; when I was an editor there we always wanted to go, but it was never quite feasible, between the cost and the frequent overlap with other events. Anyway, do read the LWN coverage to get up to speed on the event.]

RFNs

Dave Crossland and I talked about Reserved Font Names (RFNs), an optional feature of the SIL Open Font License (OFL) in which the font publisher claims a reservation on a portion of their font’s name. Anyone’s allowed to make a derivative of the OFL-licensed font (which is true regardless of the RFN-invocation status), but if they do so they cannot use *any* portion of the RFN in their derivative font’s name.

The intent of the clause is to protect the user-visible “mark” (so to speak; my paraphrase) of the font publisher, so that users do not confuse any derivatives with the original when they see it in menus, lists, etc.

A problem arises, however, for software distributors, because the RFN clause is triggered by making any change to the upstream font — a low bar that includes a lot of functions that happen automatically when serving a font over HTTP (like Google Fonts does) and when rebuilding fonts from source (like Debian does).

There’s not a lot of good information out there on the effects that RFN-invocation has on downstream software projects. SIL has a section in its FAQ document, but it doesn’t really address the downstream project’s needs. So Dave and I speculated that it might be good to write up such a document for publication … somewhere … and help ensure that font developers think through the impact of the decision on downstream users before they opt to invoke an RFN.

My own experience and my gut feeling from other discussions is that most open-font designers, especially when they are new, plonk an RFN statement in their license without having explored its impact. It’s too easy to do, you might say; it probably seems like it’s built into the license for a reason, and there’s not really anything educating you about the impact of the choice going forward. You fill in a little blank at the very top of the license template, because it’s there, and there’s no guidance. That’s what needs to change.

Packages

We also chatted a little about font packaging, which is something I’m keen to revisit. I’ve been giving a talk about “the unsolved problems in FOSS type” the past couple of months, a discussion that starts with the premise that we’ve had open-source web fonts for years now, but that hasn’t helped open fonts make inroads into any other areas of typography: print, EPUB, print-on-demand, any forms of marketing product, etc. The root cause is that Google Fonts and Open Font Library are focused on providing a web service (as they should), which leaves a lot of ground to be covered elsewhere, from installation to document templates to what ships with self-contained application bundles (hint: essentially nothing does).

To me, the lowest-hanging fruit at present seems to be making font packages first-class objects in the distribution packaging systems. As it is, they’re generally completely bare-bones: no documentation, no system integration, sketchy or missing metadata, etc. I think a lot can be done to improve this, of course. One big takeaway from the conversation was that Lasse Fister from the Google Fonts crew is working on a specimen micro-site generator.

That would fill a substantial hole in current packages: fonts tend to ship with no document that shows the font in use — something all proprietary, commercial fonts include, and that designers use to get a feel for how the font works in a real document setting.

Advanced font features in GTK+ and GNOME

Meanwhile Matthias Clasen has been forging ahead with his own work enhancing the GNOME font-selection experience. He’s added support for showing what variation axes a variable font contains and for exposing the OpenType / smart-font features that the font includes.

He did, however, bring up several pain points he’s encountered. The first is that many of the OpenType features are hard to preview/demonstrate because they’re sparsely documented. The only substantive docs out there are ancient Microsoft material definitely written by committee(s) — then revised, in piecemeal format, by multiple unrelated committees. For example, go to the link above, then try and tell me the difference between `salt` (stylistic alternates), `ccNN` (character variants) and `ssNN` (stylistic sets). I think there’s an answer, but it’s detective work.

A more pressing concern Matthias raised was the need to create “demo strings” that show what actually changes when you enable or disable one of the features. The proper string for some features is obvious (like `onum` (oldstyle numerals): the digits 0 to 9). For others, it’s anybody’s guess. And the font-selector widget, ideally, should not have to parse every font’s entire GSUB feature table, look for all affected codepoints, and create a custom demo string. That might be arbitrarily complex, since GSUB substitutions can chain together, and might still be incorrect (not to mention the simpler case, of that method finding you random letters that add up to unhelpful gibberish).

At lunch on Sunday, Matthias, Dave, Owen Taylor, Felipe Sanches, and a few others … who I’m definitely drawing a blank on this far after the fact (go for the comments) … hashed through several other topics. The discussion turned to Pango, which (like several other storied GNOME libraries) isn’t exactly unmaintained, but certainly doesn’t get attention anymore (see also Cairo). There are evidently still some API mismatches between what a Pango font descriptor gives you and the lower-level handles you need to work with newer font internals like variation axes.

A longer-term question was whether or not Pango can do more for applications — there are some features it could add, but major work like building in hyphenation or justification would entail serious effort. It’s not clear that anyone is available to take on that role.

Interfaces

Of course, that ties into another issue Matthias raised, which is that it’s hard to specify a feature set for a “smart” font selector widget/framework/whathaveyou for GTK+ when there are not many GTK-based applications that will bring their own demands. GIMP is still using GTK2, Inkscape basically does its own font selection, LibreOffice has a whole cross-platform layer of its own, etc. The upshot is that application developers aren’t bringing itches needing to be scratched. There is always Gedit, as Matthias said (which I think was at least somewhat satirical). But it complicates the work of designing a toolkit element, to be sure.

The discussion also touched on how design applications like Inkscape might want to provide a user interface for the variable-font settings that a user has used before. Should you “bookmark” those somehow (e.g., “weight=332,width=117,slant=10” or whatnot)? If so, where are they saved? Certainly you don’t want users to have to eyeball a bunch of sliders in order to hit the same combination of axes twice; not providing a UI for this inevitably leads to documents polluted with 600-odd variable-font-setting regions that are all only slightly off from each other. Consensus seemed to lean towards saving variable-axes-settings in a sort of “recently used” palette, much as many applications already do with the color picker. Still waiting to see the first implementations of this, however.
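
One plausible shape for such a “recently used” entry is just a serialization of axis tags and values, which is trivial to round-trip. A sketch (a hypothetical format I made up for illustration, not any application’s actual one), using the four-letter OpenType axis tags:

```python
def parse_axes(s):
    """Parse a bookmark like "wght=332,wdth=117,slnt=10"
    into {"wght": 332.0, "wdth": 117.0, "slnt": 10.0}."""
    return {tag: float(val)
            for tag, val in (item.split("=") for item in s.split(","))}

def format_axes(axes):
    """Inverse of parse_axes; tags are sorted so that two bookmarks
    of the same settings serialize identically and compare equal."""
    return ",".join("%s=%g" % (tag, axes[tag]) for tag in sorted(axes))

bookmark = parse_axes("wght=332,wdth=117,slnt=10")
print(format_axes(bookmark))   # slnt=10,wdth=117,wght=332
```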

As we were leaving, Matthias posed a question to me — in response to a comment I’d made about there needing to be a line between a “generic” font selector and a “full-featured” font selector. The question was what sort of UI was I envisioning in the “generic” case, particularly where variable fonts are concerned, as I had suggested that a full set of sliders for the font’s variation axes was too complex.

I’m not sure. On the one hand, the simple answer would be “none” or “list the variation axes in the font”, but that’s not something I have any evidence for: it’s just an easy place to draw a line.

Perhaps I’m just worried that exposing too many dials and controls will turn users off — or slow them down when they’re trying to make a quick choice. The consumer/pro division is a common tactic, evidently, for trying to avert UI overload. And this seems like a place where it’s worth keeping a watchful eye, but I definitely don’t have answers.

It may be that “pro” versus “consumer” user is not the right plane on which to draw a line anyway: when I was working on font-packaging questions, I found it really helpful to be document-first in my thinking (i.e., let the needs of the document the user is working on reveal what information you want to get from the font package). It’s possible that the how-much-information-do-you-show-in-the-UI question could be addressed by letting the document, rather than some notion of the “professionalism” of the user, be the guide. More thinking is required.

Interview with El Gato Cangrejo

Could you tell us something about yourself?

Well, I think I am a human shaped thing also known as Aedouard A. and also as El Gato Cangrejo, who loves making drawings and listening to music.

Do you paint professionally, as a hobby artist, or both?

I’m really trying to make it professionally, “very hard thing”, but also I try to keep the fun in it, so I would have to say both.

What genre(s) do you work in?

I like to let my hand and my pen go to wherever they want to go, and then I begin to think about those traces and it leads me to different shapes, themes and genres. I can build a script for a comic or for a short film, an illustration or even sounds based on a web of random traces on a digital canvas or on a piece of paper.

Whose work inspires you most — who are your role models as an artist?

I love the paintings, illustrations, designs and movies from these people: William Bouguereau, Alphonse Mucha, Albrecht Dürer, Jules Lefebvre, William Waterhouse, Masamune Shirow, Haruhiko Mikimoto, Shoji Kawamori, Mamoru Oshii, Quentin Tarantino, Hideaki Anno, Hayao Miyazaki, Ralph Bakshi, Guillermo del Toro… (not mentioning musicians, they are such an endless source of inspiration, I only can work while listening to music)

How and when did you get to try digital painting for the first time?

I tried digital painting for the first time about 12 years ago. I bought my first PC and tried a program called ImageReady that came with Photoshop; I did a couple of landscapes with the mouse, and then I tried scanning my drawings and retracing them in CorelDRAW, also with the mouse.

What makes you choose digital over traditional painting?

The production time (everything is like 10 times faster), no expensive materials, and the super powerful Ctrl-Z.

How did you find out about Krita?

I like to search for new tools and I try to use libre software. I can’t remember when I tried Krita for the first time, but I think it was like 7 years ago and it ran very very badly on my old PC.

What was your first impression?

I hated Krita at the time, now I love it!

What do you love about Krita?

The shortcuts are essential, the brushes, the animation tools, and “insert meme here”: it’s free!

What do you think needs improvement in Krita? Is there anything that really annoys you?

The performance in Linux: I recently changed my OS from Windows 7 to Linux Mint and I have noticed a significant difference in performance between the systems. I noticed a difference in performance between working in grayscale and working in color too, and I’m also waiting for some layer FX like the ones in Photoshop, specifically the trace effect, which I used a lot when I worked with Photoshop.

What sets Krita apart from the other tools that you use?

As I said earlier, the shortcuts are essential, the animation tools combined with those awesome brushes makes a powerful tool for animation, and I love the fact that Krita has been made for professional use but you can also have tons of fun with it.

If you had to pick one favourite of all your work done in Krita so far, what would it be, and why?

I would choose Distant.

What techniques and brushes did you use in it?

I like the “Airbrush_Linear” a lot. I set it to a big size and the opacity to 10 percent, then I use the “Eraser_Circle”, the hard-shaped one, to define shapes. I also use the “Smudge_Soft” a lot; I like to play with it, taking the paint from one side to another. When I grabbed Krita again it reminded me of my old times drawing with pencil and paper, which I just loved.

Where can people see more of your work?

https://gatocangrejo.deviantart.com/gallery/

Anything else you’d like to share?

If you are the pretty invisible friend, thanks and I’ll see you in a parallel universe.
If you are the Sorceress, I’m really sorry about the silence, I had a couple of good reasons…
If I owe you money, I’m trying to pay it.
If you are the extraterrestrial, stop it man.
If you are the C.I.A. stop sending stuff to my invisible friends and to the extraterrestrial.
If you like my drawings, keep your eyes peeled, I’m going to start a patreon/kickstarter campaign that involves comic, animation, Krita, Blender and other libre software.
If you are from Krita staff, thanks for Krita and thanks for the interview.
If you don’t know Krita, just give it a try, it is awesome. You don’t need to be an artist, you just need to have fun.

May 12, 2018

Stay Tuned

“The arc of the moral universe is long, but it bends towards podcasts.”

– Preet Bharara, while interviewing Bassem Youssef for his Stay Tuned podcast.

Krita 4.0.3 Released

Today the Krita team releases Krita 4.0.3, a bug fix release of Krita 4.0.0. This release fixes an important regression in Krita 4.0.2: sometimes copy and paste between images opened in Krita would cause crashes (BUG:394068).

Other Improvements

  • Krita now tries to load the system color profiles on Windows
  • Krita can open .rw2 RAW files
  • The splash screen is updated to work better on HiDPI or Retina displays (BUG:392282)
  • The OpenEXR export filter will convert images with an integer channel depth before saving, instead of giving an error.
  • The OpenEXR export filter no longer gives export warnings calling itself the TIFF filter
  • The empty error message dialog that would erroneously be shown after running some export filters is no longer shown (BUG:393850).
  • The setBackGroundColor method in the Python API has been renamed to setBackgroundColor for consistency
  • Fix a crash in KisColorizeMask (BUG:393753)

Download

Windows

Note for Windows users: if you encounter crashes, please follow these instructions to use the debug symbols so we can figure out where Krita crashes.

Linux

(If, for some reason, Firefox thinks it needs to load this as text: to download, right-click on the link.)

When it is updated, you can also use the Krita Lime PPA to install Krita 4.0.3 on Ubuntu and derivatives. We are working on an updated snap.

OSX

Note: the touch docker, gmic-qt and python plugins are not available on OSX.

Source code

md5sum

For all downloads:

Key

The Linux appimage and the source tarball are signed. You can retrieve the public key over https here: 0x58b9596c722ea3bd.asc. The signatures are here (filenames ending in .sig).

Support Krita

Krita is a free and open source project. Please consider supporting the project with donations or by buying training videos or the artbook! With your support, we can keep the core team working on Krita full-time.

May 11, 2018

Making Videos (that work in Firefox) from a Series of Images

I was working on a weather project to make animated maps of the jet stream. Getting and plotting wind data is a much longer article (coming soon), but once I had all the images plotted, I wanted to combine them all into a video showing how the jet stream moves.

Like most projects, it's simple once you find the right recipe. If your images are named outdir/filename00.png, outdir/filename01.png, outdir/filename02.png and so on, you can turn them into an MPEG4 video with ffmpeg:

ffmpeg -i outdir/filename%02d.png -filter:v "setpts=6.0*PTS" -pix_fmt yuv420p jetstream.mp4

%02d, for non-programmers, just means a 2-digit decimal integer with leading zeros. If the filenames just use 1, 2, 3, ... 10, 11 without leading zeros, use %d instead; if they're zero-padded to three digits, use %03d, and so on.

The -pix_fmt yuv420p turned out to be the tricky part. The recipes I found online didn't include that part, but without it, Firefox claims "Video can't be played because the file is corrupt", even though most other browsers can play it just fine. If you open Firefox's web console and reload, it offers the additional information "Details: mozilla::SupportChecker::AddMediaFormatChecker(const mozilla::TrackInfo&)::<lambda()>: Decoder may not have the capability to handle the requested video format with YUV444 chroma subsampling.":

Adding -pix_fmt yuv420p cured the problem and made the video compatible with Firefox, though at first I had problems with ffmpeg complaining "height not divisible by 2 (1980x1113)" (even though the height of the images was in fact divisible by 2). I'm not sure what was wrong; later ffmpeg stopped giving me that error message and converted the video. It may depend on where in the ffmpeg command you put the pix_fmt flag or what other flags are present. ffmpeg arguments are a mystery to me.

Of course, if you're only making something to be uploaded to youtube, the Firefox limitation probably doesn't matter and you may not need the -pix_fmt yuv420p argument.

Animated GIFs

Making an animated GIF is easier. You can use ImageMagick's convert:

convert -delay 30 -loop 0 *.png jetstream.gif

The GIF will be a lot larger, though. For my initial test of thirty 1000 x 500 images, the MP4 was 760K while the GIF was 4.2M.

Rest in price

It’s easy to pile on criticism when a major company redesigns their logo, but I couldn’t help myself in this case. The logo looks fine to me, but am I the only one who sees a toe-tag on a corpse in the new Best Buy logo?

Cause of death: Excessive color saturation on demo-mode TVs.

May 10, 2018

Krita 4.0.2 released

Today the Krita team releases Krita 4.0.2, a bug fix release of Krita 4.0.0. We fixed more than fifty bugs since the Krita 4.0.0 release! See below for the full list of fixed issues. We’ve also got fixes submitted by two new contributors: Emmet O’Neil and Seoras Macdonald. Welcome!

Please note that:

  • The reference image docker has been removed. Krita 4.1.0 will have a new reference images tool. You can test the code-in-progress by downloading the nightly builds for Windows and Linux. You can also use Antoine Roux’s reference images docker python plugin.
  • Translations are broken in various ways. On Linux everything should work. On Windows, you might have to select your language as an extra override language in the Settings/Select language dialog. This might also be the case on macOS.
  • The macOS binaries are now signed, but do not have G’Mic and do not have Python scripting.

If you find a new issue, please consult this draft document on reporting bugs before reporting an issue. After the 4.0 release more than 150 bugs were reported, but most of those reports were duplicates, requests for help or just not useful at all. This puts a heavy strain on the developers and makes it harder to actually find time to improve Krita. Please be helpful!

Improvements

Windows

  • Patch QSaveFile so working on images stored in synchronized folders (Dropbox, Google Drive) is safe. BUG:392408
  • Enable WinInk or prompt if WinTab cannot be loaded

Animation

  • Fix canvas update issues when an animation is being rendered to the cache BUG:392969
  • Fix playback in isolated mode BUG:392559
  • Fix saving animated transparency and filter masks, adjustment layer BUG:393302
  • Set the size of a few timeline icons, which were painfully small on Windows
  • Fix copy-pasting pixel data from animated layers BUG:364162

Brushes

  • Fix keeping “eraser switch size/opacity” option when saving the brush BUG:393499
  • Fix update of the preset editor GUI when a default preset is created BUG:392869
  • Make strength and opacity sliders from 0 to 100 percent in brush editor

File format support

  • Fix saving state of the selection masks into .kra
  • Read multilayer EXR files saved by Nuke BUG:393771
  • PSD: convert the image if its colorspace is not supported
  • Don’t let autosave close currently running actions

Grids

  • Increase the range for the pixel grid threshold
  • Only allow the isometric grid with OpenGL enabled BUG:392526

Crashes

  • Fix a hangup when closing the image BUG:393916
  • Fix a crash when duplicating active global selection masks BUG:382315
  • Fix crashes on undo/redo of vector path points operations BUG:393209, BUG:393087
  • Fix crash when deleting palette BUG:393353
  • Fix crash when resizing the Tool Options for the shape selection tool BUG:393217

User interface

  • Show the exact bounds in the layer properties dialog
  • Add ability for vanishing point assistants to show and configure radial lines
  • Make the Saturation slider update when picking a color that has Value 100 BUG:391934
  • Fix “Break at segment” to work correctly with closed paths
  • Disable right-clicking on popup palette BUG:391696, BUG:378484
  • Don’t let the color label widget mess up labels when right button is pressed BUG:392815
  • Fix Canvas position popping after pop-up palette rotation reset BUG:391921 (Patch by Emmet O’Neil, thanks!)
  • Change the behaviour of the add layer button BUG:385050 (Patch by Seoras Macdonald, thanks!)
  • Clicking outside preview box moves view to that point BUG:384687 (Patch by Seoras Macdonald, thanks!)
  • Implement double Esc key press shortcut for canceling continued transform mode BUG:361852
  • Display flow and opacity as percentage instead of zero to one on toolbar

Download

Windows

Note for Windows users: if you encounter crashes, please follow these instructions to use the debug symbols so we can figure out where Krita crashes.

Linux

(If, for some reason, Firefox thinks it needs to load this as text: to download, right-click on the link.)

When it is updated, you can also use the Krita Lime PPA to install Krita 4.0.2 on Ubuntu and derivatives. We are working on an updated snap.

OSX

Note: the gmic-qt and python plugins are not available on OSX.

Source code

md5sum

For all downloads:

Key

The Linux appimage and the source tarball are signed. You can retrieve the public key over https here: 0x58b9596c722ea3bd.asc. The signatures are here (filenames ending in .sig).

Support Krita

Krita is a free and open source project. Please consider supporting the project with donations or by buying training videos or the artbook! With your support, we can keep the core team working on Krita full-time.

May 09, 2018

System76 and the LVFS

tl;dr: Don’t buy System76 hardware and expect to get firmware updates from the LVFS

System76 is a hardware vendor that builds laptops with the Pop!_OS Linux distribution pre-loaded. System76 machines do get firmware updates, but do not use the fwupd and LVFS shared infrastructure. I’m writing this blog post so I can point people at some static text rather than writing out long replies to each person that emails me wanting to know why they don’t just use the LVFS.

In April of last year, System76 contacted me, wanting to work out how to get on the LVFS. We wrote 30+ cordial emails back and forth with technical details. Discussions got stuck when we found out they currently use a nonfree firmware flash tool called afuefi rather than the UEFI UpdateCapsule mechanism. All vendors have support for capsule updates as a requirement for the Windows 10 compliance sticker, so it should be pretty easy to use this instead. Every major vendor of consumer laptops is already using capsules, e.g. Dell, HP, Lenovo and many others.

There was some resistance to not using the proprietary afuefi executable to do the flashing. I still don’t know if System76 has permission to redistribute afuefi. We certainly can’t include the non-free and non-redistributable afuefi as a binary in the .cab file uploaded to the LVFS: even if System76 does have special permission to distribute it, the LVFS would be a 3rd party and is mirrored to various places. IANAL and all that.

An employee of System76 wrote a userspace tool in rust to flash the embedded controller (EC) using a reverse engineered protocol (fwupd is written in C), and the intention was to add a plugin to fwupd to do this. Peter Jones suggested that most vendors just include the EC update as part of the capsule, as the EC and system firmware typically form a tightly-coupled pair. Peter also thought that afuefi is really just a wrapper for UpdateCapsule, and S76 was going to find out how to make the AMI BIOS just accept a capsule. Apparently they even built a capsule that works using UpdateCapsule.

I was really confused when things went so off-course with a surprise announcement in July that System76 had decided not to use the LVFS and fwupd after all, despite all the discussion and how it had looked like things were moving forwards. Looking at the code, it seems the firmware update notifier and update process is now completely custom to System76 machines. This means it will only work when running Pop!_OS and not with Fedora, Debian, Ubuntu, SUSE, RHEL or any other distribution.

Apparently System76 decided that having their own client tools and firmware repository was a better fit for them. At this point the founder of System76 got cc’d and told me this wasn’t about politics, and it wasn’t about competition. I then got told that I’d made the LVFS and fwupd more complicated than they needed to be, and that I should have adopted the infrastructure that System76 had built instead. This was all without them actually logging into the LVFS and seeing what features were available or what constraints were being handled…

The way forward from my point of view would be for System76 to spend a few hours making UpdateCapsule work correctly, another few days to build an EFI binary with the EC update, and a few more hours to write the metadata for the LVFS. I don’t require an apology, and would happily create them an OEM account on the LVFS. It looks instead like the PR and the exclusivity are more valuable than working with other vendors. I guess it might make sense for them to require Pop!_OS on their hardware, but it’s not going to help when people buy System76 hardware and want to run Red Hat Enterprise Linux in a business. It also means System76 gets to maintain all this security-sensitive server and client code themselves for eternity.

It was a hugely disappointing end to the discussion as I had high hopes System76 would do the right thing and work with other vendors on shared infrastructure. I don’t actually mind if System76 doesn’t use fwupd and the LVFS, I just don’t want people to buy new hardware and be disappointed. I’ve heard nothing more from System76 about uploading firmware to the LVFS or using fwupd since about November, and I’d given multiple people many chances to clarify the way forward.

If you’re looking for a nice laptop that will run Linux really well, I’d suggest you buy a Dell XPS instead — it’ll work with any distribution you choose.

Decoding Codes

My friend and colleague of over 20 years, Nick Burka, has written a great article, Usability for Promotion Codes and Access Codes, over on the silverorange blog.

Read Usability for Promotion Codes and Access Codes by Nick Burka on the silverorange blog.

You might not care about promotion codes, but you’ve probably had to type in some kind of code for 2-factor authentication or the rare non-scammy coupon code. Nick’s article covers what can make these codes easy (or difficult) to remember, type, and say over the phone.

It’s too bad the creators of our Canadian postal code system couldn’t have read this before they put all of those Gs and Js in the Quebec postal codes (an English G and French J sound almost identical).

I’m particularly proud of this article as it draws on external expertise – something we’ve been trying to do more of at silverorange. This article in particular draws on things we learned from a literacy and essential skills consultant, and from the non-profit Computers for Success Canada.

May 07, 2018

A Hissy Fit

As I came home from the market and prepared to turn into the driveway I had to stop for an obstacle: a bullsnake who had stretched himself across the road.

[pugnacious bullsnake]

I pulled off, got out of the car and ran back. A pickup truck was coming around the bend and I was afraid he would run over the snake, but he stopped and rolled down the window to help. White Rock people are like that, even the ones in pickup trucks.

The snake was pugnacious, not your usual mellow bullsnake. He coiled up and started hissing madly. The truck driver said "Aw, c'mon, you're not fooling anybody. We know you're not a rattlesnake," but the snake wasn't listening. (I guess that's understandable, since they have no ears.)

I tried to loom in front of him and stamp on the ground to herd him off the road, but he wasn't having any of it. He just kept coiling and hissing, and struck at me when I got a little closer.

I moved my hand slowly around behind his head and gently took hold of his neck -- like what you see people do with rattlesnakes, though I'd never try that with a venomous snake without a lot of practice and training. With a bullsnake, even if they bite you it's not a big deal. When I was a teenager I had a pet gopher snake (a fringe benefit of having a mother who worked on wildlife documentaries), and though "Goph" was quite tame, he once accidentally bit me when I was replacing his water dish after feeding him and he mistook my hand for a mouse. (He seemed acutely embarrassed, if such an emotion can be attributed to a reptile; he let go immediately and retreated to sulk in the far corner of his aquarium.) Anyway, it didn't hurt; their teeth are tiny and incredibly sharp, and it feels like the pinprick from a finger blood test at the doctor's office.

Anyway, the bullsnake today didn't bite. But after I moved him off the road to a nice warm basalt rock in the yard, he stayed agitated, hissing loudly, coiling and beating his tail to mimic a rattlesnake. He didn't look like he was going to run and hide any time soon, so I ran inside to grab a camera.

In the photos, I thought it was interesting how he holds his mouth when he hisses. Dave thought it looked like W.C. Fields. I hadn't had a chance to see that up close before: my pet snake never had occasion to hiss, and I haven't often seen wild bullsnakes be so pugnacious either -- certainly not for long enough that I've been able to photograph it. You can also see how he puffs up his neck.

I now have a new appreciation of the term "hissy fit".

[pugnacious bullsnake]

May 05, 2018

May 04, 2018

(NSFW) What Stefan Sees



An Interview with Photographer Stefan Schmitz

Stefan Schmitz is a photographer living in Northern France and specializing in sensual and nude portraits. I stumbled upon his work during one of my searches for photographers using Free Software on Flickr, and as someone who loves shooting portraits his work was an instant draw for me.

Franzi Skamet by Stefan Schmitz
Khiara Gray by Stefan Schmitz

He’s a member of the forums here (@beachbum) and was gracious enough recently to spare some time chatting with me. Here is our conversation (edited for clarity)…

Are you shooting professionally?

Nope, I’m not a professional photographer, and I think I’m quite happy about that. I’ve been photographing my surroundings for ±40 years now, and I have a basic idea about camera-handling and light. Being a pro is about paying invoices by shooting photos, and I fear that the pressure at the end of some months or quarters can easily take the fun out of photography. I’m an engineer, and photography is my second love behind wife and kids.

Every now and then some of my pictures are requested and published by some sort of magazine, press or web-service, and I appreciate the attention and exposure, but there is no (or very little) money in the kind of photography I specialize in, so … everything’s OK the way it is.

Khiara Gray by Stefan Schmitz

What would you say are your biggest influences?

Starting with photographers: Andreas Feininger, Peter Lindbergh and Alfred Stieglitz. Check out the portrait of Georgia O’Keeffe by Alfred Stieglitz: it’s 100 years old and it’s all there. Pose, light, intensity, personality - nobody has invented anything [like it] afterwards. We all just try to get close. I feel the same when I look at images taken by Peter Lindbergh, but my eternal #1 is Andreas Feininger.

Georgia O’Keeffe by Alfred Stieglitz

I got the photo-virus from my father and I learned nearly everything from daddy’s well-worn copy of The Complete Photographer [amzn] (Feininger) from 1965. Every single photo in that book is a masterpiece, even the strictly “instructional” ones. You measure every photo-book in the world against this one and they all finish second. Get your copy!

How would you describe your own style overall?

I shoot portraits of women and most of the time they don’t wear clothes. The portrait-part is very important for me: the model must connect with the viewer and ideally the communication goes beyond skin-deep. I want to see (and show) more than just the surface, and when that happens, I just press the shutter-button and try to get out of the way of the model’s performance.

Jennifer Polska by Stefan Schmitz
Franzi Skamet by Stefan Schmitz

What motivates you when deciding what/how/who to shoot?

I like women, so I take photos of women. If I were interested in beetles, I’d buy a macro lens and shoot beetles. All kidding aside, I think it’s a natural thing to do. I am married to a beautiful woman, an ex-model, and when she got fed-up with my eternal “can we do one more shoot” requests, we discussed things and she allowed me to go ahead and shoot models. Her support is very important to me, but her taste is very different from mine. I really never asked myself “why” I shoot sensual portraits and nudes. It just feels like “I want to do that” and I feel comfy with it. Does there have to be a reason?

The location is very important for me. Nothing is more boring than blinding a person with a flashlight in front of a gray wallpaper. A room, a window-sill, a landmark - there’s a lot of inspiration out there, and I often think “this is where I want to shoot”. Sometimes my wife tells me of some place she has been to or seen, and I check that out.

If you had to pick your own favorite 3 images of your work, which ones would you choose and why?

Jennifer Polska by Stefan Schmitz

Jennifer is a very professional and inspiring model. We’ve worked together quite a number of times and while you may think that this shot was inspired by The Who’s “Pinball Wizard”, I’d answer “right band, wrong song”. It’s The Who, alright, but the song’s “A quick one while he’s away”. I chose this photo because it’s all about Jennifer’s pose and facial expression. It’s sensual, even sexy, but looking at Jennifer’s face you forget about the naked skin and all. There’s beauty, there’s depth … that’s what I’m after.

Alice by Stefan Schmitz

This shot of Alice is an example of the importance of natural light. There are photographers out there who can arrange light in a similar way, but I doubt that Alice would express this natural serenity in a studio setup with cables and stands and electric transformers humming. She’s at ease, the light is perfect - I just try to be invisible because I don’t want to ruin the moment.

Khiara Gray by Stefan Schmitz

Try to escape Khiara’s eyes. Go, do it. It’s all there, the pose, the room, the ribbon-chair and the little icon, but those eyes make the picture. I did NOT whiten the eyeballs nor did I dodge the iris, and of course it’s all natural/available light.

If you had to pick 3 favorite images from someone else, which ones would you choose and why?

I already named Stieglitz’ Georgia O’Keeffe as an inspiration further up - next to that there’s Helmut Newton’s Big Nude III, Henrietta and Kim Basinger’s striptease in 9½ Weeks (white silk nighty and all). Each one a masterpiece, each one very influential for me. Imagine the truth and depth of Georgia with the force and pride of Henrietta and the erotic playfulness of Kim Basinger. That photo would rule the world.

Big Nude III, Henrietta, Helmut Newton

Is there something outside of your comfort zone you wish you could try/shoot more of?

I would like to work more with women above the age of 35, but it’s hard to find them. In general they stop modeling nude when the kids arrive.

Shooting more often outdoors would be cool, too, but that’s not easy here in northern France - there is no guarantee for good weather, and it’s frustrating when you organize a shoot two weeks in advance just to call it off in the very last minute due to bad weather.

Last but not least there’s a special competition among photographers; it’s totally unofficial and called “the white shirt contest”. Shoot a woman in a white shirt and make everybody “feel” the texture of that shirt. I give it a try on every shoot and very few pictures come out the way I wish. Go for it - it’s way harder than I thought!

Alice by Stefan Schmitz

How do you find your models usually?

There are websites where models and photographers can present their work and get in contact. The biggest one worldwide is modelmayhem.com, and I highly recommend becoming a member. Another good place is tumblr.com, but you have to go through a lot of dirt before you find some true gems. I have made contact via both sites and I recommend them.

You will need some pictures in your portfolio in order to show that you are - in fact - a photographer with a basic idea of portrait-work. If you shoot portraits (I mean really portraits, not some snapshots of granny and the kids under the Christmas-tree), you probably have enough photos on your disk to state the point. But if you don’t and you want to start (nude) portraits, spend some money on a workshop. I did that twice and it really helped me in several ways: communication with the model, how to start a session, do’s and don’ts - and at the end of the day you will drive home with a handful of pictures for your portfolio.

Hannah by Stefan Schmitz

Speaking of gear, what are you shooting with currently (or what is your favorite setup)?

Gear is overrated. I’ve been with Nikon since 1979, and today I own and use two bodies: a 1975 Nikon F2 Photomic (bought used in ’82), loaded with Kodak Tri-X, and a Nikon D610 DSLR. 90% of my pictures are shot with a 50mm standard lens. Next on the list is the 35mm - you will need that in small rooms when the 50mm is already a bit too long and you want to keep some distance. I happen to own an 85mm, but the locations I book and shoot rarely offer enough space to make use of that lens.

There are these cheap, circular 1m silver reflectors on amazon. They cost about 15 €/$ and you get a crappy stand for the same price. That stuff is pure gold - I use the reflector a lot and I highly recommend learning how to work with it. It’s my little secret weapon when I shoot against the light (see Alice here above).

A camera with a reasonably fast standard lens, a second battery and a silver reflector is all I need. The rest is luxury for me, but I am pretty much a one-trick-pony. Other photographers will benefit more from a bigger kit.

Most of your images appear to be making great use of natural light. Do you use other lighting gear (speedlights, monoblocks, modifiers, etc)?

Right - available light is where it’s at. I very rarely shoot with a flash kit today because it distracts me from the work with the model. I’m a loner on the set, no assistants or friends who come and help, so everything must be totally simple and foolproof.

That said, I own an alarming number of speedlights, umbrellas, triggers and softboxes, but I don’t need that gear very often. I try to visit the locations before I shoot. I check the directions and plan for a realistic timeframe, so today I will neither find myself in a totally dark dungeon nor in a sun-filled room with contrasts à gogo. Windows to the west - shoot in the morning; windows facing south-east - shoot in the (late) afternoon.

Karolina Lewschenko by Stefan Schmitz

Here’s a shot of Karolina Lewschenko. We took this photo in a hotel room by the end of October and the available (window) light got too weak, so I used an Aurora Firefly 65 cm softbox with a Metz speedlight and set up some classic Rembrandt light. I packed that gear because I knew that our timeframe wasn’t guaranteed to work out perfectly. “Better be safe than sorry”.

Franzi Skamet by Stefan Schmitz

Do you pre-visualize and plan your shoots ahead of time usually, or is there a more organic interaction with the model and the space you’re shooting in?

Yes, I do. When I visit a place, a possible location, I have some ideas of where to shoot, what furniture to push around and what pose to try. I can pretty much see the final picture (or my idea of it) before I book the model. Having said that, you know that no battle-plan has ever survived the first shot fired…

When the model arrives, we take some time to walk around the locations and discuss possible sets. We will then start to shoot fully clothed in order to get used to one another and see how the light will be on the final shots. It’s very important for me to get feedback from the model. She might say that a pose is difficult for her or hurts after a few seconds, that she’s not comfy with something, or that she would like to try a totally different thing here. I always pay a lot of attention to those ideas and - in my experience - the shots based on the model’s ideas are in general among the best of the day.

Karolina Lewschenko by Stefan Schmitz

I mean, we’re not here because I shoot bugs or furniture; you don’t give me the opportunity to express myself here because you are a fan of crickets. All the attention is linked to the beautiful women in my photos and how they connect with the beholder. I am just the one who captures the moments; it’s the models who fill those moments with intensity and beauty. It would be very stupid of me not to cooperate with a model who knows how to present herself and who comes up with her own ideas.

Always listen to the model, always communicate, never go quiet.

The discussion with the model also includes what degree of nudity we consider. So the second round of photos starts with the “open shirt” or topless shots before the model undresses completely. If we take photos in lingerie, we do that last (after the nudes) because lingerie often leaves traces on the skin and we don’t want that to show.

Franzi Skamet by Stefan Schmitz

It is important to know what to do and in what order. You don’t want to have a nude model standing in front of you, asking “what’s next?” and you answer “I dunno - maybe (!) try this or that again”. If you lose your directions for a moment, just say so or say “please get your bathrobe and let’s have a look at the last pictures together”. If you are “not sure”, the model might be “not comfy”, and that’s something we want to avoid.

Would you describe your workflow a bit? Which projects do you use regularly?

A typical session is 90 to 120 minutes and I will end up with about 500 exposures on the SD-card and maybe a roll of exposed Kodak Tri-X. The film goes to a lab and I will get the negatives and scans back within 15 to 30 days.

There are two SD-cards: one holds RAW files that I import with gThumb to /photos/year/month/day. The other card holds fine-quality JPGs, and those go to /pictures/year/name_of_model. My camera is already set to monochrome, so I get every picture I shoot in b/w on the camera screen, and the JPG files are also monochrome.
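If you prefer scripting the copy over an import dialog, the date-based layout described above is easy to reproduce. This is only a sketch: the RAW extensions and the use of file modification time as the capture date are my assumptions, and gThumb does this through its own import tool:

```python
import datetime
import os
import shutil


def import_card(card_dir, photo_root, raw_exts=(".nef", ".dng")):
    """Copy RAW files from a card into photo_root/year/month/day.

    The date folders come from each file's modification time (a stand-in
    for the real capture date, which an EXIF-aware tool would use).
    Returns the list of destination paths.
    """
    copied = []
    for name in sorted(os.listdir(card_dir)):
        if not name.lower().endswith(raw_exts):
            continue  # skip JPGs, sidecars, etc.
        src = os.path.join(card_dir, name)
        d = datetime.date.fromtimestamp(os.path.getmtime(src))
        dest_dir = os.path.join(
            photo_root, f"{d.year:04d}", f"{d.month:02d}", f"{d.day:02d}"
        )
        os.makedirs(dest_dir, exist_ok=True)
        dest = os.path.join(dest_dir, name)
        shutil.copy2(src, dest)  # copy2 preserves timestamps
        copied.append(dest)
    return copied
```

Usage would look like `import_card("/media/SDCARD/DCIM", "/photos")`; the second card’s per-model layout is the same idea with a model name instead of the date folders.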

Next step is a pre-selection in Geeqie. It’s a great picture viewer; I delete all the missed shots (bad framing, out of focus, etc.) and note/mark all the promising/good shots here. This is normally the end of day one.

Switching from Rawstudio to darktable has been a giant step for me. dt is just a great program and I still learn about new functions and modules every day. The file comes in, is converted to monochrome, and afterwards color saturation and lights (red and yellow) are manipulated. This way I can treat the skin (brighter or darker) without influencing the general brightness of the picture. Highlights and lowlights may be pushed a bit to the left, and I add the signature and a frame 0.5% wide; lens correction is set automatically. That’s the whole deal. On very rare occasions I add some vignette or drop the brightness gradually from top to bottom, but again: it doesn’t happen all that often. I never cut, crop or re-frame a shot. WYSIWYG. Cropping something out, turning the picture in order to get perfectly vertical lines or the like - it all feels like cheating. I have no client to please, no deadline to meet; I can take a second longer and frame my photo when I look through the viewfinder.

Franzi Skamet by Stefan Schmitz

The photos are then treated in the GIMP. Some dodge and burn (especially when there are problematic, very high or low contrasts), maybe stamp an electric plug away, and in the end I resize them down to 2560 on the long side (big enough for A3 prints) and (sometimes) apply the sharpening tool with a value of 20 or 25. Done. I can’t save a crappy shot in post-prod and I won’t try. Out of the 500 or so frames, 10 to 15 will be processed like that, and it feels like nothing has changed over the last 40 years. The golden rule was “one good shot per roll of film” and I happen to be there, too. Spot-on!

I upload those 15 pictures to my Flickr account, and about once or twice a week I place a shot in the many Flickr groups. Also once a week (or every ten days) I post a photo on my Tumblr account. Today I have about 5k followers and my photos are seen between 500,000 and one million times a month, depending on the time of year and the weather. There’s less traffic on warm summer days and more during cold and rainy winter nights.

It takes me some time before I add a shot to my own website. In comparison I show few photos there, every one for a reason, and I point people to that address, so I hope I only show the best.

Aya Kashi by Stefan Schmitz

Is your choice to use Free Software for pragmatic reasons, or more idealistic?

I owned an Apple II in 1983 and a Digital MicroVAX in 1990 or so. My way to FOSS started out pragmatic and became a conviction later on. In the late 90’s and early 2000’s I had my own small business and worked with MS Office on a Win NT machine. Photos were processed with a Nikon film-scanner through the proprietary software into an illegal copy of Adobe PS4. It was OK, stable, and I didn’t fear anything, but I wasn’t really happy either. One day I swung over to StarOffice/OpenOffice.org for financial reasons, and I also got rid of that unlicensed PS and installed the GIMP (I don’t know what version, but I upgraded some time later to 1.2, that’s for sure). I have had internet access and an email address since 1994, but in the late 90’s big programs still came on CDs attached to computer magazines. Downloading the GIMP was out of the question.

Gaming was never my thing, and when I installed Win XP, all hell broke loose - keeping a computer safe, virus-free and running wasn’t easy before the first service pack, and MS reacted way too slowly in my opinion - so I tried Debian (10 CD kit) on my notebook, got it running, found the GIMP and OOo - and that was it. It took a bit of trial and error, and I had to buy a number of W-LAN sticks because very few were supported and so on, but in the end I got the machines running.

Later on I got hold of an Ubuntu 7.10 CD, tried that, and never looked back. The few changes on my system were from GNOME to the XFCE desktop and from Thunderbird to a browser-based mail client. Xubuntu is a no-brainer; it runs stable and fast. Every December I contribute €100 to FOSS: in general €50 and €40 to two projects and a tenner to Wikipedia. I’d spend an extra tenner on any project that helps to convert old StarOffice files (.sdw and so on) to today’s standards (.odt…), but nobody seems interested.

What is one piece of advice you would offer to another photographer?

Don’t take any advice from me, I’m still learning myself. Or wait: be kind and a gentleman with the models. They all - each and every one of them - have had bad experiences with photographers who forgot that the models are nude for the camera, not for the man behind it. They have all been in a room with a photographer who breathes a bit too hard and doesn’t get his gear working … don’t be that arsewipe!

Irina by Stefan Schmitz

Arrange for a place where the model can undress in privacy - she didn’t come for a strip-show and you shouldn’t try to make it one. Have some bottles of water at hand and talk about your plans, poses and sets with the model. Few people can read minds, so communication works best when you say what you have in mind and the model says how she thinks this can be realized. The more you talk, the better you communicate, the better the pictures. No good photo has ever been shot during a quiet session, believe me.

In general the model will check your portfolio/website and expect to do more or less the same kind of work with you. If you want to do something different, say so when booking the model. If your website shows a lot of nude portraits, models will expect to do that kind of photos. They may be a bit upset if you ask them out of nowhere to wear a latex suit because it’s fetish-Friday in your world. The more open and honest you are from the beginning, the better the shooting will go down.

Irina by Stefan Schmitz

Don’t overdo the gear-thingy. 90% of my photos are taken with the 50mm standard lens. Period. Sometimes I have to switch to 35mm because the room is a bit too small and the distance too close for the one four-fifty, so everything I bring to an indoor shooting is the camera, a 50, a 35, an el-cheap-o 100cm reflector from amazon (+/- 15 €/$) and an even cheaper stand for the reflector. Gear is not important, communication is.

Want to spend 300 €/$ on new gear? Spend it on a workshop. Learn how to communicate, get inspiration and fill your portfolio with a first set of pictures, so the next model you email can see that you already have some experience in the field of (nude) portraits. That’s more important than a new flashlight in your bag.

Isabelle Descamps by Stefan Schmitz

Thank You Stefan!

I want to thank Stefan again for taking the time and being patient enough to chat with me!

Stefan is currently living in Northern France. Before that he lived and worked in Miami, FL, and Northern Germany where he is from, went to school, and met his wife. His main website is at https://whatstefansees.com/, and he can be found on Flickr, Facebook, Twitter, Instagram, and Tumblr.

Unless otherwise noted, all of the images are copyright Stefan Schmitz (all rights reserved) and are used with permission.

May 02, 2018

Bíonn gach tosach lag*

Tá mé ag foghlaim Gaeilge; tá uaim scríobh postálacha blag as Gaeilge, ach níl mé oilte ar labhairt nó scríbh as Gaeilge go fóill. Tiocfaidh sé le tuilleadh cleachtaidh.**

Catching up

I have definitely fallen off the blog wagon; as you may or may not know, the past year has been quite difficult for me personally, far beyond being an American living in Biff Tannen’s timeline these days. Blogging definitely was pushed to the bottom of the formidable stack I must balance, but in hindsight I think the practice of writing is beneficial no matter what it’s about, so I will carve out regular time to do it.

Tá mé ag foghlaim Gaeilge

This post title and opening is in Irish; I am learning Irish and trying to immerse myself as much as one can outside of a Gaeltacht. There’s quite a few reasons for this:

  • The most acute trigger is that I have been doing some genealogy and encountered family records written in Irish. I couldn’t recall enough of the class I’d taken while in college and got pulled in wanting to brush up.
  • Language learning is really fun, and Irish is of course part of my heritage and I would love to be able to teach my kids some since it’s theirs, too.
  • One of the main reasons I took Japanese in college for 2 years is because I wanted to better understand how kanji worked and how to write them. With Irish, I want to understand how to pronounce words, because from a native English speaker point of view they sound very different than they look!
  • Right now appears to be an exciting moment for the language; it has shed some of the issues that I think plagued it during ‘The Troubles’ and you can actually study and speak it now without making some kind of unintentional political statement. There’s far more demand for Gaelscoils (schools where the medium for education in all subjects is Irish) than can be met. In the past year, the Pop Up Gaeltacht movement has started and really caught on, a movement run in an open source fashion I might add!
  • I am interested in how the brain recovers from trauma and I’ve a little theory that language acquisition could be used as a model for brain recovery and perhaps suggest more effective therapies for that. Being knee deep in language learning, at the least, is an interesting perspective in this context.
  • I also think – as a medium that permeates everything you do, languages are similar to user interfaces – you don’t really pay attention to a language when you speak it if you’re fluent, it’s just the medium. Where you pay attention to the language rather than the content is where you have a problem speaking it or understanding it. (Yes, the medium is the message except when it isn’t. 🙂 )Similarly, user interfaces aren’t something you should pay attention to – you should pay attention to the content, or your work, rather than focus on the intricacies of how the interface works. I think drawing connections between these two things is at least interesting, if not informative. (Can you tell I like mashing different subjects together to see what comes out?)

Anyway, I could go on and on, but yes, $REASONS. I’m trying to learn a little bit every day rather than less frequent intensive courses. For example, I’m trying to ‘immerse’ as I can by using my computers and phone in the Irish language, keep long streaks in the Duolingo course, listen to RnaG and watch TG4 and some video courses, and some light conversation with other Irish learners and speakers.

Maybe I’ll talk more about the approach I’m taking in detail in another post. In general, I think a good approach to language learning is a policy I try to subscribe to in all areas of life – just f*ing do it (speak it, write it, etc. Do instead of talking about doing. Few things infuriate me more although I’m as guilty as anyone. 🙂 ) There you go for now, though.

What else is going on?

I have been working on some things that will be unveiled at the Red Hat Summit and don’t want to be a spoiler. I am planning to talk a bit more about that kind of work here. One involves a coloring book :), and another involves a project Red Hat is working on with Boston University and Boston Children’s Hospital.

Just this week, I received my laptop upgrade 🙂 It is the ThinkPad X1 Yoga (3rd Gen) and I am loving it so far. I have pre-release Fedora 28 on it and am very happy with the out-of-the-box experience. I’m planning to post a review of running Fedora 28 on it soon!

Slán go fóill!

(Bye for now!)

* Every beginning is weak.

** I’m learning Irish; I want to write blog posts in Irish, but I don’t speak or write Irish well enough yet. It’ll come with practice. (Warning: This is likely Gaeilge bhriste / broken Irish)

FreeCAD BIM development news - April 2018

Hello everybody, it’s time for a new report on FreeCAD development, particularly the development of BIM tools. To recap for those who are new to this column: I recently started to "divide" the development of BIM tools in FreeCAD between the original Arch, which is included in FreeCAD itself, and the new BIM...

May 01, 2018

Goodbye Kansas Studios

Goodbye Kansas Studios is a VFX studio that creates award-winning visual effects, digital animation and motion capture for movies, game trailers and commercials. Goodbye Kansas Studios’ main office is in Stockholm, Sweden, but they also have offices in Los Angeles, London, Hamburg and Uppsala.

Goodbye Kansas Studio

Text by Nils Lagergren and Daniel Bystedt, Goodbye Kansas

We pride ourselves on having a structure at work where we put the artists first and the administration works hard to support the artists. This has in turn created a company culture where artists help each other out as soon as they run into any CG-related issue. We also have a very strong creative atmosphere where artists feel ownership of their tasks and go out of their way to achieve visual excellence.

At Goodbye Kansas Studios we use several 3D applications, such as Houdini, Blender, Zbrush and Maya. We always try to approach a challenge with the tool that is best suited for solving the problem at hand. Blender first caught our eye because some of our artists had started trying it out and were surprised at how much faster they could produce models. Even though not every artist at the company uses Blender, it is becoming more and more popular in the modeling department at the Stockholm office. Let’s have a look at some projects!

Characters for Unity – Adam Demo

Characters were modeled in Blender and Zbrush. The low poly version of the character was entirely done in Blender.

Blender fits nicely into our pipeline because of its powerful modeling tools. We also use it for hair grooming, which then is exported as curves and used for procedural hair setups in other packages. Blender has a very nice mix between procedural tools, standard box modeling and sculpting. Generally we use Zbrush for character work and Blender for hard surface and props/environment work. We also use it in parts of our environment workflow for scattering objects.

Walking dead – season 8

Retopology and UV-mapping of human actor scans were done in Zbrush and Blender. Grooming of hairstyles was also done in Blender.

Here is some of what artists say about Blender:

“Things that are very complex to achieve in other applications are suddenly easy!”
“As a modeler it’s a program that works with you, instead of against you.”
“Suddenly I love Dutch people”
“It made box modelling fun again”
“It feels so strange that Blender is free when it’s actually better than most other modeling programs on the market”

Overkill’s: The Walking dead – Aidan trailer

“Upresolution” of zombie game assets was done in both Zbrush and Blender. Grooming of zombie hairstyles was done in Blender, and we also made a bunch of environment assets.

Along with the gods – The two worlds

The stone chamber was created with Blender. There was a lot of tedious work in placing rocks so they would not intersect in this environment. Thanks to Blender’s fast rigid body simulation system, we could simulate a low resolution version of the rocks and drop them in place. The rocks were then relinked to a high resolution version and published as an environment model. The stone characters in this scene were also done in Blender in two passes: first the rocks were scattered onto a human base mesh, then they were nudged around by hand for better art direction. The big stone walls were also sculpted in Blender.

Biomutant – cinematic trailer

The little hero character was modeled in Zbrush and Blender. Grooming of the fur was done in Blender.

Raid: World War 2 – Cinematic Trailer

Several environments were done in Blender. We started the layout process using Grease Pencil. This was great, since we could do it very quickly, side by side with the art director, and address his thoughts and notes. This Grease Pencil sketch was later linked into each environment artist’s scene so they had a good reference when building it. The environment artists also linked each other’s scenes so that they could see each other’s work update. This made it easy to tie the separate rooms together.

Mass Effect: The Andromeda Initiative

The Moon environment was made in Blender. Being able to sculpt the ground at the same time as scattering out rocks made it really easy to iterate on the shot and see how everything looked in the camera. By importing the character animation with Alembic from Maya to Blender, the environment artist could make sure that nothing intersected the characters’ feet while they were walking. This also enabled us to create the environment at the same time as we were animating the shots.

April 30, 2018

Interview with JK Riki

Could you tell us something about yourself?

Hi everyone! My name is JK. I am an animator, graphic designer, author, and the Art-half of the Weekend Panda game studio.

Do you paint professionally, as a hobby artist, or both?

My full time job in game development has me doing art professionally, but I’m always working on improving my skills by doing digital painting as a hobby as well – so a little bit of both.

What genre(s) do you work in?

My most practiced genre is the comic/cartoon art style seen in the image above, which I have a lot of fun doing. I also strive to push beyond my comfort zone and try everything from fully rendered illustrations to graphic styles.

I want to continue to improve all-around as an artist so every genre becomes a possibility.

Whose work inspires you most — who are your role models as an artist?

* In animation: Glen Keane, who worked on things like Ariel in The Little Mermaid and Ratigan in The Great Mouse Detective (or as some know it, Basil of Baker Street).
* In comics: Bill Amend, who does the syndicated comic strip Fox Trot.
* In figure drawing: Samantha Youssef, who runs Studio Technique and has been a wonderful mentor.
* In painting: There are so many, and I seem to find more every day!

How and when did you get to try digital painting for the first time?

I imagine the first time I tried it was back in Art School, though that’s probably close to 15 years ago, so the memories are hazy.

What makes you choose digital over traditional painting?

I am a big proponent of “Fail fast and often.” Digital painting allows for just that. I can make (and try to correct) 20 mistakes digitally in the time it takes to pinpoint and alter one mistake traditionally.

Of course, I still love traditional art, even though I find it takes far longer to do. I have sketchbooks littered around my office, and would happily animate with paper and pencil any time any day.

How did you find out about Krita?

It was actually from my wife, who is a software engineer! She needed to do some graphics for a project at her old job, and wanted to find a free program to do it. After Adobe went to a forced subscription-only model, I was looking to make a change, and she showed me Krita.

What was your first impression?

Well, to be honest, I have a hard time learning new programs, so initially I was a little bit resistant! There were so many brushes, and I had to adapt to the differences between Krita and Photoshop. It won me over far more quickly than any other program, though. The flow and feel of painting and drawing in Krita is on a whole different level, probably because it was designed with that in mind! I would never want to go back now.

What do you love about Krita?

Every day I find new tools and tricks in Krita that blow me away. I recently discovered the Assistant Tool and it was practically life-changing. I can do certain things so much faster thanks to learning about that magical little icon.

I also adore so many of the brush presets. They seem much more aligned with what I’m trying to do than the ones that come with other art programs.

The fact that Krita is free is icing on the cake. (Spoiler: Artists love free stuff.)

What do you think needs improvement in Krita? Is there anything that
really annoys you?

I’ve never quite gotten used to the blending mode list/UI in Krita vs. Photoshop. The PS one just feels more intuitive to me. I’d love to see an option to make the Krita drop down menu more like that one.

What sets Krita apart from the other tools that you use?

Apart from the price tag, Krita is just more fun to work in than most other programs I use. I genuinely enjoy creating art in Krita. Sometimes with other programs it feels like half of my job is fighting the software. Rarely do I feel that way in Krita.

If you had to pick one favourite of all your work done in Krita so far,
what would it be, and why?

You torture me, how can I choose?! I suppose it would be this one:

It may not be the most finished or technically impressive art I’ve ever done, but it was one of the first times digital painting really clicked with me and I thought “Hey, maybe I can do this!” I’ve always felt an affinity for comic and cartoon style, but realism often eludes me. This piece proved in some small way that my practice was starting to pay off and I was getting somewhere. It felt like a turning point. So even if no one else feels the same way, this little bird will always be special to me.

What techniques and brushes did you use in it?

My most-used brushes are Ink_tilt_10 and Ink_tilt_20 (as seen in this screen capture!)

These days I use many more brushes and techniques, but that whole image was done with just those two, at different levels of flow and opacity. When I made it I didn’t even know about Alpha Lock on the layers panel, which I now use in almost every digital painting.

Where can people see more of your work?

People can PLAY some of my work in the mobile game The Death of Mr. Fishy! All the art assets for that game were done in Krita. I’m doing more art for our next game right now as well. The latest details will always be posted at WeekendPanda.com.

I also share my practice art and work-in-progress on my personal Twitter account which is @JK_Riki.

Anything else you’d like to share?

Yes. A note to other artists out there: You can have the greatest tools and knowledge in the world but if you don’t practice, and truly put in the work, you will never achieve your best art. It is hard. I know, I’m with you there. It’s worth it, though. Work hard, practice a ton, and we’ll all improve together. Let’s do it! And if you ever need someone to encourage you to keep going, send me a note! 🙂

April 28, 2018

Displaying PDF with Python, Qt5 and Poppler

I had a need for a Qt widget that could display PDF. That turned out to be surprisingly hard to do. The Qt Wiki has a page on Handling PDF, which suggests only two alternatives: QtPDF, which is C++ only so I would need to write a wrapper to use it with Python (and then anyone else who used my code would have to compile and install it); or Poppler. Poppler is a common library on Linux, available as a package and used for programs like evince, so that seemed like the best route.

But Python bindings for Poppler are a bit harder to come by. I found a little one-page example using Poppler and Gtk3 via gi.repository ... but in this case I needed it to work with a Qt5 program, and my attempts to translate that example to work with Qt were futile. Poppler's page.render(ctx) takes a Cairo context, and Cairo is apparently a Gtk-centered phenomenon: I couldn't find any way to get a Cairo context from a Qt5 widget, and although I found some web examples suggesting renderToImage(), the Poppler available in gi.repository doesn't have that function.

But it turns out there's another Poppler: popplerqt5, available in the Debian package python3-poppler-qt5. That Poppler does have renderToImage, and you can take that image and paint it in a paint() callback or turn it into a pixmap you can use with a QLabel. Here's the basic sequence:

    document = Poppler.Document.load(filename)
    document.setRenderHint(Poppler.Document.TextAntialiasing)
    page = document.page(pageno)
    img = page.renderToImage(dpi, dpi)

    # Use the rendered image as the pixmap for a label:
    pixmap = QPixmap.fromImage(img)
    label.setPixmap(pixmap)

The line to set text antialiasing is not optional. Well, theoretically it's optional; go ahead, try it without that and see for yourself. It's basically unreadable.

Of course, there are plenty of other details to take care of. For instance, you can get the size of the rendered image:

    size = page.pageSize()
... after which you can use size.width() and size.height(). They're in points. There are 72 points per inch, so calculate accordingly in the dpi values you pass to renderToImage if you're targeting a specific DPI or need it to fit in a specific window size.
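For example, since 72 points make an inch, computing the DPI to fit a page into a given widget size is a one-liner. Here's a small sketch (fit_dpi is a hypothetical helper name of mine, not part of Poppler or Qt):

```python
POINTS_PER_INCH = 72

def fit_dpi(page_points, target_pixels):
    """Return the DPI to pass to renderToImage() so that a page
    dimension of page_points (in points) renders to target_pixels pixels."""
    return target_pixels / (page_points / POINTS_PER_INCH)

# A US Letter page is 612 points wide; to fill a 1224-pixel-wide widget:
# fit_dpi(612, 1224) -> 144.0
```

Pass the same value for both dpi arguments of renderToImage to keep the aspect ratio.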

Window Resize and Efficient Rendering

Speaking of fitting to a window size, I wanted to resize the content whenever the window was resized, which meant redefining resizeEvent(self, event) on the widget. Initially my PDFWidget inherited from Qwidget with a custom paintEvent(), like this:

        # Create self.img once, early on:
        self.img = self.page.renderToImage(self.dpi, self.dpi)

    def paintEvent(self, event):
        qp = QPainter()
        qp.begin(self)
        qp.drawImage(QPoint(0, 0), self.img)
        qp.end()
(Poppler also has a function page.renderToPainter(), but I never did figure out how to get it to do anything useful.)

That worked, but when I added resizeEvent I got an infinite loop: paintEvent() called resizeEvent() which triggered another paintEvent(), ad infinitum. I couldn't find a way around that (GTK has similar problems -- seems like nearly everything you do generates another expose event -- but there you can temporarily disable expose events while you're drawing). So I rewrote my PDFWidget class to inherit from QLabel instead of QWidget, converted the QImage to a QPixmap and passed it to self.setPixmap(). That let me get rid of the paintEvent() function entirely and let QLabel handle the painting, which is probably more efficient anyway.

Showing all pages in a scrolled widget

renderToImage gives you one image corresponding to one page of the PDF document. More often, you'll want to see the whole document laid out, with all the pages. So you need a way to stack a bunch of widgets vertically, one for each page. You can do that with a QVBoxLayout on a widget inside a QScrollArea.

I haven't done much Qt5 programming, so I wasn't familiar with how these QVBoxes work. Most toolkits I've worked with have a VBox container widget to which you add child widgets, but in Qt5, you create a widget (no particular type -- a QWidget is enough), then create a layout object that modifies the widget, and add the sub-widgets to the layout object. There isn't much documentation for any of this, and very few examples of doing it in Python, so it took some fiddling to get it working.
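A minimal sketch of that pattern, assuming PyQt5 (plain QLabel text stands in here for the per-page pixmaps you'd get from renderToImage):

```python
import os
os.environ.setdefault("QT_QPA_PLATFORM", "offscreen")  # allow running headless

from PyQt5.QtWidgets import (QApplication, QLabel, QScrollArea,
                             QVBoxLayout, QWidget)

app = QApplication([])

# A plain QWidget holds the pages; the layout object is attached to
# the widget, and the per-page widgets are added to the layout.
container = QWidget()
layout = QVBoxLayout(container)
for pageno in range(3):
    label = QLabel("page %d" % pageno)  # real code: label.setPixmap(...)
    layout.addWidget(label)

scroll = QScrollArea()
scroll.setWidget(container)
scroll.setWidgetResizable(True)
```

Call scroll.show() and app.exec_() to display it; the scroll area takes ownership of the container and scrolls through all the stacked pages.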

Initial Window Size

One last thing: Qt5 doesn't seem to have a concept of desired initial window size. Most of the examples I found, especially the ones that use a .ui file, use setGeometry(); but that requires an (X, Y) position as well as (width, height), and there's no way to tell it to ignore the position. That means that instead of letting your window manager place the window according to your preferences, the window will insist on showing up at whatever arbitrary place you set in the code. Worse, most of the Qt5 examples I found online set the geometry to (0, 0): when I tried that, the window came up with the widget in the upper left corner of the screen and the window's titlebar hidden above the top of the screen, so there's no way to move the window to a better location unless you happen to know your window manager's hidden key binding for that. (Hint: on many Linux window managers, hold Alt down and drag anywhere in the window to move it. If that doesn't work, try holding down the "Windows" key instead of Alt.)

This may explain why I've been seeing an increasing number of these ill-behaved programs that come up with their titlebars offscreen. But if you want your programs to be better behaved, it works to self.resize(width, height) a widget when you first create it.

The current incarnation of my PDF viewer, set up as a module so you can import it and use it in other programs, is at qpdfview.py on GitHub.

April 26, 2018

GIMP 2.10.0 Released

The long-awaited GIMP 2.10.0 is finally here! This is a huge release, which contains the result of 6 long years of work (GIMP 2.8 was released almost exactly 6 years ago!) by a small but dedicated core of contributors.

The Changes in short

We are not going to list the full changelog here, since you can get a better idea with our official GIMP 2.10 release notes. To get an even more detailed list of changes please see the NEWS file.

Still, to get you a quick taste of GIMP 2.10, here are some of the most notable changes:

  • Image processing nearly fully ported to GEGL, allowing high bit depth processing, multi-threaded and hardware accelerated pixel processing, and more.
  • Color management is a core feature now, most widgets and preview areas are color-managed.
  • Many improved tools, and several new and exciting tools, such as the Warp transform, the Unified transform and the Handle transform tools.
  • On-canvas preview for all filters ported to GEGL.
  • Improved digital painting with canvas rotation and flipping, symmetry painting, MyPaint brush support…
  • Support for several new image formats added (OpenEXR, RGBE, WebP, HGT), as well as improved support for many existing formats (in particular more robust PSD importing).
  • Metadata viewing and editing for Exif, XMP, IPTC, and DICOM.
  • Basic HiDPI support: automatic or user-selected icon size.
  • New themes for GIMP (Light, Gray, Dark, and System) and new symbolic icons meant to somewhat dim the environment and shift the focus towards content (former theme and color icons are still available in Preferences).
  • And more, better, more, and even more awesome!

» READ COMPLETE RELEASE NOTES «

Enjoy GIMP!

Wilber likes it spicy!

Profiling a camera with darktable-chart


Profiling a camera with darktable-chart

Figure out the development process of your camera

What is a camera profile?

A camera profile is a combination of a color lookup table (LUT) and a tone curve which is applied to a RAW file to get a developed image. It translates the colors that a camera captures into the colors they should look like. If you shoot in RAW and JPEG at the same time, the JPEG file is already a developed picture. Your camera can do color corrections to the data it gets from the sensor when developing a picture. In other words, if a certain camera tends to turn blue into turquoise, the profile will correct for the color shift and convert those turquoise values back to their proper hue.

The camera manufacturer creates a tone curve for the camera, knows what color drifts the camera tends to capture, and can correct them. We can mimic what the camera does using a tone curve and a color LUT.

Why do we want a color profile?

The camera captures light as linear RGB values. RAW development software needs to transform those into CIE XYZ tristimulus values for further processing. The color transformation is often done under the assumption that the conversion from camera RGB to CIE XYZ is a linear 3x3 mapping. Unfortunately it is not, because the underlying process is spectral and the camera sensor responds to light spectrally as well. In darktable the conversion is done the following way: the camera RGB values are transformed using the color matrix (coming either from the Adobe DNG Converter or from dcraw) to arrive at approximate XYZ values, and darktable provides a color lookup table in Lab color space to fix inaccuracies or implement styles which are semi-camera-independent. A very cool feature is that a user can edit this color LUT. As this article will show, the color LUT can be created by darktable-chart, so you don’t have to create it by hand.
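As a minimal illustration of that linear 3x3 step (the function and the matrix below are mine for demonstration, not darktable's code or a real camera's matrix):

```python
def apply_color_matrix(rgb, matrix):
    """Multiply a linear camera RGB triple by a 3x3 color matrix
    to get approximate XYZ tristimulus values."""
    return tuple(sum(row[i] * rgb[i] for i in range(3)) for row in matrix)

# Illustrative only: the identity matrix maps RGB through unchanged.
identity = ((1, 0, 0), (0, 1, 0), (0, 0, 1))
# apply_color_matrix((0.2, 0.5, 0.3), identity) -> (0.2, 0.5, 0.3)
```

The whole point of the article is that this single matrix cannot capture the camera's real, spectral behavior, which is why the Lab color LUT is layered on top.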

What we want is the same knowledge about colors in our raw development software that the manufacturer put into the camera. There are two ways to achieve this: either we fit to a JPEG generated by the camera, which can also apply creative styles, or we fit against real color. For real color, a color target ships with a file providing the color values for each patch it has. Software for raw development normally just has a standard color matrix to tweak colors so that they look acceptable, and applies a reasonable tone curve to ensure good shadow detail. We want to do better than that!

We can develop a profile for our development process which improves the colors. We can also take advantage of the color calibration a manufacturer has done for its cameras by fitting a JPEG.

Creating pictures for color profiling

To create the required pictures for camera profiling we need a color chart (aka Color Checker) or an IT8 chart as our target. The difference between a color chart and an IT8 chart is the number of patches and the price. As the IT8 chart has more patches, the result will be much better. Ideally the color target comes with a gray card for creating a custom white balance. I can recommend the X-Rite ColorChecker Passport Photo. It is small, lightweight, all plastic, a good-quality tool, and also has a gray card. An alternative is the Spyder Checkr. If you want a better profiling result, a good IT8 chart is the ColorChecker Digital SG.

We are creating a color profile for sunlight conditions which can be used in various scenarios. For this we need some special conditions.

The Color Checker needs to be photographed in direct sunlight, which helps to reduce any metamerism of the colors on the target and ensures a good match to the data file which tells the profiling software what the colors on the target should look like. A major concern, however, is glare, but we can reduce it with some tricks.

One of the things we can do to reduce glare is to build a simple shooting box. For this we need a cardboard box and three black t-shirts. The box should be open on the top and on the front, like in the following picture (Figure 1).

A cardboard box Figure 1: Cardboard box suitable for color profiling

Normally you just need to cut one side open. Then coat the inside of the box with black t-shirts like this:

A cardboard box coated with black t-shirts Figure 2: A simple box for color profiling

To further reduce glare we also need the right location to shoot the picture. A lot depends on where you are located and the time of year, but in general the best time to shoot the target is either 1-2 hours before or 1-2 hours after mid-day (when the sun has its highest elevation; keep Daylight Saving Time (DST) in mind). Try to shoot on a day with minimal clouds so the sun isn’t changing intensity while you shoot. The higher the temperature, the more water is in the atmosphere, which means the quality of the images for profiling is reduced. Temperatures below 20°C are better than above.

Shooting outdoors

If you want to shoot outdoors, look for an empty tarred parking lot. It should be pretty big, like at a mall, without any cars or trees. You should be far away from walls or anything that can reflect. Put the box on the ground and shoot with the sun behind you, above your right or left shoulder. You can use black fabric (bed sheets) if the ground reflects.

Shooting indoors

Find a place indoors where you can put the box in the sun and place your camera on a tripod in the shadow. The darker the room the better! Garages with an additional garage door are great. The sun also needs to shine at an angle on the Color Checker, which means you photograph the color chart with the sun behind you, above your right or left shoulder. Use black fabric to cover anything that could reflect.

How to shoot the target?

  1. Put your shooting box in the sun and set up your camera on a tripod. It is best to have the camera looking down on the color chart like in the following picture:
A camera pointing into the profiling box Figure 3: Camera doing a custom white balance with the color profiling box
  2. You should use a prime lens for taking the pictures, if possible a 50mm or 85mm lens (or anything in between). The less glass the light has to travel through, the better it is for profiling; those two focal lengths are a good compromise between the number of glass elements and field of view. With a telephoto lens we would be too far away, and with a wide-angle lens we would need to be too close to have just the black box in the picture.

  3. Set your metering mode to matrix metering and use an aperture of at least f/4.0. Make sure the color chart is parallel to the camera’s sensor plane so all patches of the chart are in focus. The color chart should be in the middle of the image, using about 1/3 of the frame, so that vignetting is not an issue.

  4. Set the camera to capture “RAW & JPEG” and disable lens corrections (vignetting corrections) for JPEG files if possible.

  5. If your camera has a custom white balance feature and you have a gray card provided with your target, create a custom white balance with it and use it (see Figure 3). Put the gray card in your black box in the sunlight at the same position as the Color Checker.

  6. We want a camera profile for the most commonly used ISO values, so for each ISO value take 4 pictures of your target: one each at -1/3 EV, 0 EV, +1/3 EV and +2/3 EV. Start with ISO 100, and don’t shoot the extended ISO values (50, 64, 80): normally these are captured at ISO 100 and overexposed, with the exposure then reduced, so use the ISO 100 profile for them. If you hit the maximum shutter speed (1/8000), start to close the aperture. Creating profiles for values above ISO 12800 doesn’t really make sense; even at ISO 6400 the results may start to be less than 100% accurate. You can use the ISO 6400 profile for higher values.

Once you have done all the required shots, it is time to download the RAW and JPEG files to your computer.

Verifying correct images in darktable

For verifying the images we need to know the L-value from the Lab color space of the neutral gray field in the gray ramp of our color target. For the ColorChecker Passport we can look it up in the color information (CIE) file (ColorCheckerPassport.cie) shipping with ArgyllCMS, which should be located at:

/usr/share/color/argyll/ref/ColorCheckerPassport.cie

Note: ArgyllCMS offers CIE and CHT files for different color charts. If you already have one or are going to buy one, check whether ArgyllCMS offers support for it first! You can always add support for your color chart to ArgyllCMS, but the process is much more complex.

The ColorChecker Passport actually has two gray ramps. The neutral gray field is the field on the bottom right on both sides; on the left it is called NEU8 and on the right side it is D1. If we check the CIE file, we will find that the neutral gray field has an L-value of L=96.260066. Let’s round it to L=96. For other color targets you can find the L-value in the description or specification of your target; often it is L=92. Better check the CIE file!

Then open the RAW file in darktable and disable most modules, especially the base curve! Select the standard input matrix in the input color profile module and disable gamut clipping. Make sure “camera white balance” is selected in the white balance module. If lens corrections are automatically applied to your JPEG files, you need to enable lens corrections for your RAW files too! In general, only apply to the RAW what has also been applied to the JPEG file.

Apply the changes to all RAW files you have created!

You can also crop the image but you need to apply exactly the same crop to the RAW and JPEG file!

Now we need to use the global color picker module in the darkroom to find out the value of the neutral gray field on the color target.

  • Open the first RAW file in darkroom and expand the global color picker module on the left.
  • Select area, mean, and Lab in the color picker and use the eye-dropper to select the neutral gray field (bottom right) on the Color Checker you photographed. Here is an example:
darktable global color picker Figure 4: Determining the color of the neutral white patch
  • If the L-value displayed in the color picker module matches the L-value of the field or is close (+/-2), give the RAW file and the corresponding JPEG file 5 stars. In the picture above it is the first value of (96.491, -0.431, 3.020), meaning L=96.491, which is what you’re looking for on this color target. You might be looking for e.g. L=92 if you are using a different Color Checker; see above for how to find out the L-value for your target.
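The ±2 tolerance check is just a comparison on the L channel. A small sketch (the helper name is mine, not part of darktable):

```python
def neutral_patch_ok(measured_lab, reference_L, tol=2.0):
    """Check whether a measured patch's L value is within tolerance
    of the reference L value for the neutral gray field."""
    L = measured_lab[0]  # Lab triple as shown by the color picker
    return abs(L - reference_L) <= tol

# The example reading above, against the ColorChecker Passport's L=96:
# neutral_patch_ok((96.491, -0.431, 3.020), 96) -> True
```

Any shot whose neutral patch fails this check is over- or underexposed for profiling purposes and shouldn't get the 5-star rating.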

Exporting images for darktable-chart

For exporting we need to select Lab as output color profile. This color space is not visible in the combo box by default. You can enable it by starting darktable with the following command line argument:

darktable --conf allow_lab_output=true

Or you can enable it permanently by setting allow_lab_output to TRUE in

~/.config/darktable/darktablerc
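darktablerc is a plain key=value file, so (assuming the setting isn't already present) the line would read:

```
allow_lab_output=TRUE
```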

As the output format select “PFM (float)” and for the export path you can use:

$(FILE_FOLDER)/PFM/$(MODEL)_ISO$(EXIF_ISO)_$(FILE_EXTENSION)

Select all 5 star RAW and JPEG files and export them.

darktable export dialog Figure 5: Exporting the images for profiling

Profiling with darktable-chart

Before we can start you need the chart file for your color target. The chart file contains the layout of the color checker. For example it tells the profiling software where the gray ramp is located or which field contains which color. For the “X-Rite Colorchecker Passport Photo” there is a (ColorCheckerPassport.cht) file provided by ArgyllCMS. You can find it here:

/usr/share/color/argyll/ref/ColorCheckerPassport.cht

Now it is time to start darktable-chart. The initial screen will look like this:

darktable-chart startup Figure 6: The darktable-chart screen after startup

Source Image

In the source image tab, select your PFM-exported RAW file as the image and your Color Checker chart file as the chart. Then fit the displayed grid to your image.

darktable-chart source image Figure 7: Selecting the source image in darktable-chart

Make sure that the inner rectangle of the grid is completely inside the color field; see Figure 8. If it is too big, you can use the size slider in the top right corner to adjust it.

darktable-chart source image with grid Figure 8: Placing the chart grid on the source image

Reference values

In the next tab, select color chart image as the mode, and as the reference image select the PFM-exported JPEG file which corresponds to the RAW file in the source image tab. Once it is opened you need to resize the grid again to match the Color Checker in your image. Adjust the size with the slider if necessary.

(If you want to fit against real color instead of the camera-produced JPEG, leave the mode as cie/it8 file and load the corresponding CIE file for your color chart.)

darktable-chart selecting reference values Figure 9: Selecting the reference value for profiling in darktable-chart

Process

In this tab you’re asked to select the patches containing the gray ramp. For the ‘X-Rite Color Checker Passport’ these are the ‘NEU1 .. NEU8’ fields. The number of final patches defines how many editable color patches the resulting style will use within the color look up table module. More patches give a better result but slow down the process. I think 32 is a good compromise.

Once you have done this, click on ‘process’ to start the calculation. The quality of the result, in terms of average delta E and maximum delta E, is displayed below the button. These numbers show how closely the resulting style, applied to the source image, will be able to match the reference values – the lower, the better.

Click on ‘export’ to save the darktable style.

darktable-chart export Figure 10: Processing the image in darktable-chart

In the export window you should already see a good name for the style. Add a leading zero for ISO values smaller than 1000 to get correct sorting in the styles module, for example: ILCE-7M3_ISO0100_JPG.dtstyle. The JPG in the name indicates that we fitted against a JPG file; if you fitted against a CIE file, remove it. If you applied a creative style to the JPG, add it at the end of the file name and style name.

Importing your dtstyle in darktable

To use your newly created style, you need to import it in the styles module in the lighttable. In the lighttable, open the module on the right and click on ‘import’. Select the dtstyle file you created to add it. Once imported, you can select a RAW file and then double-click on the style in the styles module to apply it.

Open the image in the darkroom and you will notice that the base curve has been disabled and a few modules have been enabled. The additional modules activated are normally: input color profile, color lookup table, and tone curve.

Verifying your profile

To verify the style you created, apply it to one of the RAW files you took for profiling. Then use the global color picker to compare a color in the RAW file with the style applied to the same spot in the JPEG file.

I also shoot a few normal pictures with nice colors, like flowers, in RAW and JPEG and then compare the result. Sometimes some colors can be off, which can indicate that your profiling pictures are not the best. This can be caused by clouds, glare, or the wrong time of day. Redo the shots until you get a result you’re satisfied with.

What does the result look like?

In the following screenshot (Figure 11) you can see the tone curve calculated by darktable-chart and darktable’s Sony base curve. The tone curve is based on the color LUT; it will look flat if you apply it without the LUT.

darktable base curve vs. tone curve Figure 11: Comparison of the default base curve with the new generated tone curve

Here is a comparison between the base curve for Sony on the left and the dtstyle (color LUT + tone curve) created with darktable-chart:

darktable comparison Figure 12: Side by side comparison on an image (left the standard base curve, right the calculated dtstyle)

Discussion

As always, the ways to get better colors are open for discussion, and this process can be improved in collaboration.

Feedback is very welcome.

Thanks to the darktable developers for such a great piece of software! :-)

April 25, 2018

Who is Producer X?

Astute observers of Seder-Masochism will notice one “Producer X” on the poster:

Poster_ProducerX

This is consistent with the film’s opening credits:

Moses_ProducerX_edit

and end credits:

Endcredit_ProducerX_edit

Why? Who? WTF?

I made Sita Sings the Blues almost entirely alone. That caused an unforeseen problem when it came time to send the film out into the world: I was usually the only person who could represent it at festivals. Other films have producers who aren’t also the director. Other films also have crews, staff, multiple executives, and money. As SSTB’s only executive, I couldn’t be everywhere at once. Often I couldn’t be anywhere at once, due to having a life that includes occasional crises. Sometimes, if I was lucky, I could send an actor like Reena Shah, or musician like Todd Michaelesen, or narrator like Aseem Chaabra, or sound designer Greg Sextro. But most of the time it meant there was no human being representing the film when it screened at film festivals.

I’m even more hermitic now, and made Seder-Masochism in splendid isolation in Central Illinois. This time I worked with no actors, narrators, or musicians. I did try recording some friends discussing Passover, but that experiment didn’t make it into the film. Greg Sextro is again doing the sound design, but we’re working remotely (he’s in New York).

I like working alone. But I don’t like going to film festivals alone. And sometimes, I can’t go at all.

Such as right now: in June, Seder-Masochism is having its world premiere at Annecy, but I have to stay in Illinois and get surgery. I have an orange-sized fibroid in my cervix, and finally get to have my uterus removed. (I’ve suffered a lifetime of debilitating periods, but was consistently instructed to just suck it up, buttercup; no doctor bothered looking for fibroids over the last 30 years in spite of my pain. But now that I’m almost menopausal, out it goes at last!)

Film festivals are “people” events, and having a human there helps bring attention to the film. The reason I want my film in festivals is to increase attention. The more attention, the better for the film, especially as a Free Culture project. So I want a producer with it at festivals.

Fortunately, Producer X has been with Seder-Masochism from the very beginning. After Sita’s festival years, I knew that credit would be built into my next film.

So who is Producer X?

Whoever I say it is.

She’ll see you in Annecy!


April 24, 2018

3 Students Accepted for Google Summer of Code 2018

Since 2006, Google has sponsored students to help out with Krita through Google Summer of Code. For 2018 we have three talented students working over the summer. Over the next few months they will be getting more familiar with the Krita code base and working on their projects, and they will be blogging about their experience and what they learn along the way. We will be sure to share their progress.

Here is a summary of their projects and what they hope to achieve.

Ivan Yossi – Optimize Krita Soft, Gaussian and Stamp brushes mask generation to use AVX with Vc Library

The Krita digital painting app relies on quick painting response to give a natural experience. A painted line is composed of thousands of images placed one after the other, so mask creation has to be performed extremely fast, as it is done thousands of times each second. If the process of applying the images to the canvas is not fast enough, the painting process is compromised and the enjoyment of painting is reduced.

Optimizing the mask creation can be done using the AVX instruction sets to apply transformations to whole vectors of data in one step. In this case, the data is the image component coordinates composing the mask. AVX programming can be done using the Vc optimization library, which manages low-level optimization adapted to the user’s processor features; however, the data must be laid out so it vectorizes effectively. Optimization has already been done on the Default brush mask engine, making it as much as 5 times faster than the current Gaussian mask engine.

The project aims to improve painting performance by implementing AVX-optimized code for the Circular Gaussian, Circular Soft, Rectangular Gaussian, Rectangular Soft, and Stamp masks.

Michael Zhou – A Swatches Docker for Krita

This project intends to create a swatches docker for Krita. It’s similar to the palette docker that’s already in Krita today, but it has the following advantages:

  • Users can easily add, delete, and drag and drop colors to give the palette a better visual pattern, making it easier to keep track of the colors.
  • Users can store a palette with a work so that the colors they use throughout a painting stay consistent.
  • It will have a more intuitive UI design.

Andrey Kamakin – Optimize multithreading in Krita’s Tile Manager

This project is about improving Krita’s overall performance by introducing a lock-free hash table for storing tiles and improving the locking described in the proposal.

Problem: In single-threaded execution there is no need to guard shared resources, because it is guaranteed that only one thread can access a resource at a time. In a multi-threaded program, however, resources must be shared between threads, and situations such as dirty reads must be excluded for correct program behavior. The simplest solution is to lock table operations so that only one thread at a time can read or write.

We wish all the students the best of luck this summer!

darktable 2.4.3 released

we’re proud to announce the third bugfix release for the 2.4 series of darktable, 2.4.3!

the github release is here: https://github.com/darktable-org/darktable/releases/tag/release-2.4.3.

as always, please don’t use the autogenerated tarball provided by github, but only our tar.xz. the checksums are:

$ sha256sum darktable-2.4.3.tar.xz
1dc5fc7bd142f4c74a5dd4706ac1dad772dfc7cd5538f033e60e3a08cfed03d3 darktable-2.4.3.tar.xz
$ sha256sum darktable-2.4.3.1.dmg
290ed5473e3125a9630a235a4a33ad9c9f3718f4a10332fe4fe7ae9f735c7fa9 darktable-2.4.3.1.dmg
$ sha256sum darktable-2.4.3-win64.exe
a34361924b4d7d3aa9cb4ba7e5aeef928c674822c1ea36603b4ce5993678b2fa darktable-2.4.3-win64.exe
$ sha256sum darktable-2.4.3-win64.zip
3e14579ab0da011a422cd6b95ec409565d34dd8f7084902af2af28496aead5af darktable-2.4.3-win64.zip
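Checksums like the ones above can be verified automatically with sha256sum -c against a checksum file. A self-contained sketch of the mechanism (the file name and contents here are illustrative, not the real tarball; with a real download you would put the published checksum line into the checksum file):

```shell
# create a stand-in file and a checksum list, then verify it
printf 'example data\n' > darktable-example.tar.xz
sha256sum darktable-example.tar.xz > checksums.txt
sha256sum -c checksums.txt
```

On success sha256sum prints the file name followed by "OK" and exits 0; on a corrupted download it reports the file as FAILED and exits non-zero.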

when updating from the currently stable 2.2.x series, please bear in mind that your edits will be preserved during this process, but it will not be possible to downgrade from 2.4 to 2.2.x any more.

Important note: to make sure that darktable can keep on supporting the raw file format for your camera, please read this post on how/what raw samples you can contribute to ensure that we have the full raw sample set for your camera under CC0 license!

and the changelog as compared to 2.4.2 can be found below.

New Features

  • Support for tags and ratings in the watermark module
  • Read Xmp.exif.DateTimeOriginal from XMP sidecars
  • Build and install noise tools
  • Add a script for converting a .dtstyle to an .xmp

Bugfixes

  • Don’t create unneeded folders during export in some cases
  • When collecting by tags, don’t select subtags
  • Fix language selection on OSX
  • Fix a crash while tethering

Camera support, compared to 2.4.2

Warning: support for Nikon NEF ‘lossy after split’ raws was unintentionally broken due to the lack of such samples. Please see this post for more details. If you have affected raws, please contribute samples!

Base Support

  • Fujifilm X-H1 (compressed)
  • Kodak EOS DCS 3
  • Olympus E-PL9
  • Panasonic DC-GX9 (4:3)
  • Sony DSC-RX1RM2
  • Sony ILCE-7M3

White Balance Presets

  • Sony ILCE-7M3

Noise Profiles

  • Canon PowerShot G1 X Mark III
  • Nikon D7500
  • Sony ILCE-7M3

Blender at FMX 2018

FMX 2018 (Stuttgart, April 24-27) is one of Europe’s most influential conferences dedicated to Digital Visual Arts, Technologies, and Business. This year Blender is taking part in 3 events, featuring Ton Roosendaal and artists from the Blender studio crew.

Blender at FMX 2018

Presentations and Panels

Blender will be represented at the following events on April 26th:

Come and see us!

If you are attending FMX and would like to hang out on Thursday, get in touch with francesco@blender.org or reach out to us directly on social media!

April 20, 2018

UEFI booting and RAID1

I spent some time yesterday building out a UEFI server that didn’t have on-board hardware RAID for its system drives. In these situations, I always use Linux’s md RAID1 for the root filesystem (and/or /boot). This worked well for BIOS booting since BIOS just transfers control blindly to the MBR of whatever disk it sees (modulo finding a “bootable partition” flag, etc, etc). This means that BIOS doesn’t really care what’s on the drive, it’ll hand over control to the GRUB code in the MBR.

With UEFI, the boot firmware is actually examining the GPT partition table, looking for the partition marked with the “EFI System Partition” (ESP) UUID. Then it looks for a FAT32 filesystem there, and does more things like looking at NVRAM boot entries, or just running BOOT/EFI/BOOTX64.EFI from the FAT32. Under Linux, this .EFI code is either GRUB itself, or Shim which loads GRUB.

So, if I want RAID1 for my root filesystem, that’s fine (GRUB will read md, LVM, etc), but how do I handle /boot/efi (the UEFI ESP)? Every answer I found to this question said: “oh, just manually make an ESP on each drive in your RAID, copy the files around, add a separate NVRAM entry (with efibootmgr) for each drive, and you’re fine!” I did not like this one bit, since it means the copies can get out of sync.
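For reference, that commonly suggested manual approach sketched as shell (paths and device names are illustrative; the efibootmgr calls are left as comments since they only make sense on a real UEFI system):

```shell
# keep a second ESP in sync by copying files around by hand
mkdir -p /tmp/esp1/EFI/debian /tmp/esp2
cp -a /tmp/esp1/EFI /tmp/esp2/
# ...then add a separate NVRAM entry per drive, e.g.:
#   efibootmgr -c -d /dev/sda -p 1 -L "debian (sda)" -l '\EFI\debian\grubx64.efi'
#   efibootmgr -c -d /dev/sdb -p 1 -L "debian (sdb)" -l '\EFI\debian\grubx64.efi'
```

Nothing in this setup keeps the two copies from drifting apart, which is exactly the problem.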

The current implementation of Linux’s md RAID puts metadata at the front of a partition. This solves more problems than it creates, but it means the RAID isn’t “invisible” to something that doesn’t know about the metadata. In fact, mdadm warns about this pretty loudly:

# mdadm --create /dev/md0 --level 1 --raid-disks 2 /dev/sda1 /dev/sdb1
mdadm: Note: this array has metadata at the start and
    may not be suitable as a boot device.  If you plan to
    store '/boot' on this device please ensure that
    your boot-loader understands md/v1.x metadata, or use
    --metadata=0.90

Reading from the mdadm man page:

-e, --metadata=
    ...
    1, 1.0, 1.1, 1.2 default
        Use the new version-1 format superblock.  This has fewer
        restrictions.  It can easily be moved between hosts with
        different endian-ness, and a recovery operation can be
        checkpointed and restarted.  The different sub-versions store
        the superblock at different locations on the device, either at
        the end (for 1.0), at the start (for 1.1) or 4K from the start
        (for 1.2).  "1" is equivalent to "1.2" (the commonly preferred
        1.x format).  "default" is equivalent to "1.2".

First we toss a FAT32 on the RAID (mkfs.fat -F32 /dev/md0), and looking at the results, the first 4K is entirely zeros, and file doesn’t see a filesystem:

# dd if=/dev/sda1 bs=1K count=5 status=none | hexdump -C
00000000  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|
*
00001000  fc 4e 2b a9 01 00 00 00  00 00 00 00 00 00 00 00  |.N+.............|
...
# file -s /dev/sda1
/dev/sda1: Linux Software RAID version 1.2 ...

So, instead, we’ll use --metadata 1.0 to put the RAID metadata at the end:

# mdadm --create /dev/md0 --level 1 --raid-disks 2 --metadata 1.0 /dev/sda1 /dev/sdb1
...
# mkfs.fat -F32 /dev/md0
# dd if=/dev/sda1 bs=1 skip=80 count=16 status=none | xxd
00000000: 2020 4641 5433 3220 2020 0e1f be77 7cac    FAT32   ...w|.
# file -s /dev/sda1
/dev/sda1: ... FAT (32 bit)

Now we have a visible FAT32 filesystem on the ESP. UEFI should be able to boot whatever disk hasn’t failed, and grub-install will write to the RAID mounted at /boot/efi.

However, we’re left with a new problem: on (at least) Debian and Ubuntu, grub-install attempts to run efibootmgr to record which disk UEFI should boot from. This fails, though, since it expects a single disk, not a RAID set. In fact, it returns nothing, and tries to run efibootmgr with an empty -d argument:

Installing for x86_64-efi platform.
efibootmgr: option requires an argument -- 'd'
...
grub-install: error: efibootmgr failed to register the boot entry: Operation not permitted.
Failed: grub-install --target=x86_64-efi
WARNING: Bootloader is not properly installed, system may not be bootable

Luckily my UEFI boots without NVRAM entries, and I can disable the NVRAM writing via the “Update NVRAM variables to automatically boot into Debian?” debconf prompt when running: dpkg-reconfigure -p low grub-efi-amd64
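The same answer can also be preseeded non-interactively; a sketch, assuming the debconf key is grub2/update_nvram (verify on your system with debconf-show grub-efi-amd64):

```shell
# tell grub-efi-amd64 not to touch NVRAM on install/upgrade
echo "grub-efi-amd64 grub2/update_nvram boolean false" | debconf-set-selections
```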

So, now my system will boot with both or either drive present, and updates from Linux to /boot/efi are visible on all RAID members at boot-time. HOWEVER there is one nasty risk with this setup: if UEFI writes anything to one of the drives (which this firmware did when it wrote out a “boot variable cache” file), it may lead to corrupted results once Linux mounts the RAID (since the member drives won’t have identical block-level copies of the FAT32 any more).

To deal with this “external write” situation, I see some solutions:

  • Make the partition read-only when not under Linux. (I don’t think this is a thing.)
  • Add higher-level knowledge of the root-filesystem RAID configuration so that a collection of filesystems is kept manually synchronized, instead of doing block-level RAID. (Seems like a lot of work and would need a redesign of /boot/efi into something like /boot/efi/booted, /boot/efi/spare1, /boot/efi/spare2, etc.)
  • Prefer one RAID member’s copy of /boot/efi and rebuild the RAID at every boot. If there were no external writes, there’s no issue. (Though what’s really the right way to pick the copy to prefer?)

Since mdadm has the “--update=resync” assembly option, I can actually do the last option. This required updating /etc/mdadm/mdadm.conf to add <ignore> on the RAID’s ARRAY line to keep it from auto-starting:

ARRAY <ignore> metadata=1.0 UUID=123...

(Since it’s ignored, I’ve chosen /dev/md100 for the manual assembly below.) Then I added the noauto option to the /boot/efi entry in /etc/fstab:

/dev/md100 /boot/efi vfat noauto,defaults 0 0

And finally I added a systemd oneshot service that assembles the RAID with resync and mounts it:

[Unit]
Description=Resync /boot/efi RAID
DefaultDependencies=no
After=local-fs.target

[Service]
Type=oneshot
ExecStart=/sbin/mdadm -A /dev/md100 --uuid=123... --update=resync
ExecStart=/bin/mount /boot/efi
RemainAfterExit=yes

[Install]
WantedBy=sysinit.target

(And don’t forget to run “update-initramfs -u” so the initramfs has an updated copy of /etc/mdadm/mdadm.conf.)

If mdadm.conf supported an “update=” option for ARRAY lines, this would have been trivial. Looking at the source, though, that kind of change doesn’t look easy. I can dream!

And if I wanted to keep a “pristine” version of /boot/efi that UEFI couldn’t update I could rearrange things more dramatically to keep the primary RAID member as a loopback device on a file in the root filesystem (e.g. /boot/efi.img). This would make all external changes in the real ESPs disappear after resync. Something like:

# truncate --size 512M /boot/efi.img
# losetup -f --show /boot/efi.img
/dev/loop0
# mdadm --create /dev/md100 --level 1 --raid-disks 3 --metadata 1.0 /dev/loop0 /dev/sda1 /dev/sdb1

And at boot just rebuild it from /dev/loop0, though I’m not sure how to “prefer” that partition…

© 2018, Kees Cook. This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 License.
Creative Commons License

April 16, 2018

GIMP 2.10.0 Release Candidate 2 Released

Hot on the heels of the first release candidate, we’re happy to have a second RC ready! In the last 3 weeks since releasing GIMP 2.10.0-RC1, we’ve fixed 44 bugs and introduced important performance improvements.

As usual, for a complete list of changes please see NEWS.

Optimizations and multi-threading for painting and display

A major regression of GIMP 2.10, compared to 2.8, was slower painting. To address this issue, several contributors (Ell, Jehan, Massimo Valentini, Øyvind Kolås…) introduced improvements to the GIMP core, as well as to the GEGL and babl libraries. Additionally, Elle Stone and Jose Americo Gobbo contributed performance testing.

The speed problems pushed Ell to implement multi-threading within GIMP, so that painting and display are now run on separate threads, thus greatly speeding up feedback of the graphical interface.

The new parallelization framework is not painting-specific and could be used for improving other parts of GIMP.

Themes rewritten

Since the development version 2.9.4, we had new themes shipped with GIMP, and in particular dark themes (as is now common for creative applications). Unfortunately they were unmaintained, bugs kept piling up, and the user experience wasn’t exactly stellar.

GIMP Themes Light, Gray, and Dark themes.

Our long-time contributor Ville Pätsi took up the task of creating brand new themes without any of the usability issues and glitches of previous ones. While cleaning up, only the Gray theme has been kept, whereas Light and Dark were rewritten from scratch. Darker and Lighter themes have been removed (they won’t likely reappear unless someone decides to rewrite and contribute them as well, and unless this person stays around for maintenance).

Gradient tool improved to work in linear color space

Thanks to Michael Natterer and Øyvind Kolås, the gradient tool can now work in either perceptual RGB, linear RGB, or CIE LAB color space at your preference.

Gradient tool in linear space Gradient tool in perceptual and linear spaces

We also used the opportunity to rename the tool, which had been called the “Blend tool” until now, even though barely anyone used that name. “Gradient tool” is much more understandable.

New on-canvas control for 3D rotation

A new widget for on-canvas interaction of 3D rotation (yaw, pitch, roll) has been implemented by Ell. This new widget is currently only used for the Panorama Projection filter.

GEGL Panorama View Panorama projection filter (image: Hellbrunn Banquet Hall by Matthias Kabel (cba))

Improvements in handling masks, channels, and selections

GIMP doesn’t do any gamma conversion when converting between selection, channels, and masks anymore. This makes the selection -> channel -> selection roundtrips correct and predictable.

Additionally, for all >8-bit per channel images, GIMP now uses linear color space for channels. This and many other fixes in the new release were done by Michael Natterer.

Translations

8 translations have been updated between the two release candidates. We are very close to releasing the final version of GIMP 2.10.0. If you plan to update a translation into your language and be in time for the release, we recommend starting now.

GEGL changes

Most of the changes in GEGL since the March release are performance improvements and micro-optimizations in display paths. Additionally, avoiding incorrect gamma/ungamma correction of alpha in u8 formats provides a small 2-3% performance boost.

For further work on mipmap support, GEGL now keeps track of valid/invalid areas at a granularity smaller than tiles in mipmaps.

The Panorama Projection operation got a reverse transform, which permits using GIMP for retouching the zenith, nadir, or other arbitrary gaze directions in equirectangular (also known as 360×180) panoramas.

Finally, abyss policy support in the base class for scale operations now makes it possible to achieve hard edges on rescaled buffers.

What’s Next

We are now 7 blocker bugs away from the final release.

On your marks, get set…

Interview with Runend

Could you tell us something about yourself?

Hi! I’m Faqih Muhammad and my personal brand name is runend. I’m 22 years old and live in Medan in Indonesia. I love film animation, concept art, game making, 3d art, and everything illustration.

Do you paint professionally, as a hobby artist, or both?

It can be said that I’m a hobbyist now, but I keep learning, practicing, experimenting to find new forms and new styles of self-expression, all to improve my skills and to be a professional artist in the near future!

What genre(s) do you work in?

So far I’ve made scenery background with character as a base to learn something. Starting from the basic we can make something more interesting, but still it was quite difficult for me.

Whose work inspires you most — who are your role models as an artist?

Hhmmm, there are many artists who give me inspiration. Mainly I follow Jeremy Fenske, Atey Ghailan and Ruan Jia. I won’t forget to mention masters like Rizal Abdillah, Agung Oka and Yogei, as well as my friends and mentors.

How and when did you get to try digital painting for the first time?

It was in 2014 using photoshop, which I used to create photo-manipulations with. In 2015 I finally bought my wacom intuos manga tablet and could finally begin learning about digital painting.

What makes you choose digital over traditional painting?

Digital painting has many features that make it easy to create art. Of course there’s no need to buy art supplies: with a computer, pen and tablet you can make art.

Lately I’ve been learning traditional painting using poster color, and that makes me feel both happy and challenged.

How did you find out about Krita?

I used Google to search for “free digital painting software” and I found Krita :D.

What was your first impression?

I was like “WOW”, grateful to find software as good as this.

What do you love about Krita?

I have tried some of the features, especially the brush engine, UI/UX, layering, animation tools, I love all of them! And of course it’s free and open source.

What do you think needs improvement in Krita? Is there anything that really annoys you?

Probably the filter layer and filter mask performance. Those run very slowly, I think it would be better if they ran more smoothly and more realtime.

What sets Krita apart from the other tools that you use?

Free open source software that runs cross-platform, no need to spend more. If you get a job or a paid project with Krita, there is a donate button to make Krita better still.

If you had to pick one favourite of all your work done in Krita so far, what would it be, and why?

I love all my work, sometimes some paintings look inconsistent, then I will make it better.

What techniques and brushes did you use in it?

Before starting I think about what I want to create like situation and color mood. If that’s difficult from only imagination I usually use some reference.

I first make a sketch, basic color, shading, texture, refine the painting, and check the value using fill black layer blending mode in color.

In Krita 4.0 beta there are many new brush presets, I think that’s enough to make awesome art.

Where can people see more of your work?

Artstation: https://www.artstation.com/runend
Twitter: https://twitter.com/runendarts
Facebook: https://web.facebook.com/runendartworks
Instagram: https://www.instagram.com/runend.artworks/

Anything else you’d like to share?

Krita is an amazing program, I’d like to thank the Krita team. I wish Krita a good future, I hope Krita can be better known to the people of Indonesia, for instance on campus, schools, the creative industry etcetera.

How to create camera noise profiles for darktable


How to create camera noise profiles for darktable

An easy way to create correct profiling pictures

Noise in digital images is similar to film grain in analogue photography. In digital cameras, noise is created either by the amplification of the signal or by heat produced by the sensor. It appears as random, colored speckles on an otherwise smooth surface and can significantly degrade image quality.

Noise is always present, and if it gets too pronounced, it detracts from the image and needs to be mitigated. Removing noise, however, can decrease image quality or sharpness. There are different algorithms to reduce noise, but the best option is to have per-camera profiles that describe the noise pattern a camera model produces.

Noise reduction is an image restoration process: you want to remove the digital artefacts from the image in such a way that the original image is preserved. These artefacts can be a kind of grain (luminance noise) or colorful, disturbing dots (chroma noise). Noise can add to a picture or detract from it; if it is disturbing, we want to remove it. The following pictures show a noisy and a denoised version of the same image:

Noisy cup Denoised cup

To get the best noise reduction, we need to generate noise profiles for each ISO value for a camera.

Creating the pictures for noise profling

For every ISO value your camera has, you have to take a picture. The pictures need to be exposed in a particular way to gather the information correctly: out of focus, with a widespread histogram like in the following image:

Histogram

We need overexposed and underexposed areas, but most importantly the grey areas in between. These areas contain the information we are looking for.

Let’s go through the noise profile generation step by step. To make capturing the required photos easier, we will first build a stencil.

Stencil for DSLM/DSLR lenses

You need some thick black paper or cardboard; no light should shine through it! First, use the lens hood to get the size: the hood moves the paper a bit away from the lens and gives us something to attach it to. Then we need to create a punch card. For wide angle lenses you need a close raster, and for longer focal lengths a wider raster. It is harder to create one for compact cameras with small lenses (see below).

Find the middle and mark the size of the lens hood:

Stencil Step 1

If you have the size, draw a grid on the paper:

Stencil Step 2

Once you have done that you need to choose a punch card raster for your focal length. I use a 16mm wide angle lens on a full frame body, so I choose a raster with a lot of holes:

Stencil Step 3

Untested: For a 50mm or 85mm lens I think you should start with 5 holes in the middle created just with a needle. Put your stencil on the lens hood and check. Then you know if you need bigger holes and maybe how much. Please share your findings in the comments below!

Stencil for compact cameras

I guess you would create a stencil, like for bigger lenses, but create a funnel to the camera. Contributions and ideas are welcome!

Taking the pictures

Wait for a cloudy day with thick clouds and no sun to take the pictures. The problem is shutter speed: it is likely that you’ll hit your camera’s limit. My camera has 37 ISO values (including extended ISO), so I need to start with a 0.6 second exposure in order to take the last picture at my camera’s limit of 1/8000 of a second. A darker day therefore helps, because you can start with a slower shutter speed.
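The arithmetic behind this can be sketched: each full stop of ISO halves the shutter time needed for the same exposure, and 37 ISO values in 1/3-stop steps span 12 full stops (the numbers here are illustrative, matching the 0.6 s starting point above):

```shell
# starting at 0.6 s, halve the shutter time once per full ISO stop;
# after 12 stops we are at 0.6/4096 s (about 1/6800 s), still within
# the camera's fastest speed of 1/8000 s
awk 'BEGIN {
  t = 0.6
  for (stop = 0; stop <= 12; stop++) { printf "stop %2d: %.6f s\n", stop, t; t /= 2 }
}'
```

If the final value came out faster than your camera’s fastest shutter speed, you would need a darker day or a slower starting exposure.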

Use a tripod and point the camera to the sky, attach the lens hood and put the punch card on it. Better make sure that all filters are removed, so we don’t get any strange artefacts. In the end the setup should look like this:

Punch card on camera

Choose the fastest aperture available on your lens (e.g. f/2.8 or even faster), change the camera to manual focus, and focus on infinity. Take the shot! The result should look like this:

punch card picture

The holes will overexpose the picture, but you also need an underexposed area. So I started by putting most of the dark areas in the middle of the histogram, then moved the exposure toward the black (left) side of the histogram until the first values started to clip. It is important not to clip too much, as we are mostly interested in the grey values between the overexposed and underexposed areas.

Once you’re done taking the pictures it is time to move to the computer.

Creating the noise profiles

STEP 1

Run

/usr/lib/darktable/tools/darktable-gen-noiseprofile --help

If this gives you the help text of the tool, continue with STEP 2, otherwise go to STEP 1a.

STEP 1a

Your darktable installation doesn’t offer the noise tools, so you need to compile them yourself. Before you start, make sure that you have the following dependencies installed on your system:

  • git
  • gcc
  • make
  • gnuplot
  • convert (ImageMagick)
  • darktable-cli

Get the darktable source code using git:

git clone https://github.com/darktable-org/darktable.git

Now change to the source and build the tools for creating noise profiles using:

cd darktable
mkdir build
cd build
cmake -DCMAKE_INSTALL_PREFIX=/opt/darktable -DBUILD_NOISE_TOOLS=ON ..
cd tools/noise
make
sudo make install

STEP 2

Download the pictures from your camera and change to the directory on the commandline:

cd /path/to/noise_pictures

and run the following command:

/usr/lib/darktable/tools/darktable-gen-noiseprofile -d $(pwd)

or if you had to download and build the source, run:

/opt/darktable_source/lib/tools/darktable-gen-noiseprofile -d $(pwd)

This will automatically do everything for you. Note that this can take quite some time to finish; it took 15 to 20 minutes on my machine. If a picture is not shot correctly, the tool will tell you the image name and you will have to recapture the picture at that ISO.

The tool will tell you, once completed, how to test and verify the noise profiles you created.

Once the tool has finished, you end up with a tarball you can send to darktable for inclusion. You can open a bug at:

https://redmine.darktable.org/

The interesting files are the presets.json file (darktable input) and, for the developers, the noise_result.pdf file. You can find an example PDF here. It is a collection of diagrams showing the histogram for each picture and the results of the calculations.

A detailed explanation of the diagrams and the math behind it can be found in the original noise profile tutorial by Johannes Hanika.

For discussion

I’ve created the stencil above to make it easier to create noise profiles. However, I’ve tried different ways to create the profiles, and here is one that seemed like a good idea but failed for low ISO values (ISO <= 320). We are in the open source world, and I think it is important to share failures too: others may have an idea to improve it, or at least learn from it.

For a simpler approach than the one described above, I created a gradient from black to white. Then I attached some black cardboard to the monitor to get some real black. Remember, you need an underexposed area, and the monitor is not able to output real black, as it is backlit.

In the end my setup looked like this:

Gradient on Monitor

I turned off the lights and took the shots. However, the results for ISO values at or below ISO 320 were not good. All other ISO values looked fine.

If you’re interested in the results, you can find them here:

Please also share pictures of working stencils you created.

Feedback is very much welcome in the comments below!

April 15, 2018

Hero – Blender Grease Pencil showcase

After a series of successful short film productions focused on high-end 3D computer animation pipelines, the Blender team presents a three-minute short film showcasing Blender’s upcoming Grease Pencil 2.0.

Grease Pencil means 2D animation tools within a full 3D pipeline. In Blender. In Open Source. Free for everyone!

The original Grease Pencil technology has been in Blender for many years now, and it has already caught the attention of story artists in the animation industry worldwide. The upcoming Grease Pencil is meant to push the boundaries and allow feature-quality animation production in Blender 2.8.

The Hero animation showcase is the fruit of a collaboration between Blender developers and a team of artists based in Barcelona, Spain, led by Daniel M. Lara. This is the 6th short film funded by the Blender Cloud, confirming once more the value of a financial model that combines the crowdfunding of artistic and technical goals through the creation of Open Content.

The inclusion of Grease Pencil in Blender for mainstream release is part of the Blender 2.8 Code Quest, an outstanding development effort that is currently happening at the Blender headquarters in Amsterdam. The first beta of Blender 2.8 will be available in the second half of 2018.

Press Contact:
Francesco Siddi, Producer
francesco@blender.org

April 13, 2018

security things in Linux v4.16

Previously: v4.15.

Linux kernel v4.16 was released last week. I really should write these posts in advance, otherwise I get distracted by the merge window. Regardless, here are some of the security things I think are interesting:

KPTI on arm64

Will Deacon, Catalin Marinas, and several other folks brought Kernel Page Table Isolation (via CONFIG_UNMAP_KERNEL_AT_EL0) to arm64. While most ARMv8+ CPUs were not vulnerable to the primary Meltdown flaw, the Cortex-A75 does need KPTI to be safe from memory content leaks. It’s worth noting, though, that KPTI does protect other ARMv8+ CPU models from having privileged register contents exposed. So, whatever your threat model, it’s very nice to have this clean isolation between kernel and userspace page tables for all ARMv8+ CPUs.

hardened usercopy whitelisting

While whole-object bounds checking was implemented in CONFIG_HARDENED_USERCOPY already, David Windsor and I finished another part of the porting work of grsecurity’s PAX_USERCOPY protection: usercopy whitelisting. This further tightens the scope of slab allocations that can be copied to/from userspace. Now, instead of allowing all objects in slab memory to be copied, only the whitelisted areas (where a subsystem has specifically marked the memory region allowed) can be copied. For example, only the auxv array out of the larger mm_struct.

As mentioned in the first commit from the series, this reduces the scope of slab memory that could be copied out of the kernel in the face of a bug to under 15%. As can be seen, one area of work remaining is the kmalloc regions. Those are regularly used for copying things in and out of userspace, but they’re also used for small simple allocations that aren’t meant to be exposed to userspace. Working to separate these kmalloc users needs some careful auditing.

Total Slab Memory:      48074720
Usercopyable Memory:     6367532   13.2%

task_struct               0.2%        4480/1630720
RAW                       0.3%         300/96000
RAWv6                     2.1%        1408/64768
ext4_inode_cache          3.0%      269760/8740224
dentry                   11.1%      585984/5273856
mm_struct                29.1%       54912/188448
kmalloc-8               100.0%       24576/24576
kmalloc-16              100.0%       28672/28672
kmalloc-32              100.0%       81920/81920
kmalloc-192             100.0%       96768/96768
kmalloc-128             100.0%      143360/143360
names_cache             100.0%      163840/163840
kmalloc-64              100.0%      167936/167936
kmalloc-256             100.0%      339968/339968
kmalloc-512             100.0%      350720/350720
kmalloc-96              100.0%      455616/455616
kmalloc-8192            100.0%      655360/655360
kmalloc-1024            100.0%      812032/812032
kmalloc-4096            100.0%      819200/819200
kmalloc-2048            100.0%     1310720/1310720
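As a quick sanity check, the headline percentage follows directly from the two byte totals in the report:

```shell
# Whitelisted ("usercopyable") bytes as a share of total slab bytes,
# using the two totals from the report above.
awk 'BEGIN { printf "%.1f%%\n", 100 * 6367532 / 48074720 }'
# → 13.2%
```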

This series took quite a while to land (you can see David’s original patch date as back in June of last year). Partly this was due to having to spend a lot of time researching the code paths so that each whitelist could be explained for commit logs, partly due to making various adjustments from maintainer feedback, and partly due to the short merge window in v4.15 (when it was originally proposed for merging) combined with some last-minute glitches that made Linus nervous. After baking in linux-next for almost two full development cycles, it finally landed. (Though be sure to disable CONFIG_HARDENED_USERCOPY_FALLBACK to gain enforcement of the whitelists — by default it only warns and falls back to the full-object checking.)

automatic stack-protector

While the stack-protector features of the kernel have existed for quite some time, it has never been enabled by default. This was mainly due to needing to evaluate compiler support for the feature, and Kconfig didn’t have a way to check the compiler features before offering CONFIG_* options. As a defense technology, the stack protector is pretty mature. Having it on by default would have greatly reduced the impact of things like the BlueBorne attack (CVE-2017-1000251), as fewer systems would have lacked the defense.

After spending quite a bit of time fighting with ancient compiler versions (*cough*GCC 4.4.4*cough*), I landed CONFIG_CC_STACKPROTECTOR_AUTO, which is on by default and tries to use the stack protector if it is available. The implementation of the solution, however, did not please Linus, though he allowed it to be merged. In the future, Kconfig will gain the knowledge to make better decisions, which lets the kernel expose the availability of the (now default) stack protector directly in Kconfig, rather than depending on rather ugly Makefile hacks.
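The compiler probe at the heart of this is simple; here is a rough userspace sketch of the kind of check the kernel build performs, assuming `cc` is your system compiler:

```shell
# Ask the compiler whether it accepts the stack-protector flag by
# compiling a trivial translation unit with it (sketch).
tmp=$(mktemp -d)
echo 'int main(void) { return 0; }' > "$tmp/probe.c"
if cc -fstack-protector-strong -c "$tmp/probe.c" -o "$tmp/probe.o" 2>/dev/null; then
    echo "stack protector: supported"
else
    echo "stack protector: unsupported"
fi
rm -rf "$tmp"
```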

execute-only memory for PowerPC

Similar to the Protection Keys (pkeys) hardware support that landed in v4.6 for x86, Ram Pai landed pkeys support for Power7/8/9. This should expand the scope of what’s possible in the dynamic loader to avoid having arbitrary read flaws allow an exploit to read out all of executable memory in order to find ROP gadgets.

That’s it for now; let me know if you think I should add anything! The v4.17 merge window is open. :)

Edit: added details on ARM register leaks, thanks to Daniel Micay.

Edit: added section on protection keys for POWER, thanks to Florian Weimer.

© 2018, Kees Cook. This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 License.
Creative Commons License

April 11, 2018

Krita 4.0.1 Released

Today the Krita team releases Krita 4.0.1, a bug fix release of Krita 4.0.0. We fixed more than fifty bugs since the Krita 4.0.0 release! See below for the full list of fixed issues. Translations work again with the appimage and the macOS build. Please note that:

  • The reference image docker has been removed. Krita 4.1.0 will have a new reference images tool. You can test the code-in-progress by downloading the nightly builds for Windows and Linux
  • There is no scripting available on macOS. We had it almost working when the one MacBook the project owns received a broken update, which undid all our work. G’Mic is also not available on macOS.
  • The lock and collapse icons on the docker titlebars are removed: too many people were too confused by them.

If you find a new issue, please consult this draft document on reporting bugs, before reporting an issue. After the 4.0 release, more than 150 bugs were reported, but most of those reports were duplicates, requests for help or just not useful at all. This puts a heavy strain on the developers, and makes it harder to actually find time to improve Krita. Please be helpful!

Improvements

Windows

  • Patch QSaveFile so working on images stored in synchronized folders (dropbox, google drive) is safe

Shortcuts

  • Fix duplicate shortcut on Photoshop scheme
  • Alphabetize shortcut to make the diffs easier to read when doing changes

UI

  • Make the triangles larger on the categorized list view so they are more visible
  • Disable the macro recorder and playback plugin
  • Remove the docker titlebar lock and collapse buttons. BUG:385238 BUG:392235
  • Set the pixel grid to show up at 2400% zoom by default. BUG:392161
  • Improve the layout of the palette docker
  • Disable drag and drop in the palette view: moving swatches around did not actually change the palette. BUG:392349
  • Fix selecting the last used template in the new document dialog when using appimages. BUG:391973
  • Fix canvas lockup when using Guides at the top of the image. BUG:391098
  • Do not reset redo history when changing layer’s visibility. BUG:390581
  • Fix shifting the pan position after using the popup widget rotation circle. BUG:391921
  • Fix height map to normal map in wraparound mode. BUG:392191

Text

  • Make it possible to edit the font size in the svg text tool. BUG:392714
  • Let Text Shape have empty lines. BUG:392471
  • Fix updates of undo/redo actions. BUG:392257
  • Implement “Convert text into path” function. BUG:391294
  • Fix a crash in SvgTextTool when deleting hovered/selected shape. BUG:392128
  • Make the text editor window application modal. BUG:392248
  • Fix alignment of RTL text. BUG:392065 BUG:392064
  • Fix painting parts of text outside the bounding box on the canvas. BUG:392068
  • Fix rendering of the text with relative offsets. BUG:391160
  • Fix crash when transforming text with Transform Tool twice. BUG:392127

Animation

  • Fix handling of keyframes when saving. BUG:392233 BUG:392559
  • Keep show in timeline and onion skin options when merging layers. BUG:377358
  • Keep keyframe color labels when merging layers. BUG:388913
  • Fix exporting audio with the MKV and OGV video formats.

File handling

  • Do not load/save layer channel flags anymore (channel flags were removed from the UI in Krita 2.9). BUG:392504
  • Fix saving of Transform Mask into rendered formats. BUG:392229
  • Fix reporting errors when loading fails. BUG:392413
  • Fix a memory leak when loading file layers
  • Fix loading a krita file with a loop in the clone layers setup. BUG:384587
  • Fix showing a wait cursor after loading a PNG image. BUG:392249
  • Make bundle loading feedback a bit clearer regarding the bundle.

Vector bugs

  • Fix crash when creating a vector selection. BUG:391292
  • Fix crash when right-clicking on the gradient fill stop opacity input box of a vector shape. BUG:392726
  • Fix setting the aspect ratio of vector shapes. BUG:391911
  • Fix a crash if a certain shape is not valid when writing SVG. BUG:392240
  • Fix hidden stroke and fill widgets not to track current shape selection BUG:391990

Painting and brush engines

  • Fix crash when creating a new spray preset. BUG:392869
  • Fix rounding of the pressure curve
  • Fix painting with colorsmudge brushes on transparency masks. BUG:391268
  • Fix uninitialized distance info for KisHairyPaintOp BUG:391940
  • Fix rounding of intermediate pressure values
  • Fix the colorsmudge brush when painting in wraparound mode. BUG:392312

Layers and masks

  • Fix flattening of group layers with Inherit Alpha property set. BUG:390095
  • Fix a crash when using a transformation mask on a file layer. BUG:391270
  • Improve performance of the transformation mask

Download

Windows

Note for Windows users: if you encounter crashes, please follow these instructions to use the debug symbols so we can figure out where Krita crashes.

Linux

(If, for some reason, Firefox thinks it needs to load this as text: to download, right-click on the link.)

When it is updated, you can also use the Krita Lime PPA to install Krita 4.0.1 on Ubuntu and derivatives. We are working on an updated snap.

OSX

Note: the gmic-qt and python plugins are not available on OSX.

Source code

md5sum

For all downloads:

Key

The Linux appimage and the source tarball are signed. You can retrieve the public key over https here: 0x58b9596c722ea3bd.asc. The signatures are here (filenames ending in .sig).

Support Krita

Krita is a free and open source project. Please consider supporting the project with donations or by buying training videos or the artbook! With your support, we can keep the core team working on Krita full-time.

April 10, 2018

FreeCAD 0.17 is released

Hello everybody, Finally, after two years of intense work, the FreeCAD community is happy and proud to announce the release 0.17 of FreeCAD. You can grab it at the usual places, either via the Downloads page or directly via the github release page. There are installers for Windows and Mac, and an AppImage for Linux. Our...

April 05, 2018

Cave Creek Hiking and Birding Trip

A week ago I got back from a trip to the Chiricahua mountains of southern Arizona, specifically Cave Creek on the eastern side of the range. The trip was theoretically a hiking trip, but it was also for birding and wildlife watching -- southern Arizona is near the Mexican border and gets a lot of birds and other animals not seen in the rest of the US -- and an excuse to visit a friend who lives near there.

Although it's close enough that it could be driven in one fairly long day, we took a roundabout 2-day route so we could explore some other areas along the way that we'd been curious about.

First, we wanted to take a look at the White Mesa Bike Trails northwest of Albuquerque, near the Ojito Wilderness. We'll be back at some point with bikes, but we wanted to get a general idea of the country and terrain. The Ojito, too, looks like it might be worth a hiking trip, though it's rather poorly signed: we saw several kiosks with maps where the "YOU ARE HERE" was clearly completely misplaced. Still, how can you not want to go back to a place where the two main trails are named Seismosaurus and Hoodoo?

[Cabezon] The route past the Ojito also led past Cabezon Peak, a volcanic neck we've seen from a long distance away and wanted to see closer. It's apparently possible to climb it but we're told the top part is fairly technical, more than just a hike.

Finally, we went up and over Mt Taylor, something we've been meaning to do for many years. You can drive fairly close to the top, but this being late spring, there was still snow on the upper part of the road and our Rav4's tires weren't up to the challenge. We'll go back some time and hike all the way to the top.

We spent the night in Grants, then the following day, headed down through El Malpais, stopping briefly at the beautiful Sandstone Overlook, then down through the Datil and Mogollon area. We wanted to take a look at a trail called the Catwalk, but when we got there, it was cold, blustery, and starting to rain and sleet. So we didn't hike the Catwalk this time, but at least we got a look at the beginning of it, then continued down through Silver City and thence to I-10, where just short of the Arizona border we were amused by the Burma Shave dust storm signs about which I already wrote.

At Cave Creek

[Beautiful rocks at Cave Creek] Cave Creek Ranch, in Portal, AZ, turned out to be a lovely place to stay, especially for anyone interested in wildlife. I saw several "life birds" and mammals, plus quite a few more that I'd seen at some point but had never had the opportunity to photograph. Even had we not been hiking, just hanging around the ranch watching the critters was a lot of fun. They charge $5 for people who aren't staying there to come and sit in the feeder area; I'm not sure how strictly they enforce it, but given how much they must spend on feed, it would be nice to help support them.

The bird everyone was looking for was the Elegant Trogon. Supposedly one had been seen recently along the creekbed, and we all wanted to see it.

They also had a nifty suspension bridge for pedestrians crossing a dry (this year) arroyo over on another part of the property. I guess I was so busy watching the critters that I never went wandering around, and I would have missed the bridge entirely had Dave not pointed it out to me on the last day.

The only big hike I did was the Burro Trail to Horseshoe Pass, about 10 miles and maybe 1800 feet of climbing. It started with a long hike up the creek, during which everybody had eyes and ears trained on the sycamores (we were told the trogon favored sycamores). No trogon. But it was a pretty hike, and once we finally started climbing out of the creekbed there were great views of the soaring cliffs above Cave Creek Canyon. Dave opted to skip the upper part of the trail to the saddle; I went, but have to admit that it was mostly just more of the same, with a lot of scrambling and a few difficult and exposed traverses. At the time I thought it was worth it, but by the time we'd slogged all the way back to the cars I was doubting that.

[ Organ Pipe Formation at Chiricahua National Monument ] On the second day the group went over the Chiricahuas to Chiricahua National Monument, on the other side. Forest road 42 is closed in winter, but we'd been told that it was open now since the winter had been such a dry one, and it wasn't a particularly technical road, certainly easy in the Rav4. But we had plans to visit our friend over at the base of the next mountain range west, so we just made a quick visit to the monument, did a quick hike around the nature trail and headed on.

Back with the group at Cave Creek on Thursday, we opted for a shorter, more relaxed hike in the canyon to Ash Spring rather than the brutal ascent to Silver Peak. In the canyon, maybe we'd see the trogon! Nope, no trogon. But it was a very pleasant hike, with our first horned lizard ("horny toad") spotting of the year, a couple of other lizards, and some lovely views.

Critters

We'd been making a lot of trogon jokes over the past few days, as we saw visitor after visitor trudging away muttering about not having seen one. "They should rename the town of Portal to Trogon, AZ." "They should rename that B&B Trogon's Roost Bed and Breakfast." Finally, at the end of Thursday's hike, we stopped in at the local ranger station, where among other things (like admiring their caged gila monster) we asked about trogon sightings. Turns out the last one to be seen had been in November. A local thought maybe she'd heard one in January. Whoever had relayed the rumor that one had been seen recently was being wildly optimistic.

[ Northern Cardinal ] [ Coati ] [ Javalina ] [ white-tailed buck ]
Fortunately, I'm not a die-hard birder and I didn't go there specifically for the trogon. I saw lots of good birds and some mammals I'd never seen before (full list), like a coatimundi (I didn't realize those ever came up to the US) and a herd (pack? flock?) of javalinas. And white-tailed deer -- easterners will laugh, but those aren't common anywhere I've lived (mule deer are the rule in California and Northern New Mexico). Plus some good hikes with great views, and a nice visit with our friend. It was a good trip.

On the way home, again we took two days for the opportunity to visit some places we hadn't seen. First, Cloudcroft, NM: a place we'd heard a lot about because a lot of astronomers retire there. It's high in the mountains and quite lovely, with lots of hiking trails in the surrounding national forest. Worth a visit some time.

From Cloudcroft we traveled through the Mescalero Apache reservation, which was unexpectedly beautiful, mountainous and wooded and dotted with nicely kept houses and ranches, to Ruidoso, a nice little town where we spent the night.

Lincoln

[ Lincoln, NM ] Our last stop, Saturday morning, was Lincoln, site of the Lincoln County War (think Billy the Kid). The whole tiny town is set up as a tourist attraction, with old historic buildings ... that were all closed. Because why would any tourists be about on a beautiful Saturday in spring? There were two tiny museums, one at each end of town, which were open, and one of them tried to entice us into paying the entrance fee by assuring us that the ticket was good for all the sites in town. Might have worked, if we hadn't already walked the length of the town peering into windows of all the closed sites. Too bad -- some of them looked interesting, particularly the general store. But we enjoyed our stroll through the town, and we got a giggle out of the tourist town being closed on Saturday -- their approach to tourism seems about as effective as Los Alamos'.

Photos from the trip are at Cave Creek and the Chiricahuas.

April 03, 2018

The LVFS CDN will change soon

tl;dr: If you have https://s3.amazonaws.com/lvfsbucket/downloads/firmware.xml.gz in /etc/fwupd/remotes.d/lvfs.conf then you need to nag your distribution to update the fwupd package to 1.0.6.

Slightly longer version:

The current CDN (~$100/month) is kindly sponsored by Amazon, but that won’t last forever, and the donations I get for the LVFS service don’t cover the cost of using S3. Long term we are switching to a ‘dumb’ provider (currently BunnyCDN) which works out to 1/10th of the cost — which is covered by the existing kind donations. We’ve switched to a new CNAME to make changing CDN providers easy in the future, which means this should be the only time this changes client side.
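If you want to check whether your machine is affected, one way (a sketch; the path comes from the tl;dr above) is to look for the old S3 URL in the remote definition:

```shell
# Look for the old S3 bucket URL in the LVFS remote config (sketch).
# grep -s stays quiet if the file does not exist on this system.
if grep -qs 's3.amazonaws.com/lvfsbucket' /etc/fwupd/remotes.d/lvfs.conf; then
    echo "old CDN URL present: update fwupd to 1.0.6"
else
    echo "no old CDN URL found"
fi
```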

If you want to patch older versions of fwupd, you can apply this commit.

April 02, 2018

Fun at SCaLE 2018

I am finally back and have a moment to write a bit about the wonderful time I had out in Pasadena at the Southern California Linux Expo (SCaLE 16x)!

SCaLE 16x Logo

SCaLE has been held annually in southern California for many years (the “16x” indicates this is the sixteenth annual meeting - though they’ve been holding meetings for longer as a LUG).

Libre Graphics Track

This year, Nate Willis reached out to see if we might be willing to help organize the first ever “Libre Graphics” track at the meeting. Usually the conference is geared towards enterprise technologies and users, but we thought it might be a nice opportunity to bring to light some of the awesome graphics projects that are out there.

It was an awesome opportunity to share the stage with some really talented folks. The day’s track and presentations can all be seen here:

  • Laidout
    by Tom Lechner

  • Extending Inkscape with SVG Filters
    by Ted Gould

  • Busting Things Up with the Fracture Modifier VFX Branch of Blender
    by JT Nelson

  • Making freely licensed movies with freely licensed tools
    by Matt Lee

  • Developers, Developers, Developers—How About Creatives
    by Ryan Gorley

  • Why the GIMP Team Obviously Hates You
    by Pat David

  • Git for Photographers
    by Mica Semrick

Overall it was a great day filled with some really neat presentations. More importantly, it was an opportunity to demonstrate to the attendees that the world of Libre Graphics projects is alive and well! The talks were well attended (approx. 30-40 visitors depending on the talk), and the interest and participation were quite nice. Each speaker found a receptive audience with interested follow-on questions (my presentation had about 12 minutes of questions at the end).

One of the most interesting take-aways at the end of my presentation (and in the following weeks through email) was the astonishment people had at the size of the team working on GIMP. It seemed that the overall impression was that there was some large team of folks hacking on the project, and many people were amazed that the crew is actually as small as it is.

What was heartening was the number of attendees after my presentation who took the time to offer their help in some way. These were all offers to help with writing tutorials or other non-development roles. Possible tasking for various areas of help will be communicated to those offering which should result in some new and/or updated tutorials soon!

GIMP + Inkscape Expo Booth

Even better was the opportunity to share a booth at the Expo with the Inkscape team. Presenting is fantastic fun, and I love it, but it’s ridiculously humbling to get a chance to meet face-to-face with users (in the booth on the expo floor) and to hear their stories, soak in their praise, or deflect their anger to someone else while quietly sneaking away (kidding of course).

Thanks to the great work of Ryan Gorley we even had a pair of fantastic banners to hang in the booth:

GIMP + Inkscape Banners Ryan Gorley was kind enough to design this pair of banners we hung in the booth.

There was great foot traffic during the expo and we had an opportunity to meet with and chat with quite a few folks making their way through the expo floor. There were even a few folks who had heard of GIMP but hadn’t really taken the time to look at it (which was a great opportunity to talk about the project and what they could do with it). Everyone was extremely kind and gracious.

GIMP + Inkscape Booth The booth! With yours truly in the bottom left.

Overall the conference was a success, I’d say! We had an opportunity to help represent the world of Free Software graphics applications and to showcase works using these tools to an audience that might not have otherwise considered them. Quite a few attendees were surprised to see us and very engaged, both in the booth and during the Libre Graphics track, and we sparked a nice interest in people volunteering to help with non-programming related tasks (whose willingness to help out is greatly appreciated).

Interview with Christopher

Could you tell us something about yourself?

Sure, my name is Christopher, and I’m an illustrator and visual designer who lives in California. I am presently exploring writing my own graphic novel, definitely a challenge. Talking about writing for sequential art is a whole interview on its own.

Do you paint professionally, as a hobby artist, or both?

Yes. I’ve been both for a while now, mostly as a freelancer. However, I will practice trying out new things with personal work. It helps me grow as an artist. I’m always looking for the opportunity to work with new and interesting people and projects.

What genre(s) do you work in?

Science Fiction, Fantasy, and Comic Book/Sequential art, which is where much of my childhood inspiration to create art came from. The Fantastic Four, Elric of Melniboné, The Metamorphosis Odyssey: books and art from this kind of work will always be inspirational to me.

Whose work inspires you most — who are your role models as an artist?

N. C. Wyeth and the Brandywine School of artists have always been influential to me. They were my first and lasting impression of Illustration. I saw their work at The Delaware Art Museum during a school field trip and was transfixed. These artists’ sense of dramatic storytelling and compositional choices made a lasting impression. It’s a great collection. Other artists like Brom, Adolf Hiremy-Hirschl, Mead Schaeffer, Dean Cornwell, J. C. Leyendecker and Ricardo Fernandez Ortega have been study material for me recently. I don’t think I could ever claim just one source as the epicenter of my inspiration.

How and when did you get to try digital painting for the first time?

About 10 years ago. Until then I worked predominantly in oils. I’ve always been willing to try new mediums. I thought digital painting would be a great area to explore because I could still do color work without the worry of solvents and preparing my work area.

My first experience using a professional stylus was at a Comic Con demo in 2009. That was amazing! I had observed its use and tried a really basic stylus but never really had the opportunity to use one of high quality, the pressure sensitivity was surprisingly responsive. The lag was something I noticed right off but as I adjusted my hand it was a negligible issue when working with this type of interface. It took time to orient myself but after I got the hang of it it was really cool. After that experience I knew I needed a digital setup. It was just that much fun!

What makes you choose digital over traditional painting?

For now: Immediacy.

I can start a study or painting with almost zero prep time on a digital platform. When you only have a certain amount of time to work with, either within a deadline or around other responsibilities. For example if a client or person asks you to render an image within a certain time, it’s expected within that time frame. Rarely do you get more time, usually you get less. Time is a resource that can’t be replaced. Time is something that you can’t get back, when it’s gone it’s gone. So to have a tool that vastly improves the amount of time alterations & color adjustments take and make them faster to complete is invaluable in a time sensitive environment. Especially when someone wants something changed or altered for a bevy of reasons.

Also the physicality of traditional medium has different challenges and advantages, however the solution will take longer to accomplish in general.

With all that said I’m still keeping my paints and brushes!

How did you find out about Krita?

A friend of mine back East who is really into Open Source does digital painting from time to time. He knew I was dissatisfied with Painter X and CS so he recommended Krita. Painter wasn’t particularly intuitive and CS, while OK, I wanted something different. Just because something is popular doesn’t mean it’s the right fit for everyone. So then I asked him where I could get Krita. He said to me “Open Source. Just download it. From their site”. I was like “it couldn’t be that simple”. But it was. I installed it and I was hooked.

What was your first impression?

Intuitive, more features than I had expected. It had a UI that I had very few difficulties with and could arrange to my liking. The ability to customize so much is really appealing.

What do you love about Krita?

The brushes. Their responsiveness, the ability to customize brushes was also a huge plus. I found the brush creation tool to be quite approachable and easy to use. I love the test area.

The interface was easy to navigate. There wasn’t any odd interaction with types of layers that I found with other paint programs.

Krita just felt comfortable.

What do you think needs improvement in Krita?

Well, while the UI is one of the strengths of Krita when I tried to install it recently on a Microsoft Surface there was no way to make the UI a size to use it effectively on that device. If the UI was scalable in certain parts or overall, I think that would be very helpful.

Also import/export color history as a file within an open Krita document.

What sets Krita apart from the other tools that you use?

Krita has created a community of people willing to help each other to not only make their work better but to have an evolving tool to create it with. Krita doesn’t impede talent being explored, it freely supports it.

What techniques and brushes do you like to use?

Over time I have collected a set of brushes that I use frequently. They consist of some of the Muses brush set, Cazu, Nlynook, and custom brushes I have created. I also use some of David Revoy’s brushes, specifically the Splatter brushes. These days I am using a more limited palette and limiting my use of layer effects. This is part of developing a new approach while still adhering to plein air techniques.

Where can people see more of your work?

You can see my work in print. A comic book called “After The Gold Rush” has an illustration of mine featured in issue #4. It was created in Krita. I also have a site redacesmedia.com, you can reach me there!

Anything else you’d like to share?

Yes.

To the artists reading this: keep drawing and painting. There are artists in the Krita community who will be supportive of your work!

Also, thanks for the interview and a special thanks to the developers and community that make Krita something special.

March 30, 2018

FreeCAD BIM development news - March 2018

I hope you noticed the small improvement in the title... It's not that I suddenly became a big fan of the "BIM" term, but really the word "Arch" is too narrow in today's construction field. Besides, as I explained last month, I am now starting to split the BIM stuff in FreeCAD in two parts:...

March 27, 2018

darktable 2.4.2 released

we’re proud to announce the second bugfix release for the 2.4 series of darktable, 2.4.2!

the github release is here: https://github.com/darktable-org/darktable/releases/tag/release-2.4.2.

as always, please don’t use the autogenerated tarball provided by github, but only our tar.xz. the checksums are:

$ sha256sum darktable-2.4.2.tar.xz
19cccb60711ed0607ceaa844967b692a3b8666b12bf1d12f2242ec8942fa5a81 darktable-2.4.2.tar.xz
$ sha256sum darktable-2.4.2.dmg
2b0b456f6efbc05550e729a388c55e195eecc827b0b691cd42d997b026f0867c darktable-2.4.2.dmg
$ sha256sum darktable-2.4.2-win64.exe
5181dad9afd798090de8c4d54f76ee4d43cbf76ddf2734364ffec5ccb1121a34 darktable-2.4.2-win64.exe
$ sha256sum darktable-2.4.2-win64.zip
935ba4756e208369b9cabf1ca441ed0b91acb73ebf9125dcaf563210ebe4524d darktable-2.4.2-win64.zip
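To check a download against these sums, you can put the published checksum in a file and let sha256sum verify it (assuming GNU coreutils' sha256sum; the filenames below are stand-ins for illustration, not the real release files):

```shell
# Create a stand-in "release tarball" and record its checksum, the same way
# the published sums above pair a hash with a filename.
printf 'example release tarball\n' > darktable-example.tar.xz
sha256sum darktable-example.tar.xz > SHA256SUMS

# Verify: sha256sum -c recomputes the hash and compares it to the recorded one.
sha256sum -c SHA256SUMS   # prints "darktable-example.tar.xz: OK"

rm darktable-example.tar.xz SHA256SUMS
```

For the real release, the SHA256SUMS file would contain the lines shown above, with the actual darktable-2.4.2 files sitting in the same directory.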

when updating from the currently stable 2.2.x series, please bear in mind that your edits will be preserved during this process, but it will not be possible to downgrade from 2.4 to 2.2.x any more.

Important note: to make sure that darktable can keep on supporting the raw file format for your camera, please read this post on how/what raw samples you can contribute to ensure that we have the full raw sample set for your camera under CC0 license!

and the changelog as compared to 2.4.1 can be found below.

New Features

  • Add presets to location search in map mode
  • Add timestamps to the output of -d command line switches
  • Add a compression level slider to the TIFF export module
  • Add native binary NetPNM loading, without using GraphicsMagick
  • Add a battery indicator for people running darktable on a laptop. This is not very portable code and disabled by default
  • Allow using /? to show the help message on Windows

Bugfixes

  • Turn off smooth scrolling for X11/Quartz. That might help with oversensitive scrolling
  • Fix reading and writing of TIFFs with non-ASCII filenames on Windows
  • Ellipsize background job labels when too long
  • Hard code D50 white point when exporting to OpenEXR
  • Add tooltips to the haze removal module
  • Fix a crash when changing lenses while tethering
  • Fix incorrect Atom CPU detection on Windows
  • Revised performance configuration
  • Don’t overlay the colorbalance sliders on the left for a cleaner look
  • Honor local copy in copy export format
  • Make trashing of files on Windows silent
  • Fix string termination override on memmove
  • Fix a use after free and some memleaks
  • Fix a crash in PDF export
  • Fix the min color picker
  • Don’t hardcode ‘/’ in OpenCL paths on Windows

Camera support, compared to 2.4.1

Warning: support for Nikon NEF ‘lossy after split’ raws was unintentionally broken due to the lack of such samples. Please see this post for more details. If you have affected raws, please contribute samples!

Base Support

  • Canon PowerShot G1 X Mark III
  • Panasonic DMC-FZ2000 (3:2)
  • Panasonic DMC-FZ2500 (3:2)
  • Panasonic DMC-ZS100 (3:2)
  • Sony DSC-RX0
  • Sony DSC-RX10M4

Noise Profiles

  • Canon EOS 200D
  • Canon EOS Kiss X9
  • Canon EOS Rebel SL2
  • Canon EOS 760D
  • Canon EOS 8000D
  • Canon EOS Rebel T6s
  • Canon PowerShot G1 X Mark II
  • Canon PowerShot G9 X
  • Fujifilm X100F
  • Nikon D850
  • Panasonic DC-G9
  • Panasonic DMC-GF6
  • Panasonic DMC-LX10
  • Panasonic DMC-LX15
  • Panasonic DMC-LX9
  • Panasonic DMC-TZ70
  • Panasonic DMC-TZ71
  • Panasonic DMC-ZS50

Translations

  • Dutch
  • French
  • German
  • Hungarian
  • Italian

March 26, 2018

Dust Storm Burma Shave Signs

I just got back from a trip to the Chiricahuas, specifically Cave Creek. More on that later, after I've done some more photo triaging. But first, a story from the road.

[NM Burma Shave dust storm signs]

Driving on I-10 in New Mexico near the Arizona border, we saw several signs about dust storms. The first one said,

ZERO VISIBILITY IS POSSIBLE

Dave commented, "I prefer the ones that say, 'may exist'." And as if the highway department heard him, a minute or two later we passed a much more typical New Mexico road sign:

DUST STORMS MAY EXIST
New Mexico, the existential state.

But then things got more fun. We drove for a few more miles, then we passed a sign that obviously wasn't meant to stand alone:

IN A DUST STORM

"It's a Burma Shave!" we said simultaneously. (I'm not old enough to remember Burma Shave signs in real life, but I've heard stories and love the concept.) The next sign came quickly:

PULL OFF ROADWAY

"What on earth are they going to find to rhyme with 'roadway'?" I wondered. I racked my brains but couldn't come up with anything. As it turns out, neither could NMDOT. There were three more signs:

TURN VEHICLE OFF
FEET OFF BRAKES
STAY BUCKLED

"Hmph", I thought. "What an opportunity missed." But I still couldn't come up with a rhyme for "roadway". Since we were on Interstate 10, and there's not much to do on a long freeway drive, I penned an alternative:

IN A DUST STORM
PULL OFF TEN
YOU WILL LIVE
TO DRIVE AGAIN

Much better, isn't it? But one thing bothered me: you're not really supposed to pull all the way off Interstate 10, just onto the shoulder. How about:

IN A DUST STORM
PULL TO SHOULDER
YOU WILL LIVE
TO GET MUCH OLDER

I wasn't quite happy with it. I thought my next attempt was an improvement:

IN A DUST STORM
PULL TO SHOULDER
YOU MAY CRASH IF
YOU ARE BOLDER
but Dave said I should stick with "GET MUCH OLDER".

Oh, well. Even if I'm not old enough to remember real Burma Shave signs, and even if NMDOT doesn't have the vision to make their own signs rhyme, I can still have fun with the idea.

March 25, 2018

GIMP 2.10.0 Release Candidate 1 Released

Newly released GIMP 2.10.0-RC1 is the first release candidate before the GIMP 2.10.0 stable release. With 142 bugs fixed and more than 750 commits since the 2.9.8 development version from mid-December, the focus has really been on getting the last details right.

All the new features we added for this release are instrumental in either improving how GIMP handles system resources, or helping you to report bugs and recover lost data. For a complete list of changes please see NEWS.

(Update): Thanks to Ell the windows installer (64-bit) is now available from the Development Downloads page.

New features

Dashboard dockable

A new Dashboard dock helps with monitoring GIMP’s resource usage to keep things in check, allowing you to make more educated decisions about various configuration options.

Dashboard dock

On the developer side, it also helps us in debugging and profiling various operations or parts of the interface, which is important in our constant quest to improve GIMP and GEGL, and detect which parts are the biggest bottlenecks.

The feature was contributed by Ell — one of GIMP’s most productive developers of late.

Debug dialog

What we consistently hear from users is that they have had zero GIMP crashes in years of using it. Still, as with any software, it is not exempt from bugs, and unfortunately sometimes might even crash.

While we encourage you to report all bugs you encounter, we do admit that producing useful information for a report can be difficult, and there is little we can do about a complaint that says “GIMP crashed. I don’t know what I was doing and I have no logs”.

So GIMP now ships with a built-in debugging system that gathers technical details on errors and crashes.

Debug dialog to simplify bug reporting

On development versions, the dialog will be raised on all kinds of errors (even minor ones). On stable releases, it will be raised only during crashes. The default behavior can be customized in Edit > Preferences > Debugging.

Note: you are still expected to write down contextual information when you report bugs, i.e.: what were you doing when the bug happened? If possible, step-by-step reproduction procedures are a must.

The feature was contributed by Jehan Pages from ZeMarmot project.

Image recovery after crash

With the debugging system in place to detect a crash, it was easy enough to add crash recovery. In case of a crash, GIMP will now attempt to back up all images with unsaved changes, then suggest reopening them the next time you start the application.

Crash recovery dialog

This is not a 100%-guaranteed procedure, since a program state during a crash is unstable by nature, so backing up images might not always succeed. What matters is that it will succeed sometimes, and this might rescue your unsaved work!

This feature was also contributed by the ZeMarmot project.

Shadows-Highlights

This new filter is now available in GIMP in the Colors menu thanks to a contribution by Thomas Manni, who created a similarly named GEGL operation.

Shadows-Highlights

The filter allows adjusting shadows and highlights in an image separately, with some options available. The implementation closely follows its counterpart in the darktable digital photography software.

Completed features

Layer masks on layer groups

Masks on layer groups are finally possible! This work, started years ago, has now been finalized by Ell. Group-layer masks work similarly to ordinary-layer masks, with the following considerations.

Mask on a layer group

The group’s mask size is the same as group’s size (i.e., the bounding box of its children) at all times. When the group’s size changes, the mask is cropped to the new size — areas of the mask that fall outside of the new bounds are discarded, and newly added areas are filled with black (and hence are transparent by default).

JPEG 2000 support ported to OpenJPEG

Importing JPEG 2000 images was already supported, using the Jasper library. Yet this library is now deprecated and slowly disappearing from most distributions, which is why we moved to OpenJPEG.

The port was initially started by Mukund Sivaraman. It was later completed by Darshan Kadu, under the FSF internship program, and mentored by Jehan who polished it up.

In particular, GIMP can now properly import JPEG 2000 images in any bit depth (anything over 32 bits per channel will be clamped to 32-bit, and bit depths that are not a multiple of 8 will be promoted; for instance, 12-bit will end up as 16-bit per channel in GIMP). Images in YCbCr and xvYCC color spaces will be converted to sRGB.

Imported JPEG 2000 file

JPEG 2000 codestream files are also supported. While color space can be detected for JPEG 2000 images, for codestream files you will be asked to specify the color space.

Linear workflow updates

Curves and Levels filters have been updated to have a switch between linear and perceptual (non-linear) modes, depending on which one you need.

Curves in linear mode

You can apply Levels in perceptual mode to a linear image, or Curves in linear mode to a perceptual image — whichever suits you best for the task at hand.

The same switch in the Histogram dock has been updated accordingly.

Screenshot and color-picking

On Linux, taking screenshots with the Freedesktop API has been implemented. This should become the preferred API in the hopefully near future, especially because it is meant to work inside sandboxed applications. Though for the time being, it is still not given priority because it lacks some basic features and is not color-managed in any implementation we know of, which makes it a regression compared to other implementations.

On Windows, Simon Mueller has improved the screenshot plug-in to handle hardware-accelerated software and multi-monitor displays.

On macOS, color picking with the Color dock is now color-managed.

Metadata preferences

Settings were added for metadata export handling in the “Image Import & Export” page of the Preferences dialog. By default, the settings are checked, which means that GIMP will export all metadata, but you can uncheck them (since metadata can often contain a lot of sensitive private information).

Metadata preservation

Note that these options can also be changed per format (“Load Defaults” and “Save Defaults” button), and of course per file during exporting, just like any other option.

Lock brush to view

GIMP finally gives you a choice whether you want a brush locked to a certain zoom level and rotation angle of the canvas.

Lock brush to view demo

The option is available for all painting tools that use a brush except for the MyPaint Brush tool.

Missing icons

8 new icons were added by Alexandre Prokoudine, Aryeom Han (ZeMarmot film director), and Ell.

Various GUI refining

Many last-minute details have been handled, such as renaming the composite modes to be more descriptive, shortened color channel labels with their conventional 1- or 2-letter abbreviations, color models rearranged in the Color dock, and much more!

Translations

String freeze has started and GIMP has received updates from: Basque, Brazilian Portuguese, Catalan, Chinese (Taiwan), Danish, Esperanto, French, German, Greek, Hungarian, Icelandic, Italian, Japanese, Latvian, Polish, Russian, Serbian, Slovenian, Spanish, Swedish, Turkish.

The Windows installer is now also localized with gettext.

GEGL changes

The GEGL library now used by GIMP for all image processing has also received numerous updates.

Most importantly, all scaling for display is now done on linear data. This produces more accurate scaled-down thumbnails and more valid results of mipmap computations. GIMP 2.10.0-RC1 doesn’t use mipmaps yet, but it will further down the line.

More work has been done to improve performance of GEGL across many parts of the source code. Improvements to pixel data fetching and setting functions have led to performance boosts across many GEGL operations (in particular, Gaussian blur), and for some performance-critical display cases, performance should have improved two- to three-fold since the release in December 2017.

There are 5 new operations in the workshop now. Among those, enlarge and inpaint are part of the new experimental inpainting framework by Øyvind Kolås, domain transform by Felipe Einsfeld Kersting is an edge-preserving smoothing filter, and recursive-transform is Ell’s take on the famous Droste effect.

Helping GIMP

We’d like to remind you that GIMP is free software. Therefore the first way to help is to contribute your time. You can report bugs and send patches, whether they are code patches, icons, brushes, documentation, tutorials, translations, etc.

In this release for instance, about 15% of changes were done by non-regular contributors.

You can also contribute tutorials or news for our website, as Pat David explained so well in his talk Why the GIMP Team Obviously Hates You. Pat David is himself one of the important GIMP contributors on the community side (he also created our current website back in 2015).

Last but not least, we remind you that you can contribute financially in a few ways. You can donate to the project itself, or you can support the core team developers who raise funds individually, in particular Øyvind Kolås for his work on GEGL, GIMP's graphics engine, and the ZeMarmot project (Aryeom & Jehan) for their work on GIMP itself (about 35% of this release is contributed by their project).

What’s Next

This is the last stretch before the final GIMP 2.10.0 release. There are a few more changes planned before we wrap it up. For instance, Americo Gobbo is working (with minor help from ZeMarmot) on improving our default brush set. His work will be available either in another release candidate (if we make another one) or in the final release.

We are currently 12 blocker bugs away from making the final release. We’ll do our best to make it quick!

March 23, 2018

LVFS Mailing List

I have created a new low-volume lvfs-announce mailing list for the Linux Vendor Firmware Service, which will only be used to make announcements about new features and planned downtime. If you are interested in what’s happening with the LVFS you can subscribe here. If you need to contact me about anything LVFS-related, please continue to email me (not the mailing list) as normal.

The Great Gatsby and onboarding new contributors

I am re-reading “The Great Gatsby” – my high-school son is studying it in English, and I would like to be able to discuss it with him with the book fresh in my mind –  and noticed this passage in the first chapter which really resonated with me.

…I went out to the country alone. I had a dog — at least I had him for a few days until he ran away — and an old Dodge and a Finnish woman, who made my bed and cooked breakfast and muttered Finnish wisdom to herself over the electric stove.

It was lonely for a day or so until one morning some man, more recently arrived than I, stopped me on the road.

“How do you get to West Egg village?” he asked helplessly.

I told him. And as I walked on I was lonely no longer. I was a guide, a pathfinder, an original settler. He had casually conferred on me the freedom of the neighborhood.

In particular, I think this is exactly how people feel the first time they answer a question in an open source community. A switch is flipped, a Rubicon is crossed. They are no longer new, and now they are in a space which belongs, at least in part, to them.

March 22, 2018

Krita 4.0.0 Released!

Today we’re releasing Krita 4.0! A major release with major new features and improvements: improved vector tools, SVG support, a new text tool, Python scripting and much, much, much more!

The new splash screen for Krita 4.0, created by Tyson Tan, shows Kiki among the plum blossoms. We had wanted to release Krita 4 last year already, but trials and tribulations caused considerable delays. But, like the plum blossoms that often bloom most vibrantly when it’s coldest, we have overcome, and Krita 4 is now ready!

Highlights

We’ve again created a long, long page with all the details of everything that’s new and improved in Krita 4.

See the full release notes with all changes!

We already mentioned SVG support, a new text tool and Python scripting, so here are some other highlights:

  • Masked brushes: add a mask to your brush tip for a more lively effect. This opens up some really cool possibilities!

  • New brush presets! We overhauled the entire brush set for Krita 4. Brush presets are now packaged as a bundle, too. And Krita 3’s brush set is available as well, it’s just disabled by default.

Known issues

Krita 4 is a huge step for the Krita project, as big as, if not bigger than the 3.0 release. There are some known issues and caveats:

  • Krita 4 uses SVG for vector layers. This means that Krita 3 files with vector layers may not be loaded entirely correctly. Keep backups!
  • Krita 4’s new text tool is still limited compared to what we wanted to implement. We focused on creating a reliable base and making the text tool work reliably for just one, simple use-case: creating text for comic book balloons, and we’ll continue working on improving and extending the text tool.
  • We have a new binary build factory for Windows and Linux. Unfortunately, we don’t have 32-bit Windows builds at this point in time.
  • Because macOS has a very low limit on shared memory segments, G’Mic cannot work on macOS at the moment.
  • The Reference Images Docker has been removed. It was too easy to crash it if invalid image files were present. In Krita 4.1 it will be replaced by a new reference images tool.

Download

Windows

Note for Windows users: if you encounter crashes, please follow these instructions to use the debug symbols so we can figure out where Krita crashes.

At this moment, we do not have 32-bit Windows builds available.

Note that on Windows 7 and 8 you need to install the Universal C Runtime separately to enable Python scripting. See the manual.

Linux

At the moment, the appimage does not have working translations.

(If, for some reason, Firefox thinks it needs to load this as text: to download, right-click on the link.)

You can also use the Krita Lime PPA to install Krita 4.0.0 on Ubuntu and derivatives. We are working on an updated snap.

OSX

Note: the gmic-qt and python plugins are not available on macOS.

Source code

md5sums

For all downloads:

Key

The Linux appimage and the source tarball are signed. You can retrieve the public key over https here: 0x58b9596c722ea3bd.asc. The signatures are here.

Support Krita

Krita is a free and open source project. Please consider supporting the project with donations or by buying training videos or the artbook! With your support, we can keep the core team working on Krita full-time.

Artwork by Ramon Miranda

March 21, 2018

Builder Nightly

One of the great aspects of the Flatpak model, apart from separating apps from the OS, is that you can have multiple versions of the same app installed concurrently. You can rely on the stable release while trying things out in the development or nightly built version. This creates a need to easily tell the two versions apart when launching them from the shell.

I think Mozilla has set a great precedent on how to manage multiple version identities.

Thus came the desire to spend a couple of nights working on the Builder nightly app icon. While we’ve generally tried to simplify app icons to match what’s happening on the mobile platforms and trickling down to the older desktop OSes, I’ve decided to retain the 3D workflow for the Builder icon. Mainly because I want to get better at it, but also because it’s a perfect platform for kit bashing.

For Builder specifically I’ve identified some properties I think should describe the ‘nightly’ icon:

  • Dark (nightly)
  • Modern (new stuff)
  • Not as polished – dangling cables, open panels, dirty
  • Unstable / indicating it can move (wheels, legs …)

Next up is taking a stab at a few more apps, and then it’s time to develop some guidelines for these nightly app icons and emphasize it with some Shell styling. Overlaid emblems haven’t particularly worked in the past, but perhaps some tag style for the label could do.

March 19, 2018

Interview with Jennifer

Could you tell us something about yourself?

I’m almost 35 years old, from a city called Passo Fundo, state of Rio Grande do Sul, Brazil. I like cats, cartoons and rock and roll. 1994 was the year when I started to have some interest in drawing. I looked into learning how to draw just for fun and sometimes to let my soul talk. But I can say that the digital art that I started practicing last year has helped me get rid of a recent depression.

Do you paint professionally, as a hobby artist, or both?

As a hobby. At least for now.

What genre(s) do you work in?

I usually draw cartoons. But I also like painting nature and fantasy elements.

Whose work inspires you most — who are your role models as an artist?

Most times when I draw, I don’t look up to a specific artist. I search random images on internet or the painting comes from my own mind. I think any kind of art should come from the artist’s inner soul.

How and when did you get to try digital painting for the first time?

It occurred last year, in May, I guess. I had no job but I had a nasty depression. Then my husband said he would like to learn how to draw and start working with that. It was when I started to draw again. Yes, I had stopped drawing, limiting myself to drawing just when I had nothing more to do. Then we got an online course from Ivan Quirino and here I am, less than a year later, doing all kinds of digital painting.

What makes you choose digital over traditional painting?

The practicality. It is really hard when you do something wrong drawing the traditional way. In digital painting, you can redo as many times as necessary.

How did you find out about Krita?

At Youtube or at some blog. I can’t tell for sure.

What was your first impression?

When I used Krita for the first time I already knew most of the tools, so it was easy to use. But I needed to learn more, then I watched a video that explained the basic tools and method to paint. I thought then that Krita was a good tool for painting. Today I can tell it’s a great tool for digital artists. My personal opinion: Krita is the best and I really can’t use a different program.

What do you love about Krita?

The quick access to the tools I need. The ease of working with it. I very much like the function that allows you to paint just the line art. It’s awesome.

What do you think needs improvement in Krita? Is there anything that really annoys you?

Nothing! As I said before: I really enjoy working with Krita and I recommend it to anyone who is choosing this path of digital art.

What sets Krita apart from the other tools that you use?

The brushes, the way Krita works with layers (for example: if you have a line on the top layer and you paint a background on the layer below, you won’t paint over what is drawn on the top layer). I don’t know about the functionality of all painting software, but I think this is pretty cool.

If you had to pick one favourite of all your work done in Krita so far, what would it be, and why?

I like my latest work. At the time I made the work, I hadn’t thought about a name yet. But looking at it now, I could call it “The peace of the mermaid”. I think it fits well.

What techniques and brushes did you use in it?

Well, I’m not good with names of techniques, but I used the default brushes and some of those by David Revoy (airbrush, fill brush, wet brushes, some ink for the little details, some customized brush “LJF water brush 3”). I also used effect layers. So I started with the water base, just filling the area with blue and white tones. Then I painted the sky, mixing tones of blue with white. The sun was made with an airbrush, mixing yellow and white. After that, I used the customized brush to do the details of the water. Always mixing the colors to get the vision that I was looking for. Then I painted the blocks of sand, leaving the details, done with “splat_texture – Marcas”, for the end. At this point, I could start drawing the mermaid. Started by doing a shadow mermaid. After that, I put the colors, lights and shadows and the details. The effect layers were used to get more luminance on a specific element (In Portuguese: Luz viva – used on the mermaid and the starfish -, Luz suave – used to get the luminance on the full scene – , Desvio linear – to get the effect on the water light).

Where can people see more of your work?

For the moment I have:
Blog: https://jbh-digitalart.blogspot.com.br/
Facebook page: https://www.facebook.com/JBHdigitalart/?ref=bookmarks

Anything else you’d like to share?

I just wanted to thank the people that work to improve Krita, an amazing box of tools for digital painting!

March 12, 2018

A follow up on Fedora 28’s background art

A quick post – I have a 4k higher-quality render of one of Fedora 28 background candidates mentioned in a recent post about the Fedora 28 background design process. Click on the image below to grab it if you would like to try / test it and hopefully give some feedback on it:

3D render of the Fedora logo in blue fiber optic light strands against a black background. The image is angled with some blur and bokeh effects; the angling of this version is such that it comes from below and looks up.

One of the suggestions I’ve received from your feedback is to try to vary the height between the ‘f’ and the infinity symbol so they stand out. I’m hoping to find some time this week to figure out how exactly to do that (I’m a Blender newbie 😳), but if you want to try your hand, the Blender source file is available.

March 10, 2018

Intel Galileo v2 Linux Basics

[Intel Galileo Gen2 by Mwilde2 on Wikimedia commons] Our makerspace got a donation of a bunch of Galileo gen2 boards from Intel (image from Mwilde2 on Wikimedia commons).

The Galileo line has been discontinued, so there's no support and no community, but in theory they're fairly interesting boards. You can use a Galileo in two ways. You can treat it like an Arduino, after using the Arduino IDE to download a Galileo hardware definition (since they're not ATmega chips). They even have Arduino-format headers so you can plug in an Arduino shield. That works okay (once you figure out that you need to download the Galileo v2 hardware definitions, not the regular Galileo). But they run Linux under the hood, so you can also use them as a single-board Linux computer.

Serial Cable

The first question is how to talk to the board. The documentation is terrible, and web searches aren't much help because these boards were never terribly popular. Worse, the v1 boards seem to have been more widely adopted than the v2 boards, so a lot of what you find on the web doesn't apply to v2. For instance, the v1 required a special serial cable that used a headphone jack as its connector.

Some of the Intel documentation talks about how you can load a special Arduino sketch that then disables the Arduino bootloader and instead lets you use the USB cable as a serial monitor. That made me nervous: once you load that sketch, Arduino mode no longer works until you run a command on Linux to start it up again. So if the sketch doesn't work, you may have no way to talk to the Galileo. Given the state of the documentation I'd already struggled with for Arduino mode, it didn't sound like a good gamble. I thought a real serial cable sounded like a better option.

Of course, the Galileo documentation doesn't tell you what needs to plug in where for a serial cable. The board does have a standard FTDI 6-pin header on the board next to the ethernet jack, and the labels on the pins seemed to correspond to the standard pinout on my Adafruit FTDI Friend: Gnd, CTS, VCC, TX, RX, RTS. So I tried that first, using GNU screen to connect to it from Linux just like I would a Raspberry Pi with a serial cable:

screen /dev/ttyUSB0 115200

Powered up the Galileo and sure enough, I got boot messages and was able to log in as root with no password. It annoyingly forces orange text on a black background, making it especially hard to read on a light-background terminal, but hey, it's a start.

Later I tried a Raspberry Pi serial cable, with just RX (green), TX (white) and Gnd (black) -- don't use the red VCC wire since the Galileo is already getting power from its own power brick -- and that worked too. The Galileo doesn't actually need CTS or RTS. So that's good: two easy ways to talk to the board without buying specialized hardware. Funny they didn't bother to mention it in the docs.

Blinking an LED from the Command Line

Once connected, how do you do anything? Most of the Intel tutorials on Linux are useless, devoting most of their space to things like how to run Putty on Windows and no space at all to how to talk to pins. But I finally found a discussion thread with a Python example for Galileo. That's not immediately helpful since the built-in Linux doesn't have Python installed (nor gcc, natch). Fortunately, the Python example used files in /sys rather than a dedicated Python library; we can access /sys files just as well from the shell.

Of course, the first task is to blink an LED on pin 13. That apparently corresponds to GPIO 7 (what are the other Arduino/GPIO correspondences? I haven't found a reference for that yet.) So you need to export that pin (which creates /sys/class/gpio/gpio7) and set its direction to out. But that's not enough: the pin still doesn't turn on when you echo 1 > /sys/class/gpio/gpio7/value. Why not? I don't know, but the Python script exports three other pins -- 46, 30, and 31 -- and echoes 0 to 30 and 31. (It does this without first setting their directions to out, and if you try that, you'll get an error, so I'm not convinced the Python script presented as the "Correct answer" would actually have worked. Be warned.)

Anyway, I ended up with these shell lines as preparation before the Galileo can actually blink:

# echo 7 >/sys/class/gpio/export

# echo out > /sys/class/gpio/gpio7/direction

# echo 46 >/sys/class/gpio/export
# echo 30 >/sys/class/gpio/export
# echo 31 >/sys/class/gpio/export

# echo out > /sys/class/gpio/gpio30/direction
# echo out > /sys/class/gpio/gpio31/direction
# echo 0  > /sys/class/gpio/gpio30/value
# echo 0  > /sys/class/gpio/gpio31/value

And now, finally, you can control the LED on pin 13 (GPIO 7):

# echo 1 > /sys/class/gpio/gpio7/value
# echo 0 > /sys/class/gpio/gpio7/value
or run a blink loop:
# while /bin/true; do
> echo 1  > /sys/class/gpio/gpio7/value
> sleep 1
> echo 0  > /sys/class/gpio/gpio7/value
> sleep 1
> done
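
The setup lines and the blink loop can be collected into one small script. This is my own consolidation, not anything from the Intel docs; GPIO_ROOT is parameterized (defaulting to /sys/class/gpio) so the logic can be dry-run against an ordinary directory off the board:

```shell
#!/bin/sh
# Consolidation of the pin setup and blink loop above.
# GPIO_ROOT defaults to the real sysfs tree; point it elsewhere to dry-run.
GPIO_ROOT="${GPIO_ROOT:-/sys/class/gpio}"

export_pin() {
    # Re-exporting an already-exported pin is an error, so skip it
    # if the gpioN directory already exists.
    [ -d "$GPIO_ROOT/gpio$1" ] || echo "$1" > "$GPIO_ROOT/export"
}

setup() {
    for pin in 7 46 30 31; do export_pin "$pin"; done
    # Pin 46 gets no direction or value, matching the original steps.
    echo out > "$GPIO_ROOT/gpio7/direction"
    echo out > "$GPIO_ROOT/gpio30/direction"
    echo out > "$GPIO_ROOT/gpio31/direction"
    echo 0 > "$GPIO_ROOT/gpio30/value"
    echo 0 > "$GPIO_ROOT/gpio31/value"
}

blink() {
    # Blink the pin-13 LED (GPIO 7) $1 times: one second on, one second off.
    i=0
    while [ "$i" -lt "$1" ]; do
        echo 1 > "$GPIO_ROOT/gpio7/value"; sleep 1
        echo 0 > "$GPIO_ROOT/gpio7/value"; sleep 1
        i=$((i + 1))
    done
}

# On the board, as root: setup && blink 5
```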

Searching Fruitlessly for a "Real" Linux Image

All the Galileo documentation is emphatic that you should download a Linux distro and burn it to an SD card rather than using the Yocto that comes preinstalled. The preinstalled Linux apparently has no persistent storage, so not only does it not save your Linux programs, it doesn't even remember the current Arduino sketch. And it has no programming languages and only a rudimentary busybox shell. So finding and downloading a Linux distro was the next step.

Unfortunately, that mostly led to dead ends. All the official Intel docs describe different download filenames, and they all point to generic download pages that no longer include any of the filenames mentioned. Apparently Intel changed the name for its Galileo images frequently and never updated its documentation.

After forty-five minutes of searching and clicking around, I eventually found my way to Intel® IoT Developer Kit Installer Files, which includes sizable downloads with names like

  • iss-iot-linux_12-09-16.tar.bz2 (324.07 MB),
  • intel-iot-yocto.tar.xz (147.53 MB),
  • intel-iot-wrs-pulsar-64.tar.xz (283.86 MB),
  • intel-iot-wrs-32.tar.xz (386.16 MB), and
  • intel-iot-ubuntu.tar.xz (209.44 MB)

From the size, I suspect those are all Linux images. But what are they and how do they differ? Do any of them still have working repositories? Which ones come with Python, with gcc, with GPIO support, with useful development libraries? Do any of them get security updates?

As far as I can see, the only way to tell is to download each image, burn it to a card, boot from it, then explore the filesystem trying to figure out what distro it is and how to try updating it.
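
Short of booting each card, a partial shortcut is to loop-mount each image's root filesystem and poke around. The helper below is my own sketch (nothing Intel documents, and it assumes the images carry a standard /etc/os-release): given a mounted root directory, it reports the distro name and whether python and gcc are present.

```shell
# Inspect a mounted root filesystem: identify the distro and check
# for a couple of development tools.
inspect_rootfs() {
    root="$1"
    # /etc/os-release identifies most modern distros.
    grep '^PRETTY_NAME=' "$root/etc/os-release" 2>/dev/null \
        || echo "PRETTY_NAME=unknown"
    for tool in python gcc; do
        if [ -x "$root/usr/bin/$tool" ]; then
            echo "$tool: present"
        else
            echo "$tool: missing"
        fi
    done
}

# Against a real image (filenames hypothetical), something like:
#   mount -o loop,ro rootfs.ext3 /mnt
#   inspect_rootfs /mnt
#   umount /mnt
```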

But by this time I'd wasted three hours and gotten no further than the shell commands to blink a single LED, and I ran out of enthusiasm. I mean, I could spend five more hours on this, try several of the Linux images, and see which one works best. Or I could spend $10 on a Raspberry Pi Zero W that has abundant documentation, libraries, books, and community howtos. Plus wi-fi, bluetooth and HDMI, none of which the Galileo has.

Arduino and Linux Living Together

So that's as far as I've gone. But I do want to note one useful thing I stumbled upon while searching for information about Linux distributions:

Starting Arduino sketch from Linux terminal shows how to run an Arduino sketch (assuming it's already compiled) from Linux:

sketch.elf /dev/ttyGS0 &

It's a fairly cool option to have. Maybe one of these days, I'll pick one of the many available distros and try it.

March 08, 2018

Code Quest Campaign: A Success Story

A software project is a living thing, and every few years it needs to take a leap. A leap for survival, for innovation, to respond and adapt to new trends and technologies, to lay the foundation for future trends. This is a risky endeavour. Ambitious targets tend to significantly slow down development momentum due to complex engineering decisions, disagreements between team members or lack of outward communication to the user community.

Blender has achieved this feat once before during its existence –the well-known 2.5 project in 2010– thanks to the relentless leadership of Ton Roosendaal and a tight-knit team of developers and power users. After nearly 8 years of gradual improvements, during which Blender’s user base more than quadrupled, it was time for another jump. Enter Blender 2.8.

Blender 2.8

The main goal of Blender 2.8 is to further improve support for diverse workflows, complemented by features such as a high quality PBR viewport, 2D animation tools, advanced asset management and a powerful animation system. While Blender is often regarded as an oddity, its flexibility is being discovered and appreciated by a growing audience.

After over one year of work, the project needed a final sprint to deliver the first beta of Blender 2.8. To achieve this, the idea of a “Code Quest” was proposed: to bring together nearly all of the core developers for three months in one location, in the Blender Institute in Amsterdam.

This period would enable the team to tackle fundamental engineering issues, as well as to more efficiently focus on interface design and usability.

Code Quest Launch

How to successfully fund an Open Source project

The funding of the Code Quest, estimated at 200K USD, was divided among four parties.

The first was the Blender Foundation, the non-profit entity which coordinates worldwide developer outreach and runs the official online platforms for the project. The Blender Foundation, via the Blender Development Fund, awards grants to independent developers.

The second was Blender Institute, the Amsterdam-based Open Content powerhouse, who provided the initial funding for the campaign, as well as public relations, communications and logistics. Blender Institute employs several of the Blender core developers and funds part of the Code Quest costs via the Blender Cloud, the open production and training platform.

The remaining two parties were industry sponsors and the Blender user community, who together were expected to cover nearly half of the total budget via a crowdfunding campaign.

A rocket ride

With the animation studio Tangent Animation and Aleph Objects, makers of the Lulzbot 3D printer, signing up as sponsors, industry support started well and continued to grow as several other Blender-based businesses joined the effort.

However, the biggest challenge was to involve the user community. After reviewing several strategies (including using popular crowdfunding platforms) the Blender Institute team decided to focus the entire campaign on selling a memorable reward token – a space rocket shaped USB drive. Each rocket would cost 39 USD, with the price rising by 10 USD after 3 weeks. Rockets would be produced right after the campaign, to give the immediate reward of having supported an ongoing project. The minimum target was set at 1,000 rockets.

Code Quest Rocket

And then the user community pulled off something truly outstanding. The goal was achieved in just 4 days, confirming the official start in April and prompting a new target of 2,500 rockets. This stretch goal was set to expand the Code Quest team. The new target was achieved in less than 3 weeks.

Thanks to this additional support, almost 100K USD were raised, an amount comparable to the historic campaign that made Blender become Open Source back in 2002.

Code Quest months

The Code Quest is an unprecedented opportunity to document the development process in an open and transparent way, building up excitement in anticipation of Blender’s beta release due in July 2018. The Code Quest will be frequently covered on the official code.blender.org blog, via video logs, live streams and demos.

At the same time, two high-profile Blender Open Movies that are in production will be the ultimate stress test for the upcoming release. These are Hero, the first short film combining traditional animation in a three-dimensional space, and Spring, a poetic visual journey that will raise the bar set by previous Blender Open Movies.

Blender 2.8 carries a lot of expectations. The Code Quest campaign has proven, once again, that the community is there to make it happen!

Francesco Siddi

Code Quest Landing

What 3 Words?

I dig online maps like everyone else, but sharing a location is somewhat clumsy. The W3W service addresses the issue by chunking up the whole world into 3x3m squares and assigning each a name (supposedly around 57 trillion of them). Sometimes it’s a bit of a tongue twister, but most of the time it’s fun to say to meet at a “massive message chuckle” for some fpv flying. I’m really surprised this didn’t take off.
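
As a quick sanity check on that 57 trillion figure (my own back-of-the-envelope, not from W3W): the Earth's surface is roughly 510 million km², and dividing that into 3x3m squares lands close to the claimed count.

```shell
# Earth's surface (~510 million km^2, in m^2) divided by one 9 m^2 square.
awk 'BEGIN {
    surface_m2 = 510e6 * 1e6
    printf "%.1f trillion\n", surface_m2 / 9 / 1e12
}'
# prints: 56.7 trillion
```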

March 07, 2018

Jupyter lab with an Octave kernel

Octave is a good choice for getting some serious computing done (it’s largely an open-source Matlab). But for interactive exploration, it feels a bit awkward. If you’ve done any data science work lately, you’ll undoubtedly have used the fantastic Jupyter.

There’s a way to combine both and have the great UI of Jupyter with the processing core of Octave:

Jupyter lab with an Octave kernel

I’ve built a variant of the standard Jupyter Docker images that uses Octave as a kernel, to make it trivial to run this combination. You can find it here.


Comments | More on rocketeer.be | @rubenv on Twitter

March 06, 2018

Fedora 28’s Desktop Background Design

Fedora 28 (F28) is slated to release in May 2018. On the Fedora Design Team, we’ve been thinking about the default background wallpaper for F28 since November. Let’s walk through the Fedora 28 background process thus far as a sort of pre-mortem; we’d love your feedback on where we’ve ended up.

November: Inspiration

For the past three releases, we have chosen a sequential letter of the alphabet and come up with a list of scientists / mathematicians / technologists to serve as an inspiration for the desktop background’s visual concept:

F25's wallpaper - an almost floral blue gradiated blade design, F26 a black tree line reflected in water against a wintry white landscape (the trees + reflection resemble a sound wave), F27 a blue and purple gradiated underwater scene with several jellyfish - long tendrils drifting and twisting - floating up the right side of the image

Backgrounds from Fedora 25, 26, and 27. 25’s inspiration was Archimedes, and the visual concept was an organic Archimedes’ screw. F26’s inspiration was Alexander Graham Bell, and the visual concept was a sound wave of a voice saying “Fedora.” F27’s inspiration was underwater researcher Jacques Cousteau, and the visual concept was transparency in the form of jellyfish.

Gnokii kicked off the process in November by starting the list of D scientists for F28 and holding a vote on the team: we chose Emily Duncan, an early technologist who invented several types of banking calculators.

December: First concepts

We had a meeting in IRC (which I seem to have forgotten to run meetbot on 🙁 ) where we brainstormed different ways to riff off of Emily Duncan’s work as an inspiration. One of the early things we looked at were some of the illustrations from one of Duncan’s patents:

Diagram etchings from 1903 Duncan calculator patent. Center is a cylindrical object covered in a grid with numbers and various mechanical bits

Gnokii started drafting some conceptual mockups, starting with a rough visualization of an Enigma machine and moving to visuals of electric wires and gears:

3D perspective alpha cryptography keys scrolling vertically in 3D space
wires with bright sparks traveling along them atop a gear texture, black background
wires with bright sparks traveling along them atop a gear texture, blue background

During a regular triage meeting, the team met in IRC and we discussed the mockups and had some critique and suggestions which we shared in the ticket.

February: Solidifying Concept

After the holidays, we got back to it with the beta freeze deadline in mind. Note, we don’t have alpha releases in Fedora anymore, which means we need to have more polish in our initial wallpaper than we had traditionally in order to get useful feedback for the final wallpaper. This started with a regular triage meeting where the F28 wallpaper ticket came up. We brainstormed a lot of ideas and went through a lot of different and of-the-moment visual styles. Maria shared a link to a Behance article on 2018 design trends and it seemed 3D styles in a lot of different ways are the trend of the moment. Some works that particularly inspired us:

Rose Pilkington’s Soft Bodies for Electric Objects

Gently-textured pastel hues of bright cyan, orange, yellow, and pink in a softly gradiated set of flat but almost 3D like rounded abstract shapes

Ari Weinkle’s Wormholes

Almost psychedelic, cavelike, wavy environment made with cascading 3D ridges, orange and purple hued palette

Ari Weinkle’s Paint waves

Vibrant, rainbow hued, gracefully curving and spiraling super thick sculpted 3D paint with a ridged texture

Taking these inspirations as directions, terezahl and I started on another round of mockups.

Terezahl created mockups, one of which appears to be inspired by Pilkington’s work, based on the concept of 28 being a triangular number:

On top, a black to greenish blue shaded abstract composition with a floating triangle floating in front of a background with an inverse gradient. On bottom, rounded abstract shapes in purple, blue, and cyan jewel tones.

I was inspired by Weinkle’s paint waves, but couldn’t figure out a technique to approximate it in Blender. Conceptually, I wanted to take gnokii’s wires with data ‘lights’ travelling down the wires, and have those lights travel down the ridges in an abstract swirled wave. I figured it would probably take some work with Blender’s particle system, since the mass of a character’s hair is typically created that way. I had never used Blender’s particle system before, so I took a tutorial that seemed the closest to the effect I wanted – a Blender Guru tutorial by Andrew Price:

As per the feedback I received from gnokii – the end result was too close to the output you’d expect from such a tutorial. I wasn’t able to achieve a more solid mass than the fiber optic strands, although they visually represented the ‘data light’ concept I was going for:

Sparkling blue-hued fiber optic threads against a black background, their ends glowing light blue, with some blurring and bokeh effects - 3D rendered

Time was short, so we ended up deciding to ship this mockup – as close to the tutorial as it was – in the F28 beta to see what kind of feedback we got on the look. Thankfully Luya was able to package it up for us with some time to spare! So far, the preliminary feedback we’ve gotten from folks on social media and/or who’ve seen it via Luya’s package for beta has been positive.

March: Finalization

Since the time-consuming work of building the platform in Blender from the tutorial is done, I’ve started playing around with the idea to see what kind of visuals we could get. The obvious, of course, is to work the Fedora logo into it. Fedora 26’s wallpaper had a sound wave depicting the vocalization of the word “Fedora” – I was trying to think of how to have the fiber optic ‘data’ show the same. Perhaps this is too literal. Anyhow, here are the two crowd favorites thus far:

#3

3D render of the Fedora logo in blue fiber optic light strands against a black background. Image is angled with some blur and bokeh effects

#9

3D render of the Fedora logo in blue fiber optic light strands against a black background. Image is angled with some blur and bokeh effects. the angling of this version is such that it comes from below and looks up.

we need your help!

Anyway, this is where you come in. Take a look at these. With the system built in Blender, we have a lot of things we can tweak easily – the angles, the lens / bokeh / focus, the shape / path of the strands (like how the latest renderings follow the Fedora f/infinity), the shape / type of object the strands are made of (right now long / narrow cylinders.) These kinds of tweaks are quick. Any ideas you have on a path forward here, or just simple feedback, would be much appreciated. 🙂

March 05, 2018

Interview with Johan Brits

Could you tell us something about yourself?

I’m from South Africa. I’ve been drawing my whole life, mostly with graphite pencil, but when I discovered digital drawing I was hooked. I started out just using a standard desktop mouse and GIMP and got kind of good at it. Since then I have improved a lot and plan to keep improving and creating new art for as long as I can.

Do you paint professionally, as a hobby artist, or both?

I paint as a hobby, but I sometimes use the skills I’ve learned from painting in a professional capacity when I need to edit or create images.

What genre(s) do you work in?

I don’t really have a specific genre besides perhaps drawing in a more realistic style. I like to challenge myself to draw new things. I usually paint something with life in it like creatures or people.

Whose work inspires you most — who are your role models as an artist?

Jazza from the YouTube channel Draw with Jazza. Although he mostly does traditional art, his ability to draw amazing things from random prompts really inspires me. There are also amazing artists on ArtStation.com, and I only need to scroll through a few images before I feel the urge to draw something myself.

How and when did you get to try digital painting for the first time?

I found GIMP on a Linux computer in college and I played around with some of the filters. I was amazed at what was possible with a few simple steps. After browsing around on YouTube I saw some artists drawing pictures from scratch in Photoshop. Because I already knew how to draw with pencil I wanted to give it a try using free software and quickly fell in love with it.

What makes you choose digital over traditional painting?

So many things. The ease of changing things when you are already far into the drawing, the fact that you can undo mistakes, and, best of all, it’s not as messy. I also love computers, so drawing digitally is like having the best of both worlds.

How did you find out about Krita?

A friend told me about it after trying it with his Wacom tablet. I am a software developer so any new software is like a new toy for me. I checked out the website and what other people had created using it and I was intrigued.

What was your first impression?

The interface was so much more modern than GIMP, and I’m a firm believer that the interface makes a big difference in first impressions. I played around with it a bit and quickly saw that it had all the features I use with GIMP and more.

What do you love about Krita?

I love the interface. I also like the fact that you can do animations with it. I have only started dabbling in animation but so far I am fascinated by it. I also love how responsive Krita is and the fact that it supports my tablet, which GIMP did not. And finally I love that it is still being improved upon by the developers. It means any issues I might encounter can still be solved.

What do you think needs improvement in Krita? Is there anything that really annoys you?

Having spent many hours drawing in Krita I can honestly say there is nothing that is really annoying. There is the occasional odd thing that happens as with any drawing software but nothing I haven’t been able to find a workaround for.

What sets Krita apart from the other tools that you use?

The amount of things you can do with it, all neatly wrapped up in a beautiful design. Also the fact that it is free but still has the quality of paid software.

If you had to pick one favourite of all your work done in Krita so far, what would it be, and why?

Usually my newest drawing is my favourite but the Ferret mount I drew really stands out for me. I tried to push myself to create a sense of depth and a scene that I haven’t been able to achieve in any of my previous drawings. I learned a lot from drawing it and it was a lot of fun to do.

What techniques and brushes did you use in it?

Some of the techniques I used were blurring the foreground and background and adding a bright light source to create the impression of depth. I used the default brushes that come with Krita to create everything from the fur to the texture of the dirt.

Where can people see more of your work?

YouTube: https://www.youtube.com/johanjbrits
Facebook: https://www.facebook.com/JohanBritsArt/
ArtStation: https://www.artstation.com/britsie_1
Instagram: https://www.instagram.com/britsie_1
DeviantArt: http://britsie1.deviantart.com/
Twitter: https://twitter.com/britsie_1

Anything else you’d like to share?

I would just like to thank the team working on Krita for the amazing job they’ve done in creating a truly awesome drawing application.

March 02, 2018

FreeCAD Arch development news - February 2018

Hi all, Time for our monthly development update. This month again, no new feature has landed in the FreeCAD codebase, because we are still in "feature freeze mode", which means no new feature (that might break something) can be added to the FreeCAD source code, only bug fixes. We hoped to release version 0.17 in February, but,...

March 01, 2018

Re-enabling PHP when a Debian system upgrade disables it

I updated my Debian Testing system via apt-get upgrade, as one does during the normal course of running a Debian system. The next time I went to a locally hosted website, I discovered PHP didn't work. One of my websites gave an error, due to a directive in .htaccess; another one presented pages that were full of PHP code interspersed with the HTML of the page. Ick!

In theory, Debian updates aren't supposed to change configuration files without asking first, but in practice, silent and unexpected Apache bustage is fairly common. But for this one, I couldn't find anything in a web search, so maybe this will help.

The problem turned out to be that /etc/apache2/mods-available/ includes four files:

$ ls /etc/apache2/mods-available/*php*
/etc/apache2/mods-available/php7.0.conf
/etc/apache2/mods-available/php7.0.load
/etc/apache2/mods-available/php7.2.conf
/etc/apache2/mods-available/php7.2.load

The appropriate files are supposed to be linked from there into /etc/apache2/mods-enabled. Presumably, I previously had a link to ../mods-available/php7.0.* (or perhaps 7.1?); the upgrade to PHP 7.2 must have removed that existing link without replacing it with a link to the new ../mods-available/php7.2.*.

The solution is to restore those links, either with ln -s or with the approved apache2 commands (as root, of course):

# a2enmod php7.2
# systemctl restart apache2
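
To catch this quickly if a future upgrade does it again, a small helper can compare which PHP module versions are available against which are actually enabled. This is my own sketch, not part of the apache2 tooling; APACHE_DIR is parameterized so it can be tried against a directory other than /etc/apache2.

```shell
# List PHP module versions available vs. actually enabled under an
# Apache config tree. APACHE_DIR defaults to the Debian location.
APACHE_DIR="${APACHE_DIR:-/etc/apache2}"

php_mods() {
    # $1 is "mods-available" or "mods-enabled"; strip .conf/.load suffixes
    # and deduplicate, leaving bare names like php7.2.
    ls "$APACHE_DIR/$1" 2>/dev/null \
        | grep '^php' \
        | sed -e 's/\.conf$//' -e 's/\.load$//' \
        | sort -u
}

# Warn if PHP modules exist but none are enabled.
if [ -n "$(php_mods mods-available)" ] && [ -z "$(php_mods mods-enabled)" ]; then
    echo "PHP modules available but none enabled -- run a2enmod php<version>"
fi
```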

Whew! Easy fix, but it took a while to realize what was broken, and it would have been nice if it hadn't broken in the first place. Why is the link version-specific anyway? Why isn't there a file called /etc/apache2/mods-available/php.* for the latest version? Does PHP really change enough between minor releases to break websites? Doesn't it break a website more to disable PHP entirely than to swap in a newer version of it?

February 28, 2018

Searching for hardware on the LVFS

The LVFS acquired another new feature today.

You can now search for firmware and hardware vendors — but the algorithm is still very much WIP and we need some real searches from real users. If you have a spare 10 seconds, please search for your hardware on the LVFS. I’ll be fixing up the algorithm as we find problems. I’ll also be using the search data to work out what other vendors we need to reach out to. Comments welcome.

February 26, 2018

Everything is Better in Slow Motion

Powerslidin’ Sunday from jimmac on Vimeo.

Superb weather over the weekend, despite the thermometer dipping below 10°C.

February 24, 2018

G’MIC 2.2 : New features and filters!

The IMAGE team of the GREYC laboratory (UMR CNRS 6072, Caen, France) is pleased to announce the release of a new 2.2 version of G’MIC, its open-source, generic, and extensible framework for image processing. As we already did in the past, we take this opportunity to look at the latest notable features added since the previous major release (2.0, last June).



Note 1: click on a picture to view a larger version.
Note 2: This is a translation of an original article, in French, published on Linuxfr.

1. Context and recent evolutions

G’MIC is a free and open-source software developed since August 2008 (distributed under the CeCILL license), by folks in the IMAGE team at the GREYC, a French public research laboratory located in Caen and supervised by three institutions: the CNRS, the University of Caen, and the ENSICAEN engineering school. This team is made up of researchers and lecturers specialized in the fields of algorithms and mathematics for image processing.
As one of the main developers of G’MIC, I wanted to sum up the work we’ve done on this software during these last months.

G'MIC logo
Fig. 1.1: The G’MIC project logo, and its cute little mascot “Gmicky” (designed by David Revoy).

G’MIC is multi-platform (GNU/Linux, MacOS, Windows …) and provides many ways of manipulating generic image data, i.e. still images or image sequences acquired as hyperspectral 2D or 3D floating-point arrays (including usual color images). More than 950 different image processing functions are already available in the G’MIC framework, this number being expandable through the use of the G’MIC scripting capabilities.

G'MIC plugin for GIMP
Fig.1.2: The G’MIC-Qt plugin for GIMP, currently the most popular G’MIC interface.

Since the last major version release there have been two important events in the project life:

1.1. Port of the G’MIC-Qt plugin to Krita

When we released version 2.0 of G’MIC a few months ago, we were happy to announce a complete rewrite (in Qt) of the plugin code for GIMP. An extra step has been taken, since this plugin has been extended to fit into the open-source digital painting software Krita.
This has been made possible thanks to the development work of Boudewijn Rempt (maintainer of Krita) and Sébastien Fourey (developer of the plugin). The G’MIC-Qt plugin is now available for Krita versions 3.3+ and, although it does not yet implement all the I/O functionality of its GIMP counterpart, the feedback we’ve had so far is rather positive.
This new port replaces the old G’MIC plugin for Krita which has not been maintained for some time. The good news for Krita users (and developers) is that they now have an up-to-date plugin whose code is common with the one running in GIMP and for which we will be able to ensure the maintenance and further developments.
Note this port required the writing of a source file host_krita.cpp (in C++) implementing the communication between the host software and the plugin, and it is reasonable to think that a similar effort would allow other programs to get their own version of the G’MIC plugin (and the 500 image filters that come with it!).

G'MIC for Krita
Fig. 1.3: Overview of the G’MIC-Qt plugin running on Krita.

1.2. CeCILL-C, a more permissive license

Another major event concerns the new license of use: the CeCILL-C license (which is in the spirit of the LGPL) is now available for some components of the G’MIC framework. This license is more permissive than the previously proposed CeCILL license (which is GPL-compatible) and is more suitable for the distribution of software libraries. This license extension (now a double licensing) applies precisely to the core files of G’MIC, i.e. its C++ library libgmic. Thus, the integration of the libgmic features (therefore, all G’MIC image filters) is now allowed in software that is not itself licensed under GPL/CeCILL (including closed source products).
The source code of the G’MIC-Qt plugin, meanwhile, remains distributed under the single CeCILL license (GPL-like).

2. Fruitful collaboration with David Revoy

If you’ve followed us for a while, you may have noticed that we very often refer to the work of illustrator David Revoy for his multiple contributions to G’MIC: mascot design, ideas of filters, articles or video tutorials, tests of all kinds, etc. More generally, David is a major contributor to the world of free digital art, as much with the comic Pepper & Carrot he produces (distributed under the free CC-BY license), as with his suggestions and ongoing bug reports for the open-source software he uses.
Therefore, it seems quite natural to devote a special section to him in this article, summarizing the different ideas, contributions and experiments he has brought to G’MIC just recently. A big thank you, David for your availability, the sharing of your ideas, and for all your work in general!

2.1. Improving the lineart colorization filter

Let’s first mention the progress made on the Black & White / Colorize lineart (smart-coloring) filter that had appeared at the time of the 2.0 G’MIC release.
This filter is basically a lineart colorization assistant which was developed in collaboration with David. It tries to automatically generate a colorization layer for a given lineart, from the analysis of the contours and the geometry of that lineart. Following David‘s suggestions, we were able to add a new colorization mode, named “Autoclean”. The idea is to try to automatically “clean” a coloring layer (made roughly by the user) provided in addition to the lineart layer, using the same geometric analysis as for the previous colorization modes.
The use of this new mode is illustrated below, where a given lineart (left) has been colorized approximately by the user. From the two layers (lineart + color layer), our “Autoclean” algorithm generates an image (right), where the colors do not overflow the lineart contours (even for “virtual” contours that are not closed). The result is not always perfect, but it nevertheless reduces the time spent in the tedious process of colorization.

Gmic_autoclean
Fig. 2.1: The new “Autoclean” mode of the lineart colorization filter can automatically “clean” a rough colorization layer.

Note that this filter is also equipped with a new hatch detection module, which makes it possible to avoid generating too many small areas when using the previously available random colorization mode, particularly when the input lineart contains a large number of hatches (see figure below).

Gmic_hatch_detect
Fig. 2.2: The new hatching detection module limits the number of small colored areas generated by the automatic random coloring mode.

2.2. Color equalizer in HSI, HSL and HSV spaces

More recently, David suggested the idea of a filter to separately vary the hue and saturation of colors having certain levels of luminosity. The underlying idea is to give the artist the ability to draw or paint digitally using only grayscale, then colorize his masterpiece afterwards by re-assigning specific colors to the different gray values of the image. The obtained result has of course a limited color range, but the overall color mood is already in place. The artist only has to retouch the colors locally rather than having to colorize the entire painting by hand.
The figure below illustrates the use of this new filter Colors/Equalize HSI/HSL/HSV available in the G’MIC plugin : each category of values can be finely adjusted, resulting in preliminary colorizations of black and white paintings.

Equalize HSI1
Equalize HSI2
Equalize HSI3
Fig. 2.3: Equalization in HSI/HSL/HSV colorspaces allows to easily set the global color mood for B&W paintings.

Note that the effect is equivalent to applying a color gradient to the different gray values of the image. This is something that could already be done quite easily in GIMP. But the main interest here is we can ensure that the pixel brightness remains unchanged during the color transformation, which is not an obvious property to preserve when using a gradient map.
What is nice about this filter is that it can apply to color photographs as well. You can change the hue and saturation of colors with a certain brightness, with an effect that can sometimes be surprising, like with the landscape photography shown below.

Equalize HSI4
Fig. 2.4: The filter “Equalize HSI/HSL/HSV” applied on a color photograph makes it possible to change the colorimetric environment, here in a rather extreme way.

2.3. Angular deformations

Another of David’s ideas was a random local deformation filter able to generate angular deformations. From an algorithmic point of view, this seemed relatively simple to achieve.
Note that once the implementation was done (in concise style: 12 lines!) and pushed to the official filter updates, David just had to press the “Update Filters” button of his G’MIC-Qt plugin, and the new effect Deformations/Crease was immediately available for testing. This is one of the practical sides of developing new filters with the G’MIC script language!

G'MIC Crease
Fig. 2.5: New effect “Crease” for local angular deformations.

However, I must admit I didn’t really have an idea of what this could be useful for in practice. But the good thing about cooperating with David is that HE knows exactly what he’s going to do with it! For instance, giving a crispy look to the edges of his comic panels, or improving the render of his alien death ray.

G'MIC Crease 2
G'MIC Crease 3
Fig. 2.6: Using the G’MIC “Crease” filter for two real cases of artistic creation.

3. Filters, filters, filters…

David Revoy is not the only user of G’MIC: we sometimes count up to 900 daily downloads from the main project website. So it happens, of course, that other enthusiastic users inspire new effects, especially during the lovely discussions that take place on our forum, kindly hosted by the PIXLS.US community.

3.1. Bring out the details without creating “halos”

Many photographers will tell you that it is not always easy to enhance the details of a digital photograph without creating nasty artifacts that must often be masked manually afterwards. Conventional contrast enhancement algorithms are most often based on increasing the local variance of pixel lightness, or on equalizing local histograms. Unfortunately, these operations generally consider neighborhoods with a fixed size and geometry, where each pixel of a neighborhood carries the same weight in the statistical calculations of the algorithm.
This is simpler and faster, but from a qualitative point of view it is not a great idea: we often get “halos” around contours that were already strongly contrasted in the image. This classic phenomenon is illustrated below by applying the Unsharp mask filter (the one present by default in GIMP) to part of a landscape image: an undesirable “halo” appears at the boundary between the mountain and the sky (particularly visible at full resolution).

G'MIC details filters
Fig. 3.1: Unwanted “halo” effects often occur with conventional contrast enhancement filters.
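
The overshoot behind these halos is easy to reproduce in one dimension. A small self-contained Python sketch of a generic unsharp mask (not GIMP’s exact implementation) applied to a step edge:

```python
# A 1-D step edge, box-blurred, then unsharp-masked.
signal = [0.0] * 8 + [1.0] * 8

def box_blur(s, radius=2):
    """Simple moving average, shrinking the window at the borders."""
    out = []
    for i in range(len(s)):
        win = s[max(0, i - radius): i + radius + 1]
        out.append(sum(win) / len(win))
    return out

amount = 1.5
blurred = box_blur(signal)
# Unsharp mask: add back an amplified copy of the high-frequency residual.
sharpened = [v + amount * (v - b) for v, b in zip(signal, blurred)]
print(min(sharpened), max(sharpened))  # roughly -0.6 and 1.6
```

The sharpened signal leaves the original [0, 1] range on both sides of the edge; once clamped back to the displayable range, those overshoots are precisely the dark and bright halos around a contrasted contour.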

The challenge for detail enhancement algorithms is to analyze the geometry of local image structures more finely, so as to use geometry-adaptive local weights for each pixel of a given neighborhood. Put simply, we want to create anisotropic versions of the usual enhancement methods, steered by the edges detected in the image.
Following this logic, we recently added two new G’MIC filters, Details/Magic details and Details/Equalize local histograms, which try to better take the geometric content of the image into account for local detail enhancement (e.g. by using the bilateral filter).

G'MIC magic details
G'MIC equalize local histograms
G'MIC equalize local histograms
Fig. 3.2: The new G’MIC detail enhancement filters.

Thus, applying the new G’MIC local histogram equalization to the landscape image shown before gives something slightly different: a result with more contrast in both geometric details and colors, and reduced halos.

G'MIC magic details
G'MIC magic details
Fig. 3.3: Difference in results between the standard GIMP Unsharp Mask filter and G’MIC’s local histogram equalization, for detail enhancement.

3.2. Different types of image deformations

New filters applying geometric deformations to images are added to G’MIC on a regular basis, and this new major version 2.2 therefore offers a bunch of new deformation filters.
Let’s start with Deformations/Spherize, a filter which locally distorts an image to give the impression that it is projected onto a 3D sphere or ellipsoid. This is the perfect filter to turn your obnoxious office colleague into a Mr. Potato Head!

G'MIC spherize
G'MIC spherize
Fig .3.4: Two examples of 3D spherical deformations obtained with the G’MIC “Spherize” filter.

The filter Deformations/Square to circle implements the direct and inverse transformations from a square (or rectangular) domain to a disk (as mathematically described on this page), which makes it possible to generate this type of deformation.

G'MIC square to circle
Fig. 3.5: Direct and inverse transformations from a square domain to a disk.
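
Such square-to-disc transformations have simple closed forms. As a sketch, here is one classical pair, the elliptical grid mapping and its inverse; the referenced page describes several such mappings, and the filter’s exact formulas are not spelled out in this article.

```python
import math

def square_to_disc(x, y):
    """Elliptical grid mapping of the square [-1, 1]^2 onto the unit disc."""
    return (x * math.sqrt(1.0 - y * y / 2.0),
            y * math.sqrt(1.0 - x * x / 2.0))

def disc_to_square(u, v):
    """Closed-form inverse of the mapping above."""
    def s(a):                      # guard tiny negative values from rounding
        return math.sqrt(max(0.0, a))
    t = u * u - v * v
    r = 2.0 * math.sqrt(2.0)
    x = 0.5 * s(2.0 + t + r * u) - 0.5 * s(2.0 + t - r * u)
    y = 0.5 * s(2.0 - t + r * v) - 0.5 * s(2.0 - t - r * v)
    return x, y
```

Applying the forward mapping to pixel coordinates warps a square image into a disc; the inverse mapping undoes it (corners of the square land exactly on the circle).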

The effect Degradations/Streak replaces an image area masked by the user (filled with a constant color) with one or more copies of a neighboring area. It works much like the GIMP clone tool, but saves the user from having to fill the entire mask manually.

G'MIC streak
Fig. 3.6: The “Streak” filter clones parts of the image into a user-defined color mask.

3.3. Artistic Abstractions

You might say that image deformations are nice, but sometimes one wants to transform an image more radically. So let’s introduce the new effects that turn an image into a more abstract version of itself (simplification and re-rendering). What these filters have in common is an analysis of the local image geometry, followed by a step of image synthesis.

For example, the G’MIC filter Contours/Super-pixels locally gathers image pixels of similar color to form a partitioned image, like a puzzle, with geometric shapes that stick to the contours. This partition is obtained with the SLIC method (Simple Linear Iterative Clustering), a classic image partitioning algorithm which has the advantage of being relatively fast to compute.

G'MIC super pixels 1
G'MIC super pixels 2
Fig. 3.7: Decomposition of an image into super-pixels by the Simple Linear Iterative Clustering (SLIC) algorithm.
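
For the curious, the SLIC idea fits in a few lines: k-means clustering in a joint (position, color) space, with each cluster restricted to a local search window. A toy single-channel Python sketch of that idea (the real algorithm works in CIELAB color and includes more refinements):

```python
import math

def slic(image, w, h, k, n_iter=5, m=10.0):
    """Minimal single-channel SLIC sketch: k-means in (x, y, value) space
    with a spatial search window of ~2S around each cluster center.
    `image` is a flat list of w*h gray values."""
    S = max(1, int(math.sqrt(w * h / k)))            # grid step between seeds
    centers = [[x + S // 2, y + S // 2, image[(y + S // 2) * w + x + S // 2]]
               for y in range(0, h - S // 2, S)
               for x in range(0, w - S // 2, S)]
    labels = [0] * (w * h)
    for _ in range(n_iter):
        best = [float("inf")] * (w * h)
        for ci, (cx, cy, cv) in enumerate(centers):
            for y in range(max(0, int(cy) - 2 * S), min(h, int(cy) + 2 * S)):
                for x in range(max(0, int(cx) - 2 * S), min(w, int(cx) + 2 * S)):
                    i = y * w + x
                    # Combined metric: color distance + scaled spatial distance.
                    d = math.hypot(image[i] - cv,
                                   math.hypot(x - cx, y - cy) * m / S)
                    if d < best[i]:
                        best[i], labels[i] = d, ci
        for ci in range(len(centers)):                # recenter each cluster
            pts = [i for i in range(w * h) if labels[i] == ci]
            if pts:
                centers[ci] = [sum(i % w for i in pts) / len(pts),
                               sum(i // w for i in pts) / len(pts),
                               sum(image[i] for i in pts) / len(pts)]
    return labels
```

The locality constraint (the 2S window) is what makes SLIC fast and what gives super-pixels their roughly regular, contour-hugging shapes.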

The filter Artistic/Linify tries to redraw an input image by superimposing semi-transparent colored lines on an initially white canvas, as shown in the figure below. This effect is a re-implementation of the clever algorithm proposed on the site http://linify.me (originally implemented in JavaScript).

G'MIC linify 1
G'MIC linify 2
Fig. 3.8: The “Linify” effect tries to redraw an image by superimposing only semi-transparent colored lines on a white canvas.
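
A greedy strategy of this kind can be sketched in a few lines of Python: repeatedly propose a random stroke and keep it only if it brings the canvas closer to the target. The sketch below uses axis-aligned strokes and an arbitrary opacity for brevity; the actual algorithm rasterizes lines in arbitrary directions.

```python
import random

def err(canvas, target):
    """Sum of squared differences between canvas and target."""
    return sum((c - t) ** 2 for c, t in zip(canvas, target))

def linify(target, w, h, trials=500, alpha=0.4, seed=0):
    """Greedy line superimposition: start from a white canvas and keep each
    random semi-transparent dark stroke only if it reduces the error."""
    rng = random.Random(seed)
    canvas = [1.0] * (w * h)
    e = err(canvas, target)
    for _ in range(trials):
        if rng.random() < 0.5:                      # horizontal stroke
            y = rng.randrange(h)
            idx = [y * w + x for x in range(w)]
        else:                                       # vertical stroke
            x = rng.randrange(w)
            idx = [y * w + x for y in range(h)]
        trial = canvas[:]
        for i in idx:                               # composite the dark stroke
            trial[i] *= (1.0 - alpha)
        e2 = err(trial, target)
        if e2 < e:                                  # keep only improvements
            canvas, e = trial, e2
    return canvas, e
```

With enough trials and real line rasterization, the accepted strokes progressively cross-hatch the darker regions of the target, which is the visual signature of the effect.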

The effect Artistic/Quadtree variations first decomposes an image into a quadtree, then re-synthesizes it by drawing oriented, plain ellipses on a canvas, one ellipse per quadtree leaf. This produces a rather interesting “painting” effect. With more complex shapes, even more attractive renderings could likely be synthesized. Surely an idea to keep in mind for the next filter update 🙂

G'MIC quadtree 1
G'MIC quadtree 2
Fig. 3.9: Decomposing an image as a quadtree allows re-synthesizing it by superimposing only plain colored ellipses.
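
The decomposition step amounts to a recursive split driven by local contrast. A toy Python version, using the value spread as the split criterion (an assumption for illustration; the filter’s actual criterion may differ):

```python
def quadtree_leaves(img, x, y, size, threshold=16.0):
    """Recursively split a size x size square while its value spread exceeds
    `threshold`; return the leaves as (x, y, size, mean) tuples.
    `img` is a list of rows; `size` must be a power of two."""
    vals = [img[j][i] for j in range(y, y + size) for i in range(x, x + size)]
    mean = sum(vals) / len(vals)
    if size == 1 or max(vals) - min(vals) <= threshold:
        return [(x, y, size, mean)]          # homogeneous enough: a leaf
    half = size // 2
    leaves = []
    for dy in (0, half):                      # otherwise, recurse on quadrants
        for dx in (0, half):
            leaves += quadtree_leaves(img, x + dx, y + dy, half, threshold)
    return leaves
```

The re-synthesis step then draws one plain ellipse per leaf, sized to the leaf’s square and filled with its mean color, so detailed regions get many small ellipses and flat regions a few large ones.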

3.4. “Are there any more?”

And now that you have processed so many beautiful pictures, why not arrange them into a superb photo montage? This is precisely the role of the filter Arrays & tiles/Drawn montage, which lets you create a juxtaposition of photographs very quickly, with arbitrary shapes.
The idea is to provide the filter with a colored template in addition to the series of photographs (Fig. 3.10a), and then to associate each photograph with a different color of the template (Fig. 3.10b). The arrangement is then done automatically by G’MIC, which resizes the images so that they appear best framed within the shapes of the given montage template (Fig. 3.10c).
We made a video tutorial illustrating the use of this filter.

G'MIC drawn montage
Fig. 3.10a: Step 1: The user draws the desired organization of the montage with shapes of different colors.

G'MIC drawn montage
Fig. 3.10b: Step 2: G’MIC’s “Drawn Montage” filter allows you to associate a photograph for each template color.

G'MIC drawn montage
Fig. 3.10c: Step 3: The photo montage is then automatically synthetized by the filter.
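
The "best framed" resizing step presumably amounts to a scale-to-cover fit followed by a centered crop. A small Python sketch of that geometry (our reading of the behavior, not the filter’s actual code):

```python
def cover_fit(photo_w, photo_h, cell_w, cell_h):
    """Scale the photo so it fully covers the template cell, then crop the
    excess symmetrically. Returns (scale, crop_x, crop_y), where the crops
    are the margins removed on each side after scaling."""
    scale = max(cell_w / photo_w, cell_h / photo_h)  # cover, never letterbox
    sw, sh = photo_w * scale, photo_h * scale
    crop_x = (sw - cell_w) / 2
    crop_y = (sh - cell_h) / 2
    return scale, crop_x, crop_y
```

For instance, fitting a 400×300 photo into a 100×100 cell scales by the height (1/3) and crops the scaled width symmetrically.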

But let’s go back to more essential questions: have you ever needed to draw gears? No?! That’s quite normal; it’s not something we do every day! But just in case, the new G’MIC filter Rendering/Gear will be glad to help, with settings to adjust the gear’s size, colors and number of teeth. Perfectly useless, therefore totally indispensable!

G'MIC drawn montage
Fig. 3.11: The Gear filter, running at full speed.

Need a satin texture right now? No?! Too bad, the filter Patterns/Satin could have been a great help!

G'MIC satin
Fig. 3.12: G’MIC’s satin filter will make your life more silky.

And finally, to wrap up this series of “effects that are useless until you need them”, note the appearance of the new filter Degradations/JPEG artifacts, which simulates JPEG compression artifacts due to the quantization of the DCT coefficients encoding 8×8 image blocks (yes, you would get almost the same result by saving your image as a JPEG file with the desired quality).

Simulate JPEG Artifacts
Simulate JPEG Artifacts
Fig. 3.13: The “JPEG artifacts” filter simulates the image degradation due to 8×8 block DCT compression.
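
The mechanism being simulated is the standard JPEG pipeline on 8×8 blocks: a 2-D DCT followed by coefficient quantization. A compact self-contained Python sketch, using the orthonormal DCT and a single uniform quantization step instead of JPEG’s per-frequency tables:

```python
import math

N = 8

def c(k):
    """Orthonormal DCT scaling factor."""
    return math.sqrt(1.0 / N) if k == 0 else math.sqrt(2.0 / N)

def dct2(block):
    """Orthonormal 2-D type-II DCT of an 8x8 block."""
    return [[c(u) * c(v) * sum(block[y][x]
                * math.cos((2 * x + 1) * u * math.pi / (2 * N))
                * math.cos((2 * y + 1) * v * math.pi / (2 * N))
                for x in range(N) for y in range(N))
             for u in range(N)] for v in range(N)]

def idct2(coefs):
    """Inverse transform (2-D type-III DCT)."""
    return [[sum(c(u) * c(v) * coefs[v][u]
                * math.cos((2 * x + 1) * u * math.pi / (2 * N))
                * math.cos((2 * y + 1) * v * math.pi / (2 * N))
                for u in range(N) for v in range(N))
             for x in range(N)] for y in range(N)]

def quantize(coefs, step):
    """Coarse uniform quantization: the source of JPEG's block artifacts."""
    return [[round(f / step) * step for f in row] for row in coefs]

block = [[(3 * x + 5 * y) % 64 for x in range(N)] for y in range(N)]
degraded = idct2(quantize(dct2(block), 40.0))
```

The larger the quantization step (i.e. the lower the JPEG quality), the more high-frequency coefficients collapse to zero, and the more visible the ringing and blockiness in the reconstructed block.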

4. Other notable improvements

This review of the new G’MIC filters should not overshadow the various improvements made “under the hood”, which are equally important even if they are less visible to the user.

4.1. A better G’MIC-Qt plugin interface

A big effort to clean up and restructure the G’MIC-Qt plugin code has been carried out, fixing many little inconsistencies in the GUI along the way. Let’s also mention, in no particular order, some interesting new features that have appeared in the plugin:

  • The ability to set a timeout when previewing computationally intensive filters.
  • Better management of the input/output parameters of each filter (with persistence, better menu location, and a reset button).
  • Maximizing the size of the preview area is now easier, its zoom level can be edited manually, and the interface language can be chosen independently of the system language, etc.

All these little things put together noticeably improve the user experience.

G'MIC Preferences
Fig. 4.1: Overview of the G’MIC-Qt plugin interface in its latest version 2.2.

4.2. Improvements in the G’MIC core

Even less visible, but just as important, many improvements have appeared in the G’MIC computational core and its associated G’MIC script language interpreter. Keep in mind that all the available filters are actually written as scripts in the G’MIC language, so each small improvement to the interpreter can benefit all filters at once. Without going too deep into the technical details, let us highlight these points:

  • A notable improvement of the syntax of the language itself, along with better performance in parsing (and therefore in script execution), all with a smaller memory footprint.
  • The G’MIC built-in mathematical expression evaluator also received various optimizations and new features, opening up even more possibilities for performing non-trivial operations at the pixel level.

  • Better support for raw video input/output (.yuv format), with support for the 4:2:2 and 4:4:4 formats in addition to 4:2:0, previously the only supported mode.

  • Finally, two new animations have been added to the G’MIC demos menu (which is displayed e.g. when invoking gmic without arguments from the command-line):

    • First, a 3D starfield animation:

    Starfield demo
    Fig.4.2: New 3D starfield animation added to the G’MIC demo menu.

    • Second, a playable 3D version of the “Tower of Hanoi”:

    Hanoi Demo
    Fig. 4.3: The playable 3D version of the “Tower of Hanoi”, available in G’MIC.

  • Let us also mention the introduction of the command tensors3d, dedicated to the 3D rendering of second-order tensor fields. In practice, it does not only serve to make you want to eat Smarties®! It can be used, for example, to visualize certain regions of MRI volumes of diffusion tensors:

Tensors3d
Fig. 4.4: G’MIC rendering of a 3D tensor field, with command tensors3d.
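
As a side note on the raw video support mentioned above: the three chroma-subsampling modes simply differ in the size of the two chroma planes, which a few lines of Python make concrete (planar 8-bit frames assumed):

```python
def yuv_frame_bytes(w, h, subsampling):
    """Bytes per planar 8-bit YUV frame: chroma planes are full size in
    4:4:4, halved horizontally in 4:2:2, halved both ways in 4:2:0."""
    chroma = {"4:4:4": w * h,
              "4:2:2": (w // 2) * h,
              "4:2:0": (w // 2) * (h // 2)}
    return w * h + 2 * chroma[subsampling]  # one luma + two chroma planes
```

So a 640×480 frame weighs 1.5×, 2× or 3× the luma plane size depending on the mode, which is exactly what a .yuv reader needs to know to slice the raw stream into frames.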

4.3. New design for G’MIC Online

To finish this tour, let us also mention the complete redesign of G’MIC Online during 2017, done by Christophe Couronne and Véronique Robert from the development department of the GREYC laboratory.
G’MIC Online is a web service that lets you apply a subset of G’MIC filters to your images directly in a web browser. The pages now have a responsive design, which makes them much more pleasant to use on mobile devices (smartphones and tablets). Shown below is a screenshot of this service running in Chrome on Android, on a 10″ tablet.

G'MICol
Fig. 4.5: New responsive design of the G’MIC Online web service, running here on a 10″ tablet.

5. Conclusion and perspectives

The overview of this new version 2.2 of G’MIC is now over.
One possible conclusion could be: “There are plenty of perspectives!”

G’MIC is a free project that can be considered mature: the first lines of code were written almost ten years ago, and today we have a good idea of the possibilities (and limits) of the beast. We hope to see more and more interest from FOSS users and developers, for example in integrating the generic G’MIC-Qt plugin into various software focused on image or video processing.

The possibility of using the G’MIC core under the more permissive CeCILL-C license can also be a source of interesting collaborations in the future (some companies have already approached us about this). While waiting for potential collaborations, we will do our best to continue developing G’MIC and feeding it with new filters and effects, following the suggestions of our enthusiastic users. A big thanks to them for their help and constant encouragement (the motivation to write code or articles past 11pm would not be the same without them!).

“Long live open-source image processing and artistic creation!”

February 23, 2018

PEEC Planetarium Show: "The Analemma Dilemma"

[Analemma by Giuseppe Donatiello via Wikimedia Commons] Dave and I are giving a planetarium show at PEEC tonight on the analemma.

I've been interested in the analemma for years and have written about it before, here on the blog and in the SJAA Ephemeris. But there were a lot of things I still didn't understand as well as I liked. When we signed up three months ago to give this talk, I had plenty of lead time to investigate further, uncovering lots of interesting details about the analemmas of other planets, the two factors that contribute to the Equation of Time, why some analemmas are figure-8s while others aren't, and the supposed "moon analemmas" that have appeared on the Astronomy Picture of the Day. I also added some new features to the analemma script I'd written years ago, and corresponded with an expert who'd written some great Equation of Time code for all the planets. It's been fun.

I'll write about some of what I learned when I get a chance, but meanwhile, people in the Los Alamos area can hear all about it tonight, at our PEEC show: The Analemma Dilemma, 7 pm tonight, Friday Feb 23, at the Nature Center, admission $6/adult, $4/child.

February 21, 2018

G'MIC 2.2


G'MIC 2.2

New features and filters!

The IMAGE team of the GREYC laboratory (UMR CNRS 6072, Caen, France) is pleased to announce the release of version 2.2 of G’MIC, its open-source, generic, and extensible framework for image processing. As in the past, we take this opportunity to look at the notable features added since the previous major release (2.0, last June).



Note 1: click on a picture to view a larger version. Note 2: This is a translation of an original article, in French, published on Linuxfr.

1. Context and recent evolutions

G’MIC is a free and open-source software project developed since August 2008 (and distributed under the CeCILL license) by folks in the IMAGE team at the GREYC, a French public research laboratory located in Caen and supervised by three institutions: the CNRS, the University of Caen, and the ENSICAEN engineering school. This team is made up of researchers and lecturers specialized in algorithms and mathematics for image processing. As one of the main developers of G’MIC, I wanted to sum up the work we’ve done on this software over the last few months.

G'MIC logo Fig. 1.1: The G’MIC project logo, and its cute little mascot “Gmicky” (designed by David Revoy).

G’MIC is multi-platform (GNU/Linux, macOS, Windows, …) and provides many ways of manipulating generic image data, i.e. still images or image sequences stored as 2D or 3D hyperspectral floating-point arrays (including usual color images). More than 950 different image processing functions are already available in the G’MIC framework, and this number can be extended through G’MIC’s scripting capabilities.

G'MIC plugin for GIMP Fig.1.2: The G’MIC-Qt plugin for GIMP, currently the most popular G’MIC interface.

Since the last major release, two important events have marked the project’s life:

1.1. Port of the G’MIC-Qt plugin to Krita

When we released version 2.0 of G’MIC a few months ago, we were happy to announce a complete rewrite (in Qt) of the plugin code for GIMP. An extra step has now been taken, since this plugin has been extended to fit into the open-source digital painting software Krita. This was made possible by the development work of Boudewijn Rempt (maintainer of Krita) and Sébastien Fourey (developer of the plugin). The G’MIC-Qt plugin is now available for Krita versions 3.3+ and, although it does not yet implement all the I/O functionality of its GIMP counterpart, the feedback we’ve had so far is rather positive. This new port replaces the old G’MIC plugin for Krita, which had not been maintained for some time. The good news for Krita users (and developers) is that they now have an up-to-date plugin whose code is shared with the one running in GIMP, and for which we will be able to ensure maintenance and further development. Note that this port required writing a source file host_krita.cpp (in C++) implementing the communication between the host software and the plugin, and it is reasonable to think that a similar effort would allow other programs to get their own version of the G’MIC plugin (and the 500 image filters that come with it!).

G'MIC for Krita Fig. 1.3: Overview of the G’MIC-Qt plugin running on Krita.

1.2. CeCILL-C, a more permissive license

Another major event concerns the new usage license: the CeCILL-C license (in the spirit of the LGPL) is now available for some components of the G’MIC framework. This license is more permissive than the previously offered CeCILL license (which is GPL-compatible) and is better suited to the distribution of software libraries. This license extension (now dual licensing) applies precisely to the core files of G’MIC, i.e. its C++ library libgmic. Thus, integrating the libgmic features (and therefore all G’MIC image filters) is now allowed in software not itself licensed under the GPL/CeCILL (including closed-source products). The source code of the G’MIC-Qt plugin, meanwhile, remains distributed under the CeCILL license alone (GPL-like).

2. Fruitful collaboration with David Revoy

If you’ve followed us for a while, you may have noticed that we very often refer to the work of illustrator David Revoy for his multiple contributions to G’MIC: mascot design, filter ideas, articles and video tutorials, tests of all kinds, etc. More generally, David is a major contributor to the world of free digital art, as much through the webcomic Pepper & Carrot he produces (distributed under the free CC-BY license) as through his suggestions and ongoing bug reports for the open-source software he uses. It therefore seemed quite natural to devote a special section to him in this article, summarizing the different ideas, contributions and experiments he has recently brought to G’MIC. A big thank you, David, for your availability, for sharing your ideas, and for all your work in general!

2.1. Improving the lineart colorization filter

Let’s first mention the progress made on the Black & White/Colorize lineart (smart-coloring) filter that appeared with the 2.0 release of G’MIC. This filter is basically a lineart colorization assistant, developed in collaboration with David, which tries to automatically generate a colorization layer for a given lineart from an analysis of the contours and geometry of that lineart. Following David’s suggestions, we added a new colorization mode named “Autoclean”. The idea is to automatically “clean” a coloring layer (made roughly by the user), provided in addition to the lineart layer, using the same geometric analysis as the previous colorization modes. The use of this new mode is illustrated below: a given lineart (left) has been colorized approximately by the user, and from the two layers (lineart + color), our “Autoclean” algorithm generates an image (right) where the colors do not overflow the lineart contours (even “virtual” contours that are not closed). The result is not always perfect, but it nevertheless reduces the time spent on the tedious colorization process.

Gmic_autoclean Fig. 2.1: The new “Autoclean” mode of the lineart colorization filter can automatically “clean” a rough colorization layer.
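
Ignoring the harder problem of closing "virtual" contours, the basic clean-up step can be sketched as a flood fill bounded by the lineart, followed by a majority vote over the rough colors found in each region. A toy Python version (our simplified reading of the idea, not the filter’s actual algorithm):

```python
from collections import deque

def autoclean(lineart, rough, w, h):
    """Flood-fill each region delimited by lineart pixels (truthy values in
    `lineart`), then paint the whole region with the most frequent rough
    color found inside it, so colors cannot cross the contours."""
    out = [None] * (w * h)      # lineart pixels stay None
    region = [-1] * (w * h)
    for start in range(w * h):
        if region[start] != -1 or lineart[start]:
            continue
        # BFS to collect one contour-bounded region of non-lineart pixels.
        queue, pixels = deque([start]), []
        region[start] = start
        while queue:
            i = queue.popleft()
            pixels.append(i)
            x, y = i % w, i // w
            for nx, ny in ((x - 1, y), (x + 1, y), (x, y - 1), (x, y + 1)):
                j = ny * w + nx
                if 0 <= nx < w and 0 <= ny < h and region[j] == -1 \
                        and not lineart[j]:
                    region[j] = start
                    queue.append(j)
        # Majority vote of the rough colors inside the region.
        votes = {}
        for i in pixels:
            votes[rough[i]] = votes.get(rough[i], 0) + 1
        color = max(votes, key=votes.get)
        for i in pixels:
            out[i] = color
    return out
```

On a tiny example with a vertical contour line, a stray wrongly-colored pixel on one side is overridden by the region’s dominant color, which is the "cleaning" behavior described above.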

Note that this filter is also equipped with a new hatch detection module, which makes it possible to avoid generating too many small areas when using the previously available random colorization mode, particularly when the input lineart contains a large number of hatches (see figure below).

Gmic_hatch_detect Fig. 2.2: The new hatching detection module limits the number of small colored areas generated by the automatic random coloring mode.

2.2. Color equalizer in HSI, HSL and HSV spaces

More recently, David suggested the idea of a filter to separately vary the hue and saturation of colors having certain levels of luminosity. The underlying idea is to give the artist the ability to draw or paint digitally using only grayscale, then colorize his masterpiece afterwards by re-assigning specific colors to the different gray values of the image. The obtained result has of course a limited color range, but the overall color mood is already in place. The artist only has to retouch the colors locally rather than having to colorize the entire painting by hand. The figure below illustrates the use of this new filter Colors/Equalize HSI/HSL/HSV available in the G’MIC plugin : each category of values can be finely adjusted, resulting in preliminary colorizations of black and white paintings.

Equalize HSI1 Equalize HSI2 Equalize HSI3 Fig. 2.3: Equalization in HSI/HSL/HSV colorspaces allows to easily set the global color mood for B&W paintings.

Note that the effect is equivalent to applying a color gradient to the different gray values of the image. This is something that could already be done quite easily in GIMP. But the main interest here is we can ensure that the pixel brightness remains unchanged during the color transformation, which is not an obvious property to preserve when using a gradient map. What is nice about this filter is that it can apply to color photographs as well. You can change the hue and saturation of colors with a certain brightness, with an effect that can sometimes be surprising, like with the landscape photography shown below.

Equalize HSI4 Fig. 2.4: The filter “Equalize HSI/HSL/HSV” applied on a color photograph makes it possible to change the colorimetric environment, here in a rather extreme way.

2.3. Angular deformations

Another one of the David‘s ideas concerned the development of a random local deformation filter, having the ability to generate angular deformations. From an algorithmic point of view, it seemed relatively simple to achieve. Note that once the implementation has been done (in concise style: 12 lines!) and pushed into the official filter updates, David just had to press the “Update Filters“ button of his G’MIC-Qt plug-in, and the new effect Deformations/Crease was there immediately for testing. This is one of the practical side of developing new filters using the G’MIC script language!

G'MIC Crease Fig. 2.5: New effect “Crease” for local angular deformations.

However, I must admit I didn’t really have an idea on what this could be useful for in practice. But the good thing about cooperating with David is that HE knows exactly what he’s going to do with it! For instance, to give a crispy look to the edges of his comics, or for improving the render of his alien death ray.

G'MIC Crease 2 G'MIC Crease 3 Fig. 2.6: Using the G’MIC “Crease” filter for two real cases of artistic creation.

3. Filters, filters, filters…

David Revoy is not the only user of G’MIC: we sometimes count up to 900 daily downloads from the main project website. So it happens, of course, that other enthusiastic users inspire us new effects, especially during those lovely discussions that take place on our forum, kindly made available by the PIXLS.US community.

3.1. Bring out the details without creating “halos”

Many photographers will tell you that it is not always easy to enhance the details in digital photographs without creating naughty artifacts that often have to be masked manually afterwards. Conventional contrast enhancement algorithms are most often based on increasing the local variance of pixel lightness, or on the equalization of their local histograms. Unfortunately, these operations are generally done by considering neighborhoods with a fixed size and geometry, where each pixel of a neighborhood is always considered with the same weight in the statistical calculations related to these algorithms. It is simpler and faster, but from a qualitative point of view it is not an excellent idea: we often get “halos” around contours that were already very contrasted in the image. This classic phenomenon is illustrated below with the application of the Unsharp mask filter (the one present by default in GIMP) on a part of a landscape image. This generates an undesirable “halo” effect at the frontier between the mountain and the sky (this is particularly visible in full resolution images).

G'MIC details filters Fig. 3.1: Unwanted “halo” effects often occur with conventional contrast enhancement filters.

The challenge of the detail enhancement algorithms is to be able to analyze the geometry of the local image structures in a more fine way, to take into account geometry-adaptive local weights for each pixel of a given neighborhood. To make it simple, we want to create anisotropic versions of the usual enhancement methods, orienting them by the edges detected in the images. Following this logic, we have added two new G’MIC filters recently, namely Details/Magic details and Details/Equalize local histograms, which try to better take the geometric content of the image into account for local detail enhancement (e.g. using the bilateral filter).

G'MIC magic details G'MIC equalize local histograms G'MIC equalize local histograms Fig. 3.2: The new G’MIC detail enhancement filters.

Thus, the application of the new G’MIC local histogram equalization on the landscape image shown before gives something slightly different : a more contrasted result both in geometric details and colors, and reduced halos.

G'MIC magic details G'MIC magic details Fig. 3.3: Differences of results between the standard GIMP Unsharp Mask filter and the local histogram equalization of G’MIC, for details enhancement.

3.2. Different types of image deformations

New filters to apply geometric deformations on images are added to G’MIC on a regular basis, and this new major version 2.2 offers therefore a bunch of new deformation filters. So let’s start with Deformations/Spherize, a filter which allows to locally distort an image to give the impression that it is projected on a 3D sphere or ellipsoid. This is the perfect filter to turn your obnoxious office colleague into a Mr. Potato Head!

G'MIC spherize G'MIC spherize Fig .3.4: Two examples of 3D spherical deformations obtained with the G’MIC “Spherize” filter.

On the other hand, the filter Deformations/Square to circle implements the direct and inverse transformations from a square domain (or rectangle) to a disk (as mathematically described on this page), which makes it possible to generate this type of deformations.

G'MIC square to circle Fig. 3.5: Direct and inverse transformations from a square domain to a disk.

The effect Degradations/Streak replaces an image area masked by the user (filled with a constant color) with one or more copies of a neighboring area. It works mainly as the GIMP clone tool but prevents the user to fill the entire mask manually.

G'MIC streak Fig. 3.6: The “Streak” filter clones parts of the image into a user-defined color mask.

3.3. Artistic Abstractions

You might say that image deformations are nice, but sometimes you want to transform an image in a more radical way. Let’s introduce now the new effects that turn an image into a more abstract version (simplification and re-rendering). These filters have in common the analysis of the local image geometry, followed by a step of image synthesis.

For example, G’MIC filter Contours/Super-pixels locally gathers the image pixels with the same color to form a partitioned image, like a puzzle, with geometric shapes that stick to the contours. This partition is obtained using the SLIC method (Simple Linear Iterative Clustering), a classic image partitioning algorithm, which has the advantage of being relatively fast to compute.

G'MIC super pixels 1 G'MIC super pixels 2 Fig. 3.7: Decomposition of an image in super-pixels by the Simple Linear Iterative Clustering algorithm (SLIC).

The filter Artistic/Linify tries to redraw an input image by superimposing semi-transparent colored lines on an initially white canvas, as shown in the figure below. This effect is the re-implementation of the smart algorithm initially proposed on the site http://linify.me (initially implemented in JavaScript).

G'MIC linify 1 G'MIC linify 2 Fig. 3.8: The “Linify” effect tries to redraw an image by superimposing only semi-transparent colored lines on a white canvas.

The effect Artistic/Quadtree variations first decomposes an image as a quadtree, then re-synthesize it by drawing oriented and plain ellipses on a canvas, one ellipse for each quadtree leaf. This renders a rather interesting “painting” effect. It is likely that with more complex shapes, even more attractive renderings could be synthesized. Surely an idea to keep in mind for the next filters update :)

G'MIC quadtree 1 G'MIC quadtree 2 Fig. 3.9: Decomposing an image as a quadtree allows to re-synthesize it by superimposing only plain colored ellipses.

3.4. “Are there any more?”

And now that you have processed so many beautiful pictures, why not arrange them in the form of a superb photo montage? This is precisely the role of the filter Arrays & tiles/Drawn montage, which allows to create a juxtaposition of photographs very quickly, for any kind of shapes. The idea is to provide the filter with a colored template in addition to the serie of photographs (Fig.3.10a), and then to associate each photograph with a different color of the template (Fig.3.10b). Next, the arrangement is done automatically by G’MIC, by resizing the images so that they appear best framed within the shapes defined in the given montage template (Fig.3.10c). We made a video tutorial illustrating the use of this specific filter.

G'MIC drawn montage Fig. 3.10a: Step 1: The user draws the desired organization of the montage with shapes of different colors.
G'MIC drawn montage Fig. 3.10b: Step 2: G’MIC’s “Drawn Montage” filter allows you to associate a photograph for each template color.
G'MIC drawn montage Fig. 3.10c: Step 3: The photo montage is then automatically synthesized by the filter.
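The color-to-region association behind this workflow can be sketched in a few lines of Python (a hypothetical illustration of the idea, not the filter’s implementation): each distinct template color defines a region, and that region’s bounding box is where the corresponding photograph would be resized and pasted.

```python
# Toy sketch of the "Drawn montage" idea: map each template color to the
# bounding box of its region, where one photo would then be placed.
def montage_layout(template):
    """template: 2D grid of color labels. Return, for each color, the
    bounding box (x0, y0, x1, y1) covering all pixels of that color."""
    boxes = {}
    for y, row in enumerate(template):
        for x, color in enumerate(row):
            x0, y0, x1, y1 = boxes.get(color, (x, y, x, y))
            boxes[color] = (min(x0, x), min(y0, y), max(x1, x), max(y1, y))
    return boxes
```

With a template such as two red columns next to a green one, each color yields one placement rectangle, ready to receive a resized photograph.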

But let’s go back to more essential questions: have you ever needed to draw gears? No?! It’s quite normal, that’s not something we do everyday! But just in case, the new G’MIC filter Rendering/Gear will be glad to help, with different settings to adjust gear size, colors and number of teeth. Perfectly useless, so totally indispensable!

G'MIC gear Fig. 3.11: The Gear filter, running at full speed.

Need a satin texture right now? No?! Too bad, the filter Patterns/Satin could have been of great help!

G'MIC satin Fig. 3.12: G’MIC’s satin filter will make your life more silky.

And finally, to wrap up this series of “effects that are useless until you need them”, note the appearance of the new filter Degradations/JPEG artifacts, which simulates the compression artifacts due to the quantization of the DCT coefficients encoding 8×8 image blocks (yes, you would get almost the same result by saving your image as a JPEG file with the desired quality).

Simulate JPEG Artifacts Simulate JPEG Artifacts Fig. 3.13: The “JPEG artifacts” filter simulates the image degradation due to 8×8 block DCT compression.
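The principle being simulated can be sketched in Python (my own minimal illustration; real JPEG additionally uses perceptual quantization tables, chroma subsampling and entropy coding): an 8×8 block is transformed by a 2D DCT, the coefficients are quantized, and the block is rebuilt by the inverse DCT, which is where the familiar blocky artifacts come from.

```python
# Minimal sketch of 8x8 DCT quantization, the mechanism behind JPEG
# block artifacts (uniform quantization step instead of JPEG's tables).
import math

N = 8

def _c(u):
    """Orthonormal DCT scale factor."""
    return math.sqrt(1 / N) if u == 0 else math.sqrt(2 / N)

def dct2(block):
    """2D orthonormal DCT-II of an 8x8 block."""
    out = [[0.0] * N for _ in range(N)]
    for u in range(N):
        for v in range(N):
            s = sum(block[y][x]
                    * math.cos((2 * x + 1) * u * math.pi / (2 * N))
                    * math.cos((2 * y + 1) * v * math.pi / (2 * N))
                    for y in range(N) for x in range(N))
            out[v][u] = _c(u) * _c(v) * s
    return out

def idct2(coeffs):
    """Inverse of dct2."""
    out = [[0.0] * N for _ in range(N)]
    for y in range(N):
        for x in range(N):
            out[y][x] = sum(_c(u) * _c(v) * coeffs[v][u]
                            * math.cos((2 * x + 1) * u * math.pi / (2 * N))
                            * math.cos((2 * y + 1) * v * math.pi / (2 * N))
                            for u in range(N) for v in range(N))
    return out

def jpeg_artifacts(block, q=20.0):
    """Quantize the DCT coefficients with a uniform step q; a larger q
    (lower "quality") means stronger artifacts in the rebuilt block."""
    coeffs = dct2(block)
    quant = [[round(c / q) * q for c in row] for row in coeffs]
    return idct2(quant)
```

With a tiny quantization step the block survives almost unchanged; with a coarse one, high-frequency coefficients are zeroed out and the reconstruction visibly deviates from the original, exactly the degradation the filter reproduces.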

4. Other notable improvements

This review of the new G’MIC filters should not overshadow the various improvements made “under the hood”, which are equally important even if they are less visible to the user in practice.

4.1. A better G’MIC-Qt plugin interface

A big effort of cleaning and restructuring the G’MIC-Qt plugin code has been carried out, and a lot of little inconsistencies in the GUI have been fixed. Let’s also mention, in no particular order, some interesting new features that have appeared in the plugin:

  • The ability to set a timeout when previewing computationally intensive filters.
  • A better management of the input-output parameters for each filter (with persistence, better menu location, and a reset button).
  • Maximizing the size of the preview area is now easier. Its zoom level can be edited manually, and the language of the interface can be chosen independently of the language used by the system, etc.

Taken together, all these little things improve the overall user experience.

G'MIC Preferences Fig. 4.1: Overview of the G’MIC-Qt plugin interface in its latest version 2.2.

4.2. Improvements in the G’MIC core

Even less visible, but just as important, many improvements have appeared in the G’MIC computational core and its associated script language interpreter. You have to know that all the available filters are actually written as scripts in the G’MIC language, so each small improvement brought to the interpreter can benefit all filters at once. Without going too much into the technical details of these internal improvements, we can highlight the following points:

  • A notable improvement in the syntax of the language itself, which goes along with better performance when parsing scripts (and therefore when executing them), all with a smaller memory footprint.
  • The G’MIC built-in mathematical expression evaluator has also received various optimizations and new features, opening up even more possibilities for performing non-trivial operations at the pixel level.

  • Better support of raw video input/output (.yuv format), with support for 4:2:2 and 4:4:4 formats, in addition to 4:2:0, which was the only mode supported before.

  • Two new animations have been added to the G’MIC demos menu (which is displayed, e.g., when invoking gmic without arguments from the command line):

    • First, a 3D starfield animation:
    Starfield demo Fig. 4.2: New 3D starfield animation added to the G’MIC demo menu.
    • Second, a playable 3D version of the “Tower of Hanoi”:
    Hanoi Demo Fig. 4.3: The playable 3D version of the “Tower of Hanoi”, available in G’MIC.
  • Finally, let us mention the introduction of the command tensors3d, dedicated to the 3D representation of second-order tensor fields. In practice, it does not only serve to make you want to eat Smarties®! It can be used, for example, to visualize certain regions of MRI volumes of diffusion tensors:

    Tensors3d Fig. 4.4: G’MIC rendering of a 3D tensor field, with command tensors3d.

4.3. New design for G’MIC Online

To finish this tour, let us also mention the complete redesign of G’MIC Online during the year 2017, done by Christophe Couronne and Véronique Robert from the development department of the GREYC laboratory. G’MIC Online is a web service allowing you to apply a subset of G’MIC filters on your images, directly inside a web browser. These web pages now have a responsive design, which makes them more enjoyable than before on mobile devices (smartphones and tablets). Shown below is a screenshot of this service running in Chrome/Android, on a 10’’ tablet.

G'MICol Fig. 4.5: New responsive design of the G’MIC Online web service, running here on a 10” tablet.

5. Conclusion and perspectives

The overview of this new version 2.2 of G’MIC is now over. One possible conclusion could be: “There are plenty of perspectives!”

G’MIC is a free project that can be considered as mature: the first lines of code were written almost ten years ago, and today we have a good idea of the possibilities (and limits) of the beast. We hope to see more and more interest from FOSS users and developers, for example for integrating the G’MIC-Qt generic plugin in various software focused on image or video processing.

The possibility of using the G’MIC core under the more permissive CeCILL-C license can also be a source of interesting collaborations in the future (some companies have already approached us about this). While waiting for potential collaborations, we will do our best to continue developing G’MIC and feeding it with new filters and effects, according to the suggestions of our enthusiastic users. A big thanks to them for their help and constant encouragement (the motivation to write code or articles past 11pm would not be the same without them!).

“Long live open-source image processing and artistic creation!”

February 20, 2018

CSS Grid

This would totally have been a tweet or a facebook post, but I’ve decided to invest a little more energy and post these on my blog, accessible to everybody. Getting old, I guess. We’re all mortal, and the web isn’t open on its own.

In the past few days I’ve been learning about CSS grid while redesigning Flatpak and Flathub sites (still coming). And with the knowledge of really grokking only a fraction of it, I’m in love. So far I really dig:

  • Graceful fallback
  • Layout fully controlled by the theme
  • Controlled whitespace (meaning the layout won’t fall apart when you add or remove some whitespace)
  • Reasonable code legibility
  • Responsive layouts even without media queries
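Those last two points can be shown with a tiny sketch of my own (not the actual Flatpak/Flathub stylesheet): a card grid whose columns appear and disappear with the viewport width, without a single media query.

```css
/* Hypothetical card grid: tracks are at least 200px wide, share the
   leftover space equally, and wrap automatically as the viewport
   narrows -- responsive behavior with no media queries. */
.cards {
  display: grid;
  grid-template-columns: repeat(auto-fill, minmax(200px, 1fr));
  gap: 1rem; /* whitespace between tracks, controlled in one place */
}
```

Browsers without grid support simply ignore these declarations and fall back to normal flow, which is the graceful fallback mentioned above.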

Whitespace on the Web

Things are sized and defined very differently, and getting to grips with implicit sizing will take some time, but CSS grid seems to have answers to all the problems I’ve run into so far. Do note that I never got super fluent with flexbox, either.

I love the few video bites that Jen Simmons publishes periodically. The only downside to all this is seeing the mess of legacy grid systems I still have on numerous websites, like this one.