## July 26, 2017

A quick update, as we've touched upon Evince recently.

I mentioned that we switched from using external tools for decompression to using libarchive. That's not the whole truth: we switched to libarchive for CBZ, CB7 and the infamous CBT, but used a copied-and-pasted version of unarr to support RAR files, as libarchive's RAR support lacks some features we need.

We hope to eventually remove the internal copy of unarr, but, as a stop-gap, that allowed us to start supporting CBR comics out of the box, and it's always a good thing when you have one less non-free package to grab from somewhere to access your media.
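libarchive does the real work in Evince; just to illustrate how little magic there is in these formats (CBZ is a renamed .zip, CBT a renamed .tar), here is a hypothetical Python sketch using only the standard library. The function name and extension list are mine, not Evince's code:

```python
import tarfile
import zipfile

IMAGE_EXTENSIONS = (".png", ".jpg", ".jpeg", ".gif", ".webp")

def comic_page_names(path):
    """Return the image entries of a CBZ or CBT archive, sorted by name."""
    if zipfile.is_zipfile(path):          # CBZ is just a renamed .zip
        with zipfile.ZipFile(path) as archive:
            names = archive.namelist()
    elif tarfile.is_tarfile(path):        # CBT is just a renamed .tar
        with tarfile.open(path) as archive:
            names = archive.getnames()
    else:
        raise ValueError("not a CBZ/CBT archive: %s" % path)
    # Comic readers show pages in filename order
    return sorted(n for n in names
                  if n.lower().endswith(IMAGE_EXTENSIONS))
```

RAR (CBR) and 7z (CB7) need extra libraries, which is exactly where unarr and libarchive come in.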

The second new format is really two formats, from either side of the 2-digit-year divide: PostScript-based Adobe Illustrator and PDF-based Adobe Illustrator. Evince now declares support for "the format" if both backends are built and supported. It only took 12 years, and somebody stumbling upon the feature request while doing bug triage. The nooks and crannies of free software where the easy feature requests get lost :)

Both features will appear in GNOME 3.26; the out-of-the-box CBR support, however, is available now in an update for the just-released Fedora 26.

## July 25, 2017

As of 2017:

• I have been at the company I helped to start for 18 years
• I have been married for 12 years
• I have a 9-year-old child (and 6, and 1)

I’m going for a personal high-score.

## July 23, 2017

This week's hike was to Nambé Lake, high in the Sangre de Cristos above Santa Fe.

It's a gorgeous spot, a clear, shallow mountain lake surrounded by steep rocky slopes up to Lake Peak and Santa Fe Baldy. I assume it's a glacial cirque, though I can't seem to find any confirmation of that online.

There's a raucous local population of Clark's nutcrackers, a grey relative of the jays (but different from the grey jay) renowned for its fearlessness and curiosity. One of my hiking companions suggested they'd take food from my hand if I offered. I broke off a bit of my sandwich and offered it, and sure enough, a nutcracker flew right over. Eventually we had three or four of them hanging around our lunch spot.

The rocky slopes are home to pikas, but they're shy and seldom seen. We did see a couple of marmots in the rocks, and I caught a brief glimpse of a small, squirrel-sized head that looked more grey than brown like I'd expect from a rock squirrel. Was it a pika? I'll never know.

We also saw some great flowers. Photos: Nambé Lake Nutcrackers.

## July 22, 2017

We’re releasing the second beta for Krita 3.2.0 today! These beta builds contain the following fixes, compared to the first 3.2.0 beta release. Keep in mind that this is a beta: you’re supposed to help the development team out by testing it, and reporting issues on bugs.kde.org.

• There are still problems on Windows with the integration with the gmic-qt plugin, but several lockups have been fixed.
• The smart patch tool merge was botched: this is fixed now.
• It wasn’t possible anymore to move vector objects with the mouse (finger and tablet worked fine). This is fixed now.
• The size and flow sliders have been fixed.
• Saving JPG or PNG images without a transparency channel has been fixed.

#### Windows

Note for Windows users: if you encounter crashes, please follow these instructions to use the debug symbols so we can figure out where Krita crashes.

#### Linux

A snap image will be available from the Ubuntu App Store. When it is updated, you can also use the Krita Lime PPA to install Krita 3.2.0-beta.2 on Ubuntu and derivatives.

### Source code

#### Key

The Linux AppImage and the source tarball are signed. You can retrieve the public key over HTTPS here: 0x58b9596c722ea3bd.asc. The signatures are here.

#### Support Krita

Krita is a free and open source project. Please consider supporting the project with donations or by buying training videos or the artbook! With your support, we can keep the core team working on Krita full-time.

## July 21, 2017

@GodTributes took over my title, soz.

Dude, where's my maintainer?

Last year, probably as a distraction from doing anything else, or maybe because I was asked, I started reviewing bugs filed as a result of automated flaw discovery tools (from Coverity to UBSan via fuzzers) being run on gdk-pixbuf.

Apart from the security implications of a good number of those problems, there was also the annoyance of having a busted image file bring down your file manager, your desktop, or even an app that opened a file chooser, either because the file was broken, or because the image loader for that format didn't check the sanity of memory allocations.

(I could have added links to Bugzilla entries for each one of the problems above, but that would just make it harder to read)

Two big things happened in gdk-pixbuf 2.36.1, which was used in GNOME 3.24:

• the removal of GdkPixdata as a stand-alone image format loader. We really don't want to load GdkPixdata files from sources other than generated sources or embedded data structures, and removing that loader closed off those avenues. We still ended up fixing a fair number of naive assumptions in helper functions though.
• the addition of a thumbnailer for gdk-pixbuf supported images. Images would not be special-cased any more in gnome-desktop's thumbnailing code, making the file manager, the file chooser and anything else navigating directories full of broken and huge images more reliable.

But that's just the start. gdk-pixbuf continues getting bug fixes, and we carry on checking for overflows, underflows and just flows, breaks and beats in general.

Programmatic Thumbellina portrait-maker

Picture, if you will, a website making you download garbage files from the Internet, the ROM dump of a NES cartridge that wasn't properly blown on and digital comic books that you definitely definitely paid for.

That's a nice summary of the security bugs foisted upon GNOME in the past year or so, even if, thankfully, we were ahead of the curve in terms of fixing those issues (the GStreamer NSF decoder bug was removed in 2013, the comics backend in Evince was rewritten over a period of 2 years and committed in March 2017).

Still, 2 pieces of code were running on pretty much every file downloaded, on purpose or not, from the Internet: Tracker's indexers and the file manager's thumbnailers.

Tracker started protecting itself not long after the NSF vulnerability, even if recent versions of GStreamer weren't vulnerable, as we mentioned.

That left the thumbnailers. Some of those are first-party, like the gdk-pixbuf one and those offered by core applications (Evince, Videos), written by GNOME developers (yours truly for both epub/mobi and Nintendo DS).

They're all good quality code I'd vouch for (having written or maintained quite a few of them), but they can rely on third-party libraries (say GStreamer, poppler, or libarchive), have naive or insufficiently defensive code (gdk-pixbuf loaders, GStreamer plugins) or, worst of all: THIRD-PARTY EXTENSIONS.

There are external plugins and extensions for image formats in gdk-pixbuf, for video and audio formats in GStreamer, and for thumbnailers pretty much anywhere. We can't control those, but the least we can do when they explode in a wet mess is make sure that the toilet door is closed.

Not even Nicolas Cage can handle this Alcatraz

For GNOME 3.26 (and today in git master), the thumbnailer stall will be doubly bolted by a Bubblewrap sandbox and a seccomp blacklist.

This closes a whole vector of attack for the GNOME Desktop, but doesn't mean we're completely out of the woods. We'll need to carry on maintaining and fixing security bugs in those libraries and tools we depend on, as GStreamer plugin bugs still affect Videos, gdk-pixbuf bugs still affect Photos and Eye of GNOME, etc.

And there are limits to what those 2 changes can achieve. The sandboxing and syscall blacklisting prevent those thumbnailers from writing anything but an image file in PNG format in a specific directory. There's no network, and the filename of the original file is hidden and sanitised, but the thumbnailer could still create a crafted PNG file, and the sandbox doesn't work inside a sandbox! So no protection if the application running the thumbnailer is inside Flatpak.
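The exact bubblewrap invocation isn't spelled out here, so as an illustration only: the flags below are real bwrap options, but the precise set GNOME's thumbnailing code uses is my assumption, not taken from this post. A sketch of building such a sandbox command line:

```python
# Sketch: wrapping a thumbnailer in a restrictive bwrap invocation.
# Read-only system files, no network, private /tmp, and a single
# writable output directory -- roughly the policy described above.

def sandbox_argv(thumbnailer_argv, output_dir):
    """Build a bwrap command line around a thumbnailer command."""
    return [
        "bwrap",
        "--ro-bind", "/usr", "/usr",       # read-only system files
        "--proc", "/proc",
        "--dev", "/dev",
        "--tmpfs", "/tmp",                 # private, empty /tmp
        "--unshare-all",                   # new namespaces, incl. network
        "--die-with-parent",
        "--bind", output_dir, output_dir,  # the only writable location
    ] + list(thumbnailer_argv)

# Hypothetical usage; the thumbnailer name and paths are placeholders.
argv = sandbox_argv(["evince-thumbnailer", "in.pdf", "out.png"],
                    "/tmp/thumbs")
```

A seccomp filter (the blacklist mentioned above) would be applied on top of this, inside the sandboxed process.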

In fine

GNOME 3.26 will have better security for thumbnailers, so you won't "need to delete GNOME Files".

But you'll probably want to be careful with desktops that forked our thumbnailing code, namely Cinnamon and MATE, which don't implement those security features.

The next step for the thumbnailers will be beefing up our protection against greedy thumbnailers (in terms of CPU and memory usage), and sharing the code better between thumbnailers.

Note for later, more images of cute animals.

## July 18, 2017

We’re releasing the first beta for Krita 3.2.0 today! Compared to Krita 3.1.4, released 26th of May, there are numerous bug fixes and some very cool new features. Please test this release, so we can fix bugs before the final release!

### Known bugs

It’s a beta, so there are bugs. One of them is that the size and flow sliders are disabled. We promise faithfully we won’t release until that’s fixed, but in the meantime, no need to report it!

### Features

• Krita 3.2 will use the gmic-qt plugin created and maintained by the authors of G’Mic. We’re still working with them to create binary builds that can run on Windows, OSX and most versions of Linux. This plugin completely replaces the older gmic plugin.

These brushes are good for creating a strong painterly look:

• There are now shortcuts for changing layer states like visibility and lock.
• There have been many fixes to the clone brush.
• There is a new dialog from which you can copy and paste relevant information about your system for bug reports.
• We’ve integrated the Smart Patch tool that was previously only in the 4.0 pre-alpha builds!
• The Gaussian Blur filter can now use kernels up to 1000 pixels in diameter.

### Bug Fixes

Among the bigger bug fixes:

• Painting with your finger on touch screens is back. You can enable or disable this in the settings dialog.
• If previously you suffered from the “green brush outline” syndrome, that should be fixed now, too. Though we cannot guarantee the fix works on all OpenGL systems.
• There have been a number of performance improvements as well.
• The interaction with the file dialog has been improved: it should be better at guessing which folder you want to open, which filename to suggest and which file type to use.

And of course, there were dozens of smaller bug fixes.

#### Windows

Note for Windows users: if you encounter crashes, please follow these instructions to use the debug symbols so we can figure out where Krita crashes.

#### Linux

A snap image will be available from the Ubuntu App Store. When it is updated, you can also use the Krita Lime PPA to install Krita 3.2.0-beta.1 on Ubuntu and derivatives.

### Source code

#### Key

The Linux AppImage and the source tarball are signed. You can retrieve the public key over HTTPS here: 0x58b9596c722ea3bd.asc. The signatures are here.

#### Support Krita

Krita is a free and open source project. Please consider supporting the project with donations or by buying training videos or the artbook! With your support, we can keep the core team working on Krita full-time.

## July 16, 2017

For the Raspberry Pi Zero W book I'm writing, the publisher, Maker Media, wants submissions in Word format (but stressed that LibreOffice was fine and lots of people there use it, a nice difference from Apress). That's fine ... but when I'm actually writing, I want to be able to work in emacs; I don't want to be distracted fighting with LibreOffice while trying to write.

For the GIMP book, I wrote in plaintext first, and formatted it later. But that means the formatting step took a long time and needed exceptionally thorough proofreading. This time, I decided to experiment with Markdown, so I could add emphasis, section headings, lists and images all without leaving my text editor.

Of course, it would be nice to be able to preview what the formatted version will look like, and that turned out to be easy with a markdown editor called ReText, which has a lovely preview mode, as long as you enable Edit->Use WebKit renderer (I'm not sure why that isn't the default).

Okay, a chapter is written and proofread. The big question: how to get it into the Word format the publisher wants?

First thought: ReText has a File->Export menu. Woohoo -- it offers ODT. So I should be able to export to ODT then open the resulting file in LibreOffice.

Not so much. The resulting LibreOffice document is a mess, with formatting that doesn't look much like the original, and images that are all sorts of random sizes. I started going through it, resizing all the images and fixing the formatting, then realized what a big job it was going to be and decided to investigate other options first.

ReText's Export menu also offers HTML, and the HTML it produces looks quite nice in Firefox. Surely I could open that in LibreOffice, then save it (maybe with a little minor reformatting) as DOCX?

Well, no, at least not directly. It turns out LibreOffice has no obvious way to import an HTML file into a normal text document. If you Open the HTML file, it displays okay (except the images are all tiny thumbnails and need to be resized one by one); but LibreOffice can't save it in any format besides HTML or plaintext: those are the only formats available in the Save dialog. LibreOffice also has a Document Converter, but it only converts Office formats, not HTML; and there's no Import... entry in LibreOffice's File menu. There's a Wizards->Web Page, but it's geared to creating a new web page and saving as HTML, not importing an existing HTML-formatted document.

But eventually I discovered that if I "Create a new Text Document" in LibreOffice, I can Select All and Copy in Firefox, followed by Paste into LibreOffice. It works great. All the images are the correct size, the formatting is correct and needs almost no corrections, and LibreOffice can save it as DOCX, ODT or whatever I need.

### Image Captions

I mentioned that the document needed almost no corrections. The exception is captions. Images in a book need captions and figure numbers, unlike images in HTML.

Markdown specifies images as

![Image description](path/to/image.jpg)


Unfortunately, the Image description part is only visible as a mouseover, which only works if you're exporting to a format intended for a web browser that runs on desktop and laptop computers. It's no help in making a visible caption for print, or for tablets or phones that don't have mouseover. And the mouseover text disappears completely when you paste the document from Firefox into LibreOffice.

I also tried making a table with the image above and the caption underneath. But I found that simply adding a new paragraph of italics below the image looked just as good in ReText, and much better in HTML:

![](path/to/image.jpg)

*Image description here*


That looks pretty nice in a browser or when pasted into LibreOffice. But before submitting a chapter, I changed them into real LibreOffice captions.
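Converting between the alt-text form and the italic-caption form by hand gets tedious. Here is a throwaway sketch of such a conversion (a hypothetical helper, not the tooling actually used for the book):

```python
import re

# Matches standard Markdown images: ![description](path)
IMAGE_RE = re.compile(r"!\[(?P<desc>[^\]]+)\]\((?P<path>[^)]+)\)")

def caption_below(markdown_text):
    """Rewrite ![desc](path) as a bare image followed by an italic
    caption paragraph, the form described above."""
    def repl(match):
        return "![](%s)\n\n*%s*" % (match.group("path"),
                                    match.group("desc"))
    return IMAGE_RE.sub(repl, markdown_text)
```

Run over a chapter, this turns every mouseover-only description into a caption that survives the trip through Firefox into LibreOffice.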

In LibreOffice, right-click on the image; Add Caption is in the context menu. It can even add numbers automatically. It initially wants to call every caption "Illustration" (e.g. "Illustration 1", "Illustration 2" and so on), and strangely, "Figure" isn't one of the available alternatives; but you can edit the category and change it to Figure, and that persists for the rest of the document, helpfully numbering all your figures in order. The caption dialog when you add each caption always says that the caption will be "Illustration 1: (whatever you typed)" even if it's the fourteenth image you've captioned; but when you dismiss the dialog it shows up correctly as Figure 14, not as a fourteenth Figure 1.

The only problem arises if you have to insert a new image in the middle of a chapter. If you do that, you end up with two figures numbered, say, Figure 6, and it's not clear how to persuade LibreOffice to renumber them. You can fix it if you remove all the captions and start over, but ugh. I never found a better way, and web searches on LibreOffice caption numbers suggest this is a perennial source of frustration.

The bright side: struggling with captions in LibreOffice convinced me that I made the right choice to do most of my work in emacs and markdown!

## July 13, 2017

It’s summer — a bit rainy, but still summer! So it’s time for a summer sale — and we’ve reduced the price of the Made with Krita 2016 art book to just €7,95. That means that shipping (outside the Netherlands) is more expensive than the book itself, but it’s a great chance to get acquainted with forty great artists and their work with Krita! The book is professionally printed on 130-gram paper and softcover bound in signatures. The cover illustration is by Odysseas Stamoglou. Every artist is showcased with a great image, as well as a short bio.

On sale: €7,95
Forty artists from all over the world, working in all kinds of styles and on all kinds of subjects show how Krita is used in the real world to create amazing and engaging art. The book also contains a biographical section with information about each individual artist. Made with Krita 2016 is now on sale: 7,95€, excluding shipping. Shipping is 11,25€ (3,65€ in the Netherlands).

## July 11, 2017

The official Blender release is now being downloaded over half a million times per month, for a total of 6.5M downloads last year.

Between July 2016 and July 2017, Blender saw the release of Blender 2.78 and its a/b/c fix releases.

This is not counting:

• Experimental Builds on Buildbot
• Release Candidates and Test Builds
• Other services offering Blender (app stores like Steam or community sites like GraphicAll)
• Linux repositories

Below is the full report for each platform.

## July 10, 2017

Previously: v4.11.

Here’s a quick summary of some of the interesting security things in last week’s v4.12 release of the Linux kernel:

Read-only and fixed-location GDT on x86
With kernel memory base randomization, it was still possible to figure out the per-cpu base address via the “sgdt” instruction, since it would reveal the per-cpu GDT location. To solve this, Thomas Garnier moved the GDT to a fixed location. And to solve the risk of an attacker targeting the GDT directly with a kernel bug, he also made it read-only.

usercopy consolidation
After hardened usercopy landed, Al Viro decided to take a closer look at all the usercopy routines and then consolidated the per-architecture uaccess code into a single implementation. The per-architecture implementations were functionally very similar, so it made sense to remove the redundancy. In the process, he uncovered a number of unhandled corner cases in various architectures (that got fixed by the consolidation), and made hardened usercopy available on all remaining architectures.

ASLR entropy sysctl on PowerPC
Continuing to expand architecture support for the ASLR entropy sysctl, Michael Ellerman implemented the calculations needed for PowerPC. This lets userspace choose to crank up the entropy used for memory layouts.

Read-only LSM structures
James Morris used __ro_after_init to make the LSM structures read-only after boot. This removes them as a desirable target for attackers. Since the hooks are called from all kinds of places in the kernel, this was a favorite method for attackers to use to hijack execution of the kernel. (A similar target used to be the system call table, but that has long since been made read-only.)

KASLR enabled by default on x86
With many distros already enabling KASLR on x86 with CONFIG_RANDOMIZE_BASE and CONFIG_RANDOMIZE_MEMORY, Ingo Molnar felt the feature was mature enough to be enabled by default.

Expand stack canary to 64 bits on 64-bit systems
The stack canary value used by CONFIG_CC_STACKPROTECTOR is most powerful on x86, since it is different per task. (Other architectures run with a single canary for all tasks.) While the first canary chosen on x86 (and other architectures) was a full unsigned long, the subsequent canaries chosen per-task for x86 were being truncated to 32 bits. Daniel Micay fixed this, so now x86 (and future architectures that gain per-task canary support) have significantly increased entropy for stack-protector.

Expanded stack/heap gap
Hugh Dickins, with input from many other folks, improved the kernel’s mitigation against having the stack and heap crash into each other. This is a stop-gap measure to help defend against the Stack Clash attacks. Additional hardening needs to come from the compiler, producing “stack probes” when doing large stack expansions: any Variable Length Arrays on the stack or alloca() usage needs machine code generated to touch each page of memory within those areas, to let the kernel know, with single-page granularity, that the stack is expanding.

That’s it for now; please let me know if I missed anything. The v4.13 merge window is open!

## July 06, 2017

It's official: I'm working on another book!

This one will be much shorter than Beginning GIMP. It's a mini-book for Maker Media on the Raspberry Pi Zero W and some fun projects you can build with it.

I don't want to give too much away at this early stage, but I predict it will include light shows, temperature sensors, control of household devices, Twitter access and web scraping. And lots of code samples.

I'll be posting more about the book, and about various Raspberry Pi Zero W projects I'm exploring during the course of writing it. But for now ... if you'll excuse me, I have a chapter that's due today, and a string of addressable LEDs sitting on my desk calling out to be played with as part of the next chapter.

While my colleagues are working on mice that shine in all kinds of different colours, I went towards the old school.

For around 10 units of currency, you should be able to find the uDraw tablet for the PlayStation 3, the drawing tablet that brought down a company.

The device contains a large touchpad which can report one or two touches (for right-clicking, as long as the fingers aren't too close), a pen interface which will make the cheapest of the cheapest Wacom tablets feel like a professional tool from 30 years in the future, a 4-button joypad (plus Start/Select/PS) with the controls on either side of the device, and an accelerometer to play Marble Madness with.

The driver landed in kernel 4.10. Note that it only supports the PlayStation 3 version of the tablet, as the Wii and Xbox 360 versions require receivers that aren't part of the package; with the PS3 version, the USB dongle should be included.

The second driver landed in kernel 4.12, and is a primer for more work to be done. This driver adds support for the Retrode 2's joypad adapters.

The Retrode is a USB console cartridge reader which makes Sega Mega Drive (aka Genesis) and Super Nintendo (aka Super Famicom) cartridges show up as files on a mass storage device in your computer.

It also has 4 connectors for original joypads which the aforementioned driver now splits up and labels, so you know which is which, as well as making the mouse work out of the box. I'd still recommend picking up the newer optical model of that mouse, from Hyperkin. Moving a mouse with a ball in it is like weighing a mobile phone from that same era.

I will let you inspect the add-ons for the device, like support for additional Nintendo 64 pads and cartridges, and Game Boy/GB Color/GB Advance, and Sega Master System adapters.

Recommended for: cartridge-based retro games, obviously.

Integrated firmware updates and better integration with Games are in the plans.

I'll leave you with this video, which shows how you could combine GNOME Games, a Retrode, this driver, a SNES mouse, and a cartridge of Mario Paint. Let's get creative :)

## July 05, 2017

Many upstream applications are changing their application ID from something like boxes.desktop to org.gnome.Boxes.desktop so they can be packaged as flatpaks. Where upstream doesn’t yet have this, we rewrite the desktop file in flatpak-builder so it can be packaged and deployed safely. However, more and more upstreams are building flatpaks and thus more and more apps seem to be changing desktop ID every month.

This poses a problem for the ODRS review system: when we query using the reverse-DNS-style ID, we don’t match the hundreds of reviews submitted against the old ID. This makes the application look bad, and users file bugs against GNOME Software saying it’s broken, either because “no reviews are showing up”, or because “a previously 5 star application with 30 reviews is now 2 stars with just one review”.

This also happens when companies get taken over, or when the little toy project moves from a hosting site to a proper home, e.g. com.github.FeedReader to org.gnome.FeedReader.

So, what can we do? AppData (again) to the rescue. Adding the following XML to the file allows new versions of gnome-software to do the right thing; we then get reviews and ratings for both the old and new names.

  <provides>
    <id>old-name.desktop</id>
  </provides>


If you renamed your application in the last couple of years, I’d love you to help out and add this tag to your .appdata.xml file – in all supported branches if possible. I can’t promise cookies, but your application will have more reviews and that can’t be a bad thing. Thanks!
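The tag can also be added mechanically if you generate or post-process your AppData files. A hypothetical sketch with Python's ElementTree (the element names come from the snippet above; the helper itself is mine, not part of any AppStream tooling):

```python
import xml.etree.ElementTree as ET

def add_old_id(appdata_xml, old_id):
    """Ensure a <provides><id>old_id</id></provides> entry exists in an
    AppData component, so reviews for the old name still match."""
    root = ET.fromstring(appdata_xml)
    provides = root.find("provides")
    if provides is None:
        provides = ET.SubElement(root, "provides")
    # Idempotent: don't add the id twice
    if not any(el.text == old_id for el in provides.findall("id")):
        ET.SubElement(provides, "id").text = old_id
    return ET.tostring(root, encoding="unicode")

# Hypothetical usage, matching the rename example above.
xml_in = "<component><id>org.gnome.FeedReader.desktop</id></component>"
xml_out = add_old_id(xml_in, "com.github.FeedReader.desktop")
```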

## July 04, 2017

I’ve just released the latest version of fwupd from the development branch. 0.9.5 has the usual bug fixes, translation updates and polish, but also provides two new goodies:

We now support updating Logitech peripherals over a protocol helpfully called DFU, which is not to be confused with the standard USB DFU protocol. This allows us to update devices such as the K780 keyboard over the Unifying layer. Although it takes a few minutes to complete, it works reliably and allows us to finally fix the receiver end of the MouseJack vulnerability. Once the user has installed the Unifying dongle update, and in some cases a peripheral update, they are secure again. The K780 update is in “testing” on the LVFS if anyone wants to try this straight away. You should send huge thanks to Logitech, as they have provided me access to the documentation, hardware and firmware engineers required to make this possible. All the released Logitech firmwares will move to the “stable” state once this new fwupd release has hit Fedora updates-testing.

The other main feature in this release is the Intel Management Engine plugin. The IME is the source of the recent AMT vulnerability that affects the “ME blob” included in basically every consumer PC sold in the last decade. Although we can’t flash the ME blob using this plugin, it certainly makes it easy to query the hardware and find out if you are running a very insecure system. This plugin is more than inspired by the AMT status checker for Linux by Matthew Garrett, so you should send him cookies, not me. Actually updating the ME blob would be achieved using the standard UEFI UpdateCapsule, but it would mean your vendor does have to upload a new system firmware to the LVFS. If you’ve got a Dell you are loved; other vendors are either still testing this and don’t want to go public yet (you know who you are) or don’t care about their users. If you still don’t know what the LVFS is about, see the whitepaper and then send me an email. Anyway, obligatory technical-looking output:

    $ fwupdmgr get-devices
      Guid:                 2800f812-b7b4-2d4b-aca8-46e0ff65814c
      DeviceID:             /dev/mei
      DisplayName:          Intel AMT (unprovisioned)
      Plugin:               amt
      Flags:                internal
      Version:              9.5.30
      VersionBootloader:    9.5.30

If the AMT device is present, the display name says provisioned, the AMT version is between 6.0.x and 11.2.x, and you have not upgraded your firmware, you are vulnerable to CVE-2017-5689 and you should disable AMT in your system firmware. I’ve not yet decided if this should bubble up to the session in the form of a notification bubble; ideas welcome. The new release is currently building for Fedora, and might be available in other distributions at some point.

## June 30, 2017

With one of our internal web applications based on Ruby on Rails, we’ve discovered a file descriptor leak in one of the delayed job worker processes. The worker leaked descriptors whenever it sent a message to the message bus using qpid-messaging. Since we’re using gems compiled as C++ and C extensions, in order to find the root cause I used the packages provided through the package manager and gdb. Big thanks to Dan Callaghan, who walked me through most of the process and then found the leak in the C++ sources.

TL;DR:

• identify the leaking descriptors and reproduce the leak with lsof
• attach strace to the process and identify file descriptors which are not being closed
• install debuginfo packages for all dependencies
• use gdb to figure out what is going on

### Reproducer

I used lsof, and a friend wrote a small script to quickly monitor the worker process. Looking at the opened files of the process revealed a long list of what looked like half-closed sockets. It turned out later that it wasn’t the same problem, since the sockets were created but never bound/connected. I was unable to reproduce the problem on my local development environment, but found a way to do it on our staging environment, which resembles production much more closely.
So whenever I invoked an action in the UI which resulted in a message being sent, I was able to see another file descriptor leak with lsof.

### Strace the process

With the reproducer at hand, I started to strace the process:

    # Note we're not filtering system calls with -e here.
    # Weirdly, CLOSE was not reported when just filtering network calls.
    strace -s 1000 -p <pid> -o strace_output_log.strace

Dan helped me look through the produced log output, which revealed that the system under investigation created a socket and called getpeername right after it, without connecting it, resulting in a leaked file descriptor:

    10971 socket(PF_INET, SOCK_STREAM, IPPROTO_IP) = 35
    10971 getpeername(35, 0x7fffae712a90, [112]) = -1 ENOTCONN (Transport endpoint is not connected)

### Install debuginfo packages and use gdb

In order to debug the system, we need debuginfo packages installed, otherwise you won’t be able to step through the sources using gdb. When you attach gdb to the process, it will tell you which packages it is missing, for example:

    Missing separate debuginfos, use: debuginfo-install qpid-proton-c-0.10-3.fc25.x86_64

You then go install those (be mindful that you need the debuginfo repositories configured, e.g. the section name fedora-debuginfo):

    debuginfo-install qpid-proton-c-0.10-3.fc25.x86_64

and basically start debugging. Our first suspicion was the qpid messaging library, and we checked whether its invocation of getpeername was leaking the file descriptors. I added a breakpoint at the point in the source code we thought was suspicious, and in a separate terminal used lsof to see which file descriptor number is leaked. For example:

    # I've used watch, which executes the lsof every 2 seconds by default.
    # The grep filters some of the files I'm not interested in.
    $ watch "lsof -p <pid> | grep -v REG"

The lsof output will show you the leaked file descriptor number in column 4 by default. With that you can check in gdb if the file descriptor being handled in the source code is the one which leaked.

Since that achieved no results, we used gdb to break on invocations of the getpeername identifier and used backtrace to pin point in the sources where the leak occurred.
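As an alternative to watching lsof output, the descriptor count can be polled straight from /proc. A minimal Linux-only sketch (my illustration, not the monitoring script mentioned above):

```python
import os

def open_fd_count(pid):
    """Count open file descriptors of a process via /proc (Linux only)."""
    return len(os.listdir("/proc/%d/fd" % pid))

def fd_delta(pid, action):
    """Run `action` and report how many descriptors it leaked (or freed)."""
    before = open_fd_count(pid)
    action()
    return open_fd_count(pid) - before
```

Polling this around the UI action from the reproducer makes the leak show up as a steadily growing count, without needing to eyeball the lsof output.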

## June 29, 2017

Hi all! It's time for a new review of what has been going on in Arch development this month. Quite a lot, actually; it's exciting that several things I've been working on during the last couple of months are beginning to flourish into pretty interesting and usable features. There is another interesting thing happening...

## June 24, 2017

Someone forwarded me a message from the Albuquerque Journal. It was all about "New Mexico\222s schools".

Sigh. I thought I'd gotten all my Mutt charset problems fixed long ago. My system locale is set to en_US.UTF-8, and accented characters in Spanish and in people's names usually show up correctly. But I do see this every now and then.

When I see it, I usually assume it's a case of incorrect encoding: whoever sent it perhaps pasted characters from a Windows Word document or something, and their mailer didn't properly re-encode them into the charset they were using to send the message.

In this case, the message had User-Agent: SquirrelMail/1.4.13. I suspect it came from a "Share this" link on the newspaper's website.

I used vim to look at the source of the message, and it had

Content-Type: text/plain; charset=iso-8859-1

For the bad characters, in vim I saw things like
New Mexico<92>s schools


I checked an old web page I'd bookmarked years ago that had a table of the iso-8859-1 characters, and sure enough, hex 0x92 was an apostrophe. What was wrong?

I got some help on the #mutt IRC channel, and, to make a long story short, that web table I was using was wrong. ISO-8859-1 doesn't include any characters in the range 8x-9x, as you can see in the Wikipedia article on ISO/IEC 8859-1.

What was happening was that the page was really cp1252: that's where those extra characters come from, like hex 92/octal 222 for an apostrophe, or hex 96/octal 226 for a dash. (Nitpick: that 0x96 character is an en dash, but it was used in a context that called for an em dash; if someone is going to use something other than the plain old ASCII dash, you'd think they'd at least use the right one. Sheesh!)
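The difference is easy to check with a couple of decode calls. A quick Python sketch (the byte string just mimics the newspaper's garbled text):

```python
raw = b"New Mexico\x92s schools"

# Decoded as ISO-8859-1, byte 0x92 maps to an invisible C1 control
# character, not an apostrophe:
latin1 = raw.decode("iso-8859-1")
assert latin1[10] == "\x92"

# Decoded as Windows-1252 (cp1252), 0x92 is the right single
# quotation mark, which is what the sender meant:
cp1252 = raw.decode("cp1252")
assert cp1252 == "New Mexico\u2019s schools"
```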

Anyway, the fix for this is to tell mutt when it sees iso-8859-1, use cp1252 instead:

charset-hook iso-8859-1 cp1252


A happy find related to this: it turns out there's a better way of looking up ISO-8859 tables, and I can ditch that bookmark to the old, erroneous page. I've known about man ascii forever, but somehow I'd never thought to try other charsets. Turns out man iso_8859-1 and man iso_8859-15 have built-in tables too. Nice!

(Sadly, man utf-8 doesn't give a table. Of course, that would be a long man page, if it did!)

‘Can you name one problem which type design (not engineering) solved and which is not predominantly aesthetic?’ Now that caught my attention on twitter. It appeared in a thread that started off with some saucy quotes: ‘the problems have already been solved’ and ‘the function of new typefaces is largely aesthetic.’

It piqued my interest because I have worked on the interaction design of a couple of font design applications. Besides that I have spent a lot of time investigating the general nature of design.

### making a splash

So I jumped in and joined the twitter discussion. After a false start (oh, the joys of social media) the heart of the matter came to light: ‘It is about problems that typefaces solve, not about the problems of typeface design practice, i.e. the categories of reasons why to draw.’ So I addressed it there, have been thinking about it quite a bit more since then, and now I am going to address it better.

Let’s start with a smooth aikido move to get this discussion positioned where it belongs. When a font is to be used by many (i.e. more than 100) people, then it is a product. The type designer is the product designer; the design process is the act of product realisation. The process starts with a product definition; methodical research and design follows. Eventually a set of design drawings is created, to be engineered into shippable fonts.

This insight rebases the discussion. It is a product issue, not a design issue. Asking what problems type design can solve today, is asking: are there any product definitions left that aim to bring useful, valuable font solutions to users?

Well, allow me to make some suggestions.

### think big, really big

If you write using Latin script, then it seems that you have 100,000 fonts to choose from. If you write in another script, you are suddenly scraping the barrel; 100 usable fonts is then super luxurious. Does this impact just some tiny minorities? Well, here is a handy list: writing scripts sorted by usage. If I skip Latin and add up the populations that actively use the next ten scripts, then I end up with 3.41 billion people.

The ten scripts are: Chinese, Arabic, Devanagari, Bengali‐Assamese, Cyrillic, Kana, Javanese, Hangul, Telugu and Tamil. Right after these on the list there is a group of six modestly used scripts (Gujarati, Kannada, Burmese, Malayalam, Thai and Sundanese) that nonetheless serve another 205 million people.

Here is my suggestion: Want to do something useful? Pick a popular Latin font (family), say from the top‐1000 in use, and pick one of the big‑10/modest‑6 scripts listed above. Your product definition is to design a font in that script that goes together seamlessly with the Latin font. That does not mean that your new font has to play second fiddle to the Latin one; just that they get along famously.

I can tell you from experience that it is very rewarding to design something this relevant, with the knowledge that tens, if not hundreds, of millions of people are waiting out there for the results. Of the 3.615 billion people mentioned above, a good chunk owns a smartphone or will own one soon—you cannot stop commodity Android. It’s their access to the internet. They inform and express themselves using text. They need fonts.

#### fit for purpose

Read a book on typography and one discovers that non‐philistine typesetting starts with using proper numbers (oldstyle vs. lining, &c.), small caps, and so on. Now check 10 random fonts on the device you use to read this post. Do they contain a full set of numbers and small caps? It is going to be hit and miss. Personally, I am still waiting for oldstyle numbers for my favourite Swiss sans.

While typesetting got more efficient (hot lead, to photo, to digital), typography support got thrown overboard. Today, typography is banished to the graveyard called OpenType features. Yes I know, the OT features UI is a world of hurt, but someday someone is going to do something about it (if that is you, ping me). Meanwhile, it is 2017 and fonts that support just the skimpiest Latin glyph set feel very ‘ms‑dos’ to me; primitive, with a touch of nostalgia, but surely past their due date.

So my second suggestion is to start doubling down on OpenType features. Go through the top‐1000 list of fonts and start extending them to make them useful, and valuable, for typographers. I realise there are some barriers to entry, like intellectual property and access to source files (so that they can be extended). Maybe you can pitch the company that owns them and get the gig. A straightforward case is open source fonts: you can get started right now.

#### fit for authors

Bold and italic are not just a good idea, they are (by default) the way to express certain conventions in text, e.g. emphasis, or denote publication titles. The defaults of html & css are an example of how enshrined this is. A week ago I was trying to select some typefaces for a new website, and it was an uphill struggle, littered with missing italic and/or bold font variants.

My third suggestion: there are plenty of holes in the top‐1000 fonts where it comes to bold and italics. Your product definition is to fill some of these, where you think it matters most. Designing a bold for someone else’s regular can be a drag. But italics can be worthwhile design work, because they start with a clean sheet and a different skeleton than the regulars. I suspect the amount of true‐design work involved is the reason italics got skipped in the first place.

Speaking of, here is a bonus product definition: real italics to be used seamlessly with that famous Swiss sans—to replace them obliques. Again a clean‐sheet project which solves a typographer’s problem. And an ambitious, high‐profile one too. You can call it Helvetalics, if you like.

#### fit for a new medium

Last year I was involved in an internet‐of‐things project. After a methodical start, it became immediately clear that for our combination of display (size, resolution), use (viewing distance, information density) and goals (not end up being cheap junk slapped together by engineers) we needed a font to make it work. Yes work; life or death.

New media to display text, with new properties—and new contexts for old ones—pop up regularly. These events naturally trigger product definitions for fonts to make new media work.

### aesthetics, schm’sthetics

In this blog post I have not gone into aesthetics, because I didn’t need to. All I have done is look beyond the glyph shapes and spot ocean‐sized holes in the font product landscape. Just following up on my first suggestion will keep the global type designing community busy for decades with work that is 100% non‐frivolous. Suddenly today’s ‘glut’ of type designers looks like it could use some serious reinforcement troops.

A business coach once told me: ‘this design work you do is political, isn’t it?’ She meant that my design clearly impacts society and that my design decisions make the world better, or worse. That starts with the decision ‘what project do I work on?’

If you get to decide the product definitions at your font shack, then your work is political.
• Your choice of scripts is political; how many of 3.615+ billion people are you gonna throw under the bus?
• Your choice of OT features is political; a font for typographers, or only for simple business administration?
• Your choice of including a bold and italic is political; is your font going to be a drop‐in solution for those who just want to communicate?
• Your choice of targeting new display media is political; are you going to leave their new users out in the cold?

If you scour the font landscape, looking for pockets of user‐felt hurt (‘what, XYZ simply doesn’t exist?’) and then do something about it by creating, or updating, a font product, then your work is political, useful and valuable. And I salute you.

## June 23, 2017

Look what we got today by snail mail:

It’s a children’s nonfiction book, nice for adults too, by Jeremy Hyman (text) and Haude Levesque (art). All the art was made with Krita!

### Jeremy:

One of my favorite illustrations is the singing White-throated sparrow (page 24-25). The details of the wing feathers, the boldness of the black and white stripes, and the shine in the eye all make the bird leap off the page.

I love the picture of the long tailed manakins (page 32-33). I think this illustration captures the velvety black of the body plumage, and the soft texture of the blue cape, and the shining red of the cap. I also like the way the unfocused background makes the birds in the foreground seem so crisp. It reminds me of seeing these birds in Costa Rica – in dark and misty tropical forests, the world often seems a bit out of focus until a bright bird, flower, or butterfly focuses your attention.

I also love the picture of the red-knobbed hornbill (page 68-69). You can see the texture and detail of the feathers, even in the dark black feathers of the wings and back. The illustration combines the crispness and texture of the branches, leaves and fruits in the foreground, with the softer focus on leaves in the background and a clear blue sky. Something about this illustration reminds me of the bird dioramas at the American Museum of Natural History – a place I visited many times with my grandfather (to whom the book is dedicated). The realism of those dioramas made me fantasize about seeing those birds and those landscapes someday. Hopefully, good illustrations will similarly inspire some children to see the birds of the world.

### Haude:

My name is Haude Levesque and I am a scientific illustrator, writer and fish biologist. I have always been interested in both animal science and art, and it was hard to choose between the two careers, until I started illustrating books as a side job about ten years ago, while doing my post doc. My first illustration job was a book about insect behavior (Bug Butts), which I did digitally after taking an illustration class at the University of Minnesota. Since then, I have been teaching biology, illustrating and writing books, while raising my two kids.

The book “Bird Brains” belongs to a series with two other books that I illustrated, and I wanted the illustrations to look similar: full double-page illustrations of a main animal in its natural habitat. I started using Krita only a year ago when illustrating “Bird Brains”, on a suggestion from my husband, who is a software engineer and into open source software. I was getting frustrated with the software I had used previously, because it did not allow me to render life-like drawings and required too many steps and too much time to do what I wanted. I also wanted my drawings to look like real paintings, and to get the feeling that I am painting, and Krita’s brushes do just that.

It is hard for me to choose a favorite illustration in “Bird Brains”; I like them all and I know how many hours I spent on each. But if I had to, I would say the superb lyrebird, pages 28 and 29. I like how this bird is walking and singing at the same time, and I like how I could render its plumage while giving him a real-life posture.

I also like the striated heron, page 60 and 61. Herons are my favorite birds and I like the contrast between the pink and the green of the lilypads. Overall I am very happy with the illustrations in this book and I am planning on doing more scientific books for kids and possibly try fiction as well.

You can get it here from Amazon or here from Book Depository.

## June 21, 2017

Stellarium 0.16.0 is a stable version (based on Qt5.6 but it can still be built from sources with Qt5.4) that introduces some new features and closes 38 bug and wishlist reports.

New features include
- RemoteSync plugin, which allows running several connected instances of Stellarium.
- Non-spherical models for solar system objects like asteroids and small moons.
- Solar system config file is now split into two parts.
- AstroCalc feature extension: What's Up Tonight, graphs, ...
- DSO: Addition of catalogs of peculiar galaxies
- New Skycultures: Belarusian, Hawaiian Star Lines
- Telescope plugin: support for the RTS2 telescope system.
- Location can now be read from a GPS device.

A huge thanks to the people who helped us a lot by reporting bugs!

Full list of changes:
- Added support of irregular solar system objects (3D models of minor bodies) (LP: #1153171)
- Added GPS devices support (LP: #1448673)
- Added splitting of ssystem.ini data: there are now separate ssystem_major.ini and ssystem_minor.ini files. Only the latter should be edited by users.
- Added a few more timezone replacements.
- Added support for asterisms in sky cultures
- Added better identification of existing serial ports for GPS
- Added context support for constellation and asterism names
- Added RTS2 support to the Telescope Control plugin
- Added a new option in the config.ini file, especially for planetariums (astro/flag_forced_meteor_activity=(false|true)), to show sporadic meteor activity without atmosphere
- Added support of date and time formatting settings from main app to AstroCalc tool.
- Added a line for the approximate time of the meridian passing in AstroCalc tool (LP: #1652523)
- Added TLE tracking to RTS2 telescopes
- Added different star scales in the Oculars plugin, even separately for ocular and CCD views (LP: #1656940)
- Added information on magnification of the combination of eyepiece/lens/telescope in proportions of the telescope diameters.
- Added configurable options to AstroCalc tool
- Added support for the Catalan (Valencian) language
- Added 'What's Up Tonight' tool - AstroCalc subsystem (LP: #1080408)
- Added more data for analysis to the Exoplanets plugin.
- Added calculation of the list of visible objects for the current location (AstroCalc)
- Added tool to remove custom markers by coordinates
- Added support for double and variable stars in the AstroCalc tool
- Added support for translating novae names (parsing the nova name to extract the constellation name and year of the flash)
- Added lists of bright double and bright variable stars to Search Tool
- Added customized buttons for toggling the ICRS/Galactic/Ecliptic grids (LP: #730689)
- Added customized buttons for toggling constellation boundaries (LP: #1249239)
- Added import/export of bookmarks (LP: #1675078)
- Added confirmation before deleting landscape (LP: #1635137)
- Added guessing of the location name, used when the spaceship is landing (LP: #1220561)
- Added display of proper motions for some stars
- Added an optional indication of mount mode (LP: #1172860)
- Added an option to toggle the use of button backgrounds on the bottom bar (LP: #1589702)
- Added config option for planet apparent magnitude configuration
- Added description to the planets magnitude algorithm
- Added contrast index for DSO
- Added new option to configure behaviour of Satellites (Satellites/time_rate_limit = 1.0)
- Added a scripting function to retrieve property names (helpful for configuring RemoteSync).
- Added 3 additional catalogs to our DSO catalog (Arp, VV, PK)
- Added packing of DSO catalog
- Added display of the groups of artificial satellites
- Added Hawaiian Starlines sky culture
- Added list of bright stars with high proper motion to the Search Tool/Lists and AstroCalc/Positions features
- Added lunar magnitude to sky brightness computation (brightness variation during lunar eclipses!) (LP: #1471546)
- Added property handling for the labels for ArchaeoLines plugin.
- Added Ukrainian translation for Belarusian skyculture
- Added Ukrainian translation for Hawaiian Starlines skyculture
- Added context and improved English phrasing (AstroCalc)
- Added missed zh_HK zh_TW zh_CN descriptions for western sky-culture (LP: #1698473)
- Added Belarusian description for Belarusian sky culture (LP: #1698535)
- Added Bengali description for Belarusian sky culture (LP: #1698608)
- Added 'simulation speed' for tooltip of time (LP: #1698510)
- Added saving an angular separation option for phenomena
- Fixed crash when trying to use SIMBAD in offline mode (LP: #1674836)
- Fixed build scripts to update Index once more just before final run.
- Fixed COSPAR designation parser.
- Fixed wrong extinction coordinate frame of Zodiacal Light (LP: #1675699)
- Fixed a missing initialisation (avoids crash at program end)
- Fixed bug for loading default scenery on non-English locale in Scenery3D plugin
- Fixed typo in name of dark nebula LDN 935 (LP: #1679066)
- Fixed Scripting Engine: avoiding broken script when calling waitFor() after the point in time to wait.
- Fixed infoMap data for comets.
- Fixed issue of reloading of DSO names when filter of catalogs is updated.
- Fixed small cosmetic bug for SIMBAD status line.
- Fixed the opposition/conjunction longitude line: the line follows the ecliptic pole on date now (LP: #1687307)
- Fixed very stupid bug for Date & Time dialog.
- Fixed translation on-the-fly issue for AstroCalc tool.
- Fixed IAU constellation label for stars with high proper motion (LP: #1690615)
- Fixed crash when AstroCalc tool is active and we are on the spaceship
- Fixed several Coverity issues
- Fixed bug which disabled the bright flare-type Iridium point source drawing
- Fixed fullscreen behaviour on switching tasks (Alt+Tab) (LP: #550337)
- Fixed dynamic eye adaptation behaviour when persistent orbits are enabled
- Fixed switching horizontal/equatorial coordinates for Solar system objects (AstroCalc/Positions)
- Fixed crash when observer is flying on spaceship
- Fixed storing torchlight and coordinate display flag to config (Scenery3D) (LP: #1502245)
- Fixed placing a custom markers on HighDPI devices (LP: #1688985)
- Fixed infostring for ecliptical coordinates data (Nutation)
- Fixed strings consistency for Solar System Editor plug-in (LP: #1698783)
- Fixed string overlap with FOV and FPS labels (LP: #1698789)
- Restored searchability of telescope names (LP: #1686857)
- Updated Scenery3D plugin
- Updated Satellites plugin: refactoring the source code and speed-up rendering of satellites
- Updated rule to create a directory for screenshots (LP: #1626686)
- Updated GUI: refactoring blocks
- Updated GUI behaviour: the map in the LocationDialog is now resizable.
- Updated GUI behaviour: enabled low resolution for High DPI devices.
- Updated a GUI: added a few GUI text improvements
- Updated InnoSetup script
- Updated and revised stars names
- Updated Historical Supernovae catalog (Added SN 2017cbv)
- Updated meteor showers catalog (Added data for year 2017)
- Updated AstroCalc tool: increased an accuracy of 'Altitude vs. Time' diagram
- Updated AstroCalc tool: speed-up calculations for some types of phenomena
- Updated AstroCalc tool: extension of features for Ephemeris Tool
- Updated AstroCalc tool: improve WUT
- Updated filters for DSO objects in AstroCalc/WUT tool
- Updated sorting rules for AstroCalc/Positions tool
- Updated filters for Solar system bodies (AstroCalc/Positions)
- Updated tab rules for Search Tool
- Updated list of locations
- Updated scripts and scripting engine
- Updated list of DSO's names
- Updated headers for AstroCalc tools
- Updated 'Go to home' feature.
- Updated Bookmarks tool.
- Updated API documentation
- Updated Exoplanets plugin: improve placing of the exoplanet systems
- Updated Oculars plugin: the limit for diameter of binoculars aperture upped to 200mm
- Updated calculation for boundaries of IAU constellations
- Updated default supernovae catalog
- Updated Belarusian translation for skycultures
- Updated cmake variable names
- Updated Ocean landscape
- Updated RemoteControl panels with new functionality
- Updated rules for TLE updates: never update TLEs for any date before Oct 4, 1957, 19:28:34 GMT ;-)
- Removed script 'Analemma'
- Removed useless vertex color data from asteroid models.
- Removed Polynesian sky culture (replaced by Hawaiian starlines)

## June 20, 2017

This is one of those notes to self, in case I want to redo, extend or modify this in the future... Not sure it is of any interest to anybody less nerdy, but here it goes for your enjoyment anyway :) I have been using powerline for quite some time; it prints fancy prompts in your terminal, where...

## June 17, 2017

Some time ago, we got in touch with a team from Microsoft that was reaching out to projects like Krita and Inkscape. They were offering to help our projects to publish in the Windows Store, doing the initial conversion and helping us get published.

We decided to take them up on their offer. We have had the intention to offer Krita on the Windows Store for quite some time already, only we never had the time to get it done.

Putting Krita in the Windows Store makes Krita visible to a whole new group of people. Plus…

### Money

And we wanted to do the same as on Steam, and put a price-tag on Krita in the store. Publishing Krita on the Store takes time, and the Krita project really needs funding at the moment. (Note, though, that buying Krita in the Windows Store means part of your money goes to Microsoft: it’s still more effective to donate).

In return, if you get Krita from the Windows Store, you get automatic updates, and it becomes really easy to install Krita on all your Windows systems. Krita will also run in a sandbox, like other Windows apps.

Basically, you’re paying for convenience, and to help the project continue.

And there’s another reason to put Krita in the Windows Store: to make sure we’re doing it, and not someone else, unconnected to the project.

Krita is free software under the GNU General Public License. Having Krita in the Windows Store doesn’t change that. The Store page has links to the source code (though they might be hardish to find, we don’t control the store layout), and that contains instructions on how to build Krita. If you want to turn your own build into an appx bundle, that’s easy enough.

You can use the Desktop App Converter directly on your build, or you can use it on the builds we make available.

There are no functional differences between Krita as downloaded from this website, and Krita as downloaded from the Windows store. It’s the same binaries, only differently packaged.

### Steam

We currently still have Krita on Steam, too. We intend to keep it on Steam, and are working on adding the training videos to Steam as well. People who have purchased the lifetime package of Krita Gemini will get all the videos as they are uploaded.

We’re also working on getting Krita 3 into Steam, as a new product, at the same price as Krita in the Windows store — and the same story. Easy updates and installs on all your systems, plus, a purchase supports Krita development.

Additionally, it looks like we might find some funding for updating Krita Gemini to a new version. It’ll be different, because the Gemini approach turns out to be impossible with Qt 5 and Qt Quick 2: we have already spent several thousands of euros on trying to get that to work.

Still, we have to admit that Krita on Steam is slow going. It’s not the easiest app store to work with (that is Ubuntu’s Snap), and uploading all the videos takes a lot of time!

## June 16, 2017

We've had a pair of ash-throated flycatchers in the nest box I set up in the yard. I've been watching them bring bugs to the nest for a couple of weeks now, but this morning they've been acting unusual: fluttering around the corner of the house near my office window, calling to each other, not spending nearly as much time near the nest. I suspect one or more of the chicks may have fledged this morning, though I have yet to see more than two flycatchers at once. They still return to the nest box occasionally (one of them just delivered a big grasshopper), so not all the chicks have fledged yet. Maybe if I'm lucky I'll get to see one fledge.

I hope they're not too affected by the smoky air. We have two fires filling the air with smoke: the Bonita Fire, 50 miles north, and as of yesterday a new fire in Jemez Springs, only about half that distance. Yesterday my eyes were burning, my allergies were flaring up, and the sky was worse than the worst days in Los Angeles in the 70s. But it looks like the firefighters have gotten a handle on both fires; today is still smoky, with a major haze down in the Pojoaque Valley and over toward Albuquerque, but the sky above is blue and the smoke plume from Jemez Springs is a lot smaller and less dark than it was yesterday. Fingers crossed!

And just a few minutes ago, a buck with antlers in velvet wandered into our garden to take a drink at the pond. Such a nice change from San Jose!

## June 15, 2017

It’s still early days, and we have to say this up-front: these builds are not ready for daily work. We mean it, you might luck out, but you might also seriously lose work. Please test, and please report issues on bugs.kde.org! (But check whether your issue has already been reported…) That said…

Here are the first builds of Krita 4.0 pre-alpha!

This is as big, if not bigger a step than it was going from Krita 2.9 to Krita 3.0. No, we haven’t ported Krita to Qt6 (phew!), but we have replaced the entire vector layer system: instead of using the Open Document Graphics standard, Krita now uses the SVG standard to store vector information. This makes Krita more compatible with other applications, like Inkscape. Krita can still load existing Krita documents with ODG vector layers, but will only save SVG vector layers. Once a file has been saved with Krita 4.0 pre-alpha, Krita 3.x cannot open vector layers in that file any more.

We have also rewritten a lot of the interaction with vector objects, to make working with vector layers easier and more productive, and we’d like your feedback on that as well.

And of course, this isn’t the only big change.

There is a complete new airbrush system, developed by Allen Marshall, that replaces the existing airbrush system. This will affect brush presets that use the airbrush option, but the new system is so much better that there was no reason to keep the old system in parallel.

Eugene Ingerman has added a healing brush tool. Just select the sticky plaster icon in the toolbox, and paint over the area you want to be patched out!

There is a new system for saving images that should be much safer than the old one, and that warns you when saving to a format that will lose you data.

There’s a much improved palette docker, by Wolthera van Hövell tot Westerflier, that allows you to organize a palette in groups of colors, and reorganize the palettes using drag and drop, and edit swatches by double-clicking.

There’s a new docker that makes it possible to load SVG symbols and drag and drop them as shapes onto the image. Handy for speech bubbles, and we’ve included David Revoy’s Pepper and Carrot speech bubble library to get you started!

And there’s much more ready for you to explore and experiment with!

We’re still working on the new text tool. We got the basics working only last week, but that isn’t in these development builds yet. It’s too rough for even that!

The Python plugin has been merged, and is ready for testing, but here we’ve run into another problem: we haven’t yet been able to figure out how to bundle Python and the Python modules for scripting Krita. It works fine when building Krita from source on supported Linux systems. We haven’t managed to build it on Windows or on OSX yet, at all. If you can help us with that, please contact us!

The Scripter — ad-hoc scripting in Krita, created by Eliakin Costa.

#### Windows

Note for Windows users: if you encounter crashes, please follow these instructions to use the debug symbols so we can figure out where Krita crashes.

For development builds, we only create 64 bits windows portable zip files, Linux appimages and OSX disk images.

### Source code

#### Key

The Linux appimage and the source tarball are signed. You can retrieve the public key over https here: 0x58b9596c722ea3bd.asc. The signatures are here.

#### Support Krita

Krita is a free and open source project. Please consider supporting the project with donations or by buying training videos or the artbook! With your support, we can keep the core team working on Krita full-time.

## June 12, 2017

The IMAGE team of the research laboratory GREYC in Caen/France is pleased to announce the release of a new major version (numbered 2.0) of its project G’MIC: a generic, extensible, and open source framework for image processing. Here, we present the main advances made in the software since our last article. The new features presented here include the work carried out over the last twelve months (versions 2.0.0 and 1.7.x, for x varying from 2 to 9).

# 1. G’MIC: A brief overview

G’MIC is an open-source project started in August 2008, by the IMAGE team. This French research team specializes in the fields of algorithms and mathematics for image processing. G’MIC is distributed under the CeCILL license (which is GPL compatible) and is available for multiple platforms (GNU/Linux, MacOS and Windows). It provides a variety of user interfaces for manipulating generic image data, that is to say, 2D or 3D multispectral images (or sequences) with floating-point pixel values. This includes, of course, “classic” color images.

Fig.1.1: Logo of the G’MIC project, an open-source framework for image processing, and its mascot Gmicky.

The popularity of G’MIC mostly comes from the plug-in it provides for GIMP (since 2009). To date, there are more than 480 different filters and effects to apply to your images, which considerably enlarges the list of image processing filters available by default in GIMP.

G’MIC also provides a powerful and autonomous command-line interface, which is complementary to the CLI tools you can find in the famous ImageMagick or GraphicsMagick projects. There is also a web service, G’MIC Online, which allows applying image processing effects directly from a browser. Other (but less well known) G’MIC-based interfaces exist: a webcam streaming tool ZArt, a plug-in for Krita, a subset of filters available in Photoflow, Blender or Natron… All these interfaces are based on the CImg and libgmic libraries, which are portable, thread-safe and multi-threaded, via the use of OpenMP.

G’MIC has more than 950 different and configurable processing functions, for a library of only 6.5 MiB, representing a bit more than 180 kloc. The processing functions cover a wide spectrum of the image processing field, offering algorithms for geometric manipulations, colorimetric changes, image filtering (denoising and detail enhancement by spectral, variational, non-local methods, etc.), motion estimation and registration, display of primitives (2D or 3D mesh objects), edge detection, object segmentation, artistic rendering, etc. It is therefore a very generic tool for various uses, useful on the one hand for converting, visualizing and exploring image data, and on the other hand for designing complex image processing pipelines and algorithms (see these project slides for details).

# 2. A new versatile interface, based on Qt

One of the major new features of this version 2.0 is the re-implementation of the plug-in code, from scratch. The repository G’MIC-Qt developed by Sébastien (an experienced member of the team) is a Qt-based version of the plug-in interface, being as independent as possible of the widget API provided by GIMP.

Fig.2.1: Overview of version 2.0 of the G’MIC-Qt plug-in running for GIMP.

This has several interesting consequences:

• The plug-in uses its own widgets (in Qt), which makes it possible to have a more flexible and customizable interface than with the GTK widgets used by the GIMP plug-in API: for instance, the preview window becomes resizable at will, manages zooming by mouse wheel, and can be freely moved to the left or to the right. A keyword-based filter search engine has been added, as well as the possibility of choosing between a light or dark theme. The management of favorite filters has also been improved, and the interface even offers a new mode for setting the visibility of the filters. Interface personalization is now a reality.

• The plug-in also defines its own API, which is used to facilitate its integration in third-party software (other than GIMP). In practice, a software developer has to write a single file host_software.cpp implementing the functions of the API to make the link between the plug-in and the host application. Currently, the file host_gimp.cpp does this for GIMP as a host. But there is now also a stand-alone version available (file host_none.cpp) that runs this Qt interface in solo mode, from a shell (with the command gmic_qt).

• Boudewijn Rempt, project manager and developer of the marvelous painting software Krita, has also started writing such a file host_krita.cpp to make this “new generation” plug-in communicate with Krita. In the long term, this should replace the previous G’MIC plug-in implementation they made (currently distributed with Krita), which is aging and poses maintenance problems for developers.

Minimizing the integration effort for developers, sharing the G’MIC plug-in code between different applications, and offering a user interface that is as comfortable as possible have been the main objectives of this complete redesign. As you can imagine, this rewriting required a long and sustained effort, and we can only hope that it will raise interest among other software developers for whom having a consistent set of image processing filters could be useful (a file host_blender.cpp available soon? We can dream!). The animation below illustrates some of the features offered by this new Qt-based interface.

Fig.2.2: The new G’MIC-Qt interface in action.

Note that the old plug-in code written in GTK was also updated to work with the new version 2.0 of G’MIC, but it has fewer features and will probably not evolve further, unlike the Qt version.

# 3. Easing the work of cartoonists…

One of G’MIC’s purposes is to offer more filters and functions to process images. And that is precisely something where we have not relaxed our efforts, despite the number of filters already available in the previous versions!

In particular, this version comes with new and improved filters to ease the colorization of line-art. Indeed, we had the chance to host the artist David Revoy for a few days at the lab. David is well known to lovers of art and free software by his multiple contributions in these fields (in particular, his web comic Pepper & Carrot is a must-read!). In collaboration with David, we worked on the design of an original automatic line-art coloring filter, named Smart Coloring.

Fig.3.1: Use of the “Colorize line-art [smart coloring]” filter in G’MIC.

When drawing comics, the colorization of line-art is carried out in two successive steps: The original drawing in gray levels (Fig.3.2.1) is first pre-colored with solid areas, i.e. by assigning a unique color to each region or distinct object in the drawing (Fig.3.2.3). In a second step, the colourist reworks this pre-coloring, adding shadows, lights and modifying the colorimetric ambiance, in order to obtain the final colorization result (Fig.3.2.4). Practically, flat coloring results in the creation of a new layer that contains only piecewise constant color zones, thus forming a colored partition of the plane. This layer is then merged with the original line-art to get the colored rendering (merging both in multiplication mode, typically).

Fig.3.2: The different steps of a line-art coloring process (source: David Revoy).
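For illustration, here is a minimal numpy sketch (not G'MIC's code) of the multiplication-mode merge described above: after normalizing both layers to [0, 1], black lines survive the product while white paper takes the flat color.

```python
import numpy as np

def multiply_merge(lineart, flats):
    """Merge a grayscale line-art layer with a flat-color layer
    in 'multiply' blending mode (8-bit values in [0, 255])."""
    a = lineart.astype(np.float64) / 255.0
    b = flats.astype(np.float64) / 255.0
    return np.clip(a * b * 255.0, 0, 255).astype(np.uint8)

# Black lines stay black, white paper takes the flat color:
lineart = np.array([[0, 255]], dtype=np.uint8)    # one black, one white pixel
flats   = np.array([[200, 200]], dtype=np.uint8)  # uniform flat color
print(multiply_merge(lineart, flats))             # [[  0 200]]
```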

Artists admit it themselves: flat coloring is a long and tedious process, requiring patience and precision. Classical tools available in digital painting or image editing software do not make this task easy. For example, most filling tools (bucket fill) do not handle discontinuities in drawn lines very well (Fig.3.3.a), and things get even worse when lines are anti-aliased. It is then common for the artist to perform flat coloring by painting the colors manually with a brush on a separate layer (Fig.3.3.b), with all the precision problems that this implies (especially around the contour lines, Fig.3.3.c). See also this link for more details.

Fig.3.3: Classical problems encountered when doing flat coloring (source: David Revoy).

It may even happen that the artist decides to explicitly constrain his style of drawing, for instance by using aliased brushes in a higher resolution image, and/or by forcing himself to draw only connected contours, in order to ease the flat colorization work that has to be done afterwards.

The Smart Coloring filter developed in version 2.0 of G’MIC makes it possible to automatically pre-color an input line-art with little manual work. First, it analyses the local geometry of the contour lines (estimating their normals and curvatures). Second, it (virtually) completes the contours using spline curves. This virtual closure then allows the algorithm to fill objects even when their outlines are disconnected. Besides, this filter has the advantage of being quite fast to compute and gives coloring results of similar quality to the more expensive optimization techniques used in some proprietary software. The algorithm smoothly handles anti-aliased contour lines, and has two modes of colorization: by random colors (Fig.3.2.2 and Fig.3.4) or guided by color markers placed beforehand by the user (Fig.3.5).
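The spline-based contour completion itself is beyond a short snippet, but a much cruder stand-in for the same idea (close small gaps by dilating the strokes before bucket-filling) can be sketched in pure numpy. The function and its `gap` parameter are hypothetical, not part of G'MIC:

```python
import numpy as np
from collections import deque

def fill_with_closed_gaps(lines, seed, gap=1):
    """Bucket-fill a binary line-art (True = stroke) from `seed`, after
    dilating the strokes by `gap` pixels so that small holes in the
    contours no longer leak. A crude morphological stand-in for
    G'MIC's spline-based contour completion, not the actual algorithm."""
    walls = lines.copy()
    for _ in range(gap):              # naive 4-neighbor binary dilation
        w = walls.copy()
        w[1:, :] |= walls[:-1, :]
        w[:-1, :] |= walls[1:, :]
        w[:, 1:] |= walls[:, :-1]
        w[:, :-1] |= walls[:, 1:]
        walls = w
    mask = np.zeros(lines.shape, dtype=bool)   # filled region
    queue = deque([seed])
    while queue:                      # plain BFS flood fill
        y, x = queue.popleft()
        if not (0 <= y < lines.shape[0] and 0 <= x < lines.shape[1]):
            continue
        if mask[y, x] or walls[y, x]:
            continue
        mask[y, x] = True
        queue.extend([(y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)])
    return mask
```

Note that dilating also shrinks the filled region by `gap` pixels; a real implementation would grow the result back afterwards, which is one reason the geometric approach of Smart Coloring is preferable.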

Fig.3.4: Using the G’MIC “Smart Coloring” filter in random color mode, for line-art colorization (source: David Revoy).

In “random” mode, the filter generates a piecewise constant layer that is very easy to recolor with correct hues afterwards. This layer indeed contains only flat color regions, and the classic bucket fill tool is effective here to quickly reassign a coherent color to each existing region synthesized by the algorithm.

In the user-guided markers mode, color spots placed by the user are extrapolated in a way that respects the geometry of the original drawing as much as possible, taking into account the discontinuities in the pencil lines, as clearly illustrated by the figure below:

Fig.3.5: Using the G’MIC “Smart Coloring” filter in user-guided color markers mode, for line-art colorization (source: David Revoy).

This innovative flat coloring algorithm has been pre-published on HAL (in French): A semi-guided high-performance flat coloring algorithm for line-arts. Curious readers will find all the technical details of the algorithm there. The recurring discussions we had with David Revoy during the development of this filter enabled us to improve the algorithm step by step, until it became really usable in production. This method has been used successfully (and therefore validated) for the pre-colorization of the whole episode 22 of the webcomic Pepper & Carrot.

The wisest of you know that G’MIC already had a line-art colorization filter! True, but unfortunately it did not manage disconnected contour lines so well (such as the example in Fig.3.5), and could then require the user to place a large number of color spots to guide the algorithm properly. In practice, the performance of the new flat coloring algorithm is far superior.

And since the new algorithm has no objection to anti-aliased lines, why not create some? That is the purpose of another new filter, “Repair / Smooth [antialias]”, able to add anti-aliasing to lines in cartoons originally drawn with aliased brushes.

Fig.3.6: Filter “Smooth [antialias]” smooths contours to reduce aliasing effect in cartoons (source: David Revoy).

# 4. …Not to forget the photographers!

“Colorizing drawings is nice, but my photos are already in color!”, kindly remarks the impatient photographer. Don’t be cruel! Many new filters related to the transformation and enhancement of photos have also been added in G’MIC 2.0. Let’s take a quick look at what we have.

## 4.1. CLUTs and colorimetric transformations

CLUTs (Color Lookup Tables) are functions for colorimetric transformations defined in the RGB cube: for each color (Rs,Gs,Bs) of a source image Is, a CLUT assigns a new color (Rd,Gd,Bd) transferred to the destination image Id at the same position. These processing functions may be truly arbitrary, thus very different effects can be obtained according to the different CLUTs used. Photographers are therefore generally fond of them (especially since these CLUTs are also a good way to simulate the color rendering of certain old films).
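As a rough sketch of the definition above (not how G'MIC stores or applies its CLUTs), applying a CLUT kept as an s*s*s*3 array reduces to an indexed lookup; real implementations interpolate trilinearly, while this illustration uses nearest neighbor:

```python
import numpy as np

def apply_clut(img, clut):
    """Apply a CLUT stored as an s*s*s*3 array to an 8-bit RGB image,
    using nearest-neighbor lookup (real implementations interpolate)."""
    s = clut.shape[0]
    # Map each channel value in [0, 255] to the nearest grid index in [0, s-1].
    idx = np.clip(np.round(img.astype(np.float64) * (s - 1) / 255.0),
                  0, s - 1).astype(int)
    return clut[idx[..., 0], idx[..., 1], idx[..., 2]].astype(np.uint8)
```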

In practice, a CLUT is stored as a 3D volumetric color image (possibly “unwrapped” along the z = B axis to get a 2D version). This may quickly become cumbersome when several hundreds of CLUTs have to be managed. Fortunately, G’MIC has a quite efficient CLUT compression algorithm (already mentioned in a previous article), which has been improved version after version. So it was finally in a fairly relaxed atmosphere that we added more than 60 new CLUT-based transformations in G’MIC, for a total of 359 usable CLUTs, all stored in a data file that does not exceed 1.2 MiB. By the way, let us thank Pat David, Marc Roovers and Stuart Sowerby for their contributions to these color transformations.

Fig.4.1.1: Some of the new CLUT-based transformations available in G’MIC (source: Pat David).

But what if you already have your own CLUT files and want to use them in GIMP? No problem! The new filter “Film emulation / User-defined” allows applying such transformations from a CLUT data file, with partial support for files with the .cube extension (a CLUT file format proposed by Adobe, and encoded in ASCII o_O!).

And for the most demanding, who are not satisfied with the existing pre-defined CLUTs, we have designed a very versatile filter “Colors / Customize CLUT“, that allows the user to build their own custom CLUT from scratch: the user places colored keypoints in the RGB color cube and these markers are interpolated in 3D (according to a Delaunay triangulation) in order to rebuild a complete CLUT, i.e. a dense function in RGB. This is extremely flexible, as in the example below, where the filter has been used to change the colorimetric ambiance of a landscape, mainly altering the color of the sky. Of course, the synthesized CLUT can be saved as a file and reused later for other photographs, or even in other software supporting this type of color transformations (for example RawTherapee or Darktable).

Fig.4.1.2: Filter “Customize CLUT” used to design a custom color transform in the RGB cube.

Fig.4.1.3: Result of the custom colorimetric transformation applied to a landscape.
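A rough stand-in for the keypoint idea behind “Customize CLUT” can be sketched with inverse-distance weighting in place of the Delaunay triangulation that G'MIC actually uses (the function and parameters are invented for illustration):

```python
import numpy as np

def clut_from_keypoints(keys, size=17, power=2.0):
    """Build a dense size^3 CLUT from sparse (position_rgb, target_rgb)
    keypoints by inverse-distance weighting. G'MIC interpolates with a
    Delaunay triangulation; IDW is a simpler stand-in."""
    pos = np.array([k[0] for k in keys], dtype=np.float64)   # (n, 3) positions
    col = np.array([k[1] for k in keys], dtype=np.float64)   # (n, 3) target colors
    g = np.linspace(0, 255, size)
    grid = np.stack(np.meshgrid(g, g, g, indexing='ij'), axis=-1).reshape(-1, 3)
    d2 = ((grid[:, None, :] - pos[None, :, :]) ** 2).sum(-1)  # (m, n) squared dists
    w = 1.0 / (d2 ** (power / 2) + 1e-9)                      # IDW weights
    lut = (w @ col) / w.sum(axis=1, keepdims=True)
    return lut.reshape(size, size, size, 3)
```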

To stay in the field of color manipulation, let us also mention the appearance of the filter “Colors / Retro fade” which creates a “retro” rendering of an image with grain generated by successive averages of random quantizations of an input color image.

Fig.4.1.4: Filter “Retro fade” in the G’MIC plug-in.
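The phrase “successive averages of random quantizations” can be sketched as follows; the parameter names are invented, and the actual “Retro fade” filter differs in its details:

```python
import numpy as np

def retro_fade(img, levels=8, iterations=10, jitter=16.0, seed=0):
    """Very loose sketch of a 'retro fade' grain effect: average several
    randomly-jittered quantizations of the input image."""
    rng = np.random.default_rng(seed)
    img = img.astype(np.float64)
    step = 256.0 / levels
    acc = np.zeros_like(img)
    for _ in range(iterations):
        noisy = img + rng.uniform(-jitter, jitter, img.shape)  # random perturbation
        acc += np.clip(np.round(noisy / step) * step, 0, 255)  # quantize, accumulate
    return (acc / iterations).astype(np.uint8)
```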

## 4.2. Making the details pop out

Many photographers are looking for ways to process their digital photographs so as to bring out the smallest details of their images, sometimes even to exaggeration, and we can find some of them in the pixls.us forum. Looking at how they perform allowed us to add several new filters for detail and contrast enhancement in G’MIC. In particular, we can mention the filters “Artistic / Illustration look” and “Artistic / Highlight bloom“, which are direct re-implementations of the tutorials and scripts written by Sébastien Guyader as well as the filter “Light & Shadows / Pop shadows” suggested by Morgan Hardwood. Being immersed in such a community of photographers and cool guys always gives opportunities to implement interesting new effects!

Fig.4.2.1: Filters “Illustration look” and “Highlight bloom” applied to a portrait image.

In the same vein, G’MIC gets its own implementation of the Multi-scale Retinex algorithm, something that was already present in GIMP, but here enriched with additional controls to improve the luminance consistency in images.

Fig.4.2.2: Filter “Retinex” for improving luminance consistency.

Our friend and great contributor to G’MIC, Jérome Boulanger, also implemented and added a dehazing filter “Details / Dcp dehaze” to attenuate the fog effect in photographs, based on the Dark Channel Prior algorithm. Setting the parameters of this filter is kinda hard, but it sometimes gives spectacular results.

Fig.4.2.3: Filter “DCP Dehaze” to attenuate the fog effect.

And to finish with this subsection, let us mention the implementation in G’MIC of the Rolling Guidance algorithm, a method to simplify images that has become a key step used in many newly added filters. This was especially the case in this quite cool filter for image sharpening, available in “Details / Sharpen [texture]“. This filter works in two successive steps: First, the image is separated into a texture component + a color component, then the details of the texture component only are enhanced before the image is recomposed. This approach makes it possible to highlight all the small details of an image, while minimizing the undesired halos near the contours, a recurring problem happening with more classical sharpening methods (such as the well known Unsharp Mask).

Fig.4.2.4: The “Sharpen [texture]” filter shown for two different enhancement amplitudes.
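The two-step split-and-boost scheme described above can be sketched in numpy, with a plain box blur standing in for the Rolling Guidance decomposition that the real filter uses:

```python
import numpy as np

def blur3(img):
    """3x3 box blur with replicated edges."""
    p = np.pad(img.astype(np.float64), 1, mode='edge')
    h, w = img.shape
    return sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3)) / 9.0

def sharpen_texture(img, amount=1.5):
    """Two-step sharpening sketch: split into base + texture, then
    amplify only the texture before recomposing. G'MIC's filter uses
    the Rolling Guidance filter for the split; a box blur stands in here."""
    base = blur3(img)
    texture = img.astype(np.float64) - base
    return np.clip(base + amount * texture, 0, 255).astype(np.uint8)
```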

As you may know, a lot of photo retouching techniques require the creation of one or several “masks”, that is, the isolation of specific areas of an image to receive differentiated processing. For example, the very common technique of luminosity masks is a way to treat shadows and highlights differently in an image. G’MIC 2.0 introduces an interesting new filter “Colors / Color mask [interactive]” that implements a relatively sophisticated algorithm (albeit computationally demanding) to help create complex masks. This filter asks the user to hover the mouse over a few pixels that are representative of the region to keep. The algorithm learns the corresponding set of colors or luminosities in real time and then deduces the set of pixels that composes the mask for the whole image (using Principal Component Analysis on the RGB samples).
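A non-interactive sketch of the statistical part of such a mask (fit the picked samples, then threshold a Mahalanobis distance) might look as follows; the real filter is interactive and considerably more sophisticated:

```python
import numpy as np

def color_mask(img, samples, thresh=2.5):
    """Sketch of a statistical color mask: fit the mean and covariance
    of a few user-picked RGB samples, then keep pixels whose
    Mahalanobis distance to that model is below `thresh`."""
    s = np.asarray(samples, dtype=np.float64)          # (n, 3) picked colors
    mean = s.mean(axis=0)
    cov = np.cov(s, rowvar=False) + 1e-3 * np.eye(3)   # regularized covariance
    icov = np.linalg.inv(cov)
    d = img.astype(np.float64).reshape(-1, 3) - mean
    m = np.einsum('ij,jk,ik->i', d, icov, d)           # squared distances
    return (np.sqrt(m) < thresh).reshape(img.shape[:2])
```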

Once the mask has been generated by the filter, the user can easily modify the corresponding pixels with any type of processing. The example below illustrates the use of this filter to drastically change the color of a car.

Fig.4.3.1: Changing the color of a car, using the filter “Color mask [interactive]“.

It takes no more than a minute and a half to complete, as shown in the video below:

Fig.4.3.2: Changing the color of a car, using filter “Color mask [interactive]” (video tutorial).

This other video demonstrates the same technique to change the color of the sky in a landscape.

Fig.4.3.3: Changing the color of the sky in a landscape, using filter “Color mask [interactive]” (video tutorial).

# 5. And for the others…

Since illustrators and photographers are now satisfied, let’s move on to some more exotic filters, recently added to G’MIC, with interesting outcomes!

## 5.1. Average and median of a series of images

Have you ever wondered how to easily estimate the average or median frame of a sequence of input images? The libre aficionado Pat David, creator of the site pixls.us, often asked the question: first when he tried to denoise images by combining several shots of the same scene, then when he wanted to simulate a longer exposure time by averaging photographs taken successively, and finally when calculating averages of various kinds of images for artistic purposes (for example, frames of music video clips, covers of Playboy magazine, or celebrity portraits).

Hence, with his cooperation, we added the new commands -median_files, -median_videos, -average_files and -average_videos to compute all these image features very easily using the CLI tool gmic. The example below shows the results obtained from a sub-sequence of the “Big Buck Bunny” video. We have simply invoked the following commands from the Bash shell:

$ gmic -average_video bigbuckbunny.mp4 -normalize 0,255 -o average.jpg
$ gmic -median_video bigbuckbunny.mp4 -normalize 0,255 -o median.jpg
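The same computation is easy to reproduce outside G'MIC on an already-decoded stack of frames (the decoding step is assumed here); a numpy sketch:

```python
import numpy as np

def average_frames(frames):
    """Mean image of a stack of equally-sized frames."""
    return np.mean(np.stack(frames).astype(np.float64), axis=0).astype(np.uint8)

def median_frames(frames):
    """Per-pixel median of a stack: transient objects vanish, which is
    why the median is popular for denoising series of shots."""
    return np.median(np.stack(frames), axis=0).astype(np.uint8)

# Two steady frames plus one outlier: the median ignores the outlier.
frames = [np.full((2, 2), 100, np.uint8),
          np.full((2, 2), 100, np.uint8),
          np.full((2, 2), 220, np.uint8)]
print(median_frames(frames)[0, 0])   # 100
print(average_frames(frames)[0, 0])  # 140
```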


Fig.5.1.1: Sequence in the “Big Buck Bunny” video, directed by the Blender foundation.

Fig.5.1.2: Result: Average image of the “Big Buck Bunny” sequence above.

Fig.5.1.3: Result: Median image of the “Big Buck Bunny” sequence above.

And to stay in the field of video processing, we can also mention the addition of the commands -morph_files and -morph_video that render temporal interpolations of video sequences, taking the estimated intra-frame object motion into account, thanks to a quite smart variational and multi-scale estimation algorithm.

The video below illustrates the rendering difference obtained for the retiming of a sequence using temporal interpolation, with (right) and without (left) motion estimation.

Fig.5.1.4: Video retiming using G’MIC temporal morphing technique.

## 5.2. Deformations and “Glitch Art”

Those who like to mistreat their images aggressively will be delighted to learn that a bunch of new image deformation and degradation effects have appeared in G’MIC.

First of all, the filter “Deformations / Conformal maps” allows one to distort an image using conformal maps. These deformations have the property of preserving the angles locally, and are most often expressed as functions of complex numbers. In addition to playing with predefined deformations, this filter allows budding mathematicians to experiment with their own complex formulas.

Fig.5.2.1: Filter “Conformal maps” applying an angle-preserving transformation to the image of Mona Lisa.
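For the curious, warping an image through a complex map such as z² can be sketched as follows; this hypothetical illustration uses nearest-neighbor sampling and paints unmapped pixels black, where G'MIC's filter is more careful about interpolation and domains:

```python
import numpy as np

def conformal_warp(img, f=lambda z: z * z):
    """Warp an image through a complex map f: for each destination
    pixel, evaluate f on normalized coordinates in [-1, 1]^2 and sample
    the source there (nearest neighbor, black outside)."""
    h, w = img.shape[:2]
    y, x = np.mgrid[0:h, 0:w]
    z = (2 * x / (w - 1) - 1) + 1j * (2 * y / (h - 1) - 1)  # coords as complex
    fz = f(z)
    sx = np.round((fz.real + 1) * (w - 1) / 2).astype(int)  # back to pixel coords
    sy = np.round((fz.imag + 1) * (h - 1) / 2).astype(int)
    inside = (sx >= 0) & (sx < w) & (sy >= 0) & (sy < h)
    out = np.zeros_like(img)
    out[inside] = img[sy[inside], sx[inside]]
    return out
```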

Fans of Glitch Art may also be interested in several new filters whose renderings look like image encoding or compression artifacts. The effect “Degradations / Pixel sort” sorts the pixels of a picture by row or by column according to different criteria (and possibly restricted to masked regions), as initially described on this page.

Fig.5.2.2: Filter “Pixel sort” for rendering a kind of “Glitch Art” effect.
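A minimal version of the effect, sorting each row by luminance, fits in a few lines of numpy; the real filter additionally offers column mode, masks, and other sort keys:

```python
import numpy as np

def pixel_sort_rows(img):
    """Sort each row of an RGB image by pixel luminance: a minimal
    version of the 'Pixel sort' glitch effect."""
    luma = img @ np.array([0.299, 0.587, 0.114])   # (h, w) brightness key
    order = np.argsort(luma, axis=1)
    return np.take_along_axis(img, order[..., None], axis=1)
```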

“Degradations / Pixel sort” also has two little brothers, the filters “Degradations / Flip & rotate blocks” and “Degradations / Warp by intensity“. The first divides an image into blocks and allows rotating or mirroring them, potentially only for certain color characteristics (like hue or saturation, for instance).

Fig.5.2.3: Filter “Flip & rotate blocks” applied to the hue only to obtain a “Glitch Art” effect.

The second locally deforms an image with more or less amplitude, according to its local geometry. Here again, this can lead to the generation of very strange images.

Fig.5.2.4: Filter “Warp by intensity” applied to the image of Mona Lisa (poor Mona!).

It should be noted that these filters were largely inspired by the Polyglitch plug-in, available for Paint.NET, and have been implemented after a suggestion from a friendly user (yes, yes, we try to listen to our most friendly users!).

## 5.3. Image simplification

What else do we have in store? A new image abstraction filter, “Artistic / Sharp abstract“, based on the Rolling Guidance algorithm mentioned before. This filter applies contour-preserving smoothing to an image, and its main effect is to remove texture. The figure below illustrates its use to generate several levels of abstraction of the same input image, at different smoothing scales.

Fig.5.3.1: Creating abstractions of an image via the filter “Sharp abstract“.

In the same vein, G’MIC also gets a filter “Artistic / Posterize” which degrades an image to simulate posterization. Unlike the filter of the same name available by default in GIMP (which mainly tries to reduce the number of colors, i.e., do color quantization), our version adds spatial simplification and filtering to get a little closer to the rendering of old posters.

Fig.5.3.2: Filter “Posterize” of G’MIC, compared to the filter with same name available by default in GIMP.
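The “simplify spatially, then quantize” recipe can be sketched as follows; the smoothing pass and the parameters are invented stand-ins for the actual filter's processing:

```python
import numpy as np

def posterize(img, levels=4, smooth_iters=2):
    """Posterize sketch in the spirit of G'MIC's filter: smooth first
    (spatial simplification), then quantize the colors."""
    out = img.astype(np.float64)
    for _ in range(smooth_iters):               # crude smoothing pass
        p = np.pad(out, ((1, 1), (1, 1), (0, 0)), mode='edge')
        out = (p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2]
               + p[1:-1, 2:] + p[1:-1, 1:-1]) / 5.0
    step = 255.0 / (levels - 1)                 # snap to `levels` values per channel
    return (np.round(out / step) * step).astype(np.uint8)
```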

## 5.4. Other filters

If you still want more (and in this case one could say you are damn greedy!), we will end this section by discussing some of the new, but unclassifiable filters.

We start with the filter “Artistic / Diffusion tensors“, which displays a field of diffusion tensors, calculated from the structure tensors of an image (structure tensors are symmetric and positive-definite matrices, classically used for estimating local image geometry). To be quite honest, this feature was not originally developed for an artistic purpose, but users of the plug-in came across it by chance and asked for a GIMP filter made from it. And yes, it is quite pretty, isn’t it?

Fig.5.4.1: The “Diffusion tensors” filter and its multitude of colored ellipses.

From a technical point of view, this filter was actually an opportunity to introduce new drawing features into the G’MIC mathematical evaluator, and it has now become quite easy to develop G’MIC scripts for rendering custom visualizations of various image data. This is what has been done, for instance, with the command -display_quiver, reimplemented from scratch, which allows generating this type of rendering:

Fig. 5.4.2: Rendering vector fields with the G’MIC command -display_quiver.

For lovers of textures, we can mention the appearance of two fun new effects: First, the “Patterns / Camouflage” filter. As its name suggests, this filter produces a military camouflage texture.

Fig. 5.4.3: Filter “Camouflage“, to be printed on your T-shirts to go unnoticed at parties!

Second, the filter “Patterns / Crystal background” overlays several randomly colored polygons in order to synthesize a texture that vaguely looks like a crystal seen under a microscope. Pretty useful to quickly render colored image backgrounds.

Fig.5.4.4: Filter “Crystal background” in action.

And to end this long overview of new G’MIC filters developed since last year, let us mention “Rendering / Barnsley fern“. This filter renders the well-known Barnsley fern fractal. For curious people, note that the related algorithm is available on Rosetta Code, with even a code version written in the G’MIC script language, namely:

# Put this into a new file 'fern.gmic' and invoke it from the command line, like this:
# $ gmic fern.gmic -barnsley_fern

barnsley_fern :
  1024,2048
  -skip {"
    f1 = [ 0,0,0,0.16 ];           g1 = [ 0,0 ];
    f2 = [ 0.2,-0.26,0.23,0.22 ];  g2 = [ 0,1.6 ];
    f3 = [ -0.15,0.28,0.26,0.24 ]; g3 = [ 0,0.44 ];
    f4 = [ 0.85,0.04,-0.04,0.85 ]; g4 = [ 0,1.6 ];
    xy = [ 0,0 ];
    for (n = 0, n<2e6, ++n,
      r = u(100);
      xy = r<=1?((f1**xy)+=g1):
           r<=8?((f2**xy)+=g2):
           r<=15?((f3**xy)+=g3):
                 ((f4**xy)+=g4);
      uv = xy*200 + [ 480,0 ];
      uv[1] = h - uv[1];
      I(uv) = 0.7*I(uv) + 0.3*255;
    )"}
  -r 40%,40%,1,1,2

And here is the rendering generated by this function:

Fig.5.4.5: Fractal “Barnsley fern“, rendered by G’MIC.

# 6. Overall project improvements

All filters presented throughout this article constitute only the visible part of the G’MIC iceberg. They are in fact the result of many developments and improvements made “under the hood”, i.e., directly in the code of the G’MIC script language interpreter. This interpreter defines the basic language used to write all G’MIC filters and commands available to users. Over the past year, a lot of work has been done to improve the performance and the capabilities of this interpreter:

• The mathematical expression evaluator has been considerably enriched and optimized, with more functions available (especially for matrix calculus), support for strings, the introduction of const variables for faster evaluation, the ability to write variadic macros, to allocate dynamic buffers, and so on.

• New optimizations have also been introduced in the CImg library, including the parallelization of new functions (via the use of OpenMP). This C++ library provides the implementations of the “critical” image processing algorithms, and its optimization has a direct impact on the performance of G’MIC (in this respect, note that CImg is also released as a major version 2.0).

• Compiling G’MIC on Windows now uses a more recent version of g++ (6.2 rather than 4.5), with the help of Sylvie Alexandre. This actually has a huge impact on the performance of the compiled executables: some filters run up to 60 times faster than with the previous binaries (this is the case, for example, with the “Deformations / Conformal maps” filter discussed in section 5.2).

• Support for large .tiff images (BigTIFF format, with files that can be larger than 4 GB) is now enabled (read and write), as it is for 64-bit floating-point TIFF images.

• The 3D rendering engine built into G’MIC has also been slightly improved, with support for bump mapping. No filter currently uses this feature, but you never know, and we are preparing for the future!

Fig.6.1: Comparison of 3D textured rendering with (right) and without “Bump mapping” (left).

• And as it is always good to relax after a hard day’s work, we added the game of Connect Four to G’MIC :). It can be launched via the shell command $ gmic -x_connect4 or via the plug-in filter “Various / Games & demos / Connect-4“. Note that it is even possible to play against the computer, which has a decent but not unbeatable skill (the very simple AI uses the Minimax algorithm with a two-level decision tree).

Fig.6.2: The game of “Connect Four“, as playable in G’MIC.

Finally, let us mention the ongoing redesign of the G’MIC Online web service, with a beta version already available for testing. This re-development of the site, done by Christophe Couronne and Véronique Robert (both members of the GREYC laboratory), has been designed to better adapt to mobile devices. The first tests are more than encouraging. Feel free to experiment and share your impressions!

# 7. What to remember?

First, version 2.0 of G’MIC is clearly an important step in the life of the project, and the recent improvements are promising for future developments. The number of users seems to be increasing (and they are apparently satisfied!), and we hope that this will encourage open-source software developers to integrate our new G’MIC-Qt interface as a plug-in for their own software. In particular, we hope to see the new G’MIC in action under Krita soon; that would already be a great step!

Second, G’MIC continues to be an active project that evolves through meetings and discussions with members of the artist and photographer communities (particularly those who populate the forums and IRC channels of pixls.us and GimpChat). You will likely be able to find us there if you need more information, or just want to discuss things related to (open-source) image processing.

And while waiting for a hypothetical future article about the next release of G’MIC, you can always follow the day-to-day progress of the project via our Twitter feed.

Until then, long live open-source image processing!

Credit: Unless explicitly stated, the various non-synthetic images that illustrate this post come from Pixabay.

CoreOS Fest 2017 happened earlier this month in San Francisco. I had the joy of attending this conference. With a vendor-organized conference there’s always the risk of it being mostly a thinly-veiled marketing exercise, but this didn’t prove to be the case: there was a good community and open-source vibe to it, probably because CoreOS itself is for the most part an open-source company.

Also fun was encountering a few old-time GNOME developers such as Matthew Garrett (now at Google) and Chris Kühl (who now runs kinvolk). It’s remarkable how good of a talent incubator the GNOME project is. Look at any reasonably successful project and chances are high you’ll find some (ex-)GNOME people.

I also had the pleasure of presenting the experiences and lessons learned related to introducing Kubernetes at Ticketmatic. Annotated slides and a video of the talk can be found here.

## June 11, 2017

We plan to release Stellarium 0.16.0 around June 21 (at the moment, the first release candidate has been published for testing of the planetarium and checking of translations).

This will be another major release with bug fixes and a few important new features - one more step toward version 1.0. This version has many changes in the GUI, and we have added many new lines for translation. If you can assist with translation into any of the 140 languages which Stellarium supports, please go to Launchpad Translations and help us out: https://translations.launchpad.net/stellarium

If you can help translate the descriptions of sky cultures and landscapes into your language, we would be grateful. Create your own branch, add 'description.YOUR-LANG-CODE.utf8' files based on description.en.utf8, and translate them!

You can also send translated description.YOUR-LANG-CODE.utf8 files to me for inclusion in Stellarium.

Thank you!

## June 09, 2017

That worked, but the problem is that if you use your own copy of sgml-mode.el, you miss out on any other improvements to HTML and SGML mode. There have been some good ones, like smarter rewrap of paragraphs. I had previously tried lots of ways of customizing sgml-mode without actually replacing it, but never found a way.

Now, in emacs 24.5.1, I've found an easier way that seems to work. The annoying mis-indentation comes from the function sgml-comment-indent-new-line, which sets the variables comment-start, comment-start-skip and comment-end and then calls comment-indent-new-line.

All I had to do was redefine sgml-comment-indent-new-line to call comment-indent-new-line without first defining the comment characters:

(defun sgml-comment-indent-new-line (&optional soft)
  (comment-indent-new-line soft))


### Finding emacs source

I wondered if it might be better to call whatever underlying indent-new-line function comment-indent-new-line calls, or maybe just to call (newline-and-indent). But how to find the code of comment-indent-new-line?

Happily, describe-function (on C-h f, or if like me you use C-h for backspace, try F-1 h) tells you exactly what file defines a function, and it even gives you a link to click on to view the source. Wonderful!

It turned out just calling (newline-and-indent) wasn't enough, because sgml-comment-indent-new-line typically calls comment-indent-new-line when you've typed a space on the end of a line, and that space gets wrapped and then messes up indentation. But you can fix that by copying just a couple of lines from the source of comment-indent-new-line:

(defun sgml-comment-indent-new-line (&optional soft)
  (save-excursion (forward-char -1) (delete-horizontal-space))
  (delete-horizontal-space)
  (newline-and-indent))


That's a little longer than the other definition, but it's cleaner since comment-indent-new-line is doing all sorts of extra work you don't need if you're not handling comments. I'm not sure that both of the delete-horizontal-space lines are needed: the documentation for delete-horizontal-space says it deletes both forward and backward. But I have to assume they had a good reason for having both: maybe the (forward-char -1) is to guard against spurious spaces already having been inserted in the next line. I'm keeping it, to be safe.

# G'MIC 2.0

## A second breath for open-source image processing.

The IMAGE team of the research laboratory GREYC in Caen/France is pleased to announce the release of a new major version (numbered 2.0) of its project G’MIC: a generic, extensible, and open source framework for image processing. Here, we present the main advances made in the software since our last article. The new features presented here include the work carried out over the last twelve months (versions 2.0.0 and 1.7.x, for x varying from 2 to 9).

# 1. G’MIC: A brief overview

G’MIC is an open-source project started in August 2008, by the IMAGE team. This French research team specializes in the fields of algorithms and mathematics for image processing. G’MIC is distributed under the CeCILL license (which is GPL compatible) and is available for multiple platforms (GNU/Linux, MacOS and Windows). It provides a variety of user interfaces for manipulating generic image data, that is to say, 2D or 3D multispectral images (or sequences) with floating-point pixel values. This includes, of course, “classic” color images.

The popularity of G’MIC mostly comes from the plug-in it provides for GIMP (since 2009). To date, there are more than 480 different filters and effects to apply to your images, which considerably enlarges the list of image processing filters available by default in GIMP.

G’MIC also provides a powerful and autonomous command-line interface, which is complementary to the CLI tools you can find in the famous ImageMagick or GraphicsMagick projects. There is also a web service, G’MIC Online, which lets you apply image processing effects directly from a browser. Other (but less well known) G’MIC-based interfaces exist: a webcam streaming tool ZArt, a plug-in for Krita, a subset of filters available in Photoflow, Blender or Natron… All these interfaces are based on the CImg and libgmic libraries, which are portable, thread-safe and multi-threaded, via the use of OpenMP.

G’MIC has more than 950 different and configurable processing functions, for a library of only 6.5 MiB, representing a bit more than 180 kloc. The processing functions cover a wide spectrum of the image processing field, offering algorithms for geometric manipulations, colorimetric changes, image filtering (denoising and detail enhancement by spectral, variational, non-local methods, etc.), motion estimation and registration, display of primitives (2D or 3D mesh objects), edge detection, object segmentation, artistic rendering, etc. It is therefore a very generic tool for various uses, useful on the one hand for converting, visualizing and exploring image data, and on the other hand for designing complex image processing pipelines and algorithms (see these project slides for details).

# 2. A new versatile interface, based on Qt

One of the major new features of this version 2.0 is the re-implementation of the plug-in code, from scratch. The repository G’MIC-Qt developed by Sébastien (an experienced member of the team) is a Qt-based version of the plug-in interface, being as independent as possible of the widget API provided by GIMP.

This has several interesting consequences:

• The plug-in uses its own widgets (in Qt), which makes it possible to have a more flexible and customizable interface than with the GTK widgets used by the GIMP plug-in API: for instance, the preview window becomes resizable at will, manages zooming by mouse wheel, and can be freely moved to the left or to the right. A keyword-based filter search engine has been added, as well as the possibility of choosing between a light or dark theme. The management of favorite filters has also been improved, and the interface even offers a new mode for setting the visibility of the filters. Interface personalization is now a reality.

• The plug-in also defines its own API, which is used to facilitate its integration in third-party software (other than GIMP). In practice, a software developer has to write a single file host_software.cpp implementing the functions of the API to make the link between the plug-in and the host application. Currently, the file host_gimp.cpp does this for GIMP as a host. But there is now also a stand-alone version available (file host_none.cpp) that runs this Qt interface in solo mode, from a shell (with the command gmic_qt).

• Boudewijn Rempt, project manager and developer of the marvelous painting software Krita, has also started writing such a file host_krita.cpp to make this “new generation” plug-in communicate with Krita. In the long term, this should replace the previous G’MIC plug-in implementation they made (currently distributed with Krita), which is aging and poses maintenance problems for developers.

Minimizing the integration effort for developers, sharing the G’MIC plug-in code between different applications, and offering a user interface that is as comfortable as possible have been the main objectives of this complete redesign. As you can imagine, this rewriting required a long and sustained effort, and we can only hope that it will raise interest among other software developers, where having a consistent set of image processing filters could be useful (a file host_blender.cpp available soon? We can dream!). The animation below illustrates some of the features offered by this new Qt-based interface.

Note that the old plug-in code written in GTK was also updated to work with the new version 2.0 of G’MIC, but it has fewer features and will probably not evolve further, unlike the Qt version.

# 3. Easing the work of cartoonists…

One of G’MIC’s purposes is to offer more filters and functions to process images. And that is precisely something where we have not relaxed our efforts, despite the number of filters already available in the previous versions!

In particular, this version comes with new and improved filters to ease the colorization of line-art. Indeed, we had the chance to host the artist David Revoy for a few days at the lab. David is well known to lovers of art and free software by his multiple contributions in these fields (in particular, his web comic Pepper & Carrot is a must-read!). In collaboration with David, we worked on the design of an original automatic line-art coloring filter, named Smart Coloring.

When drawing comics, the colorization of line-art is carried out in two successive steps: The original drawing in gray levels (Fig.3.2.[1]) is first pre-colored with solid areas, i.e. by assigning a unique color to each region or distinct object in the drawing (Fig.3.2.[3]). In a second step, the colourist reworks this pre-coloring, adding shadows, lights and modifying the colorimetric ambiance, in order to obtain the final colorization result (Fig.3.2.[4]). Practically, flat coloring results in the creation of a new layer that contains only piecewise constant color zones, thus forming a colored partition of the plane. This layer is then merged with the original line-art to get the colored rendering (merging both in multiplication mode, typically).
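That final merging step is easy to sketch: in multiply mode, each output channel is the product of the two layers (normalized to the 0-255 range), so black line-art always wins and white paper lets the flat color through. A toy illustration in plain Python (not G’MIC’s actual code):

```python
def multiply_blend(lineart, flats):
    """Multiply blend mode on 8-bit RGB pixels: out = a * b / 255 per
    channel. Black line-art stays black; white paper takes the flat color."""
    return [tuple(a * b // 255 for a, b in zip(pa, pb))
            for pa, pb in zip(lineart, flats)]

# One white (paper) pixel and one black (ink) pixel, over a flat red.
print(multiply_blend([(255, 255, 255), (0, 0, 0)], [(200, 50, 50)] * 2))
# -> [(200, 50, 50), (0, 0, 0)]
```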

Artists admit it themselves: flat coloring is a long and tedious process, requiring patience and precision. Classical tools available in digital painting or image editing software do not make this task easy. For example, even most filling tools (bucket fill) do not handle discontinuities in drawn lines very well (Fig.3.3.a), and even worse when lines are anti-aliased. It is then common for the artist to perform flat coloring by painting the colors manually with a brush on a separate layer (Fig.3.3.b), with all the precision problems that this supposes (especially around the contour lines, Fig.3.3.c). See also this link for more details.

It may even happen that the artist decides to explicitly constrain his style of drawing, for instance by using aliased brushes in a higher resolution image, and/or by forcing himself to draw only connected contours, in order to ease the flat colorization work that has to be done afterwards.

The Smart Coloring filter developed in version 2.0 of G’MIC can automatically pre-color an input line-art without much work. First, it analyzes the local geometry of the contour lines (estimating their normals and curvatures). Second, it (virtually) completes the contours using spline curves. This virtual closure then allows the algorithm to fill objects whose contours are disconnected. Besides, this filter has the advantage of being quite fast to compute, and it gives coloring results of similar quality to the more expensive optimization techniques used in some proprietary software. The algorithm smoothly manages anti-aliased contour lines and has two colorization modes: by random colors (Fig.3.2.[2] and Fig.3.4) or guided by color markers placed beforehand by the user (Fig.3.5).

In “random” mode, the filter generates a piecewise constant layer that is very easy to recolor with correct hues afterwards. This layer indeed contains only flat color regions, and the classic bucket fill tool is effective here to quickly reassign a coherent color to each existing region synthesized by the algorithm.

In the user-guided markers mode, color spots placed by the user are extrapolated in such a way that it respects the geometry of the original drawing as much as possible, taking into account the discontinuities in the pencil lines, as this is clearly illustrated by the figure below:

This innovative, flat coloring algorithm has been pre-published on HAL (in French): A semi-guided high-performance flat coloring algorithm for line-arts. Curious people could find there all the technical details of the algorithm used. The recurring discussions we had with David Revoy on the development of this filter enabled us to improve the algorithm step by step, until it became really usable in production. This method has been used successfully (and therefore validated) for the pre-colorization of the whole episode 22 of the webcomic Pepper & Carrot.

The wisest of you know that G’MIC already had a line-art colorization filter! True, but unfortunately it did not manage disconnected contour lines so well (such as the example in Fig.3.5), and could then require the user to place a large number of color spots to guide the algorithm properly. In practice, the performance of the new flat coloring algorithm is far superior.

And since the algorithm has no objection to anti-aliased lines, why not create some? That is the purpose of another new filter, “Repair / Smooth [antialias]“, which can add anti-aliasing to lines in cartoons originally drawn with aliased brushes.

# 4. …Not to forget the photographers!

“Colorizing drawings is nice, but my photos are already in color!”, kindly remarks the impatient photographer. Don’t be cruel! Many new filters related to the transformation and enhancement of photos have also been added in G’MIC 2.0. Let’s take a quick look at what we have.

## 4.1. CLUTs and colorimetric transformations

CLUTs (Color Lookup Tables) are functions for colorimetric transformations defined in the RGB cube: for each color (Rs,Gs,Bs) of a source image Is, a CLUT assigns a new color (Rd,Gd,Bd) transferred to the destination image Id at the same position. These processing functions may be truly arbitrary, thus very different effects can be obtained according to the different CLUTs used. Photographers are therefore generally fond of them (especially since these CLUTs are also a good way to simulate the color rendering of certain old films).
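In code, a CLUT is just a function on the RGB cube sampled on a lattice. A toy sketch of applying one, assuming nearest-neighbor lookup for simplicity (real implementations interpolate between lattice points):

```python
def apply_clut(pixels, clut, size):
    """Map each 8-bit (r, g, b) pixel through a CLUT.

    `clut` maps (ri, gi, bi) lattice coordinates of a size^3 grid to
    output colors; this sketch just snaps each pixel to its nearest
    lattice point instead of interpolating."""
    out = []
    for (r, g, b) in pixels:
        ri = round(r * (size - 1) / 255)
        gi = round(g * (size - 1) / 255)
        bi = round(b * (size - 1) / 255)
        out.append(clut[(ri, gi, bi)])
    return out

# Identity CLUT on a 2x2x2 lattice: each lattice point maps to itself.
size = 2
identity = {(ri, gi, bi): (ri * 255, gi * 255, bi * 255)
            for ri in range(size) for gi in range(size) for bi in range(size)}
print(apply_clut([(255, 0, 0)], identity, size))  # pure red maps to itself
```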

In practice, a CLUT is stored as a 3D volumetric color image (possibly “unwrapped” along the z = B axis to get a 2D version). This can quickly become cumbersome when several hundreds of CLUTs have to be managed. Fortunately, G’MIC has a quite efficient CLUT compression algorithm (already mentioned in a previous article), which has been improved version after version. So it was finally in a quite relaxed atmosphere that we added more than 60 new CLUT-based transformations in G’MIC, for a total of 359 usable CLUTs, all stored in a data file that does not exceed 1.2 MiB. By the way, let us thank Pat David, Marc Roovers and Stuart Sowerby for their contributions to these color transformations.

But what if you already have your own CLUT files and want to use them in GIMP? No problem! The new filter “Film emulation / User-defined“ allows applying such transformations from a CLUT data file, with partial support for files with the .cube extension (a CLUT file format proposed by Adobe, and encoded in ASCII o_O!).
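Since .cube files are plain ASCII, the basic layout is easy to parse. A minimal, hypothetical sketch: it handles only the LUT_3D_SIZE keyword and the "R G B" float data triples (conventionally listed with red varying fastest), and skips comments; real files may also carry TITLE or DOMAIN_* lines, which a full parser would honor:

```python
def parse_cube(text):
    """Parse a minimal subset of a .cube CLUT file: the LUT_3D_SIZE
    header plus the 'R G B' float triples. Comments ('#') and any
    unrecognized keyword lines are simply skipped in this sketch."""
    size, table = None, []
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith('#'):
            continue
        if line.upper().startswith('LUT_3D_SIZE'):
            size = int(line.split()[1])
        elif line[0].isdigit() or line[0] in '+-.':
            table.append(tuple(float(v) for v in line.split()))
    return size, table

demo = """# toy 2x2x2 identity LUT
LUT_3D_SIZE 2
0 0 0
1 0 0
0 1 0
1 1 0
0 0 1
1 0 1
0 1 1
1 1 1
"""
size, table = parse_cube(demo)
print(size, len(table))  # 2 8
```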

And for the most demanding, who are not satisfied with the existing pre-defined CLUTs, we have designed a very versatile filter “Colors / Customize CLUT“, that allows the user to build their own custom CLUT from scratch: the user places colored keypoints in the RGB color cube and these markers are interpolated in 3D (according to a Delaunay triangulation) in order to rebuild a complete CLUT, i.e. a dense function in RGB. This is extremely flexible, as in the example below, where the filter has been used to change the colorimetric ambiance of a landscape, mainly altering the color of the sky. Of course, the synthesized CLUT can be saved as a file and reused later for other photographs, or even in other software supporting this type of color transformations (for example RawTherapee or Darktable).

To stay in the field of color manipulation, let us also mention the appearance of the filter “Colors / Retro fade“ which creates a “retro” rendering of an image with grain generated by successive averages of random quantizations of an input color image.

## 4.2. Making the details pop out

Many photographers are looking for ways to process their digital photographs so as to bring out the smallest details of their images, sometimes even to exaggeration, and we can find some of them in the pixls.us forum. Looking at how they perform allowed us to add several new filters for detail and contrast enhancement in G’MIC. In particular, we can mention the filters “Artistic / Illustration look“ and “Artistic / Highlight bloom“, which are direct re-implementations of the tutorials and scripts written by Sébastien Guyader as well as the filter “Light & Shadows / Pop shadows“ suggested by Morgan Hardwood. Being immersed in such a community of photographers and cool guys always gives opportunities to implement interesting new effects!

In the same vein, G’MIC gets its own implementation of the Multi-scale Retinex algorithm, something that was already present in GIMP, but here enriched with additional controls to improve the luminance consistency in images.

Our friend and great contributor to G’MIC, Jérome Boulanger, also implemented and added a dehazing filter, “Details / Dcp dehaze“, to attenuate the fog effect in photographs, based on the Dark Channel Prior algorithm. Setting the parameters of this filter is kinda hard, but it sometimes gives spectacular results.

And to finish with this subsection, let us mention the implementation in G’MIC of the Rolling Guidance algorithm, a method to simplify images that has become a key step used in many newly added filters. This was especially the case in this quite cool filter for image sharpening, available in “Details / Sharpen [texture]“. This filter works in two successive steps: First, the image is separated into a texture component + a color component, then the details of the texture component only are enhanced before the image is recomposed. This approach makes it possible to highlight all the small details of an image, while minimizing the undesired halos near the contours, a recurring problem happening with more classical sharpening methods (such as the well known Unsharp Mask).
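The two-step idea is simple to sketch on a 1-D signal: split it into a smooth base ("color") component and a texture residual, boost only the residual, then recompose. In this toy version a 3-tap box blur stands in for the Rolling Guidance filter used by the real implementation:

```python
def sharpen_texture(img, k=1.5):
    """Sketch of split-then-boost sharpening on a 1-D signal.

    base: smoothed signal (3-tap box blur, edges clamped).
    residual (img - base): the 'texture' part, amplified by k."""
    n = len(img)
    base = [(img[max(i - 1, 0)] + img[i] + img[min(i + 1, n - 1)]) / 3
            for i in range(n)]
    return [b + k * (v - b) for v, b in zip(img, base)]

# A flat signal is untouched; an isolated detail gets amplified.
print(sharpen_texture([10, 10, 10]))   # [10.0, 10.0, 10.0]
print(sharpen_texture([0, 0, 30, 0, 0])[2])  # 40.0: the bump is boosted
```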

As you may know, a lot of photo retouching techniques require the creation of one or several “masks”, that is, the isolation of specific areas of an image to receive differentiated processing. For example, the very common technique of luminosity masks is a way to treat shadows and highlights in an image differently. G’MIC 2.0 introduces an interesting new filter, “Colors / Color mask [interactive]“, that implements a relatively sophisticated algorithm (albeit computationally demanding) to help create complex masks. This filter asks the user to hover the mouse over a few pixels that are representative of the region to keep. The algorithm learns the corresponding set of colors or luminosities in real time, and then deduces the set of pixels that composes the mask for the whole image (using Principal Component Analysis on the RGB samples).
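The selection idea can be sketched crudely: keep the pixels that are close, in RGB space, to the samples the user hovered over. The real filter does a PCA of the samples; this hypothetical stand-in just thresholds the Euclidean distance to their mean:

```python
def color_mask(pixels, samples, tol=40.0):
    """Crude stand-in for a color-based mask: True for each pixel whose
    RGB distance to the mean of the user-picked samples is below tol.
    (G'MIC's actual filter uses PCA on the samples, not a plain mean.)"""
    n = len(samples)
    mean = tuple(sum(s[c] for s in samples) / n for c in range(3))
    return [sum((p[c] - mean[c]) ** 2 for c in range(3)) ** 0.5 <= tol
            for p in pixels]

samples = [(200, 30, 30), (210, 40, 35)]   # reds picked on the car
pixels = [(205, 35, 32), (20, 120, 220)]   # one car pixel, one sky pixel
print(color_mask(pixels, samples))  # [True, False]
```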

Once the mask has been generated by the filter, the user can easily modify the corresponding pixels with any type of processing. The example below illustrates the use of this filter to drastically change the color of a car.

It takes no more than a minute and a half to complete, as shown in the video below:

This other video exposes an identical technique to change the color of the sky in a landscape.

# 5. And for the others…

Since illustrators and photographers are now satisfied, let’s move on to some more exotic filters, recently added to G’MIC, with interesting outcomes!

## 5.1. Average and median of a series of images

Have you ever wondered how to easily estimate the average or median frame of a sequence of input images? The libre aficionado Pat David, creator of the site pixls.us, has often asked this question: first when he tried to denoise images by combining several shots of the same scene, then when he wanted to simulate a longer exposure time by averaging photographs taken successively, and finally when calculating averages of various kinds of images for artistic purposes (for example, frames of music video clips, covers of Playboy magazine, or celebrity portraits).

Hence, with his cooperation, we added the new commands -median_files, -median_videos, -average_files and -average_videos to compute all these image features very easily using the CLI tool gmic. The example below shows the results obtained from a sub-sequence of the “Big Buck Bunny“ video. We simply invoked the following commands from the Bash shell:

$ gmic -average_video bigbuckbunny.mp4 -normalize 0,255 -o average.jpg
$ gmic -median_video bigbuckbunny.mp4 -normalize 0,255 -o median.jpg
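The underlying computation is just a pixel-wise reduction across the frames. A tiny Python sketch of the idea, with grayscale frames as flat lists (not how gmic does it internally):

```python
from statistics import median

def average_frames(frames):
    """Pixel-wise average of equally-sized grayscale frames."""
    n = len(frames)
    return [sum(px) / n for px in zip(*frames)]

def median_frames(frames):
    """Pixel-wise median: a transient object present in only a few
    frames is rejected, which is why this helps denoise static scenes."""
    return [median(px) for px in zip(*frames)]

# Three 4-pixel frames; the second has a transient bright 'object'.
frames = [[10, 10, 10, 10],
          [10, 255, 10, 10],
          [10, 12, 10, 10]]
print(average_frames(frames))  # the spike leaks into the average
print(median_frames(frames))   # [10, 12, 10, 10]: the median drops it
```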


And to stay in the field of video processing, we can also mention the addition of the commands -morph_files and -morph_video that render temporal interpolations of video sequences, taking the estimated intra-frame object motion into account, thanks to a quite smart variational and multi-scale estimation algorithm.

The video below illustrates the rendering difference obtained for the retiming of a sequence using temporal interpolation, with (right) and without (left) motion estimation.

## 5.2. Deformations and “Glitch Art”

Those who like to mistreat their images aggressively will be delighted to learn that a bunch of new image deformation and degradation effects have appeared in G’MIC.

First of all, the filter “Deformations / Conformal maps“ allows one to distort an image using conformal maps. These deformations have the property of preserving the angles locally, and are most often expressed as functions of complex numbers. In addition to playing with predefined deformations, this filter allows budding mathematicians to experiment with their own complex formulas.

Fans of Glitch Art may also be interested in several new filters whose renderings look like image encoding or compression artifacts. The effect “Degradations / Pixel sort“ sorts the pixels of a picture by row or by column according to different criteria and to possibly masked regions, as initially described on this page.

“Degradations / Pixel sort“ also has two little brothers, the filters “Degradations / Flip & rotate blocks“ and “Degradations / Warp by intensity“. The first divides an image into blocks and makes it possible to rotate or mirror them, potentially only for certain color characteristics (like hue or saturation, for instance).

The second locally deforms an image with more or less amplitude, according to its local geometry. Here again, this can lead to the generation of very strange images.

It should be noted that these filters were largely inspired by the Polyglitch plug-in, available for Paint.NET, and have been implemented after a suggestion from a friendly user (yes, yes, we try to listen to our most friendly users!).

## 5.3. Image simplification

What else do we have in store? A new image abstraction filter, Artistic / Sharp abstract, based on the Rolling Guidance algorithm mentioned before. This filter applies contour-preserving smoothing to an image, and its main consequence is to remove the texture. The figure below illustrates its use to generate several levels of abstraction of the same input image, at different smoothing scales.

In the same vein, G’MIC also gets a filter Artistic / Posterize which degrades an image to simulate posterization. Unlike the filter of the same name available by default in GIMP (which mainly tries to reduce the number of colors, i.e. do color quantization), our version adds spatial simplification and filtering to get a little closer to the rendering of old posters.
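The color-quantization half of posterization is simple to sketch: snap each channel to one of a handful of evenly spaced levels. This toy version shows only that step; G’MIC’s filter adds the spatial simplification on top:

```python
def posterize(pixels, levels):
    """Quantize each 8-bit channel to `levels` evenly spaced values
    (the GIMP-style color-reduction step of posterization only)."""
    step = 255 / (levels - 1)
    return [tuple(round(round(c / step) * step) for c in p) for p in pixels]

# With 2 levels, every channel snaps to either 0 or 255.
print(posterize([(100, 200, 30)], 2))  # [(0, 255, 0)]
```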

## 5.4. Other filters

If you still want more (and in this case one could say you are damn greedy!), we will end this section by discussing some of the new, but unclassifiable filters.

We start with the filter “Artistic / Diffusion tensors“, which displays a field of diffusion tensors, calculated from the structure tensors of an image (structure tensors are symmetric and positive definite matrices, classically used for estimating the local image geometry). To be quite honest, this feature had not been originally developed for an artistic purpose, but users of the plug-in came across it by chance and asked to make a GIMP filter from it. And yes, this is finally quite pretty, isn’t it?

From a technical point of view, this filter was actually an opportunity to introduce new drawing features into the G’MIC mathematical evaluator, and it has now become quite easy to develop G’MIC scripts for rendering custom visualizations of various image data. This is what has been done, for instance, with the command -display_quiver, reimplemented from scratch, which makes it possible to generate this type of rendering:

For lovers of textures, we can mention the appearance of two new fun effects. First, the “Patterns / Camouflage“ filter. As its name suggests, this filter produces a military camouflage texture.

Second, the filter “Patterns / Crystal background“ overlays several randomly colored polygons in order to synthesize a texture that vaguely looks like a crystal seen under a microscope. Pretty useful to quickly render colored image backgrounds.

And to end this long overview of new G’MIC filters developed since last year, let us mention “Rendering / Barnsley fern“. This filter renders the well-known Barnsley fern fractal. For curious people, note that the related algorithm is available on Rosetta Code, with even a code version written in the G’MIC script language, namely:

# Put this into a new file 'fern.gmic' and invoke it from the command line, like this:
# $ gmic fern.gmic -barnsley_fern

barnsley_fern :
  1024,2048
  -skip {"
    f1 = [ 0,0,0,0.16 ];           g1 = [ 0,0 ];
    f2 = [ 0.2,-0.26,0.23,0.22 ];  g2 = [ 0,1.6 ];
    f3 = [ -0.15,0.28,0.26,0.24 ]; g3 = [ 0,0.44 ];
    f4 = [ 0.85,0.04,-0.04,0.85 ]; g4 = [ 0,1.6 ];
    xy = [ 0,0 ];
    for (n = 0, n<2e6, ++n,
      r = u(100);
      xy = r<=1?((f1**xy)+=g1):
           r<=8?((f2**xy)+=g2):
           r<=15?((f3**xy)+=g3):
                 ((f4**xy)+=g4);
      uv = xy*200 + [ 480,0 ];
      uv[1] = h - uv[1];
      I(uv) = 0.7*I(uv) + 0.3*255;
    )"}
  -r 40%,40%,1,1,2

And here is the rendering generated by this function:

# 6. Overall project improvements

All the filters presented throughout this article constitute only the visible part of the G’MIC iceberg. They are in fact the result of many developments and improvements made “under the hood”, i.e., directly in the code of the G’MIC script language interpreter. This interpreter defines the basic language used to write all the G’MIC filters and commands available to users. Over the past year, a lot of work has been done to improve the performance and the capabilities of this interpreter:

• The mathematical expression evaluator has been considerably enriched and optimized, with more functions available (especially for matrix calculus), support for strings, the introduction of const variables for faster evaluation, the ability to write variadic macros, to allocate dynamic buffers, and so on.

• New optimizations have also been introduced in the CImg library, including the parallelization of new functions (via the use of OpenMP). This C++ library provides the implementations of the “critical” image processing algorithms, and its optimization has a direct impact on the performance of G’MIC (in this respect, note that CImg has also been released as a major version 2.0).

• Compiling G’MIC on Windows now uses a more recent version of g++ (6.2 rather than 4.5), with the help of Sylvie Alexandre. This actually has a huge impact on the performance of the compiled executables: some filters run up to 60 times faster than with the previous binaries (this is the case, for example, with the “Deformations / Conformal maps“ filter discussed in section 5.2).

• Support for large .tiff images (the BigTIFF format, with files that can be larger than 4 GB) is now enabled (read and write), as it is for 64-bit floating-point TIFF images.

• The 3D rendering engine built into G’MIC has also been slightly improved, with support for bump mapping. No filter currently uses this feature, but we never know, and we are preparing for the future!

• And as it is always good to relax after a hard day’s work, we added the game of Connect Four to G’MIC :). It can be launched via the shell command $ gmic -x_connect4 or via the plug-in filter “Various / Games & demos / Connect-4“. Note that it is even possible to play against the computer, which has a decent but not unbeatable skill (the very simple AI uses the Minimax algorithm with a two-level decision tree).

Finally, let us mention the ongoing redesign of the G’MIC Online web service, with a beta version already available for testing. This re-development of the site, done by Christophe Couronne and Véronique Robert (both members of the GREYC laboratory), has been designed to better adapt to mobile devices. The first tests are more than encouraging. Feel free to experiment and share your impressions!

# 7. What to remember?

First, version 2.0 of G’MIC is clearly an important step in the project's life, and the recent improvements are promising for future developments. It seems that the number of users is increasing (and they are apparently satisfied!), and we hope that this will encourage open-source software developers to integrate our new G’MIC-Qt interface as a plug-in for their own software. In particular, we are hopeful to see the new G’MIC in action in Krita soon; that alone would be a great step!

Second, G’MIC continues to be an active project that evolves through meetings and discussions with members of the artist and photographer communities (particularly those who populate the forums and IRC channels of pixls.us and GimpChat). You will likely be able to find us there if you need more information, or simply if you want to discuss things related to (open-source) image processing.

And while waiting for a future hypothetical article about a future release of G’MIC, you can always follow the day-after-day progress of the project via our Twitter feed.

Until then, long live open-source image processing!

Credit: Unless explicitly stated, the various non-synthetic images that illustrate this post come from Pixabay.

## We need your Flock session proposals!

This year’s Flock is more action-oriented compared to previous Flocks. The majority of session slots are hackfests and workshops; only one day (Tuesday the 29th) is devoted to traditional talks.

The registration system allows you to submit 4 different types of proposals:

• Talk (30 min) – A traditional talk, 30-minute time slot.
• Talk (60 min) – A traditional talk, 60-minute time slot.
• Do-Session (120 min) – A 2-hour long hackfest or workshop.
• Do-Session (180 min) – A 3-hour long hackfest or workshop.

There is no session proposal limit. Feel free to submit as many proposals as you have ideas for.

Our CFP ends June 15 so you have one week to get those awesome proposals in!

Submit your Flock session proposal now!

## How to create a strong proposal

How can you ensure your proposal is strong enough for acceptance into Flock? Here are some tips and guidelines:

### Align your proposal to Fedora’s new mission statement.

Fedora’s mission statement was updated almost two months ago. The revised and final mission statement is:

Fedora creates an innovative platform for hardware, clouds, and containers that enables software developers and community members to build tailored solutions for their users.

If you can explain the connection between your session and this goal, you’ll make the proposal stronger. Even if you are not directly working on a hardware, cloud, or container effort, you can relate your session to the goal.

For example, say you’d like to propose a Fedora badges hackfest. To strengthen the proposal, task the badges hackfest with creating badges for activities associated with hardware, cloud, and container efforts.

### Make sure the folks relevant to your topic are involved.

If you want to propose a Fedora badges workshop, that’s totally cool. You might want to talk to Marie Nordin or Masha Leonova to see what their plans are, give them a heads up, and coordinate, or even propose it together with one or both of them.

The committee reviewing proposals occasionally sees duplicate or overlapping topics proposed. Generally, the committee chooses the proposal that most involves the subject matter experts for that topic. A proposal that shows no involvement of, or coordination with, those experts is a weak one.

### Make the audience for your topic clear.

Think about who you are giving your talk to, or who you want to show up to your workshop or hackfest. If you’re proposing a Fedora Hubs hackfest, are there enough Pythonistas in Fedora to help? (Yes, yes, there are.)

Tailor your content for your audience – while you may be able to get folks familiar with Python, they may not be familiar with Flask or how Fedora Hubs widgets work, so make sure your proposal notes this material will be covered.

General user talks are discouraged. This Flock will be focused on empowering Fedora contributors and actively getting stuff done, so make sure your audience is a subset of existing Fedora contributors.

### Focus on taking or inspiring action.

A major focus of this year’s Flock is taking action, so talks that inspire action and hackfests / workshops where action will take place are going to be strong proposals.

## Questions?

Feel free to ask on the flock-planning list if you have any questions. Or, if you have private concerns / questions, you can email flock-staff@fedoraproject.org.

The Flock planning committee is looking forward to seeing your proposals!

Submit your Flock session proposal now!


## June 06, 2017

I know, I know. We use mailers like mutt because we don't believe in HTML mail and prefer plaintext. Me, too.

But every now and then a situation comes up where it would be useful to send something with emphasis. Or maybe you need to highlight changes in something. For whatever reason, every now and then I wish I had a way to send HTML mail.

I struggled with that way back, never did find a way, and ended up writing a Python script, htmlmail.py, to send an HTML page, including images, as email.

### Sending HTML Email

But just recently I found a neat mutt hack. It turns out it's quite easy to send HTML mail.

First, edit the HTML source in your usual mutt message editor (or compose the HTML some other way, and insert the file). Note: if there's any quoted text, you'll have to put a <pre> around it, or otherwise turn it into something that will display nicely in HTML.

Write the file and exit the editor. Then, in the Compose menu, type Ctrl-T to edit the attachment type. Change the type from text/plain to text/html.

That's it! Send it, and it will arrive looking like a regular HTML email, just as if you'd used one of them newfangled gooey mail clients. (No inline images, though.)
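Under the hood, the Ctrl-T trick amounts to nothing more than changing the MIME type on the message body. Here is a minimal sketch of the same idea in Python's standard email package (this is not the htmlmail.py script mentioned above, and the addresses are placeholders; it builds the message without sending it):

```python
from email.mime.text import MIMEText

# Declare the body as HTML instead of plain text:
msg = MIMEText("<p>This is <em>HTML</em> mail.</p>", "html")
msg["Subject"] = "HTML test"
msg["From"] = "me@example.com"   # placeholder address
msg["To"] = "you@example.com"    # placeholder address

# This header is exactly what the Ctrl-T edit changes from text/plain:
print(msg["Content-Type"])
# prints: text/html; charset="us-ascii"
```

To actually send it, you would hand `msg` to `smtplib.SMTP(...).send_message(msg)`, which is roughly what scripts like htmlmail.py do behind the scenes.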

### Viewing HTML Email

Finding out how easy that was made me wonder why the other direction isn't easier. Of course, I have my mailcap set up so that mutt uses lynx automatically to view HTML email:

text/html; lynx -dump %s; nametemplate=%s.html; copiousoutput


Lynx handles things like paragraph breaks and does an okay job of showing links; but it completely drops all emphasis, like bold, italic, headers, and colors. My terminal can display all those styles just fine. I've also tried links, elinks, and w3m, but none of them seem to be able to handle any text styling. Some of them will do bold if you run them interactively, but none of them do italic or colors, and none of them will do bold with -dump, even if you tell them what terminal type you want to use. Why is that so hard?

I never did find a solution, but it's worth noting some useful sites I found along the way, like tips for testing bold, italics, etc. in a terminal, and for testing whether the terminal supports italics, which gave me these useful escape sequences and shell functions:

echo -e "\e[1mbold\e[0m"
echo -e "\e[3mitalic\e[0m"
echo -e "\e[4munderline\e[0m"
echo -e "\e[9mstrikethrough\e[0m"
echo -e "\e[31mHello World\e[0m"
echo -e "\x1B[31mHello World\e[0m"

ansi()          { echo -e "\e[${1}m${*:2}\e[0m"; }
bold()          { ansi 1 "$@"; }
italic()        { ansi 3 "$@"; }
underline()     { ansi 4 "$@"; }
strikethrough() { ansi 9 "$@"; }
red()           { ansi 31 "$@"; }

And in testing, I found that a lot of fonts didn't offer italics. One that does is Terminus, so if your normal font doesn't, you can run a terminal with Terminus:

xterm -fn '-*-terminus-bold-*-*-*-20-*-*-*-*-*-*-*'

Not that it matters, since none of the text-mode browsers offer italic anyway. But maybe you'll find some other use for italic in a terminal.

## June 02, 2017

we're proud to announce the fifth bugfix release for the 2.2 series of darktable, 2.2.5!

the github release is here: https://github.com/darktable-org/darktable/releases/tag/release-2.2.5

as always, please don't use the autogenerated tarball provided by github, but only our tar.xz. the checksums are:

$ sha256sum darktable-2.2.5.tar.xz
e303a42b33f78eb1f48d3b36d1df46f30873df4c5a7b49605314f61c49fbf281  darktable-2.2.5.tar.xz
$ sha256sum darktable-2.2.5.dmg
f6e8601fca9a08d988dc939484d03e137c16dface48351ef523b5e0bbbaecf18  darktable-2.2.5.dmg

Important note: to make sure that darktable can keep on supporting the raw file format for your camera, please help us by visiting https://raw.pixls.us/ and making sure that we have the full raw sample set for your camera under CC0 license!

and the changelog as compared to 2.2.4 can be found below.

## New features:

• When appending EXIF data to an exported image, do not fail if reading of EXIF from the original file fails
• Support XYZ as proofing profile
• Clear DerivedFrom from XMP before writing it
• bauhaus: when using soft bounds, keep slider step constant

## Bugfixes:

• Some GCC7 build fixes
• cmstest: fix crash when missing XRandR extension
• Fix crash in Lua libs when collapsing libs
• Mac packaging: some fixes
• RawSpeed: TiffIFD: avoid double-free
• Fix a few alloc-dealloc mismatches

## Base Support:

• Canon EOS 77D
• Canon EOS 9000D
• Nikon D500 (14bit-uncompressed, 12bit-uncompressed)
• Nikon D5600 (12bit-compressed, 12bit-uncompressed, 14bit-compressed, 14bit-uncompressed)
• Panasonic DC-FZ82 (4:3)
• Panasonic DMC-FZ80 (4:3)
• Panasonic DMC-FZ85 (4:3)
• Panasonic DC-GH5 (4:3)

## White Balance Presets:

• Pentax K-3 II

## Noise Profiles:

• Nikon D500
• Panasonic DMC-FZ300
• Panasonic DMC-LX100
• Pentax K-70
• Sony ILCE-5000

## June 01, 2017

I got a tip that there were tiger salamanders with gills swimming around below Los Alamos reservoir, so I had to go see for myself. They're fabulous! Four to five inch salamanders with flattened tails and huge frilly gills behind their heads -- dozens of them, so many the pond is thick with them. Plenty of them are hanging out in the shallows or just below the surface of the water, obligingly posing for photos.

I had stupidly brought only the pocket camera, not the DSLR -- and then the camera's battery turned out to be low -- so I was sparing with the camera, but even so I was pleased at how well the photos came out, with the camera mostly managing to focus on the salamanders rather than (as I had feared) the surface of the murky water. I may go back soon with the DSLR. It's an easy, pleasant hike.

## May 29, 2017

So here we are for our monthly report of what has been going on on the FreeCAD front this month. As usual, I will mostly talk about what I have been doing myself, but don't forget that many people are working on FreeCAD, so there is always much more happening than what I talk about...

## May 26, 2017

SIGGRAPH 2017 (Los Angeles, 30 July – 3 August) is around the corner! To continue and celebrate the long-standing tradition of Blender and SIGGRAPH, this year we have 3 announcements.

## Talk selected for SIGGRAPH 2017

Ton and the Blender Animation Studio team will present Beyond “Cosmos Laundromat”: Blender’s Open Source studio pipeline, a talk focused on Open Source pipelines and Blender. Here is the abstract:

For “Cosmos Laundromat” – CAF 2016 Jury Award winner – the Blender team, headed by CG pioneer and producer Ton Roosendaal, developed and used a complete open source creation pipeline. The team released several other shorts since then, including a 360-degree VR experience and a pitch for the feature animation “Agent 327”. Developing and sharing open source technologies is a great challenge, and leads to great benefits for small and medium animation studios.

## Blender booth

SIGGRAPH hosts one of the largest exhibitions of the computer graphics industry, and this year Blender is going back to it. There will be demos, goodies and a new Blender demo-reel!

## Giveaway: free exhibit pass

Follow this link for a free pass (worth $50 – get them before they run out), so you can drop by the exhibit hall in the LA Convention Center. We would love to see you there!

Today we’re releasing Krita 3.1.4. This is strictly a bug-fix release, but everyone is encouraged to update.

• Fix a crash when trying to play an animation when OpenGL is disabled in Krita
• Fix rendering animation frames if the directory you’re trying to render to doesn’t exist
• Don’t open the tablet/screen resolution conflict dialog multiple times
• Don’t scale down previews that are too small in the transform tool: this fixes a rare crash with the transform tool
• Don’t crash when trying to close the last view on the last document while the document is modified.
• Fix a crash when cycling quickly through layers that have a color tag
• Fix loading some Gimp 2.9 files: note that Gimp 2.9’s file format is not officially supported in Krita
• Fully remove the macro recorder plugin: in 3.1.4, only the menu entries had stayed around.
• Make it impossible to hide the template selector in the new image dialog; hiding the template selector would also hide the cancel button in the dialog.

#### Windows

Note for Windows users: if you encounter crashes, please follow these instructions to use the debug symbols so we can figure out where Krita crashes.

#### Linux

A snap image will be available from the Ubuntu App Store. When it is updated, you can also use the Krita Lime PPA to install Krita 3.1.4 on Ubuntu and derivatives.

### Source code

#### Key

The Linux appimage and the source tarball are signed. You can retrieve the public key over https here: 0x58b9596c722ea3bd.asc. The signatures are here.

#### Support Krita

Krita is a free and open source project. Please consider supporting the project with donations or by buying training videos or the artbook! With your support, we can keep the core team working on Krita full-time.

Last year, GDQuest’s Nathan Lovato ran a successful kickstarter: “Create Professional 2D Game Art: Krita Video Training”. Over the past year, he has produced a great number of videos for Krita, and has helped the Krita team out with release videos as well.

This year, he’s going to teach you how to use your art in a real game. Learn how to use Godot to create games with GDQuest, on Kickstarter now to bring you the first premium course for the engine, with the support of the Godot developers.

During the campaign, you get a free game creation tutorial on YouTube, every day!

Please check it out now, and spread the word: Make Professional 2d Games: Godot Engine Online Course

GDQuest reached the goal in less than 12 hours. Everything above it means more
content for the backers, but also for everyone! GDQuest will also contribute to
Godot 3.0’s demos and documentation. All the money will go to the
course’s production and official free educational resources.

Check out the free daily tutorials on YouTube!

## May 23, 2017

I'm working on a project involving PyQt5 (on which, more later). One of the problems is that there's not much online documentation, and it's hard to find out details like what signals (events) each widget offers.

Like most Python packages, there is inline help in the source, which means that in the Python console you can say something like

>>> from PyQt5.QtWebEngineWidgets import QWebEngineView
>>> help(QWebEngineView)

The problem is that it's ordered alphabetically; if you want a list of signals, you need to read through all the objects and methods the class offers to look for a few one-liners that include "unbound PYQT_SIGNAL".

If only there was a way to take help(CLASSNAME) and pipe it through grep!

A web search revealed that plenty of other people have wished for this, but I didn't see any solutions. But when I tried running python -c "help(list)" it worked fine -- help isn't dependent on the console.

That means that you should be able to do something like

python -c "from sys import exit; help(exit)"


Sure enough, that worked too.

From there it was only a matter of setting up a zsh function to save on complicated typing. I set up separate aliases for python2, python3 and whatever the default python is. You can get help on builtins (pythonhelp list) or on objects in modules (pythonhelp sys.exit). The zsh suffixes :r (remove extension) and :e (extension) came in handy for separating the module name, before the last dot, and the class name, after the dot.

#############################################################
# Python help functions. Get help on a Python class in a
# format that can be piped through grep, redirected to a file, etc.
# Usage: pythonhelp [module.]class [module.]class ...
pythonXhelp() {
    python=$1
    shift
    for f in $*; do
        if [[ $f =~ '.*\..*' ]]; then
            module=$f:r
            obj=$f:e
            s="from ${module} import ${obj}; help($obj)"
        else
            module=''
            obj=$f
            s="help($obj)"
        fi
        $python -c $s
    done
}
alias pythonhelp="pythonXhelp python"
alias python2help="pythonXhelp python2"
alias python3help="pythonXhelp python3"


So now I can type

python3help PyQt5.QtWebEngineWidgets.QWebEngineView | grep PYQT_SIGNAL

and get that list of signals I wanted.
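If you'd rather stay inside Python, the stdlib's pydoc module renders the same text that help() pages, as a string you can filter directly. This is just a sketch of the idea, not part of the aliases above; `pydoc.plain()` strips the backspace-overstrike "bold" so the text is plainly searchable:

```python
import pydoc

# Render the help text for a class as a plain, searchable string:
text = pydoc.plain(pydoc.render_doc(list))

# Keep only the lines that mention 'append' -- the in-process
# equivalent of piping help output through grep:
matches = [line for line in text.splitlines() if "append" in line]
print("\n".join(matches))
```

The same filter applied to a PyQt5 class with `"PYQT_SIGNAL"` instead of `"append"` would give the signal list without any shell wrapper.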

## May 22, 2017

Just over a year ago Bastille security announced the discovery of a suite of vulnerabilities commonly referred to as MouseJack. The vulnerabilities targeted the low level wireless protocol used by Unifying devices, typically mice and keyboards. The issues included the ability to:

• Pair new devices with the receiver without user prompting
• Inject keystrokes, covering various scenarios
• Inject raw HID commands

### Thank You!

Even so, we have had some folks who have donated to help us offset these costs and I want to take a moment to recognize their generosity and graciousness!

Dimitrios Psychogios has been a supporter of the site since the beginning. This past year he covered (more than) our hosting costs for the entire year, and for that I am infinitely grateful (yes, I have infinite gratitude). It also helps that based on his postings on G+ our musical tastes are very similarly aligned. As soon as I get the supporters page up you’re going to the top of the list! Thank you, Dimitrios, for your support of the community!

Jonas Wagner (@JonasWagner) and McCap (@McCap) both donated this past year as well. Which is doubly-awesome because they are both active in the community and have written some great content for everyone as well (@McCap is the author of the article A Masashi Wakui look with GIMP, and has been active in the community since the beginning as well).

Mica (@paperdigits) and Luka are both recurring donators which I am particularly grateful for. It really helps for planning to know we have some recurring support like that.

I have a bunch of donations where the donators didn’t leave me a name to use for attribution and I don’t want to just assume it’s ok. If you know you donated and see your first name in the list below (and are ok with me using your full name and a link if you want) then please let me know and I’ll update this post (and for the donators page later).

These are the folks who are really making a difference by taking the time and being gracious enough to support us. Even if you don’t want your full name out here, I know who you are and am very, very grateful and humbled by your generosity and kindness. Thank you all so much!

• Marc W. (you rock!)
• Ulrich P.
• Luc V.
• Ben E.
• Keith A.
• Philipp H.
• Christian M.
• Matthieu M.
• Christian M.
• Christian K.
• Maria J.
• Kevin P.
• Maciej D.
• Christian K.
• Egbert G.
• Michael H.
• Jörn H.
• Boris H.
• Norman S.
• David O.
• Walfrido C.
• Philip S.
• David S.
• Keith B.
• Andrea V.
• Stephan R.
• David M.
• Bastian H.
• Chance J.
• Luka S.
• Nathanael S.
• Sven K.
• Pepijn V.
• Benjamin W.
• Jörg W.
• Patrick B.
• Joop K.
• Alain V.
• Egor S.
• Samuel S.

On that note. If anyone wanted to join the folks above in supporting what we’re up to, we have a page specifically for that:

https://pixls.us/support/

Remember, no amount is too small!

## Libre Graphics Meeting Rio

I wasn’t able to attend LGM this year, which was held down in Rio (but the GIMP team did). That’s not to say that we didn’t have folks from the community there: Farid (@frd) from Estúdio Gunga was there!

I was able to help coordinate a presentation by Robin Mills (@clanmills) about the state (and future) of Exiv2. They’re looking for a maintainer to join the project, as Robin will be stepping down at the end of the year for studies. If you think you’d be interested in helping out, please get in touch with Robin on the forums and let him know!

I also put together (quickly) a few slides on the community that were included in the “State of the Libre Graphics” presentation that kicks off the meeting (presented this year by GIMPer Simon Budig):

This was just a short overview of the community, and I think it makes sense to include it here as well. Since we stood the forum up two years ago we’ve seen about 3.2 million pageviews and have just under 1,400 users in the community. Which is just awesome to me.

@LebedevRI was also going to be mad if I didn’t take the time to at least let folks know about raw.pixls.us, where we currently have 693 raw files across 477 cameras. Please, take a moment to check raw.pixls.us and see if we are missing (or need better) files from a camera you may have, and get us samples for testing!

## raw.pixls.us

We set up raw.pixls.us so we can gather camera raw samples for regression testing of rawspeed, as well as to have a place for any other project that might need raw files to test with. As we blogged about previously, the new site is also a replacement for the now defunct rawsamples.ch website.

Stop in and see if we’re missing a sample you can provide, or if you can provide a better (or better licensed) version for your camera. We’re focusing specifically on CC0 contributions.

## Welcome digiKam!

As I mentioned in my last blog post, we learned that the digiKam team was looking for a new webmaster through a post on discuss. @Andrius posted a heads up on the digiKam 5.5.0 release in this thread.

Needless to say, less than a month or so later, @paperdigits had already finished up a nice new website for them! This is something we’re really trying to do for the community, and we’re super glad to be able to help the digiKam team this way. The less time they have to worry about web infrastructure and its security, the more time they can spend on awesome new features for their project and users.

Yes, we used a static site generator (Hugo in this case), and we were also able to move their commenting system to use discuss as its back-end! This is the same way we’re doing comments for PIXLS.US right now (scroll to the bottom of this post).

They’ve got their own category on discuss for both general digiKam discussion as well as their linked comments from their website.

Speaking of using discourse as a commenting system…

## Discourse upstream

We’ve been using discourse as our forum software from the beginning. It’s a modern, open, and full-featured forum software that I think works incredibly well as a modern web application.

The ability to embed comments in a website that are part of the forum was one of the main reasons I went with it. I didn’t want to expose users to unnecessary privacy concerns by embedding a third-party commenting system (cough, disqus, cough). If I was going to go through the trouble of setting up a way to comment on things, I wanted to homogenize it with a full community-building effort.

This past year they (the discourse devs) added the ability to embed comments in multiple hosts (it was only one host when we first stood things up). This means that we can now manage the comments for anyone else that may need them! Of course, building out a new website for digiKam meant that this was a perfect time to test things.

It all works beautifully, with one minor nitpick. The ability to style the embedded comments was limited to a single style for all the places that they might be embedded. This may be fine if all of the sites look similar, but if you visit www.digikam.org and compare it to here, you can see they are a little bit different… (we’re on white, digikam.org is on a dark background).

We needed a way to isolate the styling on a per-host basis. After much help from @darix (yet again :)), I was finally able to hack something together that worked and get it pushed upstream (and finally merged)!

## Play Raw

When RawTherapee migrated their official forums over to pixls they brought something really fun with them: Play Raw. They would share a single raw file amongst the community and then have everyone process and share their results (including their processing steps and associated .pp3 settings file).

If you haven’t seen it yet, we’ve had quite a few Play Raw posts over the past year with all sorts of wonderful images to practice on and share! There are portraits, children, dogs, cats, landscapes, HDR, and phở! There are over 19 different raw files being shared right now, so come try your hand at processing (or even share a file of your own)!

The full list of play_raw posts can always be found here:
https://discuss.pixls.us/tags/play_raw

## Amazon S3

We are a photography forum, so it only made sense to make it as easy as possible for community members to upload and share images (raw files, and more). It’s one of the things I love about discourse: adding files to your posts is as simple as drag-and-drop into the post editor.

While this is easy to do, it does mean that we have to store all of this data. The VPS we use from Digital Ocean only has a 40GB SSD, which also has to hold the main forum itself. We still had a little space left, but to keep local storage from becoming a problem down the line, I moved our file storage out to Amazon S3.

This means that we can upload all we want and won’t really hit a wall with actual available storage space. It costs more each month than trying to store it all on local storage for the site, but then we don’t have to worry about expansion (or migration) later. Plus our current upload size limit per file is 100MB!

As you can see, we’re only looking at about $5 USD/month on average in storage and transfer costs for the site with Amazon. We’re also averaging about $22 USD/month in hosting costs with Digital Ocean, so we’re still only at about $27/month in total hosting costs. Maybe $30 if we include the hosting for the main website, which is at Stablehost.

## IRC

We’ve had an IRC room for a long time (longer than discuss I think), but I only just got around to including a link on the site for folks to be able to join through a nice web client (Kiwi IRC).

It was included as part of an oft-requested set of links to get back to various parts of the main site from the forums. I also added these links in the menu for the site as well (the header links are hidden when on mobile, so this way you can still access the links from whatever device you’re using):

If you have your own IRC client then you can reach us on irc.freenode.net #pixls.us. Come and join us in the chat room! If you’re not there you are definitely missing out on a ton of stimulating conversation and enlightening discussions!

## May 11, 2017

At Collabora Productivity we recently encountered the need to investigate calls in a third-party application to COM services offered by one or more other applications. In particular, calls through the IDispatch mechanism.

In practice, it is the use of the services that Microsoft Office offers to third-party applications that we want to trace and dump symbolically.

We looked around for existing tools but did not find anything immediately suitable, especially not anything available under an Open Source license. So we decided to hack a bit on one of the closest matches we found, which is Deviare-InProc. It is on GitHub, https://github.com/nektra/Deviare-InProc.

Deviare-InProc already includes code for much of the hardest things needed, like injecting a DLL into a target process, and hooking function calls. What we needed to do was to hook COM object creation calls and have the hook functions notice when objects that implement IDispatch are created, and then hook their Invoke implementations.

The DLL injection functionality is actually "just" part of the sample code included with Deviare-InProc. The COM tracing functionality that we wrote is based on the sample DLL to be injected.

One problem we encountered was that in some cases, we would need to trace IDispatch::Invoke calls that are made in a process that has already been started (through some unclear mechanism out of our control). The InjectDLL functionality in Deviare-InProc does have the functionality to inject the DLL into an existing process. But in that case, the process might already have performed its creation of IDispatch implementing COM objects, so it is too late to get anything useful from hooking CoGetClassObject().

We solved that with a hack that works nicely in many cases, by having the injected DLL itself create an object known to implement IDispatch, and hoping its Invoke implementation is the same as that used by the interesting things we want to trace.

Here is a snippet of a sample VBScript file:

Set objExcel = CreateObject("Excel.application")
set objExcelBook = objExcel.Workbooks.Open(FullName)

objExcel.application.visible=false

objExcelBook.SaveAs replace(FileName, actualFileName, prefix & actualFileName) & "csv", 23

objExcel.Application.Quit
objExcel.Quit

And here is the corresponding output from tracing cscript executing that file. (In an actual use case, no VBScript source would obviously be available to inspect directly.)

Process #10104 successfully launched with dll injected!
Microsoft (R) Windows Script Host Version 5.812

# CoGetClassObject({00024500-0000-0000-C000-000000000046}) (Excel.Application.15)
#   riid={00000001-0000-0000-C000-000000000046}
#   CoCreateInstance({0000032A-0000-0000-C000-000000000046}) (unknown)
#     riid={00000149-0000-0000-C000-000000000046}
#     result:95c668
#   CoCreateInstance({00000339-0000-0000-C000-000000000046}) (unknown)
#     riid={00000003-0000-0000-C000-000000000046}
#   result:95dd8c
# Hooked Invoke 0 of 95de1c (old: 487001d) (orig: 76bafec0)
95de1c:Workbooks() -> IDispatch:98ed74
98ed74:Open({"c:\temp\b1.xls"}) : ({"c:\temp\b1.xls"}) -> IDispatch:98ea14
95de1c:Application() -> IDispatch:95de1c
95de1c:putVisible(FALSE)
95de1c:Application() -> IDispatch:95de1c
98ea14:SaveAs(23,"c:\temp\converted_b1.csv")
95de1c:Application() -> IDispatch:95de1c
95de1c:Quit()
95de1c:Quit()

Our work on top of Deviare-InProc is available at https://github.com/CollaboraOnline/Deviare-InProc.

Binaries are available at https://people.collabora.com/~tml/injectdll/injectdll.zip (for 32-bit applications) and https://people.collabora.com/~tml/injectdll/injectdll64.zip (64-bit). The zip archive contains an executable, injectdll.exe (injectdll64.exe in the 64-bit case) and a DLL.

Unpack the zip archive somewhere. Then go there in Command Prompt, and in case the program you want to trace the IDispatch::Invoke use of is something you know how to start from the command line, you can enter this command:

injectdll.exe x:\path\to\program.exe "program arg1 arg2 …"

where program.exe is the executable you want to run, and arg1 arg2 … are command-line parameters it takes, if any.

If you can’t start the program you want to investigate from the command line, but you need to inspect it after it has already started, just pass the process id of the program to injectdll.exe (or injectdll64.exe) instead. This is somewhat less likely to succeed, depending on how the program uses IDispatch.

In any case, the output (symbolic trace) will go to the standard output of the program being traced, which typically is nowhere at all, and not useful. It will not go to the standard output of the injectdll.exe program.

In order to redirect the output to a file, set an environment variable DEVIARE_LOGFILE that contains the full pathname to the log file to produce. This environment variable must be visible in the program that is being traced; it is not enough to set it in the Command Prompt window where you run injectdll.exe.

Obviously all this is a work in progress, and as needed will be hacked on further. For instance, the name "injectdll" is just the name of the original sample program in upstream Deviare-InProc; we should really rename it to something specific for this use case.

## May 10, 2017

We are releasing GIMP 2.8.22 with various bug fixes.

All platforms will benefit from a change to the image window hierarchy in single window mode, which improves painting performance when certain GTK+ themes are used.

This version fixes an ancient CVE bug, CVE-2007-3126. Due to this bug, the ICO file import plug-in could be crashed by specially crafted image files. Our attempts to reproduce the bug failed with 2.8 and thus the impact had likely been minimal for years, but now it is gone for good.

Users on the Apple macOS platforms will benefit from fixes for crashes during drag&drop and copy&paste operations. On the Microsoft Windows platforms, crashes encountered when using the color picker with special multi-screen setups are gone, and picking the actual color instead of black from anywhere on the screen should finally be possible.

Check out the full list of fixed issues since 2.8.20.

The source code, the Microsoft Windows installer and the Apple Disk Image for GIMP 2.8.22 are available from our downloads page; so yes, this time we made an effort to publish everything in one go :)

The second premium Krita game art course, Make Cel Shaded Game Characters, is out! It contains 14 tutorials
and a full commented art time-lapse that will help you improve your lighting fundamentals and show you how to
make a game character.

The series is based on David Revoy’s webcomic, Pepper and Carrot! He paints it all using Krita, and that was
a great occasion to link our respective work. David released most of his work under the CC-By 4.0 licence,
allowing anyone to reuse it as long as you give proper credits.

## May 08, 2017

The Open Desktop Ratings service is a simple Flask web service that various software centers use to retrieve and submit application reviews. Today it processed the 3000th review, and I thought I should mark this occasion here. I wanted to give a huge thanks to all the people who have submitted reviews; you have made life easier for people unfamiliar with installing software. There are reviews in over a hundred different languages and over 600 different applications have been reviewed.

Over 4000 people have clicked the “was this useful to you” buttons in the reviews, which affect how the reviews are ordered for a particular system. Without people clicking those buttons we don’t really know how useful a review is. Since we started this project, 37 reviews have been reported for abuse, of which 15 have been deleted for things like swearing and racism.

Here are some interesting graphs, first, showing the number of requests we’re handling per month. As you can see we’re handling nearly a million requests a month.

The second shows the number of people contributing reviews. At about 350 per month this is a tiny fraction compared to the people requesting reviews, but this is to be expected.

The third shows where reviews come from; the notable absence is Ubuntu, but they use their own review system rather than the ODRS. Recently Debian has been increasing the fastest, I assume because at last they ship a gnome-software package new enough to support user reviews, but the reviews are still coming in fastest from Fedora users. Maybe Fedora users are the kindest in the open source community? Maybe we just shipped the gnome-software package first? :)

One notable thing missing from the ODRS is a community of people moderating reviews; at the moment it’s just me deciding which reviews are indeed abuse, and also fixing up common spelling errors in the submitted text. If this is something you would like to help with, please let me know and I can spend a bit of time adding a user type somewhere in-between benevolent dictator (me) and unauthenticated. Ideas welcome.

## May 06, 2017

Don’t have time or money for an airport shoe shine? Use my hot tip to polish up those clogs while you’re on the move:

## May 05, 2017

Late notice, but Dave and I are giving a talk on the moon tonight at PEEC. It's called Moonlight Sonata, and starts at 7pm. Admission: $6/adult,$4/child (we both prefer giving free talks, but PEEC likes to charge for their Friday planetarium shows, and it all goes to support PEEC, a good cause).

We'll bring a small telescope in case anyone wants to do any actual lunar observing outside afterward, though usually planetarium audiences don't seem very interested in that.

If you're local but can't make it this time, don't worry; the moon isn't a one-time event, so I'm sure we'll give the moon show again at some point.

# Welcome digiKam!

## Lending a helping hand

One of the goals we have here at PIXLS.US is to help Free Software projects however we can, and one of those ways is to focus on things we do well that might make life easier for the projects. Dealing with websites or community outreach isn't necessarily much fun for project developers, but it is something I think we can help with, and recently we had an opportunity to do just that with the awesome folks over at the photo management project digiKam.

As part of a post announcing the release of digiKam 5.5.0 on discuss, we learned that they were in need of a new webmaster, and that they needed something soon to migrate away from Drupal 6 for security reasons. They had a rudimentary Drupal 7 theme set up, but it was severely lacking (non-responsive and not adapted to the existing content).

Mica (@paperdigits) reached out to Gilles Caulier and the digiKam community and offered our help, which they accepted! At that point Mica gathered requirements from them and found in the end that a static website would be more than sufficient for their needs. We coordinated with the KDE folks to get a git repo setup for the new website, and rolled up our sleeves to start building!

Mica chose to build the site with the Hugo static-site generator. This was something new for us, but it turned out to be quite fast and fun to work with (it generates the entire digiKam site in about 5 seconds). Coupled with a version of the Foundation 6 blog theme, we were able to get a base site framework up and running fairly quickly. We scraped all of the old site content to make sure we could port everything, as well as to make sure we didn’t break any URLs along the way.

We iterated some design stuff along the way, ported all of the old posts to markdown files, hacked at the theme a bit, and finally included comments that are now hosted on discuss. What’s wild is that we managed to pull the entire thing together in about 6 weeks total (of part-time working on it). The digiKam team seems happy with the results so far, and we’re looking forward to continue helping them by managing this infrastructure for them.

A big kudos to Mica for driving the new site and getting everything up and running. This was really all due to his hard work and drive.

Also, speaking of discuss, there is a new category created specifically for digiKam users and hackers: https://discuss.pixls.us/c/software/digikam.

This is the same category that news posts from the website will post in, so feel free to drop in and say hello or share some neat things you may be working on with digiKam!

WebKitGTK+ has supported remote debugging for a long time. The current implementation uses WebSockets for the communication between the local browser (the debugger) and the remote browser (the debug target, or debuggable). This implementation was very simple and, in theory, you could use any web browser as the debugger, because all the inspector code was served over the WebSocket connection. I say in theory because in practice this was not always so easy, since the inspector code uses newer JavaScript features that are not yet implemented in other browsers. The other major issue of this approach was that the communication between debugger and target was not bi-directional, so the target browser couldn’t notify the debugger about changes (like a new tab opening, a navigation, or that it is about to be closed).

Apple abandoned the WebSockets approach a long time ago and implemented its own remote inspector, using XPC for the communication between debugger and target. They also moved the remote inspector handling to JavaScriptCore making it available to debug JavaScript applications without a WebView too. In addition, the remote inspector is also used by Apple to implement WebDriver. We think that this approach has a lot more advantages than disadvantages compared to the WebSockets solution, so we have been working on making it possible to use this new remote inspector in the GTK+ port too. After some refactorings to the code to separate the cross-platform implementation from the Apple one, we could add our implementation on top of that. This implementation is already available in WebKitGTK+ 2.17.1, the first unstable release of this cycle.

From the user’s point of view there aren’t many differences; with the WebSockets approach we launched the target browser this way:
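A sketch of that launch, using the WEBKIT_INSPECTOR_SERVER environment variable that WebKitGTK+ reads for the WebSockets-based inspector; the address, port, and MiniBrowser binary here are placeholders of mine:

```shell
# Start the debug target listening for remote inspector connections
# on the given address and port, then load that address from the
# debugger browser to open the inspector UI.
WEBKIT_INSPECTOR_SERVER=192.168.0.50:5000 MiniBrowser
```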

#### Second screen – social details, personal requirements

This is the screen where you can fill out your badge details as well as indicate your personal requirements (T-shirt size, dietary preferences/restrictions, etc.)

#### Third screen – no funding needed

Depending on how things shake out, the next section may be split into a separate form or shown conditionally based on whether or not the registrant is requesting funding. The reason we would want to split funding requests into a separate form is that applicants will need to do some research into cost estimates for their travel, which could take some time, and we don’t want the form to time out while that’s going on.

Anyhow, this is what this page of the form looks like if you don’t need funding. Here we offer folks who don’t need funding an opportunity to help out other attendees.

#### Third screen – travel details

This is the travel details page for those seeking financial assistance; it’s rather long, as there are many travel options, domestic and international.

#### Fourth screen – funding request review

This is a summary of the total funding request cost as well as the breakdown of partial funding options. I’d really like to hear your feedback on this, if it’s confusing or if it makes sense. Are there too many partial options?

#### Final screen – summary

This screen is just a summary of everything submitted as well as information about next steps.

#### What do you think?

Do these seem to make sense? Any confusion or issues come up as you were reading through them? Please let me know. You can drop a comment or join the convo on flock-planning.

Cheers!

(Update: Changed the language of the first questions in both of the 3rd screens; there were confusing double-negatives pointed out by Rebecca Fernandez. Thanks for the help!)

In an ideal world, vendors could use the same GUID value for hardware matching in Windows and Linux firmware. When installing firmware and drivers in Windows, vendors can always use generated HardwareID GUIDs that match useful things like the BIOS vendor and the product SKU. It would make sense to use the same scheme as Microsoft. There are, however, a few issues with this otherwise simple plan.

The first was solved with a simple kernel patch I wrote (awaiting review by Jean Delvare) that exposes a few more SMBIOS fields in /sys/class/dmi/id required for the GUID calculation.

The second problem is a little more tricky. We don’t actually know how Microsoft joins the strings, what encoding is used, or, more importantly, the secret namespace UUID used to seed the GUID. The only thing we have is the closed-source ComputerHardwareIds.exe program in the Windows DDK. This, luckily, runs in Wine, although Wine isn’t able to get the system firmware data itself. That can be worked around, and it actually makes testing easier.

So, some research. All we know from the MSDN page is that “Each hardware ID string is converted into a GUID by using the SHA-1 hashing algorithm”, which actually tells us quite a bit. Generating a GUID from a SHA-1 hash means this has to be a type 5 UUID.

The reference code for a type-5 UUID is helpfully available in the IETF RFC document so it’s quite quick to get started with research. From a few minutes of searching online, the most likely symbols the program will be using are the BCrypt* set of functions. From the RFC code, we call the checksum generation update function with first the encoded namespace (aha!) and then the encoded joined string (ahaha!). For Win32 programs, BCryptHashData is the function we want to trace.
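For concreteness, the standard type-5 construction is just a SHA-1 over the namespace bytes followed by the name; Python's uuid module implements it directly. The DNS namespace and name here are only an illustration, and note that uuid5() encodes the name as UTF-8:

```python
import uuid

# RFC 4122 name-based (type 5) UUID: SHA-1(namespace.bytes + name),
# truncated to 128 bits with the version and variant bits forced.
guid = uuid.uuid5(uuid.NAMESPACE_DNS, "example.org")

print(guid)          # deterministic for the same namespace + name
print(guid.version)  # 5
```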

So, to check:

wine /home/hughsie/ComputerHardwareIds.exe /mfg "To be filled by O.E.M."


…matches the reference HardwareID-14 output from Microsoft. So on to debugging: using +relay shows all the calling values and return values from each Win32 exported symbol:

WINEDEBUG=+relay winedbg --gdb ~/ComputerHardwareIds.exe
Wine-gdb> b BCryptHashData
Wine-gdb> r ~/ComputerHardwareIds.exe /mfg "To be filled by O.E.M." /family "To be filled by O.E.M."
005b:Call bcrypt.BCryptHashData(0011bab8,0033fcf4,00000010,00000000) ret=0100699d
Breakpoint 1, 0x7ffd85f8 in BCryptHashData () from /lib/wine/bcrypt.dll.so
Wine-gdb>


Great, so this is the secret namespace. The first parameter is the context, the second is the data address, the third is the length (0x10, i.e. 16 bytes, exactly the size of a GUID rather than of a SHA-1 digest) and the fourth is the flags. So let's print out the data so we can see what it is:

Wine-gdb> x/16xb 0x0033fcf4
0x33fcf4:	0x70	0xff	0xd8	0x12	0x4c	0x7f	0x4c	0x7d
0x33fcfc:	0x00	0x00	0x00	0x00	0x00	0x00	0x00	0x00


Using either the uuid module in Python, or uuid_unparse in libuuid, we can format the namespace as 70ffd812-4c7f-4c7d-0000-000000000000 — now this doesn’t look like a randomly generated UUID to me! On to the next thing, the encoding and joining policy:

Wine-gdb> c
005f:Call bcrypt.BCryptHashData(0011bb90,00341458,0000005a,00000000) ret=010069b3
Breakpoint 1, 0x7ffd85f8 in BCryptHashData () from /lib/wine/bcrypt.dll.so
Wine-gdb> x/90xb 0x00341458
0x341458:	0x54	0x00	0x6f	0x00	0x20	0x00	0x62	0x00
0x341460:	0x65	0x00	0x20	0x00	0x66	0x00	0x69	0x00
0x341468:	0x6c	0x00	0x6c	0x00	0x65	0x00	0x64	0x00
0x341470:	0x20	0x00	0x62	0x00	0x79	0x00	0x20	0x00
0x341478:	0x4f	0x00	0x2e	0x00	0x45	0x00	0x2e	0x00
0x341480:	0x4d	0x00	0x2e	0x00	0x26	0x00	0x54	0x00
0x341488:	0x6f	0x00	0x20	0x00	0x62	0x00	0x65	0x00
0x341490:	0x20	0x00	0x66	0x00	0x69	0x00	0x6c	0x00
0x341498:	0x6c	0x00	0x65	0x00	0x64	0x00	0x20	0x00
0x3414a0:	0x62	0x00	0x79	0x00	0x20	0x00	0x4f	0x00
0x3414a8:	0x2e	0x00	0x45	0x00	0x2e	0x00	0x4d	0x00
0x3414b0:	0x2e	0x00
Wine-gdb> q


So there we go. The encoding looks like UTF-16LE (as expected; much of the Windows API works this way) and the joining character seems to be & (the 0x26 in the dump).
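Putting the traced pieces together, here is a minimal Python sketch of what the algorithm appears to be. The namespace, the '&' joiner, and the UTF-16LE encoding come from the traces above; the helper name, the choice of fields, and the forcing of the type-5 version/variant bits (which the resulting GUIDs do carry) are my assumptions:

```python
import hashlib
import uuid

# Namespace recovered from tracing ComputerHardwareIds.exe (see above).
NAMESPACE = uuid.UUID("70ffd812-4c7f-4c7d-0000-000000000000")

def hardware_id(*fields):
    """Sketch: SHA-1 over the namespace bytes followed by the SMBIOS
    fields joined with '&' and encoded as UTF-16LE, truncated to 16
    bytes with the RFC 4122 type-5 version/variant bits set."""
    name = "&".join(fields).encode("utf-16-le")
    digest = bytearray(hashlib.sha1(NAMESPACE.bytes + name).digest()[:16])
    digest[6] = (digest[6] & 0x0F) | 0x50  # force version 5
    digest[8] = (digest[8] & 0x3F) | 0x80  # force RFC 4122 variant
    return uuid.UUID(bytes=bytes(digest))

# e.g. a Manufacturer + Family combination
print(hardware_id("LENOVO", "ThinkPad T440s"))
```

Whether this reproduces ComputerHardwareIds.exe byte-for-byte for every field combination is something to verify against real hardware.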

I’ve written some code in fwupd so that this happens:

$ fwupdmgr hwids
Computer Information
--------------------
BiosVendor: LENOVO
BiosVersion: GJET75WW (2.25 )
Manufacturer: LENOVO
Family: ThinkPad T440s
ProductName: 20ARS19C0C
ProductSku: LENOVO_MT_20AR_BU_Think_FM_ThinkPad T440s
EnclosureKind: 10
BaseboardManufacturer: LENOVO
BaseboardProduct: 20ARS19C0C

Hardware IDs
------------
{c4159f74-3d2c-526f-b6d1-fe24a2fbc881} <- Manufacturer + Family + ProductName + ProductSku + BiosVendor + BiosVersion + BiosMajorRelease + BiosMinorRelease
{ff66cb74-5f5d-5669-875a-8a8f97be22c1} <- Manufacturer + Family + ProductName + BiosVendor + BiosVersion + BiosMajorRelease + BiosMinorRelease
{2e4dad4e-27a0-5de0-8e92-f395fc3fa5ba} <- Manufacturer + ProductName + BiosVendor + BiosVersion + BiosMajorRelease + BiosMinorRelease
{3faec92a-3ae3-5744-be88-495e90a7d541} <- Manufacturer + Family + ProductName + ProductSku + BaseboardManufacturer + BaseboardProduct
{660ccba8-1b78-5a33-80e6-9fb8354ee873} <- Manufacturer + Family + ProductName + ProductSku
{8dc9b7c5-f5d5-5850-9ab3-bd6f0549d814} <- Manufacturer + Family + ProductName
{178cd22d-ad9f-562d-ae0a-34009822cdbe} <- Manufacturer + ProductSku + BaseboardManufacturer + BaseboardProduct
{da1da9b6-62f5-5f22-8aaa-14db7eeda2a4} <- Manufacturer + ProductSku
{059eb22d-6dc7-59af-abd3-94bbe017f67c} <- Manufacturer + ProductName + BaseboardManufacturer + BaseboardProduct
{0cf8618d-9eff-537c-9f35-46861406eb9c} <- Manufacturer + ProductName
{f4275c1f-6130-5191-845c-3426247eb6a1} <- Manufacturer + Family + BaseboardManufacturer + BaseboardProduct
{db73af4c-4612-50f7-b8a7-787cf4871847} <- Manufacturer + Family
{5e820764-888e-529d-a6f9-dfd12bacb160} <- Manufacturer + EnclosureKind
{f8e1de5f-b68c-5f52-9d1a-f1ba52f1f773} <- Manufacturer + BaseboardManufacturer + BaseboardProduct
{6de5d951-d755-576b-bd09-c5cf66b27234} <- Manufacturer

Which basically matches the output of ComputerHardwareIds.exe on the same hardware.
If the kernel patch gets into the next release I’ll merge the fwupd branch to master and allow vendors to start using the Microsoft HardwareID GUID values.

I'm taking a MOOC that includes equations involving Greek letters like epsilon. I'm taking notes online, in Emacs, using the iimage mode tricks for taking MOOC class notes in Emacs that I worked out a few years back.

Iimage mode works fine for taking screenshots of the blackboard in the videos, but sometimes I'd prefer to just put the equations inline in my file. At first I was typing out things like E = epsilon * sigma * T^4 but that's silly, and of course the professor isn't spelling out the Greek letters like that when he writes the equations on the blackboard. There's got to be a way to type Greek letters on this US keyboard.

I know how to type things like accented characters using the "Multi key" or "Compose key". In /etc/default/keyboard I have

XKBOPTIONS="ctrl:nocaps,compose:menu,terminate:ctrl_alt_bksp"

which, among other things, sets the compose key to be my "Menu" key, which I never used otherwise. And there's a file, /usr/share/X11/locale/en_US.UTF-8/Compose, that includes all the built-in compose key sequences. I have a shell function in my .zshrc,

composekey() {
  grep -i $1 /usr/share/X11/locale/en_US.UTF-8/Compose
}

so I can type something like composekey epsilon and find out how to type specific codes. But that didn't work so well for Greek letters. It turns out this is how you type them:
<dead_greek> <A>            : "Α"   U0391    # GREEK CAPITAL LETTER ALPHA
<dead_greek> <a>            : "α"   U03B1    # GREEK SMALL LETTER ALPHA
<dead_greek> <B>            : "Β"   U0392    # GREEK CAPITAL LETTER BETA
<dead_greek> <b>            : "β"   U03B2    # GREEK SMALL LETTER BETA
<dead_greek> <D>            : "Δ"   U0394    # GREEK CAPITAL LETTER DELTA
<dead_greek> <d>            : "δ"   U03B4    # GREEK SMALL LETTER DELTA
<dead_greek> <E>            : "Ε"   U0395    # GREEK CAPITAL LETTER EPSILON
<dead_greek> <e>            : "ε"   U03B5    # GREEK SMALL LETTER EPSILON

... and so forth. And this <dead_greek> key isn't actually defined in most US/English keyboard layouts; you can check whether it's defined for you with:

xmodmap -pke | grep dead_greek

Of course you can use xmodmap to define a key to be <dead_greek>. I stared at my keyboard for a bit, and decided that, considering how seldom I actually need to type Greek characters, I didn't see the point of losing a key for that purpose (though if you want to, here's a thread on how to map <dead_greek> with xmodmap).

I decided it would make much more sense to map it to the compose key with a prefix, like 'g', that I don't need otherwise. I can do that in ~/.XCompose like this:

<Multi_key> <g> <A>            : "Α"   U0391    # GREEK CAPITAL LETTER ALPHA
<Multi_key> <g> <a>            : "α"   U03B1    # GREEK SMALL LETTER ALPHA
<Multi_key> <g> <B>            : "Β"   U0392    # GREEK CAPITAL LETTER BETA
<Multi_key> <g> <b>            : "β"   U03B2    # GREEK SMALL LETTER BETA
<Multi_key> <g> <D>            : "Δ"   U0394    # GREEK CAPITAL LETTER DELTA
<Multi_key> <g> <d>            : "δ"   U03B4    # GREEK SMALL LETTER DELTA
<Multi_key> <g> <E>            : "Ε"   U0395    # GREEK CAPITAL LETTER EPSILON
<Multi_key> <g> <e>            : "ε"   U03B5    # GREEK SMALL LETTER EPSILON

... and so forth.

And now I can type [MENU] g e and a lovely ε appears, at least in any app that supports Greek fonts, which is most of them nowadays.

## April 24, 2017

It was a Wednesday morning. I just connected to email, to realise that something was wrong with the developer web site. People had been having issues accessing content, and they were upset. What started with “what’s wrong with Trac?” quickly escalated to “this is just one more symptom of how The Company doesn’t care about us community members”.

As I investigated the problem, I realised something horrible. It was all my fault.

I had made a settings change in the Trac instance the night before – attempting to impose some reason and structure in ACLs that had grown organically over time – and had accidentally removed a group, containing a number of community members not working for The Company, from having the access they had.

Oh, crap.

After the panic and cold sweats died down, I felt myself getting angry. These were people who knew me, who I had worked alongside for months, and yet the first reaction for at least a few of them was not to assume this was an honest mistake. It was to go straight to conspiracy theory. This was conscious, deliberate, and nefarious. We may not understand why it was done, but it’s obviously bad, and reflects the disdain of The Company.

Had I not done enough to earn people’s trust?

So I fixed the problem, and walked away. “Don’t respond in anger”, I told myself. I got a cup of coffee, talked about it with someone else, and came back 5 minutes later.

“Look at it from their side”, I said – before I started working with The Company, there had been a strained relationship with the community. Yes, they knew Dave Neary wouldn’t screw them over, but they had no way of knowing that it was Dave Neary’s mistake. I stopped taking it personally. There is deep-seated mistrust, and that takes time to heal, I said to myself.

Yet, how to respond on the mailing list thread? “We apologise for the oversight, blah blah blah” would be interpreted as “of course they fixed it, after they were caught”. But did I really want to put myself out there and admit I had made what was a pretty rookie mistake? Wouldn’t that undermine my credibility?

In the end, I bit the bullet. “I did some long-overdue maintenance on our Trac ACLs yesterday, they’re much cleaner and easier to maintain now that we’ve moved to more clearly defined roles. Unfortunately, I did not test the changes well enough before pushing them live, and I temporarily removed access from all non-The Company employees. It’s fixed now. I messed up, and I am sorry. I will be more careful in the future.” All first person – no hiding behind the corporate identity, no “we stand together”, no sugar-coating.

What happened next surprised me. The most vocal critic in the thread responded immediately to apologise, and to thank me for the transparency and honesty. Within half an hour, a number of people were praising me and The Company for our handling of the incident. The air went out of the outrage balloon, and a potential disaster became a growth opportunity – yes, the people running the community infrastructure are human too, and there is no conspiracy. The Man was not out to get us.

I no longer work for The Company, and the team has scattered to the winds. But I never forgot those cold sweats, that feeling of vulnerability, and the elation that followed the community reaction to a heartfelt mea culpa.

Part of the OSS Communities series – difficult conversations. Contribute your stories and tag them on Twitter with #osscommunities to be included.

## April 22, 2017

This is a short report of what I've been doing this month regarding the development of the Arch Workbench of FreeCAD. At the beginning of this year I was complaining that the economic crisis in Brazil was making things hard here, and guess what, now we have so much work coming in that it got hard...

## April 21, 2017

Last week, my hiking group had its annual trip, which this year was to Bluff, Utah, near Comb Ridge and Cedar Mesa, an area particularly known for its Anasazi ruins and petroglyphs.

(I'm aware that "Anasazi" is considered a politically incorrect term these days, though it still seems to be in common use in Utah; it isn't in New Mexico. My view is that I can understand why Pueblo people dislike hearing their ancestors referred to by a term that means something like "ancient enemies" in Navajo; but if they want everyone to switch from using a mellifluous and easy to pronounce word like "Anasazi", they ought to come up with a better, and shorter, replacement than "Ancestral Puebloans." I mean, really.)

The photo at right is probably the most photogenic of the ruins I saw. It's in Mule Canyon, on Cedar Mesa, and it's called "House on Fire" because of the colors in the rock when the light is right.

The light was not right when we encountered it, in late morning around 10 am; but fortunately, we were doing an out-and-back hike. Someone in our group had said that the best light came when sunlight reflected off the red rock below the ruin up onto the rock above it, an effect I've seen in other places, most notably Bryce Canyon, where the hoodoos look positively radiant when seen backlit, because that's when the most reflected light adds to the reds and oranges in the rock.

Sure enough, when we got back to House on Fire at 1:30 pm, the light was much better. It wasn't completely obvious to the eye, but comparing the photos afterward, the difference is impressive: Changing light on House on Fire Ruin.

The weather was almost perfect for our trip, except for one overly hot afternoon on Wednesday. And the hikes were fairly perfect, too -- fantastic ruins you can see up close, huge petroglyph panels with hundreds of different creatures and patterns (and some that could only have been science fiction, like brain-man at left), sweeping views of canyons and slickrock, and the geology of Comb Ridge and the Monument Upwarp.

And in case you read my last article, on translucent windows, and are wondering how those generated waypoints worked: they were terrific, and in some cases made the difference between finding a ruin and wandering lost on the slickrock. I wish I'd had that years ago.

Most of what I have to say about the trip is already in the comments to the photos, so I'll just link to the photo page:

## April 19, 2017

Barnstorm VFX embodies the diverse skills, freewheeling spirit, and daredevil attitude of the early stunt-plane pilots. Nominated for a VES award for their outstanding work on the TV series “The Man in the High Castle”, they have been using Blender as an integral part of their pipeline.

The following text is an edited version of answers from a Reddit AMA held by the heads of the team (Lawson Deming and Cory Jamieson) on February 3, 2017.

## Getting into Blender

We’ve experimented with a variety of programs over the years, but for 3D work, we settled on using Blender starting about 3 years ago. It’s very unusual for VFX houses (at least in the US) to use Blender (as opposed to, say, Maya), but there are a number of great features that caused us to switch over to it. One of them was the Cycles render engine, which we’ve used to render most of the 3D elements in High Castle and other shows. In order to deal with the huge rendering needs of High Castle, we set up cloud rendering using Amazon’s own AWS servers through Deadline, which allowed us to have as many as 150 machines working at a time to render some of the big sequences.

In addition to Blender, we occasionally use other 3D programs, including Houdini for particle systems, fire, etc. Our texturing and material work is done in Substance Painter, and compositing is done in Nuke and After Effects.

The original decision to use Blender actually didn’t have anything to do with the cost (though it’s certainly helpful now that we have more people using it). We were already using Nuke and NukeX as a company (which are pretty expensive software packages) and had been using Maya for about a year. Before that, Lightwave was what we used.

## Assembling a team

The real turning point came when we had to pull together a small team of freelancers to do a sequence. The process went a little bit like this:

1) We hire a 3D artist to start modeling for us. He’s an experienced modeler but his background is in a studio environment where there are a lot of departments and a pretty hefty pipeline to help deal with everything. He’s nominally a Maya guy, but the studio he was at had their own custom modeling software which he’s more familiar with, so even though he’s working in Maya, it’s not his first choice.

2) The modeling guy only does modeling, so we need to bring in a texture artist. She doesn’t actually use Maya for UV work or texturing. Instead she uses Mari (a Foundry product). She and the Modeler have some issues making the texturing work back and forth between Mari and Maya because they aren’t used to being outside of a studio pipeline that takes care of everything for them.

3) Since neither of the above is experienced in layout or rendering, we hire a third guy to do the setup of the scene. He is a Maya guy as well, but once he starts working, he says “oh, you guys don’t have VRay? I can get by in Mental Ray (Maya’s renderer at the time) but I prefer VRay.” We spend a ton of time trying to work around Mental Ray’s idiosyncrasies, including weird behavior with the HDR lighting and major gamma issues with the textures.

4) We need to do some particle simulation work and smoke, and create some water in the same scene… Guess who uses Maya to do these things? No one, apparently. Water and particles are Houdini in this case. Smoke is FumeFX (which at the time existed only as a 3D Studio Max plugin and had no Maya version).

So, pop quiz. What is Maya doing for us in this instance? We’ve got a modeler who is begrudgingly using it but prefers other modeling software, a texture artist who isn’t using it at all, a layout/lighting artist who would rather be using a third-party rendering engine, and the prospect of doing SFX that would require multiple additional third-party software packages totaling thousands of dollars. At the time we were attempting this, the core team of our company was just 5 people, of which I was the only one who regularly did 3D work (in Lightwave).

I consider myself a generalist and had been puttering along in Maya, but I found it very obtuse and difficult to approach from a generalist standpoint. I’d just started dabbling in Blender and found it very approachable and easy to use, with a lot of support and tutorials out there. At the same time our three freelancers were struggling with the above sequence, I managed to build and render another shot from the scene fully in Blender (a program that I was a novice in at the time), utilizing its internal smoke simulation tools and the ocean simulation toolkit (which is actually a port of the one in Houdini) to do SFX on my own, and I got a great looking render out of Cycles.

Blender has its weaknesses, and as a general 3D package, it’s not the best in any one area, but neither is Maya. Any specialty task will always be better in another program. But without a pre-existing Maya pipeline, and with the fact that Maya’s structure encourages the use of many specialists collaborating on a single task (rather than one well-rounded generalist working solo) it didn’t make sense to dump a lot of resources and money into making Maya work for such a small studio.

I ended up falling in love with working in Blender, and as we brought on and trained some other 3D artists, I encouraged them to use it. Eventually we found ourselves a Blender studio. That advantage of being good for a generalist, though, has also been a weakness as we’ve grown as a company, because it’s hard to find people who are really amazing artists in Blender. Our solution up until now has been to work hard on finding good Blender artists and to try and train others who want to learn.

## Blender in production

Also, since Blender acts as a hub for VFX work, it’s still possible for specialists to contribute from their respective programs. Initial modeling, for example, can be done in almost any program. It can be difficult, but the more people from other VFX studios I talk to, the more I realize that everybody’s pipeline is pretty messy, and even the studios who are fully behind Maya use a ton of other software and have a lot of custom scripts and techniques to get everything working the way they want it to.

We use Blender for modeling, animation, and rendering. Our partners at Theory Animation have focused a lot on how to make Blender better for animation (they all came from a Maya background as well but fell in love with Blender the same way I did). We’ve used Blender’s fluid system and particle system (though both of these need work) and render everything in Cycles. We still use Houdini for the stuff that it’s good at. We used Massive to create character animations for “The Man in the High Castle”. We also started using Substance Painter and Substance Designer for texture work. Cycles is good at exporting render layers, which we composited mostly in Nuke.

One of the big hurdles that Blender has to overcome is the fact that its licensing rules can make it legally difficult for it to interact with paid software. Most companies want to keep their code closed, so the open-source nature of Blender has made it tricky to, for example, get a Substance Designer plugin. It’s something we’re working on though.

When collaborating with other companies, we usually separate the 3D and compositing aspects of the work to keep the software issues from being a problem. It’s getting easier every day, though, especially now that Blender is starting to support Alembic. For season one, the sequence we worked on was completely separate and turnkey, so we didn’t have any issues sharing assets. For season 2, however, we did need to do a lot of conversion and re-modeling of elements. Also, many of the models we received were textured using UDIMs, which Blender does not currently support. It would be great for Blender to eventually adopt the UDIM workflow for texturing.

We do get a lot of raised eyebrows from people when we tell them we use Blender professionally. Hopefully the popularity of the show (and the fact that we’ve been nominated for some VFX awards) will help remove some of the stigma that Blender has developed over the years. It’s a great program.

We’ve developed a number of in-house solutions for Blender. We use Blender solely for 3D and NukeX for tracking and compositing, but we hand camera data back and forth between Nuke and Blender using .chan files (that’s technically built into Blender, but we’ve developed a system to make it a bit easier). Fitting Blender into a compositing pipeline (Nuke, EXR workflow) is surprisingly easy. Render-layer output and the ease of setting up Blender have made it pretty fast to pass assets around between artists and vendors. We also have a custom procedure and PBR shader setup for working with materials out of Substance Painter in Blender. A mix of Shotgun, our own asset tracking, and a workflow based on Blender linking with a handful of add-ons is needed to make sure everything works.

## Production Design

We worked really hard to make it feel correct. You can also thank the Production Designer, Andrew Boughton, who designed the practical sets in the show. He has a lot of architectural knowledge and was very collaborative with us to help make sure our designs matched the feel of the rest of the stuff in the show.

Our visual bible for Germania was a book called “Albert Speer: Architecture 1932-1942”. There were extensive and detailed plans for the transformation of Berlin, including blueprints for buildings like the Volkshalle. We did take some creative liberties with the arrangement and positioning of buildings for the sake of the narrative and to better coordinate with the production designer’s aesthetic of the sets. We looked at old film reels including the famous “Triumph of the Will” for references of how Nazi rallies were organized. One video game that I remember paying attention to was “Wolfenstein: The New Order” because it presents a world that was taken over by the Nazis, though its presentation of post-war Berlin (including the Volkshalle) was much more futuristic and sci-fi-ish than what we went for. Our goal in MITHC was to create a sense of the world that felt fairly mundane and grounded in reality. The more it felt like something that could really happen, the more effective the message of the show would be.

## April 18, 2017

Ryan Lerch put together an initial cut at a Flock 2017 logo and website design (the flock2017-WIP branch of fedora-websites). It was an initial cut he offered for me to play with; since I was working on some logistics for Flock to make sure things happen on time, I felt that locking in a final logo design would be helpful at this point.

Here is the initial cut of the top part of the website with the first draft logo:

Overall, this is very Cape Cod. Ryan created a beautiful piece of work in the landscape illustration and the overall palette. Honestly, this would work fine as-is, but there were a few points of critique for the logo specifically that I decided to explore –

• There weren’t any standard Fedora fonts in it; I thought at least the date text could be in one of the standard Fedora fonts to tie it back to Fedora / Flock. The standard ‘Flock’ logotype wasn’t used either; generally we try to keep things that have logotypes in that logotype (if anything, it seems more official.)
• The color palette is excellent and evocative of Cape Cod, but maybe some Fedora accent colors could tie it into the broader Fedora look and feel and make it seem more like part of the family.
• The hierarchy of the logo is very “Cape Cod”-centric and my gut told me that “Flock” should be primary and “Cape Cod” should be subordinate to that.
• Some helpful nautically-experienced folks in the broader community (particularly Pat David) pointed out the knot wasn’t tied quite correctly.

So here were the first couple of iterations I played with (B,C) based on Ryan’s design (A), but trying to take into account the critique / ideas above, with an illustration I created of the Lewis Bay lighthouse (the closest to the conference site):

I posted this to Twitter and Mastodon, and got a ton of very useful feedback. The main points I took away:

• The seagulls probably complicate things too much – ditch ’em.
• The Fedora logo was liked.
• There seemed to be a preference for having the full dates for the conference in the logo.
• The lighthouse beams in C were sloppily / badly aligned… I knew this and was lazy and posted it anyway.
• Some folks liked the dark blue ones because it was a Fedora color, some folks felt A’s color palette was more “Cape Cod” like.
• At least a couple folks felt C was reminiscent of a nuclear symbol.
• The simplicity / cleanness of A was admired.

So here’s the next round; things I tried:

• Took a position on the hierarchy and placed ‘Flock’ above ‘Cape Cod’ in the general hierarchy in the logo.
• Standardized all non-Flock logotype fonts on Montserrat, which is a standard Fedora brand font.
• Shifted to original color palette from A.
• Properly aligned lighthouse lights.
• Added full dates to every mockup.
• Corrected knot tie.

One more round, based on further helpful Mastodon feedback. You can see some play with fonts and mashing up elements from other iterations together based on folks’ suggestions:

I have a few favorites. Maybe you do too. I’m not sure which to go with yet – I have been staring at these too long for today. I did some quick mockups of how they look in the website:

I’ll probably sit on this and come back with fresh eyes later. I’m happy for any / all feedback in the comments here!

This is part of the opensource.com community blogging challenge: Maintaining Existing Community.

There are a lot of parallels between the world of politics and open source development. Open source community members can learn a lot about how political parties cultivate grass-roots support and local organizations, and empower those local organizations to keep people engaged. Between 2005 and 2009, Howard Dean was the chairman of the Democratic National Committee in the United States, and instituted what was known as the “50 state strategy” to grow the Democratic grass roots. That strategy, and what happened after it was changed, can teach community managers some valuable lessons about keeping community contributors. Here are three lessons community managers can learn from it.

## Growing grass roots movements takes effort

The 50 state strategy meant allocating scarce resources across parts of the country where there was little or no hope of electing a congressman, as well as spending some resources in areas where there was no credible opposition. Every state and electoral district had some support from the national organization. Dean himself travelled to every state, and identified and empowered young, enthusiastic activists to lead local organizations. This was a lot of work, and many senior Democrats did not agree with the strategy, arguing that it was more important to focus effort on the limited number of races where the resources could make a difference between winning and losing (swing seats). Similarly, for community managers, we have a limited number of hours in the day, and investing in outreach in areas where we do not have a big community already takes attention away from keeping our current users happy. But growing the community, and keeping community members engaged, means spending time in places where the short-term return on that investment is not clear. Identifying passionate community users and empowering them to create local user groups, staff a stand at a small local conference, or speak at a local meet-up helps keep them engaged and feeling like part of a greater community, and it also helps grow the community for the future.

## Regular contact maintains engagement

After Howard Dean finished his term as head of the DNC in 2009, and Debbie Wasserman-Schultz took over as the DNC chair, the 50 state strategy was abandoned in favour of a more strategic and focussed investment of efforts in swing states. While there are many possible reasons that can be put forward, it is undeniable that the local Democratic party structures which flourished under Dean have lost traction. The Democratic party has lost hundreds of state legislature seats, dozens of state senate seats, and a number of governorships in “red” states since 2009, in spite of winning the presidency in 2012. The Democrats have lost control of the House and the Senate nationally, in spite of winning the popular vote in 2016 and 2012. For community managers, it is equally important to maintain contact with local user groups and community members, to ensure they feel empowered to act for the community, and to give them the resources they need to be successful. In the absence of regular maintenance, community members are less inclined to volunteer their time to promote the project and maintain a local community.

## Summary

Growing local user groups and communities is a lot of work, but it can be very rewarding. Maintaining regular contact, empowering new community members to start a meet-up or a user group in their area, and creating resources for your local community members to speak about and promote your project is a great way to grow the community, and also to make life-long friends. Political organizations have a long history of organizing people to buy into a broader vision and support and promote it in their local communities.

What other lessons can community managers and organizers learn from political organizations?

Blessed Easter!

## April 14, 2017

I’m currently working on a small Haskell tool which helps me minimize the waiting time for catching a train into the city (or out). One feature I’ve implemented recently is an automated import of approx. 25MB of compressed CSV data into an SQLite3 database, which was very slow in the beginning. Looking beyond the first entry in the profiling output helped me optimize the implementation for a swift import.

### Background

The data comes as a 25MB zip archive of text files in a CSV format. Fully imported, the SQLite database grows to about 800 MiB. My work-in-progress solution was a cruddy shell + SQL script which imports the CSV files into an SQLite database. With this solution, the import takes about 30 seconds, excluding the time needed to manually download the zip file. But this is not very portable, and I wanted a more user-friendly solution.

The initial Haskell implementation using mostly the esqueleto and persistent DSL functions showed an abysmal performance. I had to stop the process after half an hour.

### Finding the culprit

A first profiling pass showed this result summary:

COST CENTRE          MODULE                         %time %alloc

stepError            Database.Sqlite                 77.2    0.0
concat.ts'           Data.Text                        1.8   14.5
compareText.go       Data.Text                        1.4    0.0
concat.go.step       Data.Text                        1.0    8.2
concat               Data.Text                        0.9    1.4
concat.len           Data.Text                        0.8   13.9
sumP.go              Data.Text                        0.8    2.1
concat.go            Data.Text                        0.7    2.6
singleton_           Data.Text.Show                   0.6    4.0
run                  Data.Text.Array                  0.5    3.1
escape               Database.Persist.Sqlite          0.5    7.8
>>=.\                Data.Attoparsec.Internal.Types   0.5    1.4
singleton_.x         Data.Text.Show                   0.4    2.9
parseField           CSV.StopTime                     0.4    1.6
toNamedRecord        Data.Csv.Types                   0.3    1.2
fmap.\.ks'           Data.Csv.Conversion              0.3    2.9
insertSql'.ins       Database.Persist.Sqlite          0.2    1.4
compareText.go.(...) Data.Text                        0.1    4.3
compareText.go.(...) Data.Text                        0.1    4.3

Naturally I checked the implementation of the first function, since that seemed to have the largest impact. It is a simple foreign function call to C. Fraser Tweedale made me aware that there is no more speed to gain there, since it’s already calling a C function. With that in mind I had to focus on the next entries. It turned out that’s where I gained most of the speed, making the tool competitive with the crude SQL script while being more user friendly.

It turned out that the persistent library primarily uses Data.Text concatenation to create the SQL statements. Doing that for every insert is very costly, since it prepares, binds values and executes a statement for each insert (for reference see this Stack Overflow answer).

### The solution

My current solution is to prepare the statement once and only bind the values for each insert.
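The same “prepare once, bind per row” idea can be sketched in Python’s sqlite3 module (this is not the author’s Haskell code; the stop_time table is a hypothetical stand-in for the CSV data):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE stop_time (trip_id TEXT, stop_id TEXT, departure TEXT)")

rows = [("t1", "s1", "08:00"), ("t1", "s2", "08:05"), ("t2", "s1", "09:00")]

# executemany compiles the INSERT once and only re-binds values per row,
# instead of building and preparing a fresh statement for every insert.
# Wrapping the whole import in one transaction also avoids a sync per row.
with conn:
    conn.executemany("INSERT INTO stop_time VALUES (?, ?, ?)", rows)

count = conn.execute("SELECT count(*) FROM stop_time").fetchone()[0]
print(count)  # 3
```

The win is the same in both languages: statement preparation and SQL string construction move out of the per-row loop, leaving only value binding and execution.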

Having done another benchmark, the import time now comes down to approximately a minute on my Thinkpad X1 Carbon.

## April 13, 2017

I’ve created a mailing list for fwupd and LVFS discussions. If you’re interested in firmware updating on Linux, or want to know what’s happening on the Linux Vendor Firmware Service you probably want to join. There are a few interesting things I’ll post in a few days.

### Graupner

Graupner is a remote control model equipment company, originally founded in 1930 in Germany. It went bankrupt in 2012 and was taken over by the South Korean manufacturer SJ Ltd a year later. Graupner now continues as a brand and sales organization. A large part of the product palette is stick-type transmitters for RC aircraft.

### X-8E RC transmitter

The X-8E pistol-style transmitter for surface vehicles was first announced in 2013, but delayed until 2016; I can only assume this was caused by the change of ownership and restructuring. Given the rich feature set and a price point of €469.99 in their own shop, it is clearly in the high-end category and must face comparison with the Futaba 4PX, KO Propo EX1, Sanwa M12S and Spektrum DX6R.

However, the screen design seems incredibly rushed and not at all befitting a flagship model in its category. Let’s have a look at the dashboard screen, which should be visible fairly often.

### Import

I imported the dashboard screen from the PDF manual into Inkscape and scaled it to match the resolution of 320 x 480 pixels, with a little tweaking to have the raster image icons at their original 40 x 40 pixels and lined up on the pixel grid. A photo of the real thing and the result of these first steps:

As you can see, the layout is all over the place. At least the varying corner radii seem to appear only in the PDF.

### Quick and easy improvements

A: In the second row, I removed TX, 2x RX and 4.8V as they are absent in the photo, though their visibility seems to be conditional. There’s space left for them, anyway.

B: Making things line up within a grid. A table section with left aligned labels and units (%) in their own row.

C: Vertical steering and throttle meters (ST and TH) are aligned with the physical controls (wheel and trigger), but steering is better shown on a left-right axis and having the same orientation for all 4 channel meters tames the layout. Graupner is already written above the screen; the space can be put to much better use.

There are several deeper issues I did not touch:

• Lack of differentiation between pure indicators, toggles and menu buttons.
• Questionable icons, especially the two in the third row.
• Just white outlines for some elements, where filled backgrounds would make them more defined.
• Missing and poor labelling, with unnecessary abbreviations. The O.TIME in the bottom left is explained as model use time in the manual.

Filed under: User Experience

## April 10, 2017

My friend and colleague Stormy Peters just launched a challenge to the community – to blog on a specific community related topic before the end of the week. This week, the topic is “Encouraging new contributors”.

I have written about the topic of encouraging new contributors in the past, as have many others. So this week, I am kind of cheating, and collecting some of the “Greatest Hits”, articles I have written, or which others have written, which struck a chord on this topic.

Some of my own blog posts I have particular affection for on the topic are:

I also have a few go-to articles I return to often, for the clarity of their ideas, and for their general usefulness:

• “Open Source Community, Simplified” by Max Kanat-Alexander, does a great job of communicating the core values of communities which are successful at recruiting new contributors. I particularly like his mantra at the end: “be really, abnormally, really, really kind, and don’t be mean”. That about sums it up…
• “Building Belonging”, by Jono Bacon: I love Jono’s ability to weave a narrative from personal stories, and the mental image of an 18 year old kid knocking on a stranger’s door and instantly feeling like he was with “his people” is great. This is a key concept of community for me – creating a sense of “us” where newcomers feel like part of a greater whole. Communities who fail to create a sense of belonging leave their engaged users on the outside, where there is a community of “core developers” and those outside. Communities who suck people in and indoctrinate them by force-feeding them kool-aid are successful at growing their communities.
• I love all of “Producing Open Source Software”, but in the context of this topic, I particularly love the sentiment in the “Managing Participants” chapter: “Each interaction with a user is an opportunity to get a new participant. When a user takes the time to post to one of the project’s mailing lists, or to file a bug report, she has already tagged herself as having more potential for involvement than most users (from whom the project will never hear at all). Follow up on that potential.”

To close, one thing I think is particularly important when you are managing a team of professional developers who work together is to ensure that they understand that they are part of a team that extends beyond their walls. I have written about this before as the “water cooler” anti-pattern. To extend on what is written there, it is not enough to have a policy against internal discussion and decisions – creating a sense of community, with face to face time and with quality engagements with community members outside the company walls, can help a team member really feel like they are part of a community in addition to being a member of a development team in a company.

### Could you tell us something about yourself?

My name is Marcos Ebrahim. I’m an Egyptian artist and illustrator specializing in children’s book art, with 5 years of experience working on children’s animation episodes as a computer graphics artist. I have just finished my first whole book as a freelance children’s illustrator; it will be on the market on Amazon soon. I’m also working on my own children’s book project as author and illustrator.

### What genre(s) do you work in?

Children’s illustrations and concept art in general or children’s book art specifically.

### Do you paint professionally, as a hobby artist, or both?

Because I’m not a member of any children’s illustration agencies, associations or publishing houses yet, I’m now doing this work on a small scale as a freelancer. I changed careers to be an illustrator a few months ago, so I can’t call myself a professional yet. I hope to achieve that soon.

### Whose work inspires you most — who are your role models as an artist?

Nathan Fowkes’ works and illustrations. I found this illustrator and concept artist on a website that called him “the master of value and colour”. I was lucky enough to study online (in an art program by Schoolism) under his supervision and learn a lot. However, I’m not a brilliant student.

Other great illustrators and artists I like: Goro Fujita, Marco Bucci, Patrice Barton, Will Terry, Lynne Chapman, John Manders and many others. I always look forward to seeing their art works to learn from them.

### How and when did you get to try digital painting for the first time?

About three years ago, when I was trying to use my new Wacom Intuos tablet for painting and drawing, practicing studies from the great Renaissance masters as fan art.

### What makes you choose digital over traditional painting?

Let me describe it like this: “The Undo-Time Machine — the great digital button”. Beside the ability to make changes in illustrations easily, I found its benefit when I tried to work with authors and they asked me to make changes that I couldn’t have made in traditional painting without redoing the illustration from scratch.

### How did you find out about Krita?

I used to surf YouTube watching illustrations and artists demoing their work. Then I heard about Krita as open source art software. So I decided to search more, and found out that the illustrations made with it could be similar to my work, so I should devote some of my time to getting to know more about it and trying it. Then I searched out more learning videos on YouTube. Frankly, the most impressive and helpful one was a long video tutorial by the art champion of the Krita community, David Revoy, making a whole comic page from scratch using Krita. He showed the whole illustration process, as well as the brushes and tools he provides for others to use (thank you very much!).

### What was your first impression?

I think the Krita program has a user-friendly interface and tools that become more familiar when I configure the shortcuts similarly to the most popular other art programs. This makes it easier to work without the need to learn many things in a short time.

### What do you love about Krita?

I think the most wonderful thing is the brush sets and the way they look like real-world tools. In addition, I like some other tools, like the transformation tool (perspective) and the pop-up tool.

Also I can say that working with Krita is the first time that I can work on one of my previous sketches and achieve a good result (according to my current art skills) that I’m happy with.

### What do you think needs improvement in Krita? Is there anything that really annoys you?

As I mentioned to the Krita team, there are some issues that we could call bugs. However, I know that Krita is in development and the great Krita team makes things better from one version to the next and adds new features all the time. Thanks to them, and I hope they will continue their great work!

### What sets Krita apart from the other tools that you use?

I think that Krita, as open-source art software, could soon compete with commercial art software if it continues on this path (fixing bugs and adding new features).

### If you had to pick one favourite of all your work done in Krita so far, what would it be, and why?

Frankly, I’ve only recently started to use Krita and I’ve finished only two pictures. One I could call an illustration, the other is the background of my art blog, trying to use the wrap tool to make a tiled pattern. You can see this in the screenshots.

### What techniques and brushes did you use in it?

I prefer to make a sketch, then work on it adding base colors and adjusting
the lighting value to reach the final details.

### Where can people see more of your work?

I add new works to my art blog once or twice every month or more
often, depending on the time I have available and whether I make anything
new.

http://marcosebrahimart.tumblr.com

### Anything else you’d like to share?

All the best wishes to the great Krita team for continuing success in their
work in developing Krita open source software for all artists and all people.

## April 07, 2017

A week after the alpha release, we present the beta release for Krita 3.1.3. Krita 3.1.3 will be a stable bugfix release, 4.0 will have the vector work and the python scripting. The final release of 3.1.3 is planned for end of April.

We’re still working on fixing more bugs for the final 3.1.3 release, so please test these builds, and if you find an issue, check whether it’s already in the bug tracker, and if not, report it!

Things fixed in this release, compared to 3.1.3-alpha:

• Added the credits for the 2016 Kickstarter backers to the About Krita dialog
• Don’t cover startup dialogs (for instance, for the pdf import filter) with the splash screen
• Fix a race condition that made a transform mask with a liquify transformation unreliable
• Fix canvas blackouts when using the liquify tool at a high zoom level
• Use the native color selector on OSX: Krita’s custom color selector cannot pick screen colors on OSX
• Set the default PNG compression to 3 instead of 9: this makes saving PNGs much faster, and the resulting size is the same
• Fix a crash when pressing the V shortcut to draw straight lines
• Fix a warning when the installation is incomplete that still mentioned Calligra
• Make dragging the guides with a tablet work correctly

#### Note for Windows Users

We are still struggling with Intel’s GPU drivers; recent Windows updates seem to have broken Krita’s OpenGL canvas on some systems, and since we don’t have access to a broken system, we cannot work around the issue. For now, if you are affected, you have to disable OpenGL in Settings → Configure Krita → Display.

#### Windows

Note for Windows users: if you encounter crashes, please follow these instructions to use the debug symbols so we can figure out where Krita crashes.

#### Linux

A snap image for the Ubuntu App Store is also available. You can also use the Krita Lime PPA to install Krita 3.1.3-beta.1 on Ubuntu and derivatives.

### Source code

#### Key

The Linux AppImage and the source tarball are signed. You can retrieve the public key over https here: 0x58b9596c722ea3bd.asc. The signatures are here.

## April 06, 2017

It happened again: someone sent me a JPEG file with an image of a topo map, with a hiking trail and interesting stopping points drawn on it. Better than nothing. But what I really want on a hike is GPX waypoints that I can load into OsmAnd, so I can see whether I'm still on the trail and how to get to each point from where I am now.

My PyTopo program lets you view the coordinates of any point, so you can make a waypoint from that. But for adding lots of waypoints, that's too much work, so I added an "Add Waypoint" context menu item -- that was easy, took maybe twenty minutes. PyTopo already had the ability to save its existing tracks and waypoints as a GPX file, so no problem there.
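For illustration, here is a minimal sketch of what such a GPX waypoint file looks like (this is not PyTopo’s actual code; the helper name and coordinates are made up, but the `<wpt>` element with lat/lon attributes and a `<name>` child is standard GPX 1.1):

```python
# Hypothetical sketch: serialize (lat, lon, name) waypoints as GPX 1.1.
from xml.sax.saxutils import escape

def waypoints_to_gpx(waypoints):
    """waypoints: list of (lat, lon, name) tuples -> GPX document string."""
    lines = ['<?xml version="1.0" encoding="UTF-8"?>',
             '<gpx version="1.1" creator="sketch" '
             'xmlns="http://www.topografix.com/GPX/1/1">']
    for lat, lon, name in waypoints:
        # Coordinates are attributes; the human-readable name is a child.
        lines.append('  <wpt lat="%.6f" lon="%.6f"><name>%s</name></wpt>'
                     % (lat, lon, escape(name)))
    lines.append('</gpx>')
    return "\n".join(lines)

gpx = waypoints_to_gpx([(35.8311, -105.7530, "Trail junction")])
print(gpx)
```

Apps like OsmAnd read exactly this kind of file, which is why GPX is such a convenient interchange target.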

But how do you locate the waypoints you want? You can do it the hard way: show the JPEG in one window, PyTopo in the other, and do the "let's see the road bends left then right, and the point is off to the northwest just above the right bend and about two and a half times as far away as the distance through both road bends". Ugh. It takes forever and it's terribly inaccurate.

More than once, I've wished for a way to put up a translucent image overlay that would let me click through it. So I could see the image, line it up with the map in PyTopo (resizing as needed), then click exactly where I wanted waypoints.

I needed two features beyond what normal image viewers offer: translucency, and the ability to pass mouse clicks through to the window underneath.

## A translucent image viewer, in Python

The first part, translucency, turned out to be trivial. In a class inheriting from my Python ImageViewerWindow, I just needed to add this line to the constructor:

    self.set_opacity(.5)


Plus one more step. The window was translucent now, but it didn't look translucent, because I'm running a simple window manager (Openbox) that doesn't have a compositor built in. Turns out you can run a compositor on top of Openbox. There are lots of compositors; the first one I found, which worked fine, was xcompmgr -c -t-6 -l-6 -o.1

The -c specifies client-side compositing. -t and -l specify top and left offsets for window shadows (negative so they go on the bottom right). -o.1 sets the opacity of window shadows. In the long run, -o0 is probably best (no shadows at all) since the shadow interferes a bit with seeing the window under the translucent one. But having a subtle .1 shadow was useful while I was debugging.

That's all I needed: voilà, translucent windows. Now on to the (much) harder part.

## A click-through window, in C

X11 has something called the SHAPE extension, which I experimented with once before to make a silly program called moonroot. It's also used for the familiar "xeyes" program. It's used to make windows that aren't square, by passing a shape mask telling X what shape you want your window to be. In theory, I knew I could do something like make a mask where every other pixel was transparent, which would simulate a translucent image, and I'd at least be able to pass clicks through on half the pixels.

But fortunately, first I asked the estimable Openbox guru Mikael Magnusson, who tipped me off that the SHAPE extension also allows for an "input shape" that does exactly what I wanted: lets you catch events on only part of the window and pass them through on the rest, regardless of which parts of the window are visible.

Knowing that was great. Making it work was another matter. Input shapes turn out to be something hardly anyone uses, and there's very little documentation.

In both C and Python, I struggled with drawing onto a pixmap and using it to set the input shape. Finally I realized that there's a call to set the input shape from an X region. It's much easier to build a region out of rectangles than to draw onto a pixmap.

I got a C demo working first. The essence of it was this:

    if (!XShapeQueryExtension(dpy, &shape_event_base, &shape_error_base)) {
        printf("No SHAPE extension\n");
        return;
    }

    /* Make a shaped window, a rectangle smaller than the total
     * size of the window. The rest will be transparent.
     */
    region = CreateRegion(outerBound, outerBound,
                          XWinSize-outerBound*2, YWinSize-outerBound*2);
    XShapeCombineRegion(dpy, win, ShapeBounding, 0, 0, region, ShapeSet);
    XDestroyRegion(region);

    /* Make a frame region.
     * So in the outer frame, we get input, but inside it, it passes through.
     */
    region = CreateFrameRegion(innerBound);
    XShapeCombineRegion(dpy, win, ShapeInput, 0, 0, region, ShapeSet);
    XDestroyRegion(region);


CreateRegion sets up rectangle boundaries, then creates a region from those boundaries:

    Region CreateRegion(int x, int y, int w, int h) {
        Region region = XCreateRegion();
        XRectangle rectangle;
        rectangle.x = x;
        rectangle.y = y;
        rectangle.width = w;
        rectangle.height = h;
        XUnionRectWithRegion(&rectangle, region, region);

        return region;
    }


CreateFrameRegion() is similar but a little longer. Rather than post it all here, I've created a GIST: transregion.c, demonstrating X11 shaped input.

Next problem: once I had shaped input working, I could no longer move or resize the window, because the window manager passed events through the window's titlebar and decorations as well as through the rest of the window. That's why you'll see that CreateFrameRegion call in the gist -- I had a theory that if I omitted the outer part of the window from the input shape, and handled input normally around the outside, maybe that would extend to the window manager decorations. But the problem turned out to be a minor Openbox bug, which Mikael quickly tracked down (in openbox/frame.c, in the XShapeCombineRectangles call on line 321, change ShapeBounding to kind). Openbox developers are the greatest!

## Input Shapes in Python

Okay, now I had a proof of concept: X input shapes definitely can work, at least in C. How about in Python?

There's a set of python-xlib bindings, and they even support the SHAPE extension, but they have no documentation and didn't seem to include input shapes. I filed a GitHub issue and traded a few notes with the maintainer of the project. It turned out the newest version of python-xlib had been completely rewritten, and supposedly does support input shapes. But the API is completely different from the C API, and after wasting about half a day tweaking the demo program trying to reverse engineer it, I gave up.

Fortunately, it turns out there's a much easier way. Python-gtk has shape support, even including input shapes. And if you use regions instead of pixmaps, it's this simple:

    if self.is_composited():
        region = gtk.gdk.region_rectangle(gtk.gdk.Rectangle(0, 0, 1, 1))
        self.window.input_shape_combine_region(region, 0, 0)


My transimageviewer.py came out nice and simple, inheriting from imageviewer.py and adding only translucency and the input shape.

If you want to define an input shape based on pixmaps instead of regions, it's a bit harder and you need to use the Cairo drawing API. I never got as far as working code, but I believe it should go something like this:

    # Warning: untested code!
    bitmap = gtk.gdk.Pixmap(None, self.width, self.height, 1)
    cr = bitmap.cairo_create()
    # Clear the whole mask first:
    cr.rectangle(0, 0, self.width, self.height)
    cr.set_operator(cairo.OPERATOR_CLEAR)
    cr.fill()

    # Draw a white filled circle in the middle:
    cr.arc(self.width / 2, self.height / 2, self.width / 4,
           0, 2 * math.pi)
    cr.set_operator(cairo.OPERATOR_OVER)
    cr.fill()



The translucent image viewer worked just as I'd hoped. I was able to take a JPG of a trailmap, overlay it on top of a PyTopo window, scale the JPG using the normal Openbox window manager handles, then right-click on top of trail markers to set waypoints. When I was done, a "Save as GPX" in PyTopo and I had a file ready to take with me on my phone.

We are sorry to inform you that we had to disable comments on this website. Currently there are more than 21 thousand messages in the spam queue plus another 2.6 thousand in the review queue. There is no way we can handle those. If you want to get in touch with us then head over to the contact page and find what suits you best – mailing lists, IRC, bug tracker, …
We hope to be able to get some alternative up and running, but that might take some time as it's not really a high priority for us.

we're proud to announce the fourth bugfix release for the 2.2 series of darktable, 2.2.4!

the github release is here: https://github.com/darktable-org/darktable/releases/tag/release-2.2.4.

as always, please don't use the autogenerated tarball provided by github, but only our tar.xz. the checksums are:

    $ sha256sum darktable-2.2.4.tar.xz
    bd5445d6b81fc3288fb07362870e24bb0b5378cacad2c6e6602e32de676bf9d8  darktable-2.2.4.tar.xz

    $ sha256sum darktable-2.2.4.6.dmg
    b7e4aeaa4b275083fa98b2a20e77ceb3ee48af3f7cc48a89f41a035d699bd71c  darktable-2.2.4.6.dmg

Important note: to make sure that darktable can keep on supporting the raw file format for your camera, please help us by visiting https://raw.pixls.us/ and making sure that we have the full raw sample set for your camera under CC0 license!

and the changelog as compared to 2.2.3 can be found below.

## New features:

• Better handling of brush trace opacity, for better control.
• tools: Add script to purge stale thumbnails
• tools: A script to watch a folder for new images

## Bugfixes:

• DNG: fix camera name demangling; it used to report the wrong name for some cameras
• When running under Wayland, prefer XWayland, because native Wayland support is not fully functional yet
• EXIF: properly handle image orientations '2' and '4' (swap them)
• OpenCL: a few fixes in profiled denoise, demosaic and colormapping
• tiling: do not process uselessly small end tiles
• masks: avoid assertion failure in early phase of path generation
• masks: reduce risk of unwanted self-finalization of small path shapes
• Fix rare issue when expanding $() variables in import/export string
• Camera import: fix ignore_jpg setting not having an effect
• Picasa web exporter: unbreak after upstream API change
• collection: fix query string for folders ( 'a' should match 'a/b' and 'a/c', but not 'ac/' )

## Base Support:

• Fujifilm X-T20 (only uncompressed raw, at the moment)
• Fujifilm X100F (only uncompressed raw, at the moment)
• Nikon COOLPIX B700 (12bit-uncompressed)
• Olympus E-M1MarkII
• Panasonic DMC-TZ61 (4:3, 3:2, 1:1, 16:9)
• Panasonic DMC-ZS40 (4:3, 3:2, 1:1, 16:9)
• Sony ILCE-6500

## Noise Profiles:

• Canon PowerShot G7 X Mark II
• Olympus E-M1MarkII
• LGE Nexus 5X

Last week we had a Rust + GNOME hackfest in Mexico City (wiki page), kindly hosted by the Red Hat office there, in its very new and very cool office on the 22nd floor of a building, with a fantastic view of the northern part of the city. Allow me to recount the event briefly.

Inexplicably, in GNOME's 20 years of existence, there has never been a hackfest or event in Mexico. This was the perfect chance to remedy that and introduce people to the wonders of Mexican food.

My friend Joaquín Rosales, also from Xalapa, joined us, as he is working on a very cool Rust-based monitoring system for small-scale spirulina farms using microcontrollers.

Alberto Ruiz started getting people together around last November, with a couple of video chats with Rust maintainers to talk about making it possible to write GObject implementations in Rust. Niko Matsakis helped along with the gnarly details of making GObject's and Rust's memory management play nicely with each other.

## GObject implementations in Rust

During the hackfest, I had the privilege of sitting next to Niko to do an intensive session of pair programming, functioning as a halfway-reliable GObject reference while I fixed my non-working laptop (intermission: kids, never update all your laptop's software right before traveling.
It will not work once you reach your destination.)

The first thing was to actually derive a new class from GObject, but in Rust. In C there is a lot of boilerplate code to do this, starting with the my_object_get_type() function. Civilized C code now defines all that boilerplate with the G_DEFINE_TYPE() macro. You can see a bit of the expanded code here.

What G_DEFINE_TYPE() does is define a few functions that tell the GType system about your new class. You then write a class_init() function where you define your table of virtual methods (just function pointers in C), register the signals which your class can emit (like "clicked" for a GtkButton), and optionally define object properties (like "text" for the textual contents of a GtkEntry) and whether they are readable/writable/etc.

You also define an instance_init() function which is responsible for initializing the memory allocated to instances of your class. In C this is quite normal: you allocate some memory, and then you are responsible for initializing it. In Rust things are different: you cannot have uninitialized memory unless you jump through some unsafe hoops; you create fully-initialized objects in a single shot.

Finally, you define a finalize function which is responsible for freeing your instance's data and chaining to the finalize method in your superclass.

In principle, Rust lets you do all of this in the same way that you would in C, by calling functions in libgobject. In practice it is quite cumbersome. All the magic macros we have to define the GObject implementation boilerplate in gtype.h are there precisely because doing it in "plain C" is quite a drag. Rust makes this no different, but you can't use the C macros there.

### A GObject in Rust

The first task was to write an actual GObject-derived class in Rust by hand, just to see how it could be done. Niko took care of this. You can see this mock object here.
For example, here are some bits:

    #[repr(C)]
    pub struct Counter {
        parent: GObject,
    }

    struct CounterPrivate {
        f: Cell<u32>,
        dc: RefCell<Option<DropCounter>>,
    }

    #[repr(C)]
    pub struct CounterClass {
        parent_class: GObjectClass,
        add: Option<extern fn(&Counter, v: u32) -> u32>,
        get: Option<extern fn(&Counter) -> u32>,
        set_drop_counter: Option<extern fn(&Counter, DropCounter)>,
    }

Here, Counter and CounterClass look very similar to the GObject boilerplate you would write in C. Both structs have GObject and GObjectClass as their first fields, so when doing C casts they will have the proper size and fields within those sub-structures.

CounterPrivate is what you would declare as the private structure with the actual fields for your object. Here, we have an f: Cell<u32> field, used to hold an int which we will mutate, and a DropCounter, a utility struct which we will use to assert that our Rust objects get dropped only once from the C-like implementation of the finalize() function.

Also, note how we are declaring two virtual methods in the CounterClass struct, add() and get(). In C code that defines GObjects, that is how you can have overridable methods: by exposing them in the class vtable. Since GObject allows "abstract" methods by setting their vtable entries to NULL, we use an Option around a function pointer.

The following code is the magic that registers our new type with the GObject machinery. It is what would go in the counter_get_type() function if it were implemented in C:

    lazy_static! {
        pub static ref COUNTER_GTYPE: GType = {
            unsafe {
                gobject_sys::g_type_register_static_simple(
                    gobject_sys::g_object_get_type(),
                    b"Counter\0" as *const u8 as *const i8,
                    mem::size_of::<CounterClass>() as u32,
                    Some(CounterClass::init),
                    mem::size_of::<Counter>() as u32,
                    Some(Counter::init),
                    GTypeFlags::empty())
            }
        };
    }

If you squint a bit, this looks pretty much like the corresponding code in G_DEFINE_TYPE().
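If you don't read C or Rust, the class-struct-with-function-pointers idea can be sketched in plain Python. This is purely illustrative (no GObject machinery involved, and the names are made up); None plays the role of a NULL vtable slot:

```python
# Illustrative only: modeling a GObject-style class vtable in plain Python.

class CounterClass(object):
    def __init__(self):
        self.add = None      # slot for the add() virtual method (None = "abstract")
        self.get = None      # slot for the get() virtual method

class Counter(object):
    klass = CounterClass()   # one class struct shared by all instances

    def __init__(self):
        self.f = 0           # the "private" per-instance data

def class_init(klass):
    # Like class_init() in C: fill the vtable with concrete functions.
    def add(this, v):
        this.f += v
        return this.f
    def get(this):
        return this.f
    klass.add = add
    klass.get = get

class_init(Counter.klass)

c = Counter()
Counter.klass.add(c, 5)      # dispatch through the vtable, as C code would
```

Overriding a method in a "subclass" is then just storing a different function in the corresponding slot, which is exactly what GObject's vtables allow.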
That lazy_static!() means, "run this only once, no matter how many times it is called"; it is similar to g_once_*(). Here, gobject_sys::g_type_register_static_simple() and gobject_sys::g_object_get_type() are the direct Rust bindings to the corresponding C functions; they come from the low-level gobject-sys module in gtk-rs.

Here is the equivalent to counter_class_init():

    impl CounterClass {
        extern "C" fn init(klass: gpointer, _klass_data: gpointer) {
            unsafe {
                let g_object_class = klass as *mut GObjectClass;
                (*g_object_class).finalize = Some(Counter::finalize);

                gobject_sys::g_type_class_add_private(klass, mem::size_of::<CounterPrivate>());

                let klass = klass as *mut CounterClass;
                let klass: &mut CounterClass = &mut *klass;
                klass.add = Some(methods::add);
                klass.get = Some(methods::get);
                klass.set_drop_counter = Some(methods::set_drop_counter);
            }
        }
    }

Again, this is pretty much identical to the C implementation of a class_init() function. We even set the standard g_object_class.finalize field to point to our finalizer, written in Rust. We add a private structure with the size of our CounterPrivate...

... which we are later able to fetch like this:

    impl Counter {
        fn private(&self) -> &CounterPrivate {
            unsafe {
                let this = self as *const Counter as *mut GTypeInstance;
                let private = gobject_sys::g_type_instance_get_private(this, *COUNTER_GTYPE);
                let private = private as *const CounterPrivate;
                &*private
            }
        }
    }

I.e. we call g_type_instance_get_private(), just like C code would, to get the private structure. Then we cast it to our CounterPrivate and return that.

### But that's all boilerplate

Yeah, pretty much. But don't worry! Niko made it possible to get rid of it in a comfortable way! But first, let's look at the non-boilerplate part of our Counter object.
Here are its two interesting methods:

    mod methods {
        #[allow(unused_imports)]
        use super::{Counter, CounterPrivate, CounterClass};

        pub(super) extern fn add(this: &Counter, v: u32) -> u32 {
            let private = this.private();
            let v = private.f.get() + v;
            private.f.set(v);
            v
        }

        pub(super) extern fn get(this: &Counter) -> u32 {
            this.private().f.get()
        }
    }

These should be familiar to people who implement GObjects in C. You first get the private structure for your instance, and then frob it as needed.

### No boilerplate, please

Niko spent the following two days writing a plugin for the Rust compiler so that we can have a mini-language to write GObject implementations comfortably. Instead of all the gunk above, you can simply write this:

    extern crate gobject_gen;

    use gobject_gen::gobject_gen;
    use std::cell::Cell;

    gobject_gen! {
        class Counter {
            struct CounterPrivate {
                f: Cell<u32>
            }

            fn add(&self, x: u32) -> u32 {
                let private = self.private();
                let v = private.f.get() + x;
                private.f.set(v);
                v
            }

            fn get(&self) -> u32 {
                self.private().f.get()
            }
        }
    }

This call to gobject_gen!() gets expanded to the necessary boilerplate code. That code knows how to register the GType, how to create the class_init() and instance_init() functions, how to register the private structure and the utility private() to get it, and how to define finalize(). It will fill the vtable as appropriate with the methods you create.

We figured out that this looks pretty much like Vala, except that it generates GObjects in Rust, callable by Rust itself or by any other language once the GObject Introspection machinery around this is written. That is, just like Vala, but for Rust.

And this is pretty good! We are taking an object system in C, which we must keep around for compatibility reasons and for language bindings, and making an easy way to write objects for it in a safe, maintained language. Vala is safer than plain C, but it doesn't have all the machinery to guarantee correctness that Rust has.
Finally, Rust is definitely better maintained than Vala.

There is still a lot of work to do. We have to support registering and emitting signals, registering and notifying GObject properties, and probably some other GType arcana as well. Vala already provides nice syntax to do this, and we can probably use it with only a few changes.

Finally, the ideal situation would be for this compiler plugin, or an associated "cargo gir" step, to emit the necessary GObject Introspection information so that these GObjects can be called from other languages automatically. We could also spit out C header files to consume the Rust GObjects from C.

## And the other people in the hackfest?

I'll tell you in the next blog post!

## April 02, 2017

Stellarium 0.12.9 has been released today! The 0.12 series is the LTS branch for owners of old computers (old as in weak graphics cards), and this release contains one fix, for the Solar System Editor plugin.

## March 31, 2017

Used to be that you could see your mounted filesystems by typing mount or df. But with modern Linux kernels, all sorts of things are implemented as virtual filesystems -- proc, /run, /sys/kernel/security, /dev/shm, /run/lock, /sys/fs/cgroup -- I have no idea what most of these things are, except that they make it much more difficult to answer questions like "Where did that ebook reader mount, and did I already unmount it so it's safe to unplug it?" Neither mount nor df has a simple option to get rid of all the extraneous virtual filesystems and show only real filesystems.

http://unix.stackexchange.com/questions/177014/showing-only-interesting-mount-points-filtering-non-interesting-types had some suggestions that got me started:

    mount -t ext3,ext4,cifs,nfs,nfs4,zfs
    mount | grep -E --color=never '^(/|[[:alnum:]\.-]*:/)'

Another answer there says it's better to use findmnt --df, but that still shows all the tmpfs entries (findmnt --df | grep -v tmpfs might do the job).
And real mounts are always mounted on a filesystem path starting with /, so you can do mount | grep '^/'.

But it also turns out that mount will accept a blacklist of types as well as a whitelist: -t notype1,notype2... I prefer the idea of excluding a blacklist of filesystem types to restricting the output to a whitelist; that way if I mount something unusual like curlftpfs that I forgot to add to the whitelist, or I mount a USB stick with a filesystem type I don't use very often (ntfs?), I'll still see it.

On my system, this was the list of types I had to disable (sheesh!):

    mount -t nosysfs,nodevtmpfs,nocgroup,nomqueue,notmpfs,noproc,nopstore,nohugetlbfs,nodebugfs,nodevpts,noautofs,nosecurityfs,nofusectl

df is easier: like findmnt, it excludes most of those filesystem types to begin with, so there are only a few you need to exclude:

    df -hTx tmpfs -x devtmpfs -x rootfs

Obviously I don't want to have to type either of those commands every time I want to check my mount list. So I put this in my .zshrc. If you call mount or df with no arguments, it applies the filters; otherwise it passes your arguments through. Of course, you could make a similar alias for findmnt.

    # Mount and df are no longer useful to show mounted filesystems,
    # since they show so much irrelevant crap now.
    # Here are ways to clean them up:
    mount() {
        if [[ $# -ne 0 ]]; then
            /bin/mount $*
            return
        fi

        # Else called with no arguments: we want to list mounted filesystems.
        /bin/mount -t nosysfs,nodevtmpfs,nocgroup,nomqueue,notmpfs,noproc,nopstore,nohugetlbfs,nodebugfs,nodevpts,noautofs,nosecurityfs,nofusectl
    }

    df() {
        if [[ $# -ne 0 ]]; then
            /bin/df $*
            return
        fi

        # Else called with no arguments: we want to list mounted filesystems.
        /bin/df -hTx tmpfs -x devtmpfs -x rootfs
    }


Update: Chris X Edwards suggests lsblk or lsblk -o 'NAME,MOUNTPOINT'. It wouldn't have solved my problem because it only shows /dev devices, not virtual filesystems like sshfs, but it's still a command worth knowing about.

Initially I was going to do a more elaborate workflow tutorial, but time flies when you’re having fun on 3.24. With the release out, I’d rather publish this than let it rot. Maybe the next one!


We’re working like crazy on the next versions of Krita — 3.1.3 and 4.0. Krita 3.1.3 will be a stable bugfix release, 4.0 will have the vector work and the python scripting. This week we’ve prepared the first 3.1.3 alpha builds for testing! The final release of 3.1.3 is planned for end of April.

We’re still working on fixing more bugs for the final 3.1.3 release, so please test these builds, and if you find an issue, check whether it’s already in the bug tracker, and if not, report it!

#### Note for Windows Users

We are still struggling with Intel’s GPU drivers; recent Windows updates seem to have broken Krita’s OpenGL canvas on some systems, and since we don’t have access to a broken system, we cannot work around the issue.

#### Windows

Note for Windows users: if you encounter crashes, please follow these instructions to use the debug symbols so we can figure out where Krita crashes.

#### Linux

A snap image for the Ubuntu App Store will be available soon. You can also use the Krita Lime PPA to install Krita 3.1.3-alpha.2 on Ubuntu and derivatives.

### Source code

#### Key

The Linux AppImage and the source tarball are signed. You can retrieve the public key over https here: 0x58b9596c722ea3bd.asc. The signatures are here.

## March 30, 2017

Here’s Nathan with a piece of good news:

After months of work, I’m glad to announce that Make Professional Painterly Game Art with Krita is out! It is the first Game Art training for your favourite digital painting program.

In this course, you’ll learn:
1. The techniques professionals use to make beautiful sprites
2. How to create characters, background and even simple UI
3. How to build smart, reusable assets

With the pro and premium versions, you’ll also get the opportunity to improve your art fundamentals, become more efficient with Krita, and build a detailed game mockup for your portfolio.

The course page has free sample tutorials and the answers to all of your questions.

# GIMP is Going to LGM!

## Tall and tan and young and lovely...

This year's Libre Graphics Meeting (2017) is going to be held in the lovely city seen above, Rio de Janeiro, Brazil! This is an important meeting for so many people in the Free/Libre art community, as it's one of the only times they have an opportunity to meet face to face.

We’ve had some folks attending the past LGM’s (Leipzig and London) and it’s a wonderful opportunity to spend some time with friends. (Also, @frd from the community will be there!)

So in the spirit of camaraderie, I have a request…

The GIMP team will be in attendance this year. I happen to have a fondness for them so I’m asking anyone reading this to please head over and donate to the project.

That link is for the GNOME PayPal account, but there are other ways to donate as well.

This is one of the few times that the GIMP team gets a chance to meet in person. They use the time to hack at GIMP and to manage internal business. The time they get to spend together is invaluable to the project and by extension everyone that uses GIMP.

Just look at these faces! Surely this (Brady) Bunch of folks is worth helping to get a better GIMP?

## Attending

Besides @frd I’m not sure who else from the community might be attending, so if I’ve missed you I apologize! Please feel free to use this topic to communicate and coordinate if you’d like.

It appears that personally I’m on a biennial schedule with attending LGM - so I’m looking forward to next year to be able to catch up with everyone!

## March 27, 2017

### Could you tell us something about yourself?

My nickname is Dolly, I am 11 years old, I live in Cannock, Staffordshire, England. I am at Secondary school, and at the weekends I attend drama, dance and singing lessons, I like drawing and recently started using the Krita app.

### Do you draw on paper too, and which is more fun, paper or computer?

I draw on paper, and I like Krita more than paper art as there’s a lot more colours instantly available than when I do paper art.

### What kind of pictures do you draw?

I mostly draw my original character (called Phantom), I draw animals, trees and stars too.

### What is easy to do with Krita? What is difficult to do?

I think choosing the colour is easy, it's really good. I find getting the right brush size a little difficult due to the scrolling needed to select the brush size.

### Which thing about Krita is most fun?

The thing most fun for me is colouring in my pictures as there is a great range of colour available, far more than in my pencil case.

### Is there anything in Krita that you’d like to be different?

I think Krita is almost perfect the way it is at the moment however if the brush selection expanded automatically instead of having to scroll through it would be better for me.

### Can you show us a picture you made with Krita?

I can, I have attached some of my favourites that I have done for my friends.

### How did you make it?

I usually start with a standard base line made up of a circle for the face and the ears, I normally add the hair and the other features (eyes, noses and mouth) and finally colour and shade and include any accessories.

### Is there anything else you’d like to tell us?

I really enjoy Krita, I think it's one of the best drawing programs there is!

## March 25, 2017

As part of preparation for Everyone Does IT, I was working on a silly hack to my Python script that plays notes and chords: I wanted to use the computer keyboard like a music keyboard, and play different notes when I press different keys. Obviously, in a case like that I don't want line buffering -- I want the program to play notes as soon as I press a key, not wait until I hit Enter and then play the whole line at once. In Unix that's called "cbreak mode".

There are a few ways to do this in Python. The most straightforward way is to use the curses library, which is designed for console based user interfaces and games. But importing curses is overkill just to do key reading.

Years ago, I found a guide on the official Python Library and Extension FAQ: Python: How do I get a single keypress at a time?. I'd even used it once, for a one-off Raspberry Pi project that I didn't end up using much. I hadn't done much testing of it at the time, but trying it now, I found a big problem: it doesn't block.

Blocking is whether the read() waits for input or returns immediately. If I read a character with c = sys.stdin.read(1) but there's been no character typed yet, a non-blocking read will throw an IOError exception, while a blocking read will wait, not returning until the user types a character.

In the code on that Python FAQ page, blocking looks like it should be optional. This line:

fcntl.fcntl(fd, fcntl.F_SETFL, oldflags | os.O_NONBLOCK)

is the part that requests non-blocking reads. Skipping that should let me read characters one at a time, blocking until each character is typed. But in practice, it doesn't work. If I omit the O_NONBLOCK flag, reads never return, not even if I hit Enter; if I set O_NONBLOCK, the read immediately raises an IOError. So I have to call read() over and over, spinning the CPU at 100% while I wait for the user to type something.

The way this is supposed to work is documented in the termios man page. Part of what tcgetattr returns is something called the cc structure, which includes two members called Vmin and Vtime. man termios is very clear on how they're supposed to work: for blocking, single character reads, you set Vmin to 1 (that's the number of characters you want it to batch up before returning), and Vtime to 0 (return immediately after getting that one character). But setting them in Python with tcsetattr doesn't make any difference.
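In Python, that man-page recipe would look something like this (the helper name is mine, and as I said, setting VMIN and VTIME this way made no difference for me in practice):

```python
import sys
import termios

def set_cbreak_blocking(fd):
    # What the termios man page says should give blocking, single-character
    # reads: non-canonical mode with VMIN=1 (wait for at least one character)
    # and VTIME=0 (no inter-character timeout).
    attrs = termios.tcgetattr(fd)
    attrs[3] = attrs[3] & ~termios.ICANON   # lflags: turn off canonical mode
    attrs[6][termios.VMIN] = 1              # cc array: batch up 1 character
    attrs[6][termios.VTIME] = 0             # cc array: return immediately after it
    termios.tcsetattr(fd, termios.TCSANOW, attrs)

if sys.stdin.isatty():
    set_cbreak_blocking(sys.stdin.fileno())
```

attrs[6] is the cc structure the man page talks about; Python exposes the VMIN and VTIME slots in it as integers.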

(Python also has a module called tty that's supposed to simplify this stuff, and you should be able to call tty.setcbreak(fd). But that didn't work any better than termios: I suspect it just calls termios under the hood.)

But after a few hours of fiddling and googling, I realized that even if Python's termios can't block, there are other ways of blocking on input. The select system call lets you wait on any file descriptor until it has input. So I should be able to set stdin to be non-blocking, then do my own blocking by waiting for it with select.

And that worked. Here's a minimal example:

import sys, os
import termios, fcntl
import select

fd = sys.stdin.fileno()

# Save the terminal's original settings so we can restore them on exit:
oldterm = termios.tcgetattr(fd)
oldflags = fcntl.fcntl(fd, fcntl.F_GETFL)

# Turn off canonical mode and echo:
newattr = termios.tcgetattr(fd)
newattr[3] = newattr[3] & ~termios.ICANON
newattr[3] = newattr[3] & ~termios.ECHO
termios.tcsetattr(fd, termios.TCSANOW, newattr)

# Make reads non-blocking; select() will do the blocking for us:
fcntl.fcntl(fd, fcntl.F_SETFL, oldflags | os.O_NONBLOCK)

print "Type some stuff"
while True:
    # Block until stdin has input, then read one character:
    inp, outp, err = select.select([sys.stdin], [], [])
    c = sys.stdin.read(1)
    if c == 'q':
        break
    print "-", c

# Reset the terminal:
termios.tcsetattr(fd, termios.TCSAFLUSH, oldterm)
fcntl.fcntl(fd, fcntl.F_SETFL, oldflags)


A less minimal example: keyreader.py, a class to read characters, with blocking and echo optional. It also cleans up after itself on exit, though most of the time that seems to happen automatically when I exit the Python script.

# RawTherapee and Pentax Pixel Shift

## What is Pixel Shift?

Modern digital sensors (with a few exceptions) use an arrangement of RGB filters over a square grid of photosites. For a given 2×2 square of photosites, the filters are designed to allow two green values and one each of red and blue through to the photosites. These are arranged on a grid:

The pattern is known as a Bayer pattern (after its creator, Bryce Bayer of Eastman Kodak). The resulting pattern shows how each RGB value is offset into the grid.

Each of the pixel sites captures a single color. In order to produce a full color representation at each pixel, the other color values need to be interpolated from the surrounding grid. This interpolation and methods for calculating it are referred to as demosaicing. The methods for accomplishing this vary across different algorithms.
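To make "interpolated from the surrounding grid" concrete, here is a toy bilinear interpolation of just the green channel, assuming an RGGB layout. This is only a sketch; real demosaicing algorithms are far more sophisticated:

```python
import numpy as np

# Toy demosaicing sketch: bilinear interpolation of the green channel
# for an RGGB Bayer mosaic. Purely illustrative.

def green_bilinear(mosaic):
    """mosaic: HxW array of raw photosite values.
    Returns an HxW array with green estimated at every pixel."""
    h, w = mosaic.shape
    green = mosaic.copy().astype(float)
    for y in range(h):
        for x in range(w):
            # In RGGB, green sites are where row and column parity differ.
            if (y % 2) == (x % 2):
                # Red or blue site: average the 4-connected green
                # neighbours that lie inside the image.
                neighbours = [(y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)]
                vals = [mosaic[ny, nx] for ny, nx in neighbours
                        if 0 <= ny < h and 0 <= nx < w]
                green[y, x] = sum(vals) / float(len(vals))
    return green
```

The red and blue channels are filled in analogously, from sparser neighbourhoods, which is exactly where the interpolation artifacts described next come from.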

Unfortunately, this can often result in problems. There can be chromatic aliasing problems resulting in odd color fringing and roughness on edges or a loss of detail and sharpness.

### Pixel Shift

Pentax’s Pixel Shift (available on the K-1, K-3 II, KP and K-70) attempts to alleviate some of these problems through a novel approach: capturing four images quickly in succession and moving the entire camera sensor by a single pixel for each shot. This has the effect of capturing a full RGB value at each pixel location:

This means a full RGB value for a pixel location can be created without having to interpolate from neighboring values.
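As a rough illustration of the idea (this is not RawTherapee's code, and both the RGGB pattern and the shift order here are assumptions for the sketch), four shifted Bayer frames can be combined into a full RGB value per pixel like this:

```python
import numpy as np

# Hypothetical sketch of pixel-shift combining. Assumes an RGGB pattern
# and a shift order of (0,0), (0,1), (1,1), (1,0); a real camera's shift
# order may differ.

SHIFTS = [(0, 0), (0, 1), (1, 1), (1, 0)]

def cfa_color(y, x):
    """Color seen at photosite (y, x) in an RGGB mosaic: 0=R, 1=G, 2=B."""
    if y % 2 == 0:
        return 0 if x % 2 == 0 else 1
    return 1 if x % 2 == 0 else 2

def combine_pixel_shift(frames):
    """frames: four HxW mosaics, one per shift. Returns an HxWx3 RGB image."""
    h, w = frames[0].shape
    rgb = np.zeros((h, w, 3))
    for (dy, dx), frame in zip(SHIFTS, frames):
        for y in range(h):
            for x in range(w):
                c = cfa_color(y + dy, x + dx)
                if c == 1:
                    # Each location sees green twice: average the two samples.
                    rgb[y, x, 1] += frame[y, x] / 2.0
                else:
                    rgb[y, x, c] = frame[y, x]
    return rgb
```

Across the four shifts, every pixel location is exposed through red once, blue once and green twice, so no value has to be borrowed from a neighbour.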

#### Less Noise

If you look carefully at the Bayer pattern, you’ll notice that when shifting to adjacent pixels there will always be two green values captured per pixel. The average of these green values helps to suppress noise that may have been interpolated and spread through a normal, single-shot raw file.

#### Less Moiré

Avoiding the interpolation of pixel colors from surrounding photosites helps to reduce the appearance of Moiré in the final result:

#### Increased Resolution

This method is similar in concept to what was previously seen when Olympus announced their “High Resolution” mode for the OMD E-M5mkII camera (or manually as we previously described in this blog post). In that case they combine 8 frames moved by sub-pixel amounts to increase the overall resolution. The difference here is that Olympus generates a single, combined raw file from the results, while Pixel Shift gets you access to each of the four raw files before they’re combined.

In each case, a higher resolution image can be created from the results:

#### Movement

As with most approaches for capturing multiple images and combining them, a particularly problematic area is when there are objects in motion between the frames being captured. This is a common problem when stitching panoramic photography, when creating image stacks for noise reduction, and when combining images using methods such as Pixel Shift.

Although…

## The RawTherapee Approach

Simply combining four static frames together is really trivial, and is something that all the other Pixel Shift-capable software can do without issue. The real world is not often so accommodating as a studio setup, and that is where the recent work done by @Ingo and @Ilias on RawTherapee really begins to shine.

What they’ve been working on in RawTherapee is to improve the detection of movement in a scene. There are several types of movement possible:

• Objects showing at different places in a scene such as fast moving cars.
• Partly moving objects like foliage in the wind.
• Moving objects reflecting light onto static objects in the scene
• Changing illumination conditions such as long exposures at sunset.

All of these types of movement need to be detected to avoid the artifacts they may cause in the final shot.

One of the key features of Pixel Shift movement detection in RawTherapee is that it allows you to show the movement mask, so you get feedback on which regions of the image are detected as movement and which are static. For the regions with movement, RawTherapee will use a demosaiced frame of your choice to fill them in; for the regions without movement it will use the Pixel Shift combined image, with more detail and less noise.
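A toy version of the underlying idea (not RawTherapee's actual algorithm, which is far more sophisticated and tunable): flag the pixels where the four frames disagree by more than a threshold, since a static scene should produce nearly identical values in every exposure.

```python
import numpy as np

# Toy motion-mask sketch: not RawTherapee's algorithm, just the core idea.

def motion_mask(frames, threshold=0.05):
    """frames: four aligned HxW exposures of the same channel.
    Returns a boolean HxW mask, True where motion is suspected."""
    stack = np.stack(frames)                      # shape: (4, H, W)
    spread = stack.max(axis=0) - stack.min(axis=0)
    return spread > threshold                     # large disagreement = motion
```

In the masked regions you would then fall back to a single demosaiced frame, as described above, instead of the combined result.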

The accuracy of movement detection in RawTherapee leads to much better handling of motion artifacts that works well in places where proprietary solutions fall short. For most cases the Automatic motion correction mode works well, but you can also fine tune the parameters in custom mode to correctly detect motion in high ISO shots.

Besides being the only option (barring dcrawps possibly) to process Pixel Shift files in Linux, RawTherapee has some other neat options that aren’t found in other solutions. One of them is the ability to export the actual movement mask separate from the image. This will let users generate separate outputs from RT, and to combine them later using the movement mask. Another option is the ability to choose which of the other frames to use for filling in the movement areas on the image.

## Pixel Shift Support in Other Software

Pentax’s own Digital Camera Utility (a rebranded version of SilkyPix) naturally supports Pixel Shift, but as with most vendor-bundled software it can be slow, unwieldy, and a little buggy sometimes. Having said that, the results do look good, and at least the “Motion Correction” is able to be utilized with this software.

Adobe Camera Raw (ACR) got support for Pixel Shift files in version 9.5.1 (but doesn’t utilize the “Motion Correction”). In fact, ACR didn’t have support at the time that DPReview.com looked at the feature last year, causing them to retract the article and re-post when they had a chance to use a version of ACR with support.

A recent look at Pixel Shift processing over at DPReview.com showed some interesting results.

We’re going to look at some 100% crops from that article and compare them to the results available using RawTherapee (the latest development version, to be released as 5.1 in April). The RawTherapee versions were set to the most neutral settings with only an exposure adjustment to match other samples better.

Looking first at an area of foliage with motion, the places where there are issues becomes apparent.

For reference, here is the Adobe Camera Raw (ACR) version of a single frame from a Pixel Shift file:

The results with Pixel Shift on, and motion correction on, from straight-out-of-camera (SOOC), Adobe Camera Raw (ACR), SilkyPix, and RawTherapee (RT) are decidedly mixed. In all but the RT version, there’s a very clear problem with effective blending and masking of the frames in areas with motion:

Things look much worse for Adobe Camera Raw when looking at high-motion areas like the water spray at the foot of the waterfall, though SilkyPix does a much better job here.

The ACR version of a single frame for reference:

Both the SOOC and SilkyPix versions handle all of the movement well here. RawTherapee also does a great job blending the frames despite all of the movement. Adobe Camera Raw is not doing well at all…

Finally, in a frame full of movement, such as the surface of the water.

The ACR version of a single frame for reference:

In a frame full of movement the SOOC, ACR, and SilkyPix processing all struggle to combine a clean set of frames. They exhibit a pixel pattern from the processing, and the ACR version begins to introduce odd colors:

As mentioned earlier, a unique feature of RawTherapee is the ability to show the motion mask. Here is an example of the motion mask for this image:

Also worth mentioning is the “Smooth Transitions” feature in RawTherapee. When there are regions with and without motion, the regions with motion are masked and filled in with data from a demosaiced frame of your choice. The other regions are taken from the Pixel Shift combined image. This can occasionally lead to harsh transitions between the two.

For instance, a transition as processed in SilkyPix:

RawTherapee’s “Smooth Transitions” feature does a much better job handling the transition:
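To illustrate the idea behind this masked blending, here is a minimal numpy sketch. This is not RawTherapee's actual code; the function name, parameters, and the box-blur feathering are assumptions made for illustration:

```python
import numpy as np

def blend_with_motion_mask(pixelshift, fallback, mask, feather=2):
    """Blend a Pixel Shift composite with a single demosaiced frame.

    pixelshift, fallback: float arrays of shape (H, W), one channel.
    mask: boolean array, True where motion was detected.
    feather: half-width of the smoothing window used to soften the
    mask edges, so the hand-off between the two sources is gradual.
    """
    m = mask.astype(float)
    # Box-blur the binary mask along both axes to get soft weights.
    k = 2 * feather + 1
    kernel = np.ones(k) / k
    for axis in (0, 1):
        m = np.apply_along_axis(
            lambda row: np.convolve(row, kernel, mode="same"), axis, m)
    m = np.clip(m, 0.0, 1.0)
    # Motion areas come from the fallback frame, static areas from
    # the combined Pixel Shift image; mask edges are a weighted mix.
    return m * fallback + (1.0 - m) * pixelshift
```

A hard binary mask would produce exactly the kind of abrupt seams shown in the SilkyPix crop; feathering the mask is one simple way to get the smoother hand-off.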

### In Conclusion

In another example of the power and community of Free/Libre and Open Source Software, we have a great enhancement to a project based on feedback and input from its users. In this case, it all started with a post on the RawTherapee forums.

Thanks to the hard work of @Ingo and @Ilias, Pentax shooters now have Pixel Shift-capable software that is not only FLOSS but also produces better results than the proprietary solutions!

Not so coincidentally, community member @nosle gave permission for everyone to try processing one of his PS files on the Play pixelshift thread. If you’d like to practice, consider heading over to get his file and feedback from others!

Pixel Shift is currently in the development branch of RawTherapee and is slated for release with version 5.1.

## March 22, 2017

Blender is a true community effort. It’s an open public project where everyone’s welcome to contribute. In the past year, a growing number of corporations started to contribute to Blender as well.

We’d like to credit the companies who are helping out to make Blender 2.8 happen.

# Tangent Animation

This animation studio released Ozzy last year, a feature film entirely made with Blender. They currently have 2 new films in production. The facility has two departments (Toronto, Winnipeg) and is growing to 150 people in 2017. They exclusively use Blender for 3D.

Since October 2016, Tangent has supported two Blender Institute developers full time to work on the 2.8 viewport. They have also hired their own Cycles developer team, who will be contributing openly.

# Nimble Collective

Nimble Collective was founded by former Dreamworks animators. Their goal is to give artists access to a complete studio pipeline, accessible online by just using your browser.

Since their launch in 2016, Nimble Collective has invested seriously in integrating Blender into their platform. They currently fund one full-time developer position at the Blender Institute to work on animation tools (dependency graph) and pipelines (Alembic).

# AMD

AMD is developing a prominent open source strategy, leading the way for FOSS graphics card drivers and the new open graphics standard Vulkan.
Since summer 2016, AMD has supported a developer working on modernizing Blender's OpenGL code, and a developer working on Cycles OpenCL (GPU) rendering.

# Aleph Objects

Aleph Objects is the manufacturer of the popular Libre Hardware Lulzbot 3D printer.

Starting this year, Aleph Objects will support the Blender Institute in hiring two people to work full time on UI and workflow topics for Blender 2.8, with the goal of delivering a release-compatible “Blender 101” plus training material for occasional 3D users.

The Blender Development Fund is an essential instrument to keep Blender alive. Blender Foundation uses the Development Fund and donations to support 2-3 full-time developer positions. Big and loyal corporate sponsors of the fund are BlenderMarket, Cambridge Medical Robotics, Valve Steam Workshop, Blend4Web, CGCookie, Effetti Digitali, Insydium, Sketchfab, Wube Software, blendFX, Machinimatrix, Pepeland and RenderStreet.

# Blender Institute

The Blender Institute studio also receives hardware donations – for example, we have had servers from Intel and Dell, and GPUs from AMD and Nvidia. Blender Institute uses Blender Cloud income, sponsoring and subsidies to support developers and artists working on free/open movies and 3D computer graphics production pipelines. BI currently employs 14 people, including BF chairman Ton Roosendaal.

Ryou is the amazing artist from Japan who made the Kiki plastic model. Thanks to Tyson Tan, we now have an interview with him!

### Can you tell us something about yourself?

I’m Ito Ryou-ichi (Ryou), a Japanese professional modeler and figure sculptor. I work for the model hobby magazine 月刊モデルグラフィックス (Model Graphics Monthly), writing columns, building guides as well as making model samples.

### When did you begin making models like this?

Building plastic models has been my hobby since I was a kid. Back then I liked building robot models from anime titles like the Gundam series. When I grew up, I once worked as a manga artist, but the job didn’t work out for me, so I became a modeler/sculptor around my 30s (in the 2000s). That said, I still love drawing pictures and manga!

### How do you design them?

Being a former manga artist, I like to articulate my figure design from a manga character design perspective. First I determine the character’s general impression, then collect information like clothing style and other stuff to match that impression. Using those references, I draw several iterations until I feel comfortable with the whole result.

Although I like human and robot characters in general, my favorite has to be kemono (Japanese style furry characters). A niche genre indeed, especially in the modeling scene — you don’t see many of those figures around. But to me, it feels like a challenge in which I can make the best use of my taste and skills.

### How do you make the prototypes? And how were they produced?

There are many ways of prototyping a figure. I have been using epoxy putty sculpting most of the time. First I make the figure’s skeleton using metallic wires, then put epoxy putty around the skeleton to make a crude shape for the body. I then use art knives and other tools to do the sculpting work, slowly making all the details according to the design arts. A trusty old “analogue approach” if you will. In contrast, I have been trying the digital approach with ZBrushCore as well. Although I’m still learning, I can now make something like a head out of it.

In the case of Kiki’s figure (and most of my figures), the final product is known as a “Garage Kit” — a box of unassembled, unpainted resin parts. The buyer builds and paints the figure themselves. To turn the prototype into a garage kit, the finished prototype must first be broken down into individual parts, making sure each has a casting-friendly shape. Silicone rubber is then used to make molds of those parts. Finally, liquid synthetic resin is poured into the molds, and the parts are removed once the resin has set. This method is called “resin casting”. Although I can cast them at home by myself, I often commission a professional workshop to do it for me. It costs more that way, but they can produce higher-quality parts in large quantities.

### How did you learn about Krita?

Some time ago I came across Tyson Tan’s character designs on Pixiv.net and immediately became a big fan of his work. His Kiki pictures caught my attention and I did some research out of curiosity, leading me to Krita. I haven’t yet learned how to use Krita, but I’ll do that eventually.

### Why did you decide to make a Kiki statuette?

Ryou: Before making Kiki, I had already collaborated with a few other artists, turning their characters into figures. Tyson has a unique way of mixing the beauty of living beings with futuristic robotic mechanisms that I really like, so I contacted him on Twitter. I picked a few characters from his creations as candidates, one of them being Kiki. Although a more “glamorous” character would have been great too, after some discussion we finally decided to make Kiki.

Tyson: During the discussions, we looked into many of my original characters, some cute, some sexy. We did realize the market prefers figures with glamorous bodies, but we really wanted to make something special. Kiki, being Krita’s mascot — the mascot of a free and open source art application — has one more layer of meaning than “just someone’s OC”. It was very courageous of Ryou to agree to a plan like that, since producing such a figure is very expensive and he would be the one to bear the monetary risk. I really admire his decision.

### Where can people order them?

The Kiki figure kit can be ordered from my personal website. I send them worldwide:  http://bmwweb3.nobody.jp/mail2.html

### Anything else you want to share with us?

I plan to collaborate with other artists in the future to make more furry figures like Kiki. I will contact the artist if I like their work, but you may also commission me to make a figure for a specific character.

I hope through making this Kiki figure I can connect with more people!

Ryou’s Personal Website: http://bmwweb3.nobody.jp/

## March 21, 2017

It’s short notice, but if you’re in Charlottetown, I’ll be giving a talk tonight at 7:30pm (March 21, 2017) at the Charlottetown UI/UX/Design Meetup about the history of silverorange and our involvement with the Firefox logo.

The Stellarium development team, after three months of development, is proud to announce the second corrective release in the 0.15.x series - version 0.15.2. This version contains fixes for a number of bugs (backported from the 1.x series) and some new additions and improvements.

We have updated the configuration file and the Solar System file, so if you have an existing Stellarium installation, we highly recommend resetting the settings when you install the new version (you can choose the relevant options in the installer).

A huge thanks to our community whose contributions help to make Stellarium better!

Full list of changes:
- Added new algorithm for DeltaT from Stephenson, Morrison and Hohenkerk (2016)
- Added new option to InfoString group
- Added orbit visualization data for asteroids
- Added calculation of extincted magnitudes of satellites
- Added new type of Solar system objects: sednoids
- Added classificator of objects into Solar System Editor plugin
- Added albedo to the infostring (planets and moons)
- Added some improvements and code cleanup in the Search Tool
- Added ISO 8601 date formatting in the Date and Time dialog (LP: #1655630)
- Added "Restore direction to initial values" in Oculars plugin (LP: #1656085)
- Added the define for GL_DOUBLE again to restore compilation on ARM.
- Added calculation and show of horizontal and vertical scales of visible field of view of CCD (LP: #1656825)
- Added binning for CCD in Oculars plugin
- Added a mean solar day (equal to Earth's day) on the Sun for educational purposes
- Added transmitting map of object info via RemoteControl
- Added int property handling with sliders
- Added a scriptable function to retrieve landscape brightness.
- Added Spout licence.txt to Windows installer script
- Added displaying solstices points (LP: #1670046)
- Added extension of objectInfoMaps per object, most useful for scripting (LP: #1670412)
- Added tentative fix for crash without network (LP: #1667703)
- Added separate storing of view direction/FoV and other settings.
- Added option to change the prediction depth of Iridium flares (Satellites plugin)
- Added tooltips for AstroCalc features
- Fixed indirect dependency to QtOpenGL by QtMultimediaWidgets (LP: #1656525)
- Fixed text encoding in installer (LP: #1652515)
- Fixed changing value of n.dot in tooltip when ephemeris type is changed (LP: #1652762)
- Fixed mistakes in DeltaT stuff
- Fixed typos in AstroCalc tool
- Fixed visual style for spinup/spindown markers
- Fixed missing cross-id of Epsilon Lyrae (LP: #1653388)
- Fixed updating the list of Solar system bodies in the AstroCalc tool when objects are added or removed
- Fixed calculation of the period for comets on elliptical orbits
- Fixed prediction of Iridium flares (LP: #1643311)
- Fixed saving visibility flag for Bookmarks button (LP: #1654164)
- Fixed refraction for Satellites (LP: #1654331)
- Fixed wrong parallax and distance for IC 59 (LP: #1655423)
- Fixed updating a text in Help window when shortcuts are changed (LP: #1656001)
- Fixed saving flags of visibility of Milky Way and Zodiacal Light (LP: #1656067)
- Fixed memory leaks
- Fixed a few reports from the Clang static analyzer
- Fixed double clicks causing crashes (LP: #1656525)
- Fixed packaging QtOpenGL in Windows/macOS packages
- Fixed handling of log lines with a missing newline character
- Fixed a bad-value crash in ArchaeoLines plugin
- Fixed an invalid escape sequence in RemoteControl plugin
- Fixed bug in Search Tool (LP: #1655055)
- Fixed taking screenshots (now done via an FBO - the solution for QOpenGLWidget)
- Fixed the button for the ArchaeoLines plugin
- Fixed calculation and rendering of the CCD frame in the Oculars plugin
- Fixed a memory leak with the spheric mirror distorter, and removed the stencil buffer from the effect FBO (we don't need it for our rendering) (LP: #1661375)
- Fixed tile-based render performance (always glClear all buffers at the start of the frame) (LP: #1661375)
- Fixed glClear alpha channel usage (glClear alpha to zero instead of one)
- Fixed Scenery3d cubemap rendering (restores rendering)
- Fixed crash, when location 'Sierra Nevada Observatory, Spain' is chosen (LP: #1662113)
- Fixed NetBSD and OpenBSD build by linking glues with Qt5::Gui.
- Fixed size of a few DSO textures (NPOT textures for ancient GPUs) (LP: #1641773)
- Fixed crash when a star catalog is missing from the middle of the list (e.g. stars4 is missing and we try zooming) (LP: #1653315)
- Fixed crash when configuring the color of the generic DSO marker (LP: #1667787)
- Fixed date limit in the AstroCalc tool (set a minimum possible date limit for the range of dates for QDateTimeEdit widgets) (LP: #1660208)
- Fixed escaping of symbols for Simbad lookup (LP: #1669088)
- Fixed DE431 mismatch (LP: #1606583)
- Fixed overbright Sun when zooming in (LP: #1421173)
- Fixed absolute magnitude calculation of the planets, their moons, and the Pluto-Charon system (LP: #1664143)
- Fixed a long-standing bug concerning centering small-fov views in equatorial mount mode (LP: #1484976)
- Fixed influencing sky luminance/eye adaptation for bright objects covered by the landscape horizon. (LP: #1138533)
- Fixed atmospheric brightening by Earth's moon when location is on other planets (LP: #1673283)
- Fixed application of DE43x DeltaT when date outside range of the selected DE43x.
- Fixed Night mode issue for binocular mode of Oculars Plugin (LP: #1673187)
- Fixed altitude computation for landscapes
- Fixed a small error in that Zodiacal light was aligned with Ecliptic J2000, not Ecliptic of date (LP: #1628765)
- Fixed Stellarium crashing in debug mode on OS X with Qt 5.7+, by clearing the GL error state after using QPainter (LP: #1628072)
- Fixed a problem with Qt timezone handling, when some IANA timezones have been renamed compared to entries in our location database (LP: #1662132)
- Allow SkyImages in all reference frames and deprecate the old explicit core.loadSkyImageAltAz()-type commands
- Updated rules for usage of custom time zones (a custom time zone may now be used at all times) (LP: #1652763)
- Updated shortcuts
- Updated rules for source package builder
- Updated URL of DSS collection
- Updated OS detection
- Updated deployment rules for Windows installer
- Updated script for building Stellarium User Guide
- Updated GUI for setting coefficients of the custom DeltaT equation
- Updated list of contributors
- Updated ArchaeoLines and Gridlines options to RemoteControl pages
- Updated Tongan sky culture
- Updated catalog of DSO
- Updated common names of DSO
- Updated star names (LP: #1664671)
- Updated Solar System Screen Saver.
- Updated Oculars plugin
- Updated Satellites plugin (start looking for Iridium flares 15 seconds before the flash)
- Updated plist data for macOS
- Updated textures of minor planets
- Updated default color scheme
- Removed code for automatically tuning star scales of the view through ocular/CCD (LP: #1656940)
- Code clean-up
- Prevent an unnecessary StelProperty change
- Changed the way the OpenGL format is set once again

Timothée Giet has finished his latest training course for Krita. In three parts, Timothée introduces the all-new animation feature in Krita. Animation was introduced in Krita 3.0 last year, and is already used by people all over the world, for fun and for real work.

Animation in Krita is meant to recreate the glory days of hand-drawn animation, with a modern twist. It’s not a Flash substitute; rather, it pairs Krita’s awesome drawing capabilities with a frame-based animation approach.

In this training course, Timothée first gives us a tour of the new animation features and panels in Krita. The second part introduces the foundation of traditional animation. The final part takes you through the production of an entire short clip, from sketching to exporting. All necessary production files are included, too!

Animate with Krita is available as a digital download and costs just €14,95 (excluding VAT in the European Union). English and French subtitles are included, as well as all project files.

## March 20, 2017

Last time I wrote about artistic constraints being useful for remaining focused and pushing yourself to the max. In the near future I plan to dive into the new constraint-based layout of gtk4, Emeus. Today I’ll briefly touch on another type of constraint, the Blender object constraint!

So what are they and how are they useful in the context of a GNOME designer? We make quite a few prototypes, and one of the ways to decide whether a behavior is clear and comprehensible is motion design, particularly transitions. And while we do not use tools directly linked to our stack, it helps to build simple rigs to lower the manual labor required for often similar motion designs, and to limit the number of mistakes that can be made. Even simple animations usually consist of many keyframes (defined, non-computed states in time). Defining relationships between objects and creating setups, “rigs”, is a way to create a sort of working model of the object we are trying to mock up.

Blender Constraints

Constraints in Blender let you define certain behaviors of objects in relation to others. They allow you to limit the movement of an object to specific ranges (a scrollbar that cannot be dragged outside of its gutter), or to convert a certain motion of one object into a different transformation of another (a slider adjusting the horizon of an image, i.e. rotating it).

The simplest method of defining a relation is through a hierarchy. An object can become a parent of another, and all children then inherit the movements/transforms of the parent. However, there are cases — like interactions of a cursor with other objects — where this relationship is only temporary. Again, constraints help here, in particular the copy location constraint, because you can define the influence strength of a constraint. Like everything in Blender, this influence can also be keyframed, so an object can follow the cursor for a while and later disengage from this tight relationship. By the way, if you ever thought you could keyframe two animations by hand so that they do not slide against each other, think again.
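The influence blending behind a copy location constraint is easy to model. The sketch below is plain Python illustrating the math, not the Blender API; the function names and the step-shaped influence curve are made up for illustration (in Blender the influence would itself be keyframed, possibly with easing):

```python
def constrained_location(own, target, influence):
    """Copy Location with partial influence: the evaluated position
    is a linear blend between the object's own (keyframed) location
    and the target's location. influence=0 ignores the target,
    influence=1 follows it exactly."""
    return tuple(o + influence * (t - o) for o, t in zip(own, target))

def animate_influence(frame, engage_frame, release_frame):
    """Toy influence curve: fully follow the target between
    engage_frame and release_frame, ignore it elsewhere."""
    return 1.0 if engage_frame <= frame <= release_frame else 0.0
```

Keyframing the influence from 0 to 1 and back is what lets an object latch onto the cursor and release it later, without the two hand-keyframed animations sliding against each other.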

Inverse transform in Blender

The GIF screencasts have been created using Peek, which is available to download as a flatpak.

Peek, a GIF screencasting app.

I've been quiet for a while, partly because I've been busy preparing for a booth at the upcoming Everyone Does IT event at PEEC, organized by LANL.

In addition to booths from quite a few LANL and community groups, they'll show the movie "CODE: Debugging the Gender Gap" in the planetarium. I checked out the movie last week (our library has it) and it's a good overview of the problem of diversity, and especially the problems women face in programming jobs.

I'll be at the Los Alamos Makers/Coder Dojo booth, where we'll be showing an assortment of Raspberry Pi and Arduino based projects. We've asked the Coder Dojo kids to come by and show off some of their projects. I'll have my RPi crittercam there (such as it is) as well as another Pi running motioneyeos, for comparison. (Motioneyeos turned out to be remarkably difficult to install and configure, and doesn't seem to do any better than my lightweight scripts at detecting motion without false positives. But it does offer streaming video, which might be nice for a booth.) I'll also be demonstrating cellular automata and the Game of Life (especially since the CODE movie uses Life as a background in quite a few scenes), music playing in Python, a couple of Arduino-driven NeoPixel LED light strings, and possibly an arm-waving penguin I built a few years ago for GetSET, if I can get it working again: the servos aren't behaving reliably, but I'm not sure yet whether it's a problem with the servos and their wiring or a power supply problem.
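Since the Game of Life will be one of the demos (and shows up as a background throughout the CODE movie), here is a minimal sketch of one generation in Python — a toy version for illustration, not the actual booth code:

```python
from collections import Counter

def life_step(cells):
    """One generation of Conway's Game of Life on a set of live
    (x, y) cells: a live cell survives with 2 or 3 live neighbours,
    a dead cell is born with exactly 3."""
    # Count how many live neighbours every candidate cell has.
    counts = Counter(
        (x + dx, y + dy)
        for x, y in cells
        for dx in (-1, 0, 1) for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0))
    return {c for c, n in counts.items()
            if n == 3 or (n == 2 and c in cells)}
```

Calling `life_step` repeatedly on a blinker, {(0,0), (1,0), (2,0)}, oscillates it between a horizontal and a vertical bar.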

The music playing script turned up an interesting Raspberry Pi problem. The Pi has a headphone output, and initially when I plugged a powered speaker into it, the program worked fine. But then later, it didn't. After much debugging, it turned out that the difference was that I'd made myself a user so I could have my normal shell environment. I'd added my user to the audio group and all the other groups the default "pi" user is in, but the Pi's pulseaudio is set up to allow audio only from users root and pi, and it ignores groups. Nobody seems to have found a way around that, but sudo apt-get purge pulseaudio solved the problem nicely.

I also hit a minor snag attempting to upgrade some of my older Raspbian installs: lightdm can't upgrade itself (`Errors were encountered while processing: lightdm`). Lots of people on the web have hit this, and nobody has found a way around it; the only solution seems to be to abandon the old installation and download a new Raspbian image.

But I think I have all my Raspbian cards installed and working now; pulseaudio is gone, music plays, the Arduino light shows run. Now to play around with servo power supplies and see if I can get my penguin's arms waving again when someone steps in front of him. Should be fun, and I can't wait to see the demos the other booths will have.

If you're in northern New Mexico, come by Everyone Does IT this Tuesday night! It's 5:30-7:30 at PEEC, the Los Alamos Nature Center, and everybody's welcome.