August 21, 2018

Adventures with NVMe, part 2

A few days ago I asked people to upload their NVMe “cns” data to the LVFS. So far, 643 people did that, and I appreciate each and every submission. I promised I’d share my results, and this is what I’ve found:

Number of vendors implementing slot 1 read-only (“s1ro”) factory fallback: zero – this was way less than I hoped. Not all is lost: the number of firmware slots in a device (“nfws”) indicates how many different versions of firmware the drive can hold, just like some wireless broadband cards. The idea is that a bad firmware flash means you can “fall back” to an old version that actually works. It was surprising how many drives didn’t have this feature because they only had one slot in total:

I also wanted to know how many firmware versions there were for a specific model (deduping by removing the capacity string in the model); the idea being that if drives with the same model string all had the same firmware version then the vendor wasn’t supplying firmware updates at all, and might be a lost cause, or have perfect firmware. Vendors don’t usually change shipped firmware on NVMe drives for no reason, and so a vendor having multiple versions of firmware for a given model could indicate a problem or enhancement important enough to re-run all the QA checks:

So, not all bad, but we can’t just assume that trying to flash a firmware is a safe thing to do for all drives. The next, much bigger problem was trying to identify which drives should be flashed with a specific firmware. You’d think this would be a simple problem, where the existing firmware version would be stored in the “fr” firmware revision string and the model name would be stored in the “mn” string. Alas, only Lenovo and Apple store a sane semver like 1.2.3; other vendors seem to encode the firmware revision using as-yet-unknown methods. Unhelpfully, the model name alone isn’t enough to identify the firmware to flash, as different drives can have different firmware for the laptop OEM without changing the mn or fr. For this I think we need to look into the elusive “vs” vendor-defined block, which was the reason I was asking for the binary dump of the CNS rather than the nvme -H or nvme -o json output. The vendor block isn’t formally defined as part of the NVMe specification and the ODM (and maybe the OEM?) can use it however they want.
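
If you want to poke at your own submission, a rough, purely illustrative C sketch of pulling those fields out of the raw blob looks something like the following. The offsets come from the NVMe Identify Controller data structure (mn at bytes 24–63, fr at 64–71, the frmw flags at byte 260, the vendor-specific area at 3072–4095), and the file path is simply the one used when collecting the data; this is not the fwupd plugin, just a reader for the dump:

    /* Illustrative only: read the "nvme id-ctrl --raw-binary" dump and
     * print the fields discussed above. */
    #include <stdio.h>

    int main(void)
    {
        unsigned char cns[4096];
        FILE *f = fopen("/tmp/id-ctrl", "rb");

        if (!f || fread(cns, 1, sizeof(cns), f) != sizeof(cns)) {
            fprintf(stderr, "failed to read CNS blob\n");
            return 1;
        }
        fclose(f);

        printf("mn:   %.40s\n", (char *)&cns[24]);  /* model name */
        printf("fr:   %.8s\n",  (char *)&cns[64]);  /* firmware revision */

        /* frmw byte: bit 0 = slot 1 read-only, bits 3:1 = number of slots */
        printf("s1ro: %u\n", cns[260] & 0x01);
        printf("nfws: %u\n", (cns[260] >> 1) & 0x07);

        /* vendor-defined block: print the start of it if it looks like ASCII */
        if (cns[3072] >= 0x20 && cns[3072] < 0x7f)
            printf("vs:   %.64s\n", (char *)&cns[3072]);
        return 0;
    }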

Only 137 out of the supplied ~650 NVMe CNS blobs contained vendor data. SK hynix drives contain an interesting-looking string of something like KX0WMJ6KS0760T6G01H0, but I have no idea how to parse that. Seagate has simply 2002. Liteon has a string like TW01345GLOH006BN05SXA04. Some Samsung drives have things like KR0N5WKK0184166K007HB0 and CN08Y4V9SSX0087702TSA0 – the same format as Toshiba CN08D5HTTBE006BEC2K1A0 but it’s weird that the blob is all ASCII – I was somewhat hoping for a packed GUID in the sea of NULs. They do have some common sub-sections, so if you know what these are please let me know!

I’ve built a fwupd plugin that should be able to update firmware on NVMe drives, but it’s 100% untested. I’m going to use the leftover donation money for the LVFS to buy various types of NVMe hardware that I can flash with different firmware images and not cry if all the data gets wiped or the device gets bricked. I’ve already emailed my contact at Samsung and fingers crossed something nice happens. I’ll do the same with Toshiba and Lenovo next week. I’ll also update this blog post next week with the latest numbers, so if you upload your data now it’s still useful.

August 20, 2018

security things in Linux v4.18

Previously: v4.17.

Linux kernel v4.18 was released last week. Here are details on some of the security things I found interesting:

allocation overflow detection helpers
One of the many ways C can be dangerous to use is that it lacks strong primitives to deal with arithmetic overflow. A developer can’t just wrap a series of calculations in a try/catch block to trap any calculations that might overflow (or underflow). Instead, C will happily wrap values back around, causing all kinds of flaws. Some time ago GCC added a set of single-operation helpers that will efficiently detect overflow, so Rasmus Villemoes suggested implementing these (with fallbacks) in the kernel. While it still requires explicit use by developers, it’s much more fool-proof than doing open-coded type-sensitive bounds checking before every calculation. As a first-use of these routines, Matthew Wilcox created wrappers for common size calculations, mainly for use during memory allocations.
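
As a rough illustration (not lifted from any particular kernel commit), the pattern with these helpers looks something like this; the function and variable names are invented for the example:

    #include <linux/overflow.h>
    #include <linux/errno.h>

    /* Illustrative only: reject a request whose size calculation would wrap. */
    static int example_reserve(size_t count, size_t elem_size)
    {
        size_t bytes;

        /* check_mul_overflow() returns true if count * elem_size overflowed */
        if (check_mul_overflow(count, elem_size, &bytes))
            return -EINVAL;

        /* ... 'bytes' is now safe to hand to an allocator ... */
        return 0;
    }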

removing open-coded multiplication from memory allocation arguments
A common flaw in the kernel is integer overflow during memory allocation size calculations. As mentioned above, C doesn’t provide much in the way of protection, so it’s on the developer to get it right. In an effort to reduce the frequency of these bugs, and inspired by a couple flaws found by Silvio Cesare, I did a first-pass sweep of the kernel to move from open-coded multiplications during memory allocations into either their 2-factor API counterparts (e.g. kmalloc(a * b, GFP...) -> kmalloc_array(a, b, GFP...)), or to use the new overflow-checking helpers (e.g. vmalloc(a * b) -> vmalloc(array_size(a, b))). There’s still lots more work to be done here, since frequently an allocation size will be calculated earlier in a variable rather than in the allocation arguments, and overflows happen in way more places than just memory allocation. Better yet would be to have exceptions raised on overflows where no wrap-around was expected (e.g. Emese Revfy’s size_overflow GCC plugin).
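
A hedged before-and-after sketch of that kind of conversion, with the struct and variable names invented for the example:

    #include <linux/slab.h>
    #include <linux/vmalloc.h>
    #include <linux/overflow.h>

    struct item { unsigned int id; };
    struct table { size_t count; struct item elems[]; };

    static void conversion_example(size_t count)
    {
        struct item *buf, *vbuf;
        struct table *tbl;

        /* was: kmalloc(count * sizeof(struct item), GFP_KERNEL) */
        buf = kmalloc_array(count, sizeof(struct item), GFP_KERNEL);

        /* vmalloc() has no 2-factor variant, so the size helper is used */
        vbuf = vmalloc(array_size(count, sizeof(struct item)));

        /* a struct with a trailing flexible array can use struct_size() */
        tbl = kzalloc(struct_size(tbl, elems, count), GFP_KERNEL);

        kfree(buf);
        vfree(vbuf);
        kfree(tbl);
    }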

Variable Length Array removals, part 2
As discussed previously, VLAs continue to get removed from the kernel. For v4.18, we continued to get help from a bunch of lovely folks: Andreas Christoforou, Antoine Tenart, Chris Wilson, Gustavo A. R. Silva, Kyle Spiers, Laura Abbott, Salvatore Mesoraca, Stephan Wahren, Thomas Gleixner, Tobin C. Harding, and Tycho Andersen. Almost all the rest of the VLA removals have been queued for v4.19, but it looks like the very last of them (deep in the crypto subsystem) won’t land until v4.20. I’m so looking forward to being able to add -Wvla globally to the kernel build so we can be free from the classes of flaws that VLAs enable, like stack exhaustion and stack guard page jumping. Eliminating VLAs also simplifies the porting work of the stackleak GCC plugin from grsecurity, since it no longer has to hook and check VLA creation.
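
For anyone unfamiliar with the term, here is a minimal, non-kernel illustration of what is being removed and the typical bounded replacement:

    #include <stddef.h>
    #include <string.h>

    /* Variable Length Array: stack usage depends on a run-time value, so a
     * huge or attacker-influenced 'n' can exhaust the stack or jump the
     * guard page. */
    void vla_version(size_t n)
    {
        unsigned char buf[n];
        memset(buf, 0, n);
    }

    /* Typical removal: bound the size and use a fixed-size array instead. */
    #define BUF_MAX 64
    void fixed_version(size_t n)
    {
        unsigned char buf[BUF_MAX];

        if (n > BUF_MAX)
            return;
        memset(buf, 0, n);
    }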

Kconfig compiler detection
While not strictly a security thing, Masahiro Yamada made giant improvements to the kernel’s Kconfig subsystem so that kernel build configuration now knows what compiler you’re using (among other things) so that configuration is no longer separate from the compiler features. For example, in the past, one could select CONFIG_CC_STACKPROTECTOR_STRONG even if the compiler didn’t support it, and later the build would fail. Or in other cases, configurations would silently down-grade to what was available, potentially leading to confusing kernel images where the compiler would change the meaning of a configuration. Going forward now, configurations that aren’t available to the compiler will simply be unselectable in Kconfig. This makes configuration much more consistent, though in some cases, it makes it harder to discover why some configuration is missing (e.g. CONFIG_GCC_PLUGINS no longer gives you a hint about needing to install the plugin development packages).

That’s it for now! Please let me know if you think I missed anything. Stay tuned for v4.19; the merge window is open. :)

© 2018, Kees Cook. This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 License.

Interview with Margarita Gadrat

Could you tell us something about yourself?

Hello! My name is Margarita Gadrat; I was born in Russia and live in France. Drawing has been my favourite activity since my childhood. After some years working as a graphic designer in different companies, I decided to follow my dream and now I’m a freelance illustrator and graphic designer.

Do you paint professionally, as a hobby artist, or both?

Both. Personal paintings are for experimenting and improving my technique. Professionally, I’m open to new interesting projects. There is still a lot to learn and this is so much fun!

What genre(s) do you work in?

I like painting nature-inspired subjects, like landscapes and cute animals, and also mysterious, dark atmospheres.

Whose work inspires you most — who are your role models as an artist?

Ketka, Ruan Jia, Hellstern, Pete Mohrbacher… I couldn’t list all of them 🙂

I love the works of classical masters too – Sargent, Turner, Ivan Shishkin, Diego Velasquez, Bosch. Aivazovsky’s sea is stunning! And the Pre-Raphaelites’ art has a magical aura.

How and when did you get to try digital painting for the first time?

10 years ago my husband gave me a Wacom tablet. After trying this tool in Photoshop, I was impressed.

What makes you choose digital over traditional painting?

The creative possibilities you have without buying oils or watercolours. No need to clean your table and materials afterwards! Also, you can easily correct details with filters and by doing Ctrl-Z 😉 You can work fast too, thanks to useful tools: selections, transform tools…

How did you find out about Krita?

My husband, who is into FOSS, told me about Krita.

What was your first impression?

Whoa, it’s so fluid and comfortable! Coming from Photoshop, I wasn’t lost with the general concepts (layers, filters, blending, masks…), but had to take time to understand how it was organized in Krita.

What do you love about Krita?

All those features to help in a work process: drawing assistants for perspective, the new reference tool where you can easily arrange your references and put them on your canvas, the freedom of the brush presets. And working with layers in the non-destructive way I love so much. The animation section is great too.

What do you think needs improvement in Krita? Is there anything that really annoys you?

Nothing that really annoys me. Krita is awesome and complete software! Maybe a couple of little things, but I don’t really use them. Like the text tool, which is now getting better and better. And I’d like to be able to move the selection shape not while selecting, but after it is selected.

What sets Krita apart from the other tools that you use?

Krita is really rich software. You can imitate traditional materials, but also experiment with blending to create original results. It allows a fast, high-quality workflow.

If you had to pick one favourite of all your work done in Krita so far, what would it be, and why?

“Lake house”

This work combines architecture and nature; it was a nice challenge working on the design of the house and the composition.

What techniques and brushes did you use in it?

I painted in Krita with greyscale values, mostly with the default round brush. The default blending brushes make for smooth values. After that, I colorized it with color layers and adjusted levels with a filter layer.

Where can people see more of your work?

DeviantArt: https://www.deviantart.com/darkmagou/gallery/
My personal site with the illustration and graphic design works (in French): https://www.margarita-gadrat.xyz/book
Dribbble: https://dribbble.com/mgadrat
Behance: https://www.behance.net/mgadrat

Anything else you’d like to share?

Thank you for Krita, it’s a wonderful program, working on all the platforms, free, open source and constantly including new features!

August 18, 2018

GIMP 2.10.6 Released

Almost four months have passed since the GIMP 2.10.0 release, and this is already the fourth version in the series, bringing you bug fixes, optimizations, and new features.

The most notable changes are listed below (see also the NEWS file).

Main changes

Vertical text layers

GIMP finally gets support for vertical text (top-to-bottom writing)! This is a particularly anticipated feature for several East-Asian writing systems, but also for anyone wishing to design fancy vertical text.

Vertical text in GIMP 2.10.6.

For this reason, GIMP provides several variants of vertical text, with mixed orientation (as is typical in East-Asian vertical writing) or upright orientation (more common for Western vertical writing), with right-to-left, as well as left-to-right columns.

Thanks to Yoshio Ono for the vertical text implementation!

New filters

Two new filters make an entrance in this release:

Little Planet

This new filter is built on top of the pre-existing gegl:stereographic-projection operation and is fine-tuned to create “little planets” from 360×180° equirectangular panorama images.

Little Planet filter in GIMP 2.10.6.
Image on canvas: Luftbild Panorama der Isar bei Ettling in Niederbayern, by Simon Waldherr (CC BY-SA 4.0).

Long Shadow

This new GEGL-based filter simplifies creating long shadows in several visual styles.

There is a handful of configurable options, all helping you cut extra steps on the way to the desired effect.

The feature was contributed by Ell.

Improved straightening in the Measure tool

A lot of people appreciated the new Horizon Straightening feature added in GIMP 2.10.4. Yet many of you wanted vertical straightening as well. This is now possible.

Vertical straightening in GIMP 2.10.6.
Image on canvas: View of the western enclosing wall of the Great Mosque of Kairouan, by Moumou82 (CC BY-SA 2.0).

In the Auto mode (the default), Straighten will snap to the smaller angle to decide between vertical and horizontal straightening. You can override this behavior by specifying explicitly which it should be.

Optimized drawable preview rendering

Most creators working on complex projects in GIMP have had bad days when there are many layers in a large image, and GIMP can’t keep up with scrolling the layers list or showing/hiding layers.

Part of the reason was that GIMP couldn’t update the user interface until it was done rendering layer previews. Ell again worked some miracles here by making most drawable previews render asynchronously.

For now, the only exception to that is layer groups. Rendering them asynchronously is still not possible, so until we deal with this too, we made it possible for you to disable rendering layer group previews completely. Head over to Preferences > Interface and tick off the respective checkbox.

Disable preview of layer groups in GIMP 2.10.6.

One more thing to mention here. For technically-minded users, the Dashboard dockable dialog (introduced in GIMP 2.10.0) now displays the number of async operations running in the Misc group.

A new localization: Marathi

GIMP was already available in 80 languages. Well, it’s 81 languages now!

A team from the North Maharashtra University, Jalgaon, worked on a Marathi translation and contributed a nearly full translation of GIMP.

Of course, we should not forget all the other translators who do wonderful work on GIMP. In this release, 13 other translations were updated: Brazilian Portuguese, Dutch, French, German, Greek, Italian, Latvian, Polish, Romanian, Russian, Slovenian, Spanish, and Swedish.

Thanks everyone!

Marathi translation in GIMP 2.10.6.

File dialog filtering simplified

A common cause of confusion in the file dialogs (opening, saving, exporting…) was the presence of two file format lists, one for displaying files with a specific extension, the other for the actual file format choice. So we streamlined this.

There is just one list available now, and it works as both the filter for displayed images and the file format selector for the image you are about to save or export.

File dialog in GIMP 2.10.6.

Additionally, a new checkbox allows you to display the full list of files, regardless of the currently chosen file format. This could be useful when you want to enforce an unusual file extension or reuse an existing file’s name by choosing it in the list and then appending your extension.

The end of DLL hell? A note to plug-in developers…

A major problem over the years, on Windows, was what developers call DLL hell. This was mostly caused either by third-party software installing libraries in system folders, or by third-party plug-ins installing themselves with shared libraries that interfered with other plug-ins.

The former had already been mostly fixed by tweaking the DLL search priority order. This release provides an additional fix by taking into account 32-bit plug-ins running on 64-bit Windows systems (WoW64 mode).

The latter has already been fixed since GIMP 2.10.0, provided you install each plug-in in its own directory (which is not compulsory yet, but will be in GIMP 3).

E.g. if you have a plug-in named myplugin.exe, please install it under plug-ins/myplugin/myplugin.exe. This way, not only will you avoid polluting other plug-ins with any libraries you bundle, but your plug-in also won’t be prevented from running by unwanted libraries. All our core plug-ins are now installed this way. Any third-party plug-ins should be as well.
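
In other words, the expected layout is roughly the following (the bundled library name is just an example):

    plug-ins/
        myplugin/
            myplugin.exe
            mylibrary.dll   (any bundled libraries stay private to the plug-in)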

Ongoing Development

Prepare for the space invasion!

Meanwhile, taking place simultaneously on the babl, GEGL, and GIMP 2.99 fronts, pippin and Mitch embarked on a project internally nicknamed the “space invasion”, the end goal of which is to simplify and improve color management in GIMP, as well as other GEGL-based projects.

Mutant goats from outer space, soon landing in GIMP.

About a year ago, babl, the library used by GIMP and GEGL to perform color conversions, gained the ability to tie arbitrary RGB color spaces to existing pixel formats. This, in turn, allowed GIMP to start using babl for performing conversions between certain classes of color profiles, instead of relying solely on the LCMS library, greatly improving performance. However, these conversions would only take place at the edges between the internal image representation used by GIMP, and the outside world; internally, the actual color profile of the image had limited effect, leading to inconsistent or incorrect results for certain image-processing operations.

The current effort seeks to change that, by having all image data carry around the information regarding its color profile internally. When properly handled by GEGL and GIMP, this allows babl to perform the right conversions at the right time, letting all image-processing operations be applied in the correct color space.

While the ongoing work toward this goal is already available in the mainline babl and GEGL versions, we are currently restricting it to the GIMP 2.99 development version (to become GIMP 3.0), but it will most likely make its way into a future GIMP 2.10.x release.

GIMP extensions

Lastly, Jehan, from the ZeMarmot project, has been working on extensions in GIMP. An extension could be anything from plug-ins to splash images, patterns, brushes, gradients… Basically anything which could be created and added by anyone. The end goal would be to allow creators of such extensions to upload them on public repositories, and for anyone to search and install them in a few clicks, with version management, updates, etc.

Extension manager in a future GIMP version.

This work is also only in the development branch for the time being, but should make it to a GIMP 2.10.x release at some point in the future as well.

Helping development

Keep in mind that pippin and Jehan are able to work on GEGL and GIMP thanks to crowdfunding and the support of the community. Every little bit helps to support their work and helps to make GIMP/GEGL even more awesome! If you have a moment, check out their support pages:

Most importantly: have fun with GIMP everyone!

Font sidecars: dream or nightmare?

At DebConf this year, I gave a talk about font-management on desktop Linux that focused, for the most part, on the importance of making font metadata accessible to users. That’s because I believe that working with fonts and managing a font collection are done primarily through the metadata — like other types of media content (audio files, video files, even still images, although most people fall back to looking at thumbnails on the latter). For example, you need to know the language support of a font, what features it offers (either OpenType or variable-font options), and you mentally filter fonts based on categories (“sans serif”, “handwriting”, “Garamond”, tags that you’ve assigned yourself, etc.).

That talk was aimed at Debian font-package maintainers, who voluntarily undertake the responsibility of all kinds of QA and testing for the fonts that end users get on their system, including whether or not the needed metadata in the font binaries is present and is correct. It doesn’t do much good to have a fancy font manager that captures foundry and designer information if those fields are empty in 100% of the fonts, after all.

At the end of the session, a question came up from the audience: what do we do about fonts that we can’t alter (even to augment metadata) from upstream? This is a surprisingly common problem. There are a lot of fonts in the Debian archive that have “if you change the font, you must change the name” licenses. Having to change the user-visible font name (which is what these licenses mean) defeats the purpose of filling in missing metadata. So someone asked whether or not a Debian package could, instead, ship a metadata “sidecar” file, much like photo editors use for camera-raw image files (which need to remain unmodified for archival purposes). Darktable, Digikam, RawTherapee, Raw Studio, and others do this.

It’s not a bad idea, of course. Anecdotally, I think a lot of desktop Linux users are unaware of some of the genuinely high-quality fonts available to them that just happen to be in older binary formats, like Adobe Utopia. The question is what format a metadata sidecar file would use.

The wonderful world of sides of cars

Photo editors use XMP, the Extensible Metadata Platform, created by Adobe. It’s XML-based. So it could be adapted for fonts; all that’s needed would be for someone to draw up a suitable XML namespace (or is it schema? Maybe both?). XML is mildly human-readable, so the file might be enlightening early-evening reading in addition to being consumable by software. But it’s not trivial, and utilizing it would require XML support in all the relevant font utilities. And implementing new support for that new namespace/schema. I’d offhandedly give it a 4 out of 10 for solving-the-problemhood.

On the other hand … almost all font-binary internals are already representable in another XML format, TTX, which is defined as part of the free-software FontTools project. There are a few metadata properties that aren’t directly included in OpenType (and, for the record, OpenType wraps both TrueType and PostScript Type 1, so it’s maximal), but a lot of those are either “derived” properties (such as is-this-a-color-font) or might be better represented at a package level (such as license details). TTX has the advantage of already being directly usable by just about every font-engineering tool here on planet Earth.

In fact, theoretically, you could ship a TTX sidecar file that a simple script could use to convert the Adobe Utopia Type-1 file into a modernized OpenType file. For licensing reasons, the package installer probably cannot do that on the user’s behalf, and the package-build tool certainly could not, because it would trigger the renaming clause. But you could leave it as a fun option for the user to run, or build it into a font manager (or a office-application extension).

Anyway, the big drawback to TTX as a sidecar is that creating hand-crafted TTX files is not easy. I know it’s been done at least once, but the people involved don’t talk about it as a personal high point. In addition, TTX isn’t built to handle non-font-binary content. You could stuff extra metadata into unclaimed high-order name table slots, but roundtripping those is problematic. But for the software-integration features, I’d call it at least an 8 out of 10 possibility.
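
For the curious, metadata in TTX is just the OpenType name table serialized as XML, so a hand-written sidecar fragment might look roughly like this (the IDs are standard name-table slots; the values are filled in for the Utopia example):

    <name>
      <namerecord nameID="8" platformID="3" platEncID="1" langID="0x409">
        Adobe Systems
      </namerecord>
      <namerecord nameID="9" platformID="3" platEncID="1" langID="0x409">
        Robert Slimbach
      </namerecord>
      <namerecord nameID="11" platformID="3" platEncID="1" langID="0x409">
        https://www.adobe.com/
      </namerecord>
    </name>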

Another existing metadata XML option is the metadata block already defined by the W3C for WOFF, the compressed web-font-delivery file format. “But wait,” you cry, leaping out of your chair in excitement as USB cables and breakfast cereal go flying, “if it’s already defined, isn’t the problem already solved?” Well, not exactly. The schema is minimal in scope: it’s intended to serve the WOFF-delivery mechanism only. So it doesn’t cover all of the properties you’d need for a desktop font database. Furthermore, the metadata block is a component of the WOFF binary, not a sidecar, so it’s not fully the right format. That said, it is possible that it could be extended (possibly with the W3C on board). There is a per-vendor “extension” element defined. For widespread usage, you’d probably want to just define additional elements. Here again, the drawback would be lack of tool support, but it’s certainly more complete than DIY XML would ever be. 6 out of 10, easily.

There are also text-based formats that already exist and cover some of what is desired in a sidecar. For example, the FEA format (also by Adobe) is an intentionally human-writeable format geared toward writing OpenType feature definitions (think: ligature-substitution rules, but way more flexible. You can do chaining rules that match pre-expressions and post-expressions, and so on). FEA includes formal support for (as far as I can tell) all the currently used name table metadata in OpenType, and there is an “anonymous” block type you could use to embed other properties. Tool support is stellar; FEA usage is industry standard these days.

That having been said, I’m not sure how widespread support for the name table and anonymous stuff is in software. Most people use FEA to write GSUB and GPOS rules—and nothing else. Furthermore, FEA is designed to be compiled into a font, and done so when building the font from source. So going from FEA to a metadata database is a different path altogether, and the GSUB-orientation of the format means it’s kind of a hack. If the tool support is good and people agreed on how to stuff things into anonymous blocks, call it 5.5 out of 10.

Finally, there’s the metadata sidecars used by Google Fonts. These are per-font-family, and quite lightweight. The format is actually plain-text Google protobuf, although the Google Fonts repository has them all using the binary file extension, *.pb. Also, the Google Fonts docs are out of date: they reference the old approach, which used JSON. While this format is quite easy to read, write, and do something with, at the moment it is also minimal in scope: basic font and family name information, weight & style, copyright. That’s it. Certainly the extension mechanism is easy to adopt, however; this is trivial “key”:”value” stuff. And there are a lot of tools capable of reading protobuf and similar structured formats; very database-friendly. 6 out of 10 or so.
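
To make that concrete, a plain-text protobuf sidecar in this style would be on the order of the following; the field names are loosely modeled on the Google Fonts METADATA.pb files, and the foundry line at the end is a hypothetical extension:

    name: "Utopia"
    designer: "Robert Slimbach"
    license: "OTHER"
    category: "SERIF"
    fonts {
      name: "Utopia"
      style: "normal"
      weight: 400
      filename: "Utopia-Regular.otf"
      full_name: "Utopia Regular"
    }
    # hypothetical extension field for a desktop-metadata database:
    foundry: "Adobe"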

Choose a side

So where does that leave us? Easy answer: I don’t know. If I were packaging fonts today and wanting to simply record missing metadata, unsure of what would happen to it down the line, I might do it in protobuf format—for the sake of simplicity. It’s certainly feasible to imagine that protobuf metadata files could be converted to TTX or to FEA for consumption by existing tools, after all.

If I were writing a sidecar specifically to use for the Utopia example from above, as a shortcut to turn an old Type-1 font into an OpenType font with updated metadata, I would do it in TTX, since I know that would work. But that’s not ultimately the driving metadata-sidecar question as raised in the audience Q&A after my talk. That was about filling in a desktop-font-metadata database. Which, at the moment, does not exist.

IF a desktop-font-metadata plan crystallizes and IF relying on an XML format works for everyone involved, it’d certainly be less effort to rely on TTX than anything else.

To get right down to it, though, all of this metadata is meant to be the human-readable kind. It’d probably be easier to forgo the pain of writing or extending an XML namespace (however much you love the W3C) and keep the sidecar simple. If it proves useful, at least it’s simpler for other programs and libraries to read it and, perhaps, transform it into the other formats that utilities and build tools can do more interesting things with. Or vice-versa, like fontmake taking anonymous FEA blocks and plopping them into a protobuf file when building.

All that having been said, I fully expect the legions of XML fans (fxans?) to rush the stage and cheer for their favorite option….

August 17, 2018

Easy DIY Cellphone Stand

Over the years I've picked up a couple of cellphone stands as conference giveaways. A stand is a nice idea, especially if you like to read news articles during mealtime, but the stands I've tried never seem to be quite what I want. Either they're not adjustable, or they're too bulky or heavy to want to carry around all the time.

A while back, I was browsing on ebay looking for something better than the ones I have. I saw a few that looked like they might be worth trying, but then it occurred to me: I could make one pretty easily that would work better than anything I'd found for sale.

I started with plans that involved wire and a hinge -- the hinge so the two sides of the stand would fold together to fit in a purse or pocket -- and spent a few hours trying different hinge options. I wasn't satisfied, though. And then I realized: all I had to do was bend the wire into the shape I needed. Voilà -- instant lightweight adjustable cellphone stand.

And it has worked great. I've been using it for months and it's much better than any of the prefab stands I had before.

Bend a piece of wire

[Bent wire]

I don't know where this wire came from: it was in my spare-metal-parts collection. You want something a little thinner than coathanger wire, so you can bend it relatively easily; "baling wire" or "rebar wire" is probably about right.

Bend the tips around

[Tips adjusted to fit your phone's width]

Adjust the curve so it's big enough that your cellphone will fit in the crook of the wires.

Bend the back end down, and spread the two halves apart

[Bend the back end down]

Adjust so it fits your phone

[Instant cellphone stand]

Coat the finished stand with rubberized coating (available at your local hardware store in either dip or spray-on varieties) so it won't slide around on tables and won't scratch anything. The finished product is adjustable to any angle you need -- so you can adjust it based on the lighting in any room -- and you can fold the two halves together to make it easy to carry.

NVMe Firmware: I Need Your Data

In a recent Google Plus post I asked what kind of hardware was most interesting to be focusing on next. UEFI updating is now working well with a large number of vendors, and the LVFS “onboarding” process is well established now. On that topic we’ll hopefully have some more announcements soon. Anyway, back to the topic in hand: The overwhelming result from the poll was that people wanted NVMe hardware supported, so that you can trivially update the firmware of your SSD. Firmware updates for SSDs are important, as most either address data consistency issues or provide nice performance fixes.

Unfortunately there needs to be some plumbing put in place first, so don’t expect anything awesome very fast. The NVMe ecosystem is pretty new, and things like “what version number firmware am I running now” and “is this firmware OEM firmware or retail firmware” are still queried using vendor-specific extensions. I only have two devices to test with (Lenovo P50 and Dell XPS 13) and so I’m asking for some help with data collection. Primarily I’m trying to find out what NVMe hardware people are actually using, so I can approach the most popular vendors first (via the existing OEMs). I’m also going to be looking at the firmware revision string that each vendor sets to find quirks we need — for instance, Toshiba encodes MODEL VENDOR, and everyone else specifies VENDOR MODEL. Some drives contain the vendor data with a GUID, some don’t; I have no idea of the relative number or how many different formats there are. I’d also like to know how many firmware slots the average SSD has, and the percentage of drives that have a protected slot 1 firmware. This all lets us work out how safe it would be to attempt a new firmware update on specific hardware — the very last thing we want to do is brick an expensive new NVMe SSD with all your data on it.

So, here’s what I would like you to do. You don’t need to reboot or unmount any filesystems or anything like that. Just:

  1. Install nvme-cli (e.g. dnf install nvme-cli, or build it from source)
  2. Run the following command:
    sudo nvme id-ctrl --raw-binary /dev/nvme0 > /tmp/id-ctrl
    
  3. If that worked, run the following command:
    curl -F type=nvme \
        -F "machine_id="`cat /etc/machine-id` \
        -F file=@/tmp/id-ctrl \
        https://staging.fwupd.org/lvfs/upload_hwinfo

If you’re not sure if you have an NVMe drive you can check with the nvme command above. The command isn’t doing anything with the firmware; it’s just asking the NVMe drive to report what it knows about itself. It should be 100% safe; the kernel already made the same request at system startup.

We are sending your random machine ID to ensure we don’t record duplicate submissions — if that makes you unhappy for some reason just choose some other 32-character hex string. The binary file created by nvme contains the encoded model number and serial number of your drive; if this makes you uneasy, please don’t send the file.

Many thanks, and needless to say I’ll be posting some stats here when I’ve got enough submissions to be statistically valid.

August 12, 2018

Prevent a Linux/systemd System from Auto-Sleeping

About three weeks ago, a Debian (testing) update made a significant change on my system: it added a 30-minute suspend timeout. If I left the machine unattended for half an hour, it would automatically go to sleep.

What's wrong with that? you ask. Why not just let it sleep if you're going to be away from it that long?

But sometimes there's a reason to leave it running. For instance, I might want to track an ongoing discussion in IRC, and occasionally come back to check in. Or, more importantly, there might be a long-running job that doesn't require user input, like a system backup, ripping a CD, or testing a web service. None of those count as "activity" to keep the system awake: only mouse and keyboard motions count.

There are lots of pages that point to the file /etc/systemd/logind.conf, where you can find commented-out lines like

#IdleAction=ignore
#IdleActionSec=30min

The comment at the top of the file says that these are the defaults and references the logind.conf man page. Indeed, man logind.conf says that setting IdleAction=ignore should prevent anything from happening, and that setting IdleActionSec=120min should lead to a longer delay.

Alas, neither is true. This file is completely ignored as far as I can tell, and I don't know why it's there, or where the 30 minute setting is coming from.

What actually did work was in Debian's Suspend wiki page. I was skeptical since the page hasn't been updated since Debian Jessie (Stretch, the successor to Jessie, has been out for more than a year now) and the change I was seeing just happened in the last month. I was also leery because the only advice it gives is "For systems which should never attempt any type of suspension, these targets can be disabled at the systemd level". I do suspend my system, frequently; I just don't want it to happen unless I tell it to, or with a much longer timeout than 30 minutes.

But it turns out the command it suggests does work:

sudo systemctl mask sleep.target suspend.target hibernate.target hybrid-sleep.target
It doesn't disable suspending entirely: I can still suspend manually; it just disables autosuspend. So that's good enough.

Be warned: the page goes on to say:

Then run systemctl restart systemd-logind.service or reboot.

It neglects to mention that restarting systemd-logind.service will kill your X session, so don't run that command if you're in the middle of anything.

It would be nice to know where the 30-minute timeout had been coming from, so I could enable it after, say, 90 or 120 minutes. A timeout sounds like a good thing, if it's something the user can configure. But like so many systemd functions, no one who writes documentation seems to know how it actually works, and those who know aren't telling.

August 11, 2018

All font metadata, all the time

A recurring obstacle within the problem of “making fonts work better for desktop Linux users” (see previous posts for context) is how much metadata is involved and, thus, needs to be shuffled around so that it can be exposed to the user or accessed by system utilities. People hunting for a font to install need to see metadata about it in the package manager / software center; people selecting a font from their collection need to see metadata about it in the application they’re using (and within the font-selection dialogs, if not elsewhere in the app’s UI). The computing environment needs to be aware of metadata to set up the correct rendering options and to pass along the right information to other components above and below in the stack.

As it stands now, there are several components on a GNOME-based system that already do track metadata about fonts: Pango, Fontconfig, AppStream, HarfBuzz, GTK+, etc. There are also a ton of metadata fields already defined in the OpenType font format specification. Out of curiosity, I decided to dig into the various data structures and see which properties are tracked where.

I put the results into a Markdown table as a GitLab “snippet” (a.k.a., gist…) and posted it as this font metadata cross-reference chart. For comparison’s sake, I also included the internal data structures of the old Fontmatrix font manager (which a lot of folks still reference as a high-water mark) and the … shall we say “terse” … font ontology defined in Nepomuk. I basically used the OpenType spec as the reference point, although some of the metadata in that column (the left-most) is more of a “derived property” than an actual field. Like “is this font a color font” — you might determine that by looking for one of the four possible color font tables (sbix, CBDT, SVG, or COLR), but there’s not a “color included” bit, exactly.

There are just over 50 properties, which is a soft number because it includes a lot of not-quite aligned items, such as where OpenType stores a separate field for the font vendor name and the font vendor’s URL, and some properties that are overly general in one or more implementations. That, and it includes several of the derived properties mentioned just above.

You’re welcome to suggest additions. I did focus on the user-visible metadata world, which means the table excludes some system-oriented properties like the em size and baseline height. My interest is primarily in things that directly tie in to how users select fonts for use. I’m definitely not averse to pull requests to add that system stuff to the table, though. That’s why I put it on a GitLab snipgist — personally, I find Markdown tables awful to work with. But I digress.

No idea where this goes from here. There was some talk around the Fonts BoF at GUADEC this past July about whether or not GNOME ought to cache font metadata in some sort of user-local database. My recollection is that this would be helpful in several places; certainly as it stands now, for example, the awesome work that Matthias Clasen has done reengineering the GTK+ font explorer seems to require mining each font binary, so saving those results for reuse seems like a clear win. If Pango and the top-level application are also parsing the font binary just to show stuff, that’s overkill.

In my impromptu GUADEC talk about font selection, I noted that other GNOME “content objects” like audio, video, and image files all seem to get their metadata taken care of by Tracker, and posited that Tracker could take that task on for fonts as well. Most developers there didn’t seem to think this was a great fit. Some people have issues with Tracker anyway, but Tracker is also (by design) a “user-level” service, so putting font metadata into it would not solve the caching needs of the various libraries and system components.

Nevertheless, I did toy with the idea of writing up a modern Nepomuk ontology for font objects based on the grand-unified-GitLab-snipgist table (Nepomuk being the standard that Tracker uses). That may just be out of personal self-interest in teasing out structure from the table itself. It might be useful, for example, to capture more structure about which properties are related in which ways — such as where they are nested within other properties, are duplicates, or are mutually exclusive. And a lot more for which Markdown isn’t remotely useful.

August 10, 2018

Introducing Blender Benchmark

Today we present the Blender Benchmark, a new platform to collect and display the results of hardware and software performance tests. With this benchmark we aim to enable optimal comparisons between system hardware and installations, and to assist developers in tracking performance during Blender development.

The benchmark consists of two parts: a downloadable package which runs Blender and renders several production files, and the Open Data portal on blender.org, where the results will be (optionally) uploaded.

We’ve built the Blender Benchmark platform with maximum focus on transparency and privacy. We only use free and open source software (GNU GPL), the testing content is public domain (CC0), and the test results are being shared anonymized as public domain data – free for anyone to download and to process further.

We believe this is the best way to invite the Blender community to contribute the results of their performance tests, and create a world-class Open Dataset for the entire CG industry.

How does it work?

Users download the Benchmark Client and run one of the two benchmarks (‘quick’ or ‘complete’). The benchmark will gather information about the system, such as operating system, RAM, graphics cards, CPU model, as well as information about the performance of the system during the execution of the benchmark. After that, the user will be able to share the result online on the Blender Open Data platform, or to save the data locally.

In order to provide control over the data that is shared online, the benchmark result is first associated with the Blender ID of the user, and uploaded on mydata.blender.org, where the user will be able to redact and anonymize the parts containing personal information (Blender ID username and hostname). Currently this information is removed by default. No other personal information is collected.

Blender Open Data Architecture

Blender Open Data portal

In order to visualize, share and explore the data, we’ve built opendata.blender.org. The data hosted on the website is available as public domain; it is updated in near real-time after every benchmark, and it is easily processable and well documented.

While hosting Blender Benchmark results will be the initial purpose of the Open Data portal, we plan to host other data sets in the future. For example, information about Blender downloads, telemetry information, etc. Each data set published on the platform will adhere to our Open Data principles, and its collection will be clearly communicated.

Blender Open Data principles

As an initial guideline for our definition of Blender Open Data, we were inspired by the Eight Principles of Open Data. We believe that Blender Open Data should be:

  • Complete. All public data is made available. Public data is data that is not subject to valid privacy, security or privilege limitations.
  • Primary. Data is as collected at the source, with the highest possible level of granularity, not in aggregate or modified forms.
  • Timely. Data is made available as quickly as necessary to preserve the value of the data.
  • Accessible. Data is available to the widest range of users for the widest range of purposes.
  • Machine processable. Data is reasonably structured to allow automated processing.
  • Non-discriminatory. Data is available to anyone, with no requirement of registration.
  • Non-proprietary. Data is available in a format over which no entity has exclusive control.
  • License-free. Data is not subject to any copyright, patent, trademark or trade secret regulation. Reasonable privacy, security and privilege restrictions may be allowed.
  • Online and Free. Information is not meaningfully public if it is not available on the Internet at no charge, or at least no more than the marginal cost of reproduction. It should also be findable.
  • Permanent. Data should be made available at a stable Internet location indefinitely and in a stable data format for as long as possible.

We take a privacy-conscious approach when handling Benchmark data.

Timeline

  • August 10th: public beta; a test run to verify if everything works. Send feedback to devtalk.blender.org.
  • September: first official release.

Credits

The project has been developed by the team at Blender: Brecht van Lommel, Dan MacGrath, Francesco Siddi, Markus Ritberger, Pablo Vazquez, Sybren Stüvel and Sergey Sharybin. This project was commissioned by Ton Roosendaal, chairman of the Blender Foundation.

August 09, 2018

Blender at SIGGRAPH 2018

SIGGRAPH is the largest annual conference for computer graphics. Since 1999, Blender has had a presence there every year. This year’s edition is in Vancouver – 11 to 16 August.

Blender will take part in the following events:

Foundation and Community Meeting

Ton Roosendaal. Sunday 2 PM, Room Pan Pacific Hotel, Pacific Rim 2

Blender Spotlight

David Andrade & Ton Roosendaal. Sunday 3.30 PM, Room Pan Pacific Hotel, Pacific Rim 2

AMD Booth Theater

Mike Pan. Tuesday, 3-4 PM, Cycles production rendering. Wednesday 2-3 PM, EEVEE real-time 3D in Blender 2.8

August Hailstorm

We're still not getting the regular thunderstorms one would normally expect in the New Mexico monsoon season, but at least we're getting a little relief from the drought.

Last Saturday we had a fairly impressive afternoon squall. It only lasted about ten minutes but it dumped over an inch of rain and hail in that time. ("Over an inch" means our irritating new weather station stopped recording at exactly 1.0 even though we got some more rain after that, making us suspect that it has some kind of built-in "that can't be right!" filter. It reads in hundredths of an inch and it's hard to believe that we didn't even get another .01 after that.)

{Pile of hailstones on our deck} It was typical New Mexico hail -- lentil-sized, not like the baseballs we heard about in Colorado Springs a few days later that killed some zoo animals. I hear this area does occasionally get big hailstones, but it's fortunately rare.

There was enough hail on the ground to make for wintry snow scenes, and we found an enormous pile of hailstones on our back deck that persisted through the next day (that deck is always shady). Of course, the hail out in the yard disappeared in under half an hour once the New Mexico sun came out.

{Pile of hailstones on our deck} But before that, as soon as the squall ended, we went out to walk the property and take a look at the "snow" and in particular at "La Cienega" or "the swamp", our fanciful name for an area down at the bottom of the hill where water collects and there's a little willow grove. There was indeed water there -- covered with a layer of floating hail -- but on the way down we also had a new "creek" with several tributaries, areas where the torrent carved out little streambeds.

It's fun to have our own creek ... even if it's only for part of a day.

More photos: August hailstorm.

August 08, 2018

GNOME Software and automatic updates

For GNOME 3.30 we’ve enabled something that people have been asking for since at least the birth of the gnome-software project: automatically installing updates.

This of course comes with some caveats. Since it’s still not safe to auto-update packages (trust me, I triaged the hundreds of bugs), we will restrict automatic updates to Flatpaks. Although we do automatically download things like firmware updates, ostree content, and package updates, by default they’re deployed manually like before. I guess it’s important to say that the auto-update of Flatpaks is optional and can easily be turned off in the GUI, and that you’ll be notified when applications have been auto-updated and need restarting.

Another common complaint with gnome-software was that it didn’t show the same list of updates as command line tools like dnf. The internal refactoring required for auto-deploying updates also allows us to show updates that are available, but not yet downloaded. We’ll still try and auto-download them ahead of time if possible, but won’t hide them until then. This does mean that “new” updates could take some time to download in the updates panel before either the firmware update is performed or the offline update is scheduled.

This also means we can add some additional UI controlling whether updates should be downloaded and deployed automatically. This doesn’t override the existing logic regarding metered connections or available battery power, but does give the user some more control without resorting to a gsettings invocation on the command line.

Also notable for GNOME 3.30 is that we’ve switched to the new libflatpak transaction API, which both simplifies the flatpak plugin considerably, and it means we install the same runtimes and extensions as the flatpak CLI. This was another common source of frustration as anyone trying to install from a flatpakref with RuntimeRepo set will testify.

With these changes we’ve also bumped the plugin interface version, so if you have out-of-tree plugins they’ll need recompiling before they work again. After a little more polish, the new GNOME Software 3.29.90 will soon be available in Fedora Rawhide, and will thus be available in Fedora 29. If 3.30 is as popular as I think it might be, we might even backport gnome-software 3.30.1 into Fedora 28 like we did for 3.28 and Fedora 27 all those moons ago.

Comments welcome.

August 06, 2018

Please welcome Lenovo to the LVFS

I’d like to formally welcome Lenovo to the LVFS. For the last few months Peter Jones and I have been working with partners of Lenovo and the ThinkPad, ThinkStation and ThinkCentre groups inside Lenovo to get automatic firmware updates working across a huge number of different models of hardware.

Obviously, this is a big deal. Tens of thousands of people are likely to be offered a firmware update in the next few weeks, and hundreds of thousands over the next few months. Understandably we’re not just flipping a switch and opening the floodgates, so if you’ve not seen anything appear in fwupdmgr update or in GNOME Software don’t panic. Over the next couple of weeks we’ll be moving a lot of different models from the various testing and embargoed remotes to the stable remote, and so the list of supported hardware will grow. That said, we’ll only be supporting UEFI hardware produced fairly recently, so there’s no point looking for updates on your beloved T61. I also can’t comment on what other Lenovo branded hardware is going to be supported in the future as I myself don’t know.

Bringing Lenovo to the LVFS has been a lot of work. It needed changes to the low level fwupdate library, fwupd, and even the LVFS admin portal itself for various vendor-defined reasons. We’ve been working in semi-secret for a long time, and I’m sure it’s been frustrating to all involved not being able to speak openly about the grand plan. I do think Lenovo should be applauded for the work done so far due to the enormity of the task, rather than chastised about coming to the party a little late. If anyone from HP is reading this, you’re now officially late.

We’re still debugging a few remaining issues, and also working on making the update metadata better quality, so please don’t judge Lenovo (or me!) too harshly if there are initial niggles with the update process. Updating the firmware is slightly odd in that it sometimes needs to reboot a few times with some scary-sounding beeps, and on some hardware the first UEFI update you do might look less than beautiful. If you want to do the firmware update on Lenovo hardware, you’ll have a lot more success with newer versions of fwupd and fwupdate, although we should do a fairly good job of not offering the update if it’s not going to work. All our testing has been done with a fully updated Fedora 28 workstation. It of course works with SecureBoot turned on, but if you’ve enabled the BootOrder lock manually you’ll need to turn that off first.

I’d like to personally thank all the Lenovo engineers and managers I’ve worked with over the last few months. All my time has been sponsored by Red Hat, and they rightfully deserve love too.

Interview with Serhii

Could you tell us something about yourself?

Hello everybody! My name is Serhii and I’m an artist from Ukraine. I like to draw illustrations and make videos about it. Lately my favourite theme has been fantasy, but I like other interesting themes. I also work with vector graphics, but that is another story. 🙂

Do you paint professionally, as a hobby artist, or both?

Both, but it’s a little more my work. In other words, my hobby turns into my work, and I think that isn’t so bad.

What genre(s) do you work in?

As I said earlier I prefer fantasy, it’s very exciting and you can create unreal worlds and characters. With your imagination, you can do whatever you want. You can turn any plot, even the simplest, into an interesting fantasy illustration. I think it’s cool!

Whose work inspires you most — who are your role models as an artist?

Hard to tell. I like a lot of artists, and all of them have different things to teach. Someone has wonderful graphics, someone has a great composition, and somebody else is an incredible colorist. Each one has their own amazing style and vision of art.

How and when did you get to try digital painting for the first time?

It was around 2000, when my friend bought a Wacom tablet. I remember that there was a small drawing program included with it. That day, I discovered digital painting for myself.

What makes you choose digital over traditional painting?

I like digital painting because it has a lot of opportunities, for example you can use different tools and effects in one illustration. And the main thing: you have the magic combination of keys, Ctrl+Z 😀 It’s also important that you can quickly show your work to viewers from all over the world.

How did you find out about Krita?

One day I decided to change my operating system and I chose Linux. And then I needed a program for drawing, so I tried some programs and Krita was the best for me.

What was your first impression?

Before my experience with Krita I used some painting applications, so it was easy for me, because all tools and options were understandable, comfortable and useful for my art.

What do you love about Krita?

Firstly, I like that there are a lot of different brushes with the opportunity for fine-tuning them as you want. In the latest version of Krita all brushes from the standard set became the best of the best. These brushes are all that I need for my artworks. Also there are many cool themes and interface settings.

What do you think needs improvement in Krita? Is there anything that really annoys you?

I am satisfied with everything in Krita. I would like the big brushes to work a little faster, if it’s possible. Generally, the developers of Krita make this app better with each version, and I want them to continue their work in the same vein and not stop.

What sets Krita apart from the other tools that you use?

For me this is the only application for drawing and digital painting that I can use on Linux. Of course, there are GIMP and MyPaint, but they’re not really right for me, so Krita is the best app for digital painting on Linux.

If you had to pick one favourite of all your work done in Krita so far, what would it be, and why?


Now I think that my favourite illustration is “Unstoppable”, because in this picture there is a funny plot, as it seems to me. Like many of my works it is in the fantasy genre, and I like that best.

What techniques and brushes did you use in it?

I used a technique that is similar to traditional painting – one canvas without any layers. As for brushes, I created this artwork with the help of "Pencil-2" for the sketch, "Texture Big" for the basic colors, and "Basic-2 Opacity" and "Bristles-2 Flat Rough" for details.

Where can people see more of your work?

Youtube: https://www.youtube.com/user/grafikwork
Instagram: https://www.instagram.com/serg_grafikwork/
Artstation: https://www.artstation.com/serggrafik
Twitter: https://twitter.com/Serg_Grafik
Facebook: https://www.facebook.com/serg.grafik
Deviantart: https://www.deviantart.com/grafikwork

Anything else you’d like to share?

I wish the team of developers continued success with their fantastic work on this application, and I want to see more and more artists using Krita for their creativity and their artworks.

August 01, 2018

FreeCAD BIM development news - July 2018

Hi folks, This is the monthly development post about the development of BIM functionalities of FreeCAD. This month we won't have a video here, because there is quite a lot of stuff to show already, but next month I'll resume making videos, and I'll try to begin to show real projects in them too (not sure...

July 30, 2018

Krita in the Windows Store: an update

We’ve been publishing Krita in the Windows store for quite some time now. Not quite a year, but we’ve updated our Store listing almost twenty times. By far the majority of users get Krita from this website: about 30,000 downloads a week. Store downloads are only about 125 a week. Still, the income generated makes it possible for the Krita maintainer to work on Krita full-time, which would not have been possible otherwise.

That’s good, because combining a day job and working on Krita is a sure recipe for a burn-out. (Donations fund Dmitry’s work, but there aren’t enough donations to fund two people at the same time: we have about 2000 euros per month in donations.)

What’s not so good is that having an app in a Store basically means you’re a sharecropper. You do the work, and the Store allows you whatever it wants to allow you. You’re absolutely powerless. All the rules are made by the Store. And if there’s one particular rule that gets interpreted by the Store curators in a way that’s incompatible with the English language, well, we’ll have to submit to it and be obedient.

Originally, we would mention in the Store listing that people could also get Krita for free from krita.org, and where you’d be able to get Krita’s source code:

 

This, Microsoft contends, falls foul of its policy 10.8.5:

However, this is part of Policy 10.8:

Sounds clear, right? If your app includes those things, 10.8.5 applies. So, if your app doesn’t do that, it shouldn’t apply. At least, that was our naive interpretation. However, Microsoft disagrees. In a mail to the Krita maintainer they say:

And, of course, apart from 10.8 not being the case (so the "if" clause doesn't apply), it's not the Krita application that "promotes or distributes software" outside the Microsoft Store but the Store listing, and 10.8.5 doesn't say anything about that.

Now, Microsoft was certainly aware that Krita is open source software published under the GNU General Public License, and that Krita would be distributed outside the Windows Store. They actually helped us get Krita into the Windows Store to begin with, because, well, the Windows Store is still rather bare and doesn't have that much good quality content.

In any case, since we’re absolutely powerless, we’ve had to change the Store listing…

Building Firefox: Changing the App Name

In my several recent articles about building Firefox from source, I omitted one minor change I made, which will probably sound a bit silly. A self-built Firefox thinks its name is "Nightly", so, for example, the Help menu includes About Nightly.

Somehow I found that unreasonably irritating. It's not a nightly build; in fact, I hope to build it as seldom as possible, ideally only after a git pull when new versions are released. Yet Firefox shows its name in quite a few places, so you're constantly faced with that "Nightly". After all the work to build Firefox, why put up with that?

To find where it was coming from, I used my recursive grep alias which skips the obj- directory plus things like object files and metadata. This is how I define it in my .zshrc (obviously, not all of these clauses are necessary for this Firefox search), and then how I called it to try to find instances of "Nightly" in the source:

gr() {
  find . \( -type f -and -not -name '*.o' -and -not -name '*.so' -and -not -name '*.a' -and -not -name '*.pyc' -and -not -name '*.jpg' -and -not -name '*.JPG' -and -not -name '*.png' -and -not -name '*.xcf*' -and -not -name '*.gmo' -and -not -name '.intltool*' -and -not -name '*.po' -and -not -name 'po' -and -not -name '*.tar*' -and -not -name '*.zip' -or -name '.metadata' -or -name 'build' -or -name 'obj-*' -or -name '.git' -or -name '.svn' -prune \) -print0 | xargs -0 grep $* /dev/null
}

gr Nightly | grep -v '//' | grep -v '#' | grep -v isNightly  | grep test | grep -v task | fgrep -v .js | fgrep -v .cpp | grep -v mobile >grep.out

Even with all those exclusions, that still ends up printing an enormous list. But it turns out all the important hits are in the browser directory, so you can get away with running it from there rather than from the top level.

I found a bunch of likely files that all had very similar "Nightly" lines in them:

  • browser/branding/nightly/branding.nsi
  • browser/branding/nightly/configure.sh
  • browser/branding/nightly/locales/en-US/brand.dtd
  • browser/branding/nightly/locales/en-US/brand.ftl
  • browser/branding/nightly/locales/en-US/brand.properties
  • browser/branding/unofficial/configure.sh
  • browser/branding/unofficial/locales/en-US/brand.dtd
  • browser/branding/unofficial/locales/en-US/brand.properties
  • browser/branding/unofficial/locales/en-US/brand.ftl

Since I didn't know which one was relevant, I changed each of them to slightly different names, then rebuilt and checked to see which names I actually saw while running the browser.
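
A rough sketch of how that renaming could be scripted (the replacement names are made up, and this loop only touches the locale files, not configure.sh or branding.nsi):

i=0
for f in browser/branding/nightly/locales/en-US/brand.* \
         browser/branding/unofficial/locales/en-US/brand.*; do
  i=$((i + 1))
  # Give each branding file its own distinctive name, so it's obvious at
  # runtime which file the browser actually reads from.
  sed -i "s/Nightly/TestName$i/g" "$f"
done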

It turned out that browser/branding/unofficial/locales/en-US/brand.dtd is the file that controls the application name in the Help menu and in Help->About -- though the title of the About window is still "Nightly" and I haven't found what controls that.

branding/unofficial/locales/en-US/brand.ftl controls the "Nightly" references in the Edit->Preferences window.

I don't know what all the others do. There may be other instances of "Nightly" elsewhere in the app that are controlled by the other files, but I haven't seen them yet.

Past Firefox building articles: Building Firefox Quantum; Building Firefox for ALSA (non PulseAudio) Sound; Firefox Quantum: Fixing Ctrl W (or other key bindings).

July 29, 2018

How many open fonts licenses are there?

Recently I’ve been stump-speeching at the various free-software conferences I haunt on the topic of some plumbing-layer issues that affect using fonts on Linux systems.

One interesting rabbit hole in this field of stumps is the subject of font licenses. Today, we live in a harmonious world where open and libre fonts are uniformly distributed under the SIL Open Font License, and all is simple in our garden. Or so people think.

Technically speaking, of course, there have been two versions of the SIL OFL, so it’s actually possible that some of the fonts you refer to as “OFL licensed” are yours under the terms of OFL 1.0, and others under the current version, 1.1. It’s also possible that some of those fonts were published with the Reserved Font Name clause activated and some were not. So there are four possibilities, if you count how that clause alters compliance tracking.

Nevertheless, that’s still a complexity-free environment in which to roam: four license variants, demarking a known-and-knowable world like the pillars of Hercules & whatever was directly opposite the pillars of Hercules.

Five, if you count the fact that you may have fonts on your system that are published under the other major alternative, the GNU GPL with Font-Embedding Exception. Six if you include the MIT license, let’s say. Seven if you include Apache, eight or nine if you include BSD variants with a different number of clauses (ignoring for now how many, because I lose track; my point, after all, is that there are very few).

Still, that’s not that many. Ten if you include the IPA Font License. Eleven if you have X11 fonts. Twelve if you have an Artistic License font. Which exist.

That’s what we tell ourselves, anyhow. But although I didn’t mention it up front, we’re really just limiting the above list to the licenses currently included in the SPDX License List. Although SPDX is widely used as a machine-readable way to track licenses, it’s not the only place to look to see what licenses people are using for their open fonts.

If you run Debian or a Debian-based distribution, for example, then you don’t live in the small, confined world of ten-ish libre font licenses. You live, instead, in a funtastic world of broad historical context and twenty-plus-ish years of free software and, for much of that time period, there were no readymade, boiler-plate–like font licenses to select. Consequently, many people imbued with sudden inspiration rose up and boldly wrote their own licenses.

Since font packages tend to be things that reach a “completion point”, after which they get few-to-no further releases, many of these license-pioneering font packages persist to this day. As a result, their singleton licenses survive into the 21st Century as well.

A few such coelacanths belong to fonts well-known among Debian users, such as

  • The Bitstream Vera license (which also applies to DejaVu)
  • The Bitstream Charter license (which is not the same license as Vera's, although it is comparable)
  • The Liberation Fonts License
  • The ParaType Free Font License
  • The Ubuntu Font License
  • The STIX Fonts License
  • The TUG Utopia License
  • The Larabie Fonts EULA

Others might not be as well-known, but are in fighting shape nonetheless:

  • The M+ Fonts Project License
  • The Arphic Public License
  • The GUST Font License
  • The Magenta Open License
  • The Mikachan Fonts License

If you have TeX installed, you have even more options to explore, including:

  • The Day-Roman Font License
  • The Luxi Fonts License
  • The Luxi Mono Font License
  • The Adobe Euro Font License
  • The Librerias Gandhi Font License
  • The Literat Font License
  • The IBM Courier License

That might seem like enough. But license authoring is a creative endeavor, and when the muse strikes, it can be hard to ignore. Yet not everyone so inspired is truly gifted with originality, so there are several  licenses to be found that may or may not actually be doppelgangers of other licenses (apart from the name itself). If you’re a lawyer, you can probably read and decide for yourself. If you’re unwilling to set aside that much of your day and that much of your soul, you might just assume that these other licenses are all distinct:

  • The Baekmuk License
  • The Open Government Data License
  • The Hanazono Font License
  • The UmeFont License
  • The BaKoMa Fonts License
  • The Misaki Fonts License
  • The Oradano Mincho Fonts License
  • The SazanamiFont License
  • The Hershey Fonts License

But that’s about it. Although, to be honest, that’s just as far as I got digging into packages in Debian, and there are a few I never could sort out, such as the license of WINE fonts and Symbola Fonts license. Maybe I should start looking again. Oh, and I guess there are LaTeX-licensed fonts.

July 24, 2018

Rain Song

We've been in the depths of a desperate drought. Last year's monsoon season never happened, and then the winter snow season didn't happen either.

Dave and I aren't believers in tropical garden foliage that requires a lot of water; but even piñons and junipers and other native plants need some water. You know it's bad when you find yourself carrying a watering can down to the cholla and prickly pear to keep them alive.

This year, the Forest Service closed all the trails for about a month -- too much risk of one careless cigarette-smoking hiker, or at least I think that was the reason (they never really explained it) -- and most of the other trail agencies followed suit. But then in early July, the forecasts started predicting the monsoon at last. We got some cloudy afternoons, some humid days (what qualifies as humid in New Mexico, anyway -- sometimes all the way up to 40%), and the various agencies opened their trails again. Which came as a surprise, because those clouds and muggy days didn't actually include any significant rain. Apparently mere air humidity is enough to mitigate a lot of the fire risk?

Tonight the skies finally let loose. When the thunder and lightning started in earnest, a little after dinner, Dave and I went out to the patio to soak in the suddenly cool and electric air and some spectacular lightning bolts while watching the hummingbirds squabble over territory. We could see rain to the southwest, toward Albuquerque, and more rain to the east, toward the Sangres, but nothing where we were.

Then a sound began -- a distant humming/roaring, like the tires of a big truck passing on the road. "Are we hearing rain approaching?" we both asked at the same time. Since moving to New Mexico we're familiar with being able to see rain a long way away; and of course everyone has heard rain as it falls around them, either as a light pitter-patter or the louder sound from a real storm; but we'd never been able to hear the movement of a rainstorm as it gradually moved toward us.

Sure enough, the sound got louder and louder, almost unbearably loud -- and then suddenly we were inundated with giant-sized drops, blowing way in past the patio roof to where we were sitting.

I've heard of rain dances, and songs sung to bring the rain, but I didn't know it could sing back.

We ran for the door, not soon enough. But that was okay; we didn't mind getting drenched. After a drought this long, water from the sky is cause only for celebration.

The squall dumped over a third of an inch in only a few minutes. (This according to our shiny new weather station with a sensitive tipping-bucket rain gauge that measures in hundredths of an inch.) Then it eased up to a light drizzle for a while, the lightning moved farther away, and we decided it was safe to run down the trail to "La Cienega" (Spanish for swamp) at the bottom of the property and see if any water had accumulated. Sure enough! Lake La Senda (our humorous moniker for a couple of little puddles that sometimes persist as long as a couple of days) was several inches deep. Across the road, we could hear a canyon tree frog starting to sing his ratchety song -- almost more welcome than the sound of the rain itself.

As I type this, we're reading a touch over half an inch and we're down to a light drizzle. The thunder has receded but there's still plenty of lightning.

More rain! Keep it coming!

July 23, 2018

From Russia with Love


An Interview with Photographer Ilya Varivchenko

Ilya Varivchenko is a fashion and portrait photographer from Ivanovo, Russian Federation. He’s a UNIX administrator with a long-time passion for photography that has now become a second part-time job for him. Working on location and in his studio, he’s been producing a wonderful body of work specializing in portraiture, model tests, and more.

He’s a member of the community here (@viv), and he was kind enough to spare some time and answer a few questions (plus it gives me a good excuse to showcase some of his great work!).

by Ilya Varivchenko

Much of your work feels very classical in posing and light, particularly your studio portraits. What would you say are your biggest influences?

I am influenced by several classical painters and great modern photographers. Some of them are: Patrick Demarchelier, Steven Meisel and Peter Lindbergh. The general mood defines what I see around me. Russia is a very neglected but beautiful country and women around are an inexhaustible source of inspiration.

by Ilya Varivchenko

How would you describe your own style overall?

My style is certainly a classic portrait in its modern performance.

What motivates you when deciding who/how you shoot?

I usually plan shooting in advance. The range of models is rather narrow and it’s not so easy to get there. However, I am constantly looking for new faces. I choose the style and direction of a particular shooting based on my vision of the model and the current mood.

Why portraits? What about portraiture draws you to it?

I shoot portraits because people interest me. For me, photography is an instrument of knowing people and a means of communication.

by Ilya Varivchenko

If you had to pick your own favorite 3 photographs of your work, which ones would you choose and why?

It’s difficult to choose only three photographs, but maybe these:

by Ilya Varivchenko This photo was chosen by Olympus as a logo for their series of photo events in Russia 2017.
by Ilya Varivchenko This is one of my most reproducible photos. ;)
by Ilya Varivchenko This photo has a perfect mood in my opinion.

If you had to pick 3 favorite images from someone else, which ones would you choose and why?

It is very difficult to choose only three photos. The choice in any case will be incomplete, but here are the first ones that come to mind:

  1. The portrait of Heather Stewart-Whyte by Friedemann Hauss
  2. The portrait of Monica Bellucci by Chico Bialas
  3. The portrait of Nicole Kidman by Patrick Demarchelier

How do you find your models usually?

Via social media, which is the best means of finding models, but if I meet a girl I really like in the street, I can try to talk to her straight away. In fact, the problem is not finding a model, but how to turn down a request without offending a prospective model who is of no interest to me.

Do you pre-visualize and plan your shoots ahead of time usually, or is there a more organic interaction with the model and the space you’re shooting in?

It’s always good to have a plan. It is also very good to have a spare plan.

Usually I discuss some common points with the model and stylist before shooting. But these plans are more connected with the mood and the general idea of the session. So when the magic of shooting begins, usually all the plans fly to hell. ;)

by Ilya Varivchenko

Do you have a shooting assistant with you, or is normally just you and the model?

The preparatory stage of shooting often requires participation of many people: a makeup artist, a hair stylist, etc., but shooting itself goes better when only two persons are involved. This is a fairly intimate process. Just like sex. :)

On the other hand, if we do a fashion shoot on order, then the presence of the customer representatives is a must.

Many shots have a strong editorial fashion feel to them: are those works for magazine/editorial use, or were they personal works that you planned to be that way?

I take pictures for local magazines and advertising agencies sometimes. Maybe it somehow influenced my other work.

by Ilya Varivchenko by Ilya Varivchenko by Ilya Varivchenko

What do you do with the photos you shoot?

Most of my works are for personal use. However, I often print them in a large format and I've also had two solo exhibitions. Prints of my works are sold and can always be ordered. I also publish in photo magazines sometimes, but these magazines are Russian ones so they are hardly known to you.

By the way: I periodically take part in the events held by Olympus Russia, where I demonstrate my workflow.

This video shows that I use RawTherapee as a raw converter. :)

You’re shooting on Olympus gear quite a bit, are you officially affiliated with Olympus in some way?

On occasion I hold workshops as part of Olympus' marketing activities. Sometimes Olympus provides me with their products for testing, and I am expected to follow up with a review.

Is your choice to use Free Software for pragmatic reasons, or more idealistic?

The choice was dictated by purely practical considerations. I found a tool whose results I am almost completely satisfied with. Detail, for example, is outstanding, working with color grading is comfortable, black and white conversion is excellent, and much more.

The fact that the product is free and (which is more important to me) I have an opportunity to communicate with its developers is a huge plus!

For example, when the Fuji X-T20 came out and a new DCP profile needed to be added to the converter, I simply contacted the developers, shot the test target and got what I wanted.

by Ilya Varivchenko

Would you describe your workflow a bit? Which projects do you use regularly?

My workflow is quite simple:

  1. Shooting. I try to shoot in a way which will not require heavy postprocessing at all. It is much easier to set up light properly than to fix it in Photoshop later.

  2. Raw development with RawTherapee. My goal is to develop the image in a way which makes it as close to final as possible. Sometimes this is the end of my workflow. ;)

  3. Color correction (if necessary) with 3DLutCreator. In rare cases, it is more convenient to make complex color correction with the help of LUTs.

  4. Retouching with Adobe Photoshop. Nothing special. Removal of skin and hair defects, etc. Dodge and burn technique with a Wacom Intuos Pro.

by Ilya Varivchenko

Speaking of gear, what are you shooting with currently?

I have two systems now: Micro Four Thirds system from Olympus and X Series from Fujifilm. Typical setups are:

Studio: Olympus PEN-F + Panasonic G 42.5/1.7
Plein air: Olympus PEN-F + M.Zuiko 75/1.8 or FujiFilm X-T20 + Fujinon 35/1.4

Many of your images appear to make great use of natural light. For your studio lighting setup, what type of lighting gear are you using?

My studio equipment is a mix of Aurora Codis and Bowens studio lights, plus a lot of modifiers, from a large 2-meter parabolic octabox to narrow 40x150 strip boxes and so on.

by Ilya Varivchenko by Ilya Varivchenko

Is there something outside your comfort zone you wish you could try/shoot more of?

It is definitely landscape photography. And macro photography also attracts me - ants and snails are all great models in fact. :)

What is one piece of advice you would offer to another photographer?

Find in yourself what you want to share with others. Beauty is in the eye of the beholder. No beautiful models will help if you are empty inside.

by Ilya Varivchenko

I want to thank Ilya for taking the time to chat with me! Take some time to have a look through his blog and work (it’s chock full of wonderful work)!

All images copyright Ilya Varivchenko and used with permission.

Interview with Takiro Ryo

Could you tell us something about yourself?

Well, I’m not really sure what to tell about myself. My name is Takiro, I’m from Germany and I’ve liked drawing since I was a little kid, although it wasn’t very easy because I never had decent paper and pens. Art wasn’t seen as an actual thing in my family, I guess. I started to take art more seriously as a teenager, when I was 14 or 15, with traditional art. Everyone was into Mangas and Anime back then and so was I. I started out with poorly drawn fanarts of my favorite anthropomorphic characters before doing my own thing, and my style was typical Manga style until just a few years ago, when I transitioned to a more painterly style, trying to emulate the look of traditional paintings. I still do fanarts sometimes, but nowadays I mostly paint anthropomorphic and feral animals, often in magical fantasy settings inspired by video games and books.

Do you paint professionally, as a hobby artist, or both?

At the moment I’m more of a hobby artist, I sometimes do commissions for private clients but that rarely happens. I’d love to be a professional artist some day but I’m still not sure if I have what it takes because there is more to being a professional than just being good at art.

Timelapse of this painting

What genre(s) do you work in?

I love drawing pretty much anything animal and fantasy related. When I was a child I read a lot of books about Reynard the Fox and the other animals from the German fables. That influence lasts to this day, and I almost exclusively paint two-legged and four-legged anthropomorphic animals, with foxes being my most recurring characters.

Whose work inspires you most — who are your role models as an artist?

To be honest, I never really thought about that. Definitely not one of the old masters. I had a pretty bad art education (maybe this is a German thing) and probably couldn’t tell a Picasso from a Da Vinci if my life depended on it. I think, when it comes to style, the artist who inspired me most in the early days was Michaela Frech with her realistic paintings.

Later, when I discovered that there is a whole community around anthropomorphic animals, I also found Tess Garman, who is part of Blotch, and Yee Chong, to name just a few. All of them had a big influence on my style as it is now.

I had the amazing opportunity to meet Michaela Frech at an art show we both attended a year ago and later became friends with her. She helped me a lot with her professional advice and critique and enabled me to reach the skill level I now have.

As for my ideas, I get them mostly from fantasy movies and series, the novels and fables I read in my childhood, and from video games. I often like to swap the characters for animals and think about how their animal behavior and characteristics could influence the plot.

Timelapse of this painting

How and when did you get to try digital painting for the first time?

Gosh, I think it was when I was in vocational school, 16 or so years ago. We had a class in PaintShop Pro, where we learned basic photo editing. Drawing and painting weren’t really part of the class, but after a few lessons I discovered that it is fun, bought my very first graphics tablet, and made some baby steps in digital drawing and painting.

What makes you choose digital over traditional painting?

There are a few things, but I think what I like most about working digitally is that it’s very forgiving. You can just throw some paint onto the digital canvas, see what sticks, then work with it and improve on it. For a long time my workflow was more like doing a traditional painting; now I work more like a sculptor who adds or takes away some clay and gradually approximates the final result.

Another thing that made me choose digital over traditional is that you don’t need much space. As a teenager, I had only a small room and no real desk for a long time. I also didn’t have much space to store all my painting stuff.

And colors. I love the bright colors you can usually only have on a screen, which unfortunately is also a drawback when you print a painting.

How did you find out about Krita?

I just checked and it looks like I started using Krita in 2013. I’m not exactly sure how it happened, but I remember that I was frustrated with all the other tools I had tried by then: PaintShop, Corel, Photoshop, SAI, you name it. I had already almost completely transitioned to Linux, except for the painting part, and then I remembered that my computer science teacher once mentioned a tool for Linux that was not gimp. Unfortunately I couldn’t remember the name, so I typed “good painting tool for Linux -gimp” into a search engine and BAM! Krita.

What was your first impression?

I vaguely remember that it was like: “OMG, I can just draw. And brushes, it has nice brushes by default. And the example artworks are not shit.” Seriously, I looked into some other open source tools, and the artworks they used to advertise the software often looked like they were made by someone who had never held a pen, and I thought: “That doesn’t look like you can do amazing stuff in that program.” But Krita made an amazing first impression.

What do you love about Krita?

I loved and still love that you can just pick it up and start painting. Its default brushes already did a great job when I started to use it. I later found David Revoy’s brush pack, which was awesome, and I love what he did with the 4.0 brushes (although I had to bring two old friends back from 3.0). I think one of the greatest strengths of Krita today is the good choice of brushes that come with it. I still remember that it took me hours to configure some brushes in all the other tools I tried, before I could even start properly.

What do you think needs improvement in Krita? Is there anything that really annoys you?

If you had asked me back when I started using it, with version 2.7, I would have given you a page-long list. There is an occasional crash now and then, but most of the major bugs are fixed now, and often, when I find a bug and want to report it, it’s already marked as resolved and I just have to wait for the next release.

The palette docker still has some issues and is a bit dodgy. One of my favorite features is the assistant tools, especially the vanishing points. I would like to be able to group them and deactivate/move/hide whole groups. It can get very tricky if you have a lot of vanishing points in a picture.

What sets Krita apart from the other tools that you use?

For me it is the feeling that Krita is made with the artist in mind. Often when I use a tool I think: “Yeah, that’s a cool tool with nice features, but how am I supposed to use that?” This is something I often find in a lot of open source software. Programmers put a lot of amazing know-how into software that can do awesome stuff, but then the UI is terrible and the whole feel is off, and an otherwise cool piece of software is unnecessarily hard to use. I still find a few unpolished corners like that in Krita sometimes, but other than that I felt comfortable from the start. It gives you just what you need.

If you had to pick one favourite of all your work done in Krita so far, what would it be, and why?

Hm, I have a few, but I think it’s my latest, “About to Fail”, an artwork I just finished recently. It shows my character in a typical situation where he is careless and about to mess up an experiment. I like it not only because it is a typical artwork that includes a lot of things I like (fantasy, magic, animals), but also because it was a milestone in my personal development as an artist.

What techniques and brushes did you use in it?

First I did a rough sketch with the “Charcoal Rock Soft” brush, slowly working out the details until I had a grayscale image, then I kept refining details with “Bristles Wet”, a brush that originally comes from the Krita 3.0 default brush pack. After that I used “Wet Bristles Rough” to add the fluffiness of the fur. The smear brushes are my favorites and I use them mostly like you would in traditional painting; I love how they mix colors directly on the canvas. After the whole grayscale image was done I added textures, like the wood of the desk or the cracks in the wall, using the default texture brushes. I try to use the default brushes as much as I can, to concentrate on drawing and not waste too much time creating a brush that I might never use again. Sometimes I tweak them a little, of course. At last I add color with some layers set to the Color blending mode, do some more rendering after that because some colored areas have to overlap, and then it’s usually done. I have some recordings that illustrate pretty well how I work, unfortunately not for this particular artwork.

Where can people see more of your work?

I have a website but it’s pretty much a work in progress and just a pet project I rarely find time for, but I plan to make it fully functional in the future. I also have other galleries, for example on FurAffinity, which unfortunately does not allow high resolution uploads – and on InkBunny. I only post my finished works to my galleries. For sketches, WIPs and other stuff I use my twitter @Takiro_Ryo. On my YouTube channel I upload timelapsed recordings of my paintings, and some day maybe tutorials but I haven’t decided yet.

Anything else you’d like to share?

Krita is already an amazing program and I sense a bright future for it in my twitchy tail. If you’re just starting out with digital painting, you should definitely start with Krita. Save the 300 bucks Photoshop would cost you and invest it in a decent tablet instead. Once you spend some time in Krita, you probably won’t want to switch to anything else. Whereas Photoshop crushes you under a pile of functionality you never need as a painter, Krita gives you everything you need, and just that, without wanting anything in return.

I also want to thank the Krita team for being amazing and for this interview opportunity, which is my first interview as an artist ever. Keep up the good work. I will definitely keep spreading the word about Krita.

At last I want to thank Michaela Frech for her help and also my boyfriend who had to bear a lot of my art related frustrations over the years.

Best practices for diffing two online MySQL databases

We had to move our internal Red Hat Beaker instance to a new MySQL database version. We made the jump with a five-minute downtime of Beaker. One of the things we wanted to make sure of was not to lose any data.

Setup and Motivation

A database dump is about 135 GB compressed with gzip. The main database was being served by a MySQL 5.1 master/slave setup.

We discussed two possible strategies for switching to MariaDB: either a dump and load, which meant a downtime of 16 hours, or the use of an additional MariaDB slave which would be promoted to the new master. We chose the latter: a new MariaDB 10.2 slave promoted to be the new master.

We wanted to make sure that both slaves, the MySQL 5.1 one and the new MariaDB 10.2 one, were in sync, and that by promoting the MariaDB 10.2 slave to master we would not lose any data. To verify data consistency across the slaves, we diffed both databases.

Diffing

I went through a few iterations of dumping and diffing. Here are the items which worked best.

Ignore mysql-utils if you only have read access

MySQL comes with a bunch of utilities, and among them are two tools to compare databases, mysqldbcompare and mysqldiff. I tried mysqldiff first but, after studying the source code, decided against using it. The reason is that you have to grant it additional write privileges on the databases; they are arguably small, but still more than I was comfortable with.

Use the “at” utility to schedule mysqldump

The best way I found to kick off the database dumps at the same time is to use at. Starting mysqldump manually on the two databases introduces far too many noisy differences. It goes without saying that the database hosts’ clocks need to be synchronized (e.g. by using chronyd).
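
For example, scheduling the dump for the same minute on each host could look roughly like this (the time is only illustrative):

# Run on the MySQL 5.1 host; repeat on the MariaDB host with its dump command.
echo "mysqldump --single-transaction --order-by-primary --skip-extended-insert beaker | gzip > mysql.sql.gz" | at 02:00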

Dump the entire database at once

The mysqldump tool can dump each table separately, but that is not what you want. The default options, which are geared towards a dump and load, are not what you want either.

Instead I dumped MySQL with:

mysqldump --single-transaction --order-by-primary --skip-extended-insert beaker | gzip > mysql.sql.gz;

while for MariaDB I used:

mysqldump --order-by-primary --skip-extended-insert beaker | gzip > mariadb.sql.gz;

The options used aid the later diff:

  • --order-by-primary orders every dumped table’s rows consistently by their primary keys
  • --single-transaction keeps a transaction open until the dump has finished, so you get a comparable database snapshot across the two databases for the same starting point
  • --skip-extended-insert is used to get one INSERT statement per row; otherwise rows are collapsed into multi-row insert statements which are harder to compare

Compression (GZip) and shell pipes are your friend

With big databases, like the Beaker production database, you want to avoid writing anything uncompressed. Linux ships additional gzip wrappers for cat (zcat), less (zless) and so on, which will help with creating shell pipes in order to process the data.

Cut up the dump

Once you have both database dumps, cut them up into their separate tables. The purpose of this is not to sift through the dumps by eye, but rather to cater to diff. The diff tool loads the entire file into memory, and with large database dumps you will quickly see it run out of memory:

diff mysql-beaker.sql.gz mariadb-replica-beaker.sql.gz
diff: memory exhausted
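
One way to do the cutting up (a rough sketch: it keys on the "-- Table structure for table" comment that mysqldump writes before each table, and assumes an empty mysql/ directory; repeat for the MariaDB dump):

mkdir -p mysql
zcat mysql.sql.gz | awk '
  /^-- Table structure for table/ {
    if (out) close(out)
    gsub(/`/, "", $NF)            # strip the backticks around the table name
    out = "mysql/" $NF ".sql"
  }
  out { print >> out }            # everything from the header on goes to that table file
'
gzip mysql/*.sql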

While I did find a tool that can diff two large files, having unified diff output makes the data easier to compare.

Example: Using gzip and a pipe from my point above:

diff -u <(zcat mysql/table1.sql.gz) <(zcat mariadb/table1.sql.gz) > diffed/table1.diff

Now you can use your shell foo to loop over all the cut-up tables and write each diff into a separate folder, which then lets you compare them easily.
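
A sketch of such a loop (assuming the per-table files produced above):

mkdir -p diffed
for f in mysql/*.sql.gz; do
  t=$(basename "$f" .sql.gz)
  diff -u <(zcat "mysql/$t.sql.gz") <(zcat "mariadb/$t.sql.gz") > "diffed/$t.diff"
done
# Empty .diff files mean the table matched; anything non-empty deserves a look.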

July 20, 2018

Pulseaudio: the more things change, the more they stay the same

Such a classic Linux story.

For a video I'll be showing during tonight's planetarium presentation (Sextants, Stars, and Satellites: Celestial Navigation Through the Ages, for anyone in the Los Alamos area), I wanted to get HDMI audio working from my laptop, running Debian Stretch. I'd done that once before on this laptop (HDMI Presentation Setup Part I and Part II) so I had some instructions to follow; but while aplay -l showed the HDMI audio device, aplay -D plughw:0,3 didn't play anything and alsamixer and alsamixergui only showed two devices, not the long list of devices I was used to seeing.

Web searches related to Linux HDMI audio all pointed to pulseaudio, which I don't use, and I was having trouble finding anything for plain ALSA without pulse. In the old days, removing pulseaudio used to be the cure for practically every Linux audio problem. But I thought to myself, It's been a couple years since I actually tried pulse, and people have told me it's better now. And it would be a relief to have pulseaudio working so things like Firefox would Just Work. Maybe I should try installing it and see what happens.

So I ran an aptitude search pulseaudio to find the package name I'd need to install. Imagine my surprise when it turned out that it was already installed!

So I did some more web searching to find out how to talk to pulse and figure out how to enable HDMI, or un-mute it, or whatever it was I needed. But to no avail: everything I found was stuff like "In the Ubuntu audio panel, do this". The few pages I found that listed commands to run didn't help -- the commands all gave errors.

Running short on time, I reverted to the old days: aptitude purge pulseaudio. Rebooted to make sure the audio system was reset, ran alsamixergui and sure enough, there were all my normal devices, including the IEC958 device for HDMI, which was indeed muted. I unmuted it, tried the video again -- and music blasted from my TV's speakers.
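
(For the record, the same unmuting can usually be done from the command line; the exact control name varies from card to card, but it goes something like this:)

amixer -c 0 scontrols              # list the mixer controls on card 0
amixer -c 0 sset IEC958 unmute     # unmute the S/PDIF/HDMI output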

I'm sure there are machines where pulseaudio works. There are even a few people who have audio setups complicated enough to need something like pulseaudio. But in 2018, just as in 2006, aptitude purge pulseaudio is the easiest solution to a Linux sound problem.

July 18, 2018

Detail Considered Harmful

Ever since the dawn of time, we’ve been crafting pixel-perfect icons, specifically adhering to the target resolution. As we moved on, we kept with the times and added the highly detailed, high-resolution, 3D-modelled app icons that WebOS and Mac OS X introduced.

As many moons have passed since GNOME 3, it’s fair to stop and reconsider the aesthetic choices we made. We don’t actually present app icons at small resolutions anymore. Pixel perfection sounds like a great slogan, but maybe this is another area that dilutes our focus. Asking app authors to craft pixel-precise variants that nobody actually sees? Complex size lookup infrastructure that prominent applications like Blender fail to utilize properly?

Blender: Linux is the only platform with a botched app icon.

For the platform to become viable, we need to cater to app developers. Just like Flatpak aims to make it easy to distribute apps, and does it in a completely decentralized manner, we should empathize with app developers and let them design and maintain their own identity.

Seeing the clear and simple guidelines of other major platforms, and then our super complicated ones, with destructive theming mechanisms in place, makes me really question why anyone actually targets GNOME.

The irony of the previous blog post is not lost on me, as I’ve been seduced by the shading and detail of these high-res artworks. But every day it’s more obvious that we need a dramatic redesign of the app icon style. Perhaps allowing the unstable/nightlies style to be generated programmatically. Allowing a faster turnaround for keeping the style contemporary and in sync with what other platforms are doing. Right now, the dated nature of our current guidelines shows.

Time to murder our darlings…

Krita 4.1.1 Released

Today we’re releasing Krita 4.1.1, the first bug fix release for Krita 4.1.0.

  • Fix loading PyKrita when using PyQt 5.11 (patch by Antonio Rojas, thanks!) (BUG:396381)
  • Fix possible crashes with vector objects (BUG:396145)
  • Fix an issue when resizing pixel brushes in the brush editor (BUG:396136)
  • Fix loading the system language on macOS if more than one language is enabled in macOS
  • Don’t show the unimplemented color picker button in the vector object tool properties docker (BUG:389525)
  • Fix activation of the autosave time after a modify, save, modify cycle (BUG:393266)
  • Fix out-of-range lookups in the cross-channel curve filter (BUG:396244)
  • Fix an assert when pressing PageUp into the reference images layer
  • Avoid a crash when merging layers in isolated mode (BUG:395981)
  • Fix loading files with a transformation mask that uses the box transformation filter (BUG:395979)
  • Fix activating the transform tool if the Box transformation filter was selected (BUG:395979)
  • Warn the user when using an unsupported version of Windows
  • Fix a crash when hiding the last visible channel (BUG:395301)
  • Make it possible to load non-conforming GPL palettes like https://lospec.com/palette-list/endesga-16
  • Simplify display of the warp transformation grid
  • Re-add the Invert Selection menu entry (BUG:395764)
  • Use KFormat to show formatted numbers (Patch by Pino Toscano, thanks!)
  • Hide the color sliders config page
  • Don’t pick colors from fully transparent reference images (BUG:396358)
  • Fix a crash when embedding a reference image
  • Fix some problems when saving and loading reference images (BUG:396143)
  • Fix the color picker tool not working on reference images (BUG:396144)
  • Extend the panning range to include any reference images

Download

Windows

Note for Windows users: if you encounter crashes, please follow these instructions to use the debug symbols so we can figure out where Krita crashes.

Linux

(If, for some reason, Firefox thinks it needs to load this as text: to download, right-click on the link.)

When it is updated, you can also use the Krita Lime PPA to install Krita 4.1.1 on Ubuntu and derivatives. We are working on an updated snap.

OSX

Note: the touch docker, gmic-qt and python plugins are not available on OSX.

Source code

md5sum

For all downloads:

Key

The Linux appimage and the source tarball are signed. You can retrieve the public key over https here: 0x58b9596c722ea3bd.asc. The signatures are here (filenames ending in .sig).
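
Verifying a download with GnuPG looks roughly like this (the appimage filename below is only an example):

gpg --import 0x58b9596c722ea3bd.asc
gpg --verify krita-4.1.1-x86_64.appimage.sig krita-4.1.1-x86_64.appimage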

Support Krita

Krita is a free and open source project. Please consider supporting the project with donations or by buying training videos or the artbook! With your support, we can keep the core team working on Krita full-time.

July 16, 2018

LWV National Convention, 2018: Plenary Sessions

or: How Sausage is Made

I'm a big fan of the League of Women Voters. Really. State and local Leagues do amazing work. They publish and distribute those non-partisan Voter Guides you've probably seen before each election. They register new voters, and advocate for voting rights and better polling access for everybody, including minorities and poor people. They advocate on lots of other issues too, like redistricting, transparency, the influence of money in politics, and health care. I've only been involved with the League for a few years; although my grandmother was active in her local League as far back as I can remember, somehow it didn't occur to me to get involved until I moved to a small town where it was more obvious what a difference the local League made.

So, local and state Leagues are great. But after returning from my second LWV national convention, I find myself wondering how all this great work manages to come out of an organization that has got to be the most undemocratic, conniving political body I've ever been involved with.

I have separate write-ups of the caucuses and other program sessions I attended at this year's convention, for other LWV members wanting to know what they missed. But the Plenary sessions are where the national League's business is conducted, and I felt I should speak publicly about how they're run.

In case there's any confusion, this article describes my personal reactions to the convention's plenary sessions. I am speaking only for myself, not for any state or local league.

The 2018 National Convention Plenary Sessions

I didn't record details of every motion; check the Convention 2018 Daily Briefing if you care. (You might think there would be a published official record of the business conducted at the national convention; good luck on finding it.)

The theme of the convention, printed as a banner on many pages of the convention handbook, was Creating a More Perfect Democracy. It should have been: Democracy: For Everyone Else.

Friday Plenary

In case you're unfamiliar with the term (as I was), "Plenary" means full or complete, from the Latin plenus, full. A plenary session is a session of a conference which all members of all parties are to attend. It doesn't seem to imply voting, though that's how the LWVUS uses the term.

After the national anthem, the welcome by a designated local official, a talk, an opening address, acceptance of various committee reports, and so on, the tone of the convention was set with the adoption of the convention rules.

A gentleman from the Oregon state League (LWVOR) proposed a motion that would have required internal decisions to be able to be questioned as part of convention business. This would include the controversial new values statement. There had been discussion of the values statement before the convention, establishing that many people disagreed with it and wanted a vote.

LWVUS president Chris Carson wasn't having any of it. First, she insisted, the correct parliamentary way to do this was to vote to approve the rest of the rules, not including this one. That passed easily. Then she stated that the motion on the table would require a 2/3 vote, because it was an amendment to the rules which had just passed. (Never mind that she had told us we were voting to pass all the rules except that one).

The Oregon delegate who had made the motion protested that the first paragraph of the convention rules on page 27 of the handbook clearly stated that amendment of the rules only requires a simple majority. Carson responded that would have been true before the convention rules were adopted, but now that we'd voted to adopt them, it now required a 2/3 vote to amend them due to some other rule somewhere else, not in the handbook. She was adamant that the motion could not now pass with a simple majority.

The Oregon delegate was incredulous. "You mean that if I'd known you were going to do this, I should have protested voting on adopting the rules before voting on the motion?"

The room erupted in unrest. Many people wanted to speak, but after only a couple, Carson unilaterally cut off further discussion. But then, after a lot of muttering with her Parliamentarian, she announced that she would take a show-of-hands vote on whether to approve her ruling requiring the 2/3 vote. She allowed only three people to speak on that motion (the motion to accept her ruling) and then called the question herself.

The vote was fairly close but was ruled to be in favor of her ruling, meaning that the original motion would require a 2/3 vote. When we finally voted on the original motion it looked roughly equal, not 2/3 in favor -- so the motion to allow debate on the values statement failed.

(We never did find out what this mysterious other rule was that supposedly mandated the 2/3 vote. The national convention has an official Parliamentarian sitting on the podium, as well as parliamentary assistants sitting next to each microphone in the audience, but somehow there's nobody who does much of a job of keeping track of what's going on or can state the rules under which we're operating. Several times during the three days of plenary, Carson and her parliamentarian lost track of things, for instance, saying she'd hear two pro and two con comments but actually calling three pro and one con.)

I notice in the daily briefing, this whole fracas is summarized as, "The motion was defeated by a hand vote."

Officer "Elections"

With the rules adopted by railroad, we were next presented with the slate of candidates for national positions. That sounds like an election but it's not.

During discussion of the previous motion, one national board member speaking against the motion (or for Carson's 2/3 ruling, I can't remember which) said "You elected us, so you should trust us." That spawned some audience muttering, too. See, in case there's any confusion, delegates at the convention do not actually get to vote for candidates. We're presented with a complete slate of candidates chosen by the nominating committee (for whom we also do not vote), and the only option is to vote yes or no on the whole slate "by acclamation".

There is one moment where it is possible to make a nomination from the floor. If nominated, such a nominee has one minute to make her case to the delegates before the final vote. Since there's obviously no chance, there are seldom any floor nominees, and on the rare occasion someone tries, they invariably lose.

Now, I understand that it's not easy getting volunteers for leadership positions in nonprofit organizations. It's fairly common, in local organizations, that you can't fill out all the available positions and have to go begging for people to fill officer positions, so you'll very often see a slate of officers proposed all at once. But in the nationwide LWVUS? In the entire US, in the (hundreds of thousands? I can't seem to find any membership figures, though I found a history document that says there were 157,000 members in the 1960s) of LWV members nationwide, there are not enough people interested in being a national officer that there couldn't be a competitive election? Really?

Though, admittedly ... after watching the sausage being made, I'm not sure I'd want to be part of that.

Not Recommended Items

Of course, the slate of officers was approved. Then we moved on to "Not Recommended Items". How that works: in the run-up to the convention, local Leagues propose areas the National board should focus on during the upcoming two years. The National board decides what they care about, and marks the rest as "Not recommended". During the Friday plenary session, delegates can vote to reconsider these items.

I knew that because I'd gone to the Abolish the Electoral College caucus the previous evening, and that was the first of the not-recommended items proposed for consideration.

It turned out there were two similar motions: the "Abolish the Electoral College" proposal and the "Support the National Popular Vote Compact" proposal, two different approaches to eliminating the electoral college. (The NPV is achievable -- quite a few states have already signed, totalling 172 electoral votes of the 270 that would be needed to bring the compact into effect. The "Abolish" side, on the other hand, would require a Constitutional amendment which would have to be ratified even by states that currently have a big advantage due to the electoral college. Not going to happen.)

Both proposals got enough votes to move on to consideration at Saturday's plenary, though. Someone proposed that the two groups merge their proposals, and met with the groups after the session, but alas, we found out on Saturday that they never came to an agreement.

One more proposal that won consideration was one to advocate for implementation of the Equal Rights Amendment should it be ratified. A nice sentiment that everyone agreed with, and harmless since it's not likely to happen.

Friday morning "Transformation Journey" Presentation and Budget Discussion

I didn't take many notes on this, except during the presentation by the new IT manager, who made noise about reduced administrative burden for local Leagues and improving access to data for Leagues at all levels. These are laudable goals and badly needed, though he didn't go into any detail about how any of it was going to work. Since it was all vague high-level hand waving I won't bother to write up my notes (ask me if you want to see them).

The only reason I have this section here is for the sharp-eyed person who asked during the budget discussion, "What's this line item about 'mailing list rental?'"

Carson dismissed that worry -- Oh, don't worry, there are no members on that list. That's just a list of donors who aren't members.

Say what? People who donate to the LWVUS, if they aren't members, get their names on a mailing list that the League then sells? Way to treat your donors with respect.

I wish nonprofits would get a clue. There are so many charities that I'd like to donate to if I could do so without resigning myself to a flood of paper in my mailbox every day for the rest of my life. If nonprofits had half a lick of sense, they would declare "We will never give your contact info to anyone else", and offer "check this box to be excluded even from our own pleas for money more than once or twice a year." I'd be so much more willing to donate.

Saturday Plenary

The credentials committee reported: delegates present represented 762 Leagues, with 867 voting delegates from 49 states plus the District of Columbia. That's out of 1709 eligible voting delegates -- about half. Not surprising given the expense of the convention. I'm told there have been proposals in past years to change the rules to make it possible to vote without attending convention, but no luck so far.

Consideration of not-recommended items: the abolition of the electoral college failed. Advocacy for the National Popular Vote Compact passed. So the delegates agreed with me on which of the two is achievable. Too bad the Electoral Abolition people weren't willing to compromise and merge their proposal with the NPV one.

The ERA proposal passed overwhelmingly.

Rosie Rios, 43rd Treasurer of the US, gave a terrific talk on, among other things, the visibility of women on currency, in public art and in other public places, and what that means for girls growing up. I say a little more about her talk in my Caucus Summary.

We had been scheduled to go over the bylaws before Rios' talk, but that plan had been revised because there was an immigration protest (regarding the separation of children from parents) scheduled some distance north of the venue, and a lot of delegates wanted to go. So the revised plan, we'd been told Friday, was to have Rios' talk and then adjourn and discuss the bylaws on Sunday.

Machinations

What actually happened: Carson asked for a show of hands of people who wanted to go to the protest, which looked like maybe 60% of the room. She dismissed those people with well wishes.

Then she looked over the people still in the room and said, "It looks like we might still have a quorum. Let's count."

I have no idea what method they used to count the people sitting in the room, or what count they arrived at: we weren't told, and none of this is mentioned in the daily summary linked at the top of this article. But somehow she decided we still had a quorum, and announced that we would begin discussion of the bylaws.

The room erupted in angry murmurs -- she had clearly stated before dismissing the other delegates that we were done for the day and would not be discussing the bylaws until Sunday.

"It's appalling", one of our delegation, a first-timer, murmured. Indeed.

But the plenary proceeded. We voted to pass the first bylaws proposal, an uncontroversial one that merely clarified some wording, and I'm sure the intent was to sneak the second proposal through as well -- a vague proposal making it easier to withdraw recognition from a state or local league -- but enough delegates remained who had actually read the proposals and weren't willing to let it by without discussion.

On the other hand, the discussion didn't come to anything. A rewording amendment that I'm told had been universally agreed to at the Bylaws caucus the previous evening failed to go through, because too many of the people who understood the issue were away at the protest. So even though we ran out of time and had to stop before voting on the proposal, the amended wording had already failed and couldn't be reconsidered on Sunday when the discussion resumed.

(In case you're curious, this strategy is also how Pluto got demoted from being a planet. The IAU did almost exactly the same thing as the LWVUS, waiting until most of the voting members were out of the room before presenting the proposal to a small minority of delegates. Astronomers who were at the meeting but out of the room for the Pluto vote have spoken out, saying the decision was a bad one and makes little sense scientifically.)

Sunday Plenary

There's not much to say about Sunday. The bylaws proposal was still controversial, especially since half the delegation never had the chance to vote on the rewording proposal; the vote required a "card vote", meaning rather than counting hands or voices, delegates passed colored cards to the aisles to be counted. This was the only card vote of the convention.

Accessibility note: I was surprised to note that the voting cards were differentiated only by color; they didn't have anything like "yes" or "no" printed on them. I wonder how many colorblind delegates there were in that huge roomful of people who couldn't tell the cards apart.

The rest of Sunday's voting was on relatively unimportant, uncontroversial measures, ending with a bunch of proclamations that don't actually change anything. Those easily passed, rah, rah. We're against gun violence, for the ERA, against the electoral college, for pricing carbon emissions, for reproductive rights and privacy, and for climate change assessments that align with scientific principles. Nobody proposed anything about apple pie but I'm sure we would have been for that too.

And thus ended the conference and we all headed off to lunch or the airport. Feeling frustrated, a bit dirtied and not exactly fired up about Democracy.


Up: LWV National Convention, June-July 2018, Chicago

July 11, 2018

Welcoming the gPhoto Project to the PIXLS.US community!


Welcoming the gPhoto Project to the PIXLS.US community!

Helping the community one project at a time

A major goal of the PIXLS.US effort is to do whatever we can to help project developers unburden themselves from administering their projects. We do this, in part, by providing forum hosting, participating in support, providing web design, and doing community outreach. With that in mind, we are excited to welcome the gPhoto Project to our discuss forum!

The Entangle interface, which makes use of libgphoto.

You may not have heard of gPhoto, but there is a high chance that you’ve used the project’s software. At the heart of the project is libgphoto2, a portable library that gives applications access to hundreds of digital cameras. On top of the foundational library is gphoto2, a command line interface to your camera that supports almost everything that the library can do. The library is used in a bunch of awesome photography applications, such as digiKam, darktable, entangle, and GIMP. There is even a FUSE module, so you can mount your camera storage as a normal filesystem.
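
If you want to try that from a terminal, here is a minimal sketch (the mount point is just an example; the gphoto2 and gphotofs commands come from your distribution's gphoto2/gphotofs packages):

# Detect a connected camera and list the files on it
gphoto2 --auto-detect
gphoto2 --list-files

# Capture a frame and download it to the current directory
gphoto2 --capture-image-and-download

# Or mount the camera storage as a normal filesystem via the FUSE module
mkdir -p ~/camera
gphotofs ~/camera
ls ~/camera
fusermount -u ~/camera    # unmount when done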

gPhoto was recruited to the PIXLS.US community when @darix was sitting next to gPhoto developer Marcus. Marcus was using darix’s Fuji camera to test integration into libgphoto, and then the magic happened! Not only will some Fuji models be supported, but our community is growing larger. This is also a reminder that one person can make a huge difference. Thanks, darix!

Welcome, gPhoto, and thank you for the years and years of development!

July 09, 2018

Interview with Andrea Buso

Could you tell us something about yourself?

“I am in the middle of the journey of our life ..” (50 years). I was born in Padua in Italy. Since I was a child I’ve always designed; during primary school I created my first Japanese robot style cartoon (Mazinga). I attended art school. I attended computer graphics courses with Adobe products in 1995 and then a course of specialization at the internship school of Illustration for Children of Sarmede in Treviso (IT).

I worked as a freelancer in advertising and comics, I taught traditional and digital painting and drawing in my studio (now closed) La Casa Blu. Today I continue to draw as a hobby and some work, since I teach painting in a center for disabled people.

Do you paint professionally, as a hobby artist, or both?

These days I paint both for hobby and for work, even more as a hobby. Teaching in the special-needs center takes me a lot of time, but it also gives me a lot of satisfaction.

What genre(s) do you work in?

I do not have a specific genre, I like to change technique and style, I like to change, to find new ways. Even if my background is the comic. However, generally I prefer themes of fiction, fantasy. As you can guess I love mixing everything I like.

Whose work inspires you most — who are your role models as an artist?

In addition to loving Michelangelo Buonarroti and Caravaggio, I studied Richard Corben, Simon Bisley and Frank Frazetta. Currently I am following Mozart Couto, Ramon Miranda and David Revoy.

How and when did you get to try digital painting for the first time?

In 2000, my brother, a computer programmer, made me try OpenSuse. I used Gimp, and I felt good because I could draw what I wanted and how I wanted. Since then, I have abandoned Windows for Linux and I have discovered a series of wonderful programs which allow me to work professionally, giving me the advantage of digital.

What makes you choose digital over traditional painting?

In my opinion digital painting is infinity. You can do whatever you want and go back on your steps, whenever you want. It has an infinite number of techniques and tools to create, techniques and tools that you can create yourself. The limit is your own imagination.

How did you find out about Krita?

Watching Youtube videos by Mozart Couto, Ramon Miranda and David Revoy, I saw that they used Krita. I did not know what it was, so I did some research on the Internet and found the site. Voilà! Love was born! Today it is my favorite program (I’m not saying that to make a good impression on you!).

What was your first impression?

I must say that at the beginning the approach with Krita was a bit difficult. I came from the experience with Gimp and Mypaint, software that has a mild learning curve. But in the end, I managed to “tame” Krita at my will, now it’s my home.

What do you love about Krita?

Given that there are characteristics of Krita that I don’t know and that I will maybe never know, because they’re not necessary to my painting technique, I love everything about Krita!

Above all the panel for creating brushes. It’s wonderful; sometimes I spend hours creating brushes which I’ll never use because they don’t make sense, but I create them to see how far Krita can go. I love the possibility of combining raster and vector layers; the ability to change the text as I want, layer styles. Everything is perfect for my needs in Krita.

What do you think needs improvement in Krita? Is there anything that really annoys you?

Improvements to Krita that would matter to me would be an implementation of effects such as Clouds, Plasma, etcetera (I mention those of Gimp to give an example). Moreover, because I have the habit of adjusting lights and shadows at the end of a work, I miss controls such as exposure or other typically photographic effects.

I have nothing negative to say about Krita.

What sets Krita apart from the other tools that you use?

The freedom to manage your work, the potential of the various tools and the stability of the software are the most salient features of Krita. When I use Krita, I feel free to create without technical limitations of the software. Also, the quality of the brushes is unparalleled, and when you print your work you realize how much they are cared for and real.

If you had to pick one favourite of all your work done in Krita so far, what would it be, and why?

North (spirit), is my favorite work done with Krita, so far.

In it there are my first (serious) created brushes. The texture of the paper was created with Krita. Also there is my passion for the peoples of the north: my great-grandfather was Swedish. Therefore, in the drawing there is a large part of myself and the drawing represents me.

What techniques and brushes did you use in it?

I used a brush of my own creation and I took inspiration from Mucha (an Art Nouveau painter). The coloring style is similar to cell-shading, while the reflections and the glow of the moonlight were created with the standard FX brushes. The setting is on a magical night.

Where can people see more of your work?

You can see my work on Deviantart: https://audector.deviantart.com/

Anything else you’d like to share?

Thank you for the interview, it was an honor for me, I would just invite all creative people to use Krita, but above all Linux and KDE. The possibilities to work well and professionally are concrete, there is no longer a gap between open source and Windows. However, with Linux and KDE there is the possibility to work better. Thanks to you!

July 07, 2018

Script to modify omni.ja for a custom Firefox

A quick followup to my article on Modifying Firefox Files Inside omni.ja:

The steps for modifying the file are fairly easy, but they have to be done a lot.

First there's the problem of Firefox updates: if a new omni.ja is part of the update, then your changes will be overwritten, so you'll have to make them again on the new omni.ja.

But, worse, even aside from updates they don't stay changed. I've had Ctrl-W mysteriously revert back to its old wired-in behavior in the middle of a Firefox session. I'm still not clear how this happens: I speculate that something in Firefox's update mechanism may allow parts of omni.ja to be overridden, even though I was told by Mike Kaply, the onetime master of overlays, that they weren't recommended any more (at least by users, though that doesn't necessarily mean they're not used for updates).

But in any case, you can be browsing merrily along and suddenly one of your changes doesn't work any more, even though the change is still right there in browser/omni.ja. And the only fix I've found so far is to download a new Firefox and re-apply the changes. Re-applying them to the current version doesn't work -- they're already there. And it doesn't help to keep the tarball you originally downloaded around so you can re-install that; Firefox updates every week or two, so that version is guaranteed to be out of date.

All this means that it's crazy not to script the omni changes so you can apply them easily with a single command. So here's a shell script that takes the path to the current Firefox, unpacks browser/omni.ja, makes a couple of simple changes and re-packs it. I called it kitfox-patch since I used to call my personally modified Firefox build "Kitfox".

Of course, if your changes are different from mine you'll want to edit the script to change the sed commands.
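
The script itself is linked above; as a rough sketch of its shape (the sed edit shown here is the Ctrl-W change from the earlier article, and the paths and temp-dir handling are assumptions -- swap in your own edits):

#!/bin/sh
# Rough sketch: unpack browser/omni.ja from a Firefox install, apply edits, repack in place.
# Usage: ./kitfox-patch /path/to/firefox
set -e
FIREFOX_DIR=$(cd "$1" && pwd)    # absolute path to the Firefox directory
WORKDIR=$(mktemp -d)

mkdir "$WORKDIR/omni"
cd "$WORKDIR/omni"
unzip -q "$FIREFOX_DIR/browser/omni.ja" || true   # unzip complains about omni.ja's header but extracts anyway

# Example edit: un-reserve Ctrl-W so it can be remapped (replace with your own sed commands)
sed -i '/key_close/s/ reserved="true"//' chrome/browser/content/browser/browser.xul

zip -qr9XD "$WORKDIR/omni.ja" *
cp "$WORKDIR/omni.ja" "$FIREFOX_DIR/browser/omni.ja"
rm -rf "$WORKDIR"
echo "Patched $FIREFOX_DIR/browser/omni.ja"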

I hope eventually to figure out how it is that omni.ja changes stop working, and whether it's an overlay or something else, and whether there's a way to re-apply fixes without having to download a whole new Firefox. If I figure it out I'll report back.

July 03, 2018

GIMP 2.10.4 Released

The latest update of GIMP’s new stable series delivers bugfixes, simple horizon straightening, async fonts loading, fonts tagging, and more new features.

Simple Horizon Straightening

A common use case for the Measure tool is getting GIMP to calculate the angle of rotation when the horizon is uneven in a photo. GIMP now removes the extra step of performing the rotation manually: after measuring the angle, just click the newly added Straighten button in the tool’s settings dialog.

Straightening images in GIMP 2.10.4.

Asynchronous Fonts Loading

Loading all available fonts on start-up can take quite a while, because as soon as you add new fonts or remove existing ones, fontconfig (a 3rd party utility GIMP uses) has to rebuild the fonts cache. Windows and macOS users suffered the most from it.

Thanks to Jehan Pagès and Ell, GIMP now loads fonts in a parallel process, which dramatically improves startup time. The caveat is that if you need to use the Text tool immediately, you might have to wait until all fonts have finished loading. GIMP will notify you of that.

Fonts Tagging

Michael Natterer introduced some internal changes to make fonts taggable. The user interface is the same as for brushes, patterns, and gradients.

GIMP doesn’t yet automatically generate any tags from fonts metadata, but this is something we keep on our radar. Ideas and, better yet, patches are welcome!

Dashboard Updates

Ell added several new features to the Dashboard dockable dialog, which helps with debugging GIMP and GEGL or, for end users, with fine-tuning the use of cache and swap.

The new Memory group of widgets shows the currently used memory size, the available physical memory size, and the total physical memory size. It can also show the tile-cache size, for comparison against the other memory stats.

Updated Dashboard in GIMP 2.10.4.

Note that the upper-bound of the meter is the physical memory size, so the memory usage may be over 100% when GIMP uses the swap.

The Swap group now features “read” and “written” fields which report the total amount of data read-from/written-to the tile swap, respectively. Additionally, the swap busy indicator has been improved, so that it’s active whenever data has been read-from/written-to the swap during the last sampling interval, rather than at the point of sampling.

PSD Loader Improvements

While we cannot yet support PSD features such as adjustment layers, there is one thing we can do for users who just need a file to render correctly in GIMP. Thanks to Ell, GIMP can now load a “merged”, pre-composited version of the image, which becomes available when a PSD file was saved with the “Maximize Compatibility” option enabled in Photoshop.

This option is currently exposed as an additional file type (“Photoshop image (merged)”), which has to be explicitly selected from the filetype list when opening the image. GIMP will then render the file correctly, but drop certain additional data from the file, such as channels, paths, and guides, while retaining metadata.

Builds for macOS Make a Comeback

Beta builds of GIMP 2.10 for macOS are available now. We haven’t eliminated all issues yet, and we appreciate your feedback.

GEGL and babl

Ell further improved the Recursive Transform operation, allowing multiple transformations to be applied simultaneously. He also fixed the trimming of the tile cache into the swap.

The new Selective Hue-Saturation operation by Miroslav Talasek is now available in the workshop. The idea is that you can choose a hue, then select the width of the hue range around that base hue, then tweak the saturation of all affected pixels.

Øyvind Kolås applied various fixes to the Pixelize operation and added the “needs-alpha” meta-data to Color to Alpha and svg-luminancetoalpha operations. He also added a Threshold setting to the Unsharp Mask filter (now called Sharpen (Unsharp Mask)) to restore and improve the legacy Unsharp Mask implementation from GIMP prior to v2.10.

In babl, Ell introduced various improvements to the babl-palette code, including making the default palette initialization thread-safe. Øyvind Kolås added an R~G~B~ set of spaces (which for all BablSpaces mean use sRGB TRC), definitions of ACEScg and ACES2065-1 spaces, and made various clean-ups. Elle Stone contributed a fix for fixed-to-double conversions.

Ongoing Development

While we spend as much time on bugfixing in 2.10.x as we can, our main goal is to complete the GTK+3 port as soon as possible. There is a side effect of this work: we keep discovering old subpar solutions that frustrate us until we fix them. So there is both GTK+3 porting and refactoring, which means we can’t predict when it’ll be done.

Recently, we also revitalized an outdated subproject called ‘gimp-data-extras’ with the sole purpose of keeping the Alpha-to-Logo scripts that we removed from 2.10 due to poor graphics quality. Since some users miss those scripts, there is now a simple way to get them back: download gimp-data-extras v2.0.4, unpack the archive, and copy all ‘.scm’ files from the ‘scripts’ folder to your local GIMP’s ‘scripts’ folder.
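
On Linux, the whole procedure might look something like this (the download URL and the per-user scripts folder are assumptions; adjust them to wherever your distribution and GIMP actually keep these):

# Fetch and unpack gimp-data-extras, then copy the Script-Fu files
# into the per-user GIMP 2.10 scripts folder (URL and paths are assumptions)
wget https://download.gimp.org/pub/gimp/extras/gimp-data-extras-2.0.4.tar.bz2
tar xf gimp-data-extras-2.0.4.tar.bz2
cp gimp-data-extras-2.0.4/scripts/*.scm ~/.config/GIMP/2.10/scripts/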

July 02, 2018

Affiliated Vendors on the LVFS

We’re just about to deploy another feature to the LVFS that might be interesting to some of you. First, some nomenclature:

OEM: Original Equipment Manufacturer, the user-known company name on the outside of the device, e.g. Sony, Panasonic, etc
ODM: Original Device Manufacturer, typically making parts for one or more OEMs, e.g. Foxconn, Compal

There are some OEMs where the ODM is the entity responsible for uploading the firmware to the LVFS. The per-device QA is typically done by the OEM, rather than the ODM, although it can be both. Before today we didn’t have a good story about how to handle this other than having a “fake” oem_odm@oem.com user account that was shared by all users at the ODM. The fake account isn’t a good design from a security or privacy point of view, and so we needed something better.

The LVFS administrator can now mark one vendor as an “affiliate” of another. This gives the ODM permission to upload firmware that is “owned” by the OEM on the LVFS, and that appears in the OEM embargo metadata. The OEM QA team is also able to edit the update description, move the firmware to testing and stable (or delete it entirely) as required. The ODM vendor account also doesn’t have to appear in the search results or the vendor table, making it hidden to all users except OEMs.

This also means that if an ODM like Foxconn builds firmware for two different OEMs, they have to specify which vendor should “own” the firmware at upload time. This is achieved with a simple selection widget on the upload page, which will only be shown if affiliations have been set up. The ODM is able to manage their user accounts directly, either using local accounts with passwords or ODM-specific OAuth, which is the preferred choice as it means there is only one place to manage credentials.

If anyone needs more information, please just email me or leave a comment below. Thanks!

fwupdate is {nearly} dead; long live fwupd

If the title confuses you, you’re not the only one that’s been confused with the fwupdate and fwupd project names. The latter used the shared library of the former to schedule UEFI updates, with the former also providing the fwup.efi secure-boot signed binary that actually runs the capsule update for the latter.

In Fedora the only users of libfwupdate were fwupd and the fwupdate command line tool itself. It makes complete sense to absorb the redundant libfwupdate library interface into the uefi plugin in fwupd. Benefits I can see include:

  • fwupd and fwupdate are very similar names; a lot of ODMs and OEMs have been confused, especially the ones not so Linux savvy.
  • fwupd already depends on efivar for other things, and so there are no additional deps in fwupd.
  • Removal of an artificial library interface, with all the soname and package-induced pain. No matter how small, maintaining any project is a significant use of resources.
  • The CI and translation hooks are already in place for fwupd, and we can use the merging of projects as a chance to write lots of low-level tests for all the various hooks into the system.
  • We don’t need to check for features or versions in fwupd; we can just develop the feature (e.g. the BGRT localised background image) all in one branch without #ifdefs everywhere.
  • We can do cleverer things whilst running as a daemon, for instance uploading the fwup.efi to the ESP as required rather than installing it as part of the distro package.
The last point is important; several distros don’t allow packages to install files on the ESP and this was blocking fwupdate from being used by them. Also, 95% of the failures reported to the LVFS are from Arch Linux users who didn’t set up the ESP correctly as the wiki says. With this new code we can likely reduce the reported error rate by several orders of magnitude.

Note, fwupd doesn’t actually obsolete fwupdate, as the latter might still be useful if you’re testing capsule updates on something super-embedded that doesn’t ship GLib or D-Bus. We do ship a D-Bus-less fwupdate-compatible command line in /usr/libexec/fwupd/fwupdate if you’re using the old CLI from a shell script. We’re all planning to work on the new integrated fwupd version, but I’m sure there’ll be some sharing of fixes between the projects as libfwupdate is shipped in a lot of LTS releases like RHEL 7.
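
For scripts that call the old tool by name, switching over could be as small as preferring whichever binary is present; a minimal sketch (only the compatibility path above comes from this post, the rest is an assumption):

#!/bin/sh
# Use the old fwupdate if it is still installed, otherwise fall back to
# the D-Bus-less compatibility CLI shipped by the new fwupd.
if command -v fwupdate >/dev/null 2>&1; then
    FWUPDATE=fwupdate
else
    FWUPDATE=/usr/libexec/fwupd/fwupdate
fi
exec "$FWUPDATE" "$@"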

All of this new goodness is available in fwupd git master, which will be the new 1.1.0 release probably available next week. The 1_0_X branch (which depends on libfwupdate) will be maintained for a long time, and is probably the better choice to ship in LTS releases at the moment. Any distros that ship the new 1.1.x fwupd versions will need to ensure that the fwup.efi files are signed properly if they want SecureBoot to work; in most cases just copying over the commands from the fwupdate package is all that is required. I’ll be updating Fedora Rawhide with the new package as soon as it’s released.

Comments welcome.

FreeCAD BIM development news - June 2018

Hi all, Time for a new update on the development of BIM tools for FreeCAD. There is some exciting new stuff; most of it is things that I've been working on for some time, which are now ready. As always, a big thank you to everybody who helped me this month through Patreon or Liberapay! We are...

June 27, 2018

Krita 4.1.0 Released

Three months after the release of Krita 4.0, we’re releasing Krita 4.1!

This release includes the following major new features:

 

  • A new reference images tool that replaces the old reference images docker.
  • You can now save and load sessions: the set of images and views on images you were working on
  • You can create multi-monitor workspace layouts
  • An improved workflow for working with animation frames
  • An improved animation timeline display
  • Krita can now handle larger animations by buffering rendered frames to disk
  • The color picker now has a mixing option
  • Improved vanishing point assistant — and assistants can be painted with custom colors
  • Krita’s scripting module can now be built with Python 2
  • The first part of Ivan Yossi’s Google Summer of Code work on improving the performance of brush masks through vectorization is included as well!

And there are a host of bug fixes, of course, and improvements to the rendering performance and more features. Read the full release notes to discover what’s new in Krita 4.1!

Image by RJ Quiralta

Note!

We found a bug where activating the transform tool will cause a crash if you had selected the Box filter previously. If you experience a crash when enabling the transform tool in Krita 4.1.0, go to your kritarc file and remove the line that says “filterId=Box” in the [KisToolTransform] section. Sorry for the inconvenience. We will bring out a bug fix release as soon as possible.
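
On Linux, kritarc typically lives in ~/.config; a minimal sketch of removing just that line from the [KisToolTransform] section (the path is an assumption, and keeping a backup is a good idea):

# Back up kritarc, then delete "filterId=Box" only inside [KisToolTransform]
cp ~/.config/kritarc ~/.config/kritarc.bak
sed -i '/^\[KisToolTransform\]/,/^\[/{/^filterId=Box$/d;}' ~/.config/kritarc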

Download

Windows

Note for Windows users: if you encounter crashes, please follow these instructions to use the debug symbols so we can figure out where Krita crashes.

Linux

(If, for some reason, Firefox thinks it needs to load this as text: to download, right-click on the link.)

When it is updated, you can also use the Krita Lime PPA to install Krita 4.1.0 on Ubuntu and derivatives. We are working on an updated snap.
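
On Ubuntu that typically amounts to adding the PPA and installing the package; a minimal sketch (the PPA name is an assumption, so check the Krita Lime page for the current one):

# Add the Krita Lime PPA and install the new release (PPA name is an assumption)
sudo add-apt-repository ppa:kritalime/ppa
sudo apt update
sudo apt install krita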

OSX

Note: the touch docker, gmic-qt and python plugins are not available on OSX.

Source code

md5sum

For all downloads:

Key

The Linux appimage and the source tarball are signed. You can retrieve the public key over https here: 0x58b9596c722ea3bd.asc. The signatures are here (filenames ending in .sig).
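
Checking a download is the usual GnuPG routine; a minimal sketch (the appimage filename is just an example, so substitute whatever you actually downloaded):

# Import the Krita signing key, then verify a download against its .sig file
gpg --import 0x58b9596c722ea3bd.asc
gpg --verify krita-4.1.0-x86_64.appimage.sig krita-4.1.0-x86_64.appimage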

Support Krita

Krita is a free and open source project. Please consider supporting the project with donations or by buying training videos or the artbook! With your support, we can keep the core team working on Krita full-time.

June 25, 2018

Interview with Natasa

Could you tell us something about yourself?

Hey, my name is Natasa, I’m a Greek illustrator from Athens currently living in Portugal. My nick is Anastasia_Arjuk. I get all of my inspiration from nature, mythology and people.

Do you paint professionally, as a hobby artist, or both?

I’ve been working on and off professionally; I did some book covers, children’s book illustration and a bit of jewelry design back home. But life happened and now I’m starting fresh, trying to build something that’s all mine. I’ve never stopped drawing though, very happy about that.

What genre(s) do you work in?

The picture has to tell a story, that’s all I really look into. Other than that I just pick what feels right each time.

Whose work inspires you most — who are your role models as an artist?

There are so many! Among contemporary ones I’d say Gennady Spirin, Lisbeth Zwerger and Andrew Hem. From digital art, Apterus is excellent in my opinion. I also love Byzantine art, Islamic art and a huge number of old painters, way too many to mention here. Don’t ignore the history of art, folks, you won’t believe the difference it will make to your work.

How and when did you get to try digital painting for the first time?

I actually started in early 2017, been working only traditional before that. Still not completely comfortable with it but getting there.

What makes you choose digital over traditional painting?

For practical reasons really, it’s so much easier to work professionally on digital art. From having more room, to mailing, to everything. I still prefer traditional art for my personal projects though.

How did you find out about Krita?

I was looking on YouTube for Photoshop lessons at the time, and ran into the channel of an artist who was using Krita. The brushwork seemed so creamy and rich, I had to try it out.

What was your first impression?

I loved the minimal UI and it felt very intuitive. Easy to pick up and go.

What do you love about Krita?

First of all it has an Animation Studio included, I haven’t done 2D animation in years and now I can do it at home, on my PC. Yay! The brush engine is second to none quite frankly and yes I’ve tried more than Krita before I reach that conclusion. I love the mirror tools, the eraser system and that little colour pick up docker where you can attach your favorite brushes as well. Love that little bugger, so practical. Oh and the pattern tool.

What do you think needs improvement in Krita? Is there anything that really annoys you?

I’d like to be able to lock the entire UI in place, not just the dockers, if possible. To be able to zoom in and out like it is on Photoshop, like the Z key in combination with the pen. An improved Text tool. Also probably a stronger engine, to handle larger files. Just nitpicking really.

What sets Krita apart from the other tools that you use?

It’s a very professional freeware program. I very much support what that stands for and like I said, amazing amazing brush engine. Coming from traditional media, textures are extremely important for me. Also the animation possibilities.

If you had to pick one favourite of all your work done in Krita so far, what would it be, and why?

I don’t like to dwell on older pieces, since you can see all their mistakes after they’re done, but I’d say Anansi, the Spider. I learned a lot working on that piece.

What techniques and brushes did you use in it?

I just painted it the same way as traditional art, layers of colour on top of each other, new layer – paint – erase at certain spots, rinse and repeat, a bit like a watercolour technique. I wanted the underpainting to be visible in parts. I don’t remember the brushes now but they were all default brushes from the Paint tab, which I also used as erasers. A little bit of overlay filters and color maps and voila. Like I said, Krita is very intuitive.

Where can people see more of your work?

Artstation: https://www.artstation.com/anastasia_arjuk
Behance: https://www.behance.net/Anastasia_Arjuk
Instagram: https://www.instagram.com/anastasia_arjuk/
Twitter: https://twitter.com/Anastasia_Arjuk
Deviant Art: https://anastasia-arjuk.deviantart.com/
YouTube: https://www.youtube.com/channel/UCAy9Hg8ZaV87wqT6GO4kVnw

Anything else you’d like to share?

Thanks for having me first of all and keep up the good work. In all honesty Krita makes a huge difference to people who want to get involved with art but can’t afford (or don’t want to use) the industry standards. So such a professional open source program is a vital help.

June 24, 2018

Modifying Firefox Files Inside Omni.ja

My article on Fixing key bindings in Firefox Quantum by modifying the source tree got attention from several people who offered helpful suggestions via Twitter and email on how to accomplish the same thing using just files in omni.ja, so it could be done without rebuilding the Firefox source. That would be vastly better, especially for people who need to change something like key bindings or browser messages but don't have a souped-up development machine to build the whole browser.

Brian Carpenter had several suggestions and eventually pointed me to an old post by Mike Kaply, Don’t Unpack and Repack omni.ja[r] that said there were better ways to override specific files.

Unfortunately, Mike Kaply responded that that article was written for XUL extensions, which are now obsolete, so the article ought to be removed. That's too bad, because it did sound like a much nicer solution. I looked into trying it anyway, but the instructions it points to for Overriding specific files are woefully short on detail about how to map a path inside omni.ja, like chrome://package/type/original-uri.whatever, to a URL, and the single example I could find was so old that the file it referenced didn't exist at the same location any more. After a fruitless half hour or so, I took Mike's warning to heart and decided it wasn't worth wasting more time chasing something that wasn't expected to work anyway. (If someone knows otherwise, please let me know!)

But then Paul Wise offered a solution that actually worked, as an easy to follow sequence of shell commands. (I've changed some of them very slightly.)

$ tar xf ~/Tarballs/firefox-60.0.2.tar.bz2
  # (This creates a "firefox" directory inside the current one.)

$ mkdir omni
$ cd omni

$ unzip -q ../firefox/browser/omni.ja
warning [../firefox-60.0.2/browser/omni.ja]:  34187320 extra bytes at beginning or within zipfile
  (attempting to process anyway)
error [../firefox-60.0.2/browser/omni.ja]:  reported length of central directory is
  -34187320 bytes too long (Atari STZip zipfile?  J.H.Holm ZIPSPLIT 1.1
  zipfile?).  Compensating...
zsh: exit 2     unzip -q ../firefox-60.0.2/browser/omni.ja

$ sed -i 's/or enter address/or just twiddle your thumbs/' chrome/en-US/locale/browser/browser.dtd chrome/en-US/locale/browser/browser.properties

I was a little put off by all the warnings unzip gave, but kept going.

Of course, you can just edit those two files rather than using sed; but the sed command was Paul's way of being very specific about the changes he was suggesting, which I appreciated.

Use these flags to repackage omni.ja:

$ zip -qr9XD ../omni.ja *

I had tried that before (without the q since I like to see what zip and tar commands are doing) and hadn't succeeded. And indeed, when I listed the two files, the new omni.ja I'd just packaged was about a third the size of the original:

$ ls -l ../omni.ja ../firefox-60.0.2/browser/omni.ja
-rw-r--r-- 1 akkana akkana 34469045 Jun  5 12:14 ../firefox/browser/omni.ja
-rw-r--r-- 1 akkana akkana 11828315 Jun 17 10:37 ../omni.ja

But still, it's worth a try:

$ cp ../omni.ja ../firefox/browser/omni.ja

Then run the new Firefox. I have a spare profile I keep around for testing, but Paul's instructions included a nifty way of running with a brand new profile and it's definitely worth knowing:

$ cd ../firefox

$ MOZILLA_DISABLE_PLUGINS=1 ./firefox -safe-mode -no-remote -profile $(mktemp -d tmp-firefox-profile-XXXXXXXXXX) -offline about:blank

Also note the flags like safe-mode and no-remote, plus disabling plugins -- all good ideas when testing something new.

And it worked! When I started up, I got the new message, "Search or just twiddle your thumbs", in the URL bar.

Fixing Ctrl-W

Of course, now I had to test it with my real change. Since I like Paul's way of using sed to specify exactly what changes to make, here's a sed version of my Ctrl-W fix:

$ sed -i '/key_close/s/ reserved="true"//' chrome/browser/content/browser/browser.xul

Then run it. To test Ctrl-W, you need a website that includes a text field you can type in, so -offline isn't an option unless you happen to have a local web page that includes some text fields. Google is an easy way to test ... and you might as well re-use that firefox profile you just made rather than making another one:

$ MOZILLA_DISABLE_PLUGINS=1 ./firefox -safe-mode -no-remote -profile tmp-firefox-profile-* https://google.com

I typed a few words in the google search field that came up, deleted them with Ctrl-W -- all was good! Thanks, Paul! And Brian, and everybody else who sent suggestions.

Why are the sizes so different?

I was still puzzled by that threefold difference in size between the omni.ja I repacked and the original that comes with Firefox. Was something missing? Paul had the key to that too: use zipinfo on both versions of the file to see what differed. Turned out Mozilla's version, after a long file listing, ends with

2650 files, 33947999 bytes uncompressed, 33947999 bytes compressed:  0.0%
while my re-packaged version ends with
2650 files, 33947969 bytes uncompressed, 11307294 bytes compressed:  66.7%

So apparently Mozilla's omni.ja is using no compression at all. It may be that that makes it start up a little faster; but Quantum takes so long to start up that any slight difference in uncompressing omni.ja isn't noticeable to me.
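
Incidentally, those totals are just zipinfo's final summary line, so you can pull them out directly for comparison:

$ zipinfo ../firefox/browser/omni.ja | tail -1
$ zipinfo ../omni.ja | tail -1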

I was able to run through this whole procedure on my poor slow netbook, the one where building Firefox took something like 15 hours ... and in a few minutes I had a working modified Firefox. And with the sed command, this is all scriptable, so it'll be easy to re-do whenever Firefox has a security update. Win!

Update: I have a simple shell script to do this: Script to modify omni.ja for a custom Firefox.

June 22, 2018

Thomson 8-bit computers, a history

In March 1986, my dad was in the market for a Thomson TO7/70. I have the circled classified ads in “Téo” issue 1 to prove that :)



TO7/70 with its chiclet keyboard and optical pen, courtesy of MO5.com

The “Plan Informatique pour Tous” was in full swing, and Thomson were supplying schools with micro-computers. My dad, as a primary school teacher, needed to know how to operate those computers, and eventually teach them to kids.

The first thing he showed us when he got the computer, on the living room TV, was a game called “Panic” or “Panique” where you controlled a missile, protecting a town from flying saucers that flew across the screen from either side, faster and faster as the game went on. I still haven't been able to locate this game again.

A couple of years later, the TO7/70 was replaced by a TO9, with a floppy disk drive, and my dad used that computer to write educational software about top-down addition, as part of a training program run by the teachers' schools (“Écoles Normales”, renamed to “IUFM” in 1990).

After months of nagging, and some spring cleaning, he found the listings of his educational software, which I've liberated, with his permission. I'm currently still working out how to generate floppy disks that are usable directly in emulators. But here's an early screenshot.


Later on, my dad got an IBM PC compatible, an Olivetti PC/1, on which I'd play a clone of Asteroids for hours, but that's another story. The TO9 got passed down to me, and after spending a full summer doing planning for my hot-dog and chips van business (I was 10 or 11, and I had weird hobbies already), and entering every game from the “102 Programmes pour...” series of books, the TO9 got put to the side at Christmas, replaced by a Sega Master System, using that same handy SCART connector on the Thomson monitor.

But how does this concern you? Well, I worked with RetroManCave on a Minitel episode not too long ago, and he agreed to do a history of the Thomson micro-computers. I did a fair bit of the research and fact-checking, as well as some needed repairs to the (prototype!) hardware I managed to find for the occasion. The result is this first look at the history of Thomson.



Finally, if you fancy diving into the Thomson computers, there will be an episode coming shortly about the MO5E hardware, and some games worth running on it, on the same YouTube channel.

I'm currently working on bringing the “TeoTO8D” emulator to Flathub, for Linux users. When that's ready, grab some games from the DCMOTO archival site, and have some fun!

I'll also be posting some nitty-gritty details about Thomson repairs on my Micro Repairs Twitter feed for the more technically inclined among you.

June 21, 2018

First Beta Release of Krita 4.1

Three months after the release of Krita 4.0, we’re releasing the first (and probably only) beta of Krita 4.1, a new feature release! This release includes the following major new features:

  • A new reference images tool that replaces the old reference images docker.
  • You can now save and load sessions: the set of images and views on images you were working on
  • You can create multi-monitor workspace layouts
  • An improved workflow for working with animation frames
  • An improved animation timeline display
  • Krita can now handle larger animations by buffering rendered frames to disk
  • The color picker now has a mixing option
  • Improved vanishing point assistant — and assistants can be painted with custom colors
  • Krita’s scripting module can now be built with Python 2
  • The first part of Ivan Yossi’s Google Summer of Code work on improving the performance of brush masks through vectorization is included as well!

And there’s more. Read the full release notes to discover what’s new in Krita 4.1! With this beta release, the release notes are still work in progress, though.

Download

Windows

Note for Windows users: if you encounter crashes, please follow these instructions to use the debug symbols so we can figure out where Krita crashes.

Linux

(If, for some reason, Firefox thinks it needs to load this as text: to download, right-click on the link.)

When it is updated, you can also use the Krita Lime PPA to install Krita 4.1.0-beta.2 on Ubuntu and derivatives. We are working on an updated snap.

OSX

Note: the touch docker, gmic-qt and python plugins are not available on OSX.

Source code

md5sum

For all downloads:

Key

The Linux appimage and the source tarball are signed. You can retrieve the public key over https here: 0x58b9596c722ea3bd.asc. The signatures are here (filenames ending in .sig).

Support Krita

Krita is a free and open source project. Please consider supporting the project with donations or by buying training videos or the artbook! With your support, we can keep the core team working on Krita full-time.

June 18, 2018

Practical Printer Profiling with Gutenprint

Some time ago I purchased an Epson Stylus Photo R3000 printer, as I wanted to be able to print at A3 size, and get good quality monochrome prints. For a while I struggled a bit to get good quality color photo output from the R3000 using Gutenprint, as it took me a while to figure out which settings proved best for generating and applying ICC profiles.

Sidenote: if you happen to have an R3000 as well and you want to be able to get good results using Gutenprint, you can get some of my profiles here; not all of these profiles have been practically tested. Obviously your mileage may vary.

Gutenprint’s documentation clearly indicates that you should use the “Uncorrected” Color Correction mode, which is very much good advice, as we need deterministic output to be able to generate and apply our ICC profiles in a consistent manner. What kinda threw me off is that the “Uncorrected” Color Correction mode produces linear gamma output, which practically means very dark output, which the ICC profile is going to need to correct for. And while this is a valid approach, it does generally mean you need to generate a profile using more color patches, which means using more ink and paper for each profile you generate. A more practical approach is to set Composite Gamma to a value of 1.0, which gamma corrects the output to look more perceptually natural; consequently the ICC profile has less to correct for, and thus can be generated using fewer color patches, and therefore less ink and paper.

Keep in mind that a printer profile is only valid for a particular combination of Printer, Ink set, Paper, Driver and Settings. Therefore you should document all these facets while generating a profile. This can be as simple as including a similarly named plain text file with each profile you create, for example:

Filename ............: epson_r3000_tecco_photo_matte_230.icc
MD5 Sum .............: 056d6c22ea51104b5e52de8632bd77d4

Paper Type ..........: Tecco Photo Matte 230

Printer Model .......: Epson Stylus Photo R3000
Printer Ink .........: Epson UltraChrome K3 with Vivid Magenta
Printer Firmware ....: AS25E3 (09/11/14)
Printer Driver ......: Gutenprint 5.2.13 (Ubuntu 18.04 LTS)

Color Model .........: RGB
Color Precision .....: Normal
Media Type ..........: Archival Matte Paper
Print Quality .......: Super Photo
Resolution ..........: 1440x1440 DPI
Ink Set .............: Matte Black
Ink Type ............: Standard
Quality Enhancement .: None
Color Correction ....: Uncorrected
Image Type ..........: Photograph
Dither Algorithm ....: EvenTone
Composite Gamma .....: 1.0

You’ll note I’m not using the maximum 5760×2880 resolution Gutenprint supports for this printer, as the quality increase seems almost negligible, and it slows down printing immensely, and might also increase ink consumption with little to show for it.

While the Matte Black (MK) ink set and Archival Matte Paper media type works very well for matte papers, you should probably use Photo Black (PK) ink set and Premium Glossy Photo Paper media type for glossy or Premium Semigloss Photo Paper for pearl, satin & lustre media types.

The following profiling procedure uses only a single sheet of A4 paper, with very decent results. You can use multiple pages by increasing the patch count, though the increase in effective output quality will likely be underwhelming; your mileage may vary of course.

To proceed you’ll need a spectrophotometer (a colorimeter won’t suffice) supported by ArgyllCMS, like for example the X-Rite Color Munki Photo.

To install ArgyllCMS and other relevant tools on Debian (or one of its derivatives like Ubuntu):

apt-get install argyll liblcms2-utils imagemagick

First we’ll need to generate a set of color patches (we’re including a neutral grey axis, so the profile can more effectively neutralize Epson’s warm tone grey inks):

targen -v -d 3 -G -g 14 -f 210 myprofile
printtarg -v -i CM -h -R 42 -t 360 -M 6 -p A4 myprofile

This results in a TIF file, which you need to print at whatever settings you want to use the profile at. Make sure you let the print dry (and outgas) for an hour at the very least. After the print has dried we’ll need to start measuring the patches using our spectrophotometer:

chartread -v -H myprofile

Once all the patches have been read, we’re ready to generate the actual profile.

colprof -v -D "Tecco Photo Matte 230 for Epson R3000" \
           -C "Copyright 2018 Your Name Here" \
           -Zm -Zr -qm -nc \
           -S /usr/share/color/argyll/ref/sRGB.icm \
           -cmt -dpp myprofile

Note if you’re generating a profile for a glossy or lustre paper type remove the -Zm from the colprof commandline.

Evaluating Your Profile

After generating a custom print profile we can evaluate the profile using xicclu:

xicclu -g -fb -ir myprofile.icc

Looking at the graph above, there are a few things of note. You’ll notice the graph doesn’t touch the lower right corner, which represents a profile’s black point; keep in mind that the blackest black any printer can print still reflects some light, and thus isn’t perfectly black, i.e. 0.

Another point of interest is the curvature of the lines. If the graph is bowing significantly to the upper right, it means the media type you have chosen for your profile is causing Gutenprint to put down more ink than the paper you’re using is capable of taking. Conversely, if the graph is bowing significantly to the lower left, it means the media type you have chosen for your profile is causing Gutenprint to put down less ink than the paper you’re using is capable of taking. While a profile will compensate for either, having a profile compensate too strongly for either may cause banding artifacts in rare cases, especially with an 8-bit workflow. While I haven’t had a case yet where I needed to, you can use the Density control to adjust the amount of ink put on paper.

Visualizing Printer Gamut

To visualize the effective gamut of your profile you can generate a 3D Lab colorspace graph using iccgamut, which you can view with any modern web browser:

iccgamut -v -w -n myprofile.icc
xdg-open myprofile.x3d.htm

Comparing Gamuts

To compare the gamut of our new custom print profile against a standard working colorspace like sRGB follow these steps:

cp /usr/share/color/argyll/ref/sRGB.icm .
iccgamut -v sRGB.icm
iccgamut -v myprofile.icc
viewgam -i -n myprofile.gam sRGB.gam srgb_myprofile
Intersecting volume = 406219.5 cubic units
'epson_r3000_hema_matt_coated_photo_paper_235.gam' volume = 464977.8 cubic units, intersect = 87.36%
'sRGB.gam' volume = 899097.5 cubic units, intersect = 45.18%
xdg-open srgb_myprofile.x3d.htm

From the above output we can conclude that our custom print profile covers about 45% of sRGB, meaning the printer has a gamut that is much smaller than sRGB. However we can also see that sRGB in turn covers about 87% of our custom print profile, which means that 13% of our custom print profile gamut is actually beyond the gamut of sRGB.

This is where gamut mapping comes in. This is where declared rendering intents actually affect how colors outside of the shared gamut are handled.

While a Relative Colorimetric rendering intent limits your prints to the shared area, effectively giving you the smallest practical gamut, it will however offer you the best color accuracy.

A Perceptual rendering intent will scale down colors from an area where a working space profile has a larger gamut (the other 55% of sRGB) into a smaller gamut.

A Saturation rendering intent will also scale up colors from an area where a working space profile has a smaller gamut into a larger gamut (the 13% of our custom print profile).

Manually Preparing Prints using liblcms2-utils

To test your profile, I suggest getting a good test image, like for example from SmugMug, and applying your new profile, using either Perceptual gamut mapping or Relative Colorimetric gamut mapping with Black Point Compensation respectively:

jpgicc -v -o printer.icc -t 0    -q 95 original.jpg print.jpg
jpgicc -v -o printer.icc -t 1 -b -q 95 original.jpg print.jpg

When you open either of the print-corrected images, you’ll most likely find they both look awful on your computer’s display, but keep in mind this is because the images are correcting for printer, driver, ink & paper behavior. If you actually print either image, the printed image should look fairly close to the original image on your computer’s display (presuming you have your display set up properly and calibrated as well).

Manually Preparing Prints using ImageMagick

A more sophisticated way to prepare real images for printing would be to use (for example) ImageMagick. The examples below illustrate how you can use ImageMagick to scale an image to a set resolution (360 DPI) for a given paper size, add print sharpening (this is why having a known static resolution is important; otherwise the sharpening would give inconsistent results across different images), then add a thin black border and a larger but equidistant (presuming a 3:2 image) white border, and finally convert the image to our custom print profile:

A4 paper

convert -profile /usr/share/color/argyll/ref/sRGB.icm \
        -resize 2466^ -density 360 -unsharp 2x2+1+0 \
        -bordercolor black -border 28x28 -bordercolor white -border 227x227 \
        -black-point-compensation -intent relative -profile myprofile.icc \
        -strip -sampling-factor 1x1 -quality 95 original.jpg print.jpg

A3 paper

convert -profile /usr/share/color/argyll/ref/sRGB.icm \
        -resize 3487^ -density 360 -unsharp 2x2+1+0 \
        -bordercolor black -border 28x28 -bordercolor white -border 333x333 \
        -black-point-compensation -intent relative -profile myprofile.icc \
        -strip -sampling-factor 1x1 -quality 95 original.jpg print.jpg

A3+ paper

convert -profile /usr/share/color/argyll/ref/sRGB.icm \
        -resize 4320^ -density 360 -unsharp 2x2+1+0 \
        -bordercolor black -border 28x28 -bordercolor white -border 152x152 \
        -black-point-compensation -intent relative -profile myprofile.icc \
        -strip -sampling-factor 1x1 -quality 95 original.jpg print.jpg

Automatically Preparing Prints via colord

While the above method describes a way that gives you a lot of control on how to prepare images for printing, you may also want to use a profile for printing on plain paper, where the input is output of any random application, as opposed to a raster image file that can be very easily preprocessed.

Via colord you can assign a printer an ICC profile that will be automatically applied through cups-filters (pdftoraster), but keep in mind that this profile can only be changed through colormgr (or another colord frontend, like GNOME Control Center) and not through an application’s print dialog, sadly. To avoid messing with driver settings too much, I would suggest duplicating your printer in CUPS, for example:

  • a printer instance for plain paper prints (with an ICC profile assigned through colord)
  • a printer instance for matte color photographic prints (without a profile assigned through colord)
  • a printer instance for (semi)glossy color photographic prints
  • a printer instance for matte black and white photographic prints (likely without a need for a profile at all).
  • a printer instance for (semi)glossy black and white photographic prints (likely without a need for a profile at all).

One caveat of having a printer duplicated in CUPS is that it essentially also creates multiple print queues, which means if you have sent prints to multiple separate queues, you’ll have a race condition where it’s anybody’s guess which queue actually delivers the next print to your single physical printer, which may result in prints coming out in a different order than you sent them. But my guess is that this disadvantage will hardly be noticeable for most people, and very tolerable to most who would notice it.
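
Duplicating a queue is just a matter of creating a second CUPS printer that points at the same device; a minimal sketch (the queue name, device URI and PPD path are assumptions, lifted from your existing queue):

# Find the device URI of the existing queue
lpstat -v

# Create a second queue for matte photo prints, reusing the same
# connection and PPD as the original R3000 queue (URI and PPD path are examples)
sudo lpadmin -p R3000-photo-matte -E \
     -v usb://EPSON/Epson%20Stylus%20Photo%20R3000 \
     -P /etc/cups/ppd/R3000.ppd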

One thing to keep in mind is that pdftoraster applies an ICC profile by default using a Perceptual rendering intent, which means that out-of-gamut colors in a source image are scaled to fit inside the print profile’s gamut. Fundamentally the Perceptual rendering intent makes the tradeoff to keep gradients intact, at the expense of color accuracy, which is most often a fairly sensible thing to do. Given this tidbit of information, and the fact that pdftoraster assumes sRGB input (unless explicitly told otherwise), I’d like to emphasize the importance of passing the -S parameter with an sRGB profile to colprof when generating print profiles for use on Linux.

To assign an ICC profile to be applied automatically by cups-filters:

sudo cp navigator_colour_documents_120.icc /var/lib/colord/icc/navigator_colour_documents_120.icc
colormgr import-profile /var/lib/colord/icc/navigator_colour_documents_120.icc
colormgr find-profile-by-filename /var/lib/colord/icc/navigator_colour_documents_120.icc
colormgr get-devices-by-kind printer
colormgr device-add-profile \
         /org/freedesktop/ColorManager/devices/cups_EPSON_Epson_Stylus_Photo_R3000 \
         /org/freedesktop/ColorManager/profiles/icc_c43e7ce085212ba8f85ae634085ecfd3

More on Gutenprint media types

In contrast to commercial printer drivers, Gutenprint gives us the opportunity to peek under the covers and find out more about the different media types Gutenprint supports for your printer. First, look up your printer’s model number:

$ grep 'R3000' /usr/share/gutenprint/5.2/xml/printers.xml 
<printer ... name="Epson Stylus Photo R3000" driver="escp2-r3000" ... model="115" ...

Then find the relevant media definition file:

$ grep 'media src' /usr/share/gutenprint/5.2/xml/escp2/model/model_115.xml 
<media src="escp2/media/f360_ultrachrome_k3v.xml"/>

Finally you can dig through the relevant media type definitions, where the Density parameter is of particular interest:

$ less /usr/share/gutenprint/5.2/xml/escp2/media/f360_ultrachrome_k3v.xml
<paper ... text="Plain Paper" ... PreferredInkset="ultra3matte">
  <ink ... name="ultra3matte" text="UltraChrome Matte Black">
    <parameter type="float" name="Density">0.720000</parameter>
<paper ... text="Archival Matte Paper" ... PreferredInkset="ultra3matte">
  <ink ... name="ultra3matte" text="UltraChrome Matte Black">
    <parameter type="float" name="Density">0.920000</parameter>
<paper ... text="Premium Glossy Photo Paper" ... PreferredInkset="ultra3photo">
  <ink ... name="ultra3photo" text="UltraChrome Photo Black">
    <parameter type="float" name="Density">0.720000</parameter>
<paper ... text="Premium Semigloss Photo Paper" ... PreferredInkset="ultra3photo">
  <ink ... name="ultra3photo" text="UltraChrome Photo Black">
    <parameter type="float" name="Density">0.720000</parameter>
<paper ... text="Photo Paper" ... PreferredInkset="ultra3photo">
  <ink ... name="ultra3photo" text="UltraChrome Photo Black">
    <parameter type="float" name="Density">1.000000</parameter> 

Dedicated Grey Neutralization Profile

As mentioned earlier, the Epson R3000 uses warm tone grey inks, which results in very pleasant true black & white images, without any color inks used, at least when Gutenprint is told to print in Greyscale mode.

If, unlike me, you don’t like the warm tone effect, applying the ICC we generated should neutralize it mostly, but possibly not perfectly, which is fine for neutral areas in color prints, but may be less satisfactory for proper black & white prints.

While I haven’t done any particular testing on this issue, you may want to consider creating a second profile dedicated to and tuned for grey neutralization; just follow the normal profiling procedure with the following target generation command instead:

targen -v -d 3 -G -g 64 -f 210 -c previous_color_profile.icc -A 1.0 -N 1.0 grey_neutral_profile

Obviously you’ll need to use this particular profile in RGB color mode, even though your end goal may be monochrome, given that the profile needs to use color inks to compensate for the warm tone grey inks.

YouTube Blocks Blender Videos Worldwide

Thursday June 21 2018, by Ton Roosendaal

Last night all videos came back (except the one from Andrew Price, which is still blocked in the USA).
According to another person at Youtube we now *do* have to sign the other agreement as well. You can read it here.

I’m not sure if we should accept this. We will study it further.

Wednesday 17h, June 20 2018, by Ton Roosendaal

Our videos still don’t play.

Wednesday 10.30h, June 20 2018, by Ton Roosendaal

Last night the Youtube Support team contacted Francesco Siddi by phone. As we understand it now it’s a mix of coincidences, bad UIs, wrong error messages, ignorant support desks and our non-standard decision to not monetize a popular Youtube channel.

The coincidence is that Youtube is rolling out their subscription system in Europe (and Netherlands). This subscription system will allow users to stream music and enjoy Youtube ad-free. They updated terms and conditions for it and need to get monetized channel owners to approve that. Coincidentally our channel was set to allow monetization.

The bad UI was that the ‘please accept the new terms’ button was only visible if you go to the new Youtube “Content Manager” account, which I was not aware of and which is not active when you log in to Youtube using the Foundation account to manage videos. The channel was also set to monetization mode, which there is no option to reset. To make us even more confused, yesterday the system generated the wrong agreement to be signed.

(Image: after logging in to the Foundation account, the menu “Switch Accounts” – shows the option to login as “Content Manager”).

Because we had not accepted the new terms, the wrong error message was to put all videos in “Not available in your country” mode, which usually signals that there is a copyright issue. Something similar happened to Andrew Price’s video last year, which (according to our new contact) was because of a trademark dispute, but that was never made explicit to us.

All support desk people we contacted (since December last year) couldn’t find out what was wrong. They didn’t know that not accepting ‘terms and conditions’ could be causing this. Until yesterday they thought there was a technical error.

After reviewing the new terms and conditions (which basically amount to accepting the subscription system), I decided to accept them. According to the new Youtube contact, our channel would then be back in a few hours.

Just while writing this, the video thumbnails appeared to be back! They don’t play yet.

Tuesday (afternoon) 19 June 2018, by Ton Roosendaal

We are doing a PeerTube test on video.blender.org. It is running on one of our own servers, in a European datacenter. Just click around and have some fun. We’re curious to see how it holds up!

Tuesday 19 June 2018, by Ton Roosendaal

Last night we received a contract from Google. You can read it here. It’s six pages of legal talk, but the gist of the agreement appears to be about Blender Foundation accepting to monetize content on its Youtube channel.

However, BF has had an ad-free Youtube account since 2008. We have monetization disabled, but it looks like Google is going to change this policy. For example, we now see a new section on our channel settings page: “Monetization enabled”.

However, the actual advertisement option is disabled in the advanced settings:

Now there’s another issue. Last year we were notified by US Youtube visitors that a very popular Blender Conference talk wasn’t visible for them – the talk Andrew Price gave in 2016, The 7 Habits of Highly Effective Artists. It had over a million views already.

With our channel reaching > 100k subscribers, we have special priority support. So we contacted them to ask what was wrong. After a couple of mails back and forth, the reply was as follows (22 dec 2017):

Thanks for your continued support and patience.

I’ve received an update from our experts stating that you need to enable ads for your video. Once you enable, your video will be available in the USA.

If there’s anything else you’d need help with, please feel free to write back to us anytime as we are available 24/7 to take care of every partner’s concerns.

Appreciate your understanding and thanks for being our valuable partner. Have an amazing day!

Which was quite a surprising statement for us. My reply therefore was (22 dec 2017):

I’m chairman of the Blender Foundation. We choose to use a 100% ad-free channel for our work, this to emphasize our public benefit and non-profit goals.

According to your answer we are being forced to enable advertising now.
I would like to know where this new Youtube policy has been published and made official.

After that, we got a reply like this every other month:

Please allow me some time to work with specialists on your issue. I’ll investigate further and will reach back to you with an update at the earliest possible.

Appreciate your patience and understanding in the interim.

Just last week, June 12, I mailed them again to ask for the status of this issue. The reply was:

I completely understand your predicament. Apologies for the unusual delay in hearing back from the Policy team. I’ve escalated this issue for further investigation and assistance. Kindly bear with us while we get this fixed.

Appreciate your understanding in this regard.

And then on June 15th the entire channel went black.

To us it is still unclear what is going on. It could be related to Youtube’s new “subscription” system. It could also be just a human error or a bug; our refusal to monetize videos on a massively popular channel isn’t common.

However, it remains a fair and relevant question to Google: do you allow ad-free channels without monetization? Stay tuned!

Monday 18 June 2018, by Francesco Siddi

For the past few days, all Blender videos on the OFFICIAL BLENDER CHANNEL have been blocked worldwide without explanation. We are working with YouTube to resolve the issue, but the support has been less than stellar. In the meantime you can find most of the videos on cloud.blender.org.

June 17, 2018

Blender at Annecy 2018

The Blender team is back from The Annecy International Animation Film Festival 2018 and MIFA, the industry marketplace which takes place during the festival. Annecy is a major international event for over 11,000 animation industry professionals, and having a Blender presence there was an extremely rewarding experience.


The MIFA 2018

The entrance of the MIFA, at the Hotel Imperial

Hundreds of people stopped by the Blender booth and were amazed by the upcoming Blender 2.8 feature videos, the Blender Open Movie reels, the Hero showcase and the live set of Grease Pencil demos prepared by Daniel M. Lara. Breaking down production files step-by-step was a crowd pleaser and got an impressive number of compliments, good feedback and follow-up requests.

Demo setup

Daniel M. Lara showcasing Grease Pencil

While two years ago our presence was more focused on the upcoming Agent 327 film project, having a clearer focus on software led to active outreach from studios currently using Blender in their production pipeline. In France alone, there are dozens of small and medium studios using Blender to produce film and TV series. These companies are often looking for artists and professional trainers, and have expressed positive remarks about the Blender Network and the BFCT initiatives.

Café des Arts

Café des Arts is where the festival happens at night

Overall, this experience confirmed the growing appreciation and adoption of Blender as an integral part of the production pipeline. This is made possible thanks to the Blender development team and the Blender community, which is often seen as one of the main reasons for switching tools.

A shout out to Pablo, Hjalti and Daniel for the great work at the booth. Keeping the show running 10 hours a day for 4 consecutive days was no joke :)

Until next year!
Francesco

The Annecy 2018 Team

June 14, 2018

security things in Linux v4.17

Previously: v4.16.

Linux kernel v4.17 was released last week, and here are some of the security things I think are interesting:

Jailhouse hypervisor

Jan Kiszka landed Jailhouse hypervisor support, which uses static partitioning (i.e. no resource over-committing), where the root “cell” spawns new jails by shrinking its own CPU/memory/etc resources and hands them over to the new jail. There’s a nice write-up of the hypervisor on LWN from 2014.

Sparc ADI

Khalid Aziz landed the userspace support for Sparc Application Data Integrity (ADI or SSM: Silicon Secured Memory), which is the hardware memory coloring (tagging) feature in Sparc M7. I’d love to see this extended into the kernel itself, as it would kill linear overflows between allocations, since the base pointer being used is tagged to belong to only a certain allocation (sized to a multiple of cache lines). Any attempt to increment beyond, into memory with a different tag, raises an exception. Enrico Perla has some great write-ups on using ADI in allocators and a comparison of ADI to Intel’s MPX.

new kernel stacks cleared on fork

It was possible that old memory contents would live in a new process’s kernel stack. While normally not visible, “uninitialized” memory read flaws or read overflows could expose these contents (especially stuff “deeper” in the stack that may never get overwritten for the life of the process). To avoid this, I made sure that new stacks were always zeroed. Oddly, this “priming” of the cache appeared to actually improve performance, though it was mostly in the noise.

MAP_FIXED_NOREPLACE

As part of further defense in depth against attacks like Stack Clash, Michal Hocko created MAP_FIXED_NOREPLACE. The regular MAP_FIXED has a subtle behavior not normally noticed (but used by some, so it couldn’t just be fixed): it will replace any overlapping portion of a pre-existing mapping. This means the kernel would silently overlap the stack into mmap or text regions, since MAP_FIXED was being used to build a new process’s memory layout. Instead, MAP_FIXED_NOREPLACE has all the features of MAP_FIXED without the replacement behavior: it will fail if a pre-existing mapping overlaps with the newly requested one. The ELF loader has been switched to use MAP_FIXED_NOREPLACE, and it’s available to userspace too, for similar use-cases.
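
Since the flag is available to userspace, the difference from plain MAP_FIXED is easy to demonstrate with a small program. Here's a rough sketch using Python's ctypes; the flag value (0x100000) and the EEXIST failure are assumptions based on the x86-64 Linux headers and the mmap(2) man page, so treat it as illustration rather than reference:

import ctypes, ctypes.util, os

libc = ctypes.CDLL(ctypes.util.find_library("c"), use_errno=True)
libc.mmap.restype = ctypes.c_void_p
libc.mmap.argtypes = [ctypes.c_void_p, ctypes.c_size_t, ctypes.c_int,
                      ctypes.c_int, ctypes.c_int, ctypes.c_long]

PROT_READ, PROT_WRITE = 0x1, 0x2
MAP_PRIVATE, MAP_ANONYMOUS = 0x02, 0x20        # x86-64 Linux values
MAP_FIXED_NOREPLACE = 0x100000                 # new in v4.17
MAP_FAILED = ctypes.c_void_p(-1).value

page = os.sysconf("SC_PAGE_SIZE")

# First, an ordinary anonymous mapping at a kernel-chosen address.
addr = libc.mmap(None, page, PROT_READ | PROT_WRITE,
                 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0)

# Now ask for the same address again. MAP_FIXED would silently replace the
# existing mapping; MAP_FIXED_NOREPLACE refuses and fails with EEXIST.
again = libc.mmap(addr, page, PROT_READ | PROT_WRITE,
                  MAP_PRIVATE | MAP_ANONYMOUS | MAP_FIXED_NOREPLACE, -1, 0)
if again == MAP_FAILED:
    print("refused to clobber existing mapping:",
          os.strerror(ctypes.get_errno()))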

pin stack limit during exec

I used a big hammer and pinned the RLIMIT_STACK values during exec. There were multiple methods to change the limit (through at least setrlimit() and prlimit()), and there were multiple places the limit got used to make decisions, so it seemed best to just pin the values for the life of the exec so no games could get played with them. Too much assumed the value wasn’t changing, so better to make that assumption actually true. Hopefully this is the last of the fixes for these bad interactions between stack limits and memory layouts during exec (which have all been defensive measures against flaws like Stack Clash).

Variable Length Array removals start

Following some discussion over Alexander Popov’s ongoing port of the stackleak GCC plugin, Linus declared that Variable Length Arrays (VLAs) should be eliminated from the kernel entirely. This is great because it kills several stack exhaustion attacks, including weird stuff like stepping over guard pages with giant stack allocations. However, with several hundred uses in the kernel, this wasn’t going to be an easy job. Thankfully, a whole bunch of people stepped up to help out: Gustavo A. R. Silva, Himanshu Jha, Joern Engel, Kyle Spiers, Laura Abbott, Lorenzo Bianconi, Nikolay Borisov, Salvatore Mesoraca, Stephen Kitt, Takashi Iwai, Tobin C. Harding, and Tycho Andersen. With Linus Torvalds and Martin Uecker, I also helped rewrite the max() macro to eliminate false positives seen by the -Wvla compiler option. Overall, about 1/3rd of the VLA instances were solved for v4.17, with many more coming for v4.18. I’m hoping we’ll have entirely eliminated VLAs by the time v4.19 ships.

That’s it for now! Please let me know if you think I missed anything. Stay tuned for v4.18; the merge window is open. :)

© 2018, Kees Cook. This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 License.

The Font BoF at Libre Graphics Meeting 2018

(Call it What’s New in Open Fonts, № 003 if you must. At least this one didn’t take as long as the LibrePlanet one to get pushed out into the street.)

Libre Graphics Meeting (LGM) is the annual shindig for all FOSS creative-graphics-and-design projects and users. This year’s incarnation was held in Sevilla, Spain at the end of April. I barely got there in time, having been en route from Typo Labs in Berlin a week earlier.

I put a “typography BoF” onto the schedule, and a dozen-ish people turned up. BoFs vary wildly across instances; this one featured a lot of developers…

Help

…which is good. I got some feedback on a question I posed in my talk: where is the “right” place for a font package to integrate into a Linux/FOSS help system? I could see that working within an application (e.g., GIMP ingests any font-help files and makes them available via the GIMP help tool) or through the standard desktop “help.”

Currently, font packages don’t hook into the help system. At best, some file with information might get stuffed somewhere in /usr/share/doc/, but you’ll never get told about that. But they certainly could hook in properly, to provide quick info on the languages, styles, and features they support. Or, for newer formats like variable fonts or multi-layer color fonts, to provide info on the axes, layers, and color palettes. I doubt anyone could use Bungee without consulting its documentation first.

But a big question here (particularly for feature support) is what’s the distinction between this “help” documentation and the UI demo strings that Matthias is working on showing in the GTK+ font explorer (as discussed in the LibrePlanet post).

The short answer is that “demo strings” show you “what you will get” (and do so at a glance); help documentation tells you the “why this is implemented” … and, by extension, makes it possible to search for a use case. For example: SIL Awami implements a set of features that do Persian stylistic swashes etc.

You could show someone the affected character strings, but knowing the intent behind them is key, and that’s a help issue. E.g., you might open the help box and search for “Persian fonts” and you’d find the Awami help entry related to it. That same query wouldn’t work by just searching for the Arabic letters that happen to get the swash-variant treatment.

Anyway, GIMP already has hooks in place to register additions to its built-in help system; that’s how GIMP plug-ins get supported by the help framework. So, in theory, the same hooks could be used for a font package to let GIMP know that help docs are available. Inkscape doesn’t currently have this, but Tavmjong Bah noted that it wouldn’t be difficult to add. Scribus does not seem to have an entry point.

In any case, after just a couple of minutes it seemed clear that putting such help documentation into every application is the wrong approach (in addition to not currently being possible). It ought to be system wide. For desktop users, that likely means hooking into the GNOME or KDE help frameworks.

Styles and other human-centric metadata

The group (or is it birds?) also talked about font management. One of the goals of the “low hanging fruit” improvements to font packaging that I’ve been cheerleading for the past few months is that better package-level features will make it possible for *many* application types and/or utilities to query, explore, and help the user make font decisions. So you don’t get just the full-blown FontMatrix-style manager, but you might also get richer font exploration built into translation tools, and you might get a nice “help me find a replacement for the missing font in this document” extension for LibreOffice, etc. Maybe you could even get font identification.

Someone brought up the popular idea that users want to be able to search their font library via stylistic attributes. This is definitely true; the tricky part is that, historically, it’s proven to be virtually impossible to devise a stylistic-classification scheme that (a) works for more than just one narrow slice of the world’s typefaces and (b) works for more than just a small subset of users. PANOSE is one such system that hasn’t taken over the world yet; another guy on Medium threw Machine Learning at the Google Fonts library and came up with his own classification set … although it’s not clear that he intends to make that data set accessible to anybody else.

And, even with a schema, you’d still have to go classify and tag all of the actual fonts. It’d be a lot of additional metadata to track; one person suggested that Fontbakery could include a check for it — someone else commented that you’d really want to track whether or not a human being had given those tags a QA/sanity check.

Next, you’d have to figure out where to store that stylistic metadata. Someone asked whether or not fontconfig ought to offer an interface to request fonts by style. Putting that a little more abstractly, let’s say that you have stylistic tags for all of your fonts (however those tags are generated); should they get stored in the font binary? In fonts.conf? Interestingly enough, a lot of old FontMatrix users still have their FontMatrix tags on their system, so they say, and supporting them is kind of a de-facto “tagging solution” for new development projects. Style tags (however they’re structured) are just a subset of user-defined tags anyway: people are certainly going to want to amend, adjust, overwrite, and edit tags until they’re useful at a personal level.

Of course, the question of where to store font metadata is a recurring issue. It’s also come up in Matthias Clasen’s work on implementing smart-font–feature previews for GTK+ and in the AppStream project <font>-object discussion. Storing everything in the binary is nice ‘n’ compact, but it means that info is only accessible where the binary is — not, for example, when you’re scrolling through a package manager trying to find a font you want to install. Storing everything in a sidecar file helps that problem, but it means you’ve got two files to lose instead of one. And, of course, we all die a little on the inside whenever we see “XML”.

Dave Crossland pointed out one really interesting tidbit here: the OpenType STAT table, which was ostensibly created in conjunction with the variable-fonts features, can be used in every single-master, non-variable font, too. There, it can store valuable metadata about where each individual font file sits on the standard axes of variation within a font family (like the weight, width, and optical size in relation to other font files). I wrote a brief article about STAT in 2016 for LWN if you want more detail. It would be a good thing to add to existing open fonts, certainly. Subsequently, Marc Foley at Google has started adding STAT tables to Google Fonts families; it started off as a manual process, but the hope is that getting the workflow worked out will lead to proper tooling down the line.
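
If you want to see what a STAT table actually carries, it's easy to poke at one with fontTools. A minimal sketch (the file name is hypothetical, and it assumes the font in question actually has a STAT table):

from fontTools.ttLib import TTFont

font = TTFont("SomeFamily-SemiBoldCondensed.ttf")   # hypothetical file
stat = font["STAT"].table
name = font["name"]
for axis in stat.DesignAxisRecord.Axis:
    print("axis:", axis.AxisTag, name.getDebugName(axis.AxisNameID))
for value in (stat.AxisValueArray.AxisValue if stat.AxisValueArray else []):
    print("value record:", name.getDebugName(value.ValueNameID))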

Last but not least, the Inkscape team indicated that they’re interested in expanding their font chooser with an above-the-family–level selector; something that lets the user narrow down the fonts to choose from in high-level categories like “text”, “handwriting”, “web”, and so on. That, too, requires tracking some stylistic information for each font.

The unanswered question remains: whose job should it be to fix or extend this sort of metadata in existing open fonts, particularly those that haven’t been updated in a while? Does this work rise to the level of taking over maintainership of the upstream source? That could be controversial, or at least complicated.

Specimens, collections, and more

We also revisited the question of specimen creation. We discussed several existing tools for creating specimens; when considering the problem of specimen creation for the larger libre font universe, however, the problem is more one of time x peoplepower. Some ideas were floated, including volunteer sprints, drives conducted in conjunction with Open Source Design, and design teachers encouraging/assigning/forcing(?)/cajoling students to work on specimens as a project. It would also certainly help matters to have several specimen templates of different varieties to help interested contributors get started.

Other topics included curated collections of open fonts. This is one way to get over the information-overload problem, and it has come up before; this time it was Brendan Howell who broached the subject. Similar ideas do seem to work for Adobe Typekit. Curated collections could be done at the Open Font Library, but it would require reworking the site. That might be possible, but it’s a bit unclear at the moment how interested Fabricatorz (who maintains the site) would be in revisiting the project in such a disruptive way — much less devoting a big chunk of “company time” to it. More discussion needed.

I also raised the question of reverse-engineering the binary, proprietary VFB font-source format (from older versions of FontLab). Quite a few OFL-licensed fonts have source available in VFB format only. Even if the design of the outlines is never touched again, this makes them hard to debug or rebuild. It’s worse for binary-only fonts, of course, but extending a VFB font is not particularly doable in free software.

The VFB format is deprecated, now replaced by VFC (which has not been widely adopted by OFL projects, so far). FontLab has released a freeware CLI utility that converts VFB smoothly to UFO, an open format. While VFB could perhaps be reverse-engineered by (say) the Document Liberation Project, that might be unnecessary work: batch converting all OFL-VFB fonts once and publishing the UFO source may suffice. It would be a static resource, but could be helpful to future contributors.
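
A one-time batch conversion like that would only be a small script around FontLab's converter. A rough sketch, assuming the converter is installed as a vfb2ufo command that takes a .vfb path and writes the .ufo next to it (check the tool's actual invocation before relying on this; the directory name here is made up):

import pathlib, subprocess

for vfb in sorted(pathlib.Path("ofl-vfb-sources").rglob("*.vfb")):
    print("converting", vfb)
    subprocess.run(["vfb2ufo", str(vfb)], check=True)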

It probably goes without saying, but, just in case, the BoF attendees did find more than a few potential uses for shoehorning blockchain technology into fonts. Track versioning of font releases! Track the exact permission set of each font license sold! Solve all your problems! None of these may be practical, but don’t let that stand in the way of raking in heaps of VC money before the Bitcoin bubble crashes.

That about wraps it up. There is an Etherpad with notes from the session, but I’m not quite sure I’ve finished cleaning it up yet (for formatting and to reflect what was actually said versus what may have been added by later edits). I’ll append that once it’s done. For my part, I’m looking forward to GUADEC in a few weeks, where there will inevitably be even more excitement to report back on. Stay tuned.

Firefox Quantum: Fixing Ctrl W (or other key bindings)

When I first tried switching to Firefox Quantum, the regression that bothered me most was Ctrl-W, which I use everywhere as word erase (try it -- you'll get addicted, like I am). Ctrl-W deletes words in the URL bar; but if you type Ctrl-W in a text field on a website, like when editing a bug report or a "Contact" form, it closes the current tab, losing everything you've just typed. It's always worked in Firefox in the past; this is a new problem with Quantum, and after losing a page of typing for about the 20th time, I was ready to give up and find another browser.

A web search found plenty of people online asking about key bindings like Ctrl-W, but apparently since the deprecation of XUL and XBL extensions, Quantum no longer offers any way to change or even just to disable its built-in key bindings.

I wasted a few days chasing a solution inspired by this clever way of remapping keys only for certain windows using xdotool getactivewindow; I even went so far as to write a Python script that intercepts keystrokes, determines the application for the window where the key was typed, and remaps it if the application and keystroke match a list of keys to be remapped. So if Ctrl-W is typed in a Firefox window, Firefox will instead receive Alt-Backspace. (Why not just type Alt-Backspace, you ask? Because it's much harder to type, can't be typed from the home position, and isn't in the same place on every keyboard the way W is.)
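
The window-matching half of that idea is simple enough; here's a rough sketch (this is not the actual pykey.py, and it leaves out the hard part, which is grabbing the keystrokes in the first place; it assumes xdotool is installed):

import subprocess

def active_window_name():
    win = subprocess.check_output(["xdotool", "getactivewindow"]).strip()
    return subprocess.check_output(["xdotool", "getwindowname", win]).decode()

def remap_ctrl_w():
    # If the focused window is Firefox, send Alt-Backspace instead of
    # passing the Ctrl-W through.
    if "Firefox" in active_window_name():
        subprocess.run(["xdotool", "key", "--clearmodifiers", "alt+BackSpace"])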

But sadly, that approach didn't work because it turned out my window manager, Openbox, acts on programmatically-generated key bindings as well as ones that are actually typed. If I type a Ctrl-W and it's in Firefox, that's fine: my Python program sees it, generates an Alt-Backspace and everything is groovy. But if I type a Ctrl-W in any other application, the program doesn't need to change it, so it generates a Ctrl-W, which Openbox sees and calls the program again, and you have an infinite loop. I couldn't find any way around this. And admittedly, it's a horrible hack having a program intercept every keystroke. So I needed to fix Firefox somehow.

But after spending days searching for a way to customize Firefox's keys, to no avail, I came to the conclusion that the only way was to modify the source code and rebuild Firefox from source.

Ironically, one of the snags I hit in building it was that I'd named my key remapper "pykey.py", and it was still in my PYTHONPATH; it turns out the Firefox build also has a module called pykey.py and mine was interfering. But eventually I got the build working.

Firefox Key Bindings

I was lucky: building was the only hard part, because a very helpful person on Mozilla's #introduction IRC channel pointed me toward the solution, saving me hours of debugging. Edit browser/base/content/browser-sets.inc around line 240 and remove reserved="true" from key_closeWindow. It turned out I needed to remove reserved="true" from the adjacent key_close line as well.

Another file that's related, but more general, is nsXBLWindowKeyHandler.cpp around line 832; but I didn't need that since the simpler fix worked.

Transferring omni.ja -- or Not

In theory, since browser-sets.inc isn't compiled C++, it seems like you should be able to make this fix without building the whole source tree. In an actual Firefox release, browser-sets.inc is part of omni.ja, and indeed if you unpack omni.ja you'll see the key_closeWindow and key_close lines. So it seems like you ought to be able to regenerate omni.ja without rebuilding all the C++ code.

Unfortunately, in practice omni.ja is more complicated than that. Although you can unzip it and edit the files, if you zip it back up, Firefox doesn't see it as valid. I guess that's why they renamed it .ja: long ago it used to be omni.jar and, like other .jar files, was a standard zip archive that you could edit. But the new .ja file isn't documented anywhere I could find, and all the web discussions I found on how to re-create it amounted to "it's complicated, you probably don't want to try".

And you'd think that I could take the omni.ja file from my desktop machine, where I built Firefox, and copy it to my laptop, replacing the omni.ja file from a released copy of Firefox. But no -- somehow, it isn't seen, and the old key bindings are still active. They must be duplicated somewhere else, and I haven't figured out where.

It sure would be nice to have a way to transfer an omni.ja. Building Firefox on my laptop takes nearly a full day (though hopefully rebuilding after pulling minor security updates won't be quite so bad). If anyone knows of a way, please let me know!

June 13, 2018

Krita 4.0.4 released!

Today the Krita team releases Krita 4.0.4, a bug fix release of Krita 4.0.0. This is the last bugfix release for Krita 4.0.

Here is the list of bug fixes in Krita 4.0.4:

  • OpenColorIO now works on macOS
  • Fix artefacts when painting with a pixel brush on a transparency mask (BUG:394438)
  • Fix a race condition when using generator layers
  • Fix a crash when editing a transform mask (BUG:395224)
  • Add preset memory to the Ten Brushes Script, to make switching back and forth between brush presets smoother.
  • Improve the performance of the stroke layer style (BUG:361130, BUG:390985)
  • Do not allow nesting of .kra files: using a .kra file with embedded file layers as a file layer would break on loading.
  • Keep the alpha channel when applying the threshold filter (BUG:394235)
  • Do not use the name of the bundle file as a tag automatically (BUG:394345)
  • Fix selecting colors when using the python palette docker script (BUG:394705)
  • Restore the last used colors on starting Krita, not when creating a new view (BUG:394816)
  • Allow creating a layer group if the currently selected node is a mask (BUG:394832)
  • Show the correct opacity in the segment gradient editor (BUG:394887)
  • Remove the obsolete shortcuts for the old text and artistic text tool (BUG:393508)
  • Allow setting the multibrush angle in fractions
  • Improve performance of the OpenGL canvas, especially on macOS
  • Fix painting of pass-through group layers in isolated mode (BUG:394437)
  • Improve performance of loading OpenEXR files (patch by Jeroen Hoolmans)
  • Autosaving will now happen even if Krita is kept very busy
  • Improve loading of the default language
  • Fix color picking when double-clicking (BUG:394396)
  • Fix inconsistent frame numbering when calling FFMpeg (BUG:389045)
  • Fix channel swizzling problem on macOS, where in 16 and 32 bits floating point channel depths red and blue would be swapped
  • Fix accepting touch events with recent Qt versions
  • Fix integration with the Breeze theme: Krita no longer tries to create widgets in threads (BUG:392190)
  • Fix the batch mode flag when loading images from Python
  • Load the system color profiles on Windows and macOS.
  • Fix a crash on macOS (BUG:394068)

Download

Windows

Note for Windows users: if you encounter crashes, please follow these instructions to use the debug symbols so we can figure out where Krita crashes.

Linux

(If, for some reason, Firefox thinks it needs to load this as text: to download, right-click on the link.)

When it is updated, you can also use the Krita Lime PPA to install Krita 4.0.4 on Ubuntu and derivatives. We are working on an updated snap.

OSX

Note: the touch docker, gmic-qt and python plugins are not available on OSX.

Source code

md5sum

For all downloads:

Key

The Linux appimage and the source tarball are signed. You can retrieve the public key over https here: 0x58b9596c722ea3bd.asc. The signatures are here (filenames ending in .sig).

Support Krita

Krita is a free and open source project. Please consider supporting the project with donations or by buying training videos or the artbook! With your support, we can keep the core team working on Krita full-time.

June 12, 2018

Fingerprint reader support, the second coming

Fingerprint readers are more and more common on Windows laptops, and hardware makers would really like to not have to make a separate SKU without the fingerprint reader just for Linux, if that fingerprint reader is unsupported there.

The original makers of those fingerprint readers just need to send patches to the libfprint Bugzilla, I hear you say, and the problem's solved!

But it turns out it’s pretty difficult to write those new drivers, and those patches, without insight into how the internals of libfprint work, and what all those internal, undocumented APIs mean.

Most of the drivers already present in libfprint are the results of reverse engineering, which means that none of them is a best-of-breed example of a driver, with all the unknown values and magic numbers.

Let's try to fix all this!

Step 1: fail faster

When you're writing a driver, the last thing you want is to have to wait for your compilation to fail. We ported libfprint to meson and shaved off a significant amount of time from a successful compilation. We also reduced the number of places where new drivers need to be declared to be added to the compilation.

Step 2: make it clearer

While doxygen is nice because it requires very little scaffolding to generate API documentation, the output is also not up to the level we expect. We ported the documentation to gtk-doc, which has a more readable page layout, easy support for cross-references, and gives us more control over how introductory paragraphs are laid out. See the before and after for yourselves.

Step 3: fail elsewhere

You created your patch locally, tested it out, and it's ready to go! But you don't know about git-bz, and you ended up attaching a patch file which you uploaded. Except you uploaded the wrong patch. Or the patch with the right name but from the wrong directory. Or you know git-bz but used the wrong commit id and uploaded another unrelated patch. This is all a bit too much.

We migrated our bugs and repository for both libfprint and fprintd to Freedesktop.org's GitLab. Merge Requests are automatically built, discussions are easier to follow!

Step 4: show it to me

Now that we have spiffy documentation, and unified bug tracking, patches and sources under one roof, we need to modernise our website. We used GitLab's CI/CD integration to generate our website from sources, including creating API documentation and listing supported devices from git master, to reduce the need to search the sources for that information.

Step 5: simplify

This process has started, but isn't finished yet. We're slowly splitting up the internal API between "internal internal" (what the library uses to work internally) and "internal for drivers" which we eventually hope to document to make writing drivers easier. This is partially done, but will need a lot more work in the coming months.

TL;DR: We migrated libfprint to meson, gtk-doc, GitLab, added a CI, and are writing docs for driver authors, everything's on the website!

What’s Worse?

  1. Getting your glasses smushed against your face.
  2. Having your earbuds ripped out of your ear when the cord catches on a doorknob.

June 11, 2018

Interview with Zoe Badini

Could you tell us something about yourself?

Hi, I’m Zoe and I live in Italy. Aside from painting I love cooking and spending my time outdoors, preferably snorkeling in the sea.

Do you paint professionally, as a hobby artist, or both?

I’m just now starting to take my first steps professionally after many years of painting as a hobby.

What genre(s) do you work in?

I love to imagine worlds and stories for my paintings, so most of what I’ve done is related to fantasy illustration and some concept art. I also do portraiture occasionally.

Whose work inspires you most — who are your role models as an artist?

There are way too many to mention, I try to learn as much as I can from other artists, so there are a lot of people I look up to. There are a few I often watch on Youtube, Twitch, or other platforms, I learned a lot from their videos: Clint Cearley, Marco Bucci, Suzanne Helmigh, David Revoy.

How and when did you get to try digital painting for the first time?

I was used to traditional drawing, then a few years ago I saw some beautiful digital illustrations and was curious to try my hand at it, there was this old graphic tablet at my parents’ house, so I tried it. What I made was atrocious, but it didn’t discourage me!

What makes you choose digital over traditional painting?

Working digitally I feel like a wizard, with a touch of my wand I have a huge array of tools at my disposal: different techniques, effects, trying out ideas and discarding them freely if they don’t work out. It’s also a big space saver!

How did you find out about Krita?

I had heard it mentioned a couple of times, then I posted a painting on reddit and a user recommended Krita to me, I was a bit uncertain because I was used to my setup, my brushes and so on… But the seed was planted, in the span of a few months I was using Krita exclusively and I never went back.

What was your first impression?

I was understandably a bit lost and watched a few tutorials, but I found the program intuitive and easy to navigate.

What do you love about Krita?

Its accessibility and completeness: there’s everything I may need to paint at a professional level and it’s easy to find and figure out. Krita also comes with a very nice selection of brushes right out of the box.

What do you think needs improvement in Krita? Is there anything that really annoys you?

Nothing really annoys me, as for improvements I wanted to say the text tool, but I know you’re working on it and it was already improved in 4.0.

What sets Krita apart from the other tools that you use?

As I said it’s professional and easy to use, I feel like it’s made for me. It’s also free, which is great for people just starting out.

If you had to pick one favourite of all your work done in Krita so far, what would it be, and why?

My favourite is always one of the latest I did, just because I get better over time. In this case it’s “Big Game Hunt”.

What techniques and brushes did you use in it?

Nothing particular in terms of technique, my brushes come from the Krita presets, my own experiments and a lot of bundles I gathered from the internet over time.

Where can people see more of your work?

Artstation: https://www.artstation.com/zoebadini
Twitter: https://twitter.com/ZoeBadini

Anything else you’d like to share?

I want to thank the Krita team for making great software and I encourage people to try it; you won’t be disappointed. If you use it and like it, consider donating to help fund the project!

June 09, 2018

Building Firefox for ALSA (non PulseAudio) Sound

I did the work to build my own Firefox primarily to fix a couple of serious regressions that couldn't be fixed any other way. I'll start with the one that's probably more common (at least, there are many people complaining about it in many different web forums): the fact that Firefox won't play sound on Linux machines that don't use PulseAudio.

There's a bug with a long discussion of the problem, Bug 1345661 - PulseAudio requirement breaks Firefox on ALSA-only systems (and the discussion in the bug links to another discussion of the Firefox/PulseAudio problem). Some comments in those discussions suggest that some near-future version of Firefox may restore ALSA sound for non-Pulse systems; but most of those comments are six months old, yet it's still not fixed in the version Mozilla is distributing now.

In theory, ALSA sound is easy to enable. Build options in Firefox are controlled through a file called mozconfig. Create that file at the top level of your build directory, then add to it:

ac_add_options --enable-alsa
ac_add_options --disable-pulseaudio

You can see other options with ./configure --help

Of course, like everything else in the computer world, there were complications. When I typed mach build, I got:

Assertion failed in _parse_loader_output:
Traceback (most recent call last):
  File "/home/akkana/outsrc/gecko-dev/python/mozbuild/mozbuild/mozconfig.py", line 260, in read_mozconfig
    parsed = self._parse_loader_output(output)
  File "/home/akkana/outsrc/gecko-dev/python/mozbuild/mozbuild/mozconfig.py", line 375, in _parse_loader_output
    assert not in_variable
AssertionError
Error loading mozconfig: /home/akkana/outsrc/gecko-dev/mozconfig

Evaluation of your mozconfig produced unexpected output.  This could be
triggered by a command inside your mozconfig failing or producing some warnings
or error messages. Please change your mozconfig to not error and/or to catch
errors in executed commands.

mozconfig output:

------BEGIN_ENV_BEFORE_SOURCE
... followed by a many-page dump of all my environment variables, twice.

It turned out that was coming from line 449 of python/mozbuild/mozbuild/mozconfig.py:

   # Lines with a quote not ending in a quote are multi-line.
    if has_quote and not value.endswith("'"):
        in_variable = name
        current.append(value)
        continue
    else:
        value = value[:-1] if has_quote else value

I'm guessing this was added because some Mozilla developer sets a multi-line environment variable that has a quote in it but doesn't end with a quote. Or something. Anyway, some fairly specific case. I, on the other hand, have a different specific case: a short environment variable that includes one or more single quotes, and the test for their specific case breaks my build.

(In case you're curious why I have quotes in an environment variable: The prompt-setting code in my .zshrc includes a variable called PRIMES. In a login shell, this is set to the empty string, but in subshells, I add ' for each level of shell under the login shell. So my regular prompt might be (hostname)-, but if I run a subshell to test something, the prompt will be (hostname')-, a subshell inside that will be (hostname'')-, and so on. It's a reminder that I'm still in a subshell and need to exit when I'm done testing. In theory, I could do that with SHLVL, but SHLVL doesn't care about login shells, so my normal shells inside X are all SHLVL=2 while shells on a console or from an ssh are SHLVL=1, so if I used SHLVL I'd have to have some special case code to deal with that.

Also, of course I could use a character other than a single-quote. But in the thirty or so years I've used this, Firefox is the first program that's ever had a problem with it. And apparently I'm not the first one to have a problem with this: bug 1455065 was apparently someone else with the same problem. Maybe that will show up in the release branch eventually.)

Anyway, disabling that line fixed the problem:

   # Lines with a quote not ending in a quote are multi-line.
    if False and has_quote and not value.endswith("'"):

And after that, mach build succeeded, I built a new Firefox, and lo and behold! I can play sound in YouTube videos and on Xeno-Canto again, without needing an additional browser.

June 06, 2018

Updating Wacom Firmware In Linux

I’ve been working with Wacom engineers for a few months now, adding support for the custom update protocol used in various tablet devices they build. The new wacomhid plugin will be included in the soon-to-be-released fwupd 1.0.8 and will allow you to safely update the bluetooth, touch and main firmware of devices that support the HID protocol. Wacom is planning a new device that will ship with LVFS support out-of-the-box.

My retail device now has a 0.05″ SWI debugging header installed…

In other news, we now build both flatpak and snap versions of the standalone fwupdtool tool that can be used to update all kinds of hardware on distributions that won’t (or can’t) easily update the system version of fwupd. This lets you easily, for example, install the Logitech unifying security updates when running older versions of RHEL using flatpak and update the Dell Thunderbolt controller on Ubuntu 16.04 using snapd. Neither bundle installs the daemon or fwupdmgr by design, and both require running as root (and outside the sandbox) for obvious reasons. I’ll upload the flatpak to flathub when fwupd and all the deps have had stable releases. Until then, my flatpak bundle is available here.

Working with the Wacom engineers has been a pleasure, and the hardware is designed really well. The next graphics tablet you buy can now be 100% supported in Linux. More announcements soon.

darktable 2.4.4 released

we’re proud to announce the fourth bugfix release for the 2.4 series of darktable, 2.4.4!

the github release is here: https://github.com/darktable-org/darktable/releases/tag/release-2.4.4.

as always, please don’t use the autogenerated tarball provided by github, but only our tar.xz. the checksums are:

$ sha256sum darktable-2.4.4.tar.xz
964320b8c9ffef680fa0407a6ca16ed5136ad1f449572876e262764e78acb04d darktable-2.4.4.tar.xz
$ sha256sum darktable-2.4.4.dmg
9324562c98a52346fa77314103a5874eb89bd576cdbc21fc19cb5d8dfaba307a darktable-2.4.4.dmg
$ sha256sum darktable-2.4.4-win64.exe
3763d681de4faa515049daf3dae62ee21812e8c6c206ea7a246a36c0341eca8c darktable-2.4.4-win64.exe
$ sha256sum darktable-2.4.4-win64.zip
5dba3423b0889c69f723e378564e084878b20baf3996c349bfc9736bed815067 darktable-2.4.4-win64.zip

when updating from the currently stable 2.2.x series, please bear in mind that your edits will be preserved during this process, but it will not be possible to downgrade from 2.4 to 2.2.x any more.

Important note: to make sure that darktable can keep on supporting the raw file format for your camera, please read this post on how/what raw samples you can contribute to ensure that we have the full raw sample set for your camera under CC0 license!

and the changelog as compared to 2.4.3 can be found below.

New Features

  • Added 50% zoom option in darkroom mode to the navigation dropdown
  • perspective correction: usability improvement – allow setting the radius when (de)selecting lines

Bugfixes

  • Fix selecting drives in the import dialog on Windows by bundling a patched glib
  • Add some space between checkbox and label in color picker
  • OpenCL: better readability of debug output on memory usage
  • Levels: catch an edge case where float != int
  • Fix the alignment in a tooltip in lens correction
  • Local contrast: Reset strength slider to 120% when double clicked
  • Drop unused clone masks when loading xmp files
  • Remove all sub masks when clearing cloning masks
  • darktable-cltest: do not print summary statistics on OpenCL usage
  • Perspective correction: take aspect parameter into account when judging on neutral settings
  • Haze removal: fix tiled processing
  • Fix install on Windows due to GraphicsMagick’s versioned filenames
  • PPM: Handle byte order when loading files
  • Fix #12165: Don’t try to show dialog without gui
  • Fix an out-of-bounds memory access
  • Tools: Fix typo in darktable-gen-noiseprofile that made it unusable
  • MacOS package: point gettext to correct localedir

Camera support, compared to 2.4.2

Warning: support for Nikon NEF ‘lossy after split’ raws was unintentionally broken due to the lack of such samples. Please see this post for more details. If you have affected raws, please contribute samples!

White Balance Presets

  • Sony ILCE-6500

Noise Profiles

  • Canon EOS 800D
  • Canon EOS Kiss X9i
  • Canon EOS Rebel T7i
  • Nikon COOLPIX B700
  • Nikon D5600
  • Olympus TG-5

Updated translations

  • German
  • Russian

June 04, 2018

Krita Sprint: long fight with jaggy lines on OSX

Two weeks ago we had a very nice and motivating sprint in Deventer, where many members of the Krita team gathered in one place and met each other. Boud has already written a good post about it, so I will try to avoid repetition and only tell the saga of my main goal for this sprint… fix OSX tablet problems!

Jagged lines caused by OSX input events compression: main symptom – they disappear as soon as one disables openGL

Tablet events compression

Since the very first release of Krita on OSX we’ve had a weird problem. When the user painted too quickly, the strokes became jagged, or as we call it “bent”. The problem happened because tablet events coming from the stylus were being lost somewhere on their way from the driver to Krita.

I should say that this problem has already happened in Krita multiple times on Linux and Windows. In most cases it was caused by a Qt update that introduced/activated “input events compression”: a special feature of Qt that drops extra tablet/mouse move events if the application becomes too slow to process them in time. This feature is necessary for normal non-painting applications, which do not expect so many tablet move events and would simply drown in them. The main symptom of such compression is that the “jagged lines” almost disappear when you disable the openGL canvas, and it was reported that this symptom was also present on OSX. I have already fixed such compression problems multiple times on other systems, so I was heading to the sprint in quite an optimistic mood…

But I became less optimistic when I arrived at the sprint and checked Qt’s sources: there was no event compression implemented for OSX! I was a bit shocked, but it was so. Tests proved that all events that reached Qt were successfully delivered to Krita. That was a bit unexpected. It looked like OSX itself dropped the events if the application’s event loop didn’t fetch them from the queue in time (I still think that is the case).
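
As an aside: when it is Qt itself doing the compression, an application can at least opt out of it with application attributes. A minimal PyQt5 sketch, for reference only, since it obviously can't help with events that are dropped before they ever reach Qt (the tablet attribute needs Qt 5.10 or later):

from PyQt5.QtCore import Qt
from PyQt5.QtWidgets import QApplication

# Disable Qt's own move-event compression for this application.
QApplication.setAttribute(Qt.AA_CompressHighFrequencyEvents, False)
QApplication.setAttribute(Qt.AA_CompressTabletEvents, False)
app = QApplication([])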

So we couldn’t do anything with this compression: it happened somewhere inside the operating system or driver. The only way out was to make the main Krita GUI thread more responsive, but there was another thing… openGL!

Prevent openGL from blocking Krita’s GUI thread

The main symptom of the compression problem, was related to the fact that sometimes openGL needs quite a bit of time to upload updated textures or do the rendering of the canvas. Very simplified, our rendering pipeline looked like this:

  1. Image is updated by brush
  2. GUI thread uploads the textures to GPU using glTexImage2D or glTexSubImage2D
  3. GUI thread calls QOpenGLWidget::update() to start new rendering cycle
  4. Qt calls QOpenGLWidget::paintGL(), where we generate mipmaps for the updated textures and render them on screen.

This pipeline worked equally well on all platforms except OSX. If we ran it on OSX, Krita would render the textures with corrupted mipmaps. A long time ago, when we first found this issue, we couldn’t understand why it happened and just added a dirty hack to work around the problem: we added glFinish() between uploading the textures and rendering. It solved the problem of corrupted mipmaps, but it made the rendering loop slower. We never understood why it was needed, but it somehow fixed the problem, and the OSX-specific pipeline started to look like this:

  1. Update the image
  2. Upload textures
  3. Call glFinish() /* VEEERY SLOOOW */
  4. Call QOpenGLWidget::update()
  5. Generate mipmaps and render the textures

We profiled Krita with apitrace and it became obvious that this glFinish() was really a problem. It blocks the event loop for long periods, making OSX drop input events. So we had to remove it, but why was it needed at all? OpenGL guarantees that all GPU calls are executed in chronological order, so why did they become reordered?

I spent almost two days at the sprint trying to find out why this glFinish() was needed, and two more days after returning home. I even thought that it was a bug in OSX’s implementation of the openGL protocol… but the thing was much simpler.

It turned out that we used two separate openGL contexts: one (Qt’s own) that uploaded the textures, and another one (QOpenGLWidget’s) that rendered the image. These contexts were shared, so we thought they were equivalent, but they are not. Yes, they share all the resources, but the way they process GPU command queues is undefined. On Linux and Windows they seem to share the command queue, so the commands were executed sequentially; but on OSX the queues were separate, so the commands became reordered and we got corrupted mipmaps…

In real life our pipeline looked like this:

  1. [openGL context 1] Update the image
  2. [openGL context 1] Upload textures
  3. [openGL context 2] Call QOpenGLWidget::update()
  4. [openGL context 2] Generate mipmaps and render the textures /* renders corrupted mipmaps, because uploading is not yet finished */

So we just had to move the uploading into the correct openGL context and the bug went away. The patch is now in master and is going to be released in Krita 4.0.4!
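
In Qt terms, the fix boils down to doing the upload with the widget's own context current. A very rough sketch of the idea (PyQt5 used just for brevity; this is not the actual Krita code, and tile.upload() stands in for the real glTexSubImage2D calls):

from PyQt5.QtWidgets import QOpenGLWidget

class CanvasWidget(QOpenGLWidget):
    def upload_updated_tiles(self, tiles):
        # Make the widget's context current -- the same context paintGL()
        # runs in -- so the uploads and the later mipmap generation go
        # through one command queue and cannot be reordered.
        self.makeCurrent()
        for tile in tiles:
            tile.upload()
        self.doneCurrent()
        self.update()   # schedule paintGL()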

The moral of the story

Always take care about what openGL context you use for accessing GPU. If you are not inside QOpenGLWidget::paintGL(), the context might be random!

PS:
Of course, this patch hasn’t fixed the tablet problem completely. Compression still happens somewhere deep inside OSX, but it became almost impossible to notice! 🙂

PPS:
The 2018 Krita sprint was sponsored by KDE e.V. (travel) and the Krita Foundation (accommodation and food).

PPPS:

Apple is deprecating OpenGL…

June 03, 2018

Building Firefox Quantum

With Firefox Quantum, Mozilla has moved away from letting users configure the browser the way they like. If I was going to switch to Quantum as my everyday browser, there were several problems I needed to fix first -- and they all now require modifying the source code, then building the whole browser from scratch.

I'll write separately about fixing the specific problems; but first I had to build Firefox. Although I was a Firefox developer way back in the day, the development environment has changed completely since then, so I might as well have been starting from scratch.

Setting up a Firefox build

I started with Mozilla's Linux build preparation page. There's a script called bootstrap.py that's amazingly comprehensive. It will check what's installed on your machine and install what's needed for a Firefox build -- and believe me, there are a lot of dependencies. Don't take the "quick" part of the "quick and easy" comment at the beginning of the script too seriously; I think on my machine, which already has a fairly full set of build tools, the script was downloading additional dependencies for 45 minutes or so. But it was indeed fairly easy: the script asks lots of questions about optional dependencies, and usually has suggestions, which I mostly followed.

Eventually bootstrap.py finishes loading the dependencies and gets to the point of wanting to check out the mozilla-unified repository, and that's where I got into trouble.

The script wants to check out the bleeding edge tip of Mozilla development. That's what you want if you're a developer checking in to the project. What I wanted was a copy of the currently released Firefox, but with a chance to make my own customizations. And that turns out to be difficult.

Getting a copy of the release tree

In theory, once you've checked out mozilla-unified with Mercurial, assuming you let bootstrap.py enable the recommended "firefoxtree" hg extension (which I did), you can switch to the release branch with:

hg pull release
hg up -c release

That didn't work for me: I tried it numerous times over the course of the day, and every time it died with "abort: HTTP request error (incomplete response; expected 5328 bytes got 2672)" after "adding manifests" when it started "adding file changes".

That sent me on a long quest aided by someone in Mozilla's helpful #introduction channel, where they help people with build issues. You might think it would be a common thing to want to build a copy of the released version of Firefox, and update it when a new release comes out. But apparently not; although #introduction is a friendly and helpful channel, everyone seemed baffled as to why hg up didn't work and what the possible alternatives might be.

Bundles and Artifacts

Eventually someone pointed me to the long list of "bundle" tarballs and advised me on how to get a release tarball there. I actually did that, and (skipping ahead briefly) it built and ran; but I later discovered that "bundles" aren't actually hg repositories and can't be updated. So once you've downloaded your 3 gigabytes or so of Mozilla stuff and built it, it's only good for a week or two until the next Mozilla release, when you're now hopelessly out of date and have to download a whole nuther bundle. Bundles definitely aren't the answer, and they aren't well supported or documented either. I recommend staying away from them.

I should also mention "artifact builds". These sound like a great idea: a lot of the source is already built for you, so you just build a little bit of it. However, artifact builds are only available for a few platforms and branches. If your OS differs in any way from whoever made the artifact build, or if you're requesting a branch, you're likely to waste a lot of time (like I did) downloading stuff only to get mysterious error messages. And even if it works, you won't be able to update it to keep on top of security fixes. Doesn't seem like a good bet.

GitHub to the rescue

Okay, so Mercurial's branch switching doesn't work. But it turns out you don't have to use Mercurial. There's a GitHub mirror for Firefox called gecko-dev, and after cloning it you can use normal git commands to switch branches:

git clone https://github.com/mozilla/gecko-dev.git
cd gecko-dev/
git checkout -t origin/release

You can verify you're on the right branch with git branch -vv, or if you want to list all branches and their remotes, git branch -avv.

Finally: a Firefox release branch that you can actually update!

Building Firefox

Once you have a source tree, you can use the all-powerful mach script to build the current release of Firefox:

./mach build

Of course that takes forever -- hours and hours, depending on how fast your machine is.

Running your New Firefox

The build, after it finishes, helpfully tells you to test it with ./mach run, which runs your newly-built firefox with a special profile, so it doesn't interfere with your running build. It also prints:

For more information on what to do now, see https://developer.mozilla.org/docs/Developer_Guide/So_You_Just_Built_Firefox

Great! Except there's no information there on how to package or run your build -- it's just a placeholder page asking people to contribute to the page.

It turns out that obj-whatever/dist/bin is the directory that corresponds to the tarball you download from Mozilla, and you can run /path/to/mozilla-release/obj-whatever/dist/bin/firefox from anywhere.

I tried filing a bug request to have a sub-page created explaining how to run a newly built Firefox, but it hasn't gotten any response. Maybe I'll just edit the "So You Just Built" page.

Incidentally, my gecko-dev build takes 16G of disk space, of which 9.3G is things it built, which are helpfully segregated in obj-x86_64-pc-linux-gnu.

June 02, 2018

Not just Krita at the 2018 Krita Sprint

At the 2018 Krita Sprint we had a special guest: Valeriy Malov, the maintainer of the Plasma Wacom tablet settings module. We’ve asked him to write about his experience at the sprint, so over to him!

Hello,

This is my Krita 2018 sprint report and a general report / pre-release announcement for the new version of wacomtablet.

Krita 2018 sprint

A couple of weeks ago I attended the Krita sprint; it was both a fun and productive event. Boudewijn, Timothee and Raghukamath tested the git version of wacomtablet on their computers. I also tested a handful of Wacom devices with the KCM, got some user input, and made a few fixes:

  • Calibration of Cintiq devices should be significantly improved. Previously the KCM didn't account for the Cintiq's unusual sensor coordinate system, which doesn't start at 0x0 because the device's sensors are larger than the built-in screen.
  • Support for devices that report a separate USB ID for the touch sensor has been added. It's not great (yet), because it still might require user intervention. If you have a device that is listed twice in the KCM, please run the Wacom Tablet Finder utility and mark the pen/touch parts of the device accordingly.
  • A few other mapping/calibration improvements: a lock-proportions button, the calibration screen now opens on the screen the tablet is mapped to, and there's now an option to manually fine-tune calibration values without getting into configuration files (this one was requested at the sprint, but was only added post-sprint).
  • A couple of minor bugs were fixed: touch now follows the overall tablet rotation, and hotkeys no longer repeat.
  • The general consensus was that the KCM's UI has some usability issues. I'm going to ask the Krita team for some help with that, but this is postponed until the 3.1.0 release. There's also an open question about having good default settings.
  • I’ve tested rudimentary LED support. It works granted that system has been configured to allow normal users access to Wacom LED API (probably through udev rules). No OLED support yet, but it uses same API. Basically, this is something to work on, should be easy to fix, but unfortunately not a priority because not many devices have LEDs/OLEDs.

I want to thank Boudewijn and Irina for hosting the sprint, and the Krita Foundation and KDE e.V. for sponsoring the event. Without them, those issues probably wouldn't have been fixed anytime soon.

On new release and testing

There’s also been a major change since 3.0.0: libwacom support. This should increase the number of devices we support out of the box. However, it’s only partial for now (no LED support yet, no multiple USB ID devices yet, libwacom-supplied button schemes don’t fit very well in current UI). It also requires libwacom 0.29 for devices with quirky buttons (you can still build with older libwacom, but it will be much less useful). So don’t throw away “Wacom Tablet Finder” yet.

Another small change is that logging has been ported to QLoggingCategory, which means that to enable debug logs you need to run kdebugsettings and look for "wacom".

With all these changes I'm going to make a 3.1.0 branch soon, which means that a release should happen this month. The most important bug fixes since 3.0.0 are hard to backport, so most likely there will be no 3.0.1, sorry. There will be no beta release either (Neon Dev Unstable, Arch and Gentoo already provide git builds for testing).

Known issues:

  • No Wayland support. Like, at all, no ETA. I’m ready to cooperate with someone who wants to implement tablet support in KWin wayland, but I can’t work on it myself anytime soon.
  • Automated rotation tracking most likely won't work on multi-screen setups. This is a Qt bug.
  • Calibration window can enter “drag” mode when touched/dragged by pen. This is quite annoying but it shouldn’t affect calibration results. This is a KDE feature (you can disable it in widget style settings) which I don’t know how to circumvent yet.

There’s also a handful of issues that are kept open for now, but after release of 3.1.0 I’ll eventually close some of them as I consider them fixed, unless anyone confirms otherwise:

  • Bug 334520 – Calibration fails on Tablet PC if external screen is connected and tablet mapped to internal screen (should be fixed in git/3.1.0)
  • Bug 336748 – Calibrate doesn’t work very good on cintiq13 (should be fixed in git/3.1.0)
  • Bug 322918 – Problem with calibrating wacom cintiq 13HD (should be fixed in git/3.1.0)
  • Bug 327952 – wacom module is not working for calibrating a Cintiq 21ux (should be fixed in git/3.1.0)
  • Bug 364043 – Intuos Pro cannot generate settings profiles, cannot configure buttons.
  • Bug 343666 – Device ‘Wacom Bamboo One M Pen’ is not in wacom_devicelist, not able to configure using tablet configuration (should be fixed in git/3.1.0)
  • Bug 339138 – Tablet screen mapping resets after KDE restart
  • Bug 325520 – Dell latitude xt2: touchscreen inverted when rotate to portrait (should be fixed in git/3.1.0)

Full list of open bugs/wishes here.

Do not hesitate to open a bug if you encounter an issue. If no new issues surface after the 3.1.0 release, usability improvements are probably the next priority for the project.

On packaging

This is a sort of very important topic which I can't do much about directly. Currently, Wacom support in KDE is an optional component, so if the Tablet section is missing from the Input Devices settings, you need to install it. The package usually goes by the name wacomtablet or kcm-wacomtablet. You can check whether your distribution packages it here and here. As far as I know, only KDE Neon, Arch (+derivatives) and Gentoo provide an up-to-date package for wacomtablet right now. Kubuntu has it too, but it's hidden in an experimental PPA. If you're using something else, your options are:

  • Building from source and installing as README.md instructs, which I really don’t want people doing for a bunch of reasons.
  • Asking someone else (preferably your distribution’s KDE team) to package it. This is usually done via distribution’s bugtracker or support forums.

Unfortunately, due to how the project is structured (it's just a bunch of plugins), I don't think I can build an AppImage for everyone to use. So the best way to get it into your distribution is to let the distribution maintainers know that it exists and that you need it to be packaged.

June 01, 2018

FreeCAD BIM development news - May 2018

Hi there, Time for a new update on BIM development in FreeCAD. Since last month saw the release of version 0.17, we now have our hands free to start working again on new features! There is quite a lot of new stuff this month, as usual now, spread between the Arch and BIM workbenches. For who...

May 31, 2018

Trying Firefox Variants: From Firefox ESR to Pale Moon to Quantum

For the last year or so the Firefox development team has been making life ever harder for users. First they broke all the old extensions that were based on XUL and XBL, so a lot of customizations no longer worked. Then they made PulseAudio mandatory on Linux (bug 1345661), so on systems like mine that don't run Pulse, there's no way to get sound in a web page. Forget YouTube or XenoCanto unless you keep another browser around for that purpose.

For those reasons I'd been avoiding the Firefox upgrade, sticking to Debian's firefox-esr ("Extended Support Release"). But when Debian updated firefox-esr to Firefox 56 ESR late last year, performance became unusable. Like half a minute between when you hit Page Down and when the page actually scrolls. It was time to switch browsers.

Pale Moon

I'd been hearing about the Firefox variant Pale Moon. It's a fork of an older Firefox, supposedly with an emphasis on openness and configurability.

I installed the Debian palemoon package. Performance was fine, similar to Firefox before the tragic firefox-56. It was missing a few things -- no built-in PDF viewer or Reader mode -- but I don't use Reader mode that often, and the built-in PDF viewer is an annoyance at least as often as it's a help. (In Firefox it's fairly random about when it kicks in anyway, so I'm never sure whether I'll get the PDF viewer or a Save-as prompt on any given PDF link).

For form and password autofill, for some reason Pale Moon doesn't fill out fields until you type the first letter. For instance, if I had an account with name "myname" and a stored password, when I loaded the page, both fields would be empty, as if there's nothing stored for that page. But typing an 'm' in the username field makes both username and password fields fill in. This isn't something Firefox ever did and I don't particularly like it, but it isn't a major problem.

Then there were some minor irritations, like the fact that profiles were stored in a folder named ~/.moonchild\ productions/ -- super long so it messed up directory listings, and with a space in the middle. PaleMoon was also very insistent about using new tabs for everything, including URLs launched from other programs -- there doesn't seem to be any way to get it to open URLs in the active tab.

I used it as my main browser for several months, and it basically worked. But the irritations started to get to me, and I started considering other options. The final kicker came when I saw Pale Moon bug 86, in which, as far as I can tell, someone working on PaleMoon for OpenBSD tries to use system libraries instead of PaleMoon's patched libraries, and is attacked for it in the bug. Reading the exchange made me want to avoid PaleMoon for two reasons. First, the rudeness: a toxic community that doesn't treat contributors well isn't likely to last long or to have the resources to keep on top of bug and security fixes. Second, the technical question: if Pale Moon's code is so quirky that it can't use standard system libraries and needs a bunch of custom-patched libraries, what does that say about how maintainable it will be in the long term?

Firefox Quantum

Much has been made in the technical press of the latest Firefox, called "Quantum", and its supposed speed. I was a bit dubious of that: it's easy to make your program seem fast after you force everybody into a few years of working with a program that's degraded its performance by an order of magnitude, like Firefox had. After firefox 56, anything would seem fast.

Still, maybe it would at least be fast enough to be usable. But I had trepidations too. What about all those extensions that don't work any more? What about sound not working? Could I live with that?

Debian has no current firefox package, so I downloaded the tarball from mozilla.org, unpacked it, made a new firefox profile and ran it.

Initial startup performance is terrible -- it takes forever to bring up the first window, and I often get a "Firefox seems slow to start up" message at the bottom of the screen, with a link to a page of a bunch of completely irrelevant hints. Still, I typically only start Firefox once a day. Once it's up, performance is a bit laggy but a lot better than firefox-esr 56 was, certainly usable.

I was able to find replacements for most of the really important extensions (the ones that control things like cookies and javascript). But sound, as predicted, didn't work. And there were several other, worse regressions from older Firefox versions.

As it turned out, the only way to make Firefox Quantum usable for me was to build a custom version where I could fix the regressions. To keep articles from being way too long, I'll write about all those issues separately: how to build Firefox, how to fix broken key bindings, and how to fix the PulseAudio problem.

Funding Krita: 2017

We decided at the last sprint to publish a yearly report on Krita’s income and outlay. We did that in 2015 and 2016. 2017 has been over some time now, so let’s discuss last year’s finances a bit. Last year was weird, of course, and that’s clearly visible from the results: we ended the year € 9.211,84 poorer than we started.

Because of the troubles, we had to split sales and commercial work off from the Krita Foundation. We did have a “company” ready — Boudewijn Rempt Software, which was created when our maintainer was trying to fund his work on Krita by doing totally unrelated freelance jobs, after KO GmbH went bust. That company is now handling sales of art books, DVDs and so on, as well as doing commercial support for Krita. So the “Sales” number is only for the first quarter of 2017.

We wouldn’t have survived 2017 as a project if two individuals hadn’t generously supported both Dmitry’s and Boudewijn’s work on Krita for several months. That is also not reflected in these numbers: that was handled directly, not through the Krita Foundation. And since Boudewijn, having been badly burned out on his consultancy gig in 2016, couldn’t manage combining working on Krita with a day job anymore, the remainder of his work on Krita in 2017 was sponsored by his savings, which is also not reflected in these numbers. If it were, the amount of money spent on development would be double what is in the books.

Loans were made to Boudewijn and have been repaid in 2018, when the income from the Windows Store started coming in. In 2017 we also produced the 2016 Art Book, which was rather expensive, and very expensive to send out. We still have a lot of copies left, too. “Donations” are donations we made to people who did things that were useful for Krita as a project, while the post “volunteers” represents money we give under Dutch tax law to people who do an inordinate amount of volunteer work for Krita.

In 2018, we are doing reasonably well. We have done some interesting paid projects for Intel, like optimizing Krita for many-core systems, creating window session management and a reference images tool. The Windows Store sales now basically fund Boudewijn to work full-time on Krita. That money goes to Boudewijn Rempt Software. We have an average of 2000 euros a month in donations to the Krita Foundation, which goes some way towards funding Dmitry’s full-time work. Currently, we have 87 subscribers to the Development Fund, and that number is growing. We plan to have a fund raiser again in September.

May 30, 2018

GIMP has moved to Gitlab

Along with the GEGL and babl libraries, GIMP has moved to a new collaborative programming infrastructure based on Gitlab and hosted by GNOME. The new URLs are:

  • GIMP: https://gitlab.gnome.org/GNOME/gimp
  • GEGL: https://gitlab.gnome.org/GNOME/gegl
  • babl: https://gitlab.gnome.org/GNOME/babl

On the end-user side, this mostly means an improved bug reporting experience. The submission is easier to fill in, and we provide two templates — one for bug reports and one for feature requests.

New issue form on Gitlab.

For developers, it means simplified contribution, as you can simply fork the GIMP repository, commit changes, and send a merge request. Please note that while we accept merge requests, we only do that in cases when patches can be fast-forwarded. That means you need to rebase your fork on the master branch (we’ll see if we can do merge requests for the ‘gimp-2-10’ branch).

In the meantime, work continues in both ‘master’ branch (GTK+3) porting and the ‘gimp-2-10’ branch. Most notably, Ell and Jehan Pagès have been improving the user-perceivable time it takes GIMP to load fonts by adding the asynchronous loading of resources on startup.

What it means is that font loading does not block startup anymore, but if you have a lot of fonts and you want to immediately use the Text tool, you might have to wait.

The API is general rather than fonts-specific and can be further used to add the background loading of brushes, palettes, patterns etc.

May 28, 2018

Interview with Răzvan Rădulescu

Could you tell us something about yourself?

Hi! My name’s Răzvan Rădulescu, I’m from Romania. I’ve had an interest in drawing since I was little. Unfortunately Romania is one of those countries that can crush creativity at a very early stage. At the time I was also interested in computers and started learning programming by myself, finally ended up doing physics in college and about three years ago I started playing with the idea of digital drawing and painting. The first two years have been painting on and off different things to get the hang of it, but about a year ago I decided to think about this path as more than just a hobby.

Do you paint professionally, as a hobby artist, or both?

I guess the answer is I’m in-between, I’m finally in a position to start working on arts projects, I will elaborate a bit more on it later.

What genre(s) do you work in?

I’m interested in everything and do a lot of experimentation, as you see in my artwork it’s pretty much “all over the place”.

Whose work inspires you most — who are your role models as an artist?

Since I started very late in my life with digital painting, I am not influenced by well known masters, but being attracted to concept art/freelance type of work I have my selection of modern artists which I look up to: Sparth, Piotr Jabłoński, Sergey Kolesov, Sinix, Simon Stålenhag, Viktor Bykov, to name a few. They all have very different painting styles and techniques so that explains why my own art is “all over the place”, I’ve been trying to understand their work process and integrate it in my own.

How and when did you get to try digital painting for the first time?

It must have been about 12 years ago, when I first played with Photoshop, but I didn’t pursue it at the time, it was a very short experiment, a couple of months, no more.

What makes you choose digital over traditional painting?

The usual suspects: cleanliness over messiness, power of layers, easy adjustments and modifications, FX and so on and so forth.

How did you find out about Krita?

I think it was by mistake, I was searching for GIMP related stuff and someone must have mentioned Krita in a forum or something like that.

What was your first impression?

At the time I tried it, I was coming from GIMP Paint Studio, not having touched Photoshop in years and I honestly believed that GIMP Paint Studio is as best as it can be. I was pleasantly surprised to find Krita, for painting I thought it was awesome. I was really impressed by the tools and the “hidden” gems, I’m the type of guy that tries everything, looks at every detail so I quickly found the G’MIC plugin, the assistant tools, clone layers etcetera, and I’m barely scratching the surface still. For what I’ve seen people really don’t know about these features or they don’t really use them, but I like touching every corner of it even if I don’t end up using the features, I still keep them in mind just in case. With the addition of Python scripting, the feature list for Krita as a FOSS alternative is simply amazing.

What do you love about Krita?

I like the fact that it’s a real alternative for “industry” standard software like Photoshop, Corel Paint and so on. I am a huge fan of FOSS philosophy and initiative so Krita is very important to me and I think to the world in general. Krita is quickly becoming the Blender of the 2D art world. People are slow to adopt these alternatives because of familiar workflows and for historical reasons, but people just starting off have no reason not to at least try them, and I believe with time they will become a core part of the professional artist’s toolkit. A year ago there was almost no mention of Krita, but with the release of v4 I think people are finally starting to take notice of it. I can see this in the LevelUp Facebook group (a very well known and important group for concept artists all over the world – https://www.facebook.com/groups/levelup.livestream) where I’m a moderator: now and then there’s the occasional mention of Krita, so I know for a fact that more people are watching the development with anticipation.

What do you think needs improvement in Krita? Is there anything that really annoys you?

Hm, it’s tough to say without experimenting with other software to have as a reference. If there’s one thing that annoys me it’s that there are some lingering bugs that have been around forever – I’m looking at you, “random transform crash” & “color sliders bug”. In terms of improvement, I think the tag/brush menu system needs an update, but I know it’s on the roadmap so it will be taken care of eventually. I would probably have a better response if I knew other painting software.

What sets Krita apart from the other tools that you use?

As far as digital painting, I don’t use other tools so there’s nothing much to say here. Generally speaking Krita is apart due to it being the only real FOSS alternative that can push a shift in mindset.

If you had to pick one favourite of all your work done in Krita so far, what would it be, and why?

It has to be this one, it’s my favourite because it’s the first successful one and it all started with a rock study, trying to understand how Piotr Jabłoński applies color and atmosphere in his work. It’s nothing special in terms of design but I like the overall feel of it.

What techniques and brushes did you use in it?

Ah memories… I made it after starting to work on my RZV Krita Brushpack (which can be downloaded for free at my website – https://razcore.art/). I didn’t like the fuzziness and quality of the patterns of the default set so that prompted me to work on my own set. Having said that, I’ve tried the latest nightly build of Krita v4 and the new default set is simply awesome. I know this stuff can be pretty subjective, but from my point of view the latest brush set, thanks to David Revoy, is now years ahead of the default set in v3. It forced me to rethink my own brush set for which I’ll be releasing an update after Krita v4 official release. I think it will be quite a nice addition, I’ve only kept the really different brushes that make a difference.

As for the technique, for this painting it was quite the natural approach, I used more of a traditional style technique, with few layers, I’ve only really used layers for overlaps so that I don’t have to worry about moving things, like the floating rocks in front of the central island. One final set of layers was used for placing the lights, like you see on the tree, and that’s pretty much it.

Where can people see more of your work?

Some of my paintings can be found at https://razvanc-r.deviantart.com, but I plan to move my more advanced and successful ones to https://razcore.artstation.com/ (which is empty for now) and my own site https://razcore.art/.

Other places people can find me at:

– YouTube: https://www.youtube.com/channel/UC6iuu2ajEK2GiMc2TFWkEqw – I started this channel just a couple of months ago so it’s still very unpopular and it’s quite experimental. I’m trying to get back to it and create mini-courses of sorts using free online resources I know of. There are quite a few good places to learn digital art for free but it takes a very long time to find them because they’re scattered all over the place. One of my objectives is to explore them together with the viewers so all suggestions and comments are welcome. I had to take a bit of a break from the LLT (Let’s Learn Together) playlist project but once things settle down I’ll try to pick it up again.

– Mastodon: https://mastodon.art/@razcore

– Twitter: https://twitter.com/razcore_art

– Instagram: https://www.instagram.com/razcore.art

Anything else you’d like to share?

I think this is a good place to elaborate on the project I mentioned at the beginning. I’m currently in a collaboration with the CEO of Boot.AI and we’re interested in exploring the idea for a future society through design illustrations and podcasts to engage people to think and share opinions on these subjects. It’s still at a very early stage of development, but one thing we’d like to work on is a sort of design and illustration course tackling these subjects with the help of FOSS projects such as Krita, Blender, Natron, to name a few. For these we’re preparing the project at https://boot.ai/seed – an illustrated experimental city society.

 

Krita Manual Updated

Kiki, the Krita mascot, is on the foreground, leaning on a book with the title "Krita" and the Krita logo. In her right hand, she's holding up a stylus, and she's facing the audience with a big grin. On the background a giant sphinx is sitting, hunched over to look at Kiki.

Over the past month or two, we’ve been really busy with the manual. Our manual always has been a source of pride for the project: it’s extensive, detailed, up-to-date and contains a lot of in-depth information not just on Krita, but on digital painting in general. The manual was implemented as a mediawiki site. A wiki makes it easy to create and edit pages, but it turned out to be hard to have translations available or to generate an off-line version like a PDF or epub version of the manual.

Enter Sphinx. We’ve ported the entire manual to Sphinx. You can find the source here:

https://cgit.kde.org/websites/docs-krita-org.git/

Every time someone pushes a commit to the repository, the manual gets rebuilt!

The manual itself is in its old location:

https://docs.krita.org

All old links to pages have changed though! But all information is available, and the table of contents and search box work perfectly fine.

And you can get the manual as a 1000+ page epub, too!

https://docs.krita.org/en/epub/KritaManual.epub

Huge thanks go out to Scott, Wolthera, Raghukamath, Timothee and especially Ben who have made this happen!

May 27, 2018

Faking Javascript <body onload=""> in Wordpress

After I'd switched from the Google Maps API to Leaflet to get my trail map working on my own website, the next step was to move it to the Nature Center's website to replace the broken Google Maps version.

PEEC, unfortunately for me, uses Wordpress (on the theory that this makes it easier for volunteers and non-technical staff to add content). I am not a Wordpress person at all; to me, systems like Wordpress and Drupal mostly add obstacles that mean standard HTML doesn't work right and has to be modified in nonstandard ways. This was a case in point.

The Leaflet library for displaying maps relies on calling an initialization function when the body of the page is loaded:

<body onLoad="javascript:init_trailmap();">

But in a Wordpress website, the <body> tag comes from Wordpress, so you can't edit it to add an onload.

A web search found lots of people wanting body onloads, and they had found all sorts of elaborate ruses to get around the problem. Most of the solutions seemed like they involved editing site-wide Wordpress files to add special case behavior depending on the page name. That sounded brittle, especially on a site where I'm not the Wordpress administrator: would I have to figure this out all over again every time Wordpress got upgraded?

But I found a trick in a Stack Overflow discussion, Adding onload to body, that included a tricky bit of code. There's a javascript function to add an onload to the tag; then that javascript is wrapped inside a PHP function. Then, if I'm reading it correctly, the PHP function registers itself with Wordpress so it will be called when the Wordpress footer is added; at that point, the PHP will run, which will add the javascript to the body tag in time for the onload event to call the Javascript. Yikes!

But it worked. Here's what I ended up with, in the PHP page that Wordpress was already calling for the page:

<?php
/* Wordpress doesn't give you access to the <body> tag to add a call
 * to init_trailmap(). This is a workaround to dynamically add that tag.
 */
function add_onload() {
?>

<script type="text/javascript">
  document.getElementsByTagName('body')[0].onload = init_trailmap;
</script>

<?php
}

add_action( 'wp_footer', 'add_onload' );
?>

Complicated, but it's a nice trick; and it let us switch to Leaflet and get the PEEC interactive Los Alamos area trail map working again.

May 24, 2018

Google Maps API No Longer Free?

A while ago I wrote an interactive trail map page for the PEEC nature center website. I wanted to use an open library, like OpenLayers or Leaflet, but at the time there were no good sources of satellite/aerial map tiles. The only one I found didn't work because it had a big blank area anywhere near LANL -- maybe because of the restricted airspace around the Lab. Anyway, I figured people would want a satellite option, so I used Google Maps instead despite its much more frustrating API.

This week we've been working on converting the website to https. Most things went surprisingly smoothly (though we had a lot more absolute URLs in our pages and databases than we'd realized). But when we got through, I discovered the trail map was broken. I'm still not clear why, but somehow the change from http to https made Google's API stop working. In trying to fix the problem, I discovered that Google's map API may soon cease to be free:

New pricing and product changes will go into effect starting June 11, 2018. For more information, check out the Guide for Existing Users.

That has a button for "Transition Tool" which, when you click it, won't tell you anything about the new pricing structure until you've already set up a billing account. Um ... no thanks, Google.

Googling for google maps api billing led to a page headed "Pricing that scales to fit your needs", which has an elaborate pricing structure listing a whole bunch of variants (I have no idea which of these I was using), of which the first $200/month is free. But since they insist on setting up a billing account, I'd probably have to give them a credit card number -- which one? My personal credit card, for a page that isn't even on my site? Does the nonprofit nature center even have a credit card? How many of these API calls is their site likely to get in a month, and what are the chances of going over the limit?

It all rubbed me the wrong way, especially given the context of "Your trail maps page that real people actually use has broken without warning, and will be held hostage until you give us a credit card number". This is what one gets for using a supposedly free (as in beer) library that's not Free open source software.

So I replaced Google with the excellent open source Leaflet library, which, as a bonus, has much better documentation than Google Maps. (It's not that Google's documentation is poorly written; it's that they keep changing their APIs, but there's no way to tell the dozen or so different APIs apart because they're all just called "Maps", so when you search for documentation you're almost guaranteed to get something that stopped working six years ago -- but the documentation is still there making it look like it's still valid.) And I was happy to discover that, in the time since I originally set up the trailmap page, some open providers of aerial/satellite map tiles have appeared. So we can use open source and have a satellite view.

Our trail map is back online with Leaflet, and with any luck, this time it will keep working. PEEC Los Alamos Area Trail Map.

May 22, 2018

Downloading all the Books in a Humble Bundle

Humble Bundle has a great bundle going right now (for another 15 minutes -- sorry, I meant to post this earlier) on books by Nebula-winning science fiction authors, including some old favorites of mine, and a few I'd been meaning to read.

I like Humble Bundle a lot, but one thing about them I don't like: they make it very difficult to download books, insisting that you click on every single link (and then do whatever "Download this link / yes, really download, to this directory" dance your browser insists on) rather than offering a sane option like a tarball or zip file. I guess part of their business model includes wanting their customers to get RSI. This has apparently been a problem for quite some time; a web search found lots of discussions of ways of automating the downloads, most of which apparently no longer work (none of the ones I tried did).

But a wizard friend on IRC quickly came up with a solution: some javascript you can paste into Firefox's console. She started with a quickie function that fetched all but a few of the files, but then modified it for better error checking and the ability to get different formats.

In Firefox, open the web console (Tools/Web Developer/Web Console) and paste this in the single-line javascript text field at the bottom.

// How many seconds to delay between downloads.
var delay = 1000;
// whether to use window.location or window.open
// window.open is more convenient, but may be popup-blocked
var window_open = false;
// the filetypes to look for, in order of preference.
// Make sure your browser won't try to preview these filetypes.
var filetypes = ['epub', 'mobi', 'pdf'];

var downloads = document.getElementsByClassName('download-buttons');
var i = 0;
var success = 0;

function download() {
  var children = downloads[i].children;
  var hrefs = {};
  for (var j = 0; j < children.length; j++) {
    var href = children[j].getElementsByClassName('a')[0].href;
    for (var k = 0; k < filetypes.length; k++) {
      if (href.includes(filetypes[k])) {
        hrefs[filetypes[k]] = href;
        console.log('Found ' + filetypes[k] + ': ' + href);
      }
    }
  }
  var href = undefined;
  for (var k = 0; k < filetypes.length; k++) {
    if (hrefs[filetypes[k]] != undefined) {
      href = hrefs[filetypes[k]];
      break;
    }
  }
  if (href != undefined) {
    console.log('Downloading: ' + href);
    if (window_open) {
      window.open(href);
    } else {
      window.location = href;
    }
    success++;
  }
  i++;
  console.log(i + '/' + downloads.length + '; ' + success + ' successes.');
  if (i < downloads.length) {
    window.setTimeout(download, delay);
  }
}
download();

If you have "Always ask where to save files" checked in Preferences/General, you'll still get a download dialog for each book (but at least you don't have to click; you can hit return for each one). Even if this is your preference, you might want to consider changing it before downloading a bunch of Humble books.

Anyway, pretty cool! Takes the sting out of bundles, especially big ones like this 42-book collection.

Back from Krita Sprint 2018

Hi,
Yesterday I came back from 3.5 days of the Krita Sprint in Deventer. Even if nowadays I have less time for Krita with my work on GCompris, I’m always following what is happening and keep helping where I can, especially on icons and a few other selected topics. And it’s always very nice to meet my old friends from the team, and the new ones! 🙂

A lot of things were discussed and done, and plans have been set for the next steps.
I was in the discussions for the next fundraiser, the Bugzilla policies, the next release, the resources management rewrite, and defining and ordering the priorities for the unfinished tasks.

I made a small start on the French translation for the new manual that is coming soon, mostly porting the existing translation of the FAQ and completing it. Also about the manual, I gave a little idea to Wolthera, who was looking at reducing the size of the PNG images... the result is almost half the size, around 60MB for 1000 pages, not bad 😉

I discussed with Valeriy, the new maintainer of kcm-wacomtablet, some small missing features I would like to have, and built the git version to test on Mageia 6. Great progress already, and more goodies to come!

As we decided to make layer names in default document templates translatable, we defined a list of translatable keywords to use for layer names in those default templates. The list was made by most artists present there (me, Deevad, Wolthera, Raghukamath and Bollebib).

Also I helped Raghukamath, who was fighting with his bluish laptop screen, to properly calibrate it on his Linux system, and he was very happy with the result.

Many thanks to Boudewijn and Irina who organised and hosted the sprint in their house, to the Krita Foundation for the accommodation and food, and to KDE e.V. for the travel support that made it possible to gather contributors from many different countries.

You can find more info about this sprint on the Krita website:

Krita 2018 Sprint Report

Krita 2018 Sprint Report

This weekend, Krita developers and artists from all around the world came to the sleepy provincial town of Deventer to buy cheese — er, I mean, to discuss all things Krita related and do some good, hard work! After all, the best cheese shop in the Netherlands is located in Deventer. As are the Krita Foundation headquarters! We started on Thursday, and today the last people are leaving.

Image by David Revoy

Events like these are very important: bringing people together, not just for serious discussions and hacking, but for lunch and dinner and rambling walks makes interaction much easier when we’ve gone back to our IRC channel, #krita. We didn’t have a big sprint in 2017, the last big sprint was in 2016.

So… What did we do? We first had a long meeting where we discussed the following topics:

  • 2018 Fund Raiser. We currently receive about €2000 a month in donations and have about eighty development subscribers. This is pretty awesome, and goes a long way towards funding Dmitry’s work. But we still want to go back to having a yearly fund raiser! We aim for September. Fund raisers are always a fun and energizing way to get together with our community. However, Kickstarter is out: it’s a bit of a tired formula. Instead we want to figure out how to make this more of a festival or a celebration. This time the fund raiser won’t have feature development as a target, because…
  • This year’s focus: zarro bugs. That’s what bugzilla used to tell you if your search didn’t find a single bug. Over the past couple of years we’ve implemented a lot of features, ported Krita to Qt5 and in general produced astonishing amounts of code. But not everything is done, and we’ve got way too many open bug reports, way too many failing unittests, way too many embarrassing hits in pvs, way too many features that aren’t completely done yet — so our goal for this year is to work on that.
  • Unfinished business: We identified a number of places where we have unfinished business that we need to get back to. We asked the artists present to rank those topics, and this is the result:
    • Boudewijn will work on:
      • Fix resource management (https://phabricator.kde.org/T379).
      • Shortcut and canvas input unification and related bugs
      • Improved G’Mic integration
    • Dmitry will work on:
      • Masks and selections
      • Improving the text layout engine, for OpenType support, vertical text, more SVG2 text features.
      • SVG leftovers: support for filters and patterns, winding mode and grouping
      • Layer styles leftovers
    • Jouni will work on animation left-overs:
      • Frame cycles and cloning
      • Transform mask interpolation curves
    • Wolthera will work on:
      • Collecting information about missing scripting API
      • Color grading filters
  • Releases. We intend to release Krita 4.1.0 June 20th. We also want to continue doing monthly bug-fix releases. We’ve asked the KDE system administrators whether we can have nightly builds of the stable branch so people can test the bug fix releases before we actually release them. Krita 4.1 will have lots of animation features, animation cache swapping, session management and the reference images tool — and more!

We also discussed the resource management fixing plan, worked really hard on making the OpenGL canvas work even smoother, especially on macOS, where it currently isn’t that smooth, added ffmpeg to the Windows installer, fixed translation issues, improved autosave reliability, fixed animation related bugs and implemented support for a cross-channel curves filter for color grading. And at the same time, people who weren’t present worked on improving OpenEXR file loading (it’s multi-threaded now, among other things), fixed issues with the color picker and made that code simpler and added even more improvements to the animation timeline!

And that’s not all, because Wolthera, Timothee and Raghukamath also finished porting our manual to Sphinx, so we can generate off-line documentation and support translations of the manual. The manual is over 1000 pages long!

There were three people who hadn’t attended a sprint before, artist Raghukamath, ace windows developer Alwin Wong and Valeriy Malov, the maintainer of the KDE Plasma desktop tablet settings utility, who improved support for cintiq-like devices during the weekend.

And of course, there was time for walks, buying cheese, having lunch at our regular place, De Rode Kater, and on Sunday the sun even started shining! And now back to coding!

Image by David Revoy.

The 2018 Krita sprint was sponsored by KDE e.V. (travel) and the Krita Foundation (accommodation and food).

May 19, 2018

GIMP 2.10.2 Released

It’s barely been a month since we released GIMP 2.10.0, and the first bugfix version 2.10.2 is already there! Its main purpose is fixing the various bugs and issues which were to be expected after the 2.10.0 release.

All in all, 44 bugs have been fixed in less than a month!

We have also relaxed the policy for new features, and this is the first time we are applying that policy by including features in a stable micro release! How cool is that?

For a complete list of changes please see NEWS.

New features

Added support for HEIF image format

This release brings HEIF image support, both for loading and export!

Thanks to Dirk Farin for the HEIF plug-in.

New filters

Two new filters have been added, based on GEGL operations:

Spherize filter to wrap an image around a spherical cap, based on the gegl:spherize operation.

Spherize filter in GIMP 2.10.2. Original image CC-BY-SA by Aryeom Han.

Recursive Transform filter to create a Droste effect, based on the gegl:recursive-transform operation.

Recursive Transform filter in GIMP 2.10.2, with a custom on-canvas interface. Original image CC-BY by Philipp Haegi.

Noteworthy improvements

Better single-window screenshots on Windows

While the screenshot plug-in was already better in GIMP 2.10.0, we had a few issues with single-window screenshots on Windows when the target window was hidden behind other windows, partly off-screen, or when display scaling was activated.

All these issues have been fixed by our new contributor Gil Eliyahu.

Histogram computation improved

GIMP now calculates histograms in separate threads which eliminates some UI freezes. This has been implemented with some new internal APIs which may be reused later for other cases.

Working with third-parties

Packagers: set your bug tracker address

As you know, we now have a debug dialog which may pop up when crashes occur, showing debug information. This dialog opens our bug tracker in a browser.

We realized that we get a lot of bugs from third-party builds, and a significant part of those bugs are package-specific. In order to relieve that burden a bit (because we are a very small team), we would appreciate it if packagers could do a first triage of bugs, reporting to us what looks like actual GIMP bugs and taking care of their own packaging issues themselves.

This is why our configure script now has the --with-bug-report-url option, allowing you to set your own bug tracker web URL. This way, when people click the “Open Bug Tracker” button it will open the package bug tracker instead.

XCF-reader developers: format is documented

Since 2006, our work format, XCF, has been documented thanks to the initial contribution of Henning Makholm. We have recently updated this document to integrate all the changes to the format since the GIMP 2.10.0 release.

Any third-party applications wishing to read XCF files can refer to this updated documentation. The git log view may actually be more interesting since you can more easily spot the changes and new features which have been documented recently.

Keep in mind that XCF is not meant to be an interchange format (unlike for instance OpenRaster) and this document is not a “specification”. The XCF reference document is the code itself. Nevertheless we are happy to help third-party applications, and if you spot any error or issues within this document feel free to open a bug report so we can fix it.

GIMP 3 is already on its way…

While GIMP 2.10.0 was still hot and barely released, our developers started working on GIMP 3. One of the main tasks is cleaning out the many deprecated pieces of code and data, as well as code made useless by the switch to GTK+ 3.x.

The deletion is really going full-speed with more than 200 commits made in less than a month on the gtk3-port git branch and with 5 times more lines deleted than inserted in the last few weeks.

Delete delete delete… exterminate!

Exterminate (GTK+2)! Michael Natterer and Jehan portrayed by Aryeom.
The picture actually misses Simon Budig, a long-time contributor who made a big comeback on the GTK+3 port with dozens of commits!

May 14, 2018

Plotting the Jet Stream, or Other Winds, with ECMWF Data

I've been trying to learn more about weather from a friend who used to work in the field -- in particular, New Mexico's notoriously windy spring. One of the reasons behind our spring winds relates to the location of the jet stream. But I couldn't find many good references showing how the jet stream moves throughout the year. So I decided to try to plot it myself -- if I could find the data. Getting weather data can be surprisingly hard.

In my search, I stumbled across Geert Barentsen's excellent Annual variations in the jet stream (video). It wasn't quite what I wanted -- it shows the position of the jet stream in December in successive years -- but the important thing is that he provides a Python script on GitHub that shows how he produced his beautiful animation.

[Sample jet stream image]

Well -- mostly. It turns out his data sources are no longer available, and he didn't go into a lot of detail on where he got his data, only saying that it was from the ECMWF ERA re-analysis model (with a link that's now 404). That led me on a merry chase through the ECMWF website trying to figure out which part of which database I needed. ECMWF has lots of publicly available databases (and even more), they have Python libraries to access them, and they even have a lot of documentation; but somehow none of the documentation addresses questions like which database includes which variables or how to find and fetch the data you're after, and a lot of the sample code doesn't actually work. I ended up using the "ERA Interim, Daily" dataset and requesting data for only specific times and only the variables and pressure levels I was interested in. It's a great source of data once you figure out how to request it.

Sign up for an ECMWF API Key

Access ECMWF Public Datasets (there's also Access MARS and I'm not sure what the difference is), which has links you can click on to register for an API key.

Once you get the email with your initial password, log in using the URL in the email, and change the password. That gave me a "next" button that, when I clicked it, took me to a page warning me that the page was obsolete and I should update whatever bookmark I had used to get there. That page also doesn't offer a link to the new page where you can get your key details, so go here: Your API key. The API Key page gives you some lines you can paste into ~/.ecmwfapirc.

You'll also have to accept the license terms for the databases you want to use.

Install the Python API

That sets you up to use the ECMWF API. They have a Web API and a Python library, plus some other Python packages, but after struggling with a bunch of Magics tutorial examples that mostly crashed or couldn't find data, I decided I was better off sticking to the basic Python downloader API and plotting the results with Matplotlib.

The Python data-fetching API works well. To install it, activate your preferred Python virtualenv or whatever you use for pip packages, then run the pip command shown at Web API Downloads (under "Click here to see the installation/update instructions..."). As always with pip packages, you'll have to decide on a Python version (they support both 2 and 3) and whether to use a virtualenv, the much-disrecommended sudo pip, pip3, etc. I used pip3 in a virtualenv and it worked fine.

Specify a dataset and parameters

That's great, but how do you know which dataset you want to load?

There doesn't seem to be anything that just lists which datasets have which variables. The only way I found is to go to the Web API page for a particular dataset to see the form where you can request different variables. For instance, I ended up using the "interim-full-daily" database, where you can choose date ranges and lists of parameters. There are more choices in the sidebar: for instance, clicking on "Pressure levels" lets you choose from a list of barometric pressures ranging from 1000 all the way down to 1. No units are specified, but they're millibars, also known as hectoPascals (hPa): 1000 is more or less the pressure at ground level, 250 is roughly where the jet stream is, and Los Alamos is roughly at 775 hPa (you can find charts of pressure vs. altitude on the web).

When you go to any of the Web API pages, it will show you a dialog suggesting you read about Data retrieval efficiency, which you should definitely do if you're expecting to request a lot of data, then click on the details for the database you're using to find out how data is grouped in "tape files". For instance, in the ERA-interim database, tapes are grouped by date, so if you're requesting multiple parameters for multiple months, request all the parameters for a given month together, rather than making one request for level 250, another request for level 1000, etc.

Once you've checked the boxes for the data you want, you can fetch the data via the web interface, or click on "View the MARS request" to get parameters you can plug into a Python script.

If you choose the Python script option as I did, you can start with the basic data retrieval example. Use the second example, the one that uses 'format' : "netcdf", which will (eventually) give you a file ending in .nc.
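For the record, here's roughly what such a retrieval script ends up looking like for the ERA-Interim daily data discussed here. This is only a sketch: the dates, grid and parameter codes (131.128/132.128 should be the u and v wind components) are illustrative, and the exact strings to use are the ones the "View the MARS request" button gives you. The ecmwfapi module reads the credentials you pasted into ~/.ecmwfapirc.

from ecmwfapi import ECMWFDataServer

# Credentials (url, key, email) come from ~/.ecmwfapirc.
server = ECMWFDataServer()

request = {
    "class":    "ei",                        # ERA-Interim
    "dataset":  "interim",
    "stream":   "oper",
    "type":     "an",                        # analysis
    "levtype":  "pl",                        # pressure levels
    "levelist": "250/775/1000",              # ask for all levels at once
    "param":    "131.128/132.128",           # u and v wind components
    "date":     "2015-04-01/to/2015-04-30",  # one month per request, since
                                             # tape files are grouped by date
    "time":     "12:00:00",
    "step":     "0",
    "grid":     "0.75/0.75",
    "format":   "netcdf",
    "target":   "interim-2015-04.nc",
}
server.retrieve(request)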

Requesting a specific area

You can request only a limited area,

"area": "75/-20/10/60",
but they're not very forthcoming on the syntax of that, and it's particularly confusing since "75/-20/10/60" supposedly means "Europe". It's hard to figure out how those numbers, as longitudes and latitudes, correspond to Europe, which doesn't go down to 10 degrees latitude, let alone -20 degrees. The Post-processing keywords page gives more information: it's North/West/South/East, which still makes no sense for Europe, until you expand the Area examples tab on that page and find out that by "Europe" they mean Europe plus Saudi Arabia and most of North Africa.
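For example, to restrict a request to roughly the continental United States instead, you would add an "area" key to the retrieval dictionary shown above -- the numbers here are purely illustrative, not taken from this post, but they show the North/West/South/East ordering:

# North/West/South/East, in degrees
request["area"] = "50/-125/25/-65"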

Using the data: What's in it?

Once you have the data file, assuming you requested data in netcdf format, you can read the .nc file with the netCDF4 Python module -- available as the Debian package "python3-netcdf4", or via pip:

import netCDF4

data = netCDF4.Dataset('filename.nc')

But what's in that Dataset? Try running the preceding two lines in the interactive Python shell, then:

>>> for key in data.variables:
...   print(key)
... 
longitude
latitude
level
time
w
vo
u
v

You can find out more about a parameter, like its units, type, and shape (array dimensions). Let's look at "level":

>>> data['level']
<class 'netCDF4._netCDF4.Variable'>
int32 level(level)
    units: millibars
    long_name: pressure_level
unlimited dimensions: 
current shape = (3,)
filling on, default _FillValue of -2147483647 used

>>> data['level'][:]
array([ 250,  775, 1000], dtype=int32)

>>> type(data['level'][:])
<class 'numpy.ndarray'>

level has shape (3,): it's a one-dimensional array with three elements: 250, 775 and 1000 (the three levels I requested from the web API and in my Python script). The units are millibars.

More complicated variables

How about something more complicated? u and v are the two components of wind speed.

>>> data['u']
<class 'netCDF4._netCDF4.Variable'>
int16 u(time, level, latitude, longitude)
    scale_factor: 0.002161405503194121
    add_offset: 30.095301438361684
    _FillValue: -32767
    missing_value: -32767
    units: m s**-1
    long_name: U component of wind
    standard_name: eastward_wind
unlimited dimensions: time
current shape = (30, 3, 241, 480)
filling on

u (v is the same) has a shape of (30, 3, 241, 480): it's a 4-dimensional array. Why? Looking at the numbers in the shape gives a clue. The second dimension has 3 rows: they correspond to the three levels, because there's a wind speed at every level. The first dimension has 30 rows: it corresponds to the dates I requested (the month of April 2015). I can verify that:
>>> data['time'].shape
(30,)

Sure enough, there are 30 times, so that's what the first dimension of u and v corresponds to. The other dimensions, presumably, are latitude and longitude. Let's check that:

>>> data['longitude'].shape
(480,)
>>> data['latitude'].shape
(241,)

Sure enough! So, although it would be nice if it actually told you which dimension corresponded with which parameter, you can probably figure it out. If you're not sure, print the shapes of all the variables and work out which dimensions correspond to what:

>>> for key in data.variables:
...   print(key, data[key].shape)
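For example, once you know which axis is which, pulling the 250 hPa wind out of that 4-dimensional array and computing its speed is just indexing. A quick sketch, where the level index 0 relies on the [250, 775, 1000] ordering shown by data['level'][:] earlier:

import numpy as np

# First time step, first pressure level (250 hPa). netCDF4 applies
# scale_factor and add_offset automatically, so these come back as
# floats in m/s rather than packed int16 values.
u250 = data['u'][0, 0, :, :]          # shape (241, 480): latitude, longitude
v250 = data['v'][0, 0, :, :]
speed = np.sqrt(u250**2 + v250**2)    # wind speed in m/s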

Iterating over times

data['time'] has all the times for which you have data (30 data points for my initial test of the days in April 2015). The easiest way to plot anything is to iterate over those values:

    timeunits = JSdata.data['time'].units
    cal = JSdata.data['time'].calendar
    for i, t in enumerate(JSdata.data['time']):
        thedate = netCDF4.num2date(t, units=timeunits, calendar=cal)

Then you can use thedate like a datetime, calling thedate.strftime or whatever you need.

So that's how to access your data. All that's left is to plot it -- and in this case I had Geert Barentsen's script to start with, so I just modified it a little to work with slightly changed data format, and then added some argument parsing and runtime options.
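If you just want a quick look at one variable without the full script, a bare-bones matplotlib version -- no map projection, no coastlines, and with a made-up input filename -- might look something like this:

import netCDF4
import numpy as np
import matplotlib.pyplot as plt

data = netCDF4.Dataset('filename.nc')
lons = data['longitude'][:]
lats = data['latitude'][:]
timeunits = data['time'].units
cal = data['time'].calendar

for i, t in enumerate(data['time'][:]):
    thedate = netCDF4.num2date(t, units=timeunits, calendar=cal)
    u = data['u'][i, 0, :, :]              # 250 hPa level
    v = data['v'][i, 0, :, :]
    speed = np.sqrt(u**2 + v**2)
    # origin='upper' matches the data layout: ERA latitudes run from
    # north (90) down to south (-90).
    plt.imshow(speed, origin='upper',
               extent=[lons.min(), lons.max(), lats.min(), lats.max()])
    plt.colorbar(label='wind speed (m/s)')
    plt.title(thedate.strftime('%Y-%m-%d'))
    plt.savefig('jetstream-%s-250.png' % thedate.strftime('%Y-%m-%d'))
    plt.clf()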

Converting to Video

I already wrote about how to take the still images the program produces and turn them into a video: Making Videos (that work in Firefox) from a Series of Images.

However, it turns out ffmpeg can't handle files that are named with timestamps, like jetstream-2017-06-14-250.png. It can only handle one sequential integer. So I thought, what if I removed the dashes from the name, and used names like jetstream-20170614-250.png with %8d? No dice: ffmpeg also has the limitation that the integer can have at most four digits.

So I had to rename my images. A shell command works: I ran this in zsh but I think it should work in bash too.

cd outdir
mkdir moviedir

i=1
for fil in *.png; do
  newname=$(printf "%04d.png" $i)
  ln -s ../$fil moviedir/$newname
  i=$((i+1))
done

ffmpeg -i moviedir/%4d.png -filter:v "setpts=2.5*PTS" -pix_fmt yuv420p jetstream.mp4

The -filter:v "setpts=2.5*PTS" controls the delay between frames -- I'm not clear on the units, but larger numbers have more delay, and I think it's a multiplier, so this is 2.5 times slower than the default.

When I uploaded the video to YouTube, I got a warning, "Your videos will process faster if you encode into a streamable file format." I then spent half a day trying to find a combination of ffmpeg arguments that avoided that warning, and eventually gave up. As far as I can tell, the warning only affects the 20 seconds or so of processing that happens after the 5-10 minutes it takes to upload the video, so I'm not sure it's terribly important.

Results

Here's a video of the jet stream from 2012 to early 2018, and an earlier effort with a much longer 6.0x delay.

And here's the script, updated from the original Barentsen script and with a bunch of command-line options to let you plot different collections of data: jetstream.py on GitHub.

Fontstuff at LibrePlanet 2018

I’m going to try and capture some thoughts from recent conferences, since otherwise I fear that so much information gets lost in the fog.

* (If you want to think of it this way, consider this post “What’s New in Open Fonts, № 002”)

I went to LibrePlanet a few weeks ago, for the first time. One of the best outcomes from that trip (apart from seeing friends) was the hallway track.

[FYI, I was happy to see that LWN had some contributors on hand to provide coverage; when I was an editor there we always wanted to go, but it was never quite feasible, between the cost and the frequent overlap with other events. Anyway, do read the LWN coverage to get up to speed on the event.]

RFNs

Dave Crossland and I talked about Reserved Font Names (RFNs), an optional feature of the SIL Open Font License (OFL) in which the font publisher claims a reservation on a portion of their font’s name. Anyone’s allowed to make a derivative of the OFL-licensed font (which is true regardless of the RFN-invocation status), but if they do so they cannot use *any* portion of the RFN in their derivative font’s name.

The intent of the clause is to protect the user-visible “mark” (so to speak; my paraphrase) of the font publisher, so that users do not confuse any derivatives with the original when they see it in menus, lists, etc.

A problem arises, however, for software distributors, because the RFN clause is triggered by making any change to the upstream font — a low bar that includes a lot of functions that happen automatically when serving a font over HTTP (like Google Fonts does) and when rebuilding fonts from source (like Debian does).

There’s not a lot of good information out there on the effects that RFN-invocation has on downstream software projects. SIL has a section in its FAQ document, but it doesn’t really address the downstream project’s needs. So Dave and I speculated that it might be good to write up such a document for publication … somewhere … and help ensure that font developers think through the impact of the decision on downstream users before they opt to invoke an RFN.

My own experience and my gut feeling from other discussions is that most open-font designers, especially when they are new, plonk an RFN statement in their license without having explored its impact. It’s too easy to do, you might say; it probably seems like it’s built into the license for a reason, and there’s not really anything educating you about the impact of the choice going forward. You fill in a little blank at the very top of the license template, because it’s there, and there’s no guidance. That’s what needs to change.

Packages

We also chatted a little about font packaging, which is something I’m keen to revisit. I’ve been giving a talk about “the unsolved problems in FOSS type” the past couple of months, a discussion that starts with the premise that we’ve had open-source web fonts for years now, but that hasn’t helped open fonts make inroads into any other areas of typography: print, EPUB, print-on-demand, any forms of marketing product, etc. The root cause is that Google Fonts and Open Font Library are focused on providing a web service (as they should), which leaves a lot of ground to be covered elsewhere, from installation to document templates to what ships with self-contained application bundles (hint: essentially nothing does).

To me, the lowest-hanging fruit at present seems to be making font packages first-class objects in the distribution packaging systems. As it is, they’re generally completely bare-bones: no documentation, no system integration, sketchy or missing metadata, etc. I think a lot can be done to improve this, of course. One big takeaway from the conversation was that Lasse Fister from the Google Fonts crew is working on a specimen micro-site generator.

That would fill a substantial hole in current packages: fonts tend to ship with no document that shows the font in use — something all proprietary, commercial fonts include, and that designers use to get a feel for how the font works in a real document setting.

Advanced font features in GTK+ and GNOME

Meanwhile Matthias Clasen has been forging ahead with his own work enhancing the GNOME font-selection experience. He’s added support for showing what variation axes a variable font contains and for exposing the OpenType / smart-font features that the font includes.

He did, however, bring up several pain points he’s encountered. The first is that many of the OpenType features are hard to preview/demonstrate because they’re sparsely documented. The only substantive docs out there are ancient Microsoft material definitely written by committee(s) — then revised, in piecemeal format, by multiple unrelated committees. For example, go to the link above, then try and tell me the difference between `salt` (stylistic alternates), `cvNN` (character variants) and `ssNN` (stylistic sets). I think there’s an answer, but it’s detective work.

A more pressing concern Matthias raised was the need to create “demo strings” that show what actually changes when you enable or disable one of the features. The proper string for some features is obvious (like `onum` (oldstyle numerals): the digits 0 to 9). For others, it’s anybody’s guess. And the font-selector widget, ideally, should not have to parse every font’s entire GSUB feature table, look for all affected codepoints, and create a custom demo string. That might be arbitrarily complex, since GSUB substitutions can chain together, and might still be incorrect (not to mention the simpler case, of that method finding you random letters that add up to unhelpful gibberish).
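
To make the problem concrete, here is a minimal sketch of the sort of preview involved, using PyGObject and the font_features attribute in Pango markup (it needs a reasonably recent Pango; this is only an illustration, not Matthias’s actual code):

    import gi
    gi.require_version('Gtk', '3.0')
    from gi.repository import Gtk

    # Show the digits plain and with old-style numerals ('onum') turned on.
    # Whether anything visibly changes depends on the font in use.
    label = Gtk.Label()
    label.set_markup('0123456789\n<span font_features="onum=1">0123456789</span>')

    win = Gtk.Window(title="onum preview")
    win.add(label)
    win.connect("destroy", Gtk.main_quit)
    win.show_all()
    Gtk.main()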

At lunch on Sunday, Matthias, Dave, Owen Taylor, Felipe Sanches, and a few others … who I’m definitely drawing a blank on this far after the fact (help me out in the comments) … hashed through several other topics. The discussion turned to Pango, which (like several other storied GNOME libraries) isn’t exactly unmaintained, but certainly doesn’t get much attention anymore (see also Cairo…). There are evidently still some API mismatches between what a Pango font descriptor gives you and the lower-level handles you need to work with newer font internals like variation axes.

A longer-term question was whether or not Pango can do more for applications — there are some features it could add, but major work like building in hyphenation or justification would entail serious effort. It’s not clear that anyone is available to take on that role.

Interfaces

Of course, that ties into another issue Matthias raised, which is that it’s hard to specify a feature set for a “smart” font selector widget/framework/whathaveyou for GTK+ when there are not many GTK-based applications that will bring their own demands. GIMP is still using GTK2, Inkscape basically does its own font selection, LibreOffice has a whole cross-platform layer of its own, etc. The upshot is that application developers aren’t bringing itches needing to be scratched. There is always Gedit, as Matthias said (which I think was at least somewhat satirical). But it complicates the work of designing a toolkit element, to be sure.

The discussion also touched on how design applications like Inkscape might want to provide a user interface for the variable-font settings that a user has used before. Should you “bookmark” those somehow (e.g., “weight=332,width=117,slant=10” or whatnot)? If so, where are they saved? Certainly you don’t want users to have to eyeball a bunch of sliders in order to hit the same combination of axes twice; not providing a UI for this inevitably leads to documents polluted with 600-odd variable-font-setting regions that are all only slightly off from each other. Consensus seemed to lean towards saving variable-axes settings in a sort of “recently used” palette, much as many applications already do with the color picker. Still waiting to see the first implementations of this, however.
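
To make that idea a bit more concrete, here is a purely illustrative sketch (not any existing GTK+ or Inkscape API) of what a tiny “recently used axes” palette could look like:

    # Keep a small "recently used" list of variable-axis settings, most recent
    # first, de-duplicated and capped, much like a colour-picker history.
    def remember_axes(palette, axes, limit=8):
        # axes is a dict such as {'wght': 332, 'wdth': 117, 'slnt': 10}
        key = ','.join('%s=%g' % (tag, value) for tag, value in sorted(axes.items()))
        if key in palette:
            palette.remove(key)
        palette.insert(0, key)
        del palette[limit:]
        return palette

    recent = []
    remember_axes(recent, {'wght': 332, 'wdth': 117, 'slnt': 10})
    remember_axes(recent, {'wght': 650, 'wdth': 100})
    # recent is now ['wdth=100,wght=650', 'slnt=10,wdth=117,wght=332']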

As we were leaving, Matthias posed a question to me — in response to a comment I’d made about there needing to be a line between a “generic” font selector and a “full-featured” font selector. The question was what sort of UI was I envisioning in the “generic” case, particularly where variable fonts are concerned, as I had suggested that a full set of sliders for the font’s variation axes was too complex.

I’m not sure. On the one hand, the simple answer would be “none” or “list the variation axes in the font”, but that’s not something I have any evidence for: it’s just an easy place to draw a line.

Perhaps I’m just worried that exposing too many dials and controls will turn users off — or slow them down when they’re trying to make a quick choice. The consumer/pro division is a common tactic, evidently, for trying to avert UI overload. And this seems like a place where it’s worth keeping a watchful eye, but I definitely don’t have answers.

It may be that “pro” versus “consumer” user is not the right plane on which to draw a line anyway: when I was working on font-packaging questions, I found it really helpful to be document-first in my thinking (i.e., let the needs of the document the user is working on reveal what information you want to get from the font package). It’s possible that the how-much-information-do-you-show-in-the-UI question could be addressed by letting the document, rather than some notion of the “professionalism” of the user, be the guide. More thinking is required.

Interview with El Gato Cangrejo

Could you tell us something about yourself?

Well, I think I am a human shaped thing also known as Aedouard A. and also as El Gato Cangrejo, who loves making drawings and listening to music.

Do you paint professionally, as a hobby artist, or both?

I’m really trying to make it professionally (“a very hard thing”), but I also try to keep the fun in it, so I would have to say both.

What genre(s) do you work in?

I like to let my hand and my pen go wherever they want to go, and then I begin to think about those traces and it leads me to different shapes, themes and genres. I can build a script for a comic or for a short film, an illustration or even sounds, based on a web of random traces on a digital canvas or on a piece of paper.

Whose work inspires you most — who are your role models as an artist?

I love the paintings, illustrations, designs and movies from these people: William Bouguereau, Alphonse Mucha, Albrecht Dürer, Jules Lefebvre, William Waterhouse, Masamune Shirow, Haruhiko Mikimoto, Shoji Kawamori, Mamoru Oshii, Quentin Tarantino, Hideaki Anno, Hayao Miyazaki, Ralph Bakshi, Guillermo del Toro… (not mentioning musicians, they are such an endless source of inspiration, I can only work while listening to music)

How and when did you get to try digital painting for the first time?

I tried digital painting for the first time about 12 years ago: I bought my first PC and tried a piece of software called Image Ready, from Photoshop. I did a couple of landscapes with the mouse and then tried scanning my drawings and retracing them in Corel Draw, also with the mouse.

What makes you choose digital over traditional painting?

The production time, everything is like 10 times faster, expensive materials and the super powerful Ctrl-Z.

How did you find out about Krita?

I like to search for new tools and I try to use libre software. I can’t remember when I tried Krita for the first time, but I think it was about 7 years ago and it ran very, very badly on my old PC.

What was your first impression?

I hated Krita at the time, now I love it!

What do you love about Krita?

The shortcuts are essential, the brushes, the animation tools, “insert meme here” it’s free!

What do you think needs improvement in Krita? Is there anything that really annoys you?

The performance on Linux. I recently changed my OS from Windows 7 to Linux Mint and I have noticed a significant difference in performance between the systems. I noticed a difference in performance between working in grayscale and working in color too, and I’m also waiting for some layer FX’s like the ones in Photoshop, specifically the trace effect, which I used a lot when I worked with Photoshop.

What sets Krita apart from the other tools that you use?

As I said earlier, the shortcuts are essential, and the animation tools combined with those awesome brushes make a powerful tool for animation. I love the fact that Krita has been made for professional use but you can also have tons of fun with it.

If you had to pick one favourite of all your work done in Krita so far, what would it be, and why?

I would choose Distant.

What techniques and brushes did you use in it?

I like the “Airbrush_Linear” a lot. I set it to a big size and the opacity to 10 percent, then I use the “Eraser_Circle”, the hard-shaped one, to define shapes. I also use the “Smudge_Soft” a lot; I like to play with it, taking the paint from one side to another. When I picked up Krita again it reminded me of my old days drawing with pencil and paper, which I just loved.

Where can people see more of your work?

https://gatocangrejo.deviantart.com/gallery/

Anything else you’d like to share?

If you are the pretty invisible friend, thanks and I’ll see you in a parallel universe.
If you are the Sorceress, I’m really sorry about the silence, I had a couple of good reasons…
If I owe you money, I’m trying to pay it.
If you are the extraterrestrial, stop it man.
If you are the C.I.A. stop sending stuff to my invisible friends and to the extraterrestrial.
If you like my drawings, keep your eyes peeled, I’m going to start a patreon/kickstarter campaign that involves comic, animation, Krita, Blender and other libre software.
If you are from Krita staff, thanks for Krita and thanks for the interview.
If you don’t know Krita, just give it a try, it is awesome. You don’t need to be an artist, you just need to have fun.

May 12, 2018

Stay Tuned

“The arc of the moral universe is long, but it bends towards podcasts.”

– Preet Bharara, while interviewing Bassem Youssef for his Stay Tuned podcast.

Krita 4.0.3 Released

Today the Krita team releases Krita 4.0.3, a bug fix release of Krita 4.0.0. This release fixes an important regression in Krita 4.0.2: sometimes copy and paste between images opened in Krita would cause crashes (BUG:394068).

Other Improvements

  • Krita now tries to load the system color profiles on Windows
  • Krita can open .rw2 RAW files
  • The splash screen is updated to work better on HiDPI or Retina displays (BUG:392282)
  • The OpenEXR export filter will convert images with an integer channel depth before saving, instead of giving an error.
  • The OpenEXR export filter no longer gives export warnings calling itself the TIFF filter
  • The empty error message dialog that would erroneously be shown after running some export filters is no longer shown (BUG:393850).
  • The setBackGroundColor method in the Python API has been renamed to setBackgroundColor for consistency (a small usage sketch follows this list)
  • Fix a crash in KisColorizeMask (BUG:393753)
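
For scripters wondering what the renamed call looks like in practice, here is a minimal sketch (assuming it is run from Krita's built-in Scripter with a document open; the colour value is just an example):

    from krita import Krita
    from PyQt5.QtGui import QColor

    # use the renamed setBackgroundColor on the active document
    doc = Krita.instance().activeDocument()
    if doc is not None:
        doc.setBackgroundColor(QColor(255, 255, 255, 255))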

Download

Windows

Note for Windows users: if you encounter crashes, please follow these instructions to use the debug symbols so we can figure out where Krita crashes.

Linux

(If, for some reason, Firefox thinks it needs to load this as text: to download, right-click on the link.)

When it is updated, you can also use the Krita Lime PPA to install Krita 4.0.3 on Ubuntu and derivatives. We are working on an updated snap.

OSX

Note: the touch docker, gmic-qt and python plugins are not available on OSX.

Source code

md5sum

For all downloads:

Key

The Linux appimage and the source tarball are signed. You can retrieve the public key over https here: 0x58b9596c722ea3bd.asc. The signatures are here (filenames ending in .sig).

Support Krita

Krita is a free and open source project. Please consider supporting the project with donations or by buying training videos or the artbook! With your support, we can keep the core team working on Krita full-time.

May 11, 2018

Making Videos (that work in Firefox) from a Series of Images

I was working on a weather project to make animated maps of the jet stream. Getting and plotting wind data is a much longer article (coming soon), but once I had all the images plotted, I wanted to combine them all into a video showing how the jet stream moves.

Like most projects, it's simple once you find the right recipe. If your images are named outdir/filename00.png, outdir/filename01.png, outdir/filename02.png and so on, you can turn them into an MPEG4 video with ffmpeg:

ffmpeg -i outdir/filename%02d.png -filter:v "setpts=6.0*PTS" -pix_fmt yuv420p jetstream.mp4

%02d, for non-programmers, just means a 2-digit decimal integer with leading zeros. If the filenames just use 1, 2, 3, ... 10, 11 without leading zeros, use %d instead; if they have three digits with leading zeros, use %03d, and so on.

The -pix_fmt yuv420p turned out to be the tricky part. The recipes I found online didn't include that part, but without it, Firefox claims "Video can't be played because the file is corrupt", even though most other browsers can play it just fine. If you open Firefox's web console and reload, it offers the additional information "Details: mozilla::SupportChecker::AddMediaFormatChecker(const mozilla::TrackInfo&)::<lambda()>: Decoder may not have the capability to handle the requested video format with YUV444 chroma subsampling.":

Adding -pix_fmt yuv420p cured the problem and made the video compatible with Firefox, though at first I had problems with ffmpeg complaining "height not divisible by 2 (1980x1113)" (even though the height of the images was in fact divisible by 2). I'm not sure what was wrong; later ffmpeg stopped giving me that error message and converted the video. It may depend on where in the ffmpeg command you put the pix_fmt flag or what other flags are present. ffmpeg arguments are a mystery to me.

Of course, if you're only making something to be uploaded to youtube, the Firefox limitation probably doesn't matter and you may not need the -pix_fmt yuv420p argument.

Animated GIFs

Making an animated GIF is easier. You can use ImageMagick's convert:

convert -delay 30 -loop 0 *.png jetstream.gif
The GIF will be a lot larger, though. For my initial test of thirty 1000 x 500 images, the MP4 was 760K while the GIF was 4.2M.

Rest in price

It’s easy to pile on criticism when a major company redesigns their logo, but I couldn’t help myself in this case. The logo looks fine to me, but am I the only one that sees a toe-tag on a corpse when I see the new Best Buy logo?

Cause of death: Excessive color saturation on demo-mode TVs.

May 10, 2018

Krita 4.0.2 released

Today the Krita team releases Krita 4.0.2, a bug fix release of Krita 4.0.0. We fixed more than fifty bugs since the Krita 4.0.0 release! See below for the full list of fixed issues. We’ve also got fixes submitted by two new contributors: Emmet O’Neil and Seoras Macdonald. Welcome!

Please note that:

  • The reference image docker has been removed. Krita 4.1.0 will have a new reference images tool. You can test the code-in-progress by downloading the nightly builds for Windows and Linux. You can also use Antoine Roux’s reference images docker python plugin.
  • Translations are broken in various ways. On Linux everything should work. On Windows, you might have to select your language as an extra override language in the Settings/Select language dialog. This might also be the case on macOS.
  • The macOS binaries are now signed, but do not have G’Mic and do not have Python scripting.

If you find a new issue, please consult this draft document on reporting bugs before reporting an issue. After the 4.0 release more than 150 bugs were reported, but most of those reports were duplicates, requests for help or just not useful at all. This puts a heavy strain on the developers and makes it harder to actually find time to improve Krita. Please be helpful!

Improvements

Windows

  • Patch QSaveFile so working on images stored in synchronized folders (dropbox, google drive) is safe. BUG:392408
  • Enable WinInk or prompt if WinTab cannot be loaded

Animation

  • Fix canvas update issues when an animation is being rendered to the cache BUG:392969
  • Fix playback in isolated mode BUG:392559
  • Fix saving animated transparency and filter masks, adjustment layer BUG:393302
  • Set the size for a few timeline icons, as they are painfully small on Windows
  • Fix copy-pasting pixel data from animated layers BUG:364162

Brushes

  • Fix keeping “eraser switch size/opacity” option when saving the brush BUG:393499
  • Fix update of the preset editor GUI when a default preset is created BUG:392869
  • Make strength and opacity sliders from 0 to 100 percent in brush editor

File format support

  • Fix saving state of the selection masks into .kra
  • Read multilayer EXR files saved by Nuke BUG:393771
  • PSD: convert the image if its colorspace is not supported
  • Don’t let autosave close currently running actions

Grids

  • Increase the range for the pixel grid threshold
  • Only allow isometric grid with OpenGL enabled BUG:392526

Crashes

  • Fix a hangup when closing the image BUG:393916
  • Fix a crash when duplicating active global selection masks BUG:382315
  • Fix crashes on undo/redo of vector path points operations BUG:393209, BUG:393087
  • Fix crash when deleting palette BUG:393353
  • Fix crash when resizing the Tool Options for the shape selection tool BUG:393217

User interface

  • Show the exact bounds in the layer properties dialog
  • Add ability for vanishing point assistants to show and configure radial lines
  • Make the Saturation slider update when picking a color that has Value 100 BUG:391934
  • Fix “Break at segment” to work correctly with closed paths
  • Disable right-clicking on popup palette BUG:391696, BUG:378484
  • Don’t let the color label widget mess up labels when right button is pressed BUG:392815
  • Fix Canvas position popping after pop-up palette rotation reset BUG:391921 (Patch by Emmet O’Neil, thanks!)
  • Change the behaviour of the add layer button BUG:385050 (Patch by Seoras Macdonald, thanks!)
  • Clicking outside preview box moves view to that point BUG:384687 (Patch by Seoras Macdonald, thanks!)
  • Implement double Esc key press shortcut for canceling continued transform mode BUG:361852
  • Display flow and opacity as percentage instead of zero to one on toolbar

Download

Windows

Note for Windows users: if you encounter crashes, please follow these instructions to use the debug symbols so we can figure out where Krita crashes.

Linux

(If, for some reason, Firefox thinks it needs to load this as text: to download, right-click on the link.)

When it is updated, you can also use the Krita Lime PPA to install Krita 4.0.2 on Ubuntu and derivatives. We are working on an updated snap.

OSX

Note: the gmic-qt and python plugins are not available on OSX.

Source code

md5sum

For all downloads:

Key

The Linux appimage and the source tarball are signed. You can retrieve the public key over https here: 0x58b9596c722ea3bd.asc. The signatures are here (filenames ending in .sig).

Support Krita

Krita is a free and open source project. Please consider supporting the project with donations or by buying training videos or the artbook! With your support, we can keep the core team working on Krita full-time.

May 09, 2018

System76 and the LVFS

tl;dr: Don’t buy System76 hardware and expect to get firmware updates from the LVFS

System76 is a hardware vendor that builds laptops with the Pop_OS! Linux distribution pre-loaded. System76 machines do get firmware updates, but do not use the fwupd and LVFS shared infrastructure. I’m writing this blog post so I can point people at some static text rather than writing out long replies to each person that emails me wanting to know why they don’t just use the LVFS.

In April of last year, System76 contacted me, wanting to work out how to get on the LVFS. We wrote 30+ cordial emails back and forth with technical details. Discussions got stuck when we found out they currently use a nonfree firmware flash tool called afuefi rather than the UpdateCapsule mechanism defined in the UEFI specification. All vendors have support for capsule updates, as it is a requirement for the Windows 10 compliance sticker, so it should be pretty easy to use this instead. Every major vendor of consumer laptops is already using capsules, e.g. Dell, HP, Lenovo and many others.

There was some resistance to moving away from the proprietary afuefi executable to do the flashing. I still don’t know if System76 has permission to redistribute afuefi. We certainly can’t include the non-free and non-redistributable afuefi as a binary in the .cab file uploaded to the LVFS: even if System76 does have special permission to distribute it, the LVFS would be a 3rd party and is mirrored to various places. IANAL and all that.

An employee of System76 wrote a userspace tool in Rust to flash the embedded controller (EC) using a reverse-engineered protocol (fwupd is written in C) and the intention was to add a plugin to fwupd to do this. Peter Jones suggested that most vendors just include the EC update as part of the capsule, as the EC and system firmware typically form a tightly-coupled pair. Peter also thought that afuefi is really just a wrapper for UpdateCapsule, and S76 was going to find out how to make the AMI BIOS just accept a capsule. Apparently they even built a capsule that works using UpdateCapsule.

I was really confused when things went so off-course with a surprise announcement in July that System76 had decided they would not use the LVFS and fwupd after all, even after all the discussion and how it had looked like things were moving forward. Looking at the code, it seems the firmware update notifier and update process is now completely custom to System76 machines. This means it will only work when running Pop_OS! and not with Fedora, Debian, Ubuntu, SUSE, RHEL or any other distribution.

Apparently System76 decided that having their own client tools and firmware repository was a better fit for them. At this point the founder of System76 got cc’d and told me this wasn’t about politics, and it wasn’t competition. I then got told that I’d made the LVFS and fwupd more complicated than it needed to be, and that I should have adopted the infrastructure that System76 had built instead. This was all without them actually logging into the LVFS and seeing what features were available or what constraints were being handled…

The way forward from my point of view would be for System76 to spend a few hours making UpdateCapsule work correctly, another few days to build an EFI binary with the EC update, and a few more hours to write the metadata for the LVFS. I don’t require an apology, and would happily create them an OEM account on the LVFS. It looks instead like the PR and the exclusivity are more valuable than working with other vendors. I guess it might make sense for them to require Pop_OS! on their hardware, but it’s not going to help when people buy System76 hardware and want to run Red Hat Enterprise Linux in a business. It also means System76 gets to maintain all this security-sensitive server and client code themselves for eternity.

It was a hugely disappointing end to the discussion as I had high hopes System76 would do the right thing and work with other vendors on shared infrastructure. I don’t actually mind if System76 doesn’t use fwupd and the LVFS, I just don’t want people to buy new hardware and be disappointed. I’ve heard nothing more from System76 about uploading firmware to the LVFS or using fwupd since about November, and I’d given multiple people many chances to clarify the way forward.

If you’re looking for a nice laptop that will run Linux really well, I’d suggest you buy a Dell XPS instead — it’ll work with any distribution you choose.

Decoding Codes

My friend and colleague of over 20 years, Nick Burka, has written a great article about the usability of promotion codes and access codes over on the silverorange blog.

Read Usability for Promotion Codes and Access Codes by Nick Burka on the silverorange blog.

You might not care about promotion codes, but you’ve probably had to type in some kind of code for 2-factor authentication or the rare non-scammy coupon code. Nick’s article covers what can make these codes easy (or difficult) to remember, type, and say over the phone.

It’s too bad the creators of our Canadian postal code system couldn’t have read this before they put all of those Gs and Js in the Quebec postal codes (an English G and French J sound almost identical).

I’m particularly proud of this article as it draws on external expertise – something we’ve been trying to do more of at silverorange. This article in particular draws on things we learned from a literacy and essential skills consultant, and from the non-profit Computers for Success Canada.

May 07, 2018

A Hissy Fit

As I came home from the market and prepared to turn into the driveway I had to stop for an obstacle: a bullsnake who had stretched himself across the road.

[pugnacious bullsnake]

I pulled off, got out of the car and ran back. A pickup truck was coming around the bend and I was afraid he would run over the snake, but he stopped and rolled down the window to help. White Rock people are like that, even the ones in pickup trucks.

The snake was pugnacious, not your usual mellow bullsnake. He coiled up and started hissing madly. The truck driver said "Aw, c'mon, you're not fooling anybody. We know you're not a rattlesnake," but the snake wasn't listening. (I guess that's understandable, since they have no ears.)

I tried to loom in front of him and stamp on the ground to herd him off the road, but he wasn't having any of it. He just kept coiling and hissing, and struck at me when I got a little closer.

I moved my hand slowly around behind his head and gently took hold of his neck -- like what you see people do with rattlesnakes, though I'd never try that with a venomous snake without a lot of practice and training. With a bullsnake, even if they bite you it's not a big deal. When I was a teenager I had a pet gopher snake (a fringe benefit of having a mother who worked on wildlife documentaries), and though "Goph" was quite tame, he once accidentally bit me when I was replacing his water dish after feeding him and he mistook my hand for a mouse. (He seemed acutely embarrassed, if such an emotion can be attributed to a reptile; he let go immediately and retreated to sulk in the far corner of his aquarium.) Anyway, it didn't hurt; their teeth are tiny and incredibly sharp, and it feels like the pinprick from a finger blood test at the doctor's office.

Anyway, the bullsnake today didn't bite. But after I moved him off the road to a nice warm basalt rock in the yard, he stayed agitated, hissing loudly, coiling and beating his tail to mimic a rattlesnake. He didn't look like he was going to run and hide any time soon, so I ran inside to grab a camera.

In the photos, I thought it was interesting how he held his mouth when he hisses. Dave thought it looked like W.C. Fields. I hadn't had a chance to see that up close before: my pet snake never had occasion to hiss, and I haven't often seen wild bullsnakes be so pugnacious either -- certainly not for long enough that I've been able to photograph it. You can also see how he puffs up his neck.

I now have a new appreciation of the term "hissy fit".

[pugnacious bullsnake]

May 05, 2018

May 04, 2018

(NSFW) What Stefan Sees

An Interview with Photographer Stefan Schmitz

Stefan Schmitz is a photographer living in Northern France and specializing in sensual and nude portraits. I stumbled upon his work during one of my searches for photographers using Free Software on Flickr, and as someone who loves shooting portraits his work was an instant draw for me.

Franzi Skamet by Stefan Schmitz
Khiara Gray by Stefan Schmitz

He’s a member of the forums here (@beachbum) and was gracious enough recently to spare some time chatting with me. Here is our conversation (edited for clarity)…

Are you shooting professionally?

Nope, I’m not a professional photographer, and I think I’m quite happy about that. I have been photographing my surroundings for about 40 years now, and I have a basic idea about camera handling and light. Being a pro is about paying invoices by shooting photos, and I fear that the pressure at the end of some months or quarters can easily take the fun out of photography. I’m an engineer, and photography is my second love behind wife and kids.

Every now and then some of my pictures are requested and published by some sort of magazine, press or web-service, and I appreciate the attention and exposure, but there is no (or very little) money in the kind of photography I specialize in, so … everything’s OK the way it is.

Khiara Gray by Stefan Schmitz

What would you say are your biggest influences?

Starting with photographers: Andreas Feininger, Peter Lindbergh and Alfred Stieglitz. Check out the portrait of Georgia O’Keeffe by Alfred Stieglitz: it’s 100 years old and it’s all there. Pose, light, intensity, personality - nobody has invented anything [like it] afterwards. We all just try to get close. I feel the same when I look at images taken by Peter Lindbergh, but my eternal #1 is Andreas Feininger.

Georgia O’Keeffe by Alfred Stieglitz

I got the photo-virus from my father and I learned nearly everything from daddy’s well-worn copy of The Complete Photographer [amzn] (Feininger) from 1965. Every single photo in that book is a masterpiece, even the strictly “instructional” ones. You measure every photo-book in the world against this one and they all finish second. Get your copy!

How would you describe your own style overall?

I shoot portraits of women and most of the time they don’t wear clothes. The portrait-part is very important for me: the model must connect with the viewer and ideally the communication goes beyond skin-deep. I want to see (and show) more than just the surface, and when that happens, I just press the shutter-button and try to get out of the way of the model’s performance.

Jennifer Polska by Stefan Schmitz
Franzi Skamet by Stefan Schmitz

What motivates you when deciding what/how/who to shoot?

I like women, so I take photos of women. If I were interested in beetles, I’d buy a macro lens and shoot beetles. All kidding aside, I think it’s a natural thing to do. I am married to a beautiful woman, an ex-model, and when she got fed-up with my eternal “can we do one more shoot” requests, we discussed things and she allowed me to go ahead and shoot models. Her support is very important to me, but her taste is very different from mine. I really never asked myself “why” I shoot sensual portraits and nudes. It just feels like “I want to do that” and I feel comfy with it. Does there have to be a reason?

The location is very important for me. Nothing is more boring than blinding a person with a flashlight in front of a gray wallpaper. A room, a window-sill, a landmark - there’s a lot of inspiration out there, and I often think “this is where I want to shoot”. Sometimes my wife tells me of some place she has been to or seen, and I check that out.

If you had to pick your own favorite 3 images of your work, which ones would you choose and why?

Jennifer Polska by Stefan Schmitz

Jennifer is a very professional and inspiring model. We’ve worked together quite a number of times and while you may think that this shot was inspired by The Who’s “Pinball Wizard”, I’d answer “right band, wrong song”. It’s The Who, alright, but the song’s “A quick one while he’s away”. I chose this photo because it’s all about Jennifer’s pose and facial expression. It’s sensual, even sexy, but looking at Jennifer’s face you forget about the naked skin and all. There’s beauty, there’s depth … that’s what I’m after.

Alice by Stefan Schmitz

This shot of Alice is an example of the importance of natural light. There are photographers out there who can arrange light in a similar way, but I doubt that Alice would express this natural serenity in a studio setup with cables and stands and electric transformers humming. She’s at ease, the light is perfect - I just try to be invisible because I don’t want to ruin the moment.

Khiara Gray by Stefan Schmitz

Try to escape Khiara’s eyes. Go, do it. It’s all there, the pose, the room, the ribbon-chair and the little icon, but those eyes make the picture. I did NOT whiten the eyeballs nor did I dodge the iris, and of course it’s all natural/available light.

If you had to pick 3 favorite images from someone else, which ones would you choose and why?

I already named Stieglitz’ Georgia O’Keeffe as an inspiration further up - next to that there’s Helmut Newton’s Big Nude III, Henrietta and Kim Basinger’s striptease in 9½ Weeks (white silk nighty and all). Each one a masterpiece, each one very influential for me. Imagine the truth and depth of Georgia with the force and pride of Henrietta and the erotic playfulness of Kim Basinger. That photo would rule the world.

Big Nude III, Henrietta, Helmut Newton

Is there something outside of your comfort zone you wish you could try/shoot more of?

I would like to work more with women above the age of 35, but it’s hard to find them. In general they stop modeling nude when the kids arrive.

Shooting more often outdoors would be cool, too, but that’s not easy here in northern France - there is no guarantee for good weather, and it’s frustrating when you organize a shoot two weeks in advance just to call it off in the very last minute due to bad weather.

Last but not least there’s a special competition among photographers; it’s totally unofficial and called “the white shirt contest”. Shoot a woman in a white shirt and make everybody “feel” the texture of that shirt. I give it a try on every shoot and very few pictures come out the way I wish. Go for it - it’s way harder than I thought!

Alice by Stefan Schmitz

How do you find your models usually?

There are websites where models and photographers can present their work and get in contact. The biggest one worldwide is modelmayhem.com, and I highly recommend becoming a member. Another good place is tumblr.com, but you have to go through a lot of dirt before you find some true gems. I have made contact via both sites and I recommend them.

You will need some pictures in your portfolio in order to show that you are - in fact - a photographer with a basic idea of portrait-work. If you shoot portraits (I mean really portraits, not some snapshots of granny and the kids under the Christmas-tree), you probably have enough photos on your disk to state the point. But if you don’t and you want to start (nude) portraits, spend some money on a workshop. I did that twice and it really helped me in several ways: communication with the model, how to start a session, do’s and don’ts - and at the end of the day you will drive home with a handful of pictures for your portfolio.

Hannah by Stefan Schmitz

Speaking of gear, what are you shooting with currently (or what is your favorite setup)?

Gear is overrated. I have been with Nikon since 1979 and today I own and use two bodies: a 1975 Nikon F2 photomic (bought used in ’82), loaded with Kodak Tri-X, and a Nikon D610 DSLR. 90% of my pictures are shot with a 50mm standard lens. Next on the list is the 35mm - you will need that in small rooms when the 50mm is already a bit too long and you want to keep some distance. I happen to own an 85mm, but the locations I book and shoot rarely offer enough space to make use of that lens.

There are these cheap, circular 1m silver reflectors on Amazon. They cost about 15 €/$ and you get a crappy stand for the same price. That stuff is pure gold - I use the reflector a lot and I highly recommend learning how to work with it. It’s my little secret weapon when I shoot against the light (see Alice here above).

A camera with a reasonably fast standard lens, a second battery and a silver reflector is all I need. The rest is luxury for me, but I am pretty much a one-trick-pony. Other photographers will benefit more from a bigger kit.

Most of your images appear to be making great use of natural light. Do you use other lighting gear (speedlights, monoblocks, modifiers, etc)?

Right - available light is where it’s at. I very rarely shoot with a flash kit today because it distracts me from the work with the model. I’m a loner on the set, no assistants or friends who come and help, so everything must be totally simple and foolproof.

Saying that, I own an alarming number of speedlights, umbrellas, triggers and softboxes, but I don’t need that gear very often. I try to visit the locations before I shoot. I check the directions and plan for a realistic timeframe, so today I will neither find myself in a totally dark dungeon nor in a sun-filled room with contrasts à gogo. Windows to the west - shoot in the morning, windows facing south-east: shooting in the (late) afternoon.

Karolina Lewschenko by Stefan Schmitz

Here’s a shot of Karolina Lewschenko. We took this photo in a hotel room by the end of October and the available (window) light got too weak, so I used an Aurora Firefly 65 cm softbox with a Metz speedlight and set-up some classic Rembrandt-Light. I packed that gear because I knew that our timeframe wasn’t guaranteed to work out perfectly. “Better be safe than sorry”.

Franzi Skamet by Stefan Schmitz

Do you pre-visualize and plan your shoots ahead of time usually, or is there a more organic interaction with the model and the space you’re shooting in?

Yes, I do. When I visit a place, a possible location, I have some ideas of where to shoot, what furniture to push around and what pose to try. I can pretty much see the final picture (or my idea of it) before I book the model. Having said that, you know that no battle plan has ever survived the first shot fired…

When the model arrives, we take some time to walk around the locations and discuss possible sets. We will then start to shoot fully clothed in order to get used to one another and see how the light will be on the final shots. It’s very important for me to get feedback from the model. She might say that a pose is difficult for her or hurts after a few seconds, that she’s not comfy with something or that she would like to try a totally different thing here. I always pay a lot of attention to those ideas and - out of experience - those shots based on the model’s ideas are in general among the best of the day.

Karolina Lewschenko by Stefan Schmitz

I mean we’re not here because I shoot bugs or furniture, you don’t give me the opportunity to express myself here because you are a fan of crickets; all the attention is linked to the beautiful women on my photos and how they connect with the beholder. I am just the one who captures the moments, it’s the models who fill those moments with intensity and beauty. It would be very stupid of me not to cooperate with a model who knows how to present herself and who comes up with her own ideas.

Always listen to the model, always communicate, never go quiet.

The discussion with the model also includes what degree of nudity we consider. So the second round of photos starts with the “open shirt” or topless shots before the model undresses completely. If we take photos in lingerie, we do that last (after the nudes) because lingerie often leaves traces on the skin and we don’t want that to show.

Franzi Skamet by Stefan Schmitz

It is important to know what to do and in what order. You don’t want to have a nude model standing in front of you, asking “what’s next?” and you answer “I dunno - maybe (!) try this or that again”. If you lose your directions for a moment, just say so or say “please get your bathrobe and let’s have a look at the last pictures together”. If you are “not sure”, the model might be “not comfy”, and that’s something we want to avoid.

Would you describe your workflow a bit? Which projects do you use regularly?

A typical session is 90 to 120 minutes and I will end-up with about 500 exposures on the SD-card and maybe a roll of exposed Kodak Tri-X. The film goes to a lab and I will get the negatives and scans back within 15 to 30 days.

There’s two SD-cards, one with RAW files that I import with gThumb to /photos/year/month/day. The other card holds fine-quality JPG and those go to /pictures/year/name_of_model. My camera is already set to monochrome, I get every picture I shoot in b/w on the camera-screen and the JPG-files are also monochrome.

Next step is a pre-selection in Geeqie. That’s one great picture viewer and I delete all the missed shots (bad framing, out of focus etc.) and note/mark all the promising/good shots here. This is normally the end of day one.

Switching from RAWstudio to darktable has been a giant step for me. dt is just a great program and I still learn about new functions and modules every day. The file comes in, is converted to monochrome, and afterwards color saturation and lights (red and yellow) are manipulated. This way I can treat the skin (brighter or darker) without influencing the general brightness of the picture. Highlights and lowlights may be pushed a bit to the left, I add the signature and a frame 0.5% wide, and lens correction is set automatically. That’s the whole deal. On very rare occasions I add some vignette or drop the brightness gradually from top to bottom, but again: it doesn’t happen all that often. I never cut, crop or re-frame a shot. WYSIWYG. Cropping something out, turning the picture in order to get perfectly vertical lines or the like - it all feels like cheating. I have no client to please, no deadline to meet, I can take a second longer and frame my photo when I look through the viewfinder.

Franzi Skamet by Stefan Schmitz

The photos will then be treated in the GIMP. Some dodge and burn (especially when there are problematic, very high or low contrasts), maybe stamp an electric plug away and in the end I re-size them down to 2560 on the long side (big enough for A3 prints) and (sometimes) apply the sharpening tool with value 20 or 25. Done. I can’t save a crappy shot in post-prod and I won’t try. Out of the 500 or so frames, 10 to 15 will be processed like that and it feels like nothing has changed over the last 40 years. The golden rule was “one good shot per roll of film” and I happen to be there, too. Spot-on!

I load those 15 pictures up on my Flickr account and about once or twice a week I place a shot in the many Flickr groups. Also once a week (or every ten days) I post a photo on my Tumblr account. Today I have about 5k followers and my photos are seen between 500’000 and one million times a month, depending on the time of year and weather. There’s less traffic on warm summer days and more during cold and rainy winter-nights.

It takes me some time before I add a shot to my own website. In comparison I show few photos there, every one for a reason, and I point people to that address, so I hope I only show the best.

Aya Kashi by Stefan Schmitz

Is your choice to use Free Software for pragmatic reasons, or more idealistic?

I owned an Apple II in 1983 and a Digital MicroVAX in 1990 or so. My way to FOSS started out pragmatic and it became a conviction later on. In the late 90’s and early 2000’s I had my own small business and worked with MS Office on a Win NT machine. Photos were processed with a Nikon film-scanner through the proprietary software into an illegal copy of Adobe PS4. It was OK, stable, and I didn’t fear anything, but I wasn’t really happy either. One day I swung over to StarOffice/OpenOffice.org for financial reasons and I also got rid of that unlicensed PS and installed the GIMP (I don’t know what version, but I upgraded some time later to 1.2, that’s for sure). I had internet access and an email address since 1994, but in the late 90’s big programs still came on CDs attached to computer magazines. Downloading the GIMP was out of the question.

Gaming was never my thing, and when I installed Win XP, all hell broke loose - keeping a computer safe, virus-free and running wasn’t easy before the first service pack, and MS reacted way too slowly in my opinion - so I tried Debian (10 CD kit) on my notebook, got it running, found the GIMP and OOo - and that was it. It took a bit of trial and error and I had to buy a number of W-Lan sticks because very few were supported and so on, but in the end I got the machines running.

Later on I got hold of an Ubuntu 7.10 CD, tried that and never looked back. The few changes on my system were from GNOME to the XFCE desktop and from Thunderbird to a browser-based mail client. Xubuntu is a no-brainer, it runs stable and fast. Every December I contribute 100 € to FOSS. That’s in general 50 and 40 to two projects and a tenner to Wikipedia. I’d spend an extra tenner on any project that helps to convert old StarOffice files (.sdw and so on) to today’s standards (odt…), but nobody seems interested.

What is one piece of advice you would offer to another photographer?

Don’t take any advice from me, I’m still learning myself. Or wait: be kind and a gentleman with the models. They all - each and every one of them - have had bad experiences with photographers who forgot that the models are nude for the camera, not for the man behind it. They all have been in a room with a photographer who breathes a bit too hard and doesn’t get his gear working … don’t be that arsewipe!

Irina by Stefan Schmitz

Arrange for a place where the model can undress in privacy - she didn’t come for a strip-show and you shouldn’t try to make it one. Have some bottles of water at hand and talk about your plans, poses and sets with the model. Few people can read minds, so communication works best when you say what you have in mind and the model says how she thinks this can be realized. The more you talk, the better you communicate, the better the pictures. No good photo has ever been shot during a quiet session, believe me.

In general the model will check your portfolio/website and expect to do more or less the same kind of work with you. If you want to do something different, say so when booking the model. If your website shows a lot of nude portraits, models will expect to do that kind of photos. They may be a bit upset if you ask them out of nowhere to wear a latex suit because it’s fetish-Friday in your world. The more open and honest you are from the beginning, the better the shooting will go down.

Irina by Stefan Schmitz

Don’t overdo the gear thingy. 90% of my photos are taken with the 50mm standard lens. Period. Sometimes I have to switch to 35mm because the room is a bit too small and the distance too close for the one-four/fifty, so everything I bring to an indoor shooting is the camera, a 50, a 35, an el-cheapo 100cm reflector from Amazon (+/- 15 €/$) and an even cheaper stand for the reflector. Gear is not important, communication is.

Want to spend 300 €/$ on new gear? Spend it on a workshop. Learn how to communicate, get inspiration and fill your portfolio with a first set of pictures, so the next model you email can see that you already have some experience in the field of (nude) portraits. That’s more important than a new flashlight in your bag.

Isabelle Descamps by Stefan Schmitz

Thank You Stefan!

I want to thank Stefan again for taking the time and being patient enough to chat with me!

Stefan is currently living in Northern France. Before that he lived and worked in Miami, FL, and Northern Germany where he is from, went to school, and met his wife. His main website is at https://whatstefansees.com/, and he can be found on Flickr, Facebook, Twitter, Instagram, and Tumblr.

Unless otherwise noted, all of the images are copyright Stefan Schmitz (all rights reserved) and are used with permission.

May 02, 2018

Bíonn gach tosach lag*

Tá mé ag foghlaim Gaeilge; tá uaim scríobh postálacha blag as Gaeilge, ach níl mé oilte ar labhairt nó scríbh as Gaeilge go fóill. Tiocfaidh sé le tuilleadh cleachtaidh.**

Catching up

I have definitely fallen off the blog wagon; as you may or may not know, the past year has been quite difficult for me personally, far beyond being an American living in Biff Tannen’s timeline these days. Blogging definitely was pushed to the bottom of the formidable stack I must balance, but in hindsight I think the practice of writing is beneficial no matter what it’s about, so I will carve out regular time to do it.

Tá mé ag foghlaim Gaeilge

This post title and opening is in Irish; I am learning Irish and trying to immerse myself as much as one can outside of a Gaeltacht. There’s quite a few reasons for this:

  • The most acute trigger is that I have been doing some genealogy and encountered family records written in Irish. I couldn’t recall enough of the class I’d taken while in college and got pulled in wanting to brush up.
  • Language learning is really fun, and Irish is of course part of my heritage and I would love to be able to teach my kids some since it’s theirs, too.
  • One of the main reasons I took Japanese in college for 2 years is because I wanted to better understand how kanji worked and how to write them. With Irish, I want to understand how to pronounce words, because from a native English speaker point of view they sound very different than they look!
  • Right now appears to be an exciting moment for the language; it has shed some of the issues that I think plagued it during ‘The Troubles’ and you can actually study and speak it now without making some kind of unintentional political statement. There’s far more demand for Gaelscoils (schools where the medium for education in all subjects is Irish) than can be met. In the past year, the Pop Up Gaeltacht movement has started and really caught on, a movement run in an open source fashion I might add!
  • I am interested in how the brain recovers from trauma and I’ve a little theory that language acquisition could be used as a model for brain recovery and perhaps suggest more effective therapies for that. Being knee deep in language learning, at the least, is an interesting perspective in this context.
  • I also think – as a medium that permeates everything you do, languages are similar to user interfaces – you don’t really pay attention to a language when you speak it if you’re fluent, it’s just the medium. Where you pay attention to the language rather than the content is where you have a problem speaking it or understanding it. (Yes, the medium is the message except when it isn’t. 🙂 )Similarly, user interfaces aren’t something you should pay attention to – you should pay attention to the content, or your work, rather than focus on the intricacies of how the interface works. I think drawing connections between these two things is at least interesting, if not informative. (Can you tell I like mashing different subjects together to see what comes out?)

Anyway, I could go on and on, but yes, $REASONS. I’m trying to learn a little bit every day rather than less frequent intensive courses. For example, I’m trying to ‘immerse’ as I can by using my computers and phone in the Irish language, keep long streaks in the Duolingo course, listen to RnaG and watch TG4 and some video courses, and some light conversation with other Irish learners and speakers.

Maybe I’ll talk more about the approach I’m taking in detail in another post. In general, I think a good approach to language learning is a policy I try to subscribe to in all areas of life – just f*ing do it (speak it, write it, etc. Do instead of talking about doing. Few things infuriate me more although I’m as guilty as anyone. 🙂 ) There you go for now, though.

What else is going on?

I have been working on some things that will be unveiled at the Red Hat Summit and don’t want to be a spoiler. I am planning to talk a bit more about that kind of work here. One involves a coloring book :), and another involves a project Red Hat is working on with Boston University and Boston Children’s Hospital.

Just this week, I received my laptop upgrade 🙂 It is the Thinkpad Yoga X1 3rd Gen and I am loving it so far. I have pre-release Fedora 28 on it and am very happy with the out-of-the-box experience. I’m planning to post a review about running Fedora 28 on it soon!

Slán go fóill!

(Bye for now!)

* Every beginning is weak.

** I’m learning Irish; I want to write blog posts in Irish, but I don’t speak or write Irish well enough yet. It’ll come with practice. (Warning: This is likely Gaeilge bhriste / broken Irish)

FreeCAD BIM development news - April 2018

Hello everybody, This is time for a new report on FreeCAD development, particularly the development of BIM tools. To resume the affair for who is new to this column, I recently started to "divide" the development of BIM tools in FreeCAD between the original Arch, which is included in FreeCAD itself, and the new BIM...

May 01, 2018

Goodbye Kansas Studios

Goodbye Kansas Studios is a VFX studio that creates award-winning visual effects, digital animation and motion capture for movies, game trailers and commercials. Goodbye Kansas Studios’ main office is in Stockholm, Sweden, but they are also located in Los Angeles, London, Hamburg and Uppsala.

Goodbye Kansas Studio

Text by Nils Lagergren and Daniel Bystedt, Goodbye Kansas

We pride ourselves on having a structure at work where we put the artists first and the administration works hard to support them. This has in turn created a company culture where artists help each other out as soon as they run into any CG related issue. We also have a very strong creative atmosphere where artists feel ownership of their tasks and go out of their way to achieve visual excellence.

At Goodbye Kansas Studios we use several 3D applications, such as Houdini, Blender, Zbrush and Maya. We always try to approach a challenge with the tool that is best suited for solving the problem at hand. Blender first caught our eye because some of our artists had started trying it out and were surprised by how much faster they could produce models. Even though not every artist at the company uses Blender, it is becoming more and more popular in the modeling department at the Stockholm office. Let’s have a look at some projects!

Characters for Unity – Adam Demo

Characters were modeled in Blender and Zbrush. The low poly version of the character was entirely done in Blender.

Blender fits nicely into our pipeline because of its powerful modeling tools. We also use it for hair grooming, which then is exported as curves and used for procedural hair setups in other packages. Blender has a very nice mix between procedural tools, standard box modeling and sculpting. Generally we use Zbrush for character work and Blender for hard surface and props/environment work. We also use it in parts of our environment workflow for scattering objects.

Walking dead – season 8

Retopology and UV-mapping of human actor scans were done in Zbrush and Blender. Grooming of hairstyles was also done in Blender.

Here is some of what artists say about Blender:

“Things that are very complex to achieve in other applications are suddenly easy!”
“As a modeler it’s a program that works with you, instead of against you.”
“Suddenly I love Dutch people”
“It made box modelling fun again”
“It feels so strange that Blender is free when it’s actually better than most other modeling programs on the market”

Overkill’s: The Walking dead – Aidan trailer

“Upresolution” of zombie game assets was done both in Zbrush and Blender. Grooming of zombie hairstyles was done in Blender, and we also made a bunch of environment assets.

Along with the gods – The two worlds

The stone chamber was created with Blender. There was a lot of tedious work with placing rocks so they would not intersect in this environment. Thanks to Blender’s fast rigid body simulation system, we could simulate a low resolution version of the rocks and drop them in place. The rocks were then relinked to a high resolution version and published as an environment model. The stone characters in this scene were also done in Blender in two passes. First, the rocks were scattered onto a human base mesh and then they were nudged around by hand for better art direction. The big stone walls were also sculpted in Blender.

Biomutant – cinematic trailer

The little hero character was modeled in Zbrush and Blender. Grooming of the fur was done in Blender.

Raid: World War 2 – Cinematic Trailer

Several environments were done in Blender. We started the layout process using Grease Pencil. This was great, since we could do it very quickly, side-by-side with the art director, and address his thoughts and notes. This Grease Pencil sketch was later linked into each environment artist's scene so they had a good reference when building it. The environment artists also linked each other's scenes so that they could see each other's work update. This made it easy to tie the separate rooms together.

Mass Effect: The Andromeda Initiative

The Moon environment was made in Blender. Being able to sculpt the ground at the same time as scattering out rocks made it really easy to iterate the shot and see how everything looked in the camera. By importing the character animation with Alembic from Maya to Blender, the environment artist could make sure that nothing intersected the characters' feet while they were walking. This also enabled us to create the environment at the same time as we were animating the shots.

April 30, 2018

Interview with JK Riki

Could you tell us something about yourself?

Hi everyone! My name is JK. I am an animator, graphic designer, author, and the Art-half of the Weekend Panda game studio.

Do you paint professionally, as a hobby artist, or both?

My full time job in game development has me doing art professionally, but I’m always working on improving my skills by doing digital painting as a hobby as well – so a little bit of both.

What genre(s) do you work in?

My most practiced genre is the comic/cartoon art style seen in the image above, which I have a lot of fun doing. I also strive to push beyond my comfort zone and try everything from fully rendered illustrations to graphic styles.

I want to continue to improve all-around as an artist so every genre becomes a possibility.

Whose work inspires you most — who are your role models as an artist?

* In animation: Glen Keane, who worked on things like Ariel in The Little Mermaid and Ratigan in The Great Mouse Detective (or as some know it, Basil of Baker Street).
* In comics: Bill Amend, who does the syndicated comic strip Fox Trot.
* In figure drawing: Samantha Youssef, who runs Studio Technique and has been a wonderful mentor.
* In painting: There are so many, and I seem to find more every day!

How and when did you get to try digital painting for the first time?

I imagine the first time I tried it was back in Art School, though that’s probably close to 15 years ago, so the memories are hazy.

What makes you choose digital over traditional painting?

I am a big proponent of “Fail fast and often.” Digital painting allows for just that. I can make (and try to correct) 20 mistakes digitally in the time it takes to pinpoint and alter one mistake traditionally.

Of course, I still love traditional art, even though I find it takes far longer to do. I have sketchbooks littered around my office, and would happily animate with paper and pencil any time any day.

How did you find out about Krita?

It was actually from my wife, who is a software engineer! She needed to do some graphics for a project at her old job, and wanted to find a free program to do it. After Adobe went to a forced subscription-only model, I was looking to make a change, and she showed me Krita.

What was your first impression?

Well, to be honest, I have a hard time learning new programs, so initially I was a little bit resistant! There were so many brushes, and I had to adapt to the differences between Krita and Photoshop. It won me over far more quickly than any other program, though. The flow and feel of painting and drawing in Krita is on a whole different level, probably because it was designed with that in mind! I would never want to go back now.

What do you love about Krita?

Every day I find new tools and tricks in Krita that blow me away. I recently discovered the Assistant Tool and it was practically life-changing. I can do certain things so much faster thanks to learning about that magical little icon.

I also adore so many of the brush presets. They seem much more aligned with what I’m trying to do than the ones that come with other art programs.

The fact that Krita is free is icing on the cake. (Spoiler: Artists love free stuff.)

What do you think needs improvement in Krita? Is there anything that really annoys you?

I’ve never quite gotten used to the blending mode list/UI in Krita vs. Photoshop. The PS one just feels more intuitive to me. I’d love to see an option to make the Krita drop down menu more like that one.

What sets Krita apart from the other tools that you use?

Apart from the price tag, Krita is just more fun to work in than most other programs I use. I genuinely enjoy creating art in Krita. Sometimes with other programs it feels like half of my job is fighting the software. Rarely do I feel that way in Krita.

If you had to pick one favourite of all your work done in Krita so far, what would it be, and why?

You torture me, how can I choose?! I suppose it would be this one:

It may not be the most finished or technically impressive art I’ve ever done, but it was one of the first times digital painting really clicked with me and I thought “Hey, maybe I can do this!” I’ve always felt an affinity for comic and cartoon style, but realism often eludes me. This piece proved in some small way that my practice was starting to pay off and I was getting somewhere. It felt like a turning point. So even if no one else feels the same way, this little bird will always be special to me.

What techniques and brushes did you use in it?

My most-used brushes are Ink_tilt_10 and Ink_tilt_20 (as seen in this screen capture!)

These days I use many more brushes and techniques, but that whole image was done with just those two, and different levels of flow and opacity. I didn’t even know about the Alpha Lock on the layers panel for this, which I use now in almost every digital painting.

Where can people see more of your work?

People can PLAY some of my work in the mobile game The Death of Mr. Fishy! All the art assets for that game were done in Krita. I’m doing more art for our next game right now as well. The latest details will always be posted at WeekendPanda.com.

I also share my practice art and work-in-progress on my personal Twitter account which is @JK_Riki.

Anything else you’d like to share?

Yes. A note to other artists out there: You can have the greatest tools and knowledge in the world but if you don’t practice, and truly put in the work, you will never achieve your best art. It is hard. I know, I’m with you there. It’s worth it, though. Work hard, practice a ton, and we’ll all improve together. Let’s do it! And if you ever need someone to encourage you to keep going, send me a note! 🙂

April 28, 2018

Displaying PDF with Python, Qt5 and Poppler

I had a need for a Qt widget that could display PDF. That turned out to be surprisingly hard to do. The Qt Wiki has a page on Handling PDF, which suggests only two alternatives: QtPDF, which is C++ only so I would need to write a wrapper to use it with Python (and then anyone else who used my code would have to compile and install it); or Poppler. Poppler is a common library on Linux, available as a package and used for programs like evince, so that seemed like the best route.

But Python bindings for Poppler are a bit harder to come by. I found a little one-page example using Poppler and Gtk3 via gi.repository ... but in this case I needed it to work with a Qt5 program, and my attempts to translate that example to work with Qt were futile. Poppler's page.render(ctx) takes a Cairo context, and Cairo is apparently a Gtk-centered phenomenon: I couldn't find any way to get a Cairo context from a Qt5 widget, and although I found some web examples suggesting renderToImage(), the Poppler available in gi.repository doesn't have that function.

But it turns out there's another Poppler: popplerqt5, available in the Debian package python3-poppler-qt5. That Poppler does have renderToImage, and you can take that image and paint it in a paint() callback or turn it into a pixmap you can use with a QLabel. Here's the basic sequence:

    from popplerqt5 import Poppler
    from PyQt5.QtGui import QPixmap

    document = Poppler.Document.load(filename)
    document.setRenderHint(Poppler.Document.TextAntialiasing)
    page = document.page(pageno)
    img = page.renderToImage(dpi, dpi)

    # Use the rendered image as the pixmap for a label:
    pixmap = QPixmap.fromImage(img)
    label.setPixmap(pixmap)

The line to set text antialiasing is not optional. Well, theoretically it's optional; go ahead, try it without that and see for yourself. It's basically unreadable.

Of course, there are plenty of other details to take care of. For instance, you can get the size of the rendered image:

    size = page.pageSize()
... after which you can use size.width() and size.height(). They're in points. There are 72 points per inch, so calculate accordingly in the dpi values you pass to renderToImage if you're targeting a specific DPI or need it to fit in a specific window size.
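
For example, here's a minimal sketch (not the viewer's actual code) of picking a DPI so the rendered page fills a given window width; the 800-pixel target is just an assumed value:

    # Sketch: choose a render DPI so the page fits a target pixel width.
    size = page.pageSize()                   # width/height in points
    target_width = 800                       # assumed window width in pixels
    dpi = 72.0 * target_width / size.width()
    img = page.renderToImage(dpi, dpi)       # render at the computed resolution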

Window Resize and Efficient Rendering

Speaking of fitting to a window size, I wanted to resize the content whenever the window was resized, which meant redefining resizeEvent(self, event) on the widget. Initially my PDFWidget inherited from Qwidget with a custom paintEvent(), like this:

        # (QPainter comes from PyQt5.QtGui, QPoint from PyQt5.QtCore.)
        # Create self.img once, early on:
        self.img = self.page.renderToImage(self.dpi, self.dpi)

    def paintEvent(self, event):
        # Draw the cached page image at the widget's top-left corner:
        qp = QPainter()
        qp.begin(self)
        qp.drawImage(QPoint(0, 0), self.img)
        qp.end()
(Poppler also has a function page.renderToPainter(), but I never did figure out how to get it to do anything useful.)

That worked, but when I added resizeEvent I got an infinite loop: paintEvent() called resizeEvent() which triggered another paintEvent(), ad infinitum. I couldn't find a way around that (GTK has similar problems -- seems like nearly everything you do generates another expose event -- but there you can temporarily disable expose events while you're drawing). So I rewrote my PDFWidget class to inherit from QLabel instead of QWidget, converted the QImage to a QPixmap and passed it to self.setPixmap(). That let me get rid of the paintEvent() function entirely and let QLabel handle the painting, which is probably more efficient anyway.
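
Here's a minimal sketch of that QLabel-based approach, assuming a popplerqt5 page object; the class and method names are illustrative, not the exact code from my viewer:

    from PyQt5.QtWidgets import QLabel
    from PyQt5.QtGui import QPixmap

    class PDFWidget(QLabel):
        def __init__(self, page, dpi=72):
            super().__init__()
            self.page = page        # a popplerqt5 Poppler page
            self.dpi = dpi
            self.render_page()

        def render_page(self):
            # Render the page and let QLabel handle all the painting.
            img = self.page.renderToImage(self.dpi, self.dpi)
            self.setPixmap(QPixmap.fromImage(img))

        def resizeEvent(self, event):
            # Re-render at a DPI that fits the new width (72 points per inch).
            new_dpi = 72.0 * event.size().width() / self.page.pageSize().width()
            if abs(new_dpi - self.dpi) > 0.5:   # skip pointless re-renders
                self.dpi = new_dpi
                self.render_page()
            super().resizeEvent(event)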

Showing all pages in a scrolled widget

renderToImage gives you one image corresponding to one page of the PDF document. More often, you'll want to see the whole document laid out, with all the pages. So you need a way to stack a bunch of widgets vertically, one for each page. You can do that with a QVBoxLayout on a widget inside a QScrollArea.

I haven't done much Qt5 programming, so I wasn't familiar with how these QVBoxes work. Most toolkits I've worked with have a VBox container widget to which you add child widgets, but in Qt5, you create a widget (no particular type -- a QWidget is enough), then create a layout object that modifies the widget, and add the sub-widgets to the layout object. There isn't much documentation for any of this, and very few examples of doing it in Python, so it took some fiddling to get it working.
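
Assuming one PDFWidget per page, like the sketch above, the structure ends up looking roughly like this:

    from PyQt5.QtWidgets import QScrollArea, QWidget, QVBoxLayout

    scroll = QScrollArea()
    container = QWidget()              # a plain QWidget is enough
    layout = QVBoxLayout(container)    # the layout object modifies the container

    for pageno in range(document.numPages()):
        layout.addWidget(PDFWidget(document.page(pageno)))

    scroll.setWidget(container)
    scroll.setWidgetResizable(True)    # let the container track the viewport width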

Initial Window Size

One last thing: Qt5 doesn't seem to have a concept of desired initial window size. Most of the examples I found, especially the ones that use a .ui file, use setGeometry(); but that requires an (X, Y) position as well as (width, height), and there's no way to tell it to ignore the position. That means that instead of letting your window manager place the window according to your preferences, the window will insist on showing up at whatever arbitrary place you set in the code. Worse, most of the Qt5 examples I found online set the geometry to (0, 0): when I tried that, the window came up with the widget in the upper left corner of the screen and the window's titlebar hidden above the top of the screen, so there's no way to move the window to a better location unless you happen to know your window manager's hidden key binding for that. (Hint: on many Linux window managers, hold Alt down and drag anywhere in the window to move it. If that doesn't work, try holding down the "Windows" key instead of Alt.)

This may explain why I've been seeing an increasing number of these ill-behaved programs that come up with their titlebars offscreen. But if you want your programs to be better behaved, it works to self.resize(width, height) a widget when you first create it.

The current incarnation of my PDF viewer, set up as a module so you can import it and use it in other programs, is at qpdfview.py on GitHub.

April 26, 2018

GIMP 2.10.0 Released

The long-awaited GIMP 2.10.0 is finally here! This is a huge release, which contains the result of 6 long years of work (GIMP 2.8 was released almost exactly 6 years ago!) by a small but dedicated core of contributors.

The Changes in short

We are not going to list the full changelog here, since you can get a better idea with our official GIMP 2.10 release notes. To get an even more detailed list of changes please see the NEWS file.

Still, to get you a quick taste of GIMP 2.10, here are some of the most notable changes:

  • Image processing nearly fully ported to GEGL, allowing high bit depth processing, multi-threaded and hardware accelerated pixel processing, and more.
  • Color management is a core feature now, most widgets and preview areas are color-managed.
  • Many improved tools, and several new and exciting tools, such as the Warp transform, the Unified transform and the Handle transform tools.
  • On-canvas preview for all filters ported to GEGL.
  • Improved digital painting with canvas rotation and flipping, symmetry painting, MyPaint brush support…
  • Support for several new image formats added (OpenEXR, RGBE, WebP, HGT), as well as improved support for many existing formats (in particular more robust PSD importing).
  • Metadata viewing and editing for Exif, XMP, IPTC, and DICOM.
  • Basic HiDPI support: automatic or user-selected icon size.
  • New themes for GIMP (Light, Gray, Dark, and System) and new symbolic icons meant to somewhat dim the environment and shift the focus towards content (former theme and color icons are still available in Preferences).
  • And more, better, more, and even more awesome!

» READ COMPLETE RELEASE NOTES «

Enjoy GIMP!

Wilber likes it spicy!

Profiling a camera with darktable-chart

Figure out the development process of your camera

What is a camera profile?

A camera profile is a combination of a color lookup table (LUT) and a tone curve which is applied to a RAW file to get a developed image. It translates the colors that a camera captures into the colors they should look like. If you shoot in RAW and JPEG at the same time, the JPEG file is already a developed picture. Your camera can do color corrections to the data it gets from the sensor when developing a picture. In other words, if a certain camera tends to turn blue into turquoise, the manufacturer's internal profile will correct for the color shift and convert those turquoise values back to their proper hue.

The camera manufacturer creates a tone curve for the camera, understands what color drifts the camera tends to capture, and can correct them. We can mimic what the camera does using a tone curve and a color LUT. We want to do this because the base curves provided by darktable are generalised for a manufacturer's sensor behaviour, but individually profiling your camera can provide better color results.

Why do we want a color profile?

The camera captures light as linear RGB values. RAW development software needs to transform those into CIE XYZ tristimulus values for mathematical calculations. The color transformation is often done under the assumption that the conversion from camera RGB to CIE XYZ is a linear 3x3 mapping. Unfortunately it is not, because the process is spectral and the camera sensor has its own spectral sensitivity. In darktable the conversion is done the following way: the camera RGB values are transformed using the color matrix (coming either from the Adobe DNG Converter or from dcraw) to arrive at approximately profiled XYZ values. darktable then provides a color lookup table in Lab color space to fix inaccuracies or implement styles which are semi-camera independent. A very cool feature is that a user can edit this color LUT. The color LUT can be created by darktable-chart, as this article will show, so that you don’t have to create it by hand.
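
To make the “linear 3x3 mapping” concrete, here is a tiny illustrative sketch; the matrix values below are made up, the real ones come from the Adobe DNG Converter or dcraw for your specific camera:

    import numpy as np

    # Hypothetical camera-RGB -> CIE XYZ matrix (illustrative values only).
    M = np.array([[0.41, 0.36, 0.18],
                  [0.21, 0.72, 0.07],
                  [0.02, 0.12, 0.95]])

    camera_rgb = np.array([0.25, 0.40, 0.30])   # linear sensor values
    xyz = M @ camera_rgb                        # approximate XYZ tristimulus values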

What we want is the same knowledge about colors in our raw development software as the manufacturer put into the camera. There are two ways to achieve this. Either we fit to a JPEG generated by the camera, which can also have creative styles applied (such as film emulations or filters), or we profile against real color reproduction. For real color, the color target ships with a file providing the color values for each patch it has.

In summary, we can create a profile that emulates the manufacturer's color processing inside the body, or we can create a profile that renders real color as accurately as possible.

The process for both is nearly identical, and we will note when it diverges in the instructions.

Creating pictures for color profiling

To create the required pictures for camera profiling we need a color chart (aka Color Checker) or an IT8 chart as our target. The difference between a color chart and an IT8 chart is the number of patches and the price. As the IT8 chart has more patches, the result will be much better. Ideally the color target also comes with a grey card for creating a custom white balance. I can recommend the X-Rite ColorChecker Passport Photo. It is small, lightweight, all plastic, a good quality tool and also has a grey card. An alternative is the Spyder Checkr. If you want a better profiling result, a good IT8 chart is the ColorChecker Digital SG.

Note: ArgyllCMS offers CIE and CHT files for different color charts; if you already have one or are going to buy one, check if ArgyllCMS offers support for it first! You can always add support for your color chart to ArgyllCMS, but the process is much more complex. This will be very important later! You can find these files (generally) in:

/usr/share/color/argyll/ref/

We are creating a color profile for sunlight conditions which can be used in various scenarios. For this we need some special conditions.

The Color Checker needs to be photographed in direct light, which helps to reduce any metamerism of colors on the target and ensures a good match to the data file that tells the profiling software what the colors on the target should look like. A major concern, however, is glare, but we can reduce it with some tricks.

One of the things we can do to reduce glare, is to build a simple shooting box. For this we need a cardboard box and three black t-shirts. The box should be open on the top and on the front like in the following picture (Figure 1).

Figure 1: Cardboard box suitable for color profiling

Normally you just need to cut one side open. Then coat the inside of the box with black t-shirts like this:

Figure 2: A simple box for color profiling

To further reduce glare we just need the right location to shoot the picture. Of course a lot depends on where you are located and the time of year, but in general the best time to shoot the target is either 1-2 hours before or 1-2 hours after mid-day (when the sun is at its highest elevation; keep Daylight Saving Time (DST) in mind). Try to shoot on a day with minimal clouds so the sun isn’t changing intensity while you shoot. The higher the temperature, the more water there is in the atmosphere, which reduces the quality of the images for profiling. Temperatures below 20°C are better than above.

In some countries it may not be possible to accurately produce these images with sunlight. This could be due to air pollution (or lack of it), temperature, humidity, latitude, and atmospheric conditions. For example, in Australia one might be unable to use sunlight to create this profile, and would have to use a set of color balanced bulbs with the same box setup instead.

Shooting outdoors

If you want to shoot outdoors, look for an empty tarred parking lot. It should be pretty big, like the one at a mall, without any cars or trees. You should be far away from walls or anything that can reflect. Put the box on the ground and shoot with the sun behind you, above your right or left shoulder. You can use black fabric (bed sheets) if the ground reflects.

Shooting indoors

Find a place indoors where you can put the box in the sun and place your camera, on a tripod, in the shadow. The darker the room the better! Garages with an additional garage door are great. The sun also needs to shine at an angle on the Color Checker; this means you photograph the color chart with the sun behind you, above your right or left shoulder. Use black fabric to cover anything which could reflect.

Shooting indoors with artificial light

Avoid all windows and stained glass. Create the box as mentioned, and arrange it in a V shape with your tripod: at the top left of the V is the camera, at the bottom is the color target, at the top right is the light source. The light source should be bright and even across the room and your setup. Position yourself underneath it to avoid all shadows.

How to shoot the target?

  1. Put your shooting box in the light and set up your camera on a tripod. It is best to have the camera looking down on the color chart like in the following picture:
Figure 3: Camera doing a custom white balance with the color profiling box
  2. You should use a prime lens for taking the pictures; if possible a 50mm or 85mm lens (or anything in between). The less glass the light has to travel through, the better it is for profiling, and those two focal lengths are a good compromise between the number of glass elements and their field of view. With a tele lens we would be too far away, and with a wide angle lens we would need to be too near to have just the black box in the picture.

  3. If your camera has a custom white balance feature and you have a gray card provided with your target, create a custom white balance with it and use it (see Figure 3). Put the gray card in your black box in the sunlight at the same position as the Color Checker.

  4. Set your metering mode to matrix metering (evaluative metering - this is often a symbol with 4 boxes and a circle in the centre) and use an aperture of at least f/4.0.

  5. Make sure the color chart is parallel to the plane of the camera sensor so all patches of the chart are in focus. The color chart should be in the middle of the image, using about 1/3 of the frame, so that vignetting is not an issue.

  6. Set the camera to capture “RAW & JPEG” and disable lens corrections (vignetting corrections) for JPEG files if possible. This is important for both JPEG and real color fitting.

Now you want to begin taking images. We want to have a camera profile for the most-used ISO values, so for each ISO value you need to take 4 pictures of your target: one photo each at -1/3 EV, 0 EV, 1/3 EV and 2/3 EV. This way, if an image is over- or under-exposed, you have the step above or below it that may be exposed correctly. Start with ISO 100 and don’t shoot the extended ISO values (50, 64, 80). On some cameras (e.g. Fuji) ISO 100 is an extended value, so start at ISO 200 there. Extended values are normally captured with the lowest physical ISO, overexposed, and then the exposure is reduced in software, so use the lowest ISO profile for them. If you approach the maximum shutter speed (commonly 1/8000, but 1/4000 is not rare), start to close the aperture. Remember, your camera may show a 1/8000 shutter for the 0 EV image, so the -1/3 EV image may be over-exposed. I started to close the aperture at 1/5000.

Creating profiles for values above ISO 12800 doesn’t really make sense; above roughly ISO 6400 the results are probably no longer 100% accurate anyway. You can use the profile for ISO 6400 on higher values. This is for the same reason as the software-processed low ISO values: many mirrorless cameras have a maximum physical ISO, and all values above it are software processed. Some have a maximum as low as ISO 2000, others go to ISO 6400.

Once you have done all the required shots, it is time to download the RAW and JPEG files to your computer.

Verifying correct images in darktable

For verifying the images we need to know the L-value from the Lab color space of the neutral gray field in the gray ramp of our color target. For the ColorChecker Passport we can look it up in the color information (CIE) file (ColorCheckerPassport.cie) shipping with ArgyllCMS, which should be located at:

/usr/share/color/argyll/ref/ColorCheckerPassport.cie

The ColorChecker Passport actually has two gray ramps. The neutral gray field is the field on the bottom right of the color target ramp and is called D1. On the creative enhancement target, it is on the top right and is called NEU8. If we check the CIE file, we find that the neutral gray field D1 has an L-value of L=96.260066; let's round it to L=96. For other color targets you can find the L-value in the description or specification of your target, often it is L=92. Better check the CIE file!
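
If you'd rather not search the CIE file by eye, here is a small sketch that prints the L-value of one patch. It assumes the usual CGATS-style layout with SAMPLE_ID and LAB_L columns; adjust the path and patch name for your target:

    # Sketch: print the LAB_L value of a single patch from an ArgyllCMS CIE file.
    cie_path = "/usr/share/color/argyll/ref/ColorCheckerPassport.cie"
    patch = "D1"                      # the neutral gray patch on this target

    with open(cie_path) as f:
        lines = [l.split() for l in f if l.strip()]

    # Column names are listed between BEGIN_DATA_FORMAT and END_DATA_FORMAT.
    fmt_start = next(i for i, l in enumerate(lines) if l[0] == "BEGIN_DATA_FORMAT")
    fmt_end = next(i for i, l in enumerate(lines) if l[0] == "END_DATA_FORMAT")
    columns = [name for l in lines[fmt_start + 1:fmt_end] for name in l]

    # Patch values follow between BEGIN_DATA and END_DATA.
    data_start = next(i for i, l in enumerate(lines) if l[0] == "BEGIN_DATA")
    data_end = next(i for i, l in enumerate(lines) if l[0] == "END_DATA")
    for row in lines[data_start + 1:data_end]:
        values = dict(zip(columns, row))
        if values.get("SAMPLE_ID") == patch:
            print(patch, "L =", values["LAB_L"])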

You then open the RAW file in darktable and disable most modules, especially the base curve! Select the standard input matrix in the input color profile module and disable gamut clipping. Make sure “camera white balance” is selected in the white balance module. If lens corrections are automatically applied to your JPEG files, you need to enable lens corrections for your RAW files too! Only apply what has also been applied to the JPEG file.

For my configuration I was left with the following modules enabled:

Output Color Profile
Input Color Profile
Lens Correction (Optional)
Denoise
Demosaic
White Balance
Raw Black/White Point

Apply the changes to all RAW files you have created!

You could consider making a “profiling” style and applying it en masse.

You can also crop the image but you need to apply exactly the same crop to the RAW and JPEG file! (This is why you use a tripod!)

Now we need to use the global color picker module in the darkroom to find out the value of the neutral gray field on the color target.

  • Open the first RAW file in darkroom and expand the global color picker module on the left.
  • Select area, mean and Lab in the color picker and use the eye-dropper to select the neutral gray field of your target. On the Color Checker it’s on the bottom right. Here is an example:
Figure 4: Determining the color of the neutral white patch
  • If the value displayed in the color picker module matches the L-value of the field or is close (+0/-2; this means L=94 to L=96 is acceptable), give the RAW file and the corresponding JPEG file 5 stars. In the picture above it is the first value of (96.491, -0.431, 3.020), i.e. L=96.491, which is what you’re looking for on this color target. You might be looking for e.g. L=92 if you are using a different Color Checker. See above for how to find out the L-value for your target.

  • For real color profiling this is very important to get right. Additionally you want to check that the JPEG is within the same L=96 (+0/-2) tolerance. You do not want overexposure here! If your images are over-exposed, your profile will actually darken the images (which is not what you want).

  • For profile extraction, this is less important, as darktable-chart will extract the differences between the RAW and the JPEG and will assume the camera’s exposure level was correct. This means that if your camera “thinks” a good exposure is L=98 for the JPEG, and the RAW reads as L=85, then your profile needs to create that difference so you get the same effect.

Exporting images for darktable-chart

For exporting we need to select Lab as output color profile. This color space is not visible in the combo box by default. You can enable it by starting darktable with the following command line argument:

darktable --conf allow_lab_output=true

Or you can enable it permanently by setting allow_lab_output to TRUE in darktablerc. Make sure that you have closed darktable before making this change, then reopen it (darktable writes to this file and may erase your change if you edit it while darktable is running).

~/.config/darktable/darktablerc
allow_lab_output=TRUE

As the output format select “PFM (float)” and for the export path you can use:

$(FILE_FOLDER)/PFM/$(MODEL)_ISO$(EXIF_ISO)_$(FILE_EXTENSION)

Remember to select the Lab output color profile here as well.

You need to export all the RAW and JPEG files, not just the RAWs.

Select all 5 star RAW and JPEG files and export them.

Figure 5: Exporting the images for profiling

Profiling with darktable-chart

Before we can start you need the chart file for your color target. The chart file contains the layout of the color checker. For example it tells the profiling software where the gray ramp is located or which field contains which color. For the “X-Rite Colorchecker Passport Photo” there is a (ColorCheckerPassport.cht) file provided by ArgyllCMS. You can find it here:

/usr/share/color/argyll/ref/ColorCheckerPassport.cht

Now it is time to start darktable-chart. The initial screen will look like this:

Figure 6: The darktable-chart screen after startup

Source Image

In the source image tab, select your PFM exported RAW file as the image and your Color Checker chart file as the chart. Then fit the displayed grid onto your image.

Figure 7: Selecting the source image in darktable-chart

Make sure that the inner rectangles of the grid are completely inside the color fields, see Figure 8. If they are too big, you can use the size slider in the top right corner to adjust them. Better too small than too large.

Figure 8: Placing the chart grid on the source image

Reference values

This is the only step where the process diverges for real color vs camera profile creation.

If you are creating a color profile to match the manufacturer's in-body color processing, select 'color chart image' and, as the reference image, select the PFM exported JPEG file which corresponds to the RAW file in the source image tab. Once opened, you need to resize the grid again to match the Color Checker in your image. Adjust the size with the slider if necessary.

Figure 9: Selecting the reference values for profiling in darktable-chart

If you are creating a color profile for real color, select the mode as cie/it8 file and load the corresponding CIE file for your color target. If you have issues with this, run darktable-chart from the CLI and check the output. I found that my chart would not open with:

error with the IT8 file, can't find the SAMPLE_ID column

It’s worth checking the ‘Lab (reference)’ values at the bottom of the display to ensure they match what you expect and were correctly loaded. I saw some cool (but incorrect) results when they did not load!

Process

In this tab you’re asked to select the patches with the gray ramp. For the ‘X-Rite Color Checker Passport’ these are the ‘NEU1 .. NEU8’ fields. The number of final patches defines how many editable color patches the resulting style will use within the color look up table module. More patches give a better result but slow down the process. I think 32 is a good compromise.

Once you have done this click on ‘process’ to start the calculation. The quality of the result in terms of average delta E and maximum delta E are displayed below the button. These data show how close the resulting style applied to the source image will be able to match the reference values – the lower the better.

You must click process each time you change source images or reference chart to generate the new profiles. Sometimes process is “greyed out”, so simply toggling the grey ramp setting will reactivate it.

After running ‘process’, click on ‘export’ to save the darktable style.

Figure 10: Processing the image in darktable-chart

In the export window you should already get a good name for the style. Add a leading zero to ISO values smaller than 1000 to get correct sorting in the styles module, for example: ILCE-7M3_ISO0100_JPG.dtstyle. The JPG in the name indicates that we fitted against a JPG file. If you fitted against a CIE file, remove the CIE file name from the style name. If you applied a creative style (for example, a film emulation or filter in the camera) to the JPG, add it at the end of the file name and style name.

Importing your dtstyle in darktable

To use your newly created style, you need to import it into the styles module in the lighttable. In the lighttable, open the module on the right and click on ‘import’. Select the dtstyle file you created to add it. Once imported, you can select a raw file and then double click on the style in the ‘style module’ to apply it.

Open the image in the darkroom and you will notice that the base curve has been disabled and a few modules have been enabled. The additional modules activated are normally: input color profile, color lookup table and tone curve.

Verifying your profile

To verify the style you created, apply it to one of the RAW files you shot for profiling, then use the global color picker to compare the colors in the RAW file (with the style applied) with those in the corresponding JPEG file.

I also shoot a few normal pictures with nice colors, like flowers, in RAW and JPEG and then compare the results. Sometimes some colors can be off, which can indicate that your pictures for profiling are not the best. This can be because there were some clouds, glare, or the wrong time of day. Redo the shots till you get a result you’re satisfied with.

Sadly this is a trial and error process, so you may have to create a number of profiles before you find the results you want. It’s a good idea to read this article again to see if you missed any important steps.

What does the result look like?

In the following screenshot (Figure 11) you can see the tone curve calculated by darktable-chart and darktable's Sony base curve. The tone curve works together with the color LUT; it will look flat if you apply it without the LUT.

Figure 11: Comparison of the default base curve with the newly generated tone curve

Here is a comparison between the base curve for Sony on the left and the dtstyle (color LUT + tone curve) created with darktable-chart:

Figure 12: Side by side comparison on an image (left the standard base curve, right the calculated dtstyle)

Other ideas

This process will work for extracting in-body black and white profiles, as well as creative color profiles. I see a significant improvement in black and white profiles from this process over the use of some of the black and white modules in darktable.

You may find that the lowest ISO profile provides pretty good results for higher ISO values too. This will save you a lot of time profiling, and allows you to blanket-apply your profile to all your images quickly - you only need one profile now! This is highly dependent on your camera however, so experiment with this.

These profiles should work in all light conditions, provided your white balance is correct. Given that you now have a color target, you should always take one photo of it, so you can correct the white balance later.

Discussion

As always, the ways to get better colors are open for discussion and can be improved in collaboration.

Feedback is very welcome.

Thanks to the darktable developers for such a great piece of software! :-)

Updated 2018-07-24 by William Brown based on my own profiling experience following this tutorial.

April 25, 2018

Who is Producer X?

Astute observers of Seder-Masochism will notice one “Producer X” on the poster:

Poster_ProducerX

This is consistent with the film’s opening credits:

Moses_ProducerX_edit

and end credits:

Endcredit_ProducerX_edit

Why? Who? WTF?

I made Sita Sings the Blues almost entirely alone. That caused an unforeseen problem when it came time to send the film out into the world: I was usually the only person who could represent it at festivals. Other films have producers who aren’t also the director. Other films also have crews, staff, multiple executives, and money. As SSTB’s only executive, I couldn’t be everywhere at once. Often I couldn’t be anywhere at once, due to having a life that includes occasional crises. Sometimes, if I was lucky, I could send an actor like Reena Shah, or musician like Todd Michaelesen, or narrator like Aseem Chaabra, or sound designer Greg Sextro. But most of the time it meant there was no human being representing the film when it screened at film festivals.

I’m even more hermitic now, and made Seder-Masochism in splendid isolation in Central Illinois. This time I worked with no actors, narrators, or musicians. I did try recording some friends discussing Passover, but that experiment didn’t make it into the film. Greg Sextro is again doing the sound design, but we’re working remotely (he’s in New York).

I like working alone. But I don’t like going to film festivals alone. And sometimes, I can’t go at all.

Such as right now: in June, Seder-Masochism is having its world premiere at Annecy, but I have to stay in Illinois and get surgery. I have an orange-sized fibroid in my cervix, and finally get to have my uterus removed. (I’ve suffered a lifetime of debilitating periods, but was consistently instructed to just suck it up, buttercup; no doctor bothered looking for fibroids over the last 30 years in spite of my pain. But now that I’m almost menopausal, out it goes at last!)

Film festivals are “people” events, and having a human there helps bring attention to the film. The reason I want my film in festivals is to increase attention. The more attention, the better for the film, especially as a Free Culture project. So I want a producer with it at festivals.

Fortunately, Producer X has been with Seder-Masochism from the very beginning. After Sita’s festival years, I knew that credit would be built into my next film.

So who is Producer X?

Whoever I say it is.

She’ll see you in Annecy!

April 24, 2018

3 Students Accepted for Google Summer of Code 2018

Since 2006, we have had the opportunity for Google to sponsor students to help out with Krita. For 2018 we have 3 talented students working over the summer. Over the next few months they will be getting more familiar with the Krita code base and working on their projects. They will be blogging about their experience and what they are learning along the way. We will be sure to share any progress or information along the way.

Here is a summary of their projects and what they hope to achieve.

Ivan Yossi – Optimize Krita Soft, Gaussian and Stamp brushes mask generation to use AVX with Vc Library

The Krita digital painting app relies on quick painting response to give a natural experience. A painted line is composed of thousands of images placed one after the other. This image mask creation has to be performed extremely fast, as it is done thousands of times each second. If the process of applying the images on canvas is not fast enough, the painting process gets compromised and the enjoyment of painting is reduced.

Optimizing the mask creation can be done using the AVX instruction sets to apply transformations to vectors of data in one step. In this case the data is the image component coordinates composing the mask. AVX programming can be done using the Vc optimization library, which manages the low level optimization and adapts to the user's processor features. However, the data must be prepared so it optimizes effectively. Optimization has already been done on the Default brush mask engine, allowing it to be as much as 5 times faster than the current Gaussian mask engine.

The project aims to improve painting performance by implementing AVX optimization code for the Circular Gauss, Circular Soft, Rectangular Gaussian, Rectangular Soft and Stamp masks.

Michael Zhou – A Swatches Docker for Krita

This project intends to create a swatches docker for Krita. It’s similar to the palette docker that’s already in Krita today, but it has the following advantages:

  • Users can easily add, delete, drag and drop colors to give the palette a better visual pattern so that it’s easier for them to keep track of the colors.
  • Users can store a palette with a work so that they can ensure the colors they use throughout a painting are consistent.
  • It will have a more intuitive UI design.

Andrey Kamakin – Optimize multithreading in Krita’s Tile Manager

This project is about improving Krita's overall performance by introducing a lock-free hash table for storing tiles and improving the locks described in the proposal.

Problem: In single-threaded execution of a program there is no need to guard shared resources, because it is guaranteed that only one thread can access them. But in a multi-threaded program it is a common problem that resources must be shared between threads; furthermore, situations such as dirty reads must be excluded for normal program behavior. The simplest solution is to use locks on table operations so that only one thread at a time can access the resources and read/write.

We wish all the students the best of luck this summer!

darktable 2.4.3 released

we’re proud to announce the third bugfix release for the 2.4 series of darktable, 2.4.3!

the github release is here: https://github.com/darktable-org/darktable/releases/tag/release-2.4.3.

as always, please don’t use the autogenerated tarball provided by github, but only our tar.xz. the checksums are:

$ sha256sum darktable-2.4.3.tar.xz
1dc5fc7bd142f4c74a5dd4706ac1dad772dfc7cd5538f033e60e3a08cfed03d3 darktable-2.4.3.tar.xz
$ sha256sum darktable-2.4.3.dmg
290ed5473e3125a9630a235a4a33ad9c9f3718f4a10332fe4fe7ae9f735c7fa9 darktable-2.4.3.1.dmg
$ sha256sum darktable-2.4.3-win64.exe
a34361924b4d7d3aa9cb4ba7e5aeef928c674822c1ea36603b4ce5993678b2fa darktable-2.4.3-win64.exe
$ sha256sum darktable-2.4.3-win64.zip
3e14579ab0da011a422cd6b95ec409565d34dd8f7084902af2af28496aead5af darktable-2.4.3-win64.zip

when updating from the currently stable 2.2.x series, please bear in mind that your edits will be preserved during this process, but it will not be possible to downgrade from 2.4 to 2.2.x any more.

Important note: to make sure that darktable can keep on supporting the raw file format for your camera, please read this post on how/what raw samples you can contribute to ensure that we have the full raw sample set for your camera under CC0 license!

and the changelog as compared to 2.4.2 can be found below.

New Features

  • Support for tags and ratings in the watermark module
  • Read Xmp.exif.DateTimeOriginal from XMP sidecars
  • Build and install noise tools
  • Add a script for converting .dtstyle to an .xmp

Bugfixes

  • Don’t create unneeded folders during export in some cases
  • When collecting by tags, don’t select subtags
  • Fix language selection on OSX
  • Fix a crash while tethering

Camera support, compared to 2.4.2

Warning: support for Nikon NEF ‘lossy after split’ raws was unintentionally broken due to the lack of such samples. Please see this post for more details. If you have affected raws, please contribute samples!

Base Support

  • Fujifilm X-H1 (compressed)
  • Kodak EOS DCS 3
  • Olympus E-PL9
  • Panasonic DC-GX9 (4:3)
  • Sony DSC-RX1RM2
  • Sony ILCE-7M3

White Balance Presets

  • Sony ILCE-7M3

Noise Profiles

  • Canon PowerShot G1 X Mark III
  • Nikon D7500
  • Sony ILCE-7M3

Blender at FMX 2018

FMX 2018 (Stuttgart, April 24-27) is one of Europe’s most influential conferences dedicated to Digital Visual Arts, Technologies, and Business. This year Blender is going to take part in 3 events, featuring Ton Roosendaal and artists from the Blender studio crew.

Blender at FMX 2018

Presentations and Panels

Blender will be represented at the following events on April 26th:

Come and see us!

If you are attending FMX and would like to hang out on Thursday, get in touch with francesco@blender.org or reach out to us directly on social media!

April 20, 2018

UEFI booting and RAID1

I spent some time yesterday building out a UEFI server that didn’t have on-board hardware RAID for its system drives. In these situations, I always use Linux’s md RAID1 for the root filesystem (and/or /boot). This worked well for BIOS booting since BIOS just transfers control blindly to the MBR of whatever disk it sees (modulo finding a “bootable partition” flag, etc, etc). This means that BIOS doesn’t really care what’s on the drive, it’ll hand over control to the GRUB code in the MBR.

With UEFI, the boot firmware is actually examining the GPT partition table, looking for the partition marked with the “EFI System Partition” (ESP) UUID. Then it looks for a FAT32 filesystem there, and does more things like looking at NVRAM boot entries, or just running BOOT/EFI/BOOTX64.EFI from the FAT32. Under Linux, this .EFI code is either GRUB itself, or Shim which loads GRUB.

So, if I want RAID1 for my root filesystem, that’s fine (GRUB will read md, LVM, etc), but how do I handle /boot/efi (the UEFI ESP)? In everything I found answering this question, the answer was “oh, just manually make an ESP on each drive in your RAID and copy the files around, add a separate NVRAM entry (with efibootmgr) for each drive, and you’re fine!” I did not like this one bit since it meant things could get out of sync between the copies, etc.

The current implementation of Linux’s md RAID puts metadata at the front of a partition. This solves more problems than it creates, but it means the RAID isn’t “invisible” to something that doesn’t know about the metadata. In fact, mdadm warns about this pretty loudly:

# mdadm --create /dev/md0 --level 1 --raid-disks 2 /dev/sda1 /dev/sdb1
mdadm: Note: this array has metadata at the start and
    may not be suitable as a boot device.  If you plan to
    store '/boot' on this device please ensure that
    your boot-loader understands md/v1.x metadata, or use
    --metadata=0.90

Reading from the mdadm man page:

-e, --metadata=
      ...
      1, 1.0, 1.1, 1.2 default
            Use the new version-1 format superblock.  This has fewer
            restrictions.  It can easily be moved between hosts with
            different endian-ness, and a recovery operation can be
            checkpointed and restarted.  The different sub-versions store
            the superblock at different locations on the device, either at
            the end (for 1.0), at the start (for 1.1) or 4K from the start
            (for 1.2).  "1" is equivalent to "1.2" (the commonly preferred
            1.x format).  "default" is equivalent to "1.2".

First we toss a FAT32 on the RAID (mkfs.fat -F32 /dev/md0), and looking at the results, the first 4K is entirely zeros, and file doesn’t see a filesystem:

# dd if=/dev/sda1 bs=1K count=5 status=none | hexdump -C
00000000 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................|
*
00001000 fc 4e 2b a9 01 00 00 00 00 00 00 00 00 00 00 00 |.N+.............|
...
# file -s /dev/sda1
/dev/sda1: Linux Software RAID version 1.2 ...

So, instead, we’ll use --metadata 1.0 to put the RAID metadata at the end:

# mdadm --create /dev/md0 --level 1 --raid-disks 2 --metadata 1.0 /dev/sda1 /dev/sdb1
...
# mkfs.fat -F32 /dev/md0
# dd if=/dev/sda1 bs=1 skip=80 count=16 status=none | xxd
00000000: 2020 4641 5433 3220 2020 0e1f be77 7cac    FAT32   ...w|.
# file -s /dev/sda1
/dev/sda1: ... FAT (32 bit)

Now we have a visible FAT32 filesystem on the ESP. UEFI should be able to boot whatever disk hasn’t failed, and grub-install will write to the RAID mounted at /boot/efi.

However, we’re left with a new problem: on (at least) Debian and Ubuntu, grub-install attempts to run efibootmgr to record which disk UEFI should boot from. This fails, though, since it expects a single disk, not a RAID set. In fact, it returns nothing, and tries to run efibootmgr with an empty -d argument:

Installing for x86_64-efi platform.
efibootmgr: option requires an argument -- 'd'
...
grub-install: error: efibootmgr failed to register the boot entry: Operation not permitted.
Failed: grub-install --target=x86_64-efi
WARNING: Bootloader is not properly installed, system may not be bootable

Luckily my UEFI boots without NVRAM entries, and I can disable the NVRAM writing via the “Update NVRAM variables to automatically boot into Debian?” debconf prompt when running: dpkg-reconfigure -p low grub-efi-amd64

So, now my system will boot with both or either drive present, and updates from Linux to /boot/efi are visible on all RAID members at boot-time. HOWEVER there is one nasty risk with this setup: if UEFI writes anything to one of the drives (which this firmware did when it wrote out a “boot variable cache” file), it may lead to corrupted results once Linux mounts the RAID (since the member drives won’t have identical block-level copies of the FAT32 any more).

To deal with this “external write” situation, I see some solutions:

  • Make the partition read-only when not under Linux. (I don’t think this is a thing.)
  • Create higher-level knowledge of the root-filesystem RAID configuration so that a collection of filesystems can be kept manually synchronized instead of doing block-level RAID. (Seems like a lot of work and would need a redesign of /boot/efi into something like /boot/efi/booted, /boot/efi/spare1, /boot/efi/spare2, etc)
  • Prefer one RAID member’s copy of /boot/efi and rebuild the RAID at every boot. If there were no external writes, there’s no issue. (Though what’s really the right way to pick the copy to prefer?)

Since mdadm has the “--update=resync” assembly option, I can actually do the latter. This required updating /etc/mdadm/mdadm.conf to add <ignore> on the RAID’s ARRAY line to keep it from auto-starting:

ARRAY <ignore> metadata=1.0 UUID=123...

(Since it’s ignored, I’ve chosen /dev/md100 for the manual assembly below.) Then I added the noauto option to the /boot/efi entry in /etc/fstab:

/dev/md100 /boot/efi vfat noauto,defaults 0 0

And finally I added a systemd oneshot service that assembles the RAID with resync and mounts it:

[Unit]
Description=Resync /boot/efi RAID
DefaultDependencies=no
After=local-fs.target

[Service]
Type=oneshot
ExecStart=/sbin/mdadm -A /dev/md100 --uuid=123... --update=resync
ExecStart=/bin/mount /boot/efi
RemainAfterExit=yes

[Install]
WantedBy=sysinit.target

(And don’t forget to run “update-initramfs -u” so the initramfs has an updated copy of /etc/mdadm/mdadm.conf.)

If mdadm.conf supported an “update=” option for ARRAY lines, this would have been trivial. Looking at the source, though, that kind of change doesn’t look easy. I can dream!

And if I wanted to keep a “pristine” version of /boot/efi that UEFI couldn’t update I could rearrange things more dramatically to keep the primary RAID member as a loopback device on a file in the root filesystem (e.g. /boot/efi.img). This would make all external changes in the real ESPs disappear after resync. Something like:

# truncate --size 512M /boot/efi.img
# losetup -f --show /boot/efi.img
/dev/loop0
# mdadm --create /dev/md100 --level 1 --raid-disks 3 --metadata 1.0 /dev/loop0 /dev/sda1 /dev/sdb1

And at boot just rebuild it from /dev/loop0, though I’m not sure how to “prefer” that partition…

© 2018, Kees Cook. This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 License.
Creative Commons License

April 16, 2018

GIMP 2.10.0 Release Candidate 2 Released

Hot on the heels of the first release candidate, we’re happy to have a second RC ready! In the last 3 weeks since releasing GIMP 2.10.0-RC1, we’ve fixed 44 bugs and introduced important performance improvements.

As usual, for a complete list of changes please see NEWS.

Optimizations and multi-threading for painting and display

A major regression of GIMP 2.10, compared to 2.8, was slower painting. To address this issue, several contributors (Ell, Jehan, Massimo Valentini, Øyvind Kolås…) introduced improvements to the GIMP core, as well as to the GEGL and babl libraries. Additionally, Elle Stone and Jose Americo Gobbo contributed performance testing.

The speed problems pushed Ell to implement multi-threading within GIMP, so that painting and display are now run on separate threads, thus greatly speeding up feedback of the graphical interface.

The new parallelization framework is not painting-specific and could be used for improving other parts of GIMP.

Themes rewritten

Since the development version 2.9.4, we had new themes shipped with GIMP, and in particular dark themes (as is now common for creative applications). Unfortunately they were unmaintained, bugs kept piling up, and the user experience wasn’t exactly stellar.

Light, Gray, and Dark themes.

Our long-time contributor Ville Pätsi took up the task of creating brand new themes without any of the usability issues and glitches of previous ones. While cleaning up, only the Gray theme has been kept, whereas Light and Dark were rewritten from scratch. Darker and Lighter themes have been removed (they won’t likely reappear unless someone decides to rewrite and contribute them as well, and unless this person stays around for maintenance).

Gradient tool improved to work in linear color space

Thanks to Michael Natterer and Øyvind Kolås, the gradient tool can now work in either perceptual RGB, linear RGB, or CIE LAB color space at your preference.

Gradient tool in linear space Gradient tool in perceptual and linear spaces

We also used the opportunity to rename the tool, which until now was called the “Blend tool”, even though barely anyone uses that name. “Gradient tool” is a much more understandable name.

New on-canvas control for 3D rotation

A new widget for on-canvas control of 3D rotation (yaw, pitch, roll) has been implemented by Ell. This new widget is currently only used for the Panorama Projection filter.

GEGL Panorama View Panorama projection filter (image: Hellbrunn Banquet Hall by Matthias Kabel (cba))

Improvements in handling masks, channels, and selections

GIMP no longer does any gamma conversion when converting between selections, channels, and masks. This makes the selection -> channel -> selection roundtrips correct and predictable.

Additionally, for all >8-bit per channel images, GIMP now uses linear color space for channels. This and many other fixes in the new release were done by Michael Natterer.

Translations

8 translations have been updated between the two release candidates. We are very close to releasing the final version of GIMP 2.10.0. If you plan to update the translation for your language in time for the release, we recommend starting now.

GEGL changes

Most of the changes in GEGL since the release in March are performance improvements and micro-optimizations in display paths. Additionally, avoiding incorrect gamma/ungamma correction of alpha in u8 formats provides a tiny 2-3% performance boost.

For further work on mipmap support, GEGL now keeps track of valid/invalid areas at a finer granularity than whole tiles in mipmaps.

The Panorama Projection operation gained a reverse transform, which permits using GIMP for retouching the zenith, nadir, or other arbitrary gaze directions in equirectangular (also known as 360×180) panoramas.

Finally, abyss policy support in the base class for scale operations now makes it possible to achieve hard edges on rescaled buffers.

What’s Next

We are now 7 blocker bugs away from the final release.

On your marks, get set…

Interview with Runend

Could you tell us something about yourself?

Hi! I’m Faqih Muhammad and my personal brand name is runend. I’m 22 years old and live in Medan in Indonesia. I love film animation, concept art, game making, 3d art, and everything illustration.

Do you paint professionally, as a hobby artist, or both?

It can be said that I’m a hobbyist now, but I keep learning, practicing, experimenting to find new forms and new styles of self-expression, all to improve my skills and to be a professional artist in the near future!

What genre(s) do you work in?

So far I’ve made scenery background with character as a base to learn something. Starting from the basic we can make something more interesting, but still it was quite difficult for me.

Whose work inspires you most — who are your role models as an artist?

Hhmmm, there are many artists who give me inspiration. Mainly I follow Jeremy Fenske, Atey Ghailan and Ruan Jia. I won’t forget to mention masters like Rizal Abdillah, Agung Oka and Yogei, as well as my friends and mentors.

How and when did you get to try digital painting for the first time?

It was in 2014 using photoshop, which I used to create photo-manipulations with. In 2015 I finally bought my wacom intuos manga tablet and could finally begin learning about digital painting.

What makes you choose digital over traditional painting?

Digital painting has many features that make it easy to create art. Of course there’s no need to buy art supplies: with a computer, pen and tablet you can make art.

Lately I’ve been learning traditional painting using poster color, and that makes me feel both happy and challenged.

How did you find out about Krita?

I used Google to search for “free digital painting software” and I found Krita :D.

What was your first impression?

I was like “WOW”, grateful to find software as good as this.

What do you love about Krita?

I have tried some of the features, especially the brush engine, UI/UX, layering, animation tools, I love all of them! And of course it’s free and open source.

What do you think needs improvement in Krita? Is there anything that really annoys you?

Probably the filter layer and filter mask performance. Those run very slowly, I think it would be better if they ran more smoothly and more realtime.

What sets Krita apart from the other tools that you use?

Free open source software that runs cross-platform, no need to spend more. If you get a job or a paid project with Krita, there is a donate button to make Krita better still.

If you had to pick one favourite of all your work done in Krita so far, what would it be, and why?

I love all my work, sometimes some paintings look inconsistent, then I will make it better.

What techniques and brushes did you use in it?

Before starting I think about what I want to create like situation and color mood. If that’s difficult from only imagination I usually use some reference.

I first make a sketch, basic color, shading, texture, refine the painting, and check the value using fill black layer blending mode in color.

In Krita 4.0 beta there are many new brush presets, I think that’s enough to make awesome art.

Where can people see more of your work?

Artstation: https://www.artstation.com/runend
Twitter: https://twitter.com/runendarts
Facebook: https://web.facebook.com/runendartworks
Instagram: https://www.instagram.com/runend.artworks/

Anything else you’d like to share?

Krita is an amazing program, I’d like to thank the Krita team. I wish Krita a good future, I hope Krita can be better known to the people of Indonesia, for instance on campus, schools, the creative industry etcetera.

How to create camera noise profiles for darktable

An easy way to create correct profiling pictures

Noise in digital images is similar to film grain in analogue photography. In digital cameras, noise is either created by the amplification of digital signals or heat produced by the sensor. It appears as random, colored speckles on an otherwise smooth surface and can significantly degrade image quality.

Noise is always present, and if it gets too pronounced, it detracts from the image and needs to be mitigated. Removing noise can also decrease image quality or sharpness. There are different algorithms to reduce noise, but the best results come from having per-camera profiles, so the software understands the noise pattern a particular camera model produces.

Noise reduction is an image restoration process. You want to remove the digital artefacts from the image in such a way that the original image is still discernible. These artefacts can be just some kind of grain (luminance noise) or colorful, disturbing dots (chroma noise). Noise can either add to a picture or detract from it; if it is disturbing, we want to remove it. The following pictures show a noisy picture and a denoised version:

Noisy cup Denoised cup

To get the best noise reduction, we need to generate noise profiles for each ISO value for a camera.

Creating the pictures for noise profling

For every ISO value your camera has, you have to take a picture. The pictures need to be exposed in a particular way to gather the information correctly. The photos need to be out of focus, with a widely spread histogram like the one in the following image:

Histogram

We need overexposed and underexposed areas, but most importantly the grey areas in between. These areas contain the information we are looking for.

Let’s go through the noise profile generation step by step. To make it easier to capture the required pictures, we will first create a stencil.

Stencil for DSLM/DSLR lenses

You need some thick black paper or cardboard; no light should shine through it! First, use the lens hood to get the size: it moves the paper away from the lens a bit and gives us something to attach the stencil to. Then we need to create a punch card. For wide-angle lenses you need a dense raster of holes, and for longer focal lengths a wider one. It is harder to create a stencil for compact cameras with small lenses (see below).

Find the middle and mark the size of the lens hood:

Stencil Step 1

If you have the size, draw a grid on the paper:

Stencil Step 2

Once you have done that you need to choose a punch card raster for your focal length. I use a 16mm wide angle lens on a full frame body, so I choose a raster with a lot of holes:

Stencil Step 3

Untested: For a 50mm or 85mm lens I think you should start with 5 holes in the middle created just with a needle. Put your stencil on the lens hood and check. Then you know if you need bigger holes and maybe how much. Please share your findings in the comments below!

Stencil for compact cameras

I guess you would create a stencil, like for bigger lenses, but create a funnel to the camera. Contributions and ideas are welcome!

Taking the pictures

Wait for a cloudy day with thick clouds and no sun to take the pictures. The limiting factor is shutter speed: every time you double the ISO you have to halve the exposure time, so it is likely that you’ll hit your camera’s limit. My camera has 37 ISO values (including extended ISO), so I need to start with a 0.6 second exposure to be able to take the last picture at my camera’s limit of 1/8000 of a second. A darker day therefore helps, because you can start with a slower shutter speed.

Use a tripod and point the camera at the sky, attach the lens hood and put the punch card on it. Make sure that all filters are removed, so we don’t get any strange artefacts. In the end the setup should look like this:

Punch card on camera

Choose the fastest aperture available on your lens (e.g. f/2.8 or even faster), change the camera to manual focus, and focus on infinity. Take the shot! The result should look like this:

punch card picture

The holes will overexpose the picture, but you also need an underexposed area. So start with most of the dark areas in the middle of the histogram and move the exposure towards the black (left) side of the histogram until the first values start to clip. It is important not to clip too much, as we are mostly interested in the grey values between the overexposed and underexposed areas.

Once you’re done taking the pictures it is time to move to the computer.

Creating the noise profiles

STEP 1

Run

/usr/lib/darktable/tools/darktable-gen-noiseprofile --help

If this gives you the tool’s help text, continue with STEP 2; otherwise go to STEP 1a.
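
If darktable is installed under a different prefix on your distribution, one quick way to locate the tool (my own suggestion, not part of the original instructions) is:

find /usr/lib /usr/lib64 /opt -name darktable-gen-noiseprofile 2>/dev/null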

STEP 1a

Your darktable installation doesn’t ship the noise tools, so you need to compile them yourself. Before you start, make sure that you have the following dependencies installed on your system:

  • git
  • gcc
  • make
  • gnuplot
  • convert (ImageMagick)
  • darktable-cli

Get the darktable source code using git:

git clone https://github.com/darktable-org/darktable.git

Now change to the source and build the tools for creating noise profiles using:

cd darktable
mkdir build
cd build
cmake -DCMAKE_INSTALL_PREFIX=/opt/darktable -DBUILD_NOISE_TOOLS=ON ..
cd tools/noise
make
sudo make install

STEP 2

Download the pictures from your camera and change to the directory on the commandline:

cd /path/to/noise_pictures

and run the following command:

/usr/lib/darktable/tools/darktable-gen-noiseprofile -d $(pwd)

or if you had to download and build the source, run:

/opt/darktable_source/lib/tools/darktable-gen-noiseprofile -d $(pwd)

This will automatically do everything for you. Note that this can take quite some time to finish. I think it took 15 to 20 minutes on my machine. If a picture is not shot correctly, the tool will tell you the image name and you have to recapture the picture with that ISO.
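
One extra sanity check I find useful (my own addition, assuming exiftool is installed): list the ISO value of every file, so you can spot a missing or duplicated ISO value before or after running the tool:

exiftool -T -ISO -FileName . | sort -n

Every ISO value your camera offers should appear exactly once.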

The tool will tell you, once completed, how to test and verify the noise profiles you created.

Once the tool has finished, you end up with a tarball you can send to darktable for inclusion. You can open a bug at:

https://redmine.darktable.org/

The interesting files are the presets.json file (darktable input) and, for the developers, the noise_result.pdf file. You can find an example PDF here. It is a collection of diagrams showing the histogram for each picture and the results of the calculations.

A detailed explanation of the diagrams and the math behind it can be found in the original noise profile tutorial by Johannes Hanika.

For discussion

I’ve created the stencil above to make it easier to create noise profiles. However, I’ve tried different ways to create the profiles, and here is one that seemed like a good idea but failed for low ISO values (ISO <= 320). We are in the open source world, and I think it is important to share failures too. Others may have an idea to improve it, or can at least learn from it.

For a simpler approach than the one described above, I created a gradient from black to white and displayed it on the monitor. Then I attached some black cardboard to the monitor to get some real black. Remember, you need an underexposed area, and the monitor is not able to output real black, as it is backlit.

In the end my setup looked like this:

Gradient on Monitor

I turned off the lights and took the shots. However, the results for ISO values at or below ISO 320 were not good. All other ISO values looked fine.

If you’re interested in the results, you can find them here:

Please also share pictures of working stencils you created.

Feedback is very much welcome in the comments below!

April 15, 2018

Hero – Blender Grease Pencil showcase

After a series of successful short film productions focused on high-end 3D computer animation pipelines, the Blender team presents a 3-minute short film showcasing Blender’s upcoming Grease Pencil 2.0.

Grease Pencil means 2D animation tools within a full 3D pipeline. In Blender. In Open Source. Free for everyone!

The original Grease Pencil technology has been in Blender for many years now, and it has already caught the attention of story artists in the animation industry worldwide. The upcoming Grease Pencil is meant to push the boundaries further and allow feature-quality animation production in Blender 2.8.

The Hero animation showcase is the fruit of a collaboration between Blender developers and a team of artists based in Barcelona, Spain, led by Daniel M. Lara. This is the 6th short film funded by the Blender Cloud, confirming once more the value of a financial model that combines the crowdfunding of artistic and technical goals with the creation of Open Content.

The inclusion of Grease Pencil in a mainstream Blender release is part of the Blender 2.8 Code Quest, an outstanding development effort that is currently happening at the Blender headquarters in Amsterdam. The first beta of Blender 2.8 will be available in the second half of 2018.

Press Contact:
Francesco Siddi, Producer
francesco@blender.org

April 13, 2018

security things in Linux v4.16

Previously: v4.15.

Linux kernel v4.16 was released last week. I really should write these posts in advance, otherwise I get distracted by the merge window. Regardless, here are some of the security things I think are interesting:

KPTI on arm64

Will Deacon, Catalin Marinas, and several other folks brought Kernel Page Table Isolation (via CONFIG_UNMAP_KERNEL_AT_EL0) to arm64. While most ARMv8+ CPUs were not vulnerable to the primary Meltdown flaw, the Cortex-A75 does need KPTI to be safe from memory content leaks. It’s worth noting, though, that KPTI does protect other ARMv8+ CPU models from having privileged register contents exposed. So, whatever your threat model, it’s very nice to have this clean isolation between kernel and userspace page tables for all ARMv8+ CPUs.

hardened usercopy whitelisting

While whole-object bounds checking was implemented in CONFIG_HARDENED_USERCOPY already, David Windsor and I finished another part of the porting work of grsecurity’s PAX_USERCOPY protection: usercopy whitelisting. This further tightens the scope of slab allocations that can be copied to/from userspace. Now, instead of allowing all objects in slab memory to be copied, only the whitelisted areas (where a subsystem has specifically marked the memory region allowed) can be copied. For example, only the auxv array out of the larger mm_struct.

As mentioned in the first commit from the series, this reduces the scope of slab memory that could be copied out of the kernel in the face of a bug to under 15%. As can be seen below, one area of remaining work is the kmalloc regions. Those are regularly used for copying things in and out of userspace, but they’re also used for small simple allocations that aren’t meant to be exposed to userspace. Working to separate these kmalloc users needs some careful auditing.

Total Slab Memory:      48074720
Usercopyable Memory:     6367532  13.2%
  task_struct               0.2%     4480/1630720
  RAW                       0.3%      300/96000
  RAWv6                     2.1%     1408/64768
  ext4_inode_cache          3.0%   269760/8740224
  dentry                   11.1%   585984/5273856
  mm_struct                29.1%    54912/188448
  kmalloc-8               100.0%    24576/24576
  kmalloc-16              100.0%    28672/28672
  kmalloc-32              100.0%    81920/81920
  kmalloc-192             100.0%    96768/96768
  kmalloc-128             100.0%   143360/143360
  names_cache             100.0%   163840/163840
  kmalloc-64              100.0%   167936/167936
  kmalloc-256             100.0%   339968/339968
  kmalloc-512             100.0%   350720/350720
  kmalloc-96              100.0%   455616/455616
  kmalloc-8192            100.0%   655360/655360
  kmalloc-1024            100.0%   812032/812032
  kmalloc-4096            100.0%   819200/819200
  kmalloc-2048            100.0%  1310720/1310720

This series took quite a while to land (you can see David’s original patch date as back in June of last year). Partly this was due to having to spend a lot of time researching the code paths so that each whitelist could be explained for commit logs, partly due to making various adjustments from maintainer feedback, and partly due to the short merge window in v4.15 (when it was originally proposed for merging) combined with some last-minute glitches that made Linus nervous. After baking in linux-next for almost two full development cycles, it finally landed. (Though be sure to disable CONFIG_HARDENED_USERCOPY_FALLBACK to gain enforcement of the whitelists — by default it only warns and falls back to the full-object checking.)
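
As an aside (my own quick check, not something from the post): you can see whether a running kernel was built with these options by grepping its config; the path varies by distribution:

grep HARDENED_USERCOPY /boot/config-$(uname -r)

With the whitelists enforced you should see CONFIG_HARDENED_USERCOPY=y and CONFIG_HARDENED_USERCOPY_FALLBACK not set.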

automatic stack-protector

While the stack-protector features of the kernel have existed for quite some time, they have never been enabled by default. This was mainly due to needing to evaluate compiler support for the feature, and Kconfig didn’t have a way to check compiler features before offering CONFIG_* options. As a defense technology, the stack protector is pretty mature. Having it on by default would have greatly reduced the impact of things like the BlueBorne attack (CVE-2017-1000251), as fewer systems would have lacked the defense.

After spending quite a bit of time fighting with ancient compiler versions (*cough*GCC 4.4.4*cough*), I landed CONFIG_CC_STACKPROTECTOR_AUTO, which is on by default and tries to use the stack protector if it is available. The implementation of the solution, however, did not please Linus, though he allowed it to be merged. In the future, Kconfig will gain the knowledge to make better decisions, which will let the kernel expose the availability of the (now default) stack protector directly in Kconfig, rather than depending on rather ugly Makefile hacks.
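
As a side note (my own sketch, not from the post): CONFIG_CC_STACKPROTECTOR_AUTO probes whether the compiler actually accepts the stack-protector flags. You can do a similar manual check against your own toolchain with something like:

echo 'int main(void) { char buf[64]; return buf[63]; }' | \
    gcc -x c -fstack-protector-strong -c -o /dev/null - && echo supported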

execute-only memory for PowerPC

Similar to the Protection Keys (pkeys) hardware support that landed in v4.6 for x86, Ram Pai landed pkeys support for Power7/8/9. This should expand what’s possible in the dynamic loader: for example, marking executable memory as execute-only so that arbitrary read flaws can no longer be used to read out all of executable memory in order to find ROP gadgets.

That’s it for now; let me know if you think I should add anything! The v4.17 merge window is open. :)

Edit: added details on ARM register leaks, thanks to Daniel Micay.

Edit: added section on protection keys for POWER, thanks to Florian Weimer.

© 2018, Kees Cook. This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 License.

April 11, 2018

Krita 4.0.1 Released

Today the Krita team releases Krita 4.0.1, a bug fix release of Krita 4.0.0. We fixed more than fifty bugs since the Krita 4.0.0 release! See below for the full list of fixed issues. Translations work again with the appimage and the macOS build. Please note that:

  • The reference image docker has been removed. Krita 4.1.0 will have a new reference images tool. You can test the code-in-progress by downloading the nightly builds for Windows and Linux
  • There is no scripting available on macOS. We had it almost working when the one MacBook the project owns received a broken update, which undid all our work. G’Mic is also not available on macOS.
  • The lock and collapse icons on the docker titlebars are removed: too many people were too confused by them.

If you find a new issue, please consult this draft document on reporting bugs, before reporting an issue. After the 4.0 release, more than 150 bugs were reported, but most of those reports were duplicates, requests for help or just not useful at all. This puts a heavy strain on the developers, and makes it harder to actually find time to improve Krita. Please be helpful!

Improvements

Windows

  • Patch QSaveFile so working on images stored in synchronized folders (dropbox, google drive) is safe

Shortcuts

  • Fix duplicate shortcut on Photoshop scheme
  • Alphabetize shortcut to make the diffs easier to read when doing changes

UI

  • Make the triangles larger on the categorized list view so they are more visible
  • Disable the macro recorder and playback plugin
  • Remove the docker titlebar lock and collapse buttons. BUG:385238 BUG:392235
  • Set the pixel grid to show up at 2400% zoom by default. BUG:392161
  • Improve the layout of the palette docker
  • Disable drag and drop in the palette view: moving swatches around did not actually change the palette. BUG:392349
  • Fix selecting the last used template in the new document dialog when using appimages. BUG:391973
  • Fix canvas lockup when using Guides at the top of the image. BUG:391098
  • Do not reset redo history when changing layer’s visibility. BUG:390581
  • Fix shifting the pan position after using the popup widget rotation circle. BUG:391921
  • Fix height map to normal map in wraparound mode. BUG:392191

Text

  • Make it possible to edit the font size in the svg text tool. BUG:392714
  • Let Text Shape have empty lines. BUG:392471
  • Fix updates of undo/redo actions. BUG:392257
  • Implement “Convert text into path” function. BUG:391294
  • Fix a crash in SvgTextTool when deleting hovered/selected shape. BUG:392128
  • Make the text editor window application modal. BUG:392248
  • Fix alignment of RTL text. BUG:392065 BUG:392064
  • Fix painting parts of text outside the bounding box on the canvas. BUG:392068
  • Fix rendering of the text with relative offsets. BUG:391160
  • Fix crash when transforming text with Transform Tool twice. BUG:392127

Animation

  • Fix handling of keyframes when saving. BUG:392233 BUG:392559
  • Keep show in timeline and onion skin options when merging layers. BUG:377358
  • Keep keyframe color labels when merging layers. BUG:388913
  • Fix exporting out audio with video formats MKV and OGV.

File handling

  • Do not load/save layer channel flags anymore (channel flags were removed from the UI in Krita 2.9). BUG:392504
  • Fix saving of Transform Mask into rendered formats. BUG:392229
  • Fix reporting errors when loading fails. BUG:392413
  • Fix a memory leak when loading file layers
  • Fix loading a krita file with a loop in the clone layers setup. BUG:384587
  • Fix showing a wait cursor after loading a PNG image. BUG:392249
  • Make bundle loading feedback a bit clearer regarding the bundle.

Vector bugs

  • Fix crash when creating a vector selection. BUG:391292
  • Fix crash when doing right-click on the gradient fill stop opacity input box of a vector BUG:392726
  • Fix setting the aspect ratio of vector shapes. BUG:391911
  • Fix a crash if a certain shape is not valid when writing SVG. BUG:392240
  • Fix hidden stroke and fill widgets not to track current shape selection BUG:391990

Painting and brush engines

  • Fix crash when creating a new spray preset. BUG:392869
  • Fix rounding of the pressure curve
  • Fix painting with colorsmudge brushes on transparency masks. BUG:391268
  • Fix uninitialized distance info for KisHairyPaintOp BUG:391940
  • Fix rounding of intermediate pressure values
  • Fix the colorsmudge brush when painting in wraparound mode. BUG:392312

Layers and masks

  • Fix flattening of group layers with Inherit Alpha property set. BUG:390095
  • Fix a crash when using a transformation mask on a file layer. BUG:391270
  • Improve performance of the transformation mask

Download

Windows

Note for Windows users: if you encounter crashes, please follow these instructions to use the debug symbols so we can figure out where Krita crashes.

Linux

(If, for some reason, Firefox thinks it needs to load this as text: to download, right-click on the link.)

When it is updated, you can also use the Krita Lime PPA to install Krita 4.0.1 on Ubuntu and derivatives. We are working on an updated snap.

OSX

Note: the gmic-qt and python plugins are not available on OSX.

Source code

md5sum

For all downloads:

Key

The Linux appimage and the source tarball are signed. You can retrieve the public key over https here: 0x58b9596c722ea3bd.asc. The signatures are here (filenames ending in .sig).

Support Krita

Krita is a free and open source project. Please consider supporting the project with donations or by buying training videos or the artbook! With your support, we can keep the core team working on Krita full-time.

April 10, 2018

FreeCAD 0.17 is released

Hello everybody! Finally, after two years of intense work, the FreeCAD community is happy and proud to announce release 0.17 of FreeCAD. You can grab it at the usual places, either via the Downloads page or directly via the github release page. There are installers for Windows and Mac, and an AppImage for Linux. Our...

April 05, 2018

Cave Creek Hiking and Birding Trip

A week ago I got back from a trip to the Chiricahua mountains of southern Arizona, specifically Cave Creek on the eastern side of the range. The trip was theoretically a hiking trip, but it was also for birding and wildlife watching -- southern Arizona is near the Mexican border and gets a lot of birds and other animals not seen in the rest of the US -- and an excuse to visit a friend who lives near there.

Although it's close enough that it could be driven in one fairly long day, we took a roundabout 2-day route so we could explore some other areas along the way that we'd been curious about.

First, we wanted to take a look at the White Mesa Bike Trails northwest of Albuquerque, near the Ojito Wilderness. We'll be back at some point with bikes, but we wanted to get a general idea of the country and terrain. The Ojito, too, looks like it might be worth a hiking trip, though it's rather poorly signed: we saw several kiosks with maps where the "YOU ARE HERE" was clearly completely misplaced. Still, how can you not want to go back to a place where the two main trails are named Seismosaurus and Hoodoo?

[Cabezon] The route past the Ojito also led past Cabezon Peak, a volcanic neck we've seen from a long distance away and wanted to see closer. It's apparently possible to climb it but we're told the top part is fairly technical, more than just a hike.

Finally, we went up and over Mt Taylor, something we've been meaning to do for many years. You can drive fairly close to the top, but this being late spring, there was still snow on the upper part of the road and our Rav4's tires weren't up to the challenge. We'll go back some time and hike all the way to the top.

We spent the night in Grants, then the following day, headed down through El Malpais, stopping briefly at the beautiful Sandstone Overlook, then down through the Datil and Mogollon area. We wanted to take a look at a trail called the Catwalk, but when we got there, it was cold, blustery, and starting to rain and sleet. So we didn't hike the Catwalk this time, but at least we got a look at the beginning of it, then continued down through Silver City and thence to I-10, where just short of the Arizona border we were amused by the Burma Shave dust storm signs about which I already wrote.

At Cave Creek

[Beautiful rocks at Cave Creek] Cave Creek Ranch, in Portal, AZ, turned out to be a lovely place to stay, especially for anyone interested in wildlife. I saw several "life birds" and mammals, plus quite a few more that I'd seen at some point but had never had the opportunity to photograph. Even had we not been hiking, just hanging around the ranch watching the critters was a lot of fun. They charge $5 for people who aren't staying there to come and sit in the feeder area; I'm not sure how strictly they enforce it, but given how much they must spend on feed, it would be nice to help support them.

The bird everyone was looking for was the Elegant Trogon. Supposedly one had been seen recently along the creekbed, and we all wanted to see it.

They also had a nifty suspension bridge for pedestrians crossing a dry (this year) arroyo over on another part of the property. I guess I was so busy watching the critters that I never went wandering around, and I would have missed the bridge entirely had Dave not pointed it out to me on the last day.

The only big hike I did was the Burro Trail to Horseshoe Pass, about 10 miles and maybe 1800 feet of climbing. It started with a long hike up the creek, during which everybody had eyes and ears trained on the sycamores (we were told the trogon favored sycamores). No trogon. But it was a pretty hike, and once we finally started climbing out of the creekbed there were great views of the soaring cliffs above Cave Creek Canyon. Dave opted to skip the upper part of the trail to the saddle; I went, but have to admit that it was mostly just more of the same, with a lot of scrambling and a few difficult and exposed traverses. At the time I thought it was worth it, but by the time we'd slogged all the way back to the cars I was doubting that.

[ Organ Pipe Formation at Chiricahua National Monument ] On the second day the group went over the Chiricahuas to Chiricahua National Monument, on the other side. Forest road 42 is closed in winter, but we'd been told that it was open now since the winter had been such a dry one, and it wasn't a particularly technical road, certainly easy in the Rav4. But we had plans to visit our friend over at the base of the next mountain range west, so we just made a quick visit to the monument, did a quick hike around the nature trail and headed on.

Back with the group at Cave Creek on Thursday, we opted for a shorter, more relaxed hike in the canyon to Ash Spring rather than the brutal ascent to Silver Peak. In the canyon, maybe we'd see the trogon! Nope, no trogon. But it was a very pleasant hike, with our first horned lizard ("horny toad") spotting of the year, a couple of other lizards, and some lovely views.

Critters

We'd been making a lot of trogon jokes over the past few days, as we saw visitor after visitor trudging away muttering about not having seen one. "They should rename the town of Portal to Trogon, AZ." "They should rename that B&B Trogon's Roost Bed and Breakfast." Finally, at the end of Thursday's hike, we stopped in at the local ranger station, where among other things (like admiring their caged gila monster) we asked about trogon sightings. Turns out the last one to be seen had been in November. A local thought maybe she'd heard one in January. Whoever had relayed the rumor that one had been seen recently was being wildly optimistic.

[ Northern Cardinal ] [ Coati ] [ Javalina ] [ white-tailed buck ]
Fortunately, I'm not a die-hard birder and I didn't go there specifically for the trogon. I saw lots of good birds and some mammals I'd never seen before (full list), like a coatimundi (I didn't realize those ever came up to the US) and a herd (pack? flock?) of javalinas. And white-tailed deer -- easterners will laugh, but those aren't common anywhere I've lived (mule deer are the rule in California and Northern New Mexico). Plus some good hikes with great views, and a nice visit with our friend. It was a good trip.

On the way home, again we took two days for the opportunity to visit some places we hadn't seen. First, Cloudcroft, NM: a place we'd heard a lot about because a lot of astronomers retire there. It's high in the mountains and quite lovely, with lots of hiking trails in the surrounding national forest. Worth a visit some time.

From Cloudcroft we traveled through the Mescalero Apache reservation, which was unexpectedly beautiful, mountainous and wooded and dotted with nicely kept houses and ranches, to Ruidoso, a nice little town where we spent the night.

Lincoln

[ Lincoln, NM ] Our last stop, Saturday morning, was Lincoln, site of the Lincoln County War (think Billy the Kid). The whole tiny town is set up as a tourist attraction, with old historic buildings ... that were all closed. Because why would any tourists be about on a beautiful Saturday in spring? There were two tiny museums, one at each end of town, which were open, and one of them tried to entice us into paying the entrance fee by assuring us that the ticket was good for all the sites in town. Might have worked, if we hadn't already walked the length of the town peering into windows of all the closed sites. Too bad -- some of them looked interesting, particularly the general store. But we enjoyed our stroll through the town, and we got a giggle out of the tourist town being closed on Saturday -- their approach to tourism seems about as effective as Los Alamos'.

Photos from the trip are at Cave Creek and the Chiricahuas.