May 22, 2018

Downloading all the Books in a Humble Bundle

Humble Bundle has a great bundle going right now (for another 15 minutes -- sorry, I meant to post this earlier) on books by Nebula-winning science fiction authors, including some old favorites of mine, and a few I'd been meaning to read.

I like Humble Bundle a lot, but one thing about them I don't like: they make it very difficult to download books, insisting that you click on every single link (and then do whatever "Download this link / yes, really download, to this directory" dance your browser insists on) rather than offering a sane option like a tarball or zip file. I guess part of their business model includes wanting their customers to get RSI. This has apparently been a problem for quite some time; a web search found lots of discussions of ways of automating the downloads, most of which no longer work (none of the ones I tried did).

But a wizard friend on IRC quickly came up with a solution: some javascript you can paste into Firefox's console. She started with a quickie function that fetched all but a few of the files, but then modified it for better error checking and the ability to get different formats.

In Firefox, open the web console (Tools/Web Developer/Web Console) and paste this in the single-line javascript text field at the bottom.

// How many milliseconds to delay between downloads (setTimeout uses ms).
var delay = 1000;
// whether to use window.location or window.open
// window.open is more convenient, but may be popup-blocked
var window_open = false;
// the filetypes to look for, in order of preference.
// Make sure your browser won't try to preview these filetypes.
var filetypes = ['epub', 'mobi', 'pdf'];

var downloads = document.getElementsByClassName('download-buttons');
var i = 0;
var success = 0;

function download() {
  var children = downloads[i].children;
  var hrefs = {};
  for (var j = 0; j < children.length; j++) {
    // Each download button contains an <a> tag holding the file link.
    var href = children[j].getElementsByTagName('a')[0].href;
    for (var k = 0; k < filetypes.length; k++) {
      if (href.includes(filetypes[k])) {
        hrefs[filetypes[k]] = href;
        console.log('Found ' + filetypes[k] + ': ' + href);
      }
    }
  }
  var href = undefined;
  for (var k = 0; k < filetypes.length; k++) {
    if (hrefs[filetypes[k]] != undefined) {
      href = hrefs[filetypes[k]];
      break;
    }
  }
  if (href != undefined) {
    console.log('Downloading: ' + href);
    if (window_open) {
      window.open(href);
    } else {
      window.location = href;
    }
    success++;
  }
  i++;
  console.log(i + '/' + downloads.length + '; ' + success + ' successes.');
  if (i < downloads.length) {
    window.setTimeout(download, delay);
  }
}
download();

If you have "Always ask where to save files" checked in Preferences/General, you'll still get a download dialog for each book (but at least you don't have to click; you can hit return for each one). Even if this is your preference, you might want to consider changing it before downloading a bunch of Humble books.

Anyway, pretty cool! Takes the sting out of bundles, especially big ones like this 42-book collection.

Back from Krita Sprint 2018

Hi,
Yesterday I came back from 3.5 days of Krita Sprint in Deventer. Even though nowadays I have less time for Krita because of my work on GCompris, I’m always following what is happening and keep helping where I can, especially on icons and a few other selected topics. And it’s always very nice to meet my old friends from the team, and the new ones! 🙂

A lot of things were discussed and done, and plans have been set for the next steps.
I was in the discussions for the next fundraiser, the Bugzilla policies, the next release, the resources management rewrite, and defining and ordering the priorities for the unfinished tasks.

I started a little on the French translation of the new manual that is coming soon, mostly porting the existing translation of the FAQ and completing it. Also about the manual: I gave a little idea to Wolthera, who was looking at reducing the size of PNG images; the result is almost half the size, around 60 MB for 1000 pages. Not bad 😉

I discussed with Valeriy, the new maintainer of kcm-wacomtablet, some little missing features I would like to have, and built the git version to test on Mageia 6. Great progress already, and more goodies to come!

As we decided to make layer names in default document templates translatable, we defined a list of translatable keywords to use for layer names in those default templates. The list was made by most artists present there (me, Deevad, Wolthera, Raghukamath and Bollebib).

I also helped Raghukamath, who was fighting with his bluish laptop screen, to properly calibrate it on his Linux system; he was very happy with the result.

Many thanks to Boudewijn and Irina who organised and hosted the sprint in their house, to the Krita Foundation for the accommodation and food, and to KDE e.V. for the travel support that made it possible to gather contributors from many different countries.

You can find more info about this sprint on the Krita website:

Krita 2018 Sprint Report

Krita 2018 Sprint Report

This weekend, Krita developers and artists from all around the world came to the sleepy provincial town of Deventer to buy cheese — er, I mean, to discuss all things Krita related and do some good, hard work! After all, the best cheese shop in the Netherlands is located in Deventer. As are the Krita Foundation headquarters! We started on Thursday, and today the last people are leaving.

Image by David Revoy

Events like these are very important: bringing people together, not just for serious discussions and hacking, but for lunch and dinner and rambling walks, makes interaction much easier when we’ve gone back to our IRC channel, #krita. We didn’t have a big sprint in 2017; the last big sprint was in 2016.

So… What did we do? We first had a long meeting where we discussed the following topics:

  • 2018 Fund Raiser. We currently receive about €2000 a month in donations and have about eighty development subscribers. This is pretty awesome, and goes a long way towards funding Dmitry’s work. But we still want to go back to having a yearly fund raiser! We aim for September. Fund raisers are always a fun and energizing way to get together with our community. However, Kickstarter is out: it’s a bit of a tired formula. Instead we want to figure out how to make this more of a festival or a celebration. This time the fund raiser won’t have feature development as a target, because…
  • This year’s focus: zarro bugs. That’s what bugzilla used to tell you if your search didn’t find a single bug. Over the past couple of years we’ve implemented a lot of features, ported Krita to Qt5 and in general produced astonishing amounts of code. But not everything is done, and we’ve got way too many open bug reports, way too many failing unittests, way too many embarrassing hits in pvs, way too many features that aren’t completely done yet — so our goal for this year is to work on that.
  • Unfinished business: We identified a number of places where we have unfinished business that we need to get back to. We asked the artists present to rank those topics, and this is the result:
    • Boudewijn will work on:
      • Fix resource management (https://phabricator.kde.org/T379).
      • Shortcut and canvas input unification and related bugs
      • Improved G’Mic integration
    • Dmitry will work on:
      • Masks and selections
      • Improving the text layout engine, for OpenType support, vertical text, more SVG2 text features.
      • SVG leftovers: support for filters and patterns, winding mode and grouping
      • Layer styles leftovers
    • Jouni will work on animation left-overs:
      • frame cycles and cloning
      • Transform mask interpolation curves
    • Wolthera will work on
      • Collecting information about missing scripting API
      • Color grading filters
  • Releases. We intend to release Krita 4.1.0 June 20th. We also want to continue doing monthly bug-fix releases. We’ve asked the KDE system administrators whether we can have nightly builds of the stable branch so people can test the bug fix releases before we actually release them. Krita 4.1 will have lots of animation features, animation cache swapping, session management and the reference images tool — and more!

We also discussed the resource management fixing plan, worked really hard on making the OpenGL canvas work even smoother, especially on macOS, where it currently isn’t that smooth, added ffmpeg to the Windows installer, fixed translation issues, improved autosave reliability, fixed animation related bugs and implemented support for a cross-channel curves filter for color grading. And at the same time, people who weren’t present worked on improving OpenEXR file loading (it’s multi-threaded now, among other things), fixed issues with the color picker and made that code simpler and added even more improvements to the animation timeline!

And that’s not all, because Wolthera, Timothee and Raghukamath also finished porting our manual to Sphinx, so we can generate off-line documentation and support translations of the manual. The manual is over 1000 pages long!

There were three people who hadn’t attended a sprint before: artist Raghukamath, ace Windows developer Alwin Wong, and Valeriy Malov, the maintainer of the KDE Plasma desktop tablet settings utility, who improved support for Cintiq-like devices during the weekend.

And of course, there was time for walks, buying cheese, having lunch at our regular place, De Rode Kater, and on Sunday the sun even started shining! And now back to coding!

Image by David Revoy.

The 2018 Krita sprint was sponsored by KDE e.V. (travel) and the Krita Foundation (accommodation and food).

May 19, 2018

GIMP 2.10.2 Released

It’s barely been a month since we released GIMP 2.10.0, and the first bugfix version 2.10.2 is already there! Its main purpose is fixing the various bugs and issues which were to be expected after the 2.10.0 release.

Therefore, 44 bugs have been fixed in less than a month!

We have also been relaxing the policy for new features, and this is the first time we apply it: new features in a stable micro release! How cool is that?

For a complete list of changes please see NEWS.

New features

Added support for HEIF image format

This release brings HEIF image support, both for loading and export!

Thanks to Dirk Farin for the HEIF plug-in.

New filters

Two new filters have been added, based on GEGL operations:

Spherize filter to wrap an image around a spherical cap, based on the gegl:spherize operation.

Spherize filter in GIMP 2.10.2.
Original image CC-BY-SA by Aryeom Han.

Recursive Transform filter to create a Droste effect, based on the gegl:recursive-transform operation.

Recursive Transform filter in GIMP 2.10.2, with a custom on-canvas interface.
Original image CC-BY by Philipp Haegi.

Noteworthy improvements

Better single-window screenshots on Windows

While the screenshot plug-in was already better in GIMP 2.10.0, we had a few issues with single-window screenshots on Windows when the target window was hidden behind other windows, partly off-screen, or when display scaling was activated.

All these issues have been fixed by our new contributor Gil Eliyahu.

Histogram computation improved

GIMP now calculates histograms in separate threads, which eliminates some UI freezes. This has been implemented with some new internal APIs which may be reused later for other cases.

Working with third-parties

Packagers: set your bug tracker address

As you know, we now have a debug dialog which may pop up with debug information when crashes occur. This dialog opens our bug tracker in a browser.

We realized that we get a lot of bugs from third-party builds, and a significant part of them are package-specific. In order to relieve that burden a bit (because we are a very small team), we would appreciate it if packagers could do a first triage of bugs, reporting to us what looks like actual GIMP bugs, and taking care of their own packaging issues themselves.

This is why our configure script now has the --with-bug-report-url option, allowing you to set your own bug tracker web URL. This way, when people click the “Open Bug Tracker” button it will open the package bug tracker instead.
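For instance, a distribution packager might configure their GIMP build along these lines (the tracker URL here is a made-up placeholder; substitute your own):

./configure --with-bug-report-url=https://bugs.mydistro.example/gimp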

XCF-reader developers: format is documented

Since 2006, our work format, XCF, has been documented thanks to the initial contribution of Henning Makholm. We have recently updated this document to integrate all the changes to the format since the GIMP 2.10.0 release.

Any third-party applications wishing to read XCF files can refer to this updated documentation. The git log view may actually be more interesting since you can more easily spot the changes and new features which have been documented recently.

Keep in mind that XCF is not meant to be an interchange format (unlike, for instance, OpenRaster) and this document is not a “specification”. The XCF reference is the code itself. Nevertheless we are happy to help third-party applications, and if you spot any errors or issues in this document, feel free to open a bug report so we can fix it.

GIMP 3 is already on its way…

While GIMP 2.10.0 was still hot and barely released, our developers started working on GIMP 3. One of the main tasks is cleaning the code of the many deprecated pieces of code or data, as well as of code made useless by the switch to GTK+ 3.x.

The deletion is really going full speed, with more than 200 commits made in less than a month on the gtk3-port git branch: 9,805 lines inserted versus 921,630 lines deleted!

Delete delete delete… exterminate!

Exterminate (GTK+2)! Michael Natterer and Jehan portrayed by Aryeom.
It actually misses Simon Budig, a long-time contributor who made a big comeback on the GTK+3 port with dozens of commits!

May 14, 2018

Plotting the Jet Stream, or Other Winds, with ECMWF Data

I've been trying to learn more about weather from a friend who used to work in the field -- in particular, about New Mexico's notoriously windy spring. One of the reasons behind our spring winds relates to the location of the jet stream. But I couldn't find many good references showing how the jet stream moves throughout the year. So I decided to try to plot it myself -- if I could find the data. Getting weather data can be surprisingly hard.

In my search, I stumbled across Geert Barentsen's excellent Annual variations in the jet stream (video). It wasn't quite what I wanted -- it shows the position of the jet stream in December in successive years -- but the important thing is that he provides a Python script on GitHub that shows how he produced his beautiful animation.

[Sample jet stream image]

Well -- mostly. It turns out his data sources are no longer available, and he didn't go into a lot of detail on where he got his data, only saying that it was from the ECMWF ERA re-analysis model (with a link that's now 404). That led me on a merry chase through the ECMWF website trying to figure out which part of which database I needed. ECMWF has lots of publicly available databases (and even more), plus Python libraries to access them and a lot of documentation; but somehow none of the documentation addresses questions like which database includes which variables or how to find and fetch the data you're after, and a lot of the sample code doesn't actually work. I ended up using the "ERA Interim, Daily" dataset and requesting data for only specific times and only the variables and pressure levels I was interested in. It's a great source of data once you figure out how to request it.

Sign up for an ECMWF API Key

Access ECMWF Public Datasets (there's also Access MARS and I'm not sure what the difference is), which has links you can click on to register for an API key.

Once you get the email with your initial password, log in using the URL in the email, and change the password. That gave me a "next" button that, when I clicked it, took me to a page warning me that the page was obsolete and I should update whatever bookmark I had used to get there. That page also doesn't offer a link to the new page where you can get your key details, so go here: Your API key. The API Key page gives you some lines you can paste into ~/.ecmwfapirc.
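If you're curious what goes in that file, it's a little JSON snippet along these lines (the key and email here are made-up placeholders, of course):

{
    "url"   : "https://api.ecmwf.int/v1",
    "key"   : "0123456789abcdef0123456789abcdef",
    "email" : "you@example.com"
}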

You'll also have to accept the license terms for the databases you want to use.

Install the Python API

That sets you up to use the ECMWF api. They have a Web API and a Python library, plus some other Python packages, but after struggling with a bunch of Magics tutorial examples that mostly crashed or couldn't find data, I decided I was better off sticking to the basic Python downloader API and plotting the results with Matplotlib.

The Python data-fetching API works well. To install it, activate your preferred Python virtualenv or whatever you use for pip packages, then run the pip command shown at Web API Downloads (under "Click here to see the installation/update instructions..."). As always with pip packages, you'll have to decide on a Python version (they support both 2 and 3) and whether to use a virtualenv, the much-disrecommended sudo pip, pip3, etc. I used pip3 in a virtualenv and it worked fine.

Specify a dataset and parameters

That's great, but how do you know which dataset you want to load?

There doesn't seem to be anything that just lists which datasets have which variables. The only way I found is to go to the Web API page for a particular dataset to see the form where you can request different variables. For instance, I ended up using the "interim-full-daily" database, where you can choose date ranges and lists of parameters. There are more choices in the sidebar: for instance, clicking on "Pressure levels" lets you choose from a list of barometric pressures ranging from 1000 all the way down to 1. No units are specified, but they're millibars, also known as hectoPascals (hPa): 1000 is more or less the pressure at ground level, 250 is roughly where the jet stream is, and Los Alamos is roughly at 775 hPa (you can find charts of pressure vs. altitude on the web).

When you go to any of the Web API pages, it will show you a dialog suggesting you read about Data retrieval efficiency, which you should definitely do if you're expecting to request a lot of data, then click on the details for the database you're using to find out how data is grouped in "tape files". For instance, in the ERA-interim database, tapes are grouped by date, so if you're requesting multiple parameters for multiple months, request all the parameters for a given month together, rather than making one request for level 250, another request for level 1000, etc.

Once you've checked the boxes for the data you want, you can fetch the data via the web interface, or click on "View the MARS request" to get parameters you can plug into a Python script.

If you choose the Python script option as I did, you can start with the basic data retrieval example. Use the second example, the one that uses 'format' : "netcdf", which will (eventually) give you a file ending in .nc.

Requesting a specific area

You can request only a limited area,

"area": "75/-20/10/60",
but they're not very forthcoming on the syntax of that, and it's particularly confusing since "75/-20/10/60" supposedly means "Europe". It's hard to figure how those numbers as longitudes and latitudes correspond to Europe, which doesn't go down to 10 degrees latitude, let alone -20 degrees. The Post-processing keywords page gives more information: it's North/West/South/East, which still makes no sense for Europe, until you expand the Area examples tab on that page and find out that by "Europe" they mean Europe plus Saudi Arabia and most of North Africa.
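Putting all of that together, the retrieval script ends up looking something like this. Treat it as a sketch rather than gospel: the parameter codes (131.128 and 132.128 are, as far as I can tell, the u and v wind components) and the other request values should be checked against the "View the MARS request" output for your own selections.

from ecmwfapi import ECMWFDataServer

# Reads your key from ~/.ecmwfapirc:
server = ECMWFDataServer()

server.retrieve({
    'class'    : 'ei',
    'dataset'  : 'interim',
    'date'     : '2015-04-01/to/2015-04-30',
    'stream'   : 'oper',
    'type'     : 'an',
    'time'     : '12:00:00',
    'levtype'  : 'pl',                 # pressure levels
    'levelist' : '250/775/1000',       # in millibars (hPa)
    'param'    : '131.128/132.128',    # u and v wind components
    'grid'     : '0.75/0.75',
    'area'     : '75/-20/10/60',       # North/West/South/East
    'format'   : 'netcdf',
    'target'   : 'jetstream.nc',
})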

Using the data: What's in it?

Once you have the data file, assuming you requested data in netcdf format, you can parse the .nc file with the netCDF4 Python module -- available as Debian package "python3-netcdf4", or via pip -- to read that file:

import netCDF4

data = netCDF4.Dataset('filename.nc')

But what's in that Dataset? Try running the preceding two lines in the interactive Python shell, then:

>>> for key in data.variables:
...   print(key)
... 
longitude
latitude
level
time
w
vo
u
v

You can find out more about a parameter, like its units, type, and shape (array dimensions). Let's look at "level":

>>> data['level']
<class 'netCDF4._netCDF4.Variable'>
int32 level(level)
    units: millibars
    long_name: pressure_level
unlimited dimensions: 
current shape = (3,)
filling on, default _FillValue of -2147483647 used

>>> data['level'][:]
array([ 250,  775, 1000], dtype=int32)

>>> type(data['level'][:])
<class 'numpy.ndarray'>

level has shape (3,): it's a one-dimensional array with three elements: 250, 775 and 1000. Those are the three levels I requested from the web API (and in my Python script). The units are millibars.

More complicated variables

How about something more complicated? u and v are the two components of wind speed.

>>> data['u']
<class 'netCDF4._netCDF4.Variable'>
int16 u(time, level, latitude, longitude)
    scale_factor: 0.002161405503194121
    add_offset: 30.095301438361684
    _FillValue: -32767
    missing_value: -32767
    units: m s**-1
    long_name: U component of wind
    standard_name: eastward_wind
unlimited dimensions: time
current shape = (30, 3, 241, 480)
filling on

u (v is the same) has a shape of (30, 3, 241, 480): it's a 4-dimensional array. Why? Looking at the numbers in the shape gives a clue. The second dimension has 3 rows: they correspond to the three levels, because there's a wind speed at every level. The first dimension has 30 rows: it corresponds to the dates I requested (the month of April 2015). I can verify that:

>>> data['time'].shape
(30,)

Sure enough, there are 30 times, so that's what the first dimension of u and v corresponds to. The other dimensions, presumably, are latitude and longitude. Let's check that:

>>> data['longitude'].shape
(480,)
>>> data['latitude'].shape
(241,)

Sure enough! So, although it would be nice if it actually told you which dimension corresponded with which parameter, you can probably figure it out. If you're not sure, print the shapes of all the variables and work out which dimensions correspond to what:

>>> for key in data.variables:
...   print(key, data[key].shape)

Iterating over times

data['time'] has all the times for which you have data (30 data points for my initial test of the days in April 2015). The easiest way to plot anything is to iterate over those values:

    # JSdata here is the script's own data object; JSdata.data is the
    # netCDF4 Dataset opened with netCDF4.Dataset(), as above.
    timeunits = JSdata.data['time'].units
    cal = JSdata.data['time'].calendar
    for i, t in enumerate(JSdata.data['time']):
        thedate = netCDF4.num2date(t, units=timeunits, calendar=cal)

Then you can use thedate like a datetime, calling thedate.strftime or whatever you need.

So that's how to access your data. All that's left is to plot it -- and in this case I had Geert Barentsen's script to start with, so I just modified it a little to work with the slightly changed data format, and then added some argument parsing and runtime options.
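In case you want to see the whole pipeline in one place first, here's a minimal sketch of a plotting loop using only the variables explored above. It's an illustration, not what the real script does: the plain contourf plot and the output filename pattern are just examples.

import netCDF4
import numpy as np
import matplotlib.pyplot as plt

data = netCDF4.Dataset('filename.nc')
lats = data['latitude'][:]
lons = data['longitude'][:]
timevar = data['time']

for i, t in enumerate(timevar):
    thedate = netCDF4.num2date(t, units=timevar.units, calendar=timevar.calendar)
    # Wind speed at the first requested level (250 mb here):
    speed = np.hypot(data['u'][i, 0], data['v'][i, 0])
    plt.contourf(lons, lats, speed)
    plt.title('Wind speed at 250 mb, ' + thedate.strftime('%Y-%m-%d'))
    plt.savefig('jetstream-' + thedate.strftime('%Y-%m-%d') + '-250.png')
    plt.clf()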

Converting to Video

I already wrote about how to take the still images the program produces and turn them into a video: Making Videos (that work in Firefox) from a Series of Images.

However, it turns out ffmpeg can't handle files that are named with timestamps, like jetstream-2017-06-14-250.png. It can only handle one sequential integer. So I thought, what if I removed the dashes from the name, and used names like jetstream-20170614-250.png with %8d? No dice: ffmpeg also has the limitation that the integer can have at most four digits.

So I had to rename my images. A shell command works: I ran this in zsh but I think it should work in bash too.

cd outdir
mkdir moviedir

i=1
for fil in *.png; do
  newname=$(printf "%04d.png" $i)
  ln -s ../$fil moviedir/$newname
  i=$((i+1))
done

ffmpeg -i moviedir/%04d.png -filter:v "setpts=2.5*PTS" -pix_fmt yuv420p jetstream.mp4

The -filter:v "setpts=2.5*PTS" controls the delay between frames -- I'm not clear on the units, but larger numbers have more delay, and I think it's a multiplier, so this is 2.5 times slower than the default. (Note the %04d: the rename loop used printf "%04d", so the names are zero-padded and the ffmpeg pattern needs the leading zero too.)
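Another way to control the speed, which I believe also works with image sequences, is the image2 demuxer's -framerate option: it sets how many input frames are read per second, and avoids the setpts guesswork entirely.

ffmpeg -framerate 10 -i moviedir/%04d.png -pix_fmt yuv420p jetstream.mp4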

When I uploaded the video to YouTube, I got a warning, "Your videos will process faster if you encode into a streamable file format." I then spent half a day trying to find a combination of ffmpeg arguments that avoided that warning, and eventually gave up. As far as I can tell, the warning only affects the 20 seconds or so of processing that happens after the 5-10 minutes it takes to upload the video, so I'm not sure it's terribly important.

Results

Here's a video of the jet stream from 2012 to early 2018, and an earlier effort with a much longer 6.0x delay.

And here's the script, updated from the original Barentsen script and with a bunch of command-line options to let you plot different collections of data: jetstream.py on GitHub.

Fontstuff at LibrePlanet 2018

I’m going to try and capture some thoughts from recent conferences, since otherwise I fear that so much information gets lost in the fog.

* (If you want to think of it this way, consider this post “What’s New in Open Fonts, № 002”)

I went to LibrePlanet a few weeks ago, for the first time. One of the best outcomes from that trip (apart from seeing friends) was the hallway track.

[FYI, I was happy to see that LWN had some contributors on hand to provide coverage; when I was an editor there we always wanted to go, but it was never quite feasible, between the cost and the frequent overlap with other events. Anyway, do read the LWN coverage to get up to speed on the event.]

RFNs

Dave Crossland and I talked about Reserved Font Names (RFNs), an optional feature of the SIL Open Font License (OFL) in which the font publisher claims a reservation on a portion of their font’s name. Anyone’s allowed to make a derivative of the OFL-licensed font (which is true regardless of the RFN-invocation status), but if they do so they cannot use *any* portion of the RFN in their derivative font’s name.

The intent of the clause is to protect the user-visible “mark” (so to speak; my paraphrase) of the font publisher, so that users do not confuse any derivatives with the original when they see it in menus, lists, etc.

A problem arises, however, for software distributors, because the RFN clause is triggered by making any change to the upstream font — a low bar that includes a lot of functions that happen automatically when serving a font over HTTP (like Google Fonts does) and when rebuilding fonts from source (like Debian does).

There’s not a lot of good information out there on the effects that RFN-invocation has on downstream software projects. SIL has a section in its FAQ document, but it doesn’t really address the downstream project’s needs. So Dave and I speculated that it might be good to write up such a document for publication … somewhere … and help ensure that font developers think through the impact of the decision on downstream users before they opt to invoke an RFN.

My own experience and my gut feeling from other discussions is that most open-font designers, especially when they are new, plonk an RFN statement in their license without having explored its impact. It’s too easy to do, you might say; it probably seems like it’s built into the license for a reason, and there’s not really anything educating you about the impact of the choice going forward. You fill in a little blank at the very top of the license template, ’cause it’s there, and there’s no guidance. That’s what needs to change.

Packages

We also chatted a little about font packaging, which is something I’m keen to revisit. I’ve been giving a talk about “the unsolved problems in FOSS type” the past couple of months, a discussion that starts with the premise that we’ve had open-source web fonts for years now, but that hasn’t helped open fonts make inroads into any other areas of typography: print, EPUB, print-on-demand, any forms of marketing product, etc. The root cause is that Google Fonts and Open Font Library are focused on providing a web service (as they should), which leaves a lot of ground to be covered elsewhere, from installation to document templates to what ships with self-contained application bundles (hint: essentially nothing does).

To me, the lowest-hanging fruit at present seems to be making font packages first-class objects in the distribution packaging systems. As it is, they’re generally completely bare-bones: no documentation, no system integration, sketchy or missing metadata, etc. I think a lot can be done to improve this, of course. One big takeaway from the conversation was that Lasse Fister from the Google Fonts crew is working on a specimen micro-site generator.

That would fill a substantial hole in current packages: fonts tend to ship with no document that shows the font in use — something all proprietary, commercial fonts include, and that designers use to get a feel for how the font works in a real document setting.

Advanced font features in GTK+ and GNOME

Meanwhile Matthias Clasen has been forging ahead with his own work enhancing the GNOME font-selection experience. He’s added support for showing what variation axes a variable font contains and for exposing the OpenType / smart-font features that the font includes.

He did, however, bring up several pain points he’s encountered. The first is that many of the OpenType features are hard to preview/demonstrate because they’re sparsely documented. The only substantive docs out there are ancient Microsoft material, definitely written by committee(s) — then revised, piecemeal, by multiple unrelated committees. For example, go to the link above, then try and tell me the difference between `salt` (stylistic alternates), `ccNN` (character variants) and `ssNN` (stylistic sets). I think there’s an answer, but it’s detective work.

A more pressing concern Matthias raised was the need to create “demo strings” that show what actually changes when you enable or disable one of the features. The proper string for some features is obvious (like `onum` (oldstyle numerals): the digits 0 to 9). For others, it’s anybody’s guess. And the font-selector widget, ideally, should not have to parse every font’s entire GSUB feature table, look for all affected codepoints, and create a custom demo string. That might be arbitrarily complex, since GSUB substitutions can chain together, and might still be incorrect (not to mention the simpler case, of that method finding you random letters that add up to unhelpful gibberish).

At lunch on Sunday, Matthias, Dave, Owen Taylor, Felipe Sanches, and a few others … who I’m definitely drawing a blank on this far after the fact (go for the comments) … hashed through several other topics. The discussion turned to Pango, which (like several other storied GNOME libraries) isn’t exactly unmaintained, but certainly doesn’t get attention anymore (see also Cairo…). There are evidently still some API mismatches between what a Pango font descriptor gives you and the lower-level handles you need to work with newer font internals like variation axes.

A longer-term question was whether or not Pango can do more for applications — there are some features it could add, but major work like building in hyphenation or justification would entail serious effort. It’s not clear that anyone is available to take on that role.

Interfaces

Of course, that ties into another issue Matthias raised, which is that it’s hard to specify a feature set for a “smart” font selector widget/framework/whathaveyou for GTK+ when there are not many GTK-based applications that will bring their own demands. GIMP is still using GTK2, Inkscape basically does its own font selection, LibreOffice has a whole cross-platform layer of its own, etc. The upshot is that application developers aren’t bringing itches needing to be scratched. There is always Gedit, as Matthias said (which I think was at least somewhat satirical). But it complicates the work of designing a toolkit element, to be sure.

The discussion also touched on how design applications like Inkscape might want to provide a user interface for the variable-font settings that a user has used before. Should you “bookmark” those somehow (e.g., “weight=332,width=117,slant=10” or whatnot)? If so, where are they saved? Certainly you don’t want users to have to eyeball a bunch of sliders in order to hit the same combination of axes twice; not providing a UI for this inevitably leads to documents polluted with 600-odd variable-font-setting regions that are all only slightly off from each other. Consensus seemed to lean towards saving variable-axes settings in a sort of “recently used” palette, much as many applications already do with the color picker. Still waiting to see the first implementations of this, however.

As we were leaving, Matthias posed a question to me — in response to a comment I’d made about there needing to be a line between a “generic” font selector and a “full-featured” font selector. The question was what sort of UI was I envisioning in the “generic” case, particularly where variable fonts are concerned, as I had suggested that a full set of sliders for the fonts variation axes was too complex.

I’m not sure. On the one hand, the simple answer would be “none” or “list the variation axes in the font”, but that’s not something I have any evidence for: it’s just an easy place to draw a line.

Perhaps I’m just worried that exposing too many dials and controls will turn users off — or slow them down when they’re trying to make a quick choice. The consumer/pro division is a common tactic, evidently, for trying to avert UI overload. And this seems like a place where it’s worth keeping a watchful eye, but I definitely don’t have answers.

It may be that “pro” versus “consumer” user is not the right plane on which to draw a line anyway: when I was working on font-packaging questions, I found it really helpful to be document-first in my thinking (i.e., let the needs of the document the user is working on reveal what information you want to get from the font package). It’s possible that the how-much-information-do-you-show-in-the-UI question could be addressed by letting the document, rather than some notion of the “professionalism” of the user, be the guide. More thinking is required.

Interview with El Gato Cangrejo

Could you tell us something about yourself?

Well, I think I am a human-shaped thing also known as Aedouard A. and also as El Gato Cangrejo, who loves making drawings and listening to music.

Do you paint professionally, as a hobby artist, or both?

I’m really trying to make it professionally (“very hard thing”), but I also try to keep the fun in it, so I would have to say both.

What genre(s) do you work in?

I like to let my hand and my pen go wherever they want to go, and then I begin to think about those traces, and they lead me to different shapes, themes and genres. I can build a script for a comic or for a short film, an illustration, or even sounds, based on a web of random traces on a digital canvas or on a piece of paper.

Whose work inspires you most — who are your role models as an artist?

I love the paintings, illustrations, designs and movies from these people: William Bouguereau, Alphonse Mucha, Albrecht Dürer, Jules Lefebvre, William Waterhouse, Masamune Shirow, Haruhiko Mikimoto, Shoji Kawamori, Mamoru Oshii, Quentin Tarantino, Hideaki Anno, Hayao Miyazaki, Ralph Bakshi, Guillermo del Toro… (not mentioning musicians, they are such an endless source of inspiration; I can only work while listening to music)

How and when did you get to try digital painting for the first time?

I tried digital painting for the first time like 12 years ago. I bought my first PC and tried a program called ImageReady, from Photoshop; I did a couple of landscapes with the mouse, and then I tried scanning my drawings and retracing them in Corel Draw, also with the mouse.

What makes you choose digital over traditional painting?

The production time (everything is like 10 times faster), the expensive materials, and the super powerful Ctrl-Z.

How did you find out about Krita?

I like to search for new tools and I try to use libre software. I can’t remember when I tried Krita for the first time, but I think it was like 7 years ago, and it ran very, very badly on my old PC.

What was your first impression?

I hated Krita at the time, now I love it!

What do you love about Krita?

The shortcuts are essential, the brushes, the animation tools, “insert meme here” it’s free!

What do you think needs improvement in Krita? Is there anything that really annoys you?

The performance on Linux. I recently changed my OS from Windows 7 to Linux Mint, and I have noticed a significant difference in performance between the systems. I noticed a difference in performance between working in grayscale and working in color too. Also, I’m waiting for some layer FX like the ones in Photoshop, specifically the trace effect, which I used a lot when I worked with Photoshop.

What sets Krita apart from the other tools that you use?

As I said earlier, the shortcuts are essential, and the animation tools combined with those awesome brushes make a powerful tool for animation. I love the fact that Krita has been made for professional use, but you can also have tons of fun with it.

If you had to pick one favourite of all your work done in Krita so far, what would it be, and why?

I would choose Distant.

What techniques and brushes did you use in it?

I like the “Airbrush_Linear” a lot. I set it to a big size and the opacity to 10 percent, then I use the “Eraser_Circle”, the hard-shaped one, to define shapes. I also use the “Smudge_Soft” a lot; I like to play with it, taking the paint from one side to another. When I grabbed Krita again, it reminded me of my old times drawing with pencil and paper, which I just loved.

Where can people see more of your work?

https://gatocangrejo.deviantart.com/gallery/

Anything else you’d like to share?

If you are the pretty invisible friend, thanks and I’ll see you in a parallel universe.
If you are the Sorceress, I’m really sorry about the silence, I had a couple of good reasons…
If I owe you money, I’m trying to pay it.
If you are the extraterrestrial, stop it man.
If you are the C.I.A. stop sending stuff to my invisible friends and to the extraterrestrial.
If you like my drawings, keep your eyes peeled, I’m going to start a patreon/kickstarter campaign that involves comic, animation, Krita, Blender and other libre software.
If you are from Krita staff, thanks for Krita and thanks for the interview.
If you don’t know Krita, just give it a try, it is awesome. You don’t need to be an artist, you just need to have fun.

May 12, 2018

Stay Tuned

“The arc of the moral universe is long, but it bends towards podcasts.”

– Preet Bharara, while interviewing Bassem Youssef for his Stay Tuned podcast.

Krita 4.0.3 Released

Today the Krita team releases Krita 4.0.3, a bug fix release of Krita 4.0.0. This release fixes an important regression in Krita 4.0.2: sometimes copy and paste between images opened in Krita would cause crashes (BUG:394068).

Other Improvements

  • Krita now tries to load the system color profiles on Windows
  • Krita can open .rw2 RAW files
  • The splash screen is updated to work better on HiDPI or Retina displays (BUG:392282)
  • The OpenEXR export filter will convert images with an integer channel depth before saving, instead of giving an error.
  • The OpenEXR export filter no longer gives export warnings calling itself the TIFF filter
  • The empty error message dialog that would erroneously be shown after running some export filters is no longer shown (BUG:393850).
  • The setBackGroundColor method in the Python API has been renamed to setBackgroundColor for consistency
  • Fix a crash in KisColorizeMask (BUG:393753)

Download

Windows

Note for Windows users: if you encounter crashes, please follow these instructions to use the debug symbols so we can figure out where Krita crashes.

Linux

(If, for some reason, Firefox thinks it needs to load this as text: to download, right-click on the link.)

When it is updated, you can also use the Krita Lime PPA to install Krita 4.0.3 on Ubuntu and derivatives. We are working on an updated snap.

OSX

Note: the touch docker, gmic-qt and python plugins are not available on OSX.

Source code

md5sum

For all downloads:

Key

The Linux appimage and the source tarball are signed. You can retrieve the public key over https here: 0x58b9596c722ea3bd.asc. The signatures are here (filenames ending in .sig).

Support Krita

Krita is a free and open source project. Please consider supporting the project with donations or by buying training videos or the artbook! With your support, we can keep the core team working on Krita full-time.

May 11, 2018

Making Videos (that work in Firefox) from a Series of Images

I was working on a weather project to make animated maps of the jet stream. Getting and plotting wind data is a much longer article (coming soon), but once I had all the images plotted, I wanted to combine them all into a video showing how the jet stream moves.

Like most projects, it's simple once you find the right recipe. If your images are named outdir/filename00.png, outdir/filename01.png, outdir/filename02.png and so on, you can turn them into an MPEG4 video with ffmpeg:

ffmpeg -i outdir/filename%02d.png -filter:v "setpts=6.0*PTS" -pix_fmt yuv420p jetstream.mp4

%02d, for non-programmers, just means a 2-digit decimal integer with leading zeros. If the filenames just use 1, 2, 3, ... 10, 11 without leading zeros, use %2d instead; if they have three digits, use %03d or %3d, and so on.

The -pix_fmt yuv420p turned out to be the tricky part. The recipes I found online didn't include that part, but without it, Firefox claims "Video can't be played because the file is corrupt", even though most other browsers can play it just fine. If you open Firefox's web console and reload, it offers the additional information "Details: mozilla::SupportChecker::AddMediaFormatChecker(const mozilla::TrackInfo&)::<lambda()>: Decoder may not have the capability to handle the requested video format with YUV444 chroma subsampling."

Adding -pix_fmt yuv420p cured the problem and made the video compatible with Firefox, though at first I had problems with ffmpeg complaining "height not divisible by 2 (1980x1113)" (even though the height of the images was in fact divisible by 2). I'm not sure what was wrong; later ffmpeg stopped giving me that error message and converted the video. It may depend on where in the ffmpeg command you put the pix_fmt flag or what other flags are present. ffmpeg arguments are a mystery to me.

Of course, if you're only making something to be uploaded to youtube, the Firefox limitation probably doesn't matter and you may not need the -pix_fmt yuv420p argument.

Animated GIFs

Making an animated GIF is easier. You can use ImageMagick's convert:

convert -delay 30 -loop 0 *.png jetstream.gif

The -delay value is in hundredths of a second, so -delay 30 shows each frame for 0.3 seconds. The GIF will be a lot larger than the MP4, though: for my initial test of thirty 1000 x 500 images, the MP4 was 760K while the GIF was 4.2M.

Rest in price

It’s easy to pile on criticism when a major company redesigns their logo, but I couldn’t help myself in this case. The logo looks fine to me, but am I the only one that sees a toe-tag on a corpse when I see the new Best Buy logo?

Cause of death: Excessive color saturation on demo-mode TVs.

May 10, 2018

Krita 4.0.2 released

Today the Krita team releases Krita 4.0.2, a bug fix release of Krita 4.0.0. We fixed more than fifty bugs since the Krita 4.0.0 release! See below for the full list of fixed issues. We’ve also got fixes submitted by two new contributors: Emmet O’Neil and Seoras Macdonald. Welcome!

Please note that:

  • The reference image docker has been removed. Krita 4.1.0 will have a new reference images tool. You can test the code-in-progress by downloading the nightly builds for Windows and Linux. You can also use Antoine Roux’s reference images docker python plugin.
  • Translations are broken in various ways. On Linux everything should work. On Windows, you might have to select your language as an extra override language in the Settings/Select language dialog. This might also be the case on macOS.
  • The macOS binaries are now signed, but do not have G’Mic and do not have Python scripting.

If you find a new issue, please consult this draft document on reporting bugs before reporting an issue. After the 4.0 release more than 150 bugs were reported, but most of those reports were duplicates, requests for help or just not useful at all. This puts a heavy strain on the developers and makes it harder to actually find time to improve Krita. Please be helpful!

Improvements

Windows

  • Patch QSaveFile so working on images stored in synchronized folders (Dropbox, Google Drive) is safe. BUG:392408
  • Enable WinInk or prompt if WinTab cannot be loaded

Animation

  • Fix canvas update issues when an animation is being rendered to the cache BUG:392969
  • Fix playback in isolated mode BUG:392559
  • Fix saving animated transparency and filter masks, adjustment layer BUG:393302
  • Set the size for a few timeline icons, as they were painfully small on Windows
  • Fix copy-pasting pixel data from animated layers BUG:364162

Brushes

  • Fix keeping “eraser switch size/opacity” option when saving the brush BUG:393499
  • Fix update of the preset editor GUI when a default preset is created BUG:392869
  • Make the strength and opacity sliders go from 0 to 100 percent in the brush editor

File format support

  • Fix saving state of the selection masks into .kra
  • Read multilayer EXR files saved by Nuke BUG:393771
  • PSD: convert the image if its colorspace is not supported
  • Don’t let autosave close currently running actions

Grids

  • Increase the range for the pixel grid threshold
  • Only allow the isometric grid with OpenGL enabled BUG:392526

Crashes

  • Fix a hangup when closing the image BUG:393916
  • Fix a crash when duplicating active global selection masks BUG:382315
  • Fix crashes on undo/redo of vector path points operations BUG:393209, BUG:393087
  • Fix crash when deleting palette BUG:393353
  • Fix crash when resizing the Tool Options for the shape selection tool BUG:393217

User interface

  • Show the exact bounds in the layer properties dialog
  • Add ability for vanishing point assistants to show and configure radial lines
  • Make the Saturation slider update when picking a color that has Value 100 BUG:391934
  • Fix “Break at segment” to work correctly with closed paths
  • Disable right-clicking on popup palette BUG:391696, BUG:378484
  • Don’t let the color label widget mess up labels when right button is pressed BUG:392815
  • Fix Canvas position popping after pop-up palette rotation reset BUG:391921 (Patch by Emmet O’Neil, thanks!)
  • Change the behaviour of the add layer button BUG:385050 (Patch by Seoras Macdonald, thanks!)
  • Clicking outside preview box moves view to that point BUG:384687 (Patch by Seoras Macdonald, thanks!)
  • Implement double Esc key press shortcut for canceling continued transform mode BUG:361852
  • Display flow and opacity as percentage instead of zero to one on toolbar

Download

Windows

Note for Windows users: if you encounter crashes, please follow these instructions to use the debug symbols so we can figure out where Krita crashes.

Linux

(If, for some reason, Firefox thinks it needs to load this as text: to download, right-click on the link.)

When it is updated, you can also use the Krita Lime PPA to install Krita 4.0.2 on Ubuntu and derivatives. We are working on an updated snap.

OSX

Note: the gmic-qt and python plugins are not available on OSX.

Source code

md5sum

For all downloads:

Key

The Linux appimage and the source tarball are signed. You can retrieve the public key over https here: 0x58b9596c722ea3bd.asc. The signatures are here (filenames ending in .sig).

Support Krita

Krita is a free and open source project. Please consider supporting the project with donations or by buying training videos or the artbook! With your support, we can keep the core team working on Krita full-time.

May 09, 2018

System76 and the LVFS

tl;dr: Don’t buy System76 hardware and expect to get firmware updates from the LVFS

System76 is a hardware vendor that builds laptops with the Pop!_OS Linux distribution pre-loaded. System76 machines do get firmware updates, but they do not use the shared fwupd and LVFS infrastructure. I’m writing this blog post so I can point people at some static text rather than writing out long replies to each person who emails me wanting to know why they don’t just use the LVFS.

In April of last year, System76 contacted me, wanting to work out how to get on the LVFS. We wrote 30+ cordial emails back and forth with technical details. Discussions got stuck when we found out they currently use a nonfree firmware flash tool called afuefi rather than the UpdateCapsule mechanism from the UEFI specification. All vendors have support for capsule updates as a requirement for the Windows 10 compliance sticker, so it should be pretty easy to use this instead. Every major vendor of consumer laptops is already using capsules, e.g. Dell, HP, Lenovo and many others.

There was some resistance to not using the proprietary afuefi executable to do the flashing. I still don’t know if System76 has permission to redistribute afuefi. We certainly can’t include the non-free and non-redistributable afuefi as a binary in the .cab file uploaded to the LVFS: even if System76 does have special permission to distribute it, the LVFS would be a 3rd party and is mirrored to various places. IANAL and all that.

An employee of System76 wrote a userspace tool in Rust to flash the embedded controller (EC) using a reverse-engineered protocol (fwupd is written in C), and the intention was to add a plugin to fwupd to do this. Peter Jones suggested that most vendors just include the EC update as part of the capsule, as the EC and system firmware typically form a tightly-coupled pair. Peter also thought that afuefi is really just a wrapper for UpdateCapsule, and S76 was going to find out how to make the AMI BIOS just accept a capsule. Apparently they even built a capsule that works using UpdateCapsule.

I was really confused when things went so off-course with a surprise announcement in July that System76 had decided they would not use the LVFS and fwupd after all, even after all the discussion and how it had looked like everything was moving forward. Looking at the code, it seems the firmware update notifier and update process is now completely custom to System76 machines. This means it will only work when running Pop!_OS, and not with Fedora, Debian, Ubuntu, SUSE, RHEL or any other distribution.

Apparently System76 decided that having their own client tools and firmware repository was a better fit for them. At this point the founder of System76 got cc’d and told me this wasn’t about politics, and it wasn’t competition. I then got told that I’d made the LVFS and fwupd more complicated than it needed to be, and that I should have adopted the infrastructure that System76 had built instead. This was all without them actually logging into the LVFS and seeing what features were available or what constraints were being handled…

The way forward from my point of view would be for System76 to spend a few hours making UpdateCapsule work correctly, another few days to build an EFI binary with the EC update, and a few more hours to write the metadata for the LVFS. I don’t require an apology, and would happily create an OEM account for them on the LVFS. It looks instead like the PR and the exclusivity are more valuable than working with other vendors. I guess it might make sense for them to require Pop!_OS on their hardware, but it’s not going to help when people buy System76 hardware and want to run Red Hat Enterprise Linux in a business. It also means System76 gets to maintain all this security-sensitive server and client code themselves for eternity.

It was a hugely disappointing end to the discussion as I had high hopes System76 would do the right thing and work with other vendors on shared infrastructure. I don’t actually mind if System76 doesn’t use fwupd and the LVFS, I just don’t want people to buy new hardware and be disappointed. I’ve heard nothing more from System76 about uploading firmware to the LVFS or using fwupd since about November, and I’d given multiple people many chances to clarify the way forward.

If you’re looking for a nice laptop that will run Linux really well, I’d suggest you buy a Dell XPS instead — it’ll work with any distribution you choose.

Decoding Codes

My friend and colleague of over 20 years, Nick Burka, has written a great article, Usability for Promotion Codes and Access Codes, over on the silverorange blog.

Read Usability for Promotion Codes and Access Codes by Nick Burka on the silverorange blog.

You might not care about promotion codes, but you’ve probably had to type in some kind of code for 2-factor authentication or the rare non-scammy coupon code. Nick’s article covers what can make these codes easy (or difficult) to remember, type, and say over the phone.

It’s too bad the creators of our Canadian postal code system couldn’t have read this before they put all of those Gs and Js in the Quebec postal codes (an English G and French J sound almost identical).

I’m particularly proud of this article as it draws on external expertise – something we’ve been trying to do more of at silverorange. This article in particular draws on things we learned from a literacy and essential skills consultant, and from the non-profit Computers for Success Canada.

May 07, 2018

A Hissy Fit

As I came home from the market and prepared to turn into the driveway I had to stop for an obstacle: a bullsnake who had stretched himself across the road.

[pugnacious bullsnake]

I pulled off, got out of the car and ran back. A pickup truck was coming around the bend and I was afraid he would run over the snake, but he stopped and rolled down the window to help. White Rock people are like that, even the ones in pickup trucks.

The snake was pugnacious, not your usual mellow bullsnake. He coiled up and started hissing madly. The truck driver said "Aw, c'mon, you're not fooling anybody. We know you're not a rattlesnake," but the snake wasn't listening. (I guess that's understandable, since they have no ears.)

I tried to loom in front of him and stamp on the ground to herd him off the road, but he wasn't having any of it. He just kept coiling and hissing, and struck at me when I got a little closer.

I moved my hand slowly around behind his head and gently took hold of his neck -- like what you see people do with rattlesnakes, though I'd never try that with a venomous snake without a lot of practice and training. With a bullsnake, even if they bite you it's not a big deal. When I was a teenager I had a pet gopher snake (a fringe benefit of having a mother who worked on wildlife documentaries), and though "Goph" was quite tame, he once accidentally bit me when I was replacing his water dish after feeding him and he mistook my hand for a mouse. (He seemed acutely embarrassed, if such an emotion can be attributed to a reptile; he let go immediately and retreated to sulk in the far corner of his aquarium.) Anyway, it didn't hurt; their teeth are tiny and incredibly sharp, and it feels like the pinprick from a finger blood test at the doctor's office.

Anyway, the bullsnake today didn't bite. But after I moved him off the road to a nice warm basalt rock in the yard, he stayed agitated, hissing loudly, coiling and beating his tail to mimic a rattlesnake. He didn't look like he was going to run and hide any time soon, so I ran inside to grab a camera.

In the photos, I thought it was interesting how he held his mouth when he hissed. Dave thought it looked like W.C. Fields. I hadn't had a chance to see that up close before: my pet snake never had occasion to hiss, and I haven't often seen wild bullsnakes be so pugnacious either -- certainly not for long enough that I've been able to photograph it. You can also see how he puffs up his neck.

I now have a new appreciation of the term "hissy fit".

[pugnacious bullsnake]

May 05, 2018

May 04, 2018

(NSFW) What Stefan Sees



An Interview with Photographer Stefan Schmitz

Stefan Schmitz is a photographer living in Northern France and specializing in sensual and nude portraits. I stumbled upon his work during one of my searches for photographers using Free Software on Flickr, and as someone who loves shooting portraits his work was an instant draw for me.

Franzi Skamet by Stefan Schmitz
Khiara Gray by Stefan Schmitz

He’s a member of the forums here (@beachbum) and was gracious enough recently to spare some time chatting with me. Here is our conversation (edited for clarity)…

Are you shooting professionally?

Nope, I’m not a professional photographer, and I think I’m quite happy about that. I’ve been photographing my surroundings for ±40 years now, and I have a basic idea about camera-handling and light. Being a pro is about paying invoices by shooting photos, and I fear that the pressure at the end of some months or quarters can easily take the fun out of photography. I’m an engineer, and photography is my second love behind my wife and kids.

Every now and then some of my pictures are requested and published by some sort of magazine, press or web-service, and I appreciate the attention and exposure, but there is no (or very little) money in the kind of photography I specialize in, so … everything’s OK the way it is.

Khiara Gray by Stefan Schmitz

What would you say are your biggest influences?

Starting with photographers: Andreas Feininger, Peter Lindbergh and Alfred Stieglitz. Check out the portrait of Georgia O’Keeffe by Alfred Stieglitz: it’s 100 years old and it’s all there. Pose, light, intensity, personality - nobody has invented anything [like it] afterwards. We all just try to get close. I feel the same when I look at images taken by Peter Lindbergh, but my eternal #1 is Andreas Feininger.

Georgia O’Keeffe by Alfred Stieglitz

I got the photo-virus from my father and I learned nearly everything from daddy’s well-worn copy of The Complete Photographer [amzn] (Feininger) from 1965. Every single photo in that book is a masterpiece, even the strictly “instructional” ones. You measure every photo-book in the world against this one and they all finish second. Get your copy!

How would you describe your own style overall?

I shoot portraits of women and most of the time they don’t wear clothes. The portrait-part is very important for me: the model must connect with the viewer and ideally the communication goes beyond skin-deep. I want to see (and show) more than just the surface, and when that happens, I just press the shutter-button and try to get out of the way of the model’s performance.

Jennifer Polska by Stefan Schmitz
Franzi Skamet by Stefan Schmitz

What motivates you when deciding what/how/who to shoot?

I like women, so I take photos of women. If I were interested in beetles, I’d buy a macro lens and shoot beetles. All kidding aside, I think it’s a natural thing to do. I am married to a beautiful woman, an ex-model, and when she got fed up with my eternal “can we do one more shoot” requests, we discussed things and she allowed me to go ahead and shoot models. Her support is very important to me, but her taste is very different from mine. I really never asked myself “why” I shoot sensual portraits and nudes. It just feels like “I want to do that” and I feel comfy with it. Does there have to be a reason?

The location is very important for me. Nothing is more boring than blinding a person with a flash in front of gray wallpaper. A room, a window-sill, a landmark - there’s a lot of inspiration out there, and I often think “this is where I want to shoot”. Sometimes my wife tells me of some place she has been to or seen, and I check that out.

If you had to pick your own favorite 3 images of your work, which ones would you choose and why?

Jennifer Polska by Stefan Schmitz

Jennifer is a very professional and inspiring model. We’ve worked together quite a number of times and while you may think that this shot was inspired by The Who’s “Pinball Wizard”, I’d answer “right band, wrong song”. It’s The Who, alright, but the song’s “A quick one while he’s away”. I chose this photo because it’s all about Jennifer’s pose and facial expression. It’s sensual, even sexy, but looking at Jennifer’s face you forget about the naked skin and all. There’s beauty, there’s depth … that’s what I’m after.

Alice by Stefan Schmitz

This shot of Alice is an example of the importance of natural light. There are photographers out there who can arrange light in a similar way, but I doubt that Alice would express this natural serenity in a studio setup with cables and stands and electric transformers humming. She’s at ease, the light is perfect - I just try to be invisible because I don’t want to ruin the moment.

Khiara Gray by Stefan Schmitz

Try to escape Khiara’s eyes. Go, do it. It’s all there, the pose, the room, the ribbon-chair and the little icon, but those eyes make the picture. I did NOT whiten the eyeballs nor did I dodge the iris, and of course it’s all natural/available light.

If you had to pick 3 favorite images from someone else, which ones would you choose and why?

I already named Stieglitz’ Georgia O’Keeffe as an inspiration further up - next to that there’s Helmut Newton’s Big Nude III, Henrietta and Kim Basinger’s striptease in 9½ Weeks (white silk nighty and all). Each one a masterpiece, each one very influential for me. Imagine the truth and depth of Georgia with the force and pride of Henrietta and the erotic playfulness of Kim Basinger. That photo would rule the world.

Big Nude III, Henrietta, Helmut Newton

Is there something outside of your comfort zone you wish you could try/shoot more of?

I would like to work more with women above the age of 35, but it’s hard to find them. In general they stop modeling nude when the kids arrive.

Shooting outdoors more often would be cool, too, but that’s not easy here in northern France - there is no guarantee of good weather, and it’s frustrating to organize a shoot two weeks in advance just to call it off at the very last minute due to bad weather.

Last but not least there’s a special competition among photographers; it’s totally unofficial and called “the white shirt contest”. Shoot a woman in a white shirt and make everybody “feel” the texture of that shirt. I give it a try on every shoot and very few pictures come out the way I wish. Go for it - it’s way harder than I thought!

Alice by Stefan Schmitz

How do you find your models usually?

There are websites where models and photographers can present their work and get in contact. The biggest one worldwide is modelmayhem.com, and I highly recommend becoming a member. Another good place is tumblr.com, but you have to go through a lot of dirt before you find some true gems. I have made contact via both sites and I recommend them.

You will need some pictures in your portfolio in order to show that you are - in fact - a photographer with a basic idea of portrait work. If you shoot portraits (I mean real portraits, not some snapshots of granny and the kids under the Christmas tree), you probably have enough photos on your disk to make the point. But if you don’t and you want to start (nude) portraits, spend some money on a workshop. I did that twice and it really helped me in several ways: communication with the model, how to start a session, dos and don’ts - and at the end of the day you will drive home with a handful of pictures for your portfolio.

Hannah by Stefan Schmitz

Speaking of gear, what are you shooting with currently (or what is your favorite setup)?

Gear is overrated. I’ve been with Nikon since 1979, and today I own and use two bodies: a 1975 Nikon F2 photomic (bought used in ’82), loaded with Kodak Tri-X, and a Nikon D610 DSLR. 90% of my pictures are shot with a 50mm standard lens. Next on the list is the 35mm - you will need that in small rooms when the 50mm is already a bit too long and you want to keep some distance. I happen to own an 85mm, but the locations I book and shoot rarely offer enough space to make use of that lens.

There are these cheap, circular 1m silver reflectors on Amazon. They cost about 15 €/$, and you get a crappy stand for the same price. That stuff is pure gold - I use the reflector a lot and I highly recommend learning how to work with it. It’s my little secret weapon when I shoot against the light (see the photo of Alice above).

A camera with a reasonably fast standard lens, a second battery and a silver reflector is all I need. The rest is luxury for me, but I am pretty much a one-trick-pony. Other photographers will benefit more from a bigger kit.

Most of your images appear to be making great use of natural light. Do you use other lighting gear (speedlights, monoblocks, modifiers, etc)?

Right - available light is where it’s at. I very rarely shoot with a flash kit today because it distracts me from the work with the model. I’m a loner on the set - no assistants or friends who come and help - so everything must be totally simple and foolproof.

That said, I own an alarming number of speedlights, umbrellas, triggers and softboxes, but I don’t need that gear very often. I try to visit the locations before I shoot. I check the directions and plan for a realistic timeframe, so today I will neither find myself in a totally dark dungeon nor in a sun-filled room with contrasts à gogo. Windows to the west - shoot in the morning; windows facing south-east - shoot in the (late) afternoon.

Karolina Lewschenko by Stefan Schmitz

Here’s a shot of Karolina Lewschenko. We took this photo in a hotel room at the end of October, and the available (window) light got too weak, so I used an Aurora Firefly 65 cm softbox with a Metz speedlight and set up some classic Rembrandt light. I packed that gear because I knew that our timeframe wasn’t guaranteed to work out perfectly. “Better safe than sorry.”

Franzi Skamet by Stefan Schmitz

Do you pre-visualize and plan your shoots ahead of time usually, or is there a more organic interaction with the model and the space you’re shooting in?

Yes, I do. When I visit a place, a possible location, I have some ideas of where to shoot, what furniture to push around and what poses to try. I can pretty much see the final picture (or my idea of it) before I book the model. Having said that, you know that no battle plan has ever survived the first shot fired…

When the model arrives, we take some time to walk around the locations and discuss possible sets. We then start to shoot fully clothed in order to get used to one another and see how the light will be on the final shots. It’s very important for me to get feedback from the model. She might say that a pose is difficult for her or hurts after a few seconds, that she’s not comfy with something, or that she would like to try a totally different thing here. I always pay a lot of attention to those ideas, and - from experience - the shots based on the model’s ideas are generally among the best of the day.

Karolina Lewschenko by Stefan Schmitz

I mean, we’re not here because I shoot bugs or furniture; you don’t give me the opportunity to express myself here because you are a fan of crickets. All the attention is linked to the beautiful women in my photos and how they connect with the beholder. I am just the one who captures the moments; it’s the models who fill those moments with intensity and beauty. It would be very stupid of me not to cooperate with a model who knows how to present herself and who comes up with her own ideas.

Always listen to the model, always communicate, never go quiet.

The discussion with the model also covers what degree of nudity we’re considering. The second round of photos then starts with the “open shirt” or topless shots before the model undresses completely. If we take photos in lingerie, we do that last (after the nudes), because lingerie often leaves marks on the skin and we don’t want those to show.

Franzi Skamet by Stefan Schmitz

It is important to know what to do and in what order. You don’t want to have a nude model standing in front of you, asking “what’s next?” while you answer “I dunno - maybe (!) try this or that again”. If you lose your direction for a moment, just say so, or say “please get your bathrobe and let’s have a look at the last pictures together”. If you are “not sure”, the model might be “not comfy”, and that’s something we want to avoid.

Would you describe your workflow a bit? Which projects do you use regularly?

A typical session is 90 to 120 minutes, and I end up with about 500 exposures on the SD card and maybe a roll of exposed Kodak Tri-X. The film goes to a lab, and I get the negatives and scans back within 15 to 30 days.

There are two SD cards: one with RAW files that I import with gThumb to /photos/year/month/day. The other card holds fine-quality JPGs, and those go to /pictures/year/name_of_model. My camera is already set to monochrome, so I see every picture I shoot in b/w on the camera screen, and the JPG files are also monochrome.

The next step is a pre-selection in Geeqie. It’s a great picture viewer; I delete all the missed shots (bad framing, out of focus, etc.) and mark all the promising shots here. This is normally the end of day one.

Switching from RAWstudio to darktable has been a giant step for me. dt is just a great program, and I still learn about new functions and modules every day. The file comes in and is converted to monochrome, and afterwards color saturation and lights (red and yellow) are manipulated. This way I can treat the skin (brighter or darker) without influencing the general brightness of the picture. Highlights and lowlights may be pushed a bit to the left, I add the signature and a frame 0.5% wide, and lens correction is set automatically. That’s the whole deal. On very rare occasions I add some vignette or drop the brightness gradually from top to bottom, but again: it doesn’t happen all that often. I never cut, crop or re-frame a shot. WYSIWYG. Cropping something out, turning the picture in order to get perfectly vertical lines or the like - it all feels like cheating. I have no client to please, no deadline to meet; I can take a second longer and frame my photo when I look through the viewfinder.

Franzi Skamet by Stefan Schmitz

The photos are then treated in the GIMP. Some dodge and burn (especially when there are problematic, very high or low contrasts), maybe stamp an electric plug away, and in the end I resize them down to 2560 on the long side (big enough for A3 prints) and (sometimes) apply the sharpening tool with a value of 20 or 25. Done. I can’t save a crappy shot in post-prod and I won’t try. Out of the 500 or so frames, 10 to 15 will be processed like that, and it feels like nothing has changed over the last 40 years. The golden rule was “one good shot per roll of film” and I happen to be there, too. Spot-on!

I upload those 15 pictures to my Flickr account, and about once or twice a week I place a shot in the many Flickr groups. Also once a week (or every ten days) I post a photo on my Tumblr account. Today I have about 5k followers, and my photos are seen between 500,000 and one million times a month, depending on the time of year and the weather. There’s less traffic on warm summer days and more during cold and rainy winter nights.

It takes me some time before I add a shot to my own website. In comparison I show few photos there, every one for a reason, and I point people to that address, so I hope I only show the best.

Aya Kashi by Stefan Schmitz

Is your choice to use Free Software for pragmatic reasons, or more idealistic?

I owned an Apple II in 1983 and a Digital MicroVAX in 1990 or so. My way to FOSS started out pragmatic and became a conviction later on. In the late 90’s and early 2000’s I had my own small business and worked with MS Office on a Win NT machine. Photos were processed with a Nikon film scanner through the proprietary software into an illegal copy of Adobe PS4. It was OK, stable, and I didn’t fear anything, but I wasn’t really happy either. One day I swung over to StarOffice/OpenOffice.org for financial reasons, and I also got rid of that unlicensed PS and installed the GIMP (I don’t know what version, but I upgraded some time later to 1.2, that’s for sure). I had internet access and an email address since 1994, but in the late 90’s big programs still came on CDs attached to computer magazines. Downloading the GIMP was out of the question.

Gaming was never my thing, and when I installed Win XP, all hell broke loose - keeping a computer safe, virus-free and running wasn’t easy before the first service pack, and MS reacted way too slowly in my opinion - so I tried Debian (10-CD kit) on my notebook, got it running, found the GIMP and OOo - and that was it. It took a bit of trial and error, and I had to buy a number of WLAN sticks because very few were supported and so on, but in the end I got the machines running.

Later on I got hold of an Ubuntu 7.10 CD, tried that, and never looked back. The few changes on my system were from GNOME to the XFCE desktop and from Thunderbird to a browser-based mail client. Xubuntu is a no-brainer; it runs stable and fast. Every December I contribute €100 to FOSS: in general €50 and €40 to two projects and a tenner to Wikipedia. I’d spend an extra tenner on any project that helps to convert old StarOffice files (.sdw and so on) to today’s standards (odt…), but nobody seems interested.

What is one piece of advice you would offer to another photographer?

Don’t take any advice from me, I’m still learning myself. Or wait: be kind and a gentleman with the models. They all - each and every one of them - have had bad experiences with photographers who forgot that the models are nude for the camera, not for the man behind it. They have all been in a room with a photographer who breathes a bit too hard and doesn’t get his gear working … don’t be that arsewipe!

Irina by Stefan Schmitz

Arrange for a place where the model can undress in privacy - she didn’t come for a strip show and you shouldn’t try to make it one. Have some bottles of water at hand, and talk about your plans, poses and sets with the model. Few people can read minds, so communication works best when you say what you have in mind and the model says how she thinks it can be realized. The more you talk, the better you communicate, the better the pictures. No good photo has ever been shot during a quiet session, believe me.

In general the model will check your portfolio/website and expect to do more or less the same kind of work with you. If you want to do something different, say so when booking the model. If your website shows a lot of nude portraits, models will expect to do that kind of photo. They may be a bit upset if you ask them out of nowhere to wear a latex suit because it’s fetish Friday in your world. The more open and honest you are from the beginning, the better the shoot will go.

Irina by Stefan Schmitz

Don’t overdo the gear thing. 90% of my photos are taken with the 50mm standard lens. Period. Sometimes I have to switch to the 35mm because the room is a bit too small and the distance too close for the one-four-fifty, so everything I bring to an indoor shoot is the camera, a 50, a 35, an el-cheapo 100cm reflector from Amazon (+/- 15 €/$) and an even cheaper stand for the reflector. Gear is not important; communication is.

Want to spend 300 €/$ on new gear? Spend it on a workshop instead. Learn how to communicate, get inspiration and fill your portfolio with a first set of pictures, so the next model you email can see that you already have some experience in the field of (nude) portraits. That’s more important than a new flash in your bag.

Isabelle Descamps by Stefan Schmitz

Thank You Stefan!

I want to thank Stefan again for taking the time and being patient enough to chat with me!

Stefan currently lives in Northern France. Before that he lived and worked in Miami, FL, and in Northern Germany, where he is from, went to school, and met his wife. His main website is at https://whatstefansees.com/, and he can be found on Flickr, Facebook, Twitter, Instagram, and Tumblr.

Unless otherwise noted, all of the images are copyright Stefan Schmitz (all rights reserved) and are used with permission.

May 02, 2018

Bíonn gach tosach lag*

Tá mé ag foghlaim Gaeilge; tá uaim scríobh postálacha blag as Gaeilge, ach níl mé oilte ar labhairt nó scríbh as Gaeilge go fóill. Tiocfaidh sé le tuilleadh cleachtaidh.**

Catching up

I have definitely fallen off the blog wagon; as you may or may not know, the past year has been quite difficult for me personally, far beyond being an American living in Biff Tannen’s timeline these days. Blogging was pushed to the bottom of the formidable stack I must balance, but in hindsight I think the practice of writing is beneficial no matter what it’s about, so I will carve out regular time to do it.

Tá mé ag foghlaim Gaeilge

This post’s title and opening are in Irish; I am learning Irish and trying to immerse myself as much as one can outside of a Gaeltacht. There are quite a few reasons for this:

  • The most acute trigger is that I have been doing some genealogy and encountered family records written in Irish. I couldn’t recall enough of the class I’d taken while in college and got pulled in wanting to brush up.
  • Language learning is really fun, and Irish is of course part of my heritage and I would love to be able to teach my kids some since it’s theirs, too.
  • One of the main reasons I took Japanese in college for 2 years is that I wanted to better understand how kanji worked and how to write them. With Irish, I want to understand how to pronounce words, because from a native English speaker’s point of view they sound very different from how they look!
  • Right now appears to be an exciting moment for the language; it has shed some of the issues that I think plagued it during ‘The Troubles’ and you can actually study and speak it now without making some kind of unintentional political statement. There’s far more demand for Gaelscoils (schools where the medium for education in all subjects is Irish) than can be met. In the past year, the Pop Up Gaeltacht movement has started and really caught on, a movement run in an open source fashion I might add!
  • I am interested in how the brain recovers from trauma and I’ve a little theory that language acquisition could be used as a model for brain recovery and perhaps suggest more effective therapies for that. Being knee deep in language learning, at the least, is an interesting perspective in this context.
  • I also think that, as a medium that permeates everything you do, languages are similar to user interfaces – you don’t really pay attention to a language when you speak it if you’re fluent; it’s just the medium. You pay attention to the language rather than the content only when you have a problem speaking or understanding it. (Yes, the medium is the message, except when it isn’t. 🙂 ) Similarly, user interfaces aren’t something you should pay attention to – you should pay attention to the content, or your work, rather than focus on the intricacies of how the interface works. I think drawing connections between these two things is at least interesting, if not informative. (Can you tell I like mashing different subjects together to see what comes out?)

Anyway, I could go on and on, but yes, $REASONS. I’m trying to learn a little bit every day rather than take less frequent intensive courses. For example, I’m trying to ‘immerse’ as much as I can by using my computers and phone in the Irish language, keeping long streaks in the Duolingo course, listening to RnaG, watching TG4 and some video courses, and having some light conversation with other Irish learners and speakers.

Maybe I’ll talk about the approach I’m taking in more detail in another post. In general, I think a good approach to language learning is a policy I try to subscribe to in all areas of life – just f*ing do it (speak it, write it, etc. Do instead of talking about doing. Few things infuriate me more, although I’m as guilty as anyone. 🙂 ) There you go for now, though.

What else is going on?

I have been working on some things that will be unveiled at the Red Hat Summit and don’t want to be a spoiler. I am planning to talk a bit more about that kind of work here. One involves a coloring book :), and another involves a project Red Hat is working on with Boston University and Boston Children’s Hospital.

Just this week, I received my laptop upgrade 🙂 It is the Thinkpad Yoga X1 3rd Gen and I am loving it so far. I have pre-release Fedora 28 on it and am very happy with the out-of-the-box experience. I’m planning to post a review about running Fedora 28 on it soon!

Slán go fóill!

(Bye for now!)

* Every beginning is weak.

** I’m learning Irish; I want to write blog posts in Irish, but I don’t speak or write Irish well enough yet. It’ll come with practice. (Warning: This is likely Gaeilge bhriste / broken Irish)

FreeCAD BIM development news - April 2018

Hello everybody! It’s time for a new report on FreeCAD development, particularly the development of BIM tools. To recap for anyone new to this column, I recently started to "divide" the development of BIM tools in FreeCAD between the original Arch, which is included in FreeCAD itself, and the new BIM...

May 01, 2018

Goodbye Kansas Studios

Goodbye Kansas Studios is a VFX studio that creates award-winning visual effects, digital animation and motion capture for movies, game trailers and commercials. Goodbye Kansas Studios’ main office is in Stockholm, Sweden, but they also have offices in Los Angeles, London, Hamburg and Uppsala.

Goodbye Kansas Studio

Text by Nils Lagergren and Daniel Bystedt, Goodbye Kansas

We pride ourselves on having a structure at work where we put the artists first and the administration works to support the artists. This has in turn created a company culture where artists help each other out as soon as they run into any CG-related issue. We also have a very strong creative atmosphere where artists feel ownership of their tasks and go out of their way to achieve visual excellence.

At Goodbye Kansas Studios we use several 3D applications, such as Houdini, Blender, Zbrush and Maya. We always try to approach a challenge with the tool that is best suited to solving the problem at hand. Blender first caught our eye because some of our artists had started trying it out and were surprised at how much faster they could produce models. Even though not every artist at the company uses Blender, it is becoming more and more popular in the modeling department at the Stockholm office. Let’s have a look at some projects!

Characters for Unity – Adam Demo

Characters were modeled in Blender and Zbrush. The low poly version of the character was entirely done in Blender.

Blender fits nicely into our pipeline because of its powerful modeling tools. We also use it for hair grooming, which then is exported as curves and used for procedural hair setups in other packages. Blender has a very nice mix between procedural tools, standard box modeling and sculpting. Generally we use Zbrush for character work and Blender for hard surface and props/environment work. We also use it in parts of our environment workflow for scattering objects.

The Walking Dead – Season 8

Retopology and UV-mapping of human actor scans were done in Zbrush and Blender. Grooming of hairstyles was also done in Blender.

Here is some of what our artists say about Blender:

“Things that are very complex to achieve in other applications are suddenly easy!”
“As a modeler it’s a program that works with you, instead of against you.”
“Suddenly I love Dutch people”
“It made box modelling fun again”
“It feels so strange that Blender is free when it’s actually better than most other modeling programs on the market”

Overkill’s The Walking Dead – Aidan trailer

“Upresolution” of zombie game assets was done in both Zbrush and Blender. Grooming of zombie hairstyles was done in Blender, and we also made a bunch of environment assets.

Along with the Gods – The Two Worlds

The stone chamber was created with Blender. There was a lot of tedious work placing rocks so they would not intersect in this environment. Thanks to Blender’s fast rigid body simulation system, we could simulate a low-resolution version of the rocks and drop them into place. The rocks were then relinked to a high-resolution version and published as an environment model. The stone characters in this scene were also done in Blender, in two passes: first the rocks were scattered onto a human base mesh, then they were nudged around by hand for better art direction. The big stone walls were also sculpted in Blender.

Biomutant – cinematic trailer

The little hero character was modeled in Zbrush and Blender. Grooming of the fur was done in Blender.

Raid: World War 2 – Cinematic Trailer

Several environments were done in Blender. We started the layout process using Grease Pencil. This was great, since we could do it very quickly, side by side with the art director, and address his thoughts and notes. This Grease Pencil sketch was later linked into each environment artist’s scene so they had a good reference when building it. The environment artists also linked each other’s scenes so that they could see each other’s work update. This made it easy to tie the separate rooms together.

Mass Effect: The Andromeda Initiative

The Moon environment was made in Blender. Being able to sculpt the ground at the same time as scattering rocks made it really easy to iterate on the shot and see how everything looked in the camera. By importing the character animation from Maya to Blender with Alembic, the environment artist could make sure that nothing intersected the characters’ feet while they were walking. This also enabled us to create the environment while we were animating the shots.

April 30, 2018

Interview with JK Riki

Could you tell us something about yourself?

Hi everyone! My name is JK. I am an animator, graphic designer, author, and the Art-half of the Weekend Panda game studio.

Do you paint professionally, as a hobby artist, or both?

My full time job in game development has me doing art professionally, but I’m always working on improving my skills by doing digital painting as a hobby as well – so a little bit of both.

What genre(s) do you work in?

My most practiced genre is the comic/cartoon art style seen in the image above, which I have a lot of fun doing. I also strive to push beyond my comfort zone and try everything from fully rendered illustrations to graphic styles.

I want to continue to improve all-around as an artist so every genre becomes a possibility.

Whose work inspires you most — who are your role models as an artist?

* In animation: Glen Keane, who worked on things like Ariel in The Little Mermaid and Ratigan in The Great Mouse Detective (or as some know it, Basil of Baker Street).
* In comics: Bill Amend, who does the syndicated comic strip Fox Trot.
* In figure drawing: Samantha Youssef, who runs Studio Technique and has been a wonderful mentor.
* In painting: There are so many, and I seem to find more every day!

How and when did you get to try digital painting for the first time?

I imagine the first time I tried it was back in Art School, though that’s probably close to 15 years ago, so the memories are hazy.

What makes you choose digital over traditional painting?

I am a big proponent of “Fail fast and often.” Digital painting allows for just that. I can make (and try to correct) 20 mistakes digitally in the time it takes to pinpoint and alter one mistake traditionally.

Of course, I still love traditional art, even though I find it takes far longer to do. I have sketchbooks littered around my office, and would happily animate with paper and pencil any time any day.

How did you find out about Krita?

It was actually from my wife, who is a software engineer! She needed to do some graphics for a project at her old job, and wanted to find a free program to do it. After Adobe went to a forced subscription-only model, I was looking to make a change, and she showed me Krita.

What was your first impression?

Well, to be honest, I have a hard time learning new programs, so initially I was a little bit resistant! There were so many brushes, and I had to adapt to the differences between Krita and Photoshop. It won me over far more quickly than any other program, though. The flow and feel of painting and drawing in Krita is on a whole different level, probably because it was designed with that in mind! I would never want to go back now.

What do you love about Krita?

Every day I find new tools and tricks in Krita that blow me away. I recently discovered the Assistant Tool and it was practically life-changing. I can do certain things so much faster thanks to learning about that magical little icon.

I also adore so many of the brush presets. They seem much more aligned with what I’m trying to do than the ones that come with other art programs.

The fact that Krita is free is icing on the cake. (Spoiler: Artists love free stuff.)

What do you think needs improvement in Krita? Is there anything that really annoys you?

I’ve never quite gotten used to the blending mode list/UI in Krita vs. Photoshop. The PS one just feels more intuitive to me. I’d love to see an option to make the Krita drop down menu more like that one.

What sets Krita apart from the other tools that you use?

Apart from the price tag, Krita is just more fun to work in than most other programs I use. I genuinely enjoy creating art in Krita. Sometimes with other programs it feels like half of my job is fighting the software. Rarely do I feel that way in Krita.

If you had to pick one favourite of all your work done in Krita so far, what would it be, and why?

You torture me, how can I choose?! I suppose it would be this one:

It may not be the most finished or technically impressive art I’ve ever done, but it was one of the first times digital painting really clicked with me and I thought “Hey, maybe I can do this!” I’ve always felt an affinity for comic and cartoon style, but realism often eludes me. This piece proved in some small way that my practice was starting to pay off and I was getting somewhere. It felt like a turning point. So even if no one else feels the same way, this little bird will always be special to me.

What techniques and brushes did you use in it?

My most-used brushes are Ink_tilt_10 and Ink_tilt_20 (as seen in this screen capture!)

These days I use many more brushes and techniques, but that whole image was done with just those two, and different levels of flow and opacity. I didn’t even know about the Alpha Lock on the layers panel for this, which I use now in almost every digital painting.

Where can people see more of your work?

People can PLAY some of my work in the mobile game The Death of Mr. Fishy! All the art assets for that game were done in Krita. I’m doing more art for our next game right now as well. The latest details will always be posted at WeekendPanda.com.

I also share my practice art and work-in-progress on my personal Twitter account which is @JK_Riki.

Anything else you’d like to share?

Yes. A note to other artists out there: You can have the greatest tools and knowledge in the world but if you don’t practice, and truly put in the work, you will never achieve your best art. It is hard. I know, I’m with you there. It’s worth it, though. Work hard, practice a ton, and we’ll all improve together. Let’s do it! And if you ever need someone to encourage you to keep going, send me a note! 🙂

April 28, 2018

Displaying PDF with Python, Qt5 and Poppler

I had a need for a Qt widget that could display PDF. That turned out to be surprisingly hard to do. The Qt Wiki has a page on Handling PDF, which suggests only two alternatives: QtPDF, which is C++ only so I would need to write a wrapper to use it with Python (and then anyone else who used my code would have to compile and install it); or Poppler. Poppler is a common library on Linux, available as a package and used for programs like evince, so that seemed like the best route.

But Python bindings for Poppler are a bit harder to come by. I found a little one-page example using Poppler and Gtk3 via gi.repository ... but in this case I needed it to work with a Qt5 program, and my attempts to translate that example to work with Qt were futile. Poppler's page.render(ctx) takes a Cairo context, and Cairo is apparently a Gtk-centered phenomenon: I couldn't find any way to get a Cairo context from a Qt5 widget, and although I found some web examples suggesting renderToImage(), the Poppler available in gi.repository doesn't have that function.

But it turns out there's another Poppler: popplerqt5, available in the Debian package python3-poppler-qt5. That Poppler does have renderToImage, and you can take that image and paint it in a paint() callback or turn it into a pixmap you can use with a QLabel. Here's the basic sequence:

    from popplerqt5 import Poppler
    from PyQt5.QtGui import QPixmap

    document = Poppler.Document.load(filename)
    document.setRenderHint(Poppler.Document.TextAntialiasing)
    page = document.page(pageno)
    img = page.renderToImage(dpi, dpi)

    # Use the rendered image as the pixmap for a label:
    pixmap = QPixmap.fromImage(img)
    label.setPixmap(pixmap)

The line to set text antialiasing is not optional. Well, theoretically it's optional; go ahead, try it without that and see for yourself. It's basically unreadable.

Of course, there are plenty of other details to take care of. For instance, you can get the size of the rendered image:

    size = page.pageSize()
... after which you can use size.width() and size.height(). They're in points. There are 72 points per inch, so calculate accordingly in the dpi values you pass to renderToImage if you're targeting a specific DPI or need it to fit in a specific window size.
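For instance, here's a minimal sketch of picking a dpi that makes the page fill a given width; target_width (in pixels) is a hypothetical variable, just for illustration:

    # Pick a DPI so the rendered page fills target_width pixels.
    # target_width is a hypothetical variable for this example.
    size = page.pageSize()                # QSize, in points
    width_inches = size.width() / 72.0    # 72 points per inch
    dpi = target_width / width_inches
    img = page.renderToImage(dpi, dpi)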

Window Resize and Efficient Rendering

Speaking of fitting to a window size, I wanted to resize the content whenever the window was resized, which meant redefining resizeEvent(self, event) on the widget. Initially my PDFWidget inherited from QWidget with a custom paintEvent(), like this:

        # Create self.img once, early on:
        self.img = self.page.renderToImage(self.dpi, self.dpi)

    def paintEvent(self, event):
        qp = QPainter()
        qp.begin(self)
        qp.drawImage(QPoint(0, 0), self.img)
        qp.end()
(Poppler also has a function page.renderToPainter(), but I never did figure out how to get it to do anything useful.)

That worked, but when I added resizeEvent I got an infinite loop: paintEvent() called resizeEvent() which triggered another paintEvent(), ad infinitum. I couldn't find a way around that (GTK has similar problems -- seems like nearly everything you do generates another expose event -- but there you can temporarily disable expose events while you're drawing). So I rewrote my PDFWidget class to inherit from QLabel instead of QWidget, converted the QImage to a QPixmap and passed it to self.setPixmap(). That let me get rid of the paintEvent() function entirely and let QLabel handle the painting, which is probably more efficient anyway.
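Here's a minimal sketch of that QLabel-based approach, assuming popplerqt5 and PyQt5; apart from the PDFWidget name, the method names and details here are my own for illustration, and the real version (linked at the end) differs:

    from popplerqt5 import Poppler
    from PyQt5.QtGui import QPixmap
    from PyQt5.QtWidgets import QLabel

    class PDFWidget(QLabel):
        def __init__(self, filename, pageno=0, dpi=72):
            super().__init__()
            self.document = Poppler.Document.load(filename)
            self.document.setRenderHint(Poppler.Document.TextAntialiasing)
            self.page = self.document.page(pageno)
            self.render_page(dpi)

        def render_page(self, dpi):
            # QLabel handles the painting, so no paintEvent() is needed.
            img = self.page.renderToImage(dpi, dpi)
            self.setPixmap(QPixmap.fromImage(img))

        def resizeEvent(self, event):
            # Re-render at a DPI that fits the new width (72 points per inch),
            # guarding against re-rendering when the width hasn't changed.
            if event.size().width() != event.oldSize().width():
                points_wide = self.page.pageSize().width()
                self.render_page(self.width() * 72.0 / points_wide)
            super().resizeEvent(event)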

Showing all pages in a scrolled widget

renderToImage gives you one image corresponding to one page of the PDF document. More often, you'll want to see the whole document laid out, with all the pages. So you need a way to stack a bunch of widgets vertically, one for each page. You can do that with a QVBoxLayout on a widget inside a QScrollArea.

I haven't done much Qt5 programming, so I wasn't familiar with how these QVBoxes work. Most toolkits I've worked with have a VBox container widget to which you add child widgets, but in Qt5, you create a widget (no particular type -- a QWidget is enough), then create a layout object that modifies the widget, and add the sub-widgets to the layout object. There isn't much documentation for any of this, and very few examples of doing it in Python, so it took some fiddling to get it working.

Initial Window Size

One last thing: Qt5 doesn't seem to have a concept of desired initial window size. Most of the examples I found, especially the ones that use a .ui file, use setGeometry(); but that requires an (X, Y) position as well as (width, height), and there's no way to tell it to ignore the position. That means that instead of letting your window manager place the window according to your preferences, the window will insist on showing up at whatever arbitrary place you set in the code. Worse, most of the Qt5 examples I found online set the geometry to (0, 0): when I tried that, the window came up with the widget in the upper left corner of the screen and the window's titlebar hidden above the top of the screen, so there's no way to move the window to a better location unless you happen to know your window manager's hidden key binding for that. (Hint: on many Linux window managers, hold Alt down and drag anywhere in the window to move it. If that doesn't work, try holding down the "Windows" key instead of Alt.)

This may explain why I've been seeing an increasing number of these ill-behaved programs that come up with their titlebars offscreen. But if you want your programs to be better behaved, it works to self.resize(width, height) a widget when you first create it.

The current incarnation of my PDF viewer, set up as a module so you can import it and use it in other programs, is at qpdfview.py on GitHub.

April 26, 2018

GIMP 2.10.0 Released

The long-awaited GIMP 2.10.0 is finally here! This is a huge release, which contains the result of 6 long years of work (GIMP 2.8 was released almost exactly 6 years ago!) by a small but dedicated core of contributors.

The Changes in short

We are not going to list the full changelog here, since you can get a better idea with our official GIMP 2.10 release notes. To get an even more detailed list of changes please see the NEWS file.

Still, to get you a quick taste of GIMP 2.10, here are some of the most notable changes:

  • Image processing nearly fully ported to GEGL, allowing high bit depth processing, multi-threaded and hardware accelerated pixel processing, and more.
  • Color management is a core feature now, most widgets and preview areas are color-managed.
  • Many improved tools, and several new and exciting tools, such as the Warp transform, the Unified transform and the Handle transform tools.
  • On-canvas preview for all filters ported to GEGL.
  • Improved digital painting with canvas rotation and flipping, symmetry painting, MyPaint brush support…
  • Support for several new image formats added (OpenEXR, RGBE, WebP, HGT), as well as improved support for many existing formats (in particular more robust PSD importing).
  • Metadata viewing and editing for Exif, XMP, IPTC, and DICOM.
  • Basic HiDPI support: automatic or user-selected icon size.
  • New themes for GIMP (Light, Gray, Dark, and System) and new symbolic icons meant to somewhat dim the environment and shift the focus towards content (former theme and color icons are still available in Preferences).
  • And more, better, more, and even more awesome!

» READ COMPLETE RELEASE NOTES «

Enjoy GIMP!

Wilber likes it spicy!

Profiling a camera with darktable-chart


Profiling a camera with darktable-chart

Figure out the development process of your camera

What is a camera profile?

A camera profile is a combination of a color lookup table (LUT) and a tone curve, which are applied to a RAW file to get a developed image. It translates the colors a camera captures into the colors they should look like. If you shoot RAW and JPEG at the same time, the JPEG file is already a developed picture: your camera applies color corrections to the data it gets from the sensor when developing it. In other words, if a certain camera tends to turn blue into turquoise, the profile corrects for that color shift and converts those turquoise values back to their proper hue.

The camera manufacturer creates a tone curve for the camera, knows what color drifts the camera tends to produce, and can correct for them. We can mimic what the camera does using a tone curve and a color LUT.

Why do we want a color profile?

The camera captures light as linear RGB values. RAW development software needs to transform those into CIE XYZ tristimulus values for its calculations. The color transformation is often done under the assumption that the conversion from camera RGB to CIE XYZ is a linear 3x3 mapping. Unfortunately it is not, because the process is spectral: the camera sensor's sensitivity is itself a spectral response. In darktable the conversion is done the following way: the camera RGB values are transformed using the color matrix (coming either from the Adobe DNG Converter or from dcraw) to arrive at approximately profiled XYZ values. darktable then provides a color lookup table in the Lab color space to fix inaccuracies or implement styles which are semi-camera-independent. A very cool feature is that a user can edit this color LUT, and as this article will show, darktable-chart can create it for you so that you don't have to build it by hand.
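To make the linear 3x3 mapping concrete, here is a minimal sketch in Python; the matrix values are invented for illustration (real ones come from the Adobe DNG Converter or dcraw, as noted above):

    import numpy as np

    # Hypothetical camera-RGB-to-XYZ color matrix (illustrative values only).
    color_matrix = np.array([
        [0.65, 0.28, 0.07],
        [0.27, 0.68, 0.05],
        [0.02, 0.14, 0.84],
    ])

    camera_rgb = np.array([0.42, 0.37, 0.21])   # linear sensor RGB values
    xyz = color_matrix @ camera_rgb             # approximate CIE XYZ values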

What we want is the same knowledge about colors in our raw development software that the manufacturer put into the camera. There are two ways to achieve this: either we fit to a JPEG generated by the camera, which can also apply creative styles, or we fit against real color. For real color, a color target ships with a file providing the color values for each patch it has. Software for raw development normally just has a standard color matrix to tweak colors so that they look acceptable, and applies a reasonable tone curve to ensure good shadow detail. We want to do better than that!

We can develop a profile for our development process which improves the colors. We can also take advantage of the color calibration a manufacturer has done for its cameras by fitting a JPEG.

Creating pictures for color profiling

To create the required pictures for camera profiling we need a color chart (aka Color Checker) or an IT8 chart as our target. The difference between a color chart and an IT8 chart is the number of patches and the price. As the IT8 chart has more patches, the result will be much better. Ideally the color target comes with a grey card for creating a custom white balance. I can recommend the X-Rite ColorChecker Passport Photo: it is small, lightweight, all plastic, a good-quality tool, and it also has a grey card. An alternative is the Spyder Checkr. If you want a better profiling result, a good IT8 chart is the ColorChecker Digital SG.

We are creating a color profile for sunlight conditions which can be used in various scenarios. For this we need some special conditions.

The Color Checker needs to be photographed in direct sunlight, which helps to reduce any metamerism of the colors on the target and ensures a good match to the data file that tells the profiling software what the colors on the target should look like. A major concern is glare, but we can reduce it with some tricks.

One of the things we can do to reduce glare is to build a simple shooting box. For this we need a cardboard box and three black t-shirts. The box should be open on the top and on the front, like in the following picture (Figure 1).

Figure 1: Cardboard box suitable for color profiling

Normally you just need to cut one side open. Then line the inside of the box with the black t-shirts, like this:

Figure 2: A simple box for color profiling

To further reduce glare we just need the right location to shoot the picture. A lot depends on where you are located and the time of year, but in general the best time to shoot the target is either 1-2 hours before or 1-2 hours after mid-day (when the sun is at its highest elevation; keep Daylight Saving Time (DST) in mind). Try to shoot on a day with minimal clouds so the sun isn’t changing intensity while you shoot. The higher the temperature, the more water is in the atmosphere, which reduces the quality of the images for profiling. Temperatures below 20°C are better than above.

Shooting outdoor

If you want to shoot outdoors, look for an empty tarred parking lot. It should be pretty big, like one at a mall, without any cars or trees. You should be far away from walls or anything that can reflect. Put the box on the ground and shoot with the sun behind you, above your right or left shoulder. You can use black fabric (bed sheets) if the ground reflects.

Shooting indoor

Find a place indoors where you can put the box in the sun and place your camera, on a tripod, in the shadow. The darker the room the better! Garages with an additional garage door are great. The sun also needs to shine at an angle on the Color Checker, which again means photographing the color chart with the sun behind you, above your right or left shoulder. Use black fabric to cover anything that could reflect.

How to shoot the target?

  1. Put your shooting box in the sun and set up your camera on a tripod. It’s best to have the camera looking down on the color chart, like in the following picture:
Figure 3: Camera doing a custom white balance with the color profiling box
  2. You should use a prime lens for taking the pictures, if possible a 50mm or 85mm lens (or anything in between). The less glass the light has to travel through, the better it is for profiling, and those two lenses are a good choice in the number of glass elements they have and their field of view! With a tele lens we would be too far away, and with a wide-angle lens we would need to be too near to have just the black box in the picture.

  3. Set your metering mode to matrix metering and use an aperture of at least f/4.0. Make sure the color chart is parallel to the plane of the camera sensor so all patches of the chart are in focus. The color chart should be in the middle of the image, taking up about 1/3 of the frame, so that vignetting is not an issue.

  4. Set the camera to capture “RAW & JPEG” and disable lens corrections (vignetting corrections) for JPEG files if possible.

  5. If your camera has a custom white balance feature and your target comes with a gray card, create a custom white balance with it and use it (see Figure 3). Put the gray card in your black box, in the sunlight, at the same position as the Color Checker.

  6. We want a camera profile for the most-used ISO values, so for each ISO value you need to take 4 pictures of your target: one each at -1/3 EV, 0 EV, 1/3 EV and 2/3 EV. Start with ISO 100, and don’t shoot the extended ISO values (50, 64, 80); normally they are captured at ISO 100, overexposed, and the exposure is then reduced, so use the ISO 100 profile for them. If you hit the maximum shutter speed (1/8000), start to close the aperture. Creating profiles for values above ISO 12800 doesn’t really make sense, and the results probably stop being 100% accurate around ISO 6400; you can use the ISO 6400 profile for higher values.

Once you have done all the required shots, it is time to download the RAW and JPEG files to your computer.

Verifying correct images in darktable

For verifying the images we need to know the L-value (from the Lab color space) of the neutral gray field in the gray ramp of our color target. For the ColorChecker Passport we can look it up in the color information (CIE) file (ColorCheckerPassport.cie) that ships with ArgyllCMS, which should be located at:

/usr/share/color/argyll/ref/ColorCheckerPassport.cie

Note: ArgyllCMS offers CIE and CHT files for different color charts; if you already have one or are going to buy one, check first whether ArgyllCMS offers support for it! You can always add support for your color chart to ArgyllCMS, but the process is much more complex.

The ColorChecker Passport actually has two gray ramps. The neutral gray field is the field on the bottom right on both sides: on the left it is called NEU8 and on the right side it is D1. If we check the CIE file, we find that the neutral gray field has an L-value of L=96.260066. Let’s round it to L=96. For other color targets you can find the L-value in the description or specification of your target; often it is L=92. Better check the CIE file!
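If you'd rather check the value programmatically, the CIE file is plain text, so a quick scan does the job -- a minimal sketch; the exact field layout varies between charts, so read the matching line by eye:

    # Print the reference line for the neutral gray patch (NEU8);
    # the Lab L-value is among the numeric fields on that line.
    path = "/usr/share/color/argyll/ref/ColorCheckerPassport.cie"
    with open(path) as f:
        for line in f:
            if line.startswith("NEU8"):
                print(line.strip())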

You then open the RAW file in darktable and disable most modules, especially the base curve! Select the standard input matrix in the input color profile module and disable gamut clipping. Make sure “camera white balance” is selected in the white balance module. If lens corrections were automatically applied to your JPEG files, you need to enable lens corrections for your RAW files too! In general, apply to the RAW only what has been applied to the JPEG file.

Apply the changes to all RAW files you have created!

You can also crop the image but you need to apply exactly the same crop to the RAW and JPEG file!

Now we need to use the global color picker module in the darkroom to find the value of the neutral gray field on the color target.

  • Open the first RAW file in darkroom and expand the global color picker module on the left.
  • Select area, mean and Lab in the color picker and use the eye-dropper to select the neutral gray field (bottom right) on the Color Checker you photographed. Here is an example:
Figure 4: Determining the color of the neutral gray patch
  • If the value displayed in the color picker module matches the L-value of the field or is close (+/-2), give the RAW file and the corresponding JPEG file 5 stars. In the picture above it is the first value of (96.491, -0.431, 3.020), meaning L=96.491, which is what you’re looking for on this color target. You might be looking for e.g. L=92 if you are using a different Color Checker. See above for how to find out the L-value for your target.

Exporting images for darktable-chart

For exporting we need to select Lab as the output color profile. This color space is not visible in the combo box by default; you can enable it by starting darktable with the following command line argument:

darktable --conf allow_lab_output=true

Or you can enable it permanently by setting allow_lab_output to TRUE in

~/.config/darktable/darktablerc

As the output format select “PFM (float)”, and for the export path you can use:

$(FILE_FOLDER)/PFM/$(MODEL)_ISO$(EXIF_ISO)_$(FILE_EXTENSION)

Select all 5 star RAW and JPEG files and export them.

Figure 5: Exporting the images for profiling

Profiling with darktable-chart

Before we can start, you need the chart file for your color target. The chart file describes the layout of the color checker: for example, it tells the profiling software where the gray ramp is located and which field contains which color. For the “X-Rite ColorChecker Passport Photo” there is a chart file (ColorCheckerPassport.cht) provided by ArgyllCMS. You can find it here:

/usr/share/color/argyll/ref/ColorCheckerPassport.cht

Now it is time to start darktable-chart. The initial screen will look like this:

Figure 6: The darktable-chart screen after startup

Source Image

In the source image tab, select your PFM-exported RAW file as the image and your Color Checker chart file as the chart. Then fit the displayed grid to your image.

Figure 7: Selecting the source image in darktable-chart

Make sure that the inner rectangle of the grid is completely inside the color field, see Figure 8. If it is too big, you can use the size slider in the top right corner to adjust it.

Figure 8: Placing the chart grid on the source image

Reference values

In the next tab, select color chart image as the mode, and as the reference image select the PFM-exported JPEG file that corresponds to the RAW file in the source image tab. Once it is open you need to resize the grid again to match the Color Checker in your image; adjust the size with the slider if necessary.

(If you want to fit for real color instead of the camera-produced JPEG, leave the mode as cie/it8 file and load the corresponding CIE file for your color chart.)

Figure 9: Selecting the reference values for profiling in darktable-chart

Process

In this tab you’re asked to select the patches with the gray ramp; for the ‘X-Rite ColorChecker Passport’ these are the NEU1–NEU8 fields. The number of final patches defines how many editable color patches the resulting style will use within the color look up table module. More patches give a better result but slow down the process; I think 32 is a good compromise.

Once you have done this click on ‘process’ to start the calculation. The quality of the result in terms of average delta E and maximum delta E are displayed below the button. These data show how close the resulting style applied to the source image will be able to match the reference values – the lower the better.

Click on ‘export’ to save the darktable style.

darktable-chart export Figure 10: Processing the image in darktable-chart

In the export window you should already get a good name for the style. Add a leading zero for ISO values smaller than 1000 to get correct sorting in the styles module, for example: ILCE-7M3_ISO0100_JPG.dtstyle. The JPG in the name should indicate that we fitted against a JPEG file. If you fitted against a CIE file, remove it. If you applied a creative style to the JPEG, consider adding it at the end of the file name and style name.

Importing your dtstyle in darktable

To use your just created style, you need to import it in the style module in the lighttable. In the lighttable open the module on the right and click on ‘import’. Select the dtstyle file you created to add it. Once imported you can select a raw file and then double click on the style in the ‘style module’ to apply it.

Open the image in the darkroom and you will notice that the base curve has been disabled and a few modules have been enabled. The additional modules activated are normally: input color profile, color look up table and tone curve.

Verifying your profile

To verify the style you created, apply it to one of the RAW files you shot for profiling. Then use the global color picker to compare a color in the RAW file, with the style applied, to the same color in the JPEG file.

I also shoot a few normal pictures with nice colors, like flowers, in RAW and JPEG and then compare the results. Sometimes some colors can be off, which can indicate that your pictures for profiling were not ideal, for example because of passing clouds, glare, or shooting at the wrong time of day. Redo the shots until you get a result you’re satisfied with.

What does the result look like?

In the following screenshot (Figure 11) you can see the tone curve calculated by darktable-chart and darktable’s default Sony base curve. The tone curve is based on the color LUT and will look flat if you apply it without the LUT.

darktable base curve vs. tone curve Figure 11: Comparison of the default base curve with the new generated tone curve

Here is a comparison between the base curve for Sony on the left and the dtstyle (color LUT + tone curve) created with darktable-chart:

darktable comparison Figure 12: Side by side comparison on an image (left the standard base curve, right the calculated dtstyle)

Discussion

As always, the ways to get better colors are open for discussion and can be improved in collaboration.

Feedback is very welcome.

Thanks to the darktable developers for such a great piece of software! :-)

April 25, 2018

Who is Producer X?

Astute observers of Seder-Masochism will notice one “Producer X” on the poster:

Poster_ProducerX

This is consistent with the film’s opening credits:

Moses_ProducerX_edit

and end credits:

Endcredit_ProducerX_edit

Why? Who? WTF?

I made Sita Sings the Blues almost entirely alone. That caused an unforeseen problem when it came time to send the film out into the world: I was usually the only person who could represent it at festivals. Other films have producers who aren’t also the director. Other films also have crews, staff, multiple executives, and money. As SSTB’s only executive, I couldn’t be everywhere at once. Often I couldn’t be anywhere at once, due to having a life that includes occasional crises. Sometimes, if I was lucky, I could send an actor like Reena Shah, or musician like Todd Michaelesen, or narrator like Aseem Chaabra, or sound designer Greg Sextro. But most of the time it meant there was no human being representing the film when it screened at film festivals.

I’m even more hermitic now, and made Seder-Masochism in splendid isolation in Central Illinois. This time I worked with no actors, narrators, or musicians. I did try recording some friends discussing Passover, but that experiment didn’t make it into the film. Greg Sextro is again doing the sound design, but we’re working remotely (he’s in New York).

I like working alone. But I don’t like going to film festivals alone. And sometimes, I can’t go at all.

Such as right now: in June, Seder-Masochism is having its world premiere at Annecy, but I have to stay in Illinois and get surgery. I have an orange-sized fibroid in my cervix, and finally get to have my uterus removed. (I’ve suffered a lifetime of debilitating periods, but was consistently instructed to just suck it up, buttercup; no doctor bothered looking for fibroids over the last 30 years in spite of my pain. But now that I’m almost menopausal, out it goes at last!)

Film festivals are “people” events, and having a human there helps bring attention to the film. The reason I want my film in festivals is to increase attention. The more attention, the better for the film, especially as a Free Culture project. So I want a producer with it at festivals.

Fortunately, Producer X has been with Seder-Masochism from the very beginning. After Sita’s festival years, I knew that credit would be built into my next film.

So who is Producer X?

Whoever I say it is.

She’ll see you in Annecy!


April 24, 2018

3 Students Accepted for Google Summer of Code 2018

Since 2006, Google has given us the opportunity to have students sponsored to help out with Krita. For 2018 we have 3 talented students working over the summer. Over the next few months they will be getting more familiar with the Krita code base and working on their projects, and they will be blogging about their experience and what they are learning along the way. We will be sure to share any progress or information as it comes in.

Here is a summary of their projects and what they hope to achieve.

Ivan Yossi – Optimize Krita Soft, Gaussian and Stamp brushes mask generation to use AVX with Vc Library

The Krita digital painting app relies on quick painting response to give a natural experience. A painted line is composed of thousands of images placed one after the other. This image mask creation has to be performed extremely fast, as it is done thousands of times each second. If the process of applying the images to the canvas is not fast enough, the painting process gets compromised and the enjoyment of painting is reduced.

Optimizing the mask creation can be done using the AVX instruction set to apply transformations to whole vectors of data in one step. In this case the data is the image component coordinates composing the mask. AVX programming can be done using the Vc optimization library, which manages low-level optimization adapted to the user’s processor features. However, the data must be prepared so it optimizes effectively. Optimization has already been done on the Default brush mask engine, allowing it to be as much as 5 times faster than the current Gaussian mask engine.

The project aims to improve painting performance by implementing AVX-optimized code for the Circular Gauss, Circular Soft, Rectangular Gaussian, Rectangular Soft, and Stamp masks.

Michael Zhou – A Swatches Docker for Krita

This project intends to create a swatches docker for Krita. It’s similar to the palette docker that’s already in Krita today, but it has the following advantages:

  • Users can easily add, delete, drag and drop colors to give the palette a better visual pattern so that it’s easier for them to keep track of the colors.
  • Users can store a palette with a work so that they can ensure the colors they use throughout a painting are consistent.
  • It will have a more intuitive UI design.

Andrey Kamakin – Optimize multithreading in Krita’s Tile Manager

This project is about improving Krita’s overall performance by introducing a lock-free hash table for storing tiles and by improving the locks described in the proposal.

Problem: in the single-threaded execution of a program there is no need to guard shared resources, because it is guaranteed that only one thread can access a resource at a time. But in a multi-threaded program flow, resources must be shared between threads, and situations such as dirty reads must be excluded for the program to behave correctly. The simplest solution is to use locks on table operations so that only one thread at a time can access a resource for reading or writing.

We wish all the students the best of luck this summer!

darktable 2.4.3 released

we’re proud to announce the third bugfix release for the 2.4 series of darktable, 2.4.3!

the github release is here: https://github.com/darktable-org/darktable/releases/tag/release-2.4.3.

as always, please don’t use the autogenerated tarball provided by github, but only our tar.xz. the checksums are:

$ sha256sum darktable-2.4.3.tar.xz
1dc5fc7bd142f4c74a5dd4706ac1dad772dfc7cd5538f033e60e3a08cfed03d3 darktable-2.4.3.tar.xz
$ sha256sum darktable-2.4.3.dmg
290ed5473e3125a9630a235a4a33ad9c9f3718f4a10332fe4fe7ae9f735c7fa9 darktable-2.4.3.1.dmg
$ sha256sum darktable-2.4.3-win64.exe
a34361924b4d7d3aa9cb4ba7e5aeef928c674822c1ea36603b4ce5993678b2fa darktable-2.4.3-win64.exe
$ sha256sum darktable-2.4.3-win64.zip
3e14579ab0da011a422cd6b95ec409565d34dd8f7084902af2af28496aead5af darktable-2.4.3-win64.zip
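
if you want to check a download against the published hashes, sha256sum can do the comparison for you; for example, for the source tarball (note the two spaces between hash and filename):

echo "1dc5fc7bd142f4c74a5dd4706ac1dad772dfc7cd5538f033e60e3a08cfed03d3  darktable-2.4.3.tar.xz" | sha256sum -c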

when updating from the currently stable 2.2.x series, please bear in mind that your edits will be preserved during this process, but it will not be possible to downgrade from 2.4 to 2.2.x any more.

Important note: to make sure that darktable can keep on supporting the raw file format for your camera, please read this post on how/what raw samples you can contribute to ensure that we have the full raw sample set for your camera under CC0 license!

and the changelog as compared to 2.4.2 can be found below.

New Features

  • Support for tags and ratings in the watermark module
  • Read Xmp.exif.DateTimeOriginal from XMP sidecars
  • Build and install noise tools
  • Add a script for converting a .dtstyle to an .xmp

Bugfixes

  • Don’t create unneeded folders during export in some cases
  • When collecting by tags, don’t select subtags
  • Fix language selection on OSX
  • Fix a crash while tethering

Camera support, compared to 2.4.2

Warning: support for Nikon NEF ‘lossy after split’ raws was unintentionally broken due to the lack of such samples. Please see this post for more details. If you have affected raws, please contribute samples!

Base Support

  • Fujifilm X-H1 (compressed)
  • Kodak EOS DCS 3
  • Olympus E-PL9
  • Panasonic DC-GX9 (4:3)
  • Sony DSC-RX1RM2
  • Sony ILCE-7M3

White Balance Presets

  • Sony ILCE-7M3

Noise Profiles

  • Canon PowerShot G1 X Mark III
  • Nikon D7500
  • Sony ILCE-7M3

Blender at FMX 2018

FMX 2018 (Stuttgart, April 24-27) is one of Europe’s most influential conferences dedicated to Digital Visual Arts, Technologies, and Business. This year Blender is taking part in 3 events, featuring Ton Roosendaal and artists from the Blender studio crew.

Blender at FMX 2018

Presentations and Panels

Blender will be represented at the following events on April 26th:

Come and see us!

If you are attending FMX and would like to hang out on Thursday, get in touch with francesco@blender.org or reach out to us directly on social media!

April 20, 2018

UEFI booting and RAID1

I spent some time yesterday building out a UEFI server that didn’t have on-board hardware RAID for its system drives. In these situations, I always use Linux’s md RAID1 for the root filesystem (and/or /boot). This worked well for BIOS booting since BIOS just transfers control blindly to the MBR of whatever disk it sees (modulo finding a “bootable partition” flag, etc, etc). This means that BIOS doesn’t really care what’s on the drive, it’ll hand over control to the GRUB code in the MBR.

With UEFI, the boot firmware is actually examining the GPT partition table, looking for the partition marked with the “EFI System Partition” (ESP) UUID. Then it looks for a FAT32 filesystem there, and does more things like looking at NVRAM boot entries, or just running BOOT/EFI/BOOTX64.EFI from the FAT32. Under Linux, this .EFI code is either GRUB itself, or Shim which loads GRUB.
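
As a quick aside, if you want to see which partition on a disk carries the ESP type GUID, an inspection command like this works (gdisk prints the partition type codes; EF00 is the ESP):

# gdisk -l /dev/sda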

So, if I want RAID1 for my root filesystem, that’s fine (GRUB will read md, LVM, etc), but how do I handle /boot/efi (the UEFI ESP)? In everything I found answering this question, the answer was “oh, just manually make an ESP on each drive in your RAID and copy the files around, add a separate NVRAM entry (with efibootmgr) for each drive, and you’re fine!” I did not like this one bit since it meant things could get out of sync between the copies, etc.

The current implementation of Linux’s md RAID puts metadata at the front of a partition. This solves more problems than it creates, but it means the RAID isn’t “invisible” to something that doesn’t know about the metadata. In fact, mdadm warns about this pretty loudly:

# mdadm --create /dev/md0 --level 1 --raid-disks 2 /dev/sda1 /dev/sdb1
mdadm: Note: this array has metadata at the start and
    may not be suitable as a boot device.  If you plan to
    store '/boot' on this device please ensure that
    your boot-loader understands md/v1.x metadata, or use
    --metadata=0.90

Reading from the mdadm man page:

-e, --metadata=
    ...
    1, 1.0, 1.1, 1.2 default
        Use the new version-1 format superblock.  This has fewer
        restrictions.  It can easily be moved between hosts with
        different endian-ness, and a recovery operation can be
        checkpointed and restarted.  The different sub-versions store
        the superblock at different locations on the device, either at
        the end (for 1.0), at the start (for 1.1) or 4K from the start
        (for 1.2).  "1" is equivalent to "1.2" (the commonly preferred
        1.x format).  "default" is equivalent to "1.2".

First we toss a FAT32 on the RAID (mkfs.fat -F32 /dev/md0). Looking at the results, the first 4K is entirely zeros, and file doesn’t see a filesystem:

# dd if=/dev/sda1 bs=1K count=5 status=none | hexdump -C
00000000  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|
*
00001000  fc 4e 2b a9 01 00 00 00  00 00 00 00 00 00 00 00  |.N+.............|
...
# file -s /dev/sda1
/dev/sda1: Linux Software RAID version 1.2 ...

So, instead, we’ll use --metadata 1.0 to put the RAID metadata at the end:

# mdadm --create /dev/md0 --level 1 --raid-disks 2 --metadata 1.0 /dev/sda1 /dev/sdb1
...
# mkfs.fat -F32 /dev/md0
# dd if=/dev/sda1 bs=1 skip=80 count=16 status=none | xxd
00000000: 2020 4641 5433 3220 2020 0e1f be77 7cac    FAT32   ...w|.
# file -s /dev/sda1
/dev/sda1: ... FAT (32 bit)

Now we have a visible FAT32 filesystem on the ESP. UEFI should be able to boot whatever disk hasn’t failed, and grub-install will write to the RAID mounted at /boot/efi.

However, we’re left with a new problem: on (at least) Debian and Ubuntu, grub-install attempts to run efibootmgr to record which disk UEFI should boot from. This fails, though, since it expects a single disk, not a RAID set. In fact, it returns nothing, and tries to run efibootmgr with an empty -d argument:

Installing for x86_64-efi platform.
efibootmgr: option requires an argument -- 'd'
...
grub-install: error: efibootmgr failed to register the boot entry: Operation not permitted.
Failed: grub-install --target=x86_64-efi
WARNING: Bootloader is not properly installed, system may not be bootable

Luckily my UEFI boots without NVRAM entries, and I can disable the NVRAM writing via the “Update NVRAM variables to automatically boot into Debian?” debconf prompt when running:

dpkg-reconfigure -p low grub-efi-amd64

So, now my system will boot with both or either drive present, and updates from Linux to /boot/efi are visible on all RAID members at boot-time. HOWEVER there is one nasty risk with this setup: if UEFI writes anything to one of the drives (which this firmware did when it wrote out a “boot variable cache” file), it may lead to corrupted results once Linux mounts the RAID (since the member drives won’t have identical block-level copies of the FAT32 any more).

To deal with this “external write” situation, I see some solutions:

  • Make the partition read-only when not under Linux. (I don’t think this is a thing.)
  • Build higher-level knowledge of the root-filesystem RAID configuration to keep a collection of filesystems manually synchronized instead of doing block-level RAID. (Seems like a lot of work and would need a redesign of /boot/efi into something like /boot/efi/booted, /boot/efi/spare1, /boot/efi/spare2, etc.)
  • Prefer one RAID member’s copy of /boot/efi and rebuild the RAID at every boot. If there were no external writes, there’s no issue. (Though what’s really the right way to pick the copy to prefer?)

Since mdadm has the “--update=resync” assembly option, I can actually implement the last option above. This required updating /etc/mdadm/mdadm.conf to add <ignore> on the RAID’s ARRAY line to keep it from auto-starting:

ARRAY <ignore> metadata=1.0 UUID=123...

(Since it’s ignored, I’ve chosen /dev/md100 for the manual assembly below.) Then I added the noauto option to the /boot/efi entry in /etc/fstab:

/dev/md100 /boot/efi vfat noauto,defaults 0 0

And finally I added a systemd oneshot service that assembles the RAID with resync and mounts it:

[Unit]
Description=Resync /boot/efi RAID
DefaultDependencies=no
After=local-fs.target

[Service]
Type=oneshot
ExecStart=/sbin/mdadm -A /dev/md100 --uuid=123... --update=resync
ExecStart=/bin/mount /boot/efi
RemainAfterExit=yes

[Install]
WantedBy=sysinit.target
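
Assuming the unit above is saved as /etc/systemd/system/boot-efi-resync.service (the name is my invention; pick whatever fits your scheme), enabling it is just:

systemctl enable boot-efi-resync.service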

(And don’t forget to run “update-initramfs -u” so the initramfs has an updated copy of /etc/mdadm/mdadm.conf.)

If mdadm.conf supported an “update=” option for ARRAY lines, this would have been trivial. Looking at the source, though, that kind of change doesn’t look easy. I can dream!

And if I wanted to keep a “pristine” version of /boot/efi that UEFI couldn’t update I could rearrange things more dramatically to keep the primary RAID member as a loopback device on a file in the root filesystem (e.g. /boot/efi.img). This would make all external changes in the real ESPs disappear after resync. Something like:

# truncate --size 512M /boot/efi.img
# losetup -f --show /boot/efi.img
/dev/loop0
# mdadm --create /dev/md100 --level 1 --raid-disks 3 --metadata 1.0 /dev/loop0 /dev/sda1 /dev/sdb1

And at boot just rebuild it from /dev/loop0, though I’m not sure how to “prefer” that partition…

© 2018, Kees Cook. This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 License.
Creative Commons License

April 16, 2018

GIMP 2.10.0 Release Candidate 2 Released

Hot on the heels of the first release candidate, we’re happy to have a second RC ready! In the last 3 weeks since releasing GIMP 2.10.0-RC1, we’ve fixed 44 bugs and introduced important performance improvements.

As usual, for a complete list of changes please see NEWS.

Optimizations and multi-threading for painting and display

A major regression of GIMP 2.10, compared to 2.8, was slower painting. To address this issue, several contributors (Ell, Jehan, Massimo Valentini, Øyvind Kolås…) introduced improvements to the GIMP core, as well as to the GEGL and babl libraries. Additionally, Elle Stone and Jose Americo Gobbo contributed performance testing.

The speed problems pushed Ell to implement multi-threading within GIMP, so that painting and display are now run on separate threads, thus greatly speeding up feedback of the graphical interface.

The new parallelization framework is not painting-specific and could be used for improving other parts of GIMP.

Themes rewritten

Since the development version 2.9.4, we had new themes shipped with GIMP, and in particular dark themes (as is now common for creative applications). Unfortunately they were unmaintained, bugs kept piling up, and the user experience wasn’t exactly stellar.

GIMP Themes Light, Gray, and Dark themes.

Our long-time contributor Ville Pätsi took up the task of creating brand new themes without any of the usability issues and glitches of the previous ones. In the cleanup, only the Gray theme was kept, whereas Light and Dark were rewritten from scratch. Darker and Lighter themes have been removed (they won’t likely reappear unless someone decides to rewrite and contribute them as well, and unless this person stays around for maintenance).

Gradient tool improved to work in linear color space

Thanks to Michael Natterer and Øyvind Kolås, the gradient tool can now work in either perceptual RGB, linear RGB, or CIE LAB color space at your preference.

Gradient tool in linear space Gradient tool in perceptual and linear spaces

We also used the opportunity to rename the tool, which was called the “Blend tool” until now, even though barely anyone used that name. “Gradient tool” is a much more understandable name.

New on-canvas control for 3D rotation

A new widget for on-canvas interaction of 3D rotation (yaw, pitch, roll) has been implemented by Ell. This new widget is currently only used for the Panorama Projection filter.

GEGL Panorama View Panorama projection filter (image: Hellbrunn Banquet Hall by Matthias Kabel (cba))

Improvements in handling masks, channels, and selections

GIMP doesn’t do any gamma conversion when converting between selection, channels, and masks anymore. This makes the selection -> channel -> selection roundtrips correct and predictable.

Additionally, for all >8-bit per channel images, GIMP now uses linear color space for channels. This and many other fixes in the new release were done by Michael Natterer.

Translations

8 translations have been updated between the two release candidates. We are very close to releasing the final version of GIMP 2.10.0. If you plan to update a translation into your language and be in time for the release, we recommend starting now.

GEGL changes

Most of the changes in GEGL since the release in March are performance improvements and micro-optimizations in display paths. Additionally, avoiding incorrectly gamma/ungamma-correcting alpha in u8 formats provides a tiny 2-3% performance boost.

For further work on mipmaps support, GEGL now keeps track of valid/invalid areas on smaller granularity than tiles in mipmap.

The Panorama Projection operation got a reverse transform, which permits using GIMP for retouching the zenith, nadir or other arbitrary gaze directions in equirectangular (also known as 360×180) panoramas.

Finally, abyss policy support in the base class for scale operations now makes it possible to achieve hard edges on rescaled buffers.

What’s Next

We are now 7 blocker bugs away from the final release.

On your marks, get set…

Interview with Runend

Could you tell us something about yourself?

Hi! I’m Faqih Muhammad and my personal brand name is runend. I’m 22 years old and live in Medan in Indonesia. I love film animation, concept art, game making, 3d art, and everything illustration.

Do you paint professionally, as a hobby artist, or both?

It can be said that I’m a hobbyist now, but I keep learning, practicing, experimenting to find new forms and new styles of self-expression, all to improve my skills and to be a professional artist in the near future!

What genre(s) do you work in?

So far I’ve made scenery background with character as a base to learn something. Starting from the basic we can make something more interesting, but still it was quite difficult for me.

Whose work inspires you most — who are your role models as an artist?

Hhmmm, there are many artists who give me inspiration. Mainly I follow Jeremy Fenske, Atey Ghailan and Ruan Jia. I won’t forget to mention masters like Rizal Abdillah, Agung Oka and Yogei, as well as my friends and mentors.

How and when did you get to try digital painting for the first time?

It was in 2014 using photoshop, which I used to create photo-manipulations with. In 2015 I finally bought my wacom intuos manga tablet and could finally begin learning about digital painting.

What makes you choose digital over traditional painting?

Digital painting has many features that make it easy to create art. Of course there’s no need to buy art supplies: with a computer, pen and tablet you can make art.

Lately I’ve been learning traditional painting using poster color, and that makes me feel both happy and challenged.

How did you find out about Krita?

I used Google to search for “free digital painting software” and I found Krita :D.

What was your first impression?

I was like “WOW”, grateful to find software as good as this.

What do you love about Krita?

I have tried some of the features, especially the brush engine, UI/UX, layering, animation tools, I love all of them! And of course it’s free and open source.

What do you think needs improvement in Krita? Is there anything that really annoys you?

Probably the filter layer and filter mask performance. Those run very slowly, I think it would be better if they ran more smoothly and more realtime.

What sets Krita apart from the other tools that you use?

Free open source software that runs cross-platform, no need to spend more. If you get a job or a paid project with Krita, there is a donate button to make Krita better still.

If you had to pick one favourite of all your work done in Krita so far, what would it be, and why?

I love all my work, sometimes some paintings look inconsistent, then I will make it better.

What techniques and brushes did you use in it?

Before starting I think about what I want to create like situation and color mood. If that’s difficult from only imagination I usually use some reference.

I first make a sketch, basic color, shading, texture, refine the painting, and check the value using fill black layer blending mode in color.

In Krita 4.0 beta there are many new brush presets, I think that’s enough to make awesome art.

Where can people see more of your work?

Artstation: https://www.artstation.com/runend
Twitter: https://twitter.com/runendarts
Facebook: https://web.facebook.com/runendartworks
Instagram: https://www.instagram.com/runend.artworks/

Anything else you’d like to share?

Krita is an amazing program, I’d like to thank the Krita team. I wish Krita a good future, I hope Krita can be better known to the people of Indonesia, for instance on campus, schools, the creative industry etcetera.

How to create camera noise profiles for darktable

An easy way to create correct profiling pictures

Noise in digital images is similar to film grain in analogue photography. In digital cameras, noise is created either by the amplification of the digital signal or by heat produced by the sensor. It appears as random, colored speckles on an otherwise smooth surface and can significantly degrade image quality.

Noise is always present, and if it gets too pronounced, it detracts from the image and needs to be mitigated. Removing noise can decrease image quality or sharpness. There are different algorithms to reduce noise, but the best results come from having profiles for a camera, so that the denoising algorithm understands the noise pattern that camera model produces.

Noise reduction is an image restoration process. You want to remove the digital artefacts from the image in such a way that the original image is discernible. These artefacts can be just some kind of grain (luminance noise) or colorful, disturbing dots (chroma noise). It can either add to a picture or detract from it. If the noise is disturbing, we want to remove it. The following pictures show a picture with noise and a denoised version:

Noisy cup Denoised cup

To get the best noise reduction, we need to generate noise profiles for each ISO value for a camera.

Creating the pictures for noise profiling

For every ISO value your camera has, you have to take a picture. The pictures need to be exposed in a particular way to gather the information correctly. The photos need to be out of focus, with a wide-spread histogram like in the following image:

Histogram

We need overexposed and underexposed areas, but most importantly the grey areas in between. These areas contain the information we are looking for.

Let’s go through the noise profile generation step by step. To make capturing the required photos easier, we will first create a stencil.

Stencil for DSLM/DSLR lenses

You need to get some thicker black paper or cardboard. No light should shine through it! First we use the lens hood to get the size: it moves the paper away from the lens a bit and gives us something to attach the stencil to. Then we need to create a punch card. For wide-angle lenses you need a close raster, and for longer focal lengths a wider raster. It is harder to create one for compact cameras with small lenses (check below).

Find the middle and mark the size of the lens hood:

Stencil Step 1

If you have the size, draw a grid on the paper:

Stencil Step 2

Once you have done that, you need to choose a punch card raster for your focal length. I use a 16mm wide-angle lens on a full-frame body, so I chose a raster with a lot of holes:

Stencil Step 3

Untested: for a 50mm or 85mm lens I think you should start with 5 holes in the middle, created just with a needle. Put your stencil on the lens hood and check. Then you will know whether you need bigger holes, and maybe how much bigger. Please share your findings in the comments below!

Stencil for compact cameras

I guess you would create a stencil like for the bigger lenses, but add a funnel towards the camera. Contributions and ideas are welcome!

Taking the pictures

Wait for a day with thick clouds and no sun to take the pictures. The problem is the shutter speed: it is likely that you’ll hit your camera’s limit. My camera has 37 ISO values (including extended ISO), so I need to start with a 0.6 second exposure in order to take the last picture at my camera’s limit of 1/8000 of a second. A darker day helps because you can start with a slower shutter speed.

Use a tripod and point the camera at the sky, attach the lens hood and put the punch card on it. Make sure that all filters are removed, so we don’t get any strange artefacts. In the end the setup should look like this:

Punch card on camera

Choose the fastest aperture available on your lens (e.g. f/2.8 or even faster), change the camera to manual focus, and focus on infinity. Take the shot! The result should look like this:

punch card picture

The holes will overexpose the picture, but you also need an underexposed area. So I started by placing most of the dark areas in the middle of the histogram and then shifted the exposure toward the black (left) side of the histogram until the first values started to clip. It is important not to clip too much, as we are mostly interested in the grey values between the overexposed and underexposed areas.

Once you’re done taking the pictures it is time to move to the computer.

Creating the noise profiles

STEP 1

Run

/usr/lib/darktable/tools/darktable-gen-noiseprofile --help

If this gives you the help output of the tool, continue with STEP 2; otherwise go to STEP 1a.

STEP 1a

Your darktable installation doesn’t offer the noise tools so you need to compile it yourself. Before you start make sure that you have the following dependencies installed on your system:

  • git
  • gcc
  • make
  • gnuplot
  • convert (ImageMagick)
  • darktable-cli

Get the darktable source code using git:

git clone https://github.com/darktable-org/darktable.git

Now change into the source directory and build the tools for creating noise profiles:

cd darktable
mkdir build
cd build
cmake -DCMAKE_INSTALL_PREFIX=/opt/darktable -DBUILD_NOISE_TOOLS=ON ..
cd tools/noise
make
sudo make install

STEP 2

Download the pictures from your camera and change to the directory on the commandline:

cd /path/to/noise_pictures

and run the following command:

/usr/lib/darktable/tools/darktable-gen-noiseprofile -d $(pwd)

or if you had to download and build the source, run:

/opt/darktable_source/lib/tools/darktable-gen-noiseprofile -d $(pwd)

This will automatically do everything for you. Note that it can take quite some time to finish; it took 15 to 20 minutes on my machine. If a picture was not shot correctly, the tool will tell you the image name, and you will have to recapture the picture at that ISO.

Once completed, the tool will tell you how to test and verify the noise profiles you created.

Once the tool has finished, you end up with a tarball you can send to darktable for inclusion. You can open a bug at:

https://redmine.darktable.org/

The interesting files are the presets.json file (darktable input) and, for the developers, the noise_result.pdf file. You can find an example PDF here. It is a collection of diagrams showing the histogram for each picture and the results of the calculations.

A detailed explanation of the diagrams and the math behind it can be found in the original noise profile tutorial by Johannes Hanika.

For discussion

I’ve created the stencil above to make it easier to create noise profiles. However, I’ve tried different ways to create the profiles, and here is one which was a good idea but failed for low ISO values (ISO <= 320). We are in the open source world, and I think it is important to share failures too. Others may have an idea to improve it, or at least learn from it.

For a simpler approach than the one described above, I created a gradient from black to white and displayed it on my monitor. Then I used some black cardboard attached to the monitor to get some real black. Remember, you need an underexposed area, and a monitor cannot output real black, as it is backlit.

In the end my setup looked like this:

Gradient on Monitor

I turned off the lights and took the shots. However, the results for ISO values at or below ISO 320 were not good. All other ISO values looked fine.

If you’re interested in the results, you can find them here:

Please also share pictures of working stencils you created.

Feedback is very much welcome in the comments below!

April 15, 2018

Hero – Blender Grease Pencil showcase

After a series of successful short film productions focused on high-end 3D computer animation pipelines, the Blender team presents a 3-minute short film showcasing Blender’s upcoming Grease Pencil 2.0.

Grease Pencil means 2D animation tools within a full 3D pipeline. In Blender. In Open Source. Free for everyone!

The original Grease Pencil technology has been in Blender for many years now, and it has already caught the attention of story artists in the animation industry worldwide. The upcoming Grease Pencil is meant to push the boundaries and allow feature-quality animation production in Blender 2.8.

The Hero animation showcase is the fruit of collaboration between Blender developers and a team of artists based in Barcelona, Spain, led by Daniel M. Lara. This is the 6th short film funded by the Blender Cloud, confirming once more the value of a financial model that combines crowdfunding of artistic and technical goals through the creation of Open Content.

The inclusion of Grease Pencil in Blender for mainstream release is part of the Blender 2.8 Code Quest, an outstanding development effort that is currently happening at the Blender headquarters in Amsterdam. The first beta of Blender 2.8 will be available in the second part of 2018.

Press Contact:
Francesco Siddi, Producer
francesco@blender.org

April 13, 2018

security things in Linux v4.16

Previously: v4.15

Linux kernel v4.16 was released last week. I really should write these posts in advance, otherwise I get distracted by the merge window. Regardless, here are some of the security things I think are interesting:

KPTI on arm64

Will Deacon, Catalin Marinas, and several other folks brought Kernel Page Table Isolation (via CONFIG_UNMAP_KERNEL_AT_EL0) to arm64. While most ARMv8+ CPUs were not vulnerable to the primary Meltdown flaw, the Cortex-A75 does need KPTI to be safe from memory content leaks. It’s worth noting, though, that KPTI does protect other ARMv8+ CPU models from having privileged register contents exposed. So, whatever your threat model, it’s very nice to have this clean isolation between kernel and userspace page tables for all ARMv8+ CPUs.

hardened usercopy whitelisting

While whole-object bounds checking was implemented in CONFIG_HARDENED_USERCOPY already, David Windsor and I finished another part of the porting work of grsecurity’s PAX_USERCOPY protection: usercopy whitelisting. This further tightens the scope of slab allocations that can be copied to/from userspace. Now, instead of allowing all objects in slab memory to be copied, only the whitelisted areas (where a subsystem has specifically marked the memory region allowed) can be copied. For example, only the auxv array out of the larger mm_struct.

As mentioned in the first commit from the series, this reduces the scope of slab memory that could be copied out of the kernel in the face of a bug to under 15%. As can be seen below, one area of remaining work is the kmalloc regions. Those are regularly used for copying things in and out of userspace, but they’re also used for small simple allocations that aren’t meant to be exposed to userspace. Working to separate these kmalloc users needs some careful auditing.

Total Slab Memory:      48074720
Usercopyable Memory:     6367532  13.2%

         task_struct    0.2%       4480/1630720
                 RAW    0.3%        300/96000
               RAWv6    2.1%       1408/64768
    ext4_inode_cache    3.0%     269760/8740224
              dentry   11.1%     585984/5273856
           mm_struct   29.1%      54912/188448
           kmalloc-8  100.0%      24576/24576
          kmalloc-16  100.0%      28672/28672
          kmalloc-32  100.0%      81920/81920
         kmalloc-192  100.0%      96768/96768
         kmalloc-128  100.0%     143360/143360
         names_cache  100.0%     163840/163840
          kmalloc-64  100.0%     167936/167936
         kmalloc-256  100.0%     339968/339968
         kmalloc-512  100.0%     350720/350720
          kmalloc-96  100.0%     455616/455616
        kmalloc-8192  100.0%     655360/655360
        kmalloc-1024  100.0%     812032/812032
        kmalloc-4096  100.0%     819200/819200
        kmalloc-2048  100.0%    1310720/1310720

This series took quite a while to land (you can see David’s original patch date as back in June of last year). Partly this was due to having to spend a lot of time researching the code paths so that each whitelist could be explained for commit logs, partly due to making various adjustments from maintainer feedback, and partly due to the short merge window in v4.15 (when it was originally proposed for merging) combined with some last-minute glitches that made Linus nervous. After baking in linux-next for almost two full development cycles, it finally landed. (Though be sure to disable CONFIG_HARDENED_USERCOPY_FALLBACK to gain enforcement of the whitelists — by default it only warns and falls back to the full-object checking.)

automatic stack-protector

While the stack-protector features of the kernel have existed for quite some time, it has never been enabled by default. This was mainly due to needing to evaluate compiler support for the feature, and Kconfig didn’t have a way to check the compiler features before offering CONFIG_* options. As a defense technology, the stack protector is pretty mature. Having it on by default would have greatly reduced the impact of things like the BlueBorne attack (CVE-2017-1000251), as fewer systems would have lacked the defense.

After spending quite a bit of time fighting with ancient compiler versions (*cough*GCC 4.4.4*cough*), I landed CONFIG_CC_STACKPROTECTOR_AUTO, which is default on, and tries to use the stack protector if it is available. The implementation of the solution, however, did not please Linus, though he allowed it to be merged. In the future, Kconfig will gain the knowledge to make better decisions which lets the kernel expose the availability of (the now default) stack protector directly in Kconfig, rather than depending on rather ugly Makefile hacks.
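
If you want to check what your own kernel build ended up with, the config symbols are easy to grep for (a sketch; the config file location varies by distro):

grep STACKPROTECTOR /boot/config-$(uname -r)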

That’s it for now; let me know if you think I should add anything! The v4.17 merge window is open. :)

Edit: added details on ARM register leaks, thanks to Daniel Micay.

© 2018, Kees Cook. This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 License.
Creative Commons License

April 11, 2018

Krita 4.0.1 Released

Today the Krita team releases Krita 4.0.1, a bug fix release of Krita 4.0.0. We fixed more than fifty bugs since the Krita 4.0.0 release! See below for the full list of fixed issues. Translations work again with the appimage and the macOS build. Please note that:

  • The reference image docker has been removed. Krita 4.1.0 will have a new reference images tool. You can test the code-in-progress by downloading the nightly builds for Windows and Linux
  • There is no scripting available on macOS. We had it almost working when the one MacBook the project owns received a broken update, which undid all our work. G’Mic is also not available on macOS.
  • The lock and collapse icons on the docker titlebars are removed: too many people were too confused by them.

If you find a new issue, please consult this draft document on reporting bugs, before reporting an issue. After the 4.0 release, more than 150 bugs were reported, but most of those reports were duplicates, requests for help or just not useful at all. This puts a heavy strain on the developers, and makes it harder to actually find time to improve Krita. Please be helpful!

Improvements

Windows

  • Patch QSaveFile so working on images stored in synchronized folders (dropbox, google drive) is safe

Shortcuts

  • Fix duplicate shortcut on Photoshop scheme
  • Alphabetize shortcuts to make the diffs easier to read when making changes

UI

  • Make the triangles larger on the categorized list view so they are more visible
  • Disable the macro recorder and playback plugin
  • Remove the docker titlebar lock and collapse buttons. BUG:385238 BUG:392235
  • Set the pixel grid to show up at 2400% zoom by default. BUG:392161
  • Improve the layout of the palette docker
  • Disable drag and drop in the palette view: moving swatches around did not actually change the palette. BUG:392349
  • Fix selecting the last used template in the new document dialog when using appimages. BUG:391973
  • Fix canvas lockup when using Guides at the top of the image. BUG:391098
  • Do not reset redo history when changing layer’s visibility. BUG:390581
  • Fix shifting the pan position after using the popup widget rotation circle. BUG:391921
  • Fix height map to normal map in wraparound mode. BUG:392191

Text

  • Make it possible to edit the font size in the svg text tool. BUG:392714
  • Let Text Shape have empty lines. BUG:392471
  • Fix updates of undo/redo actions. BUG:392257
  • Implement “Convert text into path” function. BUG:391294
  • Fix a crash in SvgTextTool when deleting hovered/selected shape. BUG:392128
  • Make the text editor window application modal. BUG:392248
  • Fix alignment of RTL text. BUG:392065 BUG:392064
  • Fix painting parts of text outside the bounding box on the canvas. BUG:392068
  • Fix rendering of the text with relative offsets. BUG:391160
  • Fix crash when transforming text with Transform Tool twice. BUG:392127

Animation

  • Fix handling of keyframes when saving. BUG:392233 BUG:392559
  • Keep show in timeline and onion skin options when merging layers. BUG:377358
  • Keep keyframe color labels when merging layers. BUG:388913
  • Fix exporting out audio with video formats MKV and OGV.

File handling

  • Do not load/save layer channel flags anymore (channel flags were removed from the UI in Krita 2.9). BUG:392504
  • Fix saving of Transform Mask into rendered formats. BUG:392229
  • Fix reporting errors when loading fails. BUG:392413
  • Fix a memory leak when loading file layers
  • Fix loading a krita file with a loop in the clone layers setup. BUG:384587
  • Fix showing a wait cursor after loading a PNG image. BUG:392249
  • Make bundle loading feedback a bit clearer regarding the bundle.

Vector bugs

  • Fix crash when creating a vector selection. BUG:391292
  • Fix crash when doing right-click on the gradient fill stop opacity input box of a vector BUG:392726
  • Fix setting the aspect ratio of vector shapes. BUG:391911
  • Fix a crash if a certain shape is not valid when writing SVG. BUG:392240
  • Fix hidden stroke and fill widgets not to track current shape selection BUG:391990

Painting and brush engines

  • Fix crash when creating a new spray preset. BUG:392869
  • Fix rounding of the pressure curve
  • Fix painting with colorsmudge brushes on transparency masks. BUG:391268
  • Fix uninitialized distance info for KisHairyPaintOp BUG:391940
  • Fix rounding of intermediate pressure values
  • Fix the colorsmudge brush when painting in wraparound mode. BUG:392312

Layers and masks

  • Fix flattening of group layers with Inherit Alpha property set. BUG:390095
  • Fix a crash when using a transformation mask on a file layer. BUG:391270
  • Improve performance of the transformation mask

Download

Windows

Note for Windows users: if you encounter crashes, please follow these instructions to use the debug symbols so we can figure out where Krita crashes.

Linux

(If, for some reason, Firefox thinks it needs to load this as text: to download, right-click on the link.)

When it is updated, you can also use the Krita Lime PPA to install Krita 4.0.1 on Ubuntu and derivatives. We are working on an updated snap.
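
For Ubuntu users, once the PPA carries the update, installation is the usual dance (assuming the ppa:kritalime/ppa name; check the Krita site if unsure):

sudo add-apt-repository ppa:kritalime/ppa
sudo apt update
sudo apt install krita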

OSX

Note: the gmic-qt and python plugins are not available on OSX.

Source code

md5sum

For all downloads:

Key

The Linux appimage and the source tarball are signed. You can retrieve the public key over https here: 0x58b9596c722ea3bd.asc. The signatures are here (filenames ending in .sig).
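
A minimal sketch of verifying one of the downloads with GnuPG (the appimage filename below is a placeholder; use the actual file and its matching .sig):

gpg --import 0x58b9596c722ea3bd.asc
gpg --verify krita-4.0.1-x86_64.appimage.sig krita-4.0.1-x86_64.appimage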

Support Krita

Krita is a free and open source project. Please consider supporting the project with donations or by buying training videos or the artbook! With your support, we can keep the core team working on Krita full-time.

April 10, 2018

FreeCAD 0.17 is released

Hello everybody, Finally, after two years of intense work, the FreeCAD community is happy and proud to announce the release 0.17 of FreeCAD. You can grab it at the usual places, either via the Downloads page or directly via the github release page. There are installers for Windows and Mac, and an AppImage for Linux. Our...

April 05, 2018

Cave Creek Hiking and Birding Trip

A week ago I got back from a trip to the Chiricahua mountains of southern Arizona, specifically Cave Creek on the eastern side of the range. The trip was theoretically a hiking trip, but it was also for birding and wildlife watching -- southern Arizona is near the Mexican border and gets a lot of birds and other animals not seen in the rest of the US -- and an excuse to visit a friend who lives near there.

Although it's close enough that it could be driven in one fairly long day, we took a roundabout 2-day route so we could explore some other areas along the way that we'd been curious about.

First, we wanted to take a look at the White Mesa Bike Trails northwest of Albuquerque, near the Ojito Wilderness. We'll be back at some point with bikes, but we wanted to get a general idea of the country and terrain. The Ojito, too, looks like it might be worth a hiking trip, though it's rather poorly signed: we saw several kiosks with maps where the "YOU ARE HERE" was clearly completely misplaced. Still, how can you not want to go back to a place where the two main trails are named Seismosaurus and Hoodoo?

[Cabezon] The route past the Ojito also led past Cabezon Peak, a volcanic neck we've seen from a long distance away and wanted to see closer. It's apparently possible to climb it but we're told the top part is fairly technical, more than just a hike.

Finally, we went up and over Mt Taylor, something we've been meaning to do for many years. You can drive fairly close to the top, but this being late spring, there was still snow on the upper part of the road and our Rav4's tires weren't up to the challenge. We'll go back some time and hike all the way to the top.

We spent the night in Grants, then the following day, headed down through El Malpais, stopping briefly at the beautiful Sandstone Overlook, then down through the Datil and Mogollon area. We wanted to take a look at a trail called the Catwalk, but when we got there, it was cold, blustery, and starting to rain and sleet. So we didn't hike the Catwalk this time, but at least we got a look at the beginning of it, then continued down through Silver City and thence to I-10, where just short of the Arizona border we were amused by the Burma Shave dust storm signs about which I already wrote.

At Cave Creek

[Beautiful rocks at Cave Creek] Cave Creek Ranch, in Portal, AZ, turned out to be a lovely place to stay, especially for anyone interested in wildlife. I saw several "life birds" and mammals, plus quite a few more that I'd seen at some point but had never had the opportunity to photograph. Even had we not been hiking, just hanging around the ranch watching the critters was a lot of fun. They charge $5 for people who aren't staying there to come and sit in the feeder area; I'm not sure how strictly they enforce it, but given how much they must spend on feed, it would be nice to help support them.

The bird everyone was looking for was the Elegant Trogon. Supposedly one had been seen recently along the creekbed, and we all wanted to see it.

They also had a nifty suspension bridge for pedestrians crossing a dry (this year) arroyo over on another part of the property. I guess I was so busy watching the critters that I never went wandering around, and I would have missed the bridge entirely had Dave not pointed it out to me on the last day.

The only big hike I did was the Burro Trail to Horseshoe Pass, about 10 miles and maybe 1800 feet of climbing. It started with a long hike up the creek, during which everybody had eyes and ears trained on the sycamores (we were told the trogon favored sycamores). No trogon. But it was a pretty hike, and once we finally started climbing out of the creekbed there were great views of the soaring cliffs above Cave Creek Canyon. Dave opted to skip the upper part of the trail to the saddle; I went, but have to admit that it was mostly just more of the same, with a lot of scrambling and a few difficult and exposed traverses. At the time I thought it was worth it, but by the time we'd slogged all the way back to the cars I was doubting that.

[ Organ Pipe Formation at Chiricahua National Monument ] On the second day the group went over the Chiricahuas to Chiricahua National Monument, on the other side. Forest road 42 is closed in winter, but we'd been told that it was open now since the winter had been such a dry one, and it wasn't a particularly technical road, certainly easy in the Rav4. But we had plans to visit our friend over at the base of the next mountain range west, so we just made a quick visit to the monument, did a quick hike around the nature trail and headed on.

Back with the group at Cave Creek on Thursday, we opted for a shorter, more relaxed hike in the canyon to Ash Spring rather than the brutal ascent to Silver Peak. In the canyon, maybe we'd see the trogon! Nope, no trogon. But it was a very pleasant hike, with our first horned lizard ("horny toad") spotting of the year, a couple of other lizards, and some lovely views.

Critters

We'd been making a lot of trogon jokes over the past few days, as we saw visitor after visitor trudging away muttering about not having seen one. "They should rename the town of Portal to Trogon, AZ." "They should rename that B&B Trogon's Roost Bed and Breakfast." Finally, at the end of Thursday's hike, we stopped in at the local ranger station, where among other things (like admiring their caged gila monster) we asked about trogon sightings. Turns out the last one to be seen had been in November. A local thought maybe she'd heard one in January. Whoever had relayed the rumor that one had been seen recently was being wildly optimistic.

[ Northern Cardinal ] [ Coati ] [ Javalina ] [ white-tailed buck ]
Fortunately, I'm not a die-hard birder and I didn't go there specifically for the trogon. I saw lots of good birds and some mammals I'd never seen before (full list), like a coatimundi (I didn't realize those ever came up to the US) and a herd (pack? flock?) of javalinas. And white-tailed deer -- easterners will laugh, but those aren't common anywhere I've lived (mule deer are the rule in California and Northern New Mexico). Plus some good hikes with great views, and a nice visit with our friend. It was a good trip.

On the way home, again we took two days for the opportunity to visit some places we hadn't seen. First, Cloudcroft, NM: a place we'd heard a lot about because a lot of astronomers retire there. It's high in the mountains and quite lovely, with lots of hiking trails in the surrounding national forest. Worth a visit some time.

From Cloudcroft we traveled through the Mescalero Apache reservation, which was unexpectedly beautiful, mountainous and wooded and dotted with nicely kept houses and ranches, to Ruidoso, a nice little town where we spent the night.

Lincoln

[ Lincoln, NM ] Our last stop, Saturday morning, was Lincoln, site of the Lincoln County War (think Billy the Kid). The whole tiny town is set up as a tourist attraction, with old historic buildings ... that were all closed. Because why would any tourists be about on a beautiful Saturday in spring? There were two tiny museums, one at each end of town, which were open, and one of them tried to entice us into paying the entrance fee by assuring us that the ticket was good for all the sites in town. Might have worked, if we hadn't already walked the length of the town peering into windows of all the closed sites. Too bad -- some of them looked interesting, particularly the general store. But we enjoyed our stroll through the town, and we got a giggle out of the tourist town being closed on Saturday -- their approach to tourism seems about as effective as Los Alamos'.

Photos from the trip are at Cave Creek and the Chiricahuas.

April 03, 2018

The LVFS CDN will change soon

tl;dr: If you have https://s3.amazonaws.com/lvfsbucket/downloads/firmware.xml.gz in /etc/fwupd/remotes.d/lvfs.conf then you need to nag your distribution to update the fwupd package to 1.0.6.

Slightly longer version:

The current CDN (~$100/month) is kindly sponsored by Amazon, but that won’t last forever, and the donations I get for the LVFS service don’t cover the cost of using S3. Long term we are switching to a ‘dumb’ provider (currently BunnyCDN) that works out to 1/10th of the cost, which is covered by the existing kind donations. We’ve switched to use a new CNAME to make switching CDN providers easy in the future, which means this should be the only time this changes client side.
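
If you are not sure which CDN your system currently points at, a quick check against the path from the tl;dr above (the exact key name inside lvfs.conf can vary between fwupd versions):

grep -i uri /etc/fwupd/remotes.d/lvfs.conf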

If you want to patch older versions of fwupd, you can apply this commit.

April 02, 2018

Fun at SCaLE 2018

I am finally back and have a moment to write a bit about the wonderful time I had out in Pasadena at the Southern California Linux Expo (SCaLE 16x)!

SCaLE 16x Logo

SCaLE has been held annually in southern California for many years (the “16x” indicates this is the sixteenth annual meeting, though they’ve been holding meetings for longer as a LUG).

Libre Graphics Track

This year, Nate Willis reached out to see if we might be willing to help organize the first ever “Libre Graphics” track at the meeting. Usually the conference is geared towards enterprise technologies and users, but we thought it might be a nice opportunity to bring to light some of the awesome graphics projects that are out there.

It was an awesome opportunity to share the stage with some really talented folks. The day’s track and presentations can all be seen here:

  • Laidout
    by Tom Lechner

  • Extending Inkscape with SVG Filters
    by Ted Gould

  • Busting Things Up with the Fracture Modifier VFX Branch of Blender
    by JT Nelson

  • Making freely licensed movies with freely licensed tools
    by Matt Lee

  • Developers, Developers, Developers—How About Creatives
    by Ryan Gorley

  • Why the GIMP Team Obviously Hates You
    by Pat David

  • Git for Photographers
    by Mica Semrick

Overall it was a great day filled with some really neat presentations. More important was the opportunity to demonstrate to the attendees that the world of Libre Graphics projects is alive and well! The talks were well attended (approx. 30-40 visitors depending on the talk) and the interest and participation were quite nice. Each speaker found a receptive audience with interested follow-on questions (my presentation had about 12 minutes of questions at the end).

One of the most interesting take-aways at the end of my presentation (and in the following weeks through email) was the astonishment people had at the size of the team working on GIMP. It seemed that the overall impression was that there was some large team of folks hacking on the project, and many people were amazed that the crew is actually as small as it is.

What was heartening was the number of attendees after my presentation who took the time to offer their help in some way. These were all offers to help with writing tutorials or other non-development roles. We will follow up with everyone who offered about specific areas where they can help, which should result in some new and/or updated tutorials soon!

GIMP + Inkscape Expo Booth

Even better was the opportunity to share a booth at the Expo with the Inkscape team. Presenting is fantastic fun, and I love it, but it’s ridiculously humbling to get a chance to meet face-to-face with users (in the booth on the expo floor) and to hear their stories, soak in their praise, or deflect their anger to someone else while quietly sneaking away (kidding of course).

Thanks to the great work of Ryan Gorley we even had a pair of fantastic banners to hang in the booth:

GIMP + Inkscape Banners Ryan Gorley was kind enough to design this pair of banners we hung in the booth.

There was great foot traffic during the expo and we had an opportunity to meet with and chat with quite a few folks making their way through the expo floor. There were even a few folks who had heard of GIMP but hadn’t really taken the time to look at it (which was a great opportunity to talk about the project and what they could do with it). Everyone was extremely kind and gracious.

GIMP + Inkscape Booth The booth! With yours truly in the bottom left.

Overall the conference was a success, I’d say! We had an opportunity to represent the world of Free Software graphics applications and to showcase works using these tools to an audience that might not have otherwise considered them. Quite a few attendees were surprised to see us and were very engaged, both in the booth and during the Libre Graphics track, and we sparked a nice interest in volunteering for non-programming tasks (that willingness to help out is greatly appreciated).

Interview with Christopher

Could you tell us something about yourself?

Sure, my name is Christopher, and I’m an illustrator and visual designer who lives in California. I am presently exploring writing my own graphic novel, definitely a challenge. Talking about writing for sequential art is a whole interview on its own.

Do you paint professionally, as a hobby artist, or both?

Yes. I’ve been both for a while now, mostly as a freelancer. However, I will practice trying out new things with personal work. It helps me grow as an artist. I’m always looking for the opportunity to work with new and interesting people and projects.

What genre(s) do you work in?

Science Fiction, Fantasy, and Comic Book/Sequential Art, which is where much of my childhood inspiration to create art came from. The Fantastic Four, Elric of Melniboné, The Metamorphosis Odyssey: books and art from this kind of work will always be inspirational to me.

Whose work inspires you most — who are your role models as an artist?

N. C. Wyeth and The Brandywine School of artists have always been influential to me. They were my first and lasting impression of Illustration. I saw their work at The Delaware Art Museum during a school field trip and was transfixed. These artists’ sense of dramatic storytelling and compositional choices made a lasting impression. It’s a great collection. Other artists like Brom, Adolf Hiremy-Hirschl, Mead Schaffer, Dean Cornwell, J. C. Leyendecker and Ricardo Fernandez Ortega have been study material for me recently. I don’t think I could ever claim just one source as the epicenter of my inspiration.

How and when did you get to try digital painting for the first time?

About 10 years ago. Until then I worked predominantly in oils. I’ve always been willing to try new mediums. I thought digital painting would be a great area to explore because I could still do color work without the worry of solvents and preparing my work area.

My first experience using a professional stylus was at a Comic Con demo in 2009. That was amazing! I had observed its use and tried a really basic stylus, but never really had the opportunity to use one of high quality; the pressure sensitivity was surprisingly responsive. The lag was something I noticed right off, but as I adjusted my hand it became a negligible issue when working with this type of interface. It took time to orient myself, but after I got the hang of it, it was really cool. After that experience I knew I needed a digital setup. It was just that much fun!

What makes you choose digital over traditional painting?

For now: Immediacy.

I can start a study or painting with almost zero prep time on a digital platform. That matters when you only have a certain amount of time to work with, whether within a deadline or around other responsibilities. If a client asks you to render an image within a certain time, it’s expected within that time frame; rarely do you get more time, usually you get less. Time is a resource that can’t be replaced: when it’s gone, it’s gone. So a tool that vastly speeds up alterations and color adjustments is invaluable in a time-sensitive environment, especially when someone wants something changed or altered for any of a bevy of reasons.

The physicality of traditional media brings different challenges and advantages of its own, but in general the solutions take longer to accomplish.

With all that said I’m still keeping my paints and brushes!

How did you find out about Krita?

A friend of mine back East who is really into Open Source does digital painting from time to time. He knew I was dissatisfied with Painter X and CS, so he recommended Krita. Painter wasn’t particularly intuitive, and while CS was OK, I wanted something different. Just because something is popular doesn’t mean it’s the right fit for everyone. So then I asked him where I could get Krita. He said to me “Open Source. Just download it. From their site”. I was like “it couldn’t be that simple”. But it was. I installed it and I was hooked.

What was your first impression?

Intuitive, more features than I had expected. It had a UI that I had very few difficulties with and could arrange to my liking. The ability to customize so much is really appealing.

What do you love about Krita?

The brushes. Their responsiveness, the ability to customize brushes was also a huge plus. I found the brush creation tool to be quite approachable and easy to use. I love the test area.

The interface was easy to navigate. There wasn’t any odd interaction with types of layers that I found with other paint programs.

Krita just felt comfortable.

What do you think needs improvement in Krita?

Well, while the UI is one of the strengths of Krita, when I tried to install it recently on a Microsoft Surface there was no way to scale the UI to a size that was usable on that device. If the UI were scalable, in certain parts or overall, I think that would be very helpful.

Also, the ability to import/export the color history as a file within an open Krita document.

What sets Krita apart from the other tools that you use?

Krita has created a community of people willing to help each other to not only make their work better but to have an evolving tool to create it with. Krita doesn’t impede talent being explored, it freely supports it.

What techniques and brushes do you like to use?

Over time I have collected a set of brushes that I use frequently. They consist of some of the Muses brush set, Cazu, Nlynook and custom brushes I have created. I use some of David Revoy’s brushes, specifically the Splatter brushes. These days I am working with a more limited palette and limiting my use of layer effects. This is part of developing a new approach while still adhering to plein air techniques.

Where can people see more of your work?

You can see my work in print: a comic book called “After The Gold Rush” has an illustration of mine featured in issue #4. It was created in Krita. I also have a site, redacesmedia.com; you can reach me there!

Anything else you’d like to share?

Yes.

To the artists reading this: keep drawing and painting. There are artists in the Krita community who will be supportive of your work!

Also, thanks for the interview and a special thanks to the developers and community that make Krita something special.

March 30, 2018

FreeCAD BIM development news - March 2018

I hope you noticed the small improvement in the title... It's not that I suddenly became a big fan of the "BIM" term, but the word "Arch" really is too narrow in today's construction field. Besides, as I explained last month, I am now starting to split the BIM stuff in FreeCAD in two parts:...

March 27, 2018

darktable 2.4.2 released

we’re proud to announce the second bugfix release for the 2.4 series of darktable, 2.4.2!

the github release is here: https://github.com/darktable-org/darktable/releases/tag/release-2.4.2.

as always, please don’t use the autogenerated tarball provided by github, but only our tar.xz. the checksums are:

$ sha256sum darktable-2.4.2.tar.xz
19cccb60711ed0607ceaa844967b692a3b8666b12bf1d12f2242ec8942fa5a81 darktable-2.4.2.tar.xz
$ sha256sum darktable-2.4.2.dmg
2b0b456f6efbc05550e729a388c55e195eecc827b0b691cd42d997b026f0867c darktable-2.4.2.dmg
$ sha256sum darktable-2.4.2-win64.exe
5181dad9afd798090de8c4d54f76ee4d43cbf76ddf2734364ffec5ccb1121a34 darktable-2.4.2-win64.exe
$ sha256sum darktable-2.4.2-win64.zip
935ba4756e208369b9cabf1ca441ed0b91acb73ebf9125dcaf563210ebe4524d darktable-2.4.2-win64.zip

when updating from the currently stable 2.2.x series, please bear in mind that your edits will be preserved during this process, but it will not be possible to downgrade from 2.4 to 2.2.x any more.

Important note: to make sure that darktable can keep on supporting the raw file format for your camera, please read this post on how/what raw samples you can contribute to ensure that we have the full raw sample set for your camera under CC0 license!

and the changelog as compared to 2.4.1 can be found below.

New Features

  • Add presets to location search in map mode
  • Add timestamps to the output of -d command line switches
  • Add a compression level slider to the TIFF export module
  • Add native binary NetPNM loading, without using GraphicsMagick
  • Add a battery indicator for people running darktable on a laptop. This is not very portable code and disabled by default
  • Allow to use /? to show the help message on Windows

Bugfixes

  • Turn off smooth scrolling for X11/Quartz. That might help with oversensitive scrolling
  • Fix reading and writing of TIFFs with non-ASCII filenames on Windows
  • Ellipsize background job labels when too long
  • Hard code D50 white point when exporting to OpenEXR
  • Add tooltips to the haze removal module
  • Fix a crash when changing lenses while tethering
  • Fix incorrect Atom CPU detection on Windows
  • Revised performance configuration
  • Don’t overlay the colorbalance sliders on the left for a cleaner look
  • Honor local copy in copy export format
  • Make trashing of files on Windows silent
  • Fix string termination override on memmove
  • Fix a use after free and some memleaks
  • Fix a crash in PDF export
  • Fix the min color picker
  • Don’t hardcode ‘/’ in OpenCL paths on Windows

Camera support, compared to 2.4.1

Warning: support for Nikon NEF ‘lossy after split’ raws was unintentionally broken due to the lack of such samples. Please see this post for more details. If you have affected raws, please contribute samples!

Base Support

  • Canon PowerShot G1 X Mark III
  • Panasonic DMC-FZ2000 (3:2)
  • Panasonic DMC-FZ2500 (3:2)
  • Panasonic DMC-ZS100 (3:2)
  • Sony DSC-RX0
  • Sony DSC-RX10M4

Noise Profiles

  • Canon EOS 200D
  • Canon EOS Kiss X9
  • Canon EOS Rebel SL2
  • Canon EOS 760D
  • Canon EOS 8000D
  • Canon EOS Rebel T6s
  • Canon PowerShot G1 X Mark II
  • Canon PowerShot G9 X
  • Fujifilm X100F
  • Nikon D850
  • Panasonic DC-G9
  • Panasonic DMC-GF6
  • Panasonic DMC-LX10
  • Panasonic DMC-LX15
  • Panasonic DMC-LX9
  • Panasonic DMC-TZ70
  • Panasonic DMC-TZ71
  • Panasonic DMC-ZS50

Translations

  • Dutch
  • French
  • German
  • Hungarian
  • Italian

March 26, 2018

Dust Storm Burma Shave Signs

I just got back from a trip to the Chiricahuas, specifically Cave Creek. More on that later, after I've done some more photo triaging. But first, a story from the road.

[NM Burma Shave dust storm signs]

Driving on I-10 in New Mexico near the Arizona border, we saw several signs about dust storms. The first one said,

ZERO VISIBILITY IS POSSIBLE

Dave commented, "I prefer the ones that say, 'may exist'." And as if the highway department heard him, a minute or two later we passed a much more typical New Mexico road sign:

DUST STORMS MAY EXIST
New Mexico, the existential state.

But then things got more fun. We drove for a few more miles, then we passed a sign that obviously wasn't meant to stand alone:

IN A DUST STORM

"It's a Burma Shave!" we said simultaneously. (I'm not old enough to remember Burma Shave signs in real life, but I've heard stories and love the concept.) The next sign came quickly:

PULL OFF ROADWAY

"What on earth are they going to find to rhyme with 'roadway'?" I wondered. I racked my brains but couldn't come up with anything. As it turns out, neither could NMDOT. There were three more signs:

TURN VEHICLE OFF
FEET OFF BRAKES
STAY BUCKLED

"Hmph", I thought. "What an opportunity missed." But I still couldn't come up with a rhyme for "roadway". Since we were on Interstate 10, and there's not much to do on a long freeway drive, I penned an alternative:

IN A DUST STORM
PULL OFF TEN
YOU WILL LIVE
TO DRIVE AGAIN

Much better, isn't it? But one thing bothered me: you're not really supposed to pull all the way off Interstate 10, just onto the shoulder. How about:

IN A DUST STORM
PULL TO SHOULDER
YOU WILL LIVE
TO GET MUCH OLDER

I wasn't quite happy with it. I thought my next attempt was an improvement:

IN A DUST STORM
PULL TO SHOULDER
YOU MAY CRASH IF
YOU ARE BOLDER
but Dave said I should stick with "GET MUCH OLDER".

Oh, well. Even if I'm not old enough to remember real Burma Shave signs, and even if NMDOT doesn't have the vision to make their own signs rhyme, I can still have fun with the idea.

March 25, 2018

GIMP 2.10.0 Release Candidate 1 Released

Newly released GIMP 2.10.0-RC1 is the first release candidate before the GIMP 2.10.0 stable release. With 142 bugs fixed and more than 750 commits since the 2.9.8 development version from mid-December, the focus has really been on getting the last details right.

All the new features we added for this release are instrumental in either improving how GIMP handles system resources, or helping you to report bugs and recover lost data. For a complete list of changes please see NEWS.

(Update): Thanks to Ell, the Windows installer (64-bit) is now available from the Development Downloads page.

New features

Dashboard dockable

A new Dashboard dock helps with monitoring GIMP’s resource usage to keep things in check, allowing you to make more educated decisions about various configuration options.

Dashboard dock

On the developer side, it also helps us in debugging and profiling various operations or parts of the interface, which is important in our constant quest to improve GIMP and GEGL, and detect which parts are the biggest bottlenecks.

The feature was contributed by Ell — one of GIMP’s most productive developers of late.

Debug dialog

What we consistently hear from users is that they have had zero GIMP crashes in years of using it. Still, as with any software, it is not exempt from bugs, and unfortunately sometimes might even crash.

While we encourage you to report all bugs you encounter, we do admit that producing useful information for a report can be difficult, and there is little we can do about a complaint that says “GIMP crashed. I don’t know what I was doing and I have no logs”.

So GIMP now ships with a built-in debugging system that gathers technical details on errors and crashes.

Debug dialog to simplify bug reporting

On development versions, the dialog will be raised on all kinds of errors (even minor ones). On stable releases, it will be raised only during crashes. The default behavior can be customized in Edit > Preferences > Debugging.

Note: you are still expected to write down contextual information when you report bugs, i.e. what you were doing when the bug happened. If possible, step-by-step reproduction procedures are a must.

The feature was contributed by Jehan Pages from the ZeMarmot project.

Image recovery after crash

With the debugging system in place to detect a crash, it was easy enough to add crash recovery. In case of a crash, GIMP will now attempt to back up all images with unsaved changes, then offer to reopen them the next time you start the application.

Crash recovery dialog

This is not a 100%-guaranteed procedure, since a program’s state during a crash is unstable by nature, so backing up images might not always succeed. What matters is that it will succeed sometimes, and this might rescue your unsaved work!

This feature was also contributed by the ZeMarmot project.

Shadows-Highlights

This new filter is now available in GIMP’s Colors menu thanks to a contribution by Thomas Manni, who created the GEGL operation of the same name.

Shadows-Highlights

The filter allows adjusting shadows and highlights in an image separately, with some options available. The implementation closely follows its counterpart in the darktable digital photography software.

Completed features

Layer masks on layer groups

Masks on layer groups are finally possible! This work, started years ago, has now been finalized by Ell. Group-layer masks work similarly to ordinary-layer masks, with the following considerations.

Mask on a layer group

The group’s mask size is the same as the group’s size (i.e., the bounding box of its children) at all times. When the group’s size changes, the mask is cropped to the new size — areas of the mask that fall outside of the new bounds are discarded, and newly added areas are filled with black (and hence are transparent by default).

JPEG 2000 support ported to OpenJPEG

Importing JPEG 2000 images was already supported through the Jasper library. But that library is now deprecated and slowly disappearing from most distributions, which is why we moved to OpenJPEG.

The port was initially started by Mukund Sivaraman. It was later completed by Darshan Kadu, under the FSF internship program, and mentored by Jehan who polished it up.

In particular, GIMP can now properly import JPEG 2000 images in any bit depth (anything over 32 bits per channel will be clamped to 32-bit, and bit depths that aren’t a multiple of 8 will be promoted; for instance, 12-bit ends up as 16-bit per channel in GIMP). Images in YCbCr and xvYCC color spaces will be converted to sRGB.

Imported JPEG 2000 file

JPEG 2000 codestream files are also supported. While color space can be detected for JPEG 2000 images, for codestream files you will be asked to specify the color space.

Linear workflow updates

Curves and Levels filters have been updated to have a switch between linear and perceptual (non-linear) modes, depending on which one you need.

Curves in linear mode

You can apply Levels in perceptual mode to a linear image, or Curves in linear mode to a perceptual image — whichever suits you best for the task at hand.

The same switch in the Histogram dock has been updated accordingly.

Screenshot and color-picking

On Linux, taking screenshots with the Freedesktop API has been implemented. This should become the preferred API in the hopefully near future, especially because it is meant to work inside sandboxed applications. For the time being, though, it is not given priority because it lacks some basic features and is not color-managed in any implementation we know of, which makes it a regression compared to other implementations.

On Windows, Simon Mueller has improved the screenshot plug-in to handle hardware-accelerated software and multi-monitor displays.

On macOS, color picking with the Color dock is now color-managed.

Metadata preferences

Settings were added for metadata export handling in the “Image Import & Export” page of the Preferences dialog. By default, the settings are checked, which means that GIMP will export all metadata, but you can uncheck them (since metadata can often contain a lot of sensitive private information).

Metadata preservation

Note that these options can also be changed per format (the “Load Defaults” and “Save Defaults” buttons), and of course per file during exporting, just like any other option.

Lock brush to view

GIMP finally gives you a choice of whether you want a brush locked to a certain zoom level and rotation angle of the canvas.

Lock brush to view demo

The option is available for all painting tools that use a brush except for the MyPaint Brush tool.

Missing icons

8 new icons were added by Alexandre Prokoudine, Aryeom Han (ZeMarmot film director), and Ell.

Various GUI refining

Many last-minute details have been handled, such as more descriptive names for the composite modes, color channel labels shortened to their conventional 1- or 2-letter abbreviations, rearranged color models in the Color dock, and much more!

Translations

String freeze has started and GIMP has received updates from: Basque, Brazilian Portuguese, Catalan, Chinese (Taiwan), Danish, Esperanto, French, German, Greek, Hungarian, Icelandic, Italian, Japanese, Latvian, Polish, Russian, Serbian, Slovenian, Spanish, Swedish, Turkish.

The Windows installer is now also localized with gettext.

GEGL changes

The GEGL library now used by GIMP for all image processing has also received numerous updates.

Most importantly, all scaling for display is now done on linear data. This produces more accurate scaled-down thumbnails and more accurate mipmap computations. GIMP 2.10.0-RC1 doesn’t use mipmaps yet, but it will further down the line.

More work has been done to improve performance of GEGL across many parts of the source code. Improvements to pixel data fetching and setting functions have led to performance boosts across many GEGL operations (in particular, Gaussian blur), and for some performance-critical display cases, performance should have improved two- to three-fold since the release in December 2017.

There are 5 new operations in the workshop now. Among those, enlarge and inpaint are part of the new experimental inpainting framework by Øyvind Kolås, domain transform by Felipe Einsfeld Kersting is an edge-preserving smoothing filter, and recursive-transform is Ell’s take on the famous Droste effect.

Helping GIMP

We’d like to remind you that GIMP is free software. Therefore the first way to help is to contribute your time. You can report bugs and send patches, whether they are code patches, icons, brushes, documentation, tutorials, translations, etc.

In this release for instance, about 15% of changes were done by non-regular contributors.

You can also contribute tutorials or news for our website, as Pat David explained so well in his talk Why the GIMP Team Obviously Hates You. Pat David is himself one of the important GIMP contributors on the community side (he also created our current website back in 2015).

Last but not least, we’d like to remind you that you can contribute financially in a few ways. You can donate to the project itself, or you can support the core team developers who raise funds individually, in particular Øyvind Kolås for his work on GEGL, GIMP’s graphics engine, and the ZeMarmot project (Aryeom & Jehan) for their work on GIMP itself (about 35% of this release was contributed by their project).

What’s Next

This is the last stretch before the final GIMP 2.10.0 release. There are a few more changes planned before we wrap it up. For instance, Americo Gobbo is working (with minor help from ZeMarmot) on improving our default brush set. His work will be available either in another release candidate (if we make another one) or in the final release.

We are currently 12 blocker bugs away from making the final release. We’ll do our best to make it quick!

March 23, 2018

LVFS Mailing List

I have created a new low-volume lvfs-announce mailing list for the Linux Vendor Firmware Service, which will only be used to make announcements about new features and planned downtime. If you are interested in what’s happening with the LVFS you can subscribe here. If you need to contact me about anything LVFS-related, please continue to email me (not the mailing list) as normal.

The Great Gatsby and onboarding new contributors

I am re-reading “The Great Gatsby” – my high-school son is studying it in English, and I would like to be able to discuss it with him with the book fresh in my mind – and noticed this passage in the first chapter which really resonated with me.

…I went out to the country alone. I had a dog — at least I had him for a few days until he ran away — and an old Dodge and a Finnish woman, who made my bed and cooked breakfast and muttered Finnish wisdom to herself over the electric stove.

It was lonely for a day or so until one morning some man, more recently arrived than I, stopped me on the road.

“How do you get to West Egg village?” he asked helplessly.

I told him. And as I walked on I was lonely no longer. I was a guide, a pathfinder, an original settler. He had casually conferred on me the freedom of the neighborhood.

In particular, I think this is exactly how people feel the first time they answer a question in an open source community. A switch is flipped, a Rubicon is crossed. They are no longer new, and now they are in a space which belongs, at least in part, to them.

March 22, 2018

Krita 4.0.0 Released!

Today we’re releasing Krita 4.0! A major release with major new features and improvements: improved vector tools, SVG support, a new text tool, Python scripting and much, much, much more!

The new splash screen for Krita 4.0, created by Tyson Tan, shows Kiki among the plum blossoms. We had wanted to release Krita 4 last year already, but trials and tribulations caused considerable delays. But, like the plum blossoms that often bloom most vibrantly when it’s coldest, we have overcome, and Krita 4 is now ready!

Highlights

We’ve again created a long, long page with all the details of everything that’s new and improved in Krita 4.

See the full release notes with all changes!

We already mentioned SVG support, a new text tool and Python scripting, so here are some other highlights:

  • Masked brushes: add a mask to your brush tip for a more lively effect. This opens up some really cool possibilities!

  • New brush presets! We overhauled the entire brush set for Krita 4. Brush presets are now packaged as a bundle, too. And Krita 3’s brush set is available as well, it’s just disabled by default.

Known issues

Krita 4 is a huge step for the Krita project, as big as, if not bigger than the 3.0 release. There are some known issues and caveats:

  • Krita 4 uses SVG for vector layers. This means that Krita 3 files with vector layers may not be loaded entirely correctly. Keep backups!
  • Krita 4’s new text tool is still limited compared to what we wanted to implement. We focused on creating a reliable base and making the text tool work reliably for just one, simple use-case: creating text for comic book balloons, and we’ll continue working on improving and extending the text tool.
  • We have a new binary build factory for Windows and Linux. Unfortunately, we don’t have 32-bit Windows builds at this point in time.
  • Because macOS has a very low limit on shared memory segments, G’Mic cannot work on macOS at the moment.
  • The Reference Images Docker has been removed. It was too easy to crash it if invalid image files were present. In Krita 4.1 it will be replaced by a new reference images tool.

Download

Windows

Note for Windows users: if you encounter crashes, please follow these instructions to use the debug symbols so we can figure out where Krita crashes.

At this moment, we do not have 32-bit Windows builds available.

Note that on Windows 7 and 8 you need to install the Universal C Runtime separately to enable Python scripting. See the manual.

Linux

At the moment, the appimage does not have working translations.

(If, for some reason, Firefox thinks it needs to load this as text: to download, right-click on the link.)

You can also use the Krita Lime PPA to install Krita 4.0.0 on Ubuntu and derivatives. We are working on an updated snap.
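For reference, installing from a PPA on Ubuntu follows the usual pattern. A sketch, assuming the Krita Lime PPA is still published as ppa:kritalime/ppa (check the linked PPA page for the authoritative name):

$ sudo add-apt-repository ppa:kritalime/ppa
$ sudo apt update
$ sudo apt install krita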

OSX

Note: the gmic-qt and python plugins are not available on macOS.

Source code

md5sums

For all downloads:

Key

The Linux appimage and the source tarball are signed. You can retrieve the public key over https here: 0x58b9596c722ea3bd.asc. The signatures are here.
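Verification is the standard GPG detached-signature routine. A minimal sketch; the appimage filename below is a placeholder, so substitute the file you actually downloaded along with its matching .sig:

$ gpg --import 0x58b9596c722ea3bd.asc
$ gpg --verify krita-4.0.0-x86_64.appimage.sig krita-4.0.0-x86_64.appimage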

Support Krita

Krita is a free and open source project. Please consider supporting the project with donations or by buying training videos or the artbook! With your support, we can keep the core team working on Krita full-time.

Artwork by Ramon Miranda

March 21, 2018

Builder Nightly

One of the great aspects of the Flatpak model, apart from separating apps from the OS, is that you can have multiple versions of the same app installed concurrently. You can rely on the stable release while trying things out in the development or nightly version. This creates a need to easily tell the two versions apart when launching them from the shell.

I think Mozilla has set a great precedent on how to manage multiple version identities.

Thus came the desire to spend a couple of nights working on the Builder nightly app icon. While we’ve generally tried to simplify app icons to match what’s happening on the mobile platforms and trickling down to the older desktop OSes, I’ve decided to retain the 3D workflow for the Builder icon. Mainly because I want to get better at it, but also because it’s a perfect platform for kit bashing.

For Builder specifically I’ve identified some properties I think should describe the ‘nightly’ icon:

  • Dark (nightly)
  • Modern (new stuff)
  • Not as polished – dangling cables, open panels, dirty
  • Unstable / indicating it can move (wheels, legs …)

Next up is taking a stab at a few more apps, and then it’s time to develop some guidelines for these nightly app icons and emphasize them with some Shell styling. Overlaid emblems haven’t particularly worked in the past, but perhaps some tag style for the label could do.

March 19, 2018

Interview with Jennifer

Could you tell us something about yourself?

I’m almost 35 years old, from a city called Passo Fundo, state of Rio Grande do Sul, Brazil. I like cats, cartoons and rock and roll. 1994 was the year when I started to have some interest in drawing. I took up drawing just for fun, and sometimes to let my soul talk. But I can say that the digital art that I started practicing last year has helped me get rid of a recent depression.

Do you paint professionally, as a hobby artist, or both?

As a hobby. At least for now.

What genre(s) do you work in?

I usually draw cartoons. But I also like painting nature and fantasy elements.

Whose work inspires you most — who are your role models as an artist?

Most times when I draw, I don’t look up to a specific artist. I search random images on the internet, or the painting comes from my own mind. I think any kind of art should come from the artist’s inner soul.

How and when did you get to try digital painting for the first time?

It occurred last year, in May, I guess. I had no job, but I had a nasty depression. Then my husband said he would like to learn how to draw and start working with that. That was when I started to draw again. Yes, I had stopped drawing, limiting myself to drawing just when I had nothing more to do. Then we got an online course from Ivan Quirino, and here I am, less than a year later, doing all kinds of digital painting.

What makes you choose digital over traditional painting?

The practicality. It is really hard to fix mistakes when you draw the traditional way. In digital painting, you can redo as many times as necessary.

How did you find out about Krita?

On YouTube or on some blog. I can’t tell for sure.

What was your first impression?

When I used Krita for the first time I already knew most of the tools, so it was easy to use. But I needed to learn more, then I watched a video that explained the basic tools and method to paint. I thought then that Krita was a good tool for painting. Today I can tell it’s a great tool for digital artists. My personal opinion: Krita is the best and I really can’t use a different program.

What do you love about Krita?

The quick access to the tools I need. The ease of working with it. I really like the function that allows you to paint just the line art. It’s awesome.

What do you think needs improvement in Krita? Is there anything that really annoys you?

Nothing! As I said before: I really enjoy working with Krita and I recommend it to anyone who is choosing this path of digital art.

What sets Krita apart from the other tools that you use?

The brushes, the way Krita works with layers (for example: if you have a line on the top layer and you paint a background on the layer below, you won’t paint over what is drawn on the top layer). I don’t know about the functionality of all painting software, but I think this is pretty cool.

If you had to pick one favourite of all your work done in Krita so far, what would it be, and why?

I like my latest work. At the time I made the work, I hadn’t thought about a name yet. But looking at it now, I could call it “The peace of the mermaid”. I think it fits well.

What techniques and brushes did you use in it?

Well, I’m not good with names of techniques, but I used the default brushes and some of those by David Revoy (airbrush, fill brush, wet brushes, some ink for the little details, the customized brush “LJF water brush 3”). I also used effect layers. I started with the water base, just filling the area with blue and white tones. Then I painted the sky, mixing tones of blue with white. The sun was made with an airbrush, mixing yellow and white. After that, I used the customized brush to do the details of the water, always mixing the colors to get the vision that I was looking for. Then I painted the blocks of sand, leaving the details, done with “splat_texture – Marcas”, for the end. At this point, I could start drawing the mermaid. I started by doing a shadow mermaid; after that, I put in the colors, lights, shadows and details. The effect layers were used to get more luminance on specific elements (in Portuguese: Luz viva, used on the mermaid and the starfish; Luz suave, used to get the luminance on the full scene; Desvio linear, to get the effect on the water light).

Where can people see more of your work?

For the moment I have:
Blog: https://jbh-digitalart.blogspot.com.br/
Facebook page: https://www.facebook.com/JBHdigitalart/?ref=bookmarks

Anything else you’d like to share?

I just wanted to thank the people that work to improve Krita, an amazing box of tools for digital painting!

March 12, 2018

A follow up on Fedora 28’s background art

A quick post – I have a 4k higher-quality render of one of the Fedora 28 background candidates mentioned in a recent post about the Fedora 28 background design process. Click on the image below to grab it if you would like to try / test it and hopefully give some feedback on it:

3D render of the Fedora logo in blue fiber optic light strands against a black background. Image is angled with some blur and bokeh effects. the angling of this version is such that it comes from below and looks up.

One of the suggestions I’ve received from your feedback is to try to vary the height between the ‘f’ and the infinity symbol so they stand out. I’m hoping to find some time this week to figure out how exactly to do that (I’m a Blender newbie 😳), but if you want to try your hand, the Blender source file is available.

March 10, 2018

Intel Galileo v2 Linux Basics

[Intel Galileo Gen2 by Mwilde2 on Wikimedia commons] Our makerspace got a donation of a bunch of Galileo gen2 boards from Intel (image from Mwilde2 on Wikimedia commons).

The Galileo line has been discontinued, so there's no support and no community, but in theory they're fairly interesting boards. You can use a Galileo in two ways. First, you can treat it like an Arduino, once you've used the Arduino IDE to download a Galileo hardware definition (they're not ATmega chips). They even have Arduino-format headers so you can plug in an Arduino shield. That works okay (once you figure out that you need to download the Galileo v2 hardware definitions, not the regular Galileo). But they run Linux under the hood, so you can also use them as a single-board Linux computer.

Serial Cable

The first question is how to talk to the board. The documentation is terrible, and web searches aren't much help because these boards were never terribly popular. Worse, the v1 boards seem to have been more widely adopted than the v2 boards, so a lot of what you find on the web doesn't apply to v2. For instance, the v1 required a special serial cable that used a headphone jack as its connector.

Some of the Intel documentation talks about how you can load a special Arduino sketch that then disables the Arduino bootloader and instead lets you use the USB cable as a serial monitor. That made me nervous: once you load that sketch, Arduino mode no longer works until you run a command on Linux to start it up again. So if the sketch doesn't work, you may have no way to talk to the Galileo. Given the state of the documentation I'd already struggled with for Arduino mode, it didn't sound like a good gamble. I thought a real serial cable sounded like a better option.

Of course, the Galileo documentation doesn't tell you what needs to plug in where for a serial cable. The board does have a standard 6-pin FTDI header next to the ethernet jack, and the labels on the pins seemed to correspond to the standard pinout on my Adafruit FTDI Friend: Gnd, CTS, VCC, TX, RX, RTS. So I tried that first, using GNU screen to connect to it from Linux just like I would a Raspberry Pi with a serial cable:

screen /dev/ttyUSB0 115200

Powered up the Galileo and sure enough, I got boot messages and was able to log in as root with no password. It annoyingly forces orange text on a black background, making it especially hard to read on a light-background terminal, but hey, it's a start.

Later I tried a Raspberry Pi serial cable, with just RX (green), TX (white) and Gnd (black) -- don't use the red VCC wire since the Galileo is already getting power from its own power brick -- and that worked too. The Galileo doesn't actually need CTS or RTS. So that's good: two easy ways to talk to the board without buying specialized hardware. Funny they didn't bother to mention it in the docs.

Blinking an LED from the Command Line

Once connected, how do you do anything? Most of the Intel tutorials on Linux are useless, devoting most of their space to things like how to run Putty on Windows and no space at all to how to talk to pins. But I finally found a discussion thread with a Python example for Galileo. That's not immediately helpful since the built-in Linux doesn't have python installed (nor gcc, natch). Fortunately, the Python example used files in /sys rather than a dedicated Python library; we can access /sys files just as well from the shell.

Of course, the first task is to blink an LED on pin 13. That apparently corresponds to GPIO 7 (what are the other Arduino/GPIO correspondences? I haven't found a reference for that yet). So you need to export that pin (which creates /sys/class/gpio/gpio7) and set its direction to out. But that's not enough: the pin still doesn't turn on when you echo 1 > /sys/class/gpio/gpio7/value. Why not? I don't know, but the Python script exports three other pins -- 46, 30, and 31 -- and echoes 0 to 30 and 31. (It does this without first setting their directions to out, and if you try that, you'll get an error, so I'm not convinced the Python script presented as the "Correct answer" would actually have worked. Be warned.)

Anyway, I ended up with these shell lines as preparation before the Galileo can actually blink:

# echo 7 > /sys/class/gpio/export
# echo out > /sys/class/gpio/gpio7/direction

# echo 46 > /sys/class/gpio/export
# echo 30 > /sys/class/gpio/export
# echo 31 > /sys/class/gpio/export

# echo out > /sys/class/gpio/gpio30/direction
# echo out > /sys/class/gpio/gpio31/direction
# echo 0 > /sys/class/gpio/gpio30/value
# echo 0 > /sys/class/gpio/gpio31/value

And now, finally, you can control the LED on pin 13 (GPIO 7):

# echo 1 > /sys/class/gpio/gpio7/value
# echo 0 > /sys/class/gpio/gpio7/value

or run a blink loop:

# while /bin/true; do
> echo 1 > /sys/class/gpio/gpio7/value
> sleep 1
> echo 0 > /sys/class/gpio/gpio7/value
> sleep 1
> done
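If you'd rather not retype all that each session, the whole dance collapses into one small script. This is just a consolidation of the steps worked out above (including the undocumented GPIO 46/30/31 incantation); none of it comes from official docs:

#!/bin/sh
# Blink the LED on Arduino pin 13 (GPIO 7) on a Galileo gen2.
# Re-exporting an already-exported pin fails harmlessly, hence 2>/dev/null.
for pin in 7 46 30 31; do
    echo $pin > /sys/class/gpio/export 2>/dev/null
done
echo out > /sys/class/gpio/gpio7/direction
echo out > /sys/class/gpio/gpio30/direction
echo out > /sys/class/gpio/gpio31/direction
echo 0 > /sys/class/gpio/gpio30/value
echo 0 > /sys/class/gpio/gpio31/value

while /bin/true; do
    echo 1 > /sys/class/gpio/gpio7/value
    sleep 1
    echo 0 > /sys/class/gpio/gpio7/value
    sleep 1
done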

Searching Fruitlessly for a "Real" Linux Image

All the Galileo documentation is emphatic that you should download a Linux distro and burn it to an SD card rather than using the Yocto that comes preinstalled. The preinstalled Linux apparently has no persistent storage, so not only does it not save your Linux programs, it doesn't even remember the current Arduino sketch. And it has no programming languages and only a rudimentary busybox shell. So finding and downloading a Linux distro was the next step.

Unfortunately, that mostly led to dead ends. All the official Intel docs describe different download filenames, and they all point to generic download pages that no longer include any of the filenames mentioned. Apparently Intel changed the name for its Galileo images frequently and never updated its documentation.

After forty-five minutes of searching and clicking around, I eventually found my way to Intel® IoT Developer Kit Installer Files, which includes sizable downloads with names like

  • iss-iot-linux_12-09-16.tar.bz2 (324.07 MB),
  • intel-iot-yocto.tar.xz (147.53 MB),
  • intel-iot-wrs-pulsar-64.tar.xz (283.86 MB),
  • intel-iot-wrs-32.tar.xz (386.16 MB), and
  • intel-iot-ubuntu.tar.xz (209.44 MB)

From the size, I suspect those are all Linux images. But what are they and how do they differ? Do any of them still have working repositories? Which ones come with Python, with gcc, with GPIO support, with useful development libraries? Do any of them get security updates?

As far as I can tell, the only way to tell is to download each image, burn it to a card, boot from it, then explore the filesystem trying to figure out what distro it is and how to try updating it.

But by this time I'd wasted three hours and gotten no further than the shell commands to blink a single LED, and I ran out of enthusiasm. I mean, I could spend five more hours on this, try several of the Linux images, and see which one works best. Or I could spend $10 on a Raspberry Pi Zero W that has abundant documentation, libraries, books, and community howtos. Plus wi-fi, bluetooth and HDMI, none of which the Galileo has.

Arduino and Linux Living Together

So that's as far as I've gone. But I do want to note one useful thing I stumbled upon while searching for information about Linux distributions:

Starting Arduino sketch from Linux terminal shows how to run an Arduino sketch (assuming it's already compiled) from Linux:

sketch.elf /dev/ttyGS0 &

It's a fairly cool option to have. Maybe one of these days, I'll pick one of the many available distros and try it.

March 08, 2018

Code Quest Campaign: A Success Story

A software project is a living thing, and every few years it needs to take a leap. A leap for survival, for innovation, to respond and adapt to new trends and technologies, to lay the foundation for future trends. This is a risky endeavour. Ambitious targets tend to significantly slow down development momentum due to complex engineering decisions, disagreements between team members or a lack of outward communication to the user community.

Blender has achieved this feat once before during its existence – the well-known 2.5 project in 2010 – thanks to the relentless leadership of Ton Roosendaal and a tight-knit team of developers and power users. After nearly 8 years of gradual improvements, during which Blender’s user base more than quadrupled, it was time for another jump. Enter Blender 2.8.

Blender 2.8

The main goal of Blender 2.8 is to further improve support for diverse workflows, complemented by features such as a high quality PBR viewport, 2D animation tools, advanced asset management and a powerful animation system. While Blender is often regarded as an oddity, its flexibility is being discovered and appreciated by a growing audience.

After over one year of work, the project needed a final sprint to deliver the first beta of Blender 2.8. To achieve this, the idea of a “Code Quest” was proposed: to bring together nearly all of the core developers for three months in one location, in the Blender Institute in Amsterdam.

This period would enable the team to tackle fundamental engineering issues, as well as to more efficiently focus on interface design and usability.

Code Quest Launch

How to successfully fund an Open Source project

The funding of the Code Quest, estimated at 200K USD, has been divided between four parties.

The first was the Blender Foundation, the non-profit entity which coordinates worldwide developer outreach and runs the official online platforms for the project. The Blender Foundation, via the Blender Development Fund, awards grants to independent developers.

The second was the Blender Institute, the Amsterdam-based Open Content powerhouse, which provided the initial funding for the campaign, public relations, communications and logistics. The Blender Institute employs several of the Blender core developers and funds part of the Code Quest costs via the Blender Cloud, the open production and training platform.

The remaining two parties were industry sponsors and the Blender user community, together expected to cover nearly half of the total budget via sponsorships and a crowdfunding campaign.

A rocket ride

With the animation studio Tangent Animation and Aleph Objects, makers of the Lulzbot 3D printer, signing up as sponsors, industry support started well and kept improving, with several other Blender-based businesses joining the effort.

However, the biggest challenge was to involve the user community. After reviewing several strategies (including using popular crowdfunding platforms), the Blender Institute team decided to focus the entire campaign on selling a memorable reward token: a space-rocket-shaped USB drive. Each rocket would cost 39 USD, with the price rising by 10 USD after 3 weeks. Rockets would be produced right after the campaign, to give the immediate reward of having supported an ongoing project. The target was set at selling a minimum of 1,000 rockets.

Code Quest Rocket

And then the user community pulled off something truly outstanding. The goal was achieved in just 4 days, which confirmed the official start in April and led to a new target of 2,500 rockets. This stretch goal was set to expand the Code Quest team, and it too was achieved in less than 3 weeks.

Thanks to this additional support, almost 100K USD was raised (2,500 rockets at 39-49 USD each works out to roughly that), an amount comparable to the historic campaign that made Blender become Open Source back in 2002.

Code Quest months

The Code Quest is an unprecedented opportunity to document the development process in an open and transparent way, building up excitement in anticipation of Blender’s beta release, due in July 2018. The Code Quest will be frequently covered on the official code.blender.org blog, via video logs, live streams and demos.

At the same time, two high-profile Blender Open Movies that are in production will be the ultimate stress test for the upcoming release. These are Hero, the first short film bringing traditional animation into a three-dimensional space, and Spring, a poetic visual journey that will raise the bar set by previous Blender Open Movies.

Blender 2.8 carries a lot of expectations. The Code Quest campaign has proven, once again, that the community is there to make it happen!

Francesco Siddi

Code Quest Landing

What 3 Words?

I dig online maps like everyone else, but sharing a location is somewhat clumsy. The W3W service addresses the issue by chunking up the whole world into 3x3m squares and assigning each a name (supposedly around 57 trillion, which checks out: Earth’s surface is roughly 510 trillion square meters, and dividing by 9 gives about 57 trillion squares). Sometimes it’s a bit of a tongue twister, but most of the time it’s fun to say to meet at a “massive message chuckle” for some fpv flying. I’m really surprised this didn’t take off.

March 07, 2018

Jupyter lab with an Octave kernel

Octave is a good choice for getting some serious computing done (it’s largely an open-source Matlab). But for interactive exploration, it feels a bit awkward. If you’ve done any data science work lately, you’ll undoubtedly have used the fantastic Jupyter.

There’s a way to combine both and have the great UI of Jupyter with the processing core of Octave:

Jupyter lab with an Octave kernel

I’ve built a variant of the standard Jupyter Docker images that uses Octave as a kernel, to make it trivial to run this combination. You can find it here.
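If you’d rather add an Octave kernel to an existing Jupyter install instead of using the Docker image, the Calysto octave_kernel package is one alternative route (not necessarily what the image above uses; it assumes Octave is already on your PATH):

$ pip install octave_kernel
$ python -m octave_kernel install   # register the kernelspec, if needed
$ jupyter lab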



March 06, 2018

Fedora 28’s Desktop Background Design

Fedora 28 (F28) is slated to release in May 2018. On the Fedora Design Team, we’ve been thinking about the default background wallpaper for F28 since November. Let’s walk through the Fedora 28 background process thus far as a sort of pre-mortem; we’d love your feedback on where we’ve ended up.

November: Inspiration

For the past 3 releases, we have chosen a sequential letter of the alphabet and come up with a list of scientists / mathematicians / technologists to serve as inspiration for the desktop background’s visual concept:

F25's wallpaper - an almost floral blue gradiated blade design, F26 a black tree line reflected in water against a wintry white landscape (the trees + reflection resemble a sound wave), F27 a blue and purple gradiated underwater scene with several jellyfish - long tendrils drifting and twisting - floating up the right side of the image

Backgrounds from Fedora 25, 26, and 27. 25’s inspiration was Archimedes, and the visual concept was an organic Archimedes’ screw. F26’s inspiration was Alexander Graham Bell, and the visual concept was a sound wave of a voice saying “Fedora.” F27’s inspiration was underwater researcher Jacques Cousteau, and the inspiration was transparency in the form of jellyfish.

Gnokii kicked off the process in November by starting the list of D scientists for F28 and holding a vote on the team: we chose Emily Duncan, an early technologist who invented several types of banking calculators.

December: First concepts

We had a meeting in IRC (which I seem to have forgotten to run meetbot on 🙁 ) where we brainstormed different ways to riff off of Emily Duncan’s work as an inspiration. One of the early things we looked at were some of the illustrations from one of Duncan’s patents:

Diagram etchings from 1903 Duncan calculator patent. Center is a cylindrical object covered in a grid with numbers and various mechanical bits

Gnokii started drafting some conceptual mockups, starting with a rough visualization of an Enigma machine and moving to visuals of electric wires and gears:

3D perspective alpha cryptography keys scrolling vertically in 3D space
wires with bright sparks traveling along them atop a gear texture, black background
wires with bright sparks traveling along them atop a gear texture, blue background

During a regular triage meeting, the team met in IRC and we discussed the mockups and had some critique and suggestions which we shared in the ticket.

February: Solidifying Concept

After the holidays, we got back to it with the beta freeze deadline in mind. Note, we don’t have alpha releases in Fedora anymore, which means we need to have more polish in our initial wallpaper than we had traditionally in order to get useful feedback for the final wallpaper. This started with a regular triage meeting where the F28 wallpaper ticket came up. We brainstormed a lot of ideas and went through a lot of different and of-the-moment visual styles. Maria shared a link to a Behance article on 2018 design trends and it seemed 3D styles in a lot of different ways are the trend of the moment. Some works that particularly inspired us:

Rose Pilkington’s Soft Bodies for Electric Objects

Gently-textured pastel hues of bright cyan, orange, yellow, and pink in a softly gradiated set of flat but almost 3D like rounded abstract shapes

Ari Weinkle’s Wormholes

Almost psychedelic, cavelike, wavy environment made with cascading 3D ridges, orange and purple hued palette

Ari Weinkle’s Paint waves

Vibrant, rainbow hued, gracefully curving and spiraling super thick sculpted 3D paint with a ridged texture

Taking these inspirations as directions, terezahl and I both started on another round of mockups.

Terezahl created mockups, one of which appears to be inspired by Pilkington’s work, based on the concept of 28 being a triangular number:

On top, a black to greenish blue shaded abstract composition with a floating triangle floating in front of a background with an inverse gradient. On bottom, rounded abstract shapes in purple, blue, and cyan jewel tones.

I was inspired by Weinkle’s paint waves, but couldn’t figure out a technique to approximate it in Blender. Conceptually, I wanted to take gnokii’s wires with data ‘lights’ travelling down the wires, and have those lights travel down the ridges in an abstract swirled wave. I figured it would probably take some work with Blender’s particle system, since the mass of a character’s hair is typically created that way. I had never used Blender’s particle system before, so I took a tutorial that seemed the closest to the effect I wanted – a Blender Guru tutorial by Andrew Price:

As per the feedback I received from gnokii, the end result was too close to the output you’d expect from such a tutorial. I wasn’t able to achieve a more solid mass than the fiber optic strands, although they visually represented the ‘data light’ concept I was going for:

Sparkling blue-hued fiber optic threads against a black background, their ends glowing light blue, with some blurring and bokeh effects - 3D rendered

Time was short, so we ended up deciding to ship this mockup – as close to the tutorial as it was – in the F28 beta to see what kind of feedback we got on the look. Thankfully Luya was able to package it up for us with some time to spare! So far, the preliminary feedback we’ve gotten from folks on social media and/or who’ve seen it via Luya’s package for beta has been positive.

March: Finalization

Since the time-consuming work of building the platform in Blender from the tutorial is done, I’ve started playing around with the idea to see what kind of visuals we could get. The obvious, of course, is to work the Fedora logo into it. Fedora 26’s wallpaper had a sound wave depicting the vocalization of the word “Fedora” – I was trying to think of how to have the fiber optic ‘data’ show the same. Perhaps this is too literal. Anyhow, here are the two crowd favorites thus far:

#3

3D render of the Fedora logo in blue fiber optic light strands against a black background. Image is angled with some blur and bokeh effects

#9

3D render of the Fedora logo in blue fiber optic light strands against a black background. Image is angled with some blur and bokeh effects. the angling of this version is such that it comes from below and looks up.

we need your help!

Anyway, this is where you come in. Take a look at these. With the system built in Blender, we have a lot of things we can tweak easily – the angles, the lens / bokeh / focus, the shape / path of the strands (like how the latest renderings follow the Fedora f/infinity), the shape / type of object the strands are made of (right now long / narrow cylinders.) These kinds of tweaks are quick. Any ideas you have on a path forward here, or just simple feedback, would be much appreciated. 🙂

March 05, 2018

Interview with Johan Brits

Could you tell us something about yourself?

I’m from South Africa. I’ve been drawing my whole life, mostly with graphite pencil, but when I discovered digital drawing I was hooked. I started out just using a standard desktop mouse and GIMP and got kind of good at it. Since then I have improved a lot, and I plan to keep improving and creating new art for as long as I can.

Do you paint professionally, as a hobby artist, or both?

I paint as a hobby, but I sometimes use the skills I’ve learned from painting in a professional capacity when I need to edit or create images.

What genre(s) do you work in?

I don’t really have a specific genre besides perhaps drawing in a more realistic style. I like to challenge myself to draw new things. I usually paint something with life in it like creatures or people.

Whose work inspires you most — who are your role models as an artist?

Jazza from the YouTube channel Draw with Jazza. Although he mostly does traditional art, his ability to draw amazing things from random prompts really inspires me. There are also amazing artists on ArtStation.com, and I only need to scroll through a few images before I feel the urge to draw something myself.

How and when did you get to try digital painting for the first time?

I found GIMP on a Linux computer in college and I played around with some of the filters. I was amazed at what was possible with a few simple steps. After browsing around on YouTube I saw some artists drawing pictures from scratch in Photoshop. Because I already knew how to draw with pencil I wanted to give it a try using free software and quickly fell in love with it.

What makes you choose digital over traditional painting?

So many things: the ease of changing things when you are already far into the drawing, the fact that you can undo mistakes, and, best of all, it’s not as messy. I also love computers, so drawing digitally is like having the best of both worlds.

How did you find out about Krita?

A friend told me about it after trying it with his Wacom tablet. I am a software developer so any new software is like a new toy for me. I checked out the website and what other people had created using it and I was intrigued.

What was your first impression?

The interface was so much more modern than GIMP, and I’m a firm believer that the interface makes a big difference in first impressions. I played around with it a bit and quickly saw that it had all the features I use with GIMP and more.

What do you love about Krita?

I love the interface. I also like the fact that you can do animations with it. I have only started dabbling in animation but so far I am fascinated by it. I also love how responsive Krita is and the fact that it supports my tablet, which GIMP did not. And finally I love that it is still being improved upon by the developers. It means any issues I might encounter can still be solved.

What do you think needs improvement in Krita? Is there anything that really annoys you?

Having spent many hours drawing in Krita, I can honestly say there is nothing that is really annoying. The occasional odd thing happens, as with any drawing software, but nothing I haven’t been able to find a workaround for.

What sets Krita apart from the other tools that you use?

The number of things you can do with it, all neatly wrapped up in a beautiful design. Also the fact that it is free but still has the quality of paid software.

If you had to pick one favourite of all your work done in Krita so far, what would it be, and why?

Usually my newest drawing is my favourite, but the Ferret mount I drew really stands out for me. I pushed myself to create a sense of depth and a scene that I hadn’t been able to achieve in any of my previous drawings. I learned a lot from drawing it and it was a lot of fun to do.

What techniques and brushes did you use in it?

Some of the techniques I used were blurring the foreground and background and adding a bright light source to create the impression of depth. I used the default brushes that come with Krita to create everything from the fur to the texture of the dirt.

Where can people see more of your work?

YouTube: https://www.youtube.com/johanjbrits
Facebook: https://www.facebook.com/JohanBritsArt/
ArtStation: https://www.artstation.com/britsie_1
Instagram: https://www.instagram.com/britsie_1
DeviantArt: http://britsie1.deviantart.com/
Twitter: https://twitter.com/britsie_1

Anything else you’d like to share?

I would just like to thank the team working on Krita for the amazing job they’ve done in creating a truly awesome drawing application.

March 02, 2018

FreeCAD Arch development news - February 2018

Hi all, time for our monthly development update. This month again, no new features have landed in the FreeCAD codebase, because we are still in "feature freeze" mode: nothing that might break something can be added to the source code, only bug fixes. We hoped to release version 0.17 in February, but,...

March 01, 2018

Re-enabling PHP when a Debian system upgrade disables it

I updated my Debian Testing system via apt-get upgrade, as one does during the normal course of running a Debian system. The next time I went to a locally hosted website, I discovered PHP didn't work. One of my websites gave an error, due to a directive in .htaccess; another one presented pages that were full of PHP code interspersed with the HTML of the page. Ick!

In theory, Debian updates aren't supposed to change configuration files without asking first, but in practice, silent and unexpected Apache bustage is fairly common. For this one, though, I couldn't find anything in a web search, so maybe this writeup will help someone else.

The problem turned out to be that /etc/apache2/mods-available/ includes four files:

$ ls /etc/apache2/mods-available/*php*
/etc/apache2/mods-available/php7.0.conf
/etc/apache2/mods-available/php7.0.load
/etc/apache2/mods-available/php7.2.conf
/etc/apache2/mods-available/php7.2.load

The appropriate files are supposed to be linked from there into /etc/apache2/mods-enabled. Presumably, I previously had a link to ../mods-available/php7.0.* (or perhaps 7.1?); the upgrade to PHP 7.2 must have removed that existing link without replacing it with a link to the new ../mods-available/php7.2.*.
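To see the mismatch at a glance, you can compare the two directories; here's a minimal sketch in Python (nothing below is Debian-specific beyond those standard Apache paths):

from pathlib import Path

# List the PHP modules Apache knows about vs. those actually enabled.
available = sorted(p.name for p in Path("/etc/apache2/mods-available").glob("php*"))
enabled = sorted(p.name for p in Path("/etc/apache2/mods-enabled").glob("php*"))
print("available:", available)
print("enabled:  ", enabled)
if not enabled:
    print("No PHP module is enabled -- re-enable the newest one with a2enmod.")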

The solution is to restore those links, either with ln -s or with the approved apache2 commands (as root, of course):

# a2enmod php7.2
# systemctl restart apache2

Whew! Easy fix, but it took a while to realize what was broken, and it would have been nice if it hadn't broken in the first place. Why is the link version-specific anyway? Why isn't there a file called /etc/apache2/mods-available/php.* for the latest version? Does PHP really change enough between minor releases to break websites? Doesn't it break a website more to disable PHP entirely than to swap in a newer version of it?

February 28, 2018

Searching for hardware on the LVFS

The LVFS acquired another new feature today.

You can now search for firmware and hardware vendors — but the algorithm is still very much WIP and we need some real searches from real users. If you have a spare 10 seconds, please search for your hardware on the LVFS. I’ll be fixing up the algorithm as we find problems. I’ll also be using the search data to work out what other vendors we need to reach out to. Comments welcome.

February 26, 2018

Everything is Better in Slow Motion

Powerslidin’ Sunday from jimmac on Vimeo.

Superb weather over the weekend, despite the thermometer dipping below 10°C.

February 24, 2018

G’MIC 2.2: New features and filters!

The IMAGE team of the GREYC laboratory (UMR CNRS 6072, Caen, France) is pleased to announce the release of version 2.2 of G’MIC, its open-source, generic, and extensible framework for image processing. As we have done in the past, we take this opportunity to look at the most notable features added since the previous major release (2.0, last June).



Note: This is a translation of an original article, in French, published on Linuxfr.

1. Context and recent evolutions

G’MIC is free and open-source software, developed since August 2008 (and distributed under the CeCILL license) by folks in the IMAGE team at the GREYC, a French public research laboratory located in Caen and supervised by three institutions: the CNRS, the University of Caen, and the ENSICAEN engineering school. This team is made up of researchers and lecturers specialized in algorithms and mathematics for image processing.
As one of the main developers of G’MIC, I wanted to sum up the work we’ve done on this software over the last few months.

G'MIC logo
Fig. 1.1: The G’MIC project logo, and its cute little mascot “Gmicky” (designed by David Revoy).

G’MIC is multi-platform (GNU/Linux, MacOS, Windows …) and provides many ways of manipulating generic image data, i.e. still images or image sequences acquired as hyperspectral 2D or 3D floating-point arrays (including usual color images). More than 950 different image processing functions are already available in the G’MIC framework, this number being expandable through the use of the G’MIC scripting capabilities.
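For a taste of what that looks like in practice, here is a minimal, hypothetical sketch of driving a G’MIC pipeline from a script (blur and output are real G’MIC commands; the file names are placeholders):

import subprocess

# Run a tiny G'MIC pipeline: load an image, blur it, save the result.
# Assumes the gmic command-line tool is installed and on the PATH.
subprocess.run(
    ["gmic", "photo.jpg", "blur", "2", "output", "soft_photo.jpg"],
    check=True,
)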

G'MIC plugin for GIMP
Fig.1.2: The G’MIC-Qt plugin for GIMP, currently the most popular G’MIC interface.

Since the last major release there have been two important events in the project’s life:

1.1. Port of the G’MIC-Qt plugin to Krita

When we released version 2.0 of G’MIC a few months ago, we were happy to announce a complete rewrite (in Qt) of the plugin code for GIMP. A further step has now been taken: the plugin has been extended to fit into the open-source digital painting software Krita.
This has been made possible thanks to the development work of Boudewijn Rempt (maintainer of Krita) and Sébastien Fourey (developer of the plugin). The G’MIC-Qt plugin is now available for Krita versions 3.3+ and, although it does not yet implement all the I/O functionality of its GIMP counterpart, the feedback we’ve had so far is rather positive.
This new port replaces the old G’MIC plugin for Krita, which had not been maintained for some time. The good news for Krita users (and developers) is that they now have an up-to-date plugin whose code is shared with the GIMP version, and which we will be able to maintain and develop further.
Note that this port required writing a C++ source file, host_krita.cpp, implementing the communication between the host software and the plugin; it is reasonable to think that a similar effort would allow other programs to get their own version of the G’MIC plugin (and the 500 image filters that come with it!).

G'MIC for Krita
Fig. 1.3: Overview of the G’MIC-Qt plugin running on Krita.

1.2. CeCILL-C, a more permissive license

Another major event concerns licensing: the CeCILL-C license (which is in the spirit of the LGPL) is now available for some components of the G’MIC framework. This license is more permissive than the previously offered CeCILL license (which is GPL-compatible) and is better suited to the distribution of software libraries. This license extension (now dual licensing) applies specifically to the core files of G’MIC, i.e. its C++ library libgmic. Thus, integrating the libgmic features (and therefore all G’MIC image filters) is now allowed in software that is not itself licensed under GPL/CeCILL (including closed-source products).
The source code of the G’MIC-Qt plugin, meanwhile, remains distributed under the CeCILL license alone (GPL-like).

2. Fruitful collaboration with David Revoy

If you’ve followed us for a while, you may have noticed that we very often refer to the work of illustrator David Revoy for his many contributions to G’MIC: mascot design, ideas for filters, articles and video tutorials, tests of all kinds, etc. More generally, David is a major contributor to the world of free digital art, both through the comic Pepper & Carrot he produces (distributed under the free CC-BY license) and through his suggestions and ongoing bug reports for the open-source software he uses.
It therefore seemed quite natural to devote a special section to him in this article, summarizing the various ideas, contributions and experiments he has brought to G’MIC recently. A big thank you, David, for your availability, for sharing your ideas, and for all your work in general!

2.1. Improving the lineart colorization filter

Let’s first mention the progress made on the Black & White / Colorize lineart (smart-coloring) filter that had appeared at the time of the 2.0 G’MIC release.
This filter is basically a lineart colorization assistant, developed in collaboration with David. It tries to automatically generate a colorization layer for a given lineart from an analysis of the contours and geometry of that lineart. Following David‘s suggestions, we were able to add a new colorization mode, named “Autoclean“. The idea is to automatically “clean” a coloring layer (made roughly by the user) provided in addition to the lineart layer, using the same geometric analysis as the previous colorization modes.
The use of this new mode is illustrated below, where a given lineart (left) has been colorized approximately by the user. From the two layers (lineart + rough colors), our “Autoclean” algorithm generates an image (right) where the colors do not overflow the lineart contours (even “virtual” contours that are not closed). The result is not always perfect, but it nevertheless reduces the time spent in the tedious process of colorization.

Gmic_autoclean
Fig. 2.1: The new “Autoclean” mode of the lineart colorization filter can automatically “clean” a rough colorization layer.

Note that this filter is also equipped with a new hatch detection module, which makes it possible to avoid generating too many small areas when using the previously available random colorization mode, particularly when the input lineart contains a large number of hatches (see figure below).

Gmic_hatch_detect
Fig. 2.2: The new hatching detection module limits the number of small colored areas generated by the automatic random coloring mode.

2.2. Color equalizer in HSI, HSL and HSV spaces

More recently, David suggested the idea of a filter to separately vary the hue and saturation of colors having certain levels of luminosity. The underlying idea is to give artists the ability to draw or paint digitally using only grayscale, then colorize the masterpiece afterwards by re-assigning specific colors to the different gray values of the image. The result of course has a limited color range, but the overall color mood is already in place; the artist only has to retouch colors locally rather than colorize the entire painting by hand.
The figure below illustrates the use of this new filter, Colors/Equalize HSI/HSL/HSV, available in the G’MIC plugin: each band of values can be finely adjusted, resulting in preliminary colorizations of black and white paintings.

Equalize HSI1
Equalize HSI2
Equalize HSI3
Fig. 2.3: Equalization in HSI/HSL/HSV colorspaces makes it easy to set the global color mood of B&W paintings.
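To make the principle concrete, here is a rough Python sketch of the idea (not G’MIC’s actual implementation): shift hue and saturation only for pixels whose lightness falls within a chosen band, leaving lightness itself untouched. The file name, band limits and offsets are all arbitrary illustrations:

import colorsys
import numpy as np
from PIL import Image

# Illustrative only (and slow on large images): recolor the mid-tones.
img = np.asarray(Image.open("bw_painting.png").convert("RGB")) / 255.0
h, l, s = np.vectorize(colorsys.rgb_to_hls)(img[..., 0], img[..., 1], img[..., 2])

band = (l > 0.3) & (l < 0.6)                  # act on mid-tones only
h = np.where(band, (h + 0.55) % 1.0, h)       # push mid-tone hues toward blue
s = np.where(band, np.clip(s + 0.4, 0, 1), s)

r, g, b = np.vectorize(colorsys.hls_to_rgb)(h, l, s)
out = (np.stack([r, g, b], axis=-1) * 255).astype(np.uint8)
Image.fromarray(out).save("colorized.png")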

Note that the effect is equivalent to applying a color gradient to the different gray values of the image, something that could already be done quite easily in GIMP. The main interest here is that we can ensure the pixel brightness remains unchanged during the color transformation, which is not an obvious property to preserve when using a gradient map.
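In the HSI model, for instance, the intensity channel is simply the mean of the three color channels, so a transform that modifies only H and S leaves it unchanged by construction:

$$ I = \frac{R + G + B}{3}, \qquad (H, S, I) \mapsto (H', S', I). $$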
What is nice about this filter is that it can be applied to color photographs as well. You can change the hue and saturation of colors with a certain brightness, with an effect that can sometimes be surprising, as with the landscape photograph shown below.

Equalize HSI4
Fig. 2.4: The filter “Equalize HSI/HSL/HSV” applied on a color photograph makes it possible to change the colorimetric environment, here in a rather extreme way.

2.3. Angular deformations

Another of David‘s ideas concerned the development of a random local deformation filter with the ability to generate angular deformations. From an algorithmic point of view, it seemed relatively simple to achieve.
Note that once the implementation was done (in concise style: 12 lines!) and pushed into the official filter updates, David just had to press the “Update Filters” button of his G’MIC-Qt plugin, and the new effect Deformations/Crease was there, immediately ready for testing. This is one of the practical sides of developing new filters in the G’MIC script language!

G'MIC Crease
Fig. 2.5: New effect “Crease” for local angular deformations.

However, I must admit I didn’t really have an idea of what this could be useful for in practice. But the good thing about cooperating with David is that HE knows exactly what he’s going to do with it! For instance, giving a crispy look to the edges of his comics, or improving the rendering of his alien death ray.

G'MIC Crease 2
G'MIC Crease 3
Fig. 2.6: Using the G’MIC “Crease” filter for two real cases of artistic creation.

3. Filters, filters, filters…

David Revoy is not the only user of G’MIC: we sometimes count up to 900 daily downloads from the main project website. So of course other enthusiastic users inspire new effects too, especially during those lovely discussions that take place on our forum, kindly made available by the PIXLS.US community.

3.1. Bring out the details without creating “halos”

Many photographers will tell you that it is not always easy to enhance the details in digital photographs without creating nasty artifacts that often have to be masked manually afterwards. Conventional contrast enhancement algorithms are most often based on increasing the local variance of pixel lightness, or on equalizing local histograms. Unfortunately, these operations are generally done over neighborhoods with a fixed size and geometry, where each pixel of a neighborhood is always given the same weight in the statistical calculations.
This is simpler and faster, but from a qualitative point of view it is not a great idea: we often get “halos” around contours that were already strongly contrasted in the image. This classic phenomenon is illustrated below with the application of the Unsharp mask filter (the one present by default in GIMP) on part of a landscape image; it generates an undesirable “halo” at the boundary between the mountain and the sky (particularly visible in the full-resolution images).

G'MIC details filters
Fig. 3.1: Unwanted “halo” effects often occur with conventional contrast enhancement filters.
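For reference, classic unsharp masking just adds back a scaled difference between the image and a Gaussian-blurred copy of it; because the blur is the same everywhere, strongly contrasted edges overshoot, which is exactly where the halos come from. A minimal sketch (using SciPy, not GIMP’s exact implementation):

import numpy as np
from scipy.ndimage import gaussian_filter

def unsharp_mask(img, sigma=3.0, amount=1.5):
    # Isotropic unsharp masking on a 2-D grayscale array: the same blur
    # everywhere, hence overshoot (halos) along already-strong edges.
    img = img.astype(float)
    blurred = gaussian_filter(img, sigma)
    return np.clip(img + amount * (img - blurred), 0, 255).astype(np.uint8)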

The challenge for detail enhancement algorithms is to analyze the geometry of local image structures in a finer way, taking into account geometry-adaptive local weights for each pixel of a given neighborhood. Put simply, we want to create anisotropic versions of the usual enhancement methods, oriented by the edges detected in the images.
Following this logic, we recently added two new G’MIC filters, namely Details/Magic details and Details/Equalize local histograms, which try to take the geometric content of the image better into account for local detail enhancement (e.g. using the bilateral filter).

G'MIC magic details
G'MIC equalize local histograms
G'MIC equalize local histograms
Fig. 3.2: The new G’MIC detail enhancement filters.

Thus, applying the new G’MIC local histogram equalization to the landscape image shown before gives something slightly different: a result that is more contrasted both in geometric details and in colors, with reduced halos.

G'MIC magic details
G'MIC magic details
Fig. 3.3: Difference in results between the standard GIMP Unsharp Mask filter and G’MIC’s local histogram equalization, for detail enhancement.

3.2. Different types of image deformations

New filters applying geometric deformations to images are added to G’MIC on a regular basis, and this new major version 2.2 therefore offers a bunch of new deformation filters.
Let’s start with Deformations/Spherize, a filter which locally distorts an image to give the impression that it is projected onto a 3D sphere or ellipsoid. This is the perfect filter to turn your obnoxious office colleague into a Mr. Potato Head!

G'MIC spherize
G'MIC spherize
Fig. 3.4: Two examples of 3D spherical deformations obtained with the G’MIC “Spherize” filter.

The filter Deformations/Square to circle, on the other hand, implements the direct and inverse transformations from a square (or rectangular) domain to a disk (as mathematically described on this page), which makes it possible to generate this type of deformation.

G'MIC square to circle
Fig. 3.5: Direct and inverse transformations from a square domain to a disk.

The effect Degradations/Streak replaces an image area masked by the user (filled with a constant color) with one or more copies of a neighboring area. It works much like the GIMP clone tool, but saves the user from having to fill the entire mask manually.

G'MIC streak
Fig. 3.6: The “Streak” filter clones parts of the image into a user-defined color mask.

3.3. Artistic Abstractions

You might say that image deformations are nice, but sometimes you want to transform an image in a more radical way. So let’s now introduce the new effects that turn an image into a more abstract version of itself (simplification and re-rendering). What these filters have in common is an analysis of the local image geometry, followed by an image synthesis step.

For example, the G’MIC filter Contours/Super-pixels locally gathers image pixels with similar colors to form a partitioned image, like a puzzle, with geometric shapes that stick to the contours. This partition is obtained using the SLIC method (Simple Linear Iterative Clustering), a classic image partitioning algorithm, which has the advantage of being relatively fast to compute.

G'MIC super pixels 1
G'MIC super pixels 2
Fig. 3.7: Decomposition of an image into super-pixels by the Simple Linear Iterative Clustering (SLIC) algorithm.
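The same SLIC decomposition is available in scikit-image, so a rough approximation of this effect outside G’MIC fits in a few lines (file names and parameters here are placeholders):

import numpy as np
from PIL import Image
from skimage.segmentation import slic

# Partition the image into ~400 super-pixels, then flatten each one
# to its average color for the "puzzle piece" look.
img = np.asarray(Image.open("photo.jpg").convert("RGB"))
labels = slic(img, n_segments=400, compactness=20)

out = np.zeros_like(img)
for lab in np.unique(labels):
    mask = labels == lab
    out[mask] = img[mask].mean(axis=0)
Image.fromarray(out).save("superpixels.png")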

The filter Artistic/Linify tries to redraw an input image by superimposing semi-transparent colored lines on an initially white canvas, as shown in the figure below. This effect is a re-implementation of the clever algorithm originally proposed (in JavaScript) on the site http://linify.me.

G'MIC linify 1
G'MIC linify 2
Fig. 3.8: The “Linify” effect tries to redraw an image by superimposing only semi-transparent colored lines on a white canvas.
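One naive way to approximate the idea (this is a sketch, not the linify.me algorithm itself) is a greedy loop: draw a random line on a white canvas and keep it only if it brings the canvas closer to the target image. Opaque gray strokes stand in for the semi-transparent lines here:

import numpy as np
from PIL import Image, ImageDraw

target = np.asarray(Image.open("photo.jpg").convert("L"), dtype=float)
h, w = target.shape
canvas = Image.new("L", (w, h), 255)
rng = np.random.default_rng(0)

err = np.abs(np.asarray(canvas, dtype=float) - target).sum()
for _ in range(5000):
    (x0, x1), (y0, y1) = rng.integers(0, w, 2), rng.integers(0, h, 2)
    trial = canvas.copy()
    ImageDraw.Draw(trial).line([int(x0), int(y0), int(x1), int(y1)], fill=128)
    trial_err = np.abs(np.asarray(trial, dtype=float) - target).sum()
    if trial_err < err:  # keep the stroke only if it helps
        canvas, err = trial, trial_err
canvas.save("linified.png")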

The effect Artistic/Quadtree variations first decomposes an image into a quadtree, then re-synthesizes it by drawing oriented, flat-colored ellipses on a canvas, one ellipse per quadtree leaf. This produces a rather interesting “painting” effect. It is likely that even more attractive renderings could be synthesized with more complex shapes. Surely an idea to keep in mind for the next filters update 🙂

G'MIC quadtree 1
G'MIC quadtree 2
Fig. 3.9: Decomposing an image as a quadtree allows re-synthesizing it by superimposing only flat-colored ellipses.
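A toy version of the principle (again a sketch under arbitrary parameters, not G’MIC’s filter): recursively split any block whose color variance is too high, then draw one flat-colored ellipse per quadtree leaf on a white canvas.

import numpy as np
from PIL import Image, ImageDraw

def leaves(img, x, y, w, h, thresh=300.0, min_size=8):
    # Yield (x, y, w, h, mean_color) for each quadtree leaf.
    block = img[y:y + h, x:x + w]
    if w > min_size and h > min_size and block.var(axis=(0, 1)).sum() > thresh:
        hw, hh = w // 2, h // 2
        for nx, ny, nw, nh in ((x, y, hw, hh), (x + hw, y, w - hw, hh),
                               (x, y + hh, hw, h - hh), (x + hw, y + hh, w - hw, h - hh)):
            yield from leaves(img, nx, ny, nw, nh, thresh, min_size)
    else:
        yield x, y, w, h, tuple(int(c) for c in block.mean(axis=(0, 1)))

img = np.asarray(Image.open("photo.jpg").convert("RGB"))
canvas = Image.new("RGB", (img.shape[1], img.shape[0]), "white")
draw = ImageDraw.Draw(canvas)
for x, y, w, h, color in leaves(img, 0, 0, img.shape[1], img.shape[0]):
    draw.ellipse([x, y, x + w, y + h], fill=color)
canvas.save("quadtree_ellipses.png")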

3.4. “Are there any more?”

And now that you have processed so many beautiful pictures, why not arrange them into a superb photo montage? This is precisely the role of the filter Arrays & tiles/Drawn montage, which lets you create a juxtaposition of photographs very quickly, for any kind of shape.
The idea is to provide the filter with a colored template in addition to the series of photographs (Fig. 3.10a), and then to associate each photograph with a different color of the template (Fig. 3.10b). The arrangement is then done automatically by G’MIC, resizing the images so that they appear best framed within the shapes defined in the montage template (Fig. 3.10c).
We made a video tutorial illustrating the use of this specific filter.

G'MIC drawn montage
Fig. 3.10a: Step 1: The user draws the desired organization of the montage with shapes of different colors.

G'MIC drawn montage
Fig. 3.10b: Step 2: G’MIC’s “Drawn Montage” filter allows you to associate a photograph for each template color.

G'MIC drawn montage
Fig. 3.10c: Step 3: The photo montage is then automatically synthesized by the filter.

But let’s go back to more essential questions: have you ever needed to draw gears? No?! That’s quite normal; it’s not something we do every day! But just in case, the new G’MIC filter Rendering/Gear will be glad to help, with settings to adjust the gear’s size, colors and number of teeth. Perfectly useless, therefore totally indispensable!

G'MIC gear
Fig. 3.11: The Gear filter, running at full speed.

Need a satin texture right now? No?! Too bad, the filter Patterns/Satin could have been of great help!

G'MIC satin
Fig. 3.12: G’MIC’s satin filter will make your life more silky.

And finally, to wrap up this series of “effects that are useless until you need them”, note the appearance of the new filter Degradations/JPEG artifacts, which simulates JPEG compression artifacts due to the quantization of the DCT coefficients encoding 8×8 image blocks (yes, you will get almost the same result by saving your image as a JPEG file with the desired quality).

Simulate JPEG Artifacts
Simulate JPEG Artifacts
Fig. 3.13: The “JPEG artifacts” filter simulates the image degradation due to 8×8 block DCT compression.
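Since the article itself notes the filter is close to plain JPEG re-encoding, the cheapest way to get a similar degradation is simply (file names are placeholders):

from PIL import Image

# Re-encode at a very low quality setting to produce visible 8x8 DCT
# block artifacts, similar in spirit to the "JPEG artifacts" filter.
Image.open("clean.png").convert("RGB").save("degraded.jpg", quality=5)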

4. Other notable improvements

This review of the newly available G’MIC filters should not overshadow the various improvements made “under the hood”, which are equally important even if they are less visible to the user.

4.1. A better G’MIC-Qt plugin interface

A big effort has gone into cleaning and restructuring the G’MIC-Qt plugin code, with a lot of little inconsistencies fixed in the GUI. Let’s also mention, in no particular order, some interesting new features that have appeared in the plugin:

  • The ability to set a timeout when previewing computationally intensive filters.
  • Better management of the input/output parameters for each filter (with persistence, better menu location, and a reset button).
  • Maximizing the size of the preview area is now easier, its zoom level can be edited manually, and the language of the interface can be chosen independently of the system language, etc.

All these little things together noticeably improve the user experience.

G'MIC Preferences
Fig. 4.1: Overview of the G’MIC-Qt plugin interface in its latest version 2.2.

4.2. Improvements in the G’MIC core

Even less visible, but just as important, many improvements have appeared in the G’MIC computational core and its associated script language interpreter. Keep in mind that all of the available filters are actually written as scripts in the G’MIC language, so each small improvement to the interpreter can benefit all filters at once. Without going too deep into the technical details, we can highlight these points:

  • A notable improvement in the syntax of the language itself, along with better parsing performance (and therefore faster script execution), all with a smaller memory footprint.
  • The G’MIC built-in mathematical expression evaluator has also received various optimizations and new features, opening up even more possibilities for non-trivial operations at the pixel level.

  • Better support for raw video input/output (.yuv format), now covering the 4:2:2 and 4:4:4 formats in addition to 4:2:0, which was the only mode supported before.

  • Two new animations have been added to the G’MIC demos menu (displayed e.g. when invoking gmic without arguments from the command line):

    • First, a 3D starfield animation:

    Starfield demo
    Fig. 4.2: New 3D starfield animation added to the G’MIC demo menu.

    • And second, a playable 3D version of the “Tower of Hanoi”:

    Hanoi Demo
    Fig. 4.3: The playable 3D version of the “Tower of Hanoi”, available in G’MIC.

  • Finally, let us mention the introduction of the command tensors3d, dedicated to the 3D rendering of second-order tensor fields. In practice, it does not only serve to make you want to eat Smarties®! It can be used, for example, to visualize certain regions of MRI volumes of diffusion tensors:

Tensors3d
Fig. 4.4: G’MIC rendering of a 3D tensor field, with command tensors3d.

4.3. New design for G’MIC Online

To finish this tour, let us also mention the complete redesign of G’MIC Online during 2017, done by Christophe Couronne and Véronique Robert from the development department of the GREYC laboratory.
G’MIC Online is a web service that lets you apply a subset of G’MIC filters to your images directly in a web browser. These pages now have a responsive design, which makes them much more pleasant to use on mobile devices (smartphones and tablets). Shown below is a screenshot of this service running in Chrome on Android, on a 10” tablet.

G'MICol
Fig. 4.5: New responsive design of the G’MIC Online web service, running here on a 10″ tablet.

5. Conclusion and perspectives

The overview of this new version 2.2 of G’MIC is now over.
One possible conclusion could be: “There are plenty of perspectives!”

G’MIC is a free project that can be considered mature: the first lines of code were written almost ten years ago, and today we have a good idea of the possibilities (and limits) of the beast. We hope to see more and more interest from FOSS users and developers, for example in integrating the generic G’MIC-Qt plugin into other software focused on image or video processing.

The possibility of using the G’MIC core under the more permissive CeCILL-C license may also lead to interesting collaborations in the future (some companies have already approached us about this). In the meantime, we will do our best to keep developing G’MIC and feeding it with new filters and effects, following the suggestions of our enthusiastic users. A big thanks to them for their help and constant encouragement (the motivation to write code or articles past 11pm would not be the same without them!).

“Long live open-source image processing and artistic creation!”

February 23, 2018

PEEC Planetarium Show: "The Analemma Dilemma"

[Analemma by Giuseppe Donatiello via Wikimedia Commons] Dave and I are giving a planetarium show at PEEC tonight on the analemma.

I've been interested in the analemma for years and have written about it before, here on the blog and in the SJAA Ephemeris. But there were a lot of things I still didn't understand as well as I'd like. When we signed up three months ago to give this talk, I had plenty of lead time to investigate further, uncovering lots of interesting details about the analemmas of other planets, the contributions of the two factors that go into the Equation of Time, why some analemmas are figure-8s while others aren't, and the supposed "moon analemmas" that have appeared on the Astronomy Picture of the Day. I also added some new features to the analemma script I'd written years ago, and corresponded with an expert who'd written some great Equation of Time code for all the planets. It's been fun.

I'll write about some of what I learned when I get a chance, but meanwhile, people in the Los Alamos area can hear all about it at our PEEC show: The Analemma Dilemma, 7 pm tonight, Friday Feb 23, at the Nature Center; admission $6/adult, $4/child.

February 20, 2018

CSS Grid

This would totally have been a tweet or a Facebook post, but I’ve decided to invest a little more energy and post these on my blog, accessible to everybody. Getting old, I guess. We’re all mortal, and the web isn’t open on its own.

In the past few days I’ve been learning about CSS Grid while redesigning the Flatpak and Flathub sites (still coming). And even though I really grok only a fraction of it, I’m in love. So far I really dig:

  • Graceful fallback
  • Layout fully controlled by the theme
  • Controlled whitespace (meaning the layout won’t fall apart when you add or remove some whitespace)
  • Reasonable code legibility
  • Responsive layouts even without media queries

Whitespace on the Web

Getting to grips with how differently things are sized and defined, and with implicit sizing, will take some time, but Grid seems to have the answers to all the problems I’ve run into so far. Do note that I never got super fluent in flexbox, either.

I love the few video bites that Jen Simmons publishes periodically. The only downside to all this is seeing the mess of legacy grid systems I have on numerous websites, like this one.

February 19, 2018

Interview with Christine Garner

Could you tell us something about yourself?

I’m 35 years old from Shropshire in Britain. I like comfy socks and tea.

I did Archaeology at university, and I love history, mythology, folklore and nature. I’ve been drawing from an early age. I graduated in 2003 with an archaeology degree, then taught myself digital art and web coding skills, for fun and for practical reasons. I used to do self-employed web design and admin-type jobs, but in 2013 I became disillusioned with my life and had depression. That year I took a Foundation art course, deciding to pursue my artistic passions instead.

Since 2014 I’ve been practising art like crazy and building up a portfolio of illustration work.

Do you paint professionally, as a hobby artist, or both?

Both. I use digital painting to make studies for my illustration work and as a way to experiment. I would love to do paid freelance illustration work should the right opportunity come along, but I am happy making my own projects as well; it helps me learn and get better.

What genre(s) do you work in?

I’ve found I like to draw and paint themes from nature and animals. I read about myths and folklore. I’ve been working on personal projects about mermaids and river goddesses. It’s still in the early stages. I’m developing my style to incorporate more fantasy subjects in the future and mix in my pattern work.

Whose work inspires you most — who are your role models as an artist?

I look at archaeology, art history and graphic design for inspiration. I don’t have a favourite artist at the moment; there are so many styles I’m inspired by that it’s difficult to choose one. I’ve been looking at folk art more and trying to loosen up with my painting.

How and when did you get to try digital painting for the first time?

I tried digital painting for the first time in 2004 when I wanted to texture my own skins for the Maxis Sims game I used to play. I learnt web coding as well so I could share my efforts with people. My sister had a copy of Photoshop 6 which I used, and I had a tiny, non-pressure-sensitive drawing tablet my mum had given me. My first job as a data entry clerk in a bank (yawn) enabled me to save up and get my first proper Wacom tablet around 2005. I wished I could do something with my Archaeology instead but there were no paid local jobs. From looking on the Internet I saw people made amazing works of art with digital software. I got obsessed with concept art and fantasy art and tried to get better. I used to haunt the CGSociety and GFXArtist and I bought ImagineFX magazine to try and learn all I could on my own. There were not the resources back then that there are now, but it was fun and a good distraction from the rest of my life.

What makes you choose digital over traditional painting?

Digital painting is a versatile medium. When I was starting out it saved me money on buying expensive art materials, and I didn’t have any room for doing traditional painting in my small bedroom. I bought a student version of Corel Painter to start with, which was one of my only options back in 2005. This enabled me to try lots of different types of art styles without the mess. Now that I have my own house, with more room, and I’ve collected more art materials, I tend to mix traditional and digital mediums depending on the effect I want. If I want the texture of coloured pencils, for example, I’ve found it is easier to do it with coloured pencils first. Over the years I’ve tried a lot of different digital painting programs and learnt how to do vector art as well.

How did you find out about Krita?

I found out about Krita from surfing on the Internet around 2011. I’d been using MyPaint and heard about Krita from my geeky research. I’m a bit of a geek when it comes to digital art, so I like trying different software.

What was your first impression?

The first time I tried it I was still a heavy Corel Painter user, but I bookmarked it for future reference seeing it had potential. I tried it again around 2013 and fell in love with it over using Corel Painter. The interface was great, it was much nicer and easier to use than Corel Painter, and it had lovely practical brushes out of the box.

What do you love about Krita?

I love the layer system and the way the brushes work. They are as sensitive as any in more expensive programs, and are as customisable. The mirror tool is fun and I like the seamless pattern feature. I’ve been so busy I haven’t gotten around to trying animation yet, which is a cool feature.

What do you think needs improvement in Krita? Is there anything that really annoys you?

I can’t think of anything which annoys me about Krita, unless it’s a fault of my own hardware setup. Sometimes I’ll have to minimise and maximise Krita to get pressure sensitivity back after using different programs while painting. It’s probably because I have an Ugee 2150 and I use Windows 10 which keeps doing annoying creator updates. Because I use other software as well (Photoshop CS3, Affinity Designer, Affinity Photo) any weaknesses in Krita can be overcome using those and vice versa.

What sets Krita apart from the other tools that you use?

I like the default brushes which come with Krita. In all the other programs I’ve used I have had to customise and make new brushes or buy brushes to get what I want. There is also a strong community feel about Krita and growing resources. I wrote a long article about why Krita is great for beginners to digital painting on Medium.

If you had to pick one favourite of all your work done in Krita so far, what would it be, and why?

The goldfish studies, because they were fun to do and turned out pretty well. I want to do more work in the same style.

What techniques and brushes did you use in it?

I used the default brushes. Most of them are by David Revoy. The digital sketch brush is one of my favourites, and the alchemy brush is useful for blocking in areas. First I blocked in the silhouette of the goldfish with an opaque round brush. Then I took advantage of layer clipping masks to paint within the shape of the goldfish, blocking in areas of colour with the alchemy brush and the magnetic lasso tool. I also used gradient fills in the early stages to get nice base colour blending to paint the details into. I used different layer styles for shadows (multiply) and highlights (screen / overlay). Start with blocking in with big brushes, then go smaller for the details.

Where can people see more of your work?

I have a website at https://thimblefolio.com.

I’m also on the following websites:

Behance: https://www.behance.net/chrisgarnerart
Dribbble: https://dribbble.com/chrisgarnerart
Instagram: https://www.instagram.com/thimblefoliopress/
Tumblr: https://thimblefolio.tumblr.com/
YouTube: https://www.youtube.com/user/cribbyGart/featured
Medium: https://medium.com/@thimblefolio
Society 6: https://society6.com/thimblefoliopress
Curioos: https://www.curioos.com/thimblefolio
Spoonflower: https://www.spoonflower.com/profiles/thimblefolio

Anything else you’d like to share?

Thank you very much to all the people who work on developing and improving Krita, and to everyone who creates tutorials and resources. It’s great that digital art is accessible to as many people as possible; learning digital painting is a positive and fun thing to do.

February 17, 2018

Multiplexing Input or Output on a Raspberry Pi Part 2: Port Expanders

In the previous article I talked about Multiplexing input/output using shift registers for a music keyboard project. I ended up with three CD4021 8-bit shift registers cascaded. It worked; but I found that I was spending all my time in the delays between polling each bit serially. I wanted a way to read those bits faster. So I ordered some I/O expander chips.

[Keyboard wired to Raspberry Pi with two MCP23017 port expanders] I/O expander, or port expander, chips take a lot of the hassle out of multiplexing. Instead of writing code to read bits serially, you can use I2C. Some chips also have built-in pullup resistors, so you don't need all those extra wires for pullups or pulldowns. There are lots of options, but two common chips are the MCP23017, which controls 16 lines, and the MCP23008 and PCF8574p, which each handle 8. I'll only discuss the MCP23017 here, because if eight is good, surely sixteen is better! But the MCP23008 is basically the same thing with fewer I/O lines.

A good tutorial to get you started is How To Use A MCP23017 I2C Port Expander With The Raspberry Pi - 2013 Part 1 along with part 2, Python and part 3, reading input.

I'm not going to try to repeat what's in those tutorials, just fill in some gaps I found. For instance, I didn't find I needed sudo for all those I2C commands in Part 1 since my user is already in the i2c group.

Using Python smbus

Part 2 of that tutorial uses Python smbus, but it doesn't really explain all the magic numbers it uses, so it wasn't obvious how to generalize it when I added a second expander chip. It uses this code:

import smbus

bus = smbus.SMBus(1)  # I2C bus 1, the user-accessible bus on recent Pis

DEVICE = 0x20 # Device address (A0-A2)
IODIRA = 0x00 # Pin direction register
OLATA  = 0x14 # Register for outputs
GPIOA  = 0x12 # Register for inputs

# Set all GPA pins as outputs by setting
# all bits of IODIRA register to 0
bus.write_byte_data(DEVICE,IODIRA,0x00)

# Set all 8 output bits to 0
bus.write_byte_data(DEVICE,OLATA,0)

DEVICE is the address on the I2C bus, the one you see with i2cdetect -y 1 (20, initially).

IODIRA is the direction: when you call

bus.write_byte_data(DEVICE, IODIRA, 0x00)
you're saying that all eight bits in GPA should be used for output. Zero specifies output, one input: so if you said
bus.write_byte_data(DEVICE, IODIRA, 0x1F)
you'd be specifying that you want to use the lowest five bits for input and the upper three for output.

OLATA = 0x14 is the command to use when writing data:

bus.write_byte_data(DEVICE, OLATA, MyData)
means write data to the eight GPA pins. But what if you want to write to the eight GPB pins instead? Then you'd use
OLATB  = 0x15
bus.write_byte_data(DEVICE, OLATB, MyData)

Likewise, if you want to read input from some of the GPB bits, use

GPIOB  = 0x13
val = bus.read_byte_data(DEVICE, GPIOB)

The MCP23017 even has internal pullup resistors you can enable:

GPPUA  = 0x0c    # Pullup resistor on GPA
GPPUB  = 0x0d    # Pullup resistor on GPB
bus.write_byte_data(DEVICE, GPPUB, inmaskB)
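
Putting those registers together, here's a minimal sketch (not the full GitHub example below; it assumes a single chip at 0x20 on I2C bus 1, with GPA wired as outputs and GPB as inputs):

import smbus

DEVICE = 0x20   # A0-A2 grounded
IODIRA = 0x00   # direction register for GPA (0 = output, 1 = input)
IODIRB = 0x01   # direction register for GPB
OLATA  = 0x14   # output latch for GPA
GPIOB  = 0x13   # input register for GPB
GPPUB  = 0x0d   # pullup enable for GPB

bus = smbus.SMBus(1)
bus.write_byte_data(DEVICE, IODIRA, 0x00)   # all eight GPA pins are outputs
bus.write_byte_data(DEVICE, IODIRB, 0xff)   # all eight GPB pins are inputs
bus.write_byte_data(DEVICE, GPPUB, 0xff)    # enable pullups on all GPB pins

bus.write_byte_data(DEVICE, OLATA, 0b10101010)   # write a pattern to GPA
val = bus.read_byte_data(DEVICE, GPIOB)          # read all eight GPB inputs
print(format(val, '08b'))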

Here's a full example: MCP23017.py on GitHub.

Using WiringPi

You can also talk to an MCP23017 using the WiringPi library. In that case, you don't set all the bits at once, but instead treat each bit as though it were a separate pin. That's easier to think about conceptually -- you don't have to worry about bit shifting and masking, just use pins one at a time -- but it might be slower if the library is doing a separate read each time you ask for an input bit. It's probably not the right approach to use if you're trying to check a whole keyboard's state at once.

Start by picking a base address for the pin number -- 65 is the lowest you can pick -- and initializing:

pin_base = 65
i2c_addr = 0x20

wiringpi.wiringPiSetup()
wiringpi.mcp23017Setup(pin_base, i2c_addr)

Then you can set input or output mode for each pin:

wiringpi.pinMode(pin_base, wiringpi.OUTPUT)
wiringpi.pinMode(input_pin, wiringpi.INPUT)
and then write to or read from each pin:
wiringpi.digitalWrite(pin_no, 1)
val = wiringpi.digitalRead(pin_no)

WiringPi also gives you access to the MCP23017's internal pullup resistors:

wiringpi.pullUpDnControl(input_pin, 2)   # 2 = pull-up (1 = pull-down, 0 = none)

Here's an example in Python: MCP23017-wiringpi.py on GitHub, and one in C: MCP23017-wiringpi.c on GitHub.
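
As a usage sketch (not the exact scripts linked above), polling all sixteen expander pins as pulled-up inputs might look like this:

import time
import wiringpi

pin_base = 65      # first expander pin (GPA0); must be above 64
i2c_addr = 0x20    # A0-A2 grounded

wiringpi.wiringPiSetup()
wiringpi.mcp23017Setup(pin_base, i2c_addr)

for pin in range(pin_base, pin_base + 16):
    wiringpi.pinMode(pin, wiringpi.INPUT)
    wiringpi.pullUpDnControl(pin, 2)   # 2 = internal pull-up

while True:
    # With pull-ups, a pressed key pulls its line low and reads 0.
    pressed = [p - pin_base for p in range(pin_base, pin_base + 16)
               if wiringpi.digitalRead(p) == 0]
    if pressed:
        print('pressed:', pressed)
    time.sleep(0.02)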

Using multiple MCP23017s

But how do you cascade several MCP23017 chips?

Well, you don't actually cascade them. Since they're I2C devices, you wire them so they each have different addresses on the I2C bus, then query them individually. Happily, that's easier than keeping track of how many bits you've looped through on a shift register.

Pins 15, 16 and 17 on the chip are the address lines, labeled A0, A1 and A2. If you ground all three you get the base address of 0x20. With all three connected to VCC, it will use 0x27 (binary 111 added to the base address). So you can send commands to your first device at 0x20, then to your second one at 0x21 and so on. If you're using WiringPi, you can call mcp23017Setup(pin_base2, i2c_addr2) for your second chip.
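
In Python smbus terms, that just means passing a different device address for each chip. A minimal sketch, assuming one chip with its address lines grounded (0x20) and a second with A0 tied to VCC (0x21):

import smbus

bus = smbus.SMBus(1)
GPIOA = 0x12   # input register for the GPA bank

# Each chip is queried independently on the same I2C bus.
for device in (0x20, 0x21):
    val = bus.read_byte_data(device, GPIOA)
    print('chip at 0x%02x: GPA = %s' % (device, format(val, '08b')))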

I had trouble getting the addresses to work initially, and it turned out the problem wasn't in my understanding of the address line wiring, but that one of my cheap Chinese breadboards had a bad power and ground bus in one quadrant. That's a good lesson for the future: when things don't work as expected, don't assume the breadboard is above suspicion.

Using two MCP23017 chips with their built-in pullup resistors simplified the wiring for my music keyboard enormously, and it made the code cleaner too. Here's the modified code: keyboard.py on GitHub.

What about the speed? It is indeed quite a bit faster than the shift register code. But it's still too laggy to use as a real music keyboard. So I'll still need to do more profiling, and maybe find a faster way of generating notes, if I want to play music on this toy.

Look, new presets! Another Krita 4 development build!

We’ve been focusing like crazy on the Krita 4 release. We managed to close some 150 bugs in the past month, and Krita 4 is getting stable enough for many people to use day in, day out. There’s still more to be done, of course! So we’ll continue fixing issues and applying polish for at least another four weeks.

One of the things we’re doing as well is redesigning the set of default brush presets and brush tips that come with Krita. Brush tips are the little images one can paint with, and brush presets are the brushes you can select in the brush palette or brush popup. The combination of a tip, some settings and a smart bit of coding!

Our old set was fine, but it was based on David Revoy‘s earliest Krita brush bundles, and for Krita 4 we are revamping the entire set. We’ve added many new options to the brushes since then! So, many artists are working together to create a good-looking, useful and interesting set of brushes for Krita 4.

It’s still work in progress! But we want your feedback. So… Download the new Krita 4 development builds, and start doodling, sketching, painting, rendering… And then tell us what you think:

Do the brush preset survey!

Apart from the new brushes, and the bug fixes, there’s also news for Linux users: we updated our AppImage build scripts, and the new appimage includes Python scripting and the Touch UI docker. Note: by default all scripts are disabled, so you need to go to Settings/Configure Krita/Python scripts and enable the scripts you want to use.

Help with the release!

We’re having our hands full with tasks like coding, bug fixing, manual updating… More than full! One task that looks like it’s going to slip is creating a cool what-new-in-4 release video. So: if you’re good at creating videos and want to help the team, please join us on #krita on irc.freenode.net and help us out!

Windows

Note for Windows users: if you encounter crashes, please follow these instructions to use the debug symbols so we can figure out where Krita crashes. On Windows, gmic-qt is included.

Linux

On Linux, you need to download the gmic-qt appimage separately. You can configure the location of gmic-qt in Krita’s settings. Krita requires the “XDG_DATA_DIRS” environment variable to be set. Most distributions do this, but if yours doesn’t, set “XDG_DATA_DIRS” to “/usr/local/share/:/usr/share/” as a workaround. The next build of Krita will do this by itself.

(If, for some reason, Firefox thinks it needs to load this as text: to download, right-click on the link.)

OSX/macOS

The minimum version of OSX/macOS is 10.11.

Note: the python, gmic-qt and pdf plugins are not available on OSX.

md5sums

For all downloads:

Support Krita

Krita is a free and open source project. Please consider supporting the project with donations or by buying training videos or the artbook! With your support, we can keep the core team working on Krita full-time.

February 16, 2018

LVFS will block old versions of fwupd for some firmware

The ability to restrict firmware to specific versions of fwupd and the existing firmware version was added to fwupd in version 0.8.0. This functionality was added so that you could prevent the firmware being deployed if the upgrade was going to fail, either because:

  • The old version of fwupd did not support the new hardware quirks
  • If the upgraded-from firmware had broken upgrade functionality

The former is solved by updating fwupd, the latter is solved by following the vendor procedure to manually flash the hardware, e.g. using a DediProg to flash the EEPROM directly. Requiring a specific fwupd version is used by the Logitech Unifying receiver update for example, and requiring a previous minimum firmware version is used by one (soon to be two…) laptop OEMs at the moment.

Although fwupd 0.8.0 was released over a year ago, it seems people are still downloading firmware with older fwupd versions. 98% of the downloads from the LVFS are initiated from gnome-software; the other 2% of people use the fwupdmgr command line or manually download the .cab file from the LVFS using a browser.

At the moment, fwupd is being updated in Ubuntu xenial to 0.8.3 but it is still stuck at the long obsolete 0.7.4 in Debian stable. Fedora, of course, is 100% up to date with 1.0.5 in F27 and 0.9.6 in F26 and F25. Even RHEL 7.4 has 0.8.2 and RHEL 7.5 will be 1.0.1.

Detecting the fwupd version also gets slightly more complicated, as the user agent only gives us the ‘client version’ rather than the ‘fwupd version’ in most software. This means we have to use the minimum fwupd version required by the client when choosing if it is safe to provide the file. GNOME Software version 3.26.0 was the first version to depend on fwupd ≥ 0.8.0 and so anything newer than that would be safe. This gives a slight problem, as Ubuntu will be shipping an old gnome-software 3.20.x and a new-enough fwupd 0.8.x and so will be blacklisted for any firmware that requires a specific fwupd version. Which includes the Logitech security update…

The user agent we get from gnome-software is gnome-software/3.20.1 and so we can’t do anything very clever. I’m obviously erring on the side of not bricking a small amount of laptop hardware rather than making a lot of Logitech hardware secure on Ubuntu 16.04, given the next LTS, 18.04, is out on April 26th anyway. This means people might start getting a detected fwupd version too old message on the console if they try updating using 16.04.

A workaround for xenial users would be for someone at Canonical to include this patch, which changes the user agent in the gnome-software package to gnome-software/3.20.1 fwupd/0.8.3; I can then add a workaround in the LVFS download code to parse that. Comments welcome.

February 13, 2018

Multiplexing Input or Output on a Raspberry Pi Part 1: Shift Registers

I was scouting for parts at a thrift shop and spotted a little 23-key music keyboard. It looked like a fun Raspberry Pi project.

I was hoping it would turn out to use some common protocol like I2C, but when I dissected it, it turned out there was a ribbon cable with 32 wires coming from the keyboard. So each key is a separate pushbutton.

[23-key keyboard wired to a Raspberry Pi] A Raspberry Pi doesn't have that many GPIO pins, and neither does an Arduino Uno. An Arduino Mega does, but buying a Mega to go between the Pi and the keyboard kind of misses the point of scavenging a $3 keyboard; I might as well just buy an I2C or MIDI keyboard. So I needed some sort of I/O multiplexer that would let me read 31 keys using a lot fewer pins.

There are a bunch of different approaches to multiplexing. A lot of keyboards use a matrix approach, but that makes more sense when you're wiring up all the buttons from scratch, not starting with a pre-wired keyboard like this. The two approaches I'll discuss here are shift registers and multiplexer chips.

If you just want to get the job done in the most efficient way, you definitely want a multiplexer (port expander) chip, which I'll cover in Part 2. But for now, let's look at the old-school way: shift registers.

PISO Shift Registers

There are lots of types of shift registers, but for reading lots of inputs, you need a PISO shift register: "Parallel In, Serial Out." That means you can tell the chip to read some number -- typically 8 -- of inputs in parallel, then switch into serial mode and read all the bits one at a time.

Some PISO shift registers can cascade: you can connect a second shift register to the first one and read twice as many bits. For 23 keys I needed three 8-bit shift registers.

Two popular cascading PISO shift registers are the CD4021 and the SN74LS165. They work similarly but they're not exactly the same.

The basic principle with both the CD4021 and the SN74LS165: connect power and ground, and wire up all your inputs to the eight data pins. You'll need pullup or pulldown resistors on each input line, just like you normally would for a pushbutton; I recommend picking up a few high-value (like 1-10k) resistor arrays: you can get these in SIP (single inline package) or DIP (dual-) form factors that plug easily into a breadboard. Resistor arrays can be either independent (two pins for each resistor in the array) or bussed (one pin on the chip is a common pin, which you wire to ground for a pulldown or V+ for a pullup; each of the rest of the pins is a resistor). I find bussed networks particularly handy because they can reduce the number of wires you need to run, and with a job where you're multiplexing lots of lines, you'll find that getting the wiring straight is a big part of the job. (See the photo above to see what a snarl this was even with resistor networks.)

For the CD4021, connect three more pins: clock and data pins (labeled CLK and either Q7 or Q8 on the chip's pinout, pins 10 and 3), plus a "latch" pin (labeled M, pin 9). For the SN74LS165, you need one more pin: you need clock and data (labeled CP and Q7, pins 2 and 9), latch (labeled PL, pin 1), and clock enable (labeled CE, pin 15).

At least for the CD4021, some people recommend a 0.1 uF bypass capacitor across the power/ground connections of each CD4021.

If you need to cascade several chips with the CD4021, wire DS (pin 11) from the first chip to Q7 (pin 3), then wire both chips' clock lines together and both chips' data lines together. The SN74LS165 is the same: DS (pin 10) to Q8 (pin 9), and tie the clock and data lines together.

Once wired up, you toggle the latch to read the parallel data, then toggle it again and use the clock pin to read the series of bits. You can see the specific details in my Python scripts: CD4021.py on GitHub and SN74LS165.py on GitHub.
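
The gist of it, as a minimal sketch (not the exact scripts above; LATCH, CLOCK and DATA are hypothetical BCM pin numbers, so use whatever GPIOs you wired):

import time
import RPi.GPIO as GPIO

LATCH, CLOCK, DATA = 4, 17, 27

GPIO.setmode(GPIO.BCM)
GPIO.setup(LATCH, GPIO.OUT)
GPIO.setup(CLOCK, GPIO.OUT)
GPIO.setup(DATA, GPIO.IN)

def read_byte(delay=0.000025):
    # Latch high: grab a parallel snapshot of the eight inputs.
    GPIO.output(LATCH, GPIO.HIGH)
    time.sleep(delay)
    # Latch low: switch to serial mode; the first bit is already on DATA.
    GPIO.output(LATCH, GPIO.LOW)
    value = 0
    for _ in range(8):
        value = (value << 1) | GPIO.input(DATA)
        GPIO.output(CLOCK, GPIO.HIGH)   # clock pulse shifts out the next bit
        time.sleep(delay)
        GPIO.output(CLOCK, GPIO.LOW)
        time.sleep(delay)
    return value

print(format(read_byte(), '08b'))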

Some References

For wiring diagrams, more background, and Arduino code for the CD4021, read Arduino ShiftIn. For the SN74LS165, read: Arduino: SN74HC165N, 74HC165 8 bit Parallel in/Serial out Shift Register, or Sparkfun: Shift Registers.

Of course, you can use a shift register for output as well as input. In that case you need a SIPO (Serial In, Parallel Out) shift register like a 74HC595. See Arduino ShiftOut: Serial to Parallel Shifting-Out with a 74HC595, or Interfacing 74HC595 Serial Shift Register with Raspberry Pi. Another, less common option is the 74HC164N: Using a SN74HC164N Shift Register With Raspberry Pi.

For input from my keyboard, initially I used three CD4021s. It basically worked, and you can see the code for it at keyboard.py (older version, for CD4021 shift registers), on GitHub.

But it turned out that looping over all those bits was slow -- I've been advised that you should wait at least 25 microseconds between bits for the CD4021, and even at 10 microseconds I found there was a significant delay between hitting the key and hearing the note. I thought it might be all the fancy numpy code to generate waveforms for the chords, but when I used the Python profiler, it said most of the program's time was taken up in time.sleep(). Fortunately, there's a faster solution than shift registers: port expanders, which I'll talk about in Multiplexing Part 2: Port Expanders.

fwupd now tells you about known issues

After a week of being sick and not doing much, I’m showing the results of a day-or-so of hacking:

So, most of that should be familiar to anyone who’s followed my previous blog posts. But wait, what’s that about a known issue?

That one little URL for the user to click on is the result of a rule engine being added to the LVFS. Of course, firmware updates shouldn’t ever fail, but in the real world they do, because distros don’t create /boot/efi correctly (cough, Arch Linux) or just because some people are running old versions of efivar, a broken git snapshot of libfwupdate or because a vendor firmware updater doesn’t work with secure boot turned on (urgh). Of all the failures logged on the LVFS, 95% fall into about 3 or 4 different failure causes, and if we know hundreds of people are hitting an issue we already understand we can provide them with some help.

So, how does this work? If you’re a user you don’t see any of this, you just download the metadata and firmware semi-automatically and get on with your life. If you’re a blessed hardware vendor on the LVFS (i.e. you can QA the updates into the stable branch) you can also create and view the rules for firmware owned by just your vendor group:

This new functionality will be deployed to the LVFS during the next downtime window. Comments welcome.

February 12, 2018

Shelved Wallpapers

GNOME 3.28 will release with another batch of new wallpapers that only a fraction of you will ever see. Apart from those I also made a few for different purposes that didn’t end up being used, but it would be a shame to keep them shelved.

So here’s a bit of isometric goodness I quite enjoy on my desktop; maybe you will too.

Announcing Blender 2.8 Code Quest

Today the Blender Foundation announced the “Blender 2.8 Code Quest”, a crowd-funded event to gather the core developers to release the first version of the much anticipated Blender 2.8. It will be the first time that such a large development team will be working in one place. The crowd-funded goal is to get at least 10 contributors together for a period of 3 months, in the Blender Institute, Amsterdam, starting in April 2018.

For the Blender 2.8 project – with focus on workflow and an advanced new viewport system – several complex architectural changes are being added. The current team of core developers working on Blender 2.8 is distributed over multiple continents. Collaborating online from different places, in different timezones, using chat and emails, is not always efficient and is slowing down the process.

To prevent the 2.8 project from dragging on for another year – and losing the exciting momentum – the Blender Foundation and the Blender Institute will combine their efforts to invite the core 2.8 developers to work in Amsterdam for 3 months. They will be working at the Blender Institute along with a group of artists who are using Blender 2.8 to produce a short film. This ensures we can tackle issues with enough attention, including time to provide feedback on UI design and usability. At least one of the permanent seats in the team is reserved for a designer to perform these tasks full-time.

The crowd-funding campaign is hosted on blender.org. The goal is to reach at least 1000 contributors who will pledge $39 and in return they will receive a limited edition Blender Rocket USB drive.

Confirmed participants are:
Ton Roosendaal, Sergey Sharybin, Dalai Felinto, Campbell Barton, Brecht van Lommel, Bastien Montagne, Clement Foucault, Joshua Leung, Sybren Stüvel and Pablo Vazquez. More people to be announced.

Read more about the Code Quest: https://www.blender.org/2-8/quest/

February 11, 2018

Razer doesn’t care about Linux

tl;dr: Don’t buy hardware from Razer and expect firmware updates to fix security problems on Linux.

Razer is a vendor that makes high-end gaming hardware, including laptops, keyboards and mice. I opened a ticket with Razer a few days ago asking them if they wanted to support the LVFS project by uploading firmware and sharing the firmware update protocol used. I offered to upstream any example code they could share under a free license, or to write the code from scratch given enough specifications to do so. This is something I’ve done for other vendors, and doesn’t take long as most vendor firmware updaters all do the same kind of thing; there are only so many ways to send a few kb of data to USB devices. The fwupd project provides high-level code for accessing USB devices, so yet-another-update-protocol is no big deal. I explained all about the LVFS, and the benefits it provided to a userbase that is normally happy to vote using their wallet to get hardware that’s supported on the OS of their choice.

I just received this note on the ticket, which was escalated appropriately:

I have discussed your offer with the dedicated team and we are thankful for your enthusiasm and for your good idea.
I am afraid I have also to let you know that at this moment in time our support for software is only focused on Windows and Mac.

The CEO of Razer Min-Liang Tan said recently “We’re inviting all Linux enthusiasts to weigh in at the new Linux Corner on Insider to post feedback, suggestions and ideas on how we can make it the best notebook in the world that supports Linux.” If this is true, and more than just a sound-bite, supporting the LVFS for firmware updates on the Razer Blade to solve security problems like Meltdown and Spectre ought to be a priority?

Certainly if peripheral updates or system firmware UpdateCapsule are not supportable on Linux, it would be good to correct widely read articles like those, as they make it sound like Razer is interested in Linux users, when the reality seems somewhat less optimistic. I’ve updated the vendor list with this information to avoid other people asking or filing tickets. Disappointing, but I’ll hopefully have some happier news soon about a different vendor.

February 05, 2018

security things in Linux v4.15

Previously: v4.14.

Linux kernel v4.15 was released last week, and there’s a bunch of security things I think are interesting:

Kernel Page Table Isolation
PTI has already gotten plenty of reporting, but to summarize, it is mainly to protect against CPU cache timing side-channel attacks that can expose kernel memory contents to userspace (CVE-2017-5754, the speculative execution “rogue data cache load” or “Meltdown” flaw).

Even for just x86_64 (as CONFIG_PAGE_TABLE_ISOLATION), this was a giant amount of work, and tons of people helped with it over several months. PowerPC also had mitigations land, and arm64 (as CONFIG_UNMAP_KERNEL_AT_EL0) will have PTI in v4.16 (though only the Cortex-A75 is vulnerable). For anyone with really old hardware, x86_32 is under development, too.

An additional benefit of the x86_64 PTI is that since there are now two copies of the page tables, the kernel-mode copy of the userspace mappings can be marked entirely non-executable, which means pre-SMEP hardware now gains SMEP emulation. Kernel exploits that try to jump into userspace memory to continue running malicious code are dead (even if the attacker manages to turn SMEP off first). With some more work, SMAP emulation could also be introduced (to stop even just reading malicious userspace memory), which would close the door on these common attack vectors. It’s worth noting that arm64 has had the equivalent (PAN emulation) since v4.10.

retpoline
In addition to the PTI work above, the retpoline kernel mitigations for CVE-2017-5715 (“branch target injection” or “Spectre variant 2”) started landing. (Note that to gain full retpoline support, you’ll need a patched compiler, as appearing in gcc 7.3/8+, and currently queued for release in clang.)

This work continues to evolve, and clean-ups are continuing into v4.16. Also in v4.16 we’ll start to see mitigations for the other speculative execution variant (i.e. CVE-2017-5753, “bounds check bypass” or “Spectre variant 1”).

x86 fast refcount_t overflow protection
In v4.13 the CONFIG_REFCOUNT_FULL code was added to stop many types of reference counting flaws (with a tiny performance loss). In v4.14 the infrastructure for a fast overflow-only refcount_t protection on x86 (based on grsecurity’s PAX_REFCOUNT) landed, but it was disabled at the last minute due to a bug that was finally fixed in v4.15. Since it was a tiny change, the fast refcount_t protection was backported and enabled for the Longterm maintenance kernel in v4.14.5. Conversions from atomic_t to refcount_t have also continued, and are now above 168, with a handful remaining.

%p hashing
One of the many sources of kernel information exposures has been the use of the %p format string specifier. The strings end up in all kinds of places (dmesg, /sys files, /proc files, etc), and usage is scattered throughout the kernel, which had made it a very hard exposure to fix. Earlier efforts like kptr_restrict‘s %pK didn’t really work since it was opt-in. While a few recent attempts (by William C Roberts, Greg KH, and others) had been made to provide toggles for %p to act like %pK, Linus finally stepped in and declared that %p should be used so rarely that it shouldn’t be used at all, and Tobin Harding took on the task of finding the right path forward, which resulted in %p output getting hashed with a per-boot secret. The result is that simple debugging continues to work (two reports of the same hash value can confirm the same address without saying what the address actually is) but frustrates an attacker’s ability to use such information exposures as building blocks for exploits.

For developers needing an unhashed %p, %px was introduced but, as Linus cautioned, either your %p remains useful when hashed, your %p was never actually useful to begin with and should be removed, or you need to strongly justify using %px with sane permissions.

It remains to be seen if we’ve just kicked the information exposure can down the road and in 5 years we’ll be fighting with %px and %lx, but hopefully the attitudes about such exposures will have changed enough to better guide developers and their code.

struct timer_list refactoring
The kernel’s timer (struct timer_list) infrastructure is, unsurprisingly, used to create callbacks that execute after a certain amount of time. They are one of the more fundamental pieces of the kernel, and as such have existed for a very long time, with over 1000 call sites. Improvements to the API have been made over time, but old ways of doing things have stuck around. Modern callbacks in the kernel take an argument pointing to the structure associated with the callback, so that a callback has context for which instance of the callback has been triggered. The timer callbacks didn’t, and took an unsigned long that was cast back to whatever arbitrary context the code setting up the timer wanted to associate with the callback, and this variable was stored in struct timer_list along with the function pointer for the callback. This creates an opportunity for an attacker looking to exploit a memory corruption vulnerability (e.g. heap overflow), where they’re able to overwrite not only the function pointer, but also the argument, as stored in memory. This elevates the attack into a weak ROP, and has been used as the basis for disabling SMEP in modern exploits (see retire_blk_timer). To remove this weakness in the kernel’s design, I refactored the timer callback API and all its callers, for a whopping:

1128 files changed, 4834 insertions(+), 5926 deletions(-)

Another benefit of the refactoring is that once the kernel starts getting built by compilers with Control Flow Integrity support, timer callbacks won’t be lumped together with all the other functions that take a single unsigned long argument. (In other words, some CFI implementations wouldn’t have caught the kind of attack described above since the attacker’s target function still matched its original prototype.)

That’s it for now; please let me know if I missed anything. The v4.16 merge window is now open!

© 2018, Kees Cook. This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 License.

Interview with Owly Owlet

 

Could you tell us something about yourself?

Hello. I’m Maria, more often I use my nickname: Owly Owlet. I have a youtube channel, where I make video tutorials (in Russian) about how to use art software, mostly Krita.

Do you paint professionally, as a hobby artist, or both?

Art is my hobby, but I wish I could become a professional artist someday. For now there is much to be learned.

What genre(s) do you work in?

My art is usually more on the cartoony side. I like fantasy worlds and fairy tales, with medieval clothes, castles and magical creatures.

Whose work inspires you most — who are your role models as an artist?

There are so many incredible artists whose art makes me want to learn and practice drawing more and more; it’s immensely hard to pick just a single one. But right now I’m really fond of Andreas Rocha’s work. I also love the art style of David Revoy.

How and when did you get to try digital painting for the first time?

I’ve been drawing digitally since March 2017, so almost a year now. My husband gave me a tablet as a present for my birthday. Before that I drew with vector tools and a mouse a bit.

What makes you choose digital over traditional painting?

The freedom you have with digital art. With traditional painting you have to learn not only the basics of how to draw: perspective, light, color… You also have to learn how to work with different tools: pencils, markers, watercolor, acrylic paint, oil paint and so on. And it’s harder to fix your mistakes when you are just learning. When you draw digitally, you have the magic of “Ctrl+Z” and layers. And besides that you can change the color scheme or mirror your image, which makes it way easier to identify and fix your proportion and color mistakes. You just have to find the software you are comfortable painting with and you are good to go.

How did you find out about Krita?

It was Age of Asparagus’ “Krita meets Bob Ross” tutorials. They are awesome and really helped me to learn how to use Krita and not to be afraid of it. (https://www.youtube.com/channel/UCkKFLSJjYtKNdFy3P7Q-CAA)

What was your first impression?

Before Krita I used FireAlpaca a bit, which is fine software too, especially for a beginner. So, switching to Krita after FireAlpaca was a bit scary, you know that “so many buttons” kind of thing.

What do you love about Krita?

My favourite is the Assistant Tool, vanishing point and perspective. And I also love the dynamic brush and the quality of mixing brushes.

What do you think needs improvement in Krita? Is there anything that really annoys you?

It would be nice if Krita had not only the Brightness/Contrast curve, but also contrast sliders, like those in Photoshop.

What sets Krita apart from the other tools that you use?

It has so many cool features and tools, so flexible when it comes to brush settings and working with color, and yet it’s free. Isn’t that amazing?

If you had to pick one favourite of all your work done in Krita so far, what would it be, and why?


I think this was the first one decent enough. When I drew it, I thought that I was finally getting somewhere.

What techniques and brushes did you use in it?

Most of the time for cartoony characters I use basic Krita brushes, just edit settings a bit. Airbrush_pressure for sketching and shading, Fill_circle for colouring, ink_ballpen for lineart, Basic_wet_soft for blending.

Where can people see more of your work?

My drawings on Deviantart: https://owlyowlet.deviantart.com/
My Krita tutorials (in Russian): https://www.youtube.com/c/СтудияСовятня

Anything else you’d like to share?

I wanted to say that I admire people who created Krita, who work on it, the developers, artists, translators, test engineers and other good people who make it possible for everyone to learn digital art with Krita. Thank you for all your hard work.

February 04, 2018

FreeCAD Arch development news - January 2018

Hi everybody, Sorry about the late news again, but it would have been a pity not to include fresh feedback from FOSDEM, which happened this weekend. So here are the main topics from the last month. Once again, thanks a million to everybody who helps the effort by contributing to my Patreon...

February 02, 2018

Raspberry Pi Console over USB: Configuring an Ethernet Gadget

When I work with a Raspberry Pi from anywhere other than home, I want to make sure I can do what I need to do without a network.

With a Pi model B, you can use an ethernet cable. But that doesn't work with a Pi Zero, at least not without an adapter. The lowest common denominator is a serial cable, and I always recommend that people working with headless Pis get one of these; but there are a lot of things that are difficult or impossible over a serial cable, like file transfer, X forwarding, and running any sort of browser or other network-aware application on the Pi.

Recently I learned how to configure a Pi Zero as a USB ethernet gadget, which lets you network between the Pi and your laptop using only a USB cable. It requires a bit of setup, but it's definitely worth it. (This apparently only works with Zero and Zero W, not with a Pi 3.)

The Cable

The first step is getting the cable. For a Pi Zero or Zero W, you can use a standard micro-USB cable: you probably have a bunch of them for charging phones (if you're not an Apple person) and other devices.

Set up the Pi

Setting up the Raspberry Pi end requires editing two files in /boot, which you can do either on the Pi itself, or by mounting the first SD card partition on another machine.

In /boot/config.txt add this at the end:

dtoverlay=dwc2

In /boot/cmdline.txt, at the end of the long list of options but on the same line, add a space, followed by: modules-load=dwc2,g_ether
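
For example, the tail end of that (single) line might then look like this (the options before it will differ on your system):

... fsck.repair=yes rootwait modules-load=dwc2,g_ether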

Set a static IP address

This step is optional. In theory you're supposed to use some kind of .local address provided by Bonjour (the Apple protocol that used to be called Zeroconf, and before that Rendezvous; on Linux machines it's implemented by Avahi). That doesn't work on my Linux machine. If you don't use Bonjour, finding the Pi over the ethernet link will be much easier if you set it up to use a static IP address. And since there will be nobody else on your USB network besides the Pi and the computer on the other end of the cable, there's no reason not to have a static address: you're not going to collide with anybody else.

You could configure a static IP in /etc/network/interfaces, but that interferes with the way Raspbian handles wi-fi via wpa_supplicant and dhcpcd; so you'd have USB networking but your wi-fi won't work any more.

Instead, configure your address in Raspbian via dhcpcd. Edit /etc/dhcpcd.conf and add this:

interface usb0
static ip_address=192.168.7.2
static routers=192.168.7.1
static domain_name_servers=192.168.7.1

This will tell Raspbian to use address 192.168.7.2 for its USB interface. You'll set up your other computer to use 192.168.7.1.

Now your Pi should be ready to boot with USB networking enabled. Plug in a USB cable (if it's a model A or B) or a micro USB cable (if it's a Zero), plug the other end into your computer, then power up the Pi.

Setting up a Linux machine for USB networking

The final step is to configure your local computer's USB ethernet to use 192.168.7.1.

On Linux, find the name of the USB ethernet interface. This will only show up after you've booted the Pi with the ethernet cable plugged in to both machines.

ip a
The USB interface will probably start with en and will probably be the last interface shown.

On my Debian machine, the USB network showed up as enp0s26u1u1. So I can configure it thusly (as root, of course):

ip a add 192.168.7.1/24 dev enp0s26u1u1
ip link set dev enp0s26u1u1 up
(You can also use the older ifconfig rather than ip: sudo ifconfig enp0s26u1u1 192.168.7.1 up)

You should now be able to ssh into your Raspberry Pi using the address 192.168.7.2, and you can make an appropriate entry in /etc/hosts, if you wish.
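
For example, an /etc/hosts entry like this (the name pizero is arbitrary) lets you type ssh pi@pizero instead of remembering the address:

192.168.7.2    pizero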

For a less hands-on solution, if you're using Mac or Windows, try Adafruit's USB gadget tutorial. It's possible that might also work for Linux machines running Avahi. If you're using Windows, you might prefer CircuitBasics' ethernet gadget tutorial.

Happy networking!

February 01, 2018

Firmware Telemetry for Vendors

We’ve shipped nearly 1.2 MILLION firmware updates out to Linux users since we started the LVFS project.

I found out this nugget of information using a new LVFS vendor feature, soon to be deployed: Telemetry. This builds on the previously discussed success/failure reporting and adds a single page for the vendor to get statistics about each bit of hardware. Until more people are running the latest fwupd and volunteering to share their update history it’s of limited use, but it’s interesting nonetheless.

No new batches of ColorHug2

I was informed by AMS (the manufacturer that makes the XYZ sensor that’s the core of the CH2 device) that the AS73210 (aka MTCSiCF) and the MTI08D are end of life products. The replacement for the sensor the vendor offers is the AS73211, which of course is more expensive and electrically incompatible with the AS73210.

The somewhat-related new AS7261 sensor does look interesting, as it somewhat crosses the void between a colorimeter and something that can take non-emissive readings, but it’s a completely different sensor to the one in the ColorHug2, and mechanically different from the now-abandoned ColorHug+. I’m also feeling twice burned buying specialist components from single-source suppliers.

Being parents to a 16-week-old baby doesn’t put Ania and me in a position where I can go through the various phases of testing (prototypes, test batch, production batch and so on) for a device refresh like we did with the ColorHug -> ColorHug2. I’m hoping I can get a chance to play with some more kinds of sensors from different vendors, although that’s not going to happen before I start getting my free time back. At the moment I have about 50 fully completed ColorHug2 devices in boxes ready to be sold.

In the true spirit of OpenHardware and free enterprise, if anyone does want to help with the design of a new ColorHug device I’m open to ideas. ColorHug was really just a hobby that got out of control, and I’d love for someone else to have the thrill and excitement of building a nano-company from scratch. Taking me out of the equation completely, I’d be equally happy referring people who want to buy a ColorHug upgrade or replacement to a different project, if the new product met with my approval :)

So, 50 ColorHugs should last about 3 months before stock runs out, but I know a few people are using devices on production lines and other sorts of industrial control — if that sounds familiar, and you’d like to buy a spare device, now is the time to do so. Of course, I’ll continue supporting all the existing 3162 devices well into the future. I hope to be back building OpenHardware soon, and hopefully with a new and improved ColorHug3.

January 30, 2018

Another Morevna fundraiser!

Nikolai Mamashev, of Pepper & Carrot animation fame, and his team are working on a new episode of the Morevna open-source animation series. As in the previous project, they’re using Krita for creating and processing the artwork and Blender and other open-source tools for animation. Everything will be published under the Creative Commons Attribution-ShareAlike license.

Nikolai and his team are running a crowdfunding campaign to make this happen. Among the rewards:

– a training video course so you, too, can master the techniques used to create this animation

– your own image animated! Check out this one of Kiki from the previous campaign:

And many other exclusive rewards! By claiming any of them, you are helping Nikolai and the Morevna team to bring to life one more animation project powered by open-source technologies. The success of this project depends on your support!

The series is a futuristic adaptation of the traditional Russian fairy tale “Marya Morevna” (also known as “The Death of Koschei the Deathless”). The action takes place in the distant future, where horses are replaced by bikes and cars, the main protagonist Ivan Tsarevich turns into a talented mechanic, Marya Morevna is a biker queen with a samurai sword, and Koschei is more immortal and evil than ever – as he is a battle robot now!


Support the new episode of Morevna

January 26, 2018

Go: debugging multiple response.WriteHeader calls

Say you’re building a HTTP service in Go and suddenly it starts giving you these:

http: multiple response.WriteHeader calls

Horrible when that happens, right?

It’s not always very easy to figure out why you get them and where they come from. Here’s a hack to help you trace them back to their origin:

// Imports needed by this snippet: strings, os and runtime/debug for the
// logger, plus log and net/http for wiring it up below.
import (
	"log"
	"net/http"
	"os"
	"runtime/debug"
	"strings"
)

type debugLogger struct{}

func (d debugLogger) Write(p []byte) (n int, err error) {
	s := string(p)
	if strings.Contains(s, "multiple response.WriteHeader") {
		debug.PrintStack()
	}
	return os.Stderr.Write(p)
}

// Now use the logger with your http.Server:
logger := log.New(debugLogger{}, "", 0)

server := &http.Server{
    Addr:     ":3001",
    Handler:  s, // s is your existing http.Handler
    ErrorLog: logger,
}
log.Fatal(server.ListenAndServe())

This will output a nice stack trace whenever it happens. Happy hacking!
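
For context, here's a hypothetical handler that reproduces the warning (http.Error calls WriteHeader internally, so the second status write triggers the log message):

func buggyHandler(w http.ResponseWriter, r *http.Request) {
	w.WriteHeader(http.StatusOK) // status line and headers are sent here
	// ... an error is only noticed afterwards ...
	// http.Error calls WriteHeader again: "multiple response.WriteHeader calls"
	http.Error(w, "something went wrong", http.StatusInternalServerError)
}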



January 25, 2018

Tricks for Installing a Laser Printer on Linux in CUPS

(Wherein I rant about how bad CUPS has become.)

I had to set up two new printers recently. CUPS hasn't gotten any better since the last time I bought a printer, maybe five years ago; in fact, it's gotten quite a bit worse. I'm amazed at how difficult it was to add these fairly standard laser printers, both of which I'd researched beforehand to make sure they worked with Linux.

It took me about three hours for the first printer. The second one, a few weeks later, "only" took about 45 minutes ... at which point I realized I'd better write everything down so it'll be faster if I need to do it again, or if I get the silly notion that I might want to print from another computer, like my laptop.

I used the CUPS web interface; I didn't try any of the command-line tools.

Figure out the connection type

In the CUPS web interface, after you log in and click on Administration, whether you click on Find New Printers or Add Printer, you're faced with a bunch of identical options with no clue how to choose between them. For example, Find New Printers with a Dell E310dw connected shows:

Available Printers
  • [Add This Printer] Virtual Braille BRF Printer (CUPS-BRF)
  • [Add This Printer] Dell Printer E310dw (Dell Printer E310dw)
  • [Add This Printer] Dell Printer E310dw (Dell Printer E310dw)
  • [Add This Printer] Dell Printer E310dw (Dell Printer E310dw (driverless))

What is a normal human supposed to do with this? What's the difference between the three E310dw entries, and which one am I supposed to choose? (Skipping ahead: None of them.) And why is it finding a virtual Braille BRF Printer?

The only way to find out the difference is to choose one, click on Next and look carefully at the URL. For the three E310dw options above, that gives:

  • dnssd://Dell%20Printer%20E310dw._ipp._tcp.local/?uuid=[long uuid here]
  • lpd://DELL316BAA/BINARY_P1
  • ipp://DELL316BAA.local:631/ipp/print

Again skipping ahead: none of those are actually right. Go ahead, try all three of them and see. You'll get error messages about empty PPD files. But while you're trying them, write down, for each one, the URL listed as Connection (something like the dnssd:, lpd: or ipp: URLs listed above); and note, in the driver list after you click on your manufacturer, how many entries there are for your printer model, and where they show up in the list. You'll need that information later.

Download some drivers

Muttering about the idiocy of all this -- why ship empty drivers that won't install? Why not just omit drivers if they're not available? Why use the exact same name for three different printer entries and four different driver entries? -- the next step is to download and install the manufacturer's drivers. If you're on anything but Redhat, you'll probably either need to download an RPM and unpack it, or else google for the hidden .deb files that exist on both Dell's and Brother's websites that their sites won't actually find for you.

It might seem like you could just grab the PPD from inside those RPM files and put it wherever CUPS is finding empty ones, but I never got that to work. Much as I dislike installing proprietary .deb files, for both printers that was the only method I found that worked. Both Dell and Brother have two different packages to install. Why two and what's the difference? I don't know.

Once you've installed the printer driver packages, you can go back to the CUPS Add Printer screen. Which hasn't gotten any clearer than before. But for both the Brother and the Dell, ipp: is the only printer protocol that worked. So try each entry until you find the one that starts with ipp:.

Set up an IP address and the correct URL

But wait, you're not done. Because CUPS gives you a URL like ipp://DELL316BAA.local:631/ipp/print, and whatever that .local thing is, it doesn't work. You'll be able to install the printer, but when you try to print to it it fails with "unable to locate printer".

(.local apparently has something to do with assuming you're running a daemon that does "Bonjour", the latest name for the Apple service discovery protocol that was originally called Rendezvous, then renamed to Zeroconf, then to Bonjour. On Linux it's called Avahi, but even with an Avahi daemon this .local thing didn't work for me. At least it made me realize that I had the useless Avahi daemon running, so now I can remove it.)

So go back to Add Printer and click on Internet Printing Protocol (ipp) under Other network printers and click Continue. That takes you to a screen that suggests that you want URLs like:

http://hostname:631/ipp/
http://hostname:631/ipp/port1

ipp://hostname/ipp/
ipp://hostname/ipp/port1

lpd://hostname/queue

socket://hostname
socket://hostname:9100

None of these is actually right. What these printers want -- at least, what both the Brother and the Dell wanted -- was ipp://printerhostname:631/ipp/print

printerhostname? Oh, did I forget to mention static IP? I definitely recommend that you make a static IP for your printer, or at least add it to your router's DHCP list so it always gets the same address. Then you can make an entry in /etc/hosts for printerhostname. I guess that .local thing was supposed to compensate for an address that changes all the time, which might be a nifty idea if it worked, but since it doesn't, make a static IP and use it in your ipp: URL.
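
For example, with a hypothetical static address of 192.168.1.50 and a hostname of laserjet, the /etc/hosts entry and the URL to give CUPS would be:

# /etc/hosts
192.168.1.50    laserjet

# Connection URL for CUPS:
ipp://laserjet:631/ipp/print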

Choose a driver

Now, finally! you can move on to choosing a driver. After you pick the manufacturer, you'll be presented with a list that probably includes at least three entries for your printer model. Here's where it helps if you paid attention to how the list looked before you installed the manufacturer's drivers: if there's a new entry for your printer that wasn't there before, that's the non-empty one you want. If there are two or more new entries for your printer that weren't there before, as there were for the Dell ... shrug, all you can do is pick one and hope.

Of course, once you manage to get through configuration to "Printer successfully added", you should immediately run Maintenance->Print Test Page. You may have to power cycle the printer first since it has probably gone to sleep while you were fighting with CUPS.

All this took me maybe three hours the first time, but it only took me about 45 minutes the second time. Hopefully now that I've written this, it'll be much faster next time. At least if I don't succumb to the siren song of thinking a fairly standard laser printer ought to have a driver that's already in CUPS, like they did a decade ago, instead of always needing a download from the manufacturer.

If laser printers are this hard I don't even want to think about what it's like to install a photo printer on Linux these days.

January 23, 2018

GCab and CVE-2018-5345

tl;dr: Update GCab from your distributor.

Longer version: Just before Christmas I found a likely exploitable bug in the libgcab library. Various security teams have been busy with slightly more important issues, and so it’s taken a lot longer than usual to be verified and assigned a CVE. The issue I found was that libgcab attempted to read a large chunk into a small buffer, overwriting lots of interesting things past the end of the buffer. ASLR and SELinux save us in nearly all cases, so it’s not the end of the world. Almost a textbook C buffer overflow (rust, yada, whatever) so it was easy to fix.

Some key points:

  • This only affects libgcab, not cabarchive or libarchive
  • All gcab versions less than 0.8 are affected
  • Anything that links to gcab is affected, so gnome-software, appstream-glib and fwupd at least
  • Once you install the fixed gcab you need to restart anything that’s using it, e.g. fwupd
  • There is no silly branded name for this bug
  • The GCab project is incredibly well written, and I’ve been hugely impressed with the code quality
  • You can test if your GCab has been fixed by attempting to decompress this file, if the program crashes, you need to update

With Marc-André’s blessing, I’ve released version v0.8 of gcab with this fix. I’ve also released v1.0 which has this fix (and many more nice API additions) which also switches the build system to Meson and cleans up a lot of leaks using g_autoptr(). If you’re choosing a version to update to, the answer is probably 1.0 unless you’re building for something more sedate like RHEL 5 or 6. You can get the Fedora 27 packages here or they’ll be on the mirrors tomorrow.

January 22, 2018

darktable 2.4.1 released

we’re proud to announce the first bugfix release for the 2.4 series of darktable, 2.4.1!

the github release is here: https://github.com/darktable-org/darktable/releases/tag/release-2.4.1.

as always, please don’t use the autogenerated tarball provided by github, but only our tar.xz. the checksums are:

$ sha256sum darktable-2.4.1.tar.xz
6254c63f9b50894b3fbf431d98c0fe8ec481957ab91f9af76e33cc1201c29704 darktable-2.4.1.tar.xz
$ sha256sum darktable-2.4.1.dmg
75077f17332a6fda144125ab0f1d3dd219c214bf7602b0b252208f1ec665d031 darktable-2.4.1.dmg
$ sha256sum darktable-2.4.1-win64.exe
0be1e0dd8dec61a7cea41598c52db258edaee8783c543b4311fa0ac56ab43d2a darktable-2.4.1-win64.exe
$ sha256sum darktable-2.4.1-win64.zip
560d82e4c87c002f0284daca922023df136c822713e3670ba42358c9427fe26c darktable-2.4.1-win64.zip

when updating from the currently stable 2.2.x series, please bear in mind that your edits will be preserved during this process, but it will not be possible to downgrade from 2.4 to 2.2.x any more.

Important note: to make sure that darktable can keep on supporting the raw file format for your camera, please read this post on how/what raw samples you can contribute to ensure that we have the full raw sample set for your camera under CC0 license!

and the changelog as compared to 2.4.0 can be found below.

New Features

  • Allow selecting the GUI language in the preferences
  • Add a filter rule to the collect module to find locally copied images
  • Add favourite toggle to darkroom modules’ right click popup
  • Allow blending/masking in the hot pixels module
  • Add keyboard shortcuts to zoom and pan an image in darkroom. Panning uses the arrow keys, zooming defaults to ctrl- and ctrl+. Use alt and ctrl to change the step size of panning.
  • Some minor speedups in the grain module
  • Handling stdout on Windows: do not redirect stdout for simple command line arguments (--help and --version)
  • On Windows, show the location of the log file in the help message
  • Enable searching in the more modules list – click into the list to give focus to it, then start typing. The default GTK shortcut ctrl-f doesn’t work as it’s used for filmstrip already
  • Add a debug print when compiling OpenCL kernels

Bugfixes

  • Use the configured overwrite color profile when exporting from Lua – this broke GIMP integration
  • Support presets with < in their name
  • Fix export to non-existing path with \ as the path separator on Windows
  • Don’t insist on the db being locked when it doesn’t even exist
  • Don’t touch the mix slider when resetting the curve in color zones
  • Fix a bug in the exposure module that would only allow corrections of up to 10 stops
  • Fix custom shortcuts with shift modifier
  • Properly ellipsize text in the recently used collections list
  • Fix exported galleries with filenames containing a '
  • Fix finding mipmaps cache folder in purge_from_cache.sh script
  • Fix a crash in the recently used collections list due to a broken config file
  • Set the sqlite threading mode to Serialized
  • Fix old export presets using OpenEXR
  • Fix building with clang on Windows

Changed Dependencies

  • iso-codes version 3.66 or newer is suggested for a nicer list of translations in the preferences.
  • The Windows installer comes with an updated libexiv2 so TIFF exports should be much faster now

Camera support, compared to 2.4.0

Warning: support for Nikon NEF ‘lossy after split’ raws was unintentionally broken due to the lack of such samples. Please see this post for more details. If you have affected raws, please contribute samples!

Base Support

  • Panasonic DC-G9 (4:3)
  • Paralenz Dive Camera (chdk)
  • Pentax KP
  • Sjcam SJ6 LEGEND (chdk-b, chdk-c)

White Balance Presets

  • Leaf Credo 40
  • Nikon D3400
  • Olympus E-M1MarkII
  • Panasonic DC-G9
  • Sony ILCE-7RM3

Noise Profiles

  • Canon EOS 750D
  • Canon EOS Kiss X8i
  • Canon EOS Rebel T6i
  • Canon EOS 77D
  • Canon EOS 9000D
  • Canon EOS M100
  • Canon EOS M6
  • Sony DSC-RX100M4
  • YI TECHNOLOGY M1

Translations

  • Czech
  • Dutch
  • French
  • German
  • Hebrew
  • Hungarian
  • Italian
  • Slovenian

Interview with Baukje Jagersma

Source of infinity

Could you tell us something about yourself?

Hey! My name is Baukje Jagersma, I’m 22 years old and live in the Netherlands. I studied game design and recently started doing freelance work, to try and make a living out of something I enjoy doing!

Do you paint professionally, as a hobby artist, or both?

Both: I’ve always enjoyed creating my own stories and worlds with drawing and recently started doing freelance work as well.

What genre(s) do you work in?

Most if not all of my work has something to do with fantasy. To me that’s the best part of drawing: you can create things that don’t exist and make them look believable. Besides that I mostly work as an illustrator and concept artist.

Whose work inspires you most — who are your role models as an artist?

There are a lot of sources where I get inspiration from, art in games for example, movies or art sites.

A few artists that are really worth mentioning would be Grzegorz Rutkowski, Ruan Jia and Piotr Jablonski.

How and when did you get to try digital painting for the first time?

Probably when I first discovered Deviantart. I was already familiar with GIMP, which I used to create photo-manipulations with. But seeing all the amazingly talented artists on there made me want to try out digital painting for myself.

What makes you choose digital over traditional painting?

I feel like traditional has more limitations and can get messy. In digital you can easily pick any color you like, or undo something that doesn’t work. For me it just works a lot faster.

Forest elf and her kitty

How did you find out about Krita?

Somewhere around 2013-2014 when an artist posted his Krita art on a GIMP forum.

What was your first impression?

I really didn’t know where to start, haha! There were just so many more options than I was used to in GIMP, especially with all the individual brush engines. It really took me a while to get comfortable with the program.

What do you love about Krita?

Now I’ve just grown to love the multiple brush engines! The wrap-around mode, animation tool, brush smoothing options, symmetry options, assistant tool and the different layer and mask options are probably the key features that I love about it. It’s a program that just has so much to offer which makes it a lot of fun to explore with!

What do you think needs improvement in Krita? Is there anything that really annoys you?

Probably the only thing that really bugs me is the text tool, which seems to have a few weird issues right now. I’d also love to see the possibility to import and use vector layers and an alternative to the pattern brush option to make it work less repetitive (something similar to Photoshop’s dual brush perhaps).

What sets Krita apart from the other tools that you use?

Kinda mentioned it earlier already, it has a lot to offer which makes it fun to explore with! Besides that it’s available to everyone and works just as well as any other ‘professional’ digital painting program.

If you had to pick one favourite of all your work done in Krita so far, what would it be, and why?

Probably one of my few non-illustrative works. I really wanted to try out the animation tool so I decided to try out a run cycle. I had little knowledge of animating beforehand, but I like how the animation and design turned out in the end.

Tiger run cycle animation

What techniques and brushes did you use in it?

I made a few different style concepts beforehand, chose a design from them, and later on used it as a reference. I first made a sketch version of the animation which I then refined and colored. I actually made a little video about it which I posted on YouTube.

Where can people see more of your work?

Deviantart: https://baukjespirit.deviantart.com/
Artstation: https://www.artstation.com/baukjespirit
Instagram: https://www.instagram.com/baukjespirit/
Twitter: https://twitter.com/BaukjeJagersma
Youtube: https://www.youtube.com/user/baukjespirit

Anything else you’d like to share?

I’d like to thank the Krita team for developing this amazing program and making it available to everyone! I’m very excited to see how Krita will further develop in the future!

January 21, 2018

Reading Buttons from a Raspberry Pi

When you attach hardware buttons to a Raspberry Pi's GPIO pin, reading the button's value at any given instant is easy with GPIO.input(). But what if you want to watch for button changes? And how do you do that from a GUI program where the main loop is buried in some library?

Here are some examples of ways to read buttons from a Pi. For this example, I have one side of my button wired to the Raspberry Pi's GPIO 18 and the other side wired to the Pi's 3.3v pin. I'll use the Pi's internal pulldown resistor rather than adding external resistors.

The simplest way: Polling

The obvious way to monitor a button is in a loop, checking the button's value each time:

import RPi.GPIO as GPIO
import time

button_pin = 18

GPIO.setmode(GPIO.BCM)

GPIO.setup(button_pin, GPIO.IN, pull_up_down = GPIO.PUD_DOWN)

try:
    while True:
        if GPIO.input(button_pin):
            print("ON")
        else:
            print("OFF")

        time.sleep(1)

except KeyboardInterrupt:
    print("Cleaning up")
    GPIO.cleanup()

But if you want to be doing something else while you're waiting, instead of just sleeping for a second, it's better to use edge detection.

Edge Detection

GPIO.add_event_detect will call you back whenever it sees the pin's value change. I'll define a button_handler function that prints out the value of the pin whenever it gets called:

import RPi.GPIO as GPIO
import time

def button_handler(pin):
    print("pin %s's value is %s" % (pin, GPIO.input(pin)))

if __name__ == '__main__':
    button_pin = 18

    GPIO.setmode(GPIO.BCM)

    GPIO.setup(button_pin, GPIO.IN, pull_up_down = GPIO.PUD_DOWN)

    # events can be GPIO.RISING, GPIO.FALLING, or GPIO.BOTH
    GPIO.add_event_detect(button_pin, GPIO.BOTH,
                          callback=button_handler,
                          bouncetime=300)

    try:
        time.sleep(1000)
    except KeyboardInterrupt:
        GPIO.cleanup()

Pretty nifty. But if you try it, you'll probably find that sometimes the value is wrong. You release the switch but it says the value is 1 rather than 0. What's up?

Debounce and Delays

The problem seems to be in the way RPi.GPIO handles that bouncetime=300 parameter.

The bouncetime is there because hardware switches are noisy. As you move the switch from ON to OFF, it doesn't go cleanly all at once from 3.3 volts to 0 volts. Most switches will flicker back and forth between the two values before settling down. To see bounce in action, try the program above without the bouncetime=300. There are ways of fixing bounce in hardware, by adding a capacitor or a Schmitt trigger to the circuit; or you can "debounce" the button in software, by waiting a while after you see a change before acting on it. That's what the bouncetime parameter is for.

But apparently RPi.GPIO, when it handles bouncetime, doesn't always wait quite long enough before calling its event function. It sometimes calls button_handler while the switch is still bouncing, and the value you read might be the wrong one. Increasing bouncetime doesn't help. This seems to be a bug in the RPi.GPIO library.

You'll get more reliable results if you wait a little while before reading the pin's value:

def button_handler(pin):
    time.sleep(.01)    # Wait a while for the pin to settle
    print("pin %s's value is %s" % (pin, GPIO.input(pin)))

Why .01 seconds? Because when I tried it, .001 wasn't enough, and if I used the full bounce time, .3 seconds (corresponding to 300 millisecond bouncetime), I found that the button handler sometimes got called multiple times with the wrong value. I wish I had a better answer for the right amount of time to wait.

Incidentally, the choice of 300 milliseconds for bouncetime is arbitrary and the best value depends on the circuit. You can play around with different values (after commenting out the .01-second sleep) and see how they work with your own circuit and switch.

You might think you could solve the problem by using two handlers:

    GPIO.add_event_detect(button_pin, GPIO.RISING, callback=button_on,
                          bouncetime=bouncetime)
    GPIO.add_event_detect(button_pin, GPIO.FALLING, callback=button_off,
                          bouncetime=bouncetime)

but that apparently isn't allowed: RuntimeError: Conflicting edge detection already enabled for this GPIO channel.

Even if you look just for GPIO.RISING, you'll still get some bogus calls, because there are both rising and falling edges as the switch bounces. Detecting GPIO.BOTH, waiting a short time and checking the pin's value is the only reliable method I've found.

Edge Detection from a GUI Program

And now, the main inspiration for all of this: when you're running a program with a graphical user interface, you don't have control over the event loop. Fortunately, edge detection works fine from a GUI program. For instance, here's a simple TkInter program that monitors a button and shows its state.

import Tkinter
from RPi import GPIO
import time

class ButtonWindow:
    def __init__(self, button_pin):
        self.tkroot = Tkinter.Tk()
        self.tkroot.geometry("100x60")

        self.label = Tkinter.Label(self.tkroot, text="????",
                                   bg="black", fg="white")
        self.label.pack(padx=5, pady=10, side=Tkinter.LEFT)

        self.button_pin = button_pin
        GPIO.setmode(GPIO.BCM)

        GPIO.setup(self.button_pin, GPIO.IN, pull_up_down=GPIO.PUD_DOWN)

        GPIO.add_event_detect(self.button_pin, GPIO.BOTH,
                              callback=self.button_handler,
                              bouncetime=300)

    def button_handler(self, channel):
        time.sleep(.01)
        if GPIO.input(channel):
            self.label.config(text="ON")
            self.label.configure(bg="red")
        else:
            self.label.config(text="OFF")
            self.label.configure(bg="blue")

if __name__ == '__main__':
    win = ButtonWindow(18)
    win.tkroot.mainloop()

You can see slightly longer versions of these programs in my GitHub Pi Zero Book repository.

January 12, 2018

New Stable Release: Krita 3.3.3

Today we’re releasing Krita 3.3.3. This will probably be the last stable release in the Krita 3 series. This release contains several bug fixes and one very important change for Windows users:

  • The Direct3d/Angle renderer is now the default for Intel users. Recent updates to the Intel display drivers have broken Krita on many more systems than before, so it’s better that everyone gets this workaround by default. If you experience decreased performance, you can always try to enable OpenGL again.

Other fixes and improvements include:

  • Fix an issue where it would not be possible to select certain blending modes when the current layer is grayscale but the image is RGB.
  • Set the OS and platform when reporting a bug from within Krita on Windows.
  • Make it possible to enter color values as percentage in the specific color selector
  • Add OpenGL warnings and make ANGLE default on Intel GPUs
  • Add an Invert button to the levels filter
  • Implement loading and saving of styles for group layers to and from PSD
  • Fix the erase mode not showing correctly when returning to the brush tool
  • Save the visibility of individual assistants in .kra files
  • Add an option to draw ruler tips as a power of 2
  • Disable autoscroll on move and transform tools.
  • Improve handling of native mouse events when using a pen and the Windows Ink API
  • Fix the focal point for the pinch zoom gesture
  • Fix loading netpbm files with comments.

Download

Windows

Note for Windows users: if you encounter crashes, please follow these instructions to use the debug symbols so we can figure out where Krita crashes.

Linux

(If, for some reason, Firefox thinks it needs to load this as text: to download, right-click on the link.)

When it is updated, you can also use the Krita Lime PPA to install Krita 3.3.3 on Ubuntu and derivatives. We are working on an updated snap.

OSX

Note: the gmic-qt and pdf plugins are not available on OSX.

Source code

md5sums

For all downloads:

Key

The Linux appimage and the source tarball are signed. You can retrieve the public key over https here: 0x58b9596c722ea3bd.asc. The signatures are here.

Support Krita

Krita is a free and open source project. Please consider supporting the project with donations or by buying training videos or the artbook! With your support, we can keep the core team working on Krita full-time.

January 11, 2018

Krita 4.0 Beta 1

We’ve officially gone into String Freeze mode now! That’s developer speak for “No New Features, Honest”. Everything that’s going into Krita 4.0 now is in, and the only thing left to do is fixing bugs and refining stuff.

Given how much has changed between Krita 3 and Krita 4, that’s an important part of the job! Let us here repeat a very serious warning.

THE FILE FORMAT FOR VECTOR LAYERS HAS CHANGED. IF YOU SAVE AN IMAGE WITH KRITA 4.0 THAT HAS VECTOR LAYERS, KRITA 3 CANNOT OPEN IT. IF YOU OPEN A KRITA 3 FILE WITH VECTOR LAYERS IN KRITA 4, THE VECTOR LAYERS MIGHT GET MESSED UP. BEFORE WORKING ON SUCH FILES IN KRITA 4, MAKE A BACKUP.

This goes double for files that contain text. Text in Krita 3 is based on the ODT standard; text in Krita 4 is implemented using SVG. This beta is the first release that contains the new text tool. Here’s the low-down on the new text tool:

  • It’s not the text tool we wanted to create, and you could consider it a stop-gap measure. Because of all the tax office worries, we simply didn’t have the time to create the fully-capable opentype-integrated text tool that we wanted to make.
  • But we couldn’t keep the old text tools either: they were broken, and based on ODT. We needed to have something that could replace those tools and that would be functional enough for the simplest of use-cases, like text balloons in a comic.
  • So, what we’ve got is simple. There’s no vertical text for Chinese or Japanese yet, there’s no OpenType tweaking, there’s no fine typographic control.
  • The user interface is not final yet, and there are quite a few things that need polishing and fixing, and that we’re working on, but Krita 4’s text tool will be mostly what we’ve got now.

Apart from that…

This beta contains pretty much everything… We started working on some of these features, like the export feedback, in 2016. Here’s a short list:

  • SVG vector system, with improved tools and workflow
  • New text tool
  • Python scripting
  • SVG import/export
  • Improved palette docker
  • Bigger brush sizes
  • Improved brush editor
  • Refactored saving and exporting: saving happens in the background, and export shows warnings when your file contains features that cannot be saved to a given file format
  • A fast colorize brush
  • The default pixel brush is much faster on systems with many cores
  • Lots of user interface polish

And there’s much more, but please download the builds, or build Krita and see for yourself!

One thing that is still in progress is updating the set of default brush presets: those aren’t in the beta yet, but they are in the nightly builds.

Platform Notes

Windows

Linux

  • The AppImage does not contain support for audio in animations
  • The AppImage does not have Python scripting
  • The AppImage does include the latest gmic-qt release

OSX

  • The app bundle does not contain gmic-qt
  • The app bundle does not contain Python scripting
  • The app bundle does not have support for importing PDF files

Download

Windows

Note for Windows users: if you encounter crashes, please follow these instructions to use the debug symbols so we can figure out where Krita crashes.

Linux

(If, for some reason, Firefox thinks it needs to load this as text: to download, right-click on the link.)

When it is updated, you can also use the Krita Lime PPA to install Krita 4.0 beta 1 on Ubuntu and derivatives.

OSX

Note: the gmic-qt and pdf plugins are not available on OSX.

Source code

md5sums

For all downloads:

Key

The Linux appimage and the source tarball are signed. You can retrieve the public key over https here: 0x58b9596c722ea3bd.asc. The signatures are here.

Support Krita

Krita is a free and open source project. Please consider supporting the project with donations or by buying training videos or the artbook! With your support, we can keep the core team working on Krita full-time.

January 10, 2018

Phoning home after updating firmware?

Somebody made a proposal on the fwupd mailing list that the machine running fwupd should “phone home” to the LVFS with success or failure after the firmware update has been attempted.

This would let the hardware vendor that uploaded firmware know there are problems straight away, rather than waiting for thousands of frustrated users to file bugs. The report needs to contain something that identifies the machine and a boolean, and in the event of an error, enough debug information to actually be useful. It would obviously involve sending the user's IP address to the server too.

I ran a poll on my Google+ page, and this was the result:

So, a significant minority of people felt like it stepped over the line of privacy vs. pragmatism. This told me I couldn’t just forge onward with automated collection, and this blog entry outlines what we’ve done for the 1.0.4 release. I hope this proposal is acceptable to even the most paranoid of users.

The fwupd daemon now stores the result of each attempted update in a local SQLite database. In the event there’s a firmware update that’s been attempted, we now ask the user if they would like to upload this information to the LVFS. Using GNOME this would just be a slider in the control center Privacy panel, and I’ll leave it to the distros to decide if this slider should be on or off by default. If you’re using the fwupdmgr tool this is what it shows:

$ fwupdmgr report-history
Target:                  https://the-lvfs-server/lvfs/firmware/report
Payload:                 {
                           "ReportVersion" : 1,
                           "MachineId" : "9c43dd393922b7edc16cb4d9a36ac01e66abc532db4a4c081f911f43faa89337",
                           "DistroId" : "fedora",
                           "DistroVersion" : "27",
                           "DistroVariant" : "workstation",
                           "Reports" : [
                             {
                               "DeviceId" : "da145204b296610b0239a4a365f7f96a9423d513",
                               "Checksum" : "d0d33e760ab6eeed6f11b9f9bd7e83820b29e970",
                               "UpdateState" : 2,
                               "Guid" : "77d843f7-682c-57e8-8e29-584f5b4f52a1",
                               "FwupdVersion" : "1.0.4",
                               "Plugin" : "unifying",
                               "Version" : "RQR12.05_B0028",
                               "VersionNew" : "RQR12.07_B0029",
                               "Flags" : 674,
                               "Created" : 1515507267,
                               "Modified" : 1515507956
                             }
                           ]
                         }
Proceed with upload? [Y|n]: 

Using this new information that the user volunteers, we can display a new line in the LVFS web-console:

Which expands out to the report below:

This means vendors using the LVFS know first of all how many downloads they have, and also the number of successes and failures. This allows us to offer the same kind of staged deployment that Microsoft Update does, where you can limit the number of updated machines to 10,000/day or automatically pause the specific firmware deployment if > 1% of the reports come back with failures.
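
The pause policy is simple enough to sketch. Here's an illustrative Python version of the arithmetic (the function name and threshold handling are mine, not actual LVFS code):

def should_pause_deployment(successes, failures, threshold=0.01):
    # Pause the rollout if more than `threshold` of all reports are failures.
    total = successes + failures
    if total == 0:
        return False  # no reports yet, nothing to decide on
    return failures / total > threshold

# 150 failures out of 10,050 reports is about 1.5%, so this rollout would pause:
print(should_pause_deployment(9900, 150))  # True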

Some key points:

  • We don’t share the IP address with the vendor, in fact it’s not even saved in the MySQL database
  • The MachineId is a salted hash of your actual /etc/machine-id (see the sketch after this list)
  • The LVFS doesn’t store reports for firmware that it did not sign itself, i.e. locally built firmware archives will be ignored and not logged
  • You can disable the reporting functionality in all applications by editing /etc/fwupd/remotes.d/*.conf
  • We have an official GDPR document too — we’ll probably link to that from the Privacy panel in GNOME
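
To make the MachineId item above concrete, here's a minimal Python sketch of what a salted machine hash looks like. The salt value and the exact construction below are illustrative assumptions, not the actual fwupd implementation:

import hashlib

def salted_machine_id(salt):
    # /etc/machine-id holds the systemd machine id on most modern Linux systems.
    with open("/etc/machine-id") as f:
        machine_id = f.read().strip()
    # Hashing the salt together with the id means the raw machine id
    # itself never has to leave the machine.
    return hashlib.sha256((salt + machine_id).encode("utf-8")).hexdigest()

print(salted_machine_id("fwupd"))  # "fwupd" is just a placeholder salt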

Comments welcome.

January 09, 2018

Libre Graphics Meeting + SCaLE 2018

This year is starting off great with not one but two libre graphics oriented events (one in Europe and the other in North America - for even better coverage)! Now you can double your opportunities to meet like-minded Free Software graphic artists, photographers, and developers.

Libre Graphics Meeting 2018 in Seville

LGM Logo

Come and join us for the 13th annual Libre Graphics Meeting (LGM) being held April 26–30 in Seville, Spain! LGM is a wonderful opportunity for artists, developers, and contributors to meet face-to-face and share/learn from each other. Several GIMP developers will be holding an annual team meeting there (as usual), so come and say hello!

The main programme this year is focusing on:

  • Technical presentations and workshops for developers.
  • Showcases of excellent work made using libre graphics tools.
  • New tools and workflows for graphics and code.
  • Reflections on the activities of existing Free/Libre and Open Source communities.
  • Reflections and practical sessions on promoting the philosophy and use of Libre Graphics tools.

Libre Graphics at Southern California Linux Expo

SCaLE 16x Logo

For the first time this year there will be a dedicated Libre Graphics track at the Southern California Linux Expo (SCaLE) 16x!

SCaLE 16x will be held March 8–11 in Pasadena, California (USA) at the Pasadena Convention Center, and the Libre Graphics track in particular will be on Friday, March 9th.

Pat David will be presenting on the state of GIMP (looking forward) and including GIMP as part of a photography workflow.

There is a Call for Participation out for those who would be interested in talking more about their art or involvement in the Free Software graphics world. If you’d like to participate, submit a proposal to graphics-cfp@socallinuxexpo.org!

January 08, 2018

LibreOffice Vanilla 5.4.4 released on the Mac App Store

Collabora has now released LibreOffice Vanilla 5.4.4 on the Mac App Store. It is built from the official LibreOffice 5.4.4 sources. If you have purchased LibreOffice Vanilla earlier from the App Store, it will be upgraded in the normally automatic manner of apps purchased from the App Store.

LibreOffice Vanilla from the Mac App Store is recommended to Mac users who want LibreOffice with the minimum amount of manual hassle with installation and upgrades. If you don't mind that, by all means download and install the build from TDF instead.

We would have loved to continue to include a link to the TDF download site directly in the app's description, as we had promised, but this time Apple's reviewer did not allow us to do that.

Because of the restrictions on apps distributed in the App Store, features implemented in Java are not available in LibreOffice Vanilla. Those features are mainly the HSQLDB database engine in Base, and some wizards.

This time we include the localised help files, as there were some issues in accessing the on-line help.

Since the LibreOffice Vanilla 5.2 build that was made available in the Mac App Store in September 2016, there have been a few Mac-specific fixes, like the one related to landscape vs. portrait mode printing on Letter paper. There are more Mac-specific bugs in Bugzilla that will be investigated as resources permit.

Some fine-tuning to the code signing script has been necessary. For instance, one cannot include shell scripts in the Contents/MacOS subfolder of the application bundle when building for upload to the App Store. This is because the code signatures for such shell scripts would be stored as extended attributes and those won't survive the mechanism used to upload a build to the App Store for review and distribution. (For other non-binary files, in the Resources folder, signatures are stored in a separate file.)

We also have made sure the LibreOffice code builds with a current Xcode (and macOS SDK).

Interview with Emily K. Mell

Could you tell us something about yourself?

I’m a 30-year-old housewife and mother living in New Mexico in the United States. I’ve always had an interest in becoming a storyteller, and visual art is the most appealing way for me to do it.

Do you paint professionally, as a hobby artist, or both?

Right now I would call myself a hobbyist, as I’ve never published anything professionally. I hope for that to change. What I need is to somehow find the time and energy to crank out images at high volume!

What genre(s) do you work in?

I’ve always been interested primarily in fantasy, although much of my work has consisted of humorous cartoons. Humor creeps into whatever I try to make whether or not I intend for it to be there. My goal for the next decade is to begin a serial, consistent fantasy world as portrayed through cartoons.

Whose work inspires you most — who are your role models as an artist?

My first and largest influences are the old-fashioned Disney animated films from the 20th century, as exemplified by the work of artists like Bill Tytla. I also consider cartoonists and illustrators like Bill Watterson, Maurice Sendak, and Hal Foster to be very influential on my style. Fantasy illustrators like Frank Frazetta inspire me on an emotional level.

How and when did you get to try digital painting for the first time?

I began using digital tools purely as a form of practice a few years ago. To me it was very casual; I was figure drawing and making fan art referencing in-jokes from a favorite podcast of mine called “We Hate Movies.” I didn’t intend to pursue it seriously, but found myself returning to it more and more often.

What makes you choose digital over traditional painting?

Traditional painting definitely produces the most beautiful results, but it’s a pain in the neck. Not only does it take up more time and physical space, it’s far more expensive and wasteful. I don’t like the “digital look,” and it will take a lot of practice to minimize that. However, I believe that real artists should never blame their tools for any failure.

I’d almost always drawn pictures with only a pencil before, but digital painting allows me to ink and color my images much more easily. It also opens up options for experimentation when I can simply recolor anything I don’t like. The hardest part has been trying to learn the kind of control with a stylus that I have with my own hand.

How did you find out about Krita?

My husband is an engineer who works frequently with Linux and is familiar with the open-source world. He suggested Krita to me when I was looking for a digital painting program not called Photoshop.

What was your first impression?

Krita was the most professional-looking Photoshop alternative that I’d come across. It also played nicely with my stylus and tablet in a way that some other software didn’t. Krita did have some bugginess and crashing, though.

What do you love about Krita?

That it’s free! I think it’s remarkable that the open-source community could create something of this quality without a money spigot. Given Adobe’s outrageous pricing scheme for Photoshop, you’d think that software like this couldn’t exist anywhere else. Krita is a much better option.

What do you think needs improvement in Krita? Is there anything that really annoys you?

Bugs and crashes come with the territory in open-source projects, but those are likely to be reduced over time. The real problem is how inaccessible Krita is to lay people. When I was looking to download the program for the first time, I had to follow a completely unintuitive chain of links that, to somebody like me, appeared to be made up of Seussian nonsense words (AppImages for cats? What’s a Gentoo? Why is it bloody?). Idiots like myself just want a giant button that says “GET THE LATEST STABLE VERSION OF KRITA HERE!” The way things are now, the less technically literate will give up on trying Krita before they have even started.

What sets Krita apart from the other tools that you use?

Krita has a great community of support that will ensure that it gets better year by year. It has the right priorities, i.e. essential tools like layers and brushes get implemented before more obscure features. Other than occasional crashes, I can just jump in and use it.

If you had to pick one favourite of all your work done in Krita so far, what would it be, and why?

I made this page in Krita in order to see whether I could use it for a webcomic I plan on creating. The experiment was a success, and I believe that my skills will continue to grow. This proves that I can use Krita to tell the kinds of stories that I have plans for.

What techniques and brushes did you use in it?

I mostly use a basic round brush that I resize accordingly. Line brushes are convenient for separating the panels. I think that skills are more important than tools, and I want to train myself to use the simplest tools that I can.

Where can people see more of your work?

I’m shy about publishing, but I realize that this is something I need to change about myself. Once I’ve worked up a sizable volume of content, which should be within the next year, I will be posting a regular webcomic called The Unknown Engine.

Anything else you’d like to share?

I’m grateful that the Internet and the open-source movement affords artists like me the opportunity to create and publish in new ways. I often think about Bill Watterson’s long fight with his syndicates, and how different it would have been for him if he was able to publish Calvin and Hobbes on the Internet under his own control. I’m nowhere near his level of course, but I want to improve my own skills until I get there.

January 04, 2018

SMEP emulation in PTI

A nice additional benefit of the recent Kernel Page Table Isolation (CONFIG_PAGE_TABLE_ISOLATION) patches (to defend against CVE-2017-5754, the speculative execution “rogue data cache load” or “Meltdown” flaw) is that the userspace page tables visible while running in kernel mode lack the executable bit. As a result, systems without the SMEP CPU feature (before Ivy Bridge) get it emulated for “free”.

Here’s a non-SMEP system with PTI disabled (booted with “pti=off“), running the EXEC_USERSPACE LKDTM test:

# grep smep /proc/cpuinfo
# dmesg -c | grep isolation
[    0.000000] Kernel/User page tables isolation: disabled on command line.
# cat <(echo EXEC_USERSPACE) > /sys/kernel/debug/provoke-crash/DIRECT
# dmesg
[   17.883754] lkdtm: Performing direct entry EXEC_USERSPACE
[   17.885149] lkdtm: attempting ok execution at ffffffff9f6293a0
[   17.886350] lkdtm: attempting bad execution at 00007f6a2f84d000

No crash! The kernel was happily executing userspace memory.

But with PTI enabled:

# grep smep /proc/cpuinfo
# dmesg -c | grep isolation
[    0.000000] Kernel/User page tables isolation: enabled
# cat <(echo EXEC_USERSPACE) > /sys/kernel/debug/provoke-crash/DIRECT
Killed
# dmesg
[   33.657695] lkdtm: Performing direct entry EXEC_USERSPACE
[   33.658800] lkdtm: attempting ok execution at ffffffff926293a0
[   33.660110] lkdtm: attempting bad execution at 00007f7c64546000
[   33.661301] BUG: unable to handle kernel paging request at 00007f7c64546000
[   33.662554] IP: 0x7f7c64546000
...

It should only take a little more work to leave the userspace page tables entirely unmapped while in kernel mode, and only map them in during copy_to_user()/copy_from_user() as ARM already does with ARM64_SW_TTBR0_PAN (or CONFIG_CPU_SW_DOMAIN_PAN on arm32).

© 2018, Kees Cook. This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 License.
Creative Commons License

January 01, 2018

24 Free Goddess Gifs

Here are 24 individual goddess gifs to use for whatever. Free Culture. No permission needed. Go crazy. I love you.

Blender projects in 2018 to look forward to

The blender.org project is very lucky with attracting talent – great developers working together with fantastic artists. It’s how Blender manages to stand out as a successful and highly functional free & open software project. In this post I want to thank everyone for a wonderful Blender year and give a view of all the exciting things that are going to happen – in 2018! (Fingers crossed :)

Eevee

In 2016 it was just an idea: an interactive viewport in Blender with rendering quality at PBR levels. Last year this project took off in ways beyond expectation – everyone should have seen the demos by now.

Eevee in Blender

Early in 2018 animation support will come back (with support for modifiers), with OpenSubdiv support (GPU-based adaptive subdivision surfaces) as a highlight.

Read about the Eevee roadmap here.

Grease Pencil

Blender is an original innovator in this area – providing a fully functional 2D animation tool in a 3D environment. You have to see it to believe it – it’s a mindblowing workflow for animators and story artists.

Grease Pencil

In Q1 of 2018 the short film “Hero” will be finished as proof-of-concept for the new workflow and tools of Grease Pencil in 2.8x.

You can read the latest status report here.

Workflow & “Blender 101”

Optimizing and organizing one’s working environment can significantly improve the workflow in 3D applications. We can’t make everyone happy with a single Blender configuration anymore. This is where the new Workspaces and Application Templates come in. In Q1 and Q2 of 2018 the first prototypes for radically configured simple Blenders are going to be made (a.k.a. the Blender 101 project).

Meanwhile work continues on usability and configurations in a daily production environment. Blender’s latest Open Movie “Spring” is going to be used for this.

Blender 2.8x is also getting a completely new layer system, allowing you to organize your scenes in advanced ways. A Scene can have an unlimited number of layers (= drawings or renders), unlimited collections, and per-collection render settings and overrides.

Visit the code.blender.org blog to read more about it.

New UI theme

No, there are no pictures yet! But one of the cool things about releasing a massive update is also updating the looks. Nothing radical, just to make it look fresh and to match contemporary desktop environments. We’re still using the (great) design from 2009-2010. In computer years, that’s a century ago! Work on this should start in Q1 and be finalized before Q2 ends. Contributions welcome (check ‘get involved’).

Cycles

In 2017 we saw the rise of AMD GPUs. Thanks to a full-time developer who worked on OpenCL for a year, Blender is now a good choice for use on AMD hardware. For 2018 we want to work on reducing the kernel compilation waiting time.

The Daily Dweebs

Cycles is now one of the most popular areas for developers to work in. Most of these are doing this as part of their daytime job – to make sure Cycles stays an excellent choice for production rendering. Expect in 2018 a lot of high quality additions and especially ways to manage fast renders.

Read more about the Cycles Roadmap here.

Blender Game Engine

One of Blender’s best features is that it’s a complete integrated 3D creation suite – enabling artists to create projects from concept to final edits or playback. Unfortunately the game engine has fallen behind in development – not getting the focus or development time it needs. There are many reasons for this, but one of them is that the code base for BGE is too separate from the rest of Blender. That means that newly added Blender features need to be ported over to the engine to work.

Blender Game Engine

For the 2.8 project we want to achieve a better integration of BGE and Blender itself. The Eevee project has proven already how important real-time tools are and how well this can work for interactive 3D design and game creators.

That being said, outside of blender.org interesting Blender-related development for game engines happens too. Check out the Blender fork UPBGE for example, or the fascinating Armory Engine (see image above, it’s written in Haxe and Kha). And don’t forget the open source WebGL environments Blend4Web and Verge3D.

Assets and presets

Another ‘2.8 workflow’ related feature: we are working on better managing your files and 3d assets. Partially it’s for complex production setup, partially it’s also about configuring your Workspaces with nice visible presets – lists of pictures of shaders or primitives for example, ready to be dragged & dropped or selected.

Asset engine preview

More information can be found here: the planning for asset management and overrides.

Viewport Compositing

An important design aspect of Blender’s new viewport system is that each engine is drawing in its own buffer. These buffers then get composited in real-time.

Blender 267 splash screen

To illustrate how fast it is: in 2.8x the “Overlay engine” is using real-time compositing in the viewport (to draw selections or widgets).

When 2.8x is ready to get out of beta, we will also check on how to allow (node based) real-time compositing in viewports. That will then be the final step in fully replacing the old “Blender Internal” render engine with an OpenGL based system.

This will be especially interesting for the Non-Photo-Realistic rendering enthusiasts out there. Note: FreeStyle rendering will have to be fully recoded for 2.8. That’s still an open issue.

Modifiers & Physics upgrade

Blender’s modifier code is getting fully rewritten for 2.8. That’s needed for the new dependency graph system (threadable animation updates and duplication of data).

Blender Physics

A nice side effect of this recode is that all modifiers will then be ‘node ready’. We expect the first experiments with modifier nodes to happen in 2018. Don’t get too excited yet; it’s especially the complexity of upgrading the old particle and hair system that makes this a very hard project to handle.

An important related issue here is how to handle “caches” well (i.e. mesh data generated by modifiers or physics systems). This needs to be saved and managed properly – which is what the dependency graph has to do as well. As soon as that’s solved we can finally merge in the highly anticipated Fracture Modifier branch.

Animation tools

Blender’s armature and rigging system is based on a design from the ’90s. It’s a solid concept, but it’s about time to refresh and upgrade it. When Blender 2.8x gets closer to beta I want to move my focus to getting a project organized (and funded) to establish a small team of developers working on animation tools for the next decade – Animation 2020! Contact me if you want to contribute.

Discourse forums

Improving onboarding for new developers has been on our wish list for years. There are several areas where we should do better – for example, handling reviews of submitted patches and branches more swiftly.

Discourse Forum

We also often hear that Blender developer channels are hard to find or not very accessible. The blender.org teams still mainly use IRC chat and MailMan mailing lists for communication.

In January we will test a dedicated blender.org developer forum using Discourse (fully free/open software). This forum will focus on people working with Blender’s code, developer tools and anything related to becoming a contributor. If this experiment works well we can expand it to a more general “get involved” website (for docs, educators, scientists, conferences, events).
However, user questions and feature requests would be off topic; there are better places that handle those.

20th anniversary of first public Blender release

Oh yes! Today it is exactly 20 years since I released the first Blender version to the public – only for the Silicon Graphics IRIX platform.

Blender on IRIX

A FreeBSD and a Linux version followed a couple of months later.

All as “freeware” then, not open source. I first had to learn the lesson of bursting internet bubbles before going fully open!

Blender 2.80 beta release

Originally planned for Q2 this year… luckily that quarter lasts until July 1st. It all depends on how well the current projects go in the coming months. But if it’s not July first, then at least we have…

SIGGRAPH, Vancouver

The largest annual 3D CG event takes place August 12-16 this year. We aim for a great presence there, and it’s certainly a great milestone at which to showcase 2.80!

Open issues

The 2.8 team tries to keep focus – not to do too many things at once and to finish what’s being worked on in the highest usable quality possible. That means that some topics are being done first, and some later. The priorities for 2.8 have been written down in this mail to the main developers list.

We can still use a lot of help. Please don’t hesitate to reach out – especially when workflow and usability are your strength! We can also use contributors in many ‘orphaned’ areas, such as Booleans, Video editor, Freestyle render, Particles, Physics caching, Hair, Nurbs… and to work on better integration with the Windows and MacOS desktop environments.

Credits

An important part of the blender.org project are the studios and companies who contribute to Blender.

Special thanks goes to Blender Foundation (development fund grants), Blender Institute/Animation Studio (hiring 3-5 devs), Tangent Animation (viewport development), Aleph Objects (from Lulzbot printers, supporting Blender 101), Nimble Collective (Alembic), AMD (Cycles OpenCL support), Intel (seeding hardware, Cycles development), Nvidia (seeding hardware), Theory Animation and Barnstorm VFX (Cycles development, VFX pipeline).

Special thanks also to the biggest supporters of the Development Fund: Valve Steam Workshop and Blender Market.

Ton Roosendaal, Chairman Blender Foundation

December 30, 2017

GIMP and GEGL in 2017

When you say you mostly do bugfixing now, seven kinds of new features will crawl under your bed and bite your silly toes off. If we were to come up with a short summary for 2017, it would be along those very lines.

So yes, we ended up with more new features that, however, make GIMP faster and improve workflows. Here’s just a quick list of the top v2.10 parole violators: multi-threading via GEGL, linear color space workflow, better support for CIE LCH and CIE LAB color spaces, much faster on-canvas Warp Transform tool, complete on-canvas gradients editing, better PSD support, metadata viewing and editing, under- and overexposure warning on the canvas.

All of the above features (and many more) are available in GIMP 2.9.8 released earlier this month. We are now in the strings freeze mode which means there will be very few changes to the user interface so that translators could safely do their job in time for the v2.10 release.

Everyone is pretty tired of not having GIMP 2.10 out by now, so we only work on bugs that block the v2.10 release. There are currently 25 such bugs. Some are relatively easy to fix, some require more time and effort. Some have patches or there is work in progress, and some need further investigation. We will get there faster if more people join in to hack on GIMP.

Speaking of which, one thing that has changed in the GIMP project for the better this year is the distribution of the workload among top contributors. Michael Natterer is still responsible for 33% of all GIMP commits in the past 12 months, but that’s a ca. 30% decrease from last year. Jehan Pagès and Ell now have a 38% share of all contributions, and Øyvind Kolås tops that off with his 5% thanks to the work on layers blending/compositing and linear color space workflow in GIMP.

In particular, Ell committed the most between 2.9.6 and 2.9.8, implemented on-canvas gradients editing, introduced other enhancements, and did a lot of work on tuning performance in both GIMP and GEGL. We want to thank him especially for being the most prolific developer of GIMP for this last development release!

Another increasingly active contributor in the GEGL project is Debarshi Ray who uses the library for his project, GNOME Photos. Debarshi focused mostly on GEGL operations useful for digital photography such as exposure and shadows-highlights, and did quite a lot of bugfixing. We also got a fair share of contributions from Thomas Manni who added some interesting experimental filters like SLIC (Simple Linear Iterative Clustering) and improved existing filters.

Changes in GEGL and babl in 2017 included (but are not limited to) 15 new filters, improvements in mipmap processing and multi-threading computations, a video editing engine called gcut, more fast paths to convert pixel data between various color spaces, support for custom RGB primaries and TRC, ICC color profiles parsing and generation, and numerous bugfixes.

At least some of the work done by Øyvind Kolås on GEGL, babl, and GIMP this year was sponsored by you, the GIMP community, via the Patreon and Liberapay platforms. Please see his post on 2017 crowdfunding results for details and consider supporting him. Improving GEGL is crucial for GIMP to become a state-of-the-art professional image editing program. Over the course of 2017, programming activity in GEGL and babl increased by 120% and 102% respectively in terms of commits, and we’d love to see the dynamics keep up in 2018 and onwards.

Even though the focus of another crowdfunded effort by Jehan Pagès and Aryeom Han is to create an animated short movie, Jehan Pagès contributed roughly 1/5 of code changes this year, fixing bugs, improving painting-related features, maintaining GIMP official Flatpak, and these statistics don’t even count the work on a much more sophisticated animation plug-in currently available in a dedicated Git branch. Hence supporting this project results in better user experience for GIMP users. You can help fund Jehan and Aryeom on Liberapay, Patreon or Tipeee. You can also read their end-of-2017 report.

We also want to thank Julien Hardelin who has been a great help in updating the user manual for upcoming v2.10, as well as all the translators and people who contributed patches. Moreover, we thank Pat David for further work on the new website, and Michael Schumacher for tireless bug triaging. They all don’t get nearly as much praise as they deserve.

Happy 2018!

December 29, 2017

26 Goddesses

I’ve been painstakingly hand-removing backgrounds from photos in GIMP, so I can use them in Moho.

[Images: cut-out goddess photos, including “Queen of the Night” and a snake dancer]

Here’s my collection:

[Images: four collection frames]

Looking back, looking forward

First and foremost, 2017 ends well. We will end this year putting Krita 4.0 in string freeze, which means a release early next year! In 2017, we’ve released several versions of Krita 3.x. We’ve gained a lot of new contributors with great contributions to Krita. We’ve got money in the bank, too. Less than last year, but sales on the Windows Store help quite a bit! And development fund subscriptions have been steadily climbing, and we’re at 70 subscribers now! We’ve also done a great project with Intel, which not only brought some more money in, but also great performance improvements for painting and rendering animations.

It’s been a tough year, though! Our maintainer had only just recovered from being burned out from working full-time on Krita and on a day job when the tax office called… The result was half a year of stress and negotiations, ending in a huge tax bill and a huge accountant’s bill. And enough uncertainty that we couldn’t have our yearly fund raiser, and enough extra non-coding work that the work on the features funded in 2016 took much, much more time than planned. In the period when we were talking to the tax office, until we could go public, Boudewijn and Dmitry were supported by members from the community; without that support the project might not have survived.

But then, when we could go public with our problems, the response was phenomenal. At that point, we were confident we would survive anyway, with the work we were doing for Intel, the Windows Store income and private savings, but it would have been extremely tight.  The community rallied around us magnificently, and then Private Internet Access (who also sponsor KDE, Gnome, Blender and Inkscape, among others) contacted us with their decision to pay the bill!

From nearly broke, we went to be in a position to start planning again!

We reported about those plans before, but to recap:

  • We will release Krita 4.0 with Python scripting, SVG vector layers, a new text tool, the stacked brushes feature (now renamed to masking brush), the lazy coloring brush, and many more features. String freeze December 31st 23:59:59, release planned for March!
  • We want to spend next year working on bug fixes, performance improvements and polish
  • But there will also be work on a new reference images tool, improved session management and other things.
  • We will look into the possibility of porting Krita to Android and iOS, though the first attempts have not been promising.
  • We will do another fund raiser, though whether that will be a kickstarter hasn’t been decided yet.
  • After being constrained from attending open source conferences in 2016 and 2017, we intend to have at least someone present at the Libre Graphics Meeting and Akademy. We shouldn’t get disconnected from our roots!

Akademy is the yearly KDE community conference, and Krita has always been part of the KDE community. And KDE has always been more than a desktop environment for Linux and other Unix-like operating systems. As a community, KDE offers an environment where projects like Krita can flourish. Every developer in the KDE community can work on any of the KDE projects; the level of trust in each other is very high.

These days, judging by the number of bugs reported and closed, Krita is the second-most used KDE project, after the Plasma desktop shell. Without KDE, Krita wouldn’t be where it is now. Without KDE, and the awesome job its volunteer sysadmins are doing, we wouldn’t have working forums, continuous integration, bug trackers or a project management platform. Sprints would be far more difficult to organize, and, of course, Krita depends heavily on a number of KDE framework libraries that make our coding life much easier. KDE is currently having its annual End of Year Fundraiser!

Our contributor sprint in 2016 was partly sponsored by KDE as well. With all the bother, it looked like we wouldn’t meet up in 2017. But with the project back on a sound footing, we managed to have a small sprint in November after all, and much vigorous discussion was had by all participants, ending up with firm plans for the last few 4.0 features that we were working on. Next year, we intend to have another big contributor sprint as well.

And, of course, lots of lovely releases, bug fixes, features, artist interviews, documentation updates, and the pleasure of seeing so many people create great art!

December 28, 2017

First Krita painting course in Colombia

Krita is not widely known in Latin America. In Colombia, we found that people are interested in knowing more about how to use it. This year, in April 2017, the program of the Latin American Free Software Install Fest included a workshop by David Bravo about Krita. The workshop was fully booked and inspired us to create this course.

colombia krita course participants

Left to right: Mateo Leal, Angie Alzate, David Bravo (teacher), Lina Porras, Lucas Gelves, Juan Pablo Sainea, Javier Gaitán

During 4 sessions of 3 hours each, David Bravo guided a group of six students through their first steps in Krita, including sketch, canvas, digitalization, lines, curves and brush, light and shadow, digital color, painting and color palette, texture, effects, exporting files for digital media and printing.

David Bravo and his drawing

David Bravo (front). The projected drawing is his work.

This course was made possible by the cooperation of three organizations: Onoma Project, Corre Libre Foundation and Ubuntu Colombia. The cost for the students was about 16 USD; all of the proceeds were donated to the Krita Foundation.

[Photo: Lucas Gelves' work]

Lucas Gelves teaching himself to draw.

We think that we can offer an intermediate course in 2018. And of course we want to say thank you to the Krita Foundation for sending gifts for the course students and for staying in touch with us. We hope to cooperate on more courses in the near future!

David Bravo is a digital and multimedia designer from Colegio Mayor de Cundinamarca, currently working on freelance multimedia projects with a focus on traditional animation, 3D and visualization in virtual environments. He is also the leader of the Onoma Project, a free online platform under development, whose main objective is to provide tools for easy and secure learning of FLOSS for design.

Ubuntu Colombia acted as coordinator and communicator for the course. Ubuntu Colombia is a community with 12 years of history spreading Ubuntu and FLOSS in Colombia; the Krita course was part of this year’s efforts to promote education on FLOSS tools, as were LaTeX courses and LPIC preparation courses.

Corre Libre Foundation is an NGO created in 2008. Its objectives are:
– to promote the creation of free/open knowledge
– to sponsor free technological projects with social impact
– to promote and spread the use and development of technologies that contribute to human freedom
– to promote and spread collaborative work.

They support Orfeo, which is document management software. For this course they provided a place to work, which would otherwise have been too difficult and expensive to find in our city.

December 25, 2017

Interview with Rositsa Zaharieva

Could you tell us something about yourself?

My name is Rositsa (also known as Roz) and I’m somewhat of a late-blooming artist. When I was a kid I was constantly drawing and even wanted to become an artist. Later on I chose a slightly different path for my education and career, and as a result I now have decent experience as a web and graphic designer, front-end developer and copywriter. I am now completely sure that I want to devote myself entirely to art, and that’s what I’m working towards.

Do you paint professionally, as a hobby artist, or both?

I mainly work on personal projects. I have done some freelance paintings in the past, though. I’d love to paint professionally full time sometime soon, hopefully for a game or a fantasy book of some sort.

What genre(s) do you work in?

I prefer fantasy most of all and anything that’s not entirely realistic. It has to have some magic in it, something from another world. That’s when I feel most inspired.

Whose work inspires you most — who are your role models as an artist?

I’m a huge fan of Bill Tiller’s work for The Curse of Monkey Island, A Vampyre Story and Duke Grabowski, Mighty Swashbuckler! Other than him, I follow countless other artists on social networks and use their work for inspiration. Also, as a member of a bunch of art groups I see great artworks from artists I’ve never heard of every single day, and that’s also a huge inspiration.

How and when did you get to try digital painting for the first time?

My first encounter with digital painting was in 2006-2007 on deviantART, but it wasn’t until 2010-2011, when I got my precious Wacom Bamboo tablet (which I still use, by the way!), that I could finally begin my own digital art journey for real.

What makes you choose digital over traditional painting?

Digital painting combines my two loves – computers and art. It only seems logical to me that I chose it over traditional art, but back then I didn’t give it that much thought – I just thought about how awesome all the paintings I was seeing at the time were, and how I’d love to do that kind of art myself. I’ve since come to realize that one doesn’t really have to choose one or the other – I find doing traditional art every once in a while incredibly soothing, even though I’ve chosen to focus on digital art as my career path.

How did you find out about Krita?

I think I first got to know about Krita from David Revoy on Twitter some years ago, but it wasn’t until this year that I finally decided to give it a try.

What was your first impression?

My first impression was just WOW. I thought, “OMG, it’s SO similar to Photoshop but has all these features in addition, and it’s FREE!” I was really impressed that I could do everything I was used to doing in Photoshop, but in a native Linux application and free of charge.

What do you love about Krita?

Exactly what I mentioned above. I’m still kind of a newbie with Krita, so there’s not that much to tell yet, but I’m sure I’ll discover a lot more to love as time goes by.

What do you think needs improvement in Krita? Is there anything that really annoys you?

I’d like to see an improved way of working with the Bezier Curve Selection Tool, as I use it a lot but have trouble making perfect selections in one go. I’d really like to be able to alternate between corner anchor points and curves on the fly, as I’m creating the selection, instead of creating a somewhat messy selection and then having to go back and clean it up by adding and subtracting parts of it until it looks the way I intended. That would certainly save me a lot of time.

What sets Krita apart from the other tools that you use?

That it’s free to use but not any less usable than the most popular paid applications of the sort! Also, the feeling I get whenever I’m involved with Krita in any way – be it by reading news about it, interacting on social media or painting with it. I’m just so excited that it exists and grows and is getting better and better. I feel somewhat proud that I’m contributing even in the tiniest way.

If you had to pick one favourite of all your work done in Krita so far, what would it be, and why?

I love everything I’ve created in Krita so far. I don’t think it’s that much about the software you create a certain artwork with, but rather all the love you put into it as you’re creating it.

What techniques and brushes did you use in it?

I’m trying to use fewer brushstrokes and more colorful shapes as I paint. I mainly use the Bezier Curve Selection Tool, the Gradient Tool and a small set of Krita’s predefined brushes for my artworks. I have tried creating my own custom brushes, but with little luck so far (I think I have much more reading and experimenting to do before I succeed).

Where can people see more of your work?

I have a portfolio website (in Bulgarian): www.artofroz.com; but you can find me on Facebook, Twitter, Behance, Artstation and a bunch of other places either as ArtofRoz, Rositsa Zaharieva or some combo/derivative of both.

Anything else you’d like to share?

I’d like to tell everyone that’s been using other software for their digital paintings to definitely give Krita a try, too. Not that other software is bad in any way, but Krita is awesome!