October 25, 2016

Tue 2016/Oct/25

  • Librsvg gets Rusty

    I've been wanting to learn Rust for some time. It has frustrated me for a number of years that it is quite possible to write GNOME applications in high-level languages, but for the libraries that everything else uses ("the GNOME platform"), we are pretty much stuck with C. Vala is a very nice effort, but to me it never seemed to catch much momentum outside of GNOME.

    After reading this presentation called "Rust out your C", I got excited. It *is* possible to port C code to Rust, small bits at a time! You rewrite some functions in Rust, make them linkable to the C code, and keep calling them from C as usual. The contortions you need to do to make C types accessible from Rust are no worse than for any other language.

    I'm going to use librsvg as a testbed for this.

    Librsvg is an old library. It started as an experiment to write a SAX-based parser for SVG ("don't load the whole DOM into memory; instead, stream in the XML and parse it as we go"), and a renderer with the old libart (what we used in GNOME for 2D vector rendering before Cairo came along). Later it got ported to Cairo, and that's the version that we use now.

    Outside of GNOME, librsvg gets used at Wikimedia to render the SVGs all over Wikipedia. We have gotten excellent bug reports from them!

    Librsvg has a bunch of little parsers for the mini-languages inside SVG's XML attributes. For example, within a vector path definition, "M10,50 h20 V10 Z" means, "move to the coordinate (10, 50), draw a horizontal line 20 pixels to the right, then a vertical line to absolute coordinate 10, then close the path with another line". There are state machines, like the one that transforms that path definition into three line segments instead of the PostScript-like instructions that Cairo understands. There are some pixel-crunching functions, like Gaussian blurs and convolutions for SVG filters.
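
    As a concrete illustration (this is not librsvg's actual parser, just a sketch of the idea), the commands in "M10,50 h20 V10 Z" could be represented by a small Rust enum; the PathCommand type and the hard-coded "parsed" result below are my own illustrative assumptions:

    #[derive(Debug)]
    enum PathCommand {
        MoveTo (f64, f64),   /* M x,y - absolute moveto */
        HorizontalRel (f64), /* h dx  - relative horizontal lineto */
        VerticalAbs (f64),   /* V y   - absolute vertical lineto */
        ClosePath            /* Z     - close the current subpath */
    }

    fn main () {
        /* What a parser for "M10,50 h20 V10 Z" would produce */
        let parsed = vec![PathCommand::MoveTo (10.0, 50.0),
                          PathCommand::HorizontalRel (20.0),
                          PathCommand::VerticalAbs (10.0),
                          PathCommand::ClosePath];

        for cmd in &parsed {
            println! ("{:?}", cmd);
        }
    }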

    It should be quite possible to port those parts of librsvg to Rust, and to preserve the C API for general consumption.

    Every once in a while someone discovers a bug in librsvg that makes it all the way to a CVE security advisory, and it's all due to using C. We've gotten double free()s, wrong casts, and out-of-bounds memory accesses. Recently someone did fuzz-testing with some really pathological SVGs, and found interesting explosions in the library. That's the kind of 1970s bullshit that Rust prevents.

    I also hope that this will make it easier to actually write unit tests for librsvg. Currently we have some pretty nifty black-box tests for the whole library, which essentially take in complete SVG files, render them, and compare the results to a reference image. These are great for smoke testing and guarding against regressions. However, all the fine-grained machinery in librsvg has zero tests. It is always a pain in the ass to make static C functions testable "from the outside", or to make mock objects to provide them with the kind of environment they expect.

    So, on to Rustification!

    I've started with a bit of the code from librsvg that is fresh in my head: the state machine that renders SVG markers.

    SVG markers

    This image with markers comes from the official SVG test suite:

    SVG reference image with markers

    SVG markers let you put symbols along the nodes of a path. You can use them to draw arrows (arrowhead as an end marker on a line), points in a chart, and other visual effects.

    In the example image above, this is what is happening. The SVG defines four marker types:

    • A purple square that always stays upright.
    • A green circle.
    • A blue triangle that always stays upright.
    • A blue triangle whose orientation depends on the node where it sits.

    The top row, with the purple squares, is a path (the black line) that says, "put the purple-square marker on all my nodes".

    The middle row is a similar path, but it says, "put the purple-square marker on my first node, the green-circle marker on my middle nodes, and the blue-upright-triangle marker on my end node".

    The bottom row has the blue-orientable-triangle marker on all the nodes. The triangle is defined to point to the right (look at the bottommost triangles!). It gets rotated 45 degrees at the middle node, and 90 degrees so it points up at the top-left node.

    This was all fine and dandy, until one day we got a bug about incorrect rendering when there are funny paths. What makes a path funny?

    SVG image with funny arrows

    For the code that renders markers, a path is "funny" when it is not obvious how to compute the orientation of its nodes. A well-behaved node's orientation is just the average angle of the node's incoming and outgoing lines (or curves). But if a path has contiguous coincident vertices, or stray points that don't have incoming/outgoing lines (imagine a sequence of moveto commands), or curveto commands with Bézier control points that are coincident with the nodes... well, in those cases, librsvg has to follow the spec to the letter, for it says how to handle those things.

    In short, one has to walk the segments away from the node in question, until one finds a segment whose "directionality" can be computed: a segment that is an actual line or curve, not a coincident vertex nor a stray point.

    Librsvg's algorithm has two parts to it. The first part takes the linear sequence of PostScript-like commands (moveto, lineto, curveto, closepath) and turns them into a sequence of segments. Each segment has two endpoints and two tangent directions at those endpoints; if the segment is a line, the tangents point in the same direction as the line. Or, the segment can be degenerate and it is just a single point.

    The second part of the algorithm takes that list of segments for each node, and it does the walking-back-and-forth as described in the SVG spec. Basically, it finds the first non-degenerate segment on each side of a node, and uses the tangents of those segments to find the average orientation of the node.
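
    To make that walking concrete, here is a rough sketch in Rust of what the second part could look like. This is not librsvg's code; the Segment struct is the same idea as the one shown further down, the node indexing is my own simplification, and the naive angle averaging glosses over wrap-around:

    struct Segment {
        is_degenerate: bool,
        p1x: f64, p1y: f64,
        p2x: f64, p2y: f64,
        p3x: f64, p3y: f64,
        p4x: f64, p4y: f64
    }

    /* Angle of the outgoing tangent at the start of a segment */
    fn outgoing_angle (s: &Segment) -> f64 {
        (s.p2y - s.p1y).atan2 (s.p2x - s.p1x)
    }

    /* Angle of the incoming tangent at the end of a segment */
    fn incoming_angle (s: &Segment) -> f64 {
        (s.p4y - s.p3y).atan2 (s.p4x - s.p3x)
    }

    /* Walk away from the node in both directions until a non-degenerate
     * segment is found, then average the two tangent angles.  (A naive
     * average; the real code must be careful about angle wrap-around.) */
    fn node_angle (segments: &[Segment], node: usize) -> Option<f64> {
        let incoming = segments[..node].iter ().rev ().find (|s| !s.is_degenerate);
        let outgoing = segments[node..].iter ().find (|s| !s.is_degenerate);

        match (incoming, outgoing) {
            (Some (i), Some (o)) => Some ((incoming_angle (i) + outgoing_angle (o)) / 2.0),
            (Some (i), None)     => Some (incoming_angle (i)),
            (None, Some (o))     => Some (outgoing_angle (o)),
            (None, None)         => None
        }
    }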

    The path-to-segments code

    In the C code I had this:

    typedef struct {
        gboolean is_degenerate; /* If true, only (p1x, p1y) are valid.  If false, all are valid */
        double p1x, p1y;
        double p2x, p2y;
        double p3x, p3y;
        double p4x, p4y;
    } Segment;

    P1 and P4 are the endpoints of each Segment; P2 and P3 are, like in a Bézier curve, the control points from which the tangents can be computed.

    This translates readily to Rust:

    struct Segment {
        is_degenerate: bool, /* If true, only (p1x, p1y) are valid.  If false, all are valid */
        p1x: f64, p1y: f64,
        p2x: f64, p2y: f64,
        p3x: f64, p3y: f64,
        p4x: f64, p4y: f64
    }

    Then a little utility function:

    /* In C */
    #define EPSILON 1e-10
    #define DOUBLE_EQUALS(a, b) (fabs ((a) - (b)) < EPSILON)
    /* In Rust */
    const EPSILON: f64 = 1e-10;
    fn double_equals (a: f64, b: f64) -> bool {
        (a - b).abs () < EPSILON
    }

    And now, the actual code that transforms a cairo_path_t (a list of moveto/lineto/curveto commands) into a list of segments. I'll interleave C and Rust code with commentary.

    /* In C */
    typedef enum {
        SEGMENT_START,
        SEGMENT_END
    } SegmentState;
    static void
    path_to_segments (const cairo_path_t *path,
                      Segment **out_segments,
                      int *num_segments)
    /* In Rust */
    enum SegmentState {
        Start,
        End
    }
    fn path_to_segments (path: cairo::Path) -> Vec<Segment> {

    The enum is pretty much the same; Rust prefers CamelCase for enums instead of CAPITALIZED_SNAKE_CASE. The function prototype is much nicer in Rust. The cairo::Path is courtesy of gtk-rs, the budding Rust bindings for GTK+ and Cairo and all that goodness.

    The C version allocates the return value as an array of Segment structs, and returns it in the out_segments argument (... and the length of the array in num_segments). The Rust version returns a vector of Segment structs, which is mentally easier to deal with.

    Now, the variable declarations at the beginning of the function:

    /* In C */
        int i;
        double last_x, last_y;
        double cur_x, cur_y;
        double subpath_start_x, subpath_start_y;
        int max_segments;
        int segment_num;
        Segment *segments;
        SegmentState state;
    /* In Rust */
        let mut last_x: f64;
        let mut last_y: f64;
        let mut cur_x: f64;
        let mut cur_y: f64;
        let mut subpath_start_x: f64;
        let mut subpath_start_y: f64;
        let mut has_first_segment : bool;
        let mut segment_num : usize;
        let mut segments: Vec<Segment>;
        let mut state: SegmentState;

    In addition to having different type names (double becomes f64), Rust wants you to say when a variable will be mutable, i.e. when it is allowed to change value after its initialization.

    Also, note that in C there's an "i" variable, which is used as a counter. There is no such variable in the Rust version; there, we will use an iterator instead. And the Rust version has a new "has_first_segment" variable; read on to see its purpose.

        /* In C */
        max_segments = path->num_data; /* We'll generate maximum this many segments */
        segments = g_new (Segment, max_segments);
        *out_segments = segments;
        last_x = last_y = cur_x = cur_y = subpath_start_x = subpath_start_y = 0.0;
        segment_num = -1;
        state = SEGMENT_END;
        /* In Rust */
        cur_x = 0.0;
        cur_y = 0.0;
        subpath_start_x = 0.0;
        subpath_start_y = 0.0;
        has_first_segment = false;
        segment_num = 0;
        segments = Vec::new ();
        state = SegmentState::End;

    No problems here, just initializations. Note that in C we pre-allocate the segments array with a certain size. This is not the actual minimum size that the array will need; it is just an upper bound that comes from the way Cairo represents paths internally (it is not possible to compute the minimum size of the array without walking it first, so we use a good-enough value here that doesn't require walking). In the Rust version, we just create an empty vector and let it grow as needed.

    Note also that the C version initializes segment_num to -1, while the Rust version sets has_first_segment to false and segment_num to 0. Read on!

        /* In C */
        for (i = 0; i < path->num_data; i += path->data[i].header.length) {
            last_x = cur_x;
            last_y = cur_y;
        /* In Rust */
        for cairo_segment in path.iter () {
            last_x = cur_x;
            last_y = cur_y;

    We start iterating over the path's elements. Cairo, which is written in C, has a peculiar way of representing paths. path->num_data is the length of the path->data array. The elements of path->data[] can be either commands or point coordinates. Each command then specifies how many elements you need to "eat" to take in all its coordinates. Thus the "i" counter gets incremented on each iteration by path->data[i].header.length; this is the "how many to eat" magic value.

    The Rust version is more civilized. Get a path.iter() which feeds you Cairo path segments, and boom, you are done. That civilization is courtesy of the gtk-rs bindings. Onwards!

        /* In C */
            switch (path->data[i].header.type) {
            case CAIRO_PATH_MOVE_TO:
                segment_num++;
                g_assert (segment_num < max_segments);
        /* In Rust */
            match cairo_segment {
                cairo::PathSegment::MoveTo ((x, y)) => {
                    if has_first_segment {
                        segment_num += 1;
                    } else {
                        has_first_segment = true;
                    }

    The C version switch()es on the type of the path segment. It increments segment_num, our counter-of-segments, and checks that it doesn't overflow the space we allocated for the results array.

    The Rust version match()es on the cairo_segment, which is a Rust enum (think of it as a tagged union of structs). The first match case conveniently destructures the (x, y) coordinates; we will use them below.

    If you recall from the above, the C version initialized segment_num to -1. This code for MOVE_TO is the first case in the code that we will hit, and that "segment_num++" causes the value to become 0, which is exactly the index in the results array where we want to place the first segment. Rust *really* wants you to use a usize value to index arrays ("unsigned size"). I could have used a signed size value starting at -1 and then incremented it to zero, but then I would have to cast it to unsigned — which is slightly ugly. So I introduce a boolean variable, has_first_segment, and use that instead. I think I could refactor this to have another state in SegmentState and remove the boolean variable.

            /* In C */
                g_assert (i + 1 < path->num_data);
                cur_x = path->data[i + 1].point.x;
                cur_y = path->data[i + 1].point.y;
                subpath_start_x = cur_x;
                subpath_start_y = cur_y;
             /* In Rust */
                    cur_x = x;
                    cur_y = y;
                    subpath_start_x = cur_x;
                    subpath_start_y = cur_y;

    In the C version, I assign (cur_x, cur_y) from the path->data[], but first ensure that the index doesn't overflow. In the Rust version, the (x, y) values come from the destructuring described above.

            /* In C */
                segments[segment_num].is_degenerate = TRUE;
                segments[segment_num].p1x = cur_x;
                segments[segment_num].p1y = cur_y;
                state = SEGMENT_START;
             /* In Rust */
                    let seg = Segment {
                        is_degenerate: true,
                        p1x: cur_x,
                        p1y: cur_y,
                        p2x: 0.0, p2y: 0.0, p3x: 0.0, p3y: 0.0, p4x: 0.0, p4y: 0.0 // these are set in the next iteration
                    };
                    segments.push (seg);
                    state = SegmentState::Start;

    This is where my lack of Rust idiomatic skills really starts to show. In C I put (cur_x, cur_y) in the (p1x, p1y) fields of the current segment, and since it is_degenerate, I'll know that the other p2/p3/p4 fields are not valid — and like any C programmer who wears sandals instead of steel-toed boots, I leave their memory uninitialized. Rust doesn't want me to have uninitialized values EVER, so I must fill a Segment structure and then push() it into our segments vector.
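
    (As an aside, a possible way to avoid spelling out all those placeholder zeros would be to derive Default for the struct and use Rust's struct-update syntax. Just an idea I'm noting, not what the code above does; the struct is repeated here only so the snippet stands alone:)

    #[derive(Default)]
    struct Segment {
        is_degenerate: bool,
        p1x: f64, p1y: f64,
        p2x: f64, p2y: f64,
        p3x: f64, p3y: f64,
        p4x: f64, p4y: f64
    }

    fn degenerate_segment (x: f64, y: f64) -> Segment {
        Segment {
            is_degenerate: true,
            p1x: x,
            p1y: y,
            ..Default::default () /* every other field becomes 0.0 */
        }
    }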

    So, the C version really wants to have a segment_num counter where I can keep track of which index I'm filling. Why is there a similar counter in the Rust version? We will see why in the next case.

            /* In C */
            case CAIRO_PATH_LINE_TO:
                g_assert (i + 1 < path->num_data);
                cur_x = path->data[i + 1].point.x;
                cur_y = path->data[i + 1].point.y;
                if (state == SEGMENT_START) {
                    segments[segment_num].is_degenerate = FALSE;
                    state = SEGMENT_END;
                } else /* SEGMENT_END */ {
                    segment_num++;
                    g_assert (segment_num < max_segments);
                    segments[segment_num].is_degenerate = FALSE;
                    segments[segment_num].p1x = last_x;
                    segments[segment_num].p1y = last_y;
                }
                segments[segment_num].p2x = cur_x;
                segments[segment_num].p2y = cur_y;
                segments[segment_num].p3x = last_x;
                segments[segment_num].p3y = last_y;
                segments[segment_num].p4x = cur_x;
                segments[segment_num].p4y = cur_y;
             /* In Rust */
                cairo::PathSegment::LineTo ((x, y)) => {
                    cur_x = x;
                    cur_y = y;
                    match state {
                        SegmentState::Start => {
                            segments[segment_num].is_degenerate = false;
                            state = SegmentState::End;
                        }
                        SegmentState::End => {
                            segment_num += 1;
                            let seg = Segment {
                                is_degenerate: false,
                                p1x: last_x,
                                p1y: last_y,
                                p2x: 0.0, p2y: 0.0, p3x: 0.0, p3y: 0.0, p4x: 0.0, p4y: 0.0  // these are set below
                            };
                            segments.push (seg);
                        }
                    }
                    segments[segment_num].p2x = cur_x;
                    segments[segment_num].p2y = cur_y;
                    segments[segment_num].p3x = last_x;
                    segments[segment_num].p3y = last_y;
                    segments[segment_num].p4x = cur_x;
                    segments[segment_num].p4y = cur_y;

    Whoa! But let's piece it apart bit by bit.

    First we set cur_x and cur_y from the path data, as usual.

    Then we roll the state machine. Remember we got a LINE_TO. If we are in the state START ("just have a single point, possibly a degenerate one"), then we turn the old segment into a non-degenerate, complete line segment. If we are in the state END ("we were already drawing non-degenerate lines"), we create a new segment and fill it in. I'll probably change the names of those states to make it more obvious what they mean.

    In C we had a preallocated array for "segments", so the idiom to create a new segment is simply "segment_num++". In Rust we grow the segments array as we go, hence the "segments.push (seg)".

    I will probably refactor this code. I don't like it that it looks like

        case move_to:
            start possibly-degenerate segment
        case line_to:
            are we in a possibly-degenerate segment?
                yes: make it non-degenerate and remain in that segment...
                no: create a new segment, switch to it, and fill its first fields...
    	... for both cases, fill in the last fields of the segment

    That is, the "yes" case fills in fields from the segment we were handling in the *previous* iteration, while the "no" case fills in fields from a *new* segment that we created in the present iteration. That asymmetry bothers me. Maybe we should build up the next-segment's fields in auxiliary variables, and only put them in a complete Segment structure once we really know that we are done with that segment? I don't know; we'll see what is more legible in the end.

    The other two cases, for CURVE_TO and CLOSE_PATH, are analogous, except that CURVE_TO handles a bunch more coordinates for the control points, and CLOSE_PATH goes back to the coordinates from the last point that was a MOVE_TO.
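
    For reference, my guess at the shape of the CurveTo arm with gtk-rs looks something like this. It is a sketch, not the final code, and the destructuring of the three control-point tuples is an assumption on my part:

        /* In Rust (sketch, not the final code) */
            cairo::PathSegment::CurveTo ((x2, y2), (x3, y3), (x4, y4)) => {
                cur_x = x4;
                cur_y = y4;

                /* ... the same Start/End dance as in LineTo goes here ... */

                segments[segment_num].p2x = x2;
                segments[segment_num].p2y = y2;
                segments[segment_num].p3x = x3;
                segments[segment_num].p3y = y3;
                segments[segment_num].p4x = cur_x;
                segments[segment_num].p4y = cur_y;
            }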

    And those tests you were talking about?

    Well, I haven't written them yet! This is my very first Rust code, after reading a pile of getting-started documents.

    Already in the case for CLOSE_PATH I think I've found a bug. It doesn't really create a segment for multi-line paths when the path is being closed. The reftests didn't catch this because none of the reference images with SVG markers uses a CLOSE_PATH command! The unit tests for this path_to_segments() machinery should be able to find this easily, and closer to the root cause of the bug.
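
    For instance, once path_to_segments() lives in Rust, a test for that case could be as small as the following sketch. The way I build a throwaway cairo::Path here (ImageSurface::create, Context::new, copy_path) is my assumption about the gtk-rs API, so treat it as pseudocode until it compiles:

    #[cfg(test)]
    mod tests {
        use super::*;

        #[test]
        fn close_path_generates_a_closing_segment () {
            /* Build a small triangular path with cairo */
            let surface = cairo::ImageSurface::create (cairo::Format::ARgb32, 10, 10);
            let cr = cairo::Context::new (&surface);

            cr.move_to (10.0, 10.0);
            cr.line_to (20.0, 10.0);
            cr.line_to (20.0, 20.0);
            cr.close_path ();

            let segments = path_to_segments (cr.copy_path ());

            /* Two explicit lines plus the implicit line back to the start;
             * today's code produces only 2, which is the bug. */
            assert_eq! (segments.len (), 3);
        }
    }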

    What's next?

    Learning how to link and call that Rust code from the C library for librsvg. Then I'll be able to remove the corresponding C code.
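
    As far as I understand it (I haven't actually done this yet), the general shape is a #[no_mangle] extern "C" function on the Rust side, plus a matching prototype in the C headers. A trivial sketch with the double_equals() helper from above; the rsvg_rust_ prefix is just something made up for the example:

    #[no_mangle]
    pub extern "C" fn rsvg_rust_double_equals (a: f64, b: f64) -> i32 {
        /* gboolean is an int on the C side, so return 0/1 explicitly */
        if double_equals (a, b) { 1 } else { 0 }
    }

    The C code would then declare "int rsvg_rust_double_equals (double a, double b);" and call it like any other function, with the compiled Rust library added at link time.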

    Feeling safer already?

darktable 2.0.7 released

we're proud to announce the seventh bugfix release for the 2.0 series of darktable, 2.0.7!

the github release is here: https://github.com/darktable-org/darktable/releases/tag/release-2.0.7.

as always, please don't use the autogenerated tarball provided by github, but only our tar.xz. the checksum is:

a9226157404538183549079e3b8707c910fedbb669bd018106bdf584b88a1dab  darktable-2.0.7.tar.xz
???  darktable-2.0.7.dmg

and the changelog as compared to 2.0.6 can be found below.

New Features

  • Filter-out some EXIF tags when exporting. Helps keep metadata size below max limit of ~64Kb
  • Support the new Canon EOS 80D {m,s}RAW format
  • Always show rendering intent selector in lighttable view
  • Clear elevation when clearing geo data in map view
  • Temperature module, invert module: add SSE vectorization for X-Trans
  • Temperature module: add keyboard shortcuts for presets


Bugfixes

  • Rawspeed: fixes for building with libjpeg (as opposed to libjpeg-turbo)
  • OpenCL: always use blocking memory transfer between host and device
  • OpenCL: remove bogus static keyword in extended.cl
  • Fix crash with missing configured display profile
  • Histogram: always show aperture with one digit after dot
  • Show if OpenEXR is supported in --version
  • Rawspeed: use a non-deprecated way of getting OSX version
  • Don't show bogus message about local copy when trying to delete physically deleted image

Base Support (newly added or small fixes)

  • Canon EOS 100D
  • Canon EOS 300D
  • Canon EOS 6D
  • Canon EOS 700D
  • Canon EOS 80D (sRaw1, sRaw2)
  • Canon PowerShot A720 IS (dng)
  • Fujifilm FinePix S100FS
  • Nikon D3400 (12bit-compressed)
  • Panasonic DMC-FZ300 (4:3)
  • Panasonic DMC-G8 (4:3)
  • Panasonic DMC-G80 (4:3)
  • Panasonic DMC-GX80 (4:3)
  • Panasonic DMC-GX85 (4:3)
  • Pentax K-70

Base Support (fixes, was broken in 2.0.6, apologies for inconvenience)

  • Nikon 1 AW1
  • Nikon 1 J1 (12bit-compressed)
  • Nikon 1 J2 (12bit-compressed)
  • Nikon 1 J3
  • Nikon 1 J4
  • Nikon 1 S1 (12bit-compressed)
  • Nikon 1 S2
  • Nikon 1 V1 (12bit-compressed)
  • Nikon 1 V2
  • Nikon Coolpix A (14bit-compressed)
  • Nikon Coolpix P330 (12bit-compressed)
  • Nikon Coolpix P6000
  • Nikon Coolpix P7000
  • Nikon Coolpix P7100
  • Nikon Coolpix P7700 (12bit-compressed)
  • Nikon Coolpix P7800 (12bit-compressed)
  • Nikon D1
  • Nikon D3 (12bit-compressed, 12bit-uncompressed)
  • Nikon D3000 (12bit-compressed)
  • Nikon D3100
  • Nikon D3200 (12bit-compressed)
  • Nikon D3S (12bit-compressed, 12bit-uncompressed)
  • Nikon D4 (12bit-compressed, 12bit-uncompressed)
  • Nikon D5 (12bit-compressed, 12bit-uncompressed)
  • Nikon D50
  • Nikon D5100
  • Nikon D5200
  • Nikon D600 (12bit-compressed)
  • Nikon D610 (12bit-compressed)
  • Nikon D70
  • Nikon D7000
  • Nikon D70s
  • Nikon D7100 (12bit-compressed)
  • Nikon E5400
  • Nikon E5700 (12bit-uncompressed)

We were unable to bring back these 4 cameras, because we have no samples.
If anyone reading this owns such a camera, please do consider providing samples.

  • Nikon E8400
  • Nikon E8800
  • Nikon D3X (12-bit)
  • Nikon Df (12-bit)

White Balance Presets

  • Pentax K-70

Noise Profiles

  • Sony DSC-RX10

Translations Updates

  • Catalan
  • German

October 23, 2016

Los Alamos Artists Studio Tour

[JunkDNA Art at the LA Studio Tour] The Los Alamos Artists Studio Tour was last weekend. It was a fun and somewhat successful day.

I was borrowing space in the studio of the fabulous scratchboard artist Heather Ward, because we didn't have enough White Rock artists signed up for the tour.

Traffic was sporadic: we'd have long periods when nobody came by (I was glad I'd brought my laptop, and managed to get some useful development done on track management in pytopo), punctuated by bursts where three or four groups would show up all at once.

It was fun talking to the people who came by. They all had questions about both my metalwork and Heather's scratchboard, and we had a lot of good conversations. Not many of them were actually buying -- I heard the same thing afterward from most of the other artists on the tour, so it wasn't just us. But I still sold enough that I more than made back the cost of the tour. (I hadn't realized, prior to this, that artists have to pay to be in shows and tours like this, so there's a lot of incentive to sell enough at least to break even.) Of course, I'm nowhere near covering the cost of materials and equipment. Maybe some day ...

[JunkDNA Art at the LA Studio Tour]

I figured snacks are always appreciated, so I set out my pelican snack bowl -- one of my first art pieces -- with brownies and cookies in it, next to the business cards.

It was funny how wrong I was in predicting what people would like. I thought everyone would want the roadrunners and dragonflies; in practice, scorpions were much more popular, along with a sea serpent that had been sitting on my garage shelf for a month while I tried to figure out how to finish it. (I do like how it eventually came out, though.)

And then after selling both my scorpions on Saturday, I rushed to make two more on Saturday night and Sunday morning, and of course no one on Sunday had the slightest interest in scorpions. Dave, who used to have a foot in the art world, tells me this is typical, and that artists should never make what they think the market will like; just go on making what you like yourself, and hope it works out.

Which, fortunately, is mostly what I do at this stage, since I'm mostly puttering around for fun and learning.

Anyway, it was a good learning experience, though I was a little stressed getting ready for it and I'm glad it's over. Next up: a big spider for the front yard, before Halloween.

October 20, 2016


My prior post showed my research from earlier in the year at the 2016 Linux Security Summit on kernel security flaw lifetimes. Now that CVE-2016-5195 is public, here are updated graphs and statistics. Due to their rarity, the Critical bug average has now jumped from 3.3 years to 5.2 years. There aren’t many, but, as I mentioned, they still exist, whether you know about them or not. CVE-2016-5195 was sitting on everyone’s machine when I gave my LSS talk, and there are still other flaws on all our Linux machines right now. (And, I should note, this problem is not unique to Linux.) Dealing with knowing that there are always going to be bugs present requires proactive kernel self-protection (to minimize the effects of possible flaws) and vendors dedicated to updating their devices regularly and quickly (to keep the exposure window minimized once a flaw is widely known).

So, here are the graphs updated for the 668 CVEs known today:

  • Critical: 3 @ 5.2 years average
  • High: 44 @ 6.2 years average
  • Medium: 404 @ 5.3 years average
  • Low: 216 @ 5.5 years average

© 2016, Kees Cook. This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 License.

The Electoral College

Episode 4 in a series “Things that are the way they are because of constraints that no longer apply” (or: why we don’t change processes we have invested in that don’t make sense any more)

A US presidential election year is a wondrous thing. There are few places around the world where the campaign for head of state begins in earnest 18 months before the winner will take office. We are now in the home straight, with the final Presidential debate behind us, and election day coming up in 3 weeks, on the Tuesday after the first Monday in November (this year, that’s November 8th). And as with every election cycle, much time will be spent explaining the electoral college. This great American institution is at the heart of how America elects its President. Every 4 years, there are calls to reform it, to move to a different system, and yet it persists. What is it, where did it come from, and why does it cause so much controversy?

In the US, people do not vote for the President directly in November. Instead, they vote for electors – people who represent the state in voting for the President. A state gets a number of electoral votes equal to its number of senators (2) plus its number of US representatives (this varies based on population). Sparsely populated states like Alaska and Montana get 3 electoral votes, while California gets 55. In total, there are 538 electors, and a majority of 270 electoral votes is needed to secure the presidency. What happens if the candidates fail to get a majority of the electors is outside the scope of this blog post, and in these days of a two party system, it is very unlikely (although not impossible).

State parties nominate elector lists before the election, and on election day, voters vote for the elector slate corresponding to their preferred candidate. Electoral votes can be awarded differently from state to state. In Nebraska, for example, there are 2 statewide electors for the winner of the statewide vote, and one elector for each congressional district, while in most states, the elector lists are chosen on a winner take all basis. After the election, the votes are counted in the local county, and sent to the state's secretary of state for certification.

Once the election results are certified (which can take up to a month), the electors meet in their state in mid December to record their votes for president and vice president. Most states (but not all!) have laws restricting who electors are allowed to vote for, making this mostly a ceremonial position. The votes are then sent to the US senate and the national archivist for tabulation, and the votes are then cross referenced before being sent to a joint session of Congress in early January. Congress counts the electoral votes and declares the winner of the presidency. Two weeks later, the new President takes office (those 2 weeks are to allow for the process where no-one gets a majority in the electoral college).

Because it is possible to win heavily in some states with few electoral votes, and lose narrowly in others with a lot of electoral votes, it is possible to win the presidency without having a majority of Americans vote for you (as George W. Bush did in 2000). In modern elections, the electoral college can result in a huge difference of attention between “safe” states, and “swing” states – the vast majority of campaigning is done in only a dozen or so states, while states like Texas and Massachusetts do not get as much attention.

Why did the founding fathers of the US come up with such a convoluted system? Why not have people vote for the President directly, and have the counts of the states tabulated directly, without the pomp and ceremony of the electoral college vote?

First, think back to 1787, when the US constitution was written. The founders of the state had an interesting set of principles and constraints they wanted to uphold:

  • Big states should not be able to dominate small states
  • Similarly, small states should not be able to dominate big states
  • No political parties existed (and the founding fathers hoped it would stay that way)
  • Added 2016-10-21: Different states wanted to give a vote to different groups of people (and states with slavery wanted slaves to count in the population)
  • In the interests of having presidents who represented all of the states, candidates should have support outside their own state – in an era where running a national campaign was impractical
  • There was a logistical issue of finding out what happened on election day and determining the winner

To satisfy these constraints, a system was chosen which ensured that small states had a proportionally bigger say (by giving an electoral vote for each Senator), but more populous states still have a bigger say (by getting an electoral vote for each congressman). In the first elections, electors voted for 2 candidates, of which only one could be from their state, meaning that winning candidates had support from outside their state. The President was the person who got the most electoral votes, and the vice president was the candidate who came second – even if (as was the case with John Adams and Thomas Jefferson) they were not in the same party. It also created the possibility (as happened with Thomas Jefferson and Aaron Burr) that a vice presidential candidate could get the same number of electoral votes as the presidential candidate, resulting in Congress deciding who would be president. The modern electoral college was created with the 12th amendment to the US constitution in 1803.

Another criticism of direct voting is that populist demagogues could be elected by the people, but electors (being of the political classes) could be expected to be better informed, and make better decisions, about who to vote for. Alexander Hamilton wrote in The Federalist #68 that: “It was equally desirable, that the immediate election should be made by men most capable of analyzing the qualities adapted to the station, and acting under circumstances favorable to deliberation, and to a judicious combination of all the reasons and inducements which were proper to govern their choice. A small number of persons, selected by their fellow-citizens from the general mass, will be most likely to possess the information and discernment requisite to such complicated investigations.” These days, most states have laws which require their electors to vote in accordance with the will of the electorate, so that original goal is now mostly obsolete.

A big part of the reason for having over two months between the election and the president taking office (and prior to 1934, it was 4 months) is, in part, due to the size of the colonial USA. The administrative unit for counting, the county, was defined so that every citizen could get to the county courthouse and home in a day’s ride – and after an appropriate amount of time to count the ballots, the results were sent to the state capital for certification, which could take up to 4 days in some states like Kentucky or New York. And then the electors needed to be notified, and attend the official elector count in the state capital. And then the results needed to be sent to Washington, which could take up to 2 weeks, and Congress (which was also having elections) needed to meet to ratify the results. All of these things took time, amplified by the fact that travel happened on horseback.

So at least in part, the electoral college system is based on how long, logistically, it took to bring the results to Washington and have Congress ratify them. The inauguration used to be on March 4th, because that was how long it took for the process to run its course. It was not until 1934 and the 20th amendment to the constitution that the date was moved to January.

Incidentally, two other legally set constraints for election day are also based on constraints that no longer apply. Elections happen on a Tuesday, because of the need not to interfere with two key events: sabbath (Sunday) and market (Wednesday). And the elections were held in November primarily so as not to interfere with harvest. These dates and reasoning, set in stone in 1845, persist today.

October 19, 2016

FOSDEM SDN & NFV DevRoom Call for Content

We are pleased to announce the Call for Participation in the FOSDEM 2017 Software Defined Networking and Network Functions Virtualization DevRoom!

Important dates:

  • Nov 16: Deadline for submissions
  • Dec 1: Speakers notified of acceptance
  • Dec 5: Schedule published

This year the DevRoom topics will cover two distinct fields:

  • Software Defined Networking (SDN), covering virtual switching, open source SDN controllers, virtual routing
  • Network Functions Virtualization (NFV), covering open source network functions, NFV management and orchestration tools, and topics related to the creation of an open source NFV platform

We are now inviting proposals for talks about Free/Libre/Open Source Software on the topics of SDN and NFV. This is an exciting and growing field, and FOSDEM gives an opportunity to reach a unique audience of very knowledgeable and highly technical free and open source software activists.

This year, the DevRoom will focus on low-level networking and high performance packet processing, network automation of containers and private cloud, and the management of telco applications to maintain very high availability and performance independent of whatever the world can throw at their infrastructure (datacenter outages, fires, broken servers, you name it).

A representative list of the projects and topics we would like to see on the schedule are:

  • Low-level networking and switching: IOvisor, eBPF, XDP, fd.io, Open vSwitch, OpenDataplane, …
  • SDN controllers and overlay networking: OpenStack Neutron, Canal, OpenDaylight, ONOS, Plumgrid, OVN, OpenContrail, Midonet, …
  • NFV Management and Orchestration: Open-O, ManageIQ, Juju, OpenBaton, Tacker, OSM, network management, PNDA.io, …
  • NFV related features: Service Function Chaining, fault management, dataplane acceleration, security, …

Talks should be aimed at a technical audience, but should not assume that attendees are already familiar with your project or how it solves a general problem. Talk proposals can be very specific solutions to a problem, or can be higher level project overviews for lesser known projects.

Please include the following information when submitting a proposal:

  • Your name
  • The title of your talk (please be descriptive, as titles will be listed with around 250 others from other projects)
  • Short abstract of one or two paragraphs
  • Short bio (with photo)

The deadline for submissions is November 16th 2016. FOSDEM will be held on the weekend of February 4-5, 2017 and the SDN/NFV DevRoom will take place on Saturday, February 4, 2017 (Updated 2016-10-20: an earlier version incorrectly said the DevRoom was on Sunday). Please use the following website to submit your proposals: https://penta.fosdem.org/submission/FOSDEM17 (you do not need to create a new Pentabarf account if you already have one from past years).

You can also join the devroom’s mailing list, which is the official communication channel for the DevRoom: network-devroom@lists.fosdem.org (subscription page: https://lists.fosdem.org/listinfo/network-devroom)

– The Networking DevRoom 2016 Organization Team

Security bug lifetime

In several of my recent presentations, I’ve discussed the lifetime of security flaws in the Linux kernel. Jon Corbet did an analysis in 2010, and found that security bugs appeared to have roughly a 5 year lifetime. As in, the flaw gets introduced in a Linux release, and then goes unnoticed by upstream developers until another release 5 years later, on average. I updated this research for 2011 through 2016, and used the Ubuntu Security Team’s CVE Tracker to assist in the process. The Ubuntu kernel team already does the hard work of trying to identify when flaws were introduced in the kernel, so I didn’t have to re-do this for the 557 kernel CVEs since 2011.

As the README details, the raw CVE data is spread across the active/, retired/, and ignored/ directories. By scanning through the CVE files to find any that contain the line “Patches_linux:”, I can extract the details on when a flaw was introduced and when it was fixed. For example CVE-2016-0728 shows:

 break-fix: 3a50597de8635cd05133bd12c95681c82fe7b878 23567fd052a9abb6d67fe8e7a9ccdd9800a540f2

This means that CVE-2016-0728 is believed to have been introduced by commit 3a50597de8635cd05133bd12c95681c82fe7b878 and fixed by commit 23567fd052a9abb6d67fe8e7a9ccdd9800a540f2. If there are multiple lines, then there may be multiple SHAs identified as contributing to the flaw or the fix. And a “-” is just short-hand for the start of Linux git history.
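
For the curious, the per-line extraction is trivial. Here is a rough sketch (not the actual scripts used for this analysis) of pulling the introducing and fixing SHAs out of a “break-fix:” line:

    // Sketch: extract the introducing and fixing SHAs from a "break-fix:" line.
    fn parse_break_fix (line: &str) -> Option<(String, String)> {
        let line = line.trim ();
        if !line.starts_with ("break-fix:") {
            return None;
        }
        let mut fields = line.split_whitespace ().skip (1); // skip "break-fix:"
        let introduced = fields.next ()?.to_string ();      // "-" means the start of git history
        let fixed = fields.next ()?.to_string ();
        Some ((introduced, fixed))
    }

    fn main () {
        let line = " break-fix: 3a50597de8635cd05133bd12c95681c82fe7b878 23567fd052a9abb6d67fe8e7a9ccdd9800a540f2";
        if let Some ((introduced, fixed)) = parse_break_fix (line) {
            println! ("introduced by {}, fixed by {}", introduced, fixed);
        }
    }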

Then for each SHA, I queried git to find its corresponding release, and made a mapping of release version to release date, wrote out the raw data, and rendered graphs. Each vertical line shows a given CVE from when it was introduced to when it was fixed. Red is “Critical”, orange is “High”, blue is “Medium”, and black is “Low”:

CVE lifetimes 2011-2016

And here it is zoomed in to just Critical and High:

Critical and High CVE lifetimes 2011-2016

The line in the middle is the date from which I started the CVE search (2011). The vertical axis is actually linear time, but it’s labeled with kernel releases (which are pretty regular). The numerical summary is:

  • Critical: 2 @ 3.3 years
  • High: 34 @ 6.4 years
  • Medium: 334 @ 5.2 years
  • Low: 186 @ 5.0 years

This comes out to roughly 5 years lifetime again, so not much has changed from Jon’s 2010 analysis.

While we’re getting better at fixing bugs, we’re also adding more bugs. And for many devices that have been built on a given kernel version, there haven’t been frequent (or some times any) security updates, so the bug lifetime for those devices is even longer. To really create a safe kernel, we need to get proactive about self-protection technologies. The systems using a Linux kernel are right now running with security flaws. Those flaws are just not known to the developers yet, but they’re likely known to attackers, as there have been prior boasts/gray-market advertisements for at least CVE-2010-3081 and CVE-2013-2888.

(Edit: see my updated graphs that include CVE-2016-5195.)

© 2016, Kees Cook. This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 License.

October 18, 2016

Microwave Time Remainder Temporal Disorientation, a definition

Microwave Time Remainder Temporal Disorientation - definition: The disorientation experienced when the remaining cook time on a microwave display appears to be a feasible but inaccurate time of day.


1:15 PM: Suzie puts her leftover pork chops in the office microwave, enters 5:00, and hits Start. After 1 minute and 17 seconds, she hears sizzling, opens the microwave door and takes her meal.

1:25 PM: John walks by the microwave, sees 3:43 on the display and thinks: “What!? My life is slipping away from me!”

October 16, 2016

FreeCAD BIM development news

Here goes a little report from the FreeCAD (http://www.freecadweb.org) front, showing a couple of things I've been working on in the last weeks. Site: As a follow-up of this post (http://yorik.uncreated.net/guestblog.php?2016=269), several new features have been added to the Arch Site object (http://www.freecadweb.org/wiki/index.php?title=Arch_Site). The most important is that the Site is now a Part object, which means it has a...

October 13, 2016

October 12, 2016

Highlight Bloom and Photoillustration Look


Replicating a 'Lucisart'/Dave Hill type illustrative look

Over in the forums community member Sebastien Guyader (@sguyader) posted a neat workflow for emulating a photo-illustrative look popularized by photographers like Dave Hill where the resulting images often seem to have a sort of hyper-real feeling to them. Some of this feeling comes from a local-contrast boost and slight ‘blooming’ of the lighter tones in the image (though arguably most of the look is due to lighting and compositing of multiple elements).

To illustrate, here are a few representative samples of Dave Hill’s work that reflects this feeling:

A collection of example images. ©Dave Hill

There is also a video of Dave presenting how he brought together the idea and images for the series that the first image above is from.

This effect is also popularized in Photoshop® filters such as LucisArt in an effort to attain what some would (erroneously) call an “HDR” effect. Really what they likely mean is a not-so-subtle tone mapping. In particular, the exaggerated local contrast is often what garners folks' attention.

We had previously posted about a method for exaggerating fine local contrasts and details using the “Freaky Details” method described by Calvin Hollywood. This workflow provides a similar idea but different results that many might find more appealing (it’s not as gritty as the Freaky Details approach).

Sebastien produced some great looking preview images to give folks a feeling for what the process would produce:

Images from pixabay (CC0, public domain): motorcycle, car, woman.

Replicating a “Dave Hill”/“LucisArt” effect

Sebastien’s approach relies only on having the always useful G’MIC plugin for GIMP. The general workflow is to do a high-pass frequency separation, and to apply some effects like local contrast enhancement and some smoothing on the residual low-pass layer. Then recombine the high+low pass layers to get the final result.

  1. Open the image.
  2. Duplicate the base layer.
    Rename it to “Lowpass”.
  3. With the top layer (“Lowpass”) active, open G’MIC.
  4. Use the Photocomix smoothing filter:

    Testing → Photocomix → Photocomix smoothing

    Set the Amplitude to 10. Apply.
    This is to taste, but a good starting place might be around 1% of the image dimensions (so for a 2000px wide image, try using an Amplitude of 20).
  5. Change the “Lowpass” layer blend mode to Grain extract.
  6. Right-Click on the layer and choose New from visible.
    Rename this layer from “Visible“ to something more memorable like “Highpass” and set its layer mode to Grain merge.
    Turn off this layer visibility for now.
  7. Activate the “Lowpass” layer and set its layer blend mode back to Normal.
    The rest of the filters are applied to this “Lowpass” layer.
  8. Open G’MIC again.
    Apply the Simple local contrast filter:

    Details → Simple local contrast

    • Edge Sensitivity to 25
    • Iterations to 1
    • Paint effect to 50
    • Post-gamma to 1.20
  9. Open G’MIC again.
    Now apply the Graphic novel filter:

    Artistic → Graphic novel

    • check the Skip this step checkbox for Apply Local Normalization
    • Pencil size to 1
    • Pencil amplitude to 100-200
    • Pencil smoother sharpness/edge protection/smoothness
      to 0
    • Boost merging options Mixer to Soft light
    • Painter’s touch sharpness to 1.26
    • Painter’s edge protection flow to 0.37
    • Painter’s smoothness to 1.05
  10. Finally, make the “Highpass” layer visible again to bring back the fine details.

Trying It Out!

Let’s walk through the process. Sebastien got his sample images from the website https://pixabay.com, so I thought I would follow suit and find something suitable from there also. After some searching I found this neat image from Jerzy Gorecki licensed Creative Commons 0/Public Domain.

The base image (link).
From pixabay (CC0, Public Domain): Jerzy Gorecki.

Frequency Separation

The first steps (1—7) are to create a High/Low pass frequency separation of the image. If you have a different method for obtaining the separation then feel free to use it. Sebastien uses the Photocomix smoothing filter to create his low-pass layer (other options might be Gaussian blur, bi-lateral smoothing, or even wavelets).

The basic steps to do this are to duplicate the base layer, blur it, then set the layer blend mode to Grain extract and create a new layer from visible. The new layer will be the Highpass (high-frequency) details and should have its layer blend mode set to Grain merge. The original blurred layer is the Lowpass (low-frequency) information and should have its layer blend mode set back to Normal.
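
For the curious, the arithmetic behind why this round-trips cleanly (my summary of how GIMP's legacy grain modes behave, not something from Sebastien's write-up): with 8-bit values,

    Grain extract:  result = bottom - top + 128
    Grain merge:    result = bottom + top - 128

so extracting the blurred “lowpass” layer from the original leaves highpass = original - lowpass + 128, and merging that “highpass” layer back over “lowpass” gives lowpass + (original - lowpass + 128) - 128 = original again.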

So, following Sebastien’s steps, duplicate the base layer and rename the layer to “lowpass”. Then open G’MIC and apply:

Testing → Photocomix → Photocomix smoothing

with an amplitude of around 20. Change this to suit your own taste, but about 1% of the image width is a decent starting point. You’ll now have the base layer and the “lowpass” layer above it that has been smoothed:

The “lowpass” layer after Photocomix smoothing with Amplitude set to 20.

Setting the “lowpass” layer blend mode to Grain extract will reveal the high-frequency details:

The high-frequency details visible after setting the blurred “lowpass” layer blend mode to Grain extract.

Now create a new layer from what is currently visible. Either right-click the “lowpass” layer and choose “New from visible” or from the menus:

Layer → New from Visible

Rename this new layer from “Visible” to “highpass” and set its layer blend mode to Grain merge. Select the “lowpass” layer and set its layer blend mode back to Normal.


The visible result should be back to what your starting image looked like. The rest of the steps for this tutorial will operate on the “lowpass” layer. You can leave the “highpass” filter visible during the rest of the steps to see what your results will look like.

Modifying the Low-Frequency Layer

These next steps will modify the underlying low-frequency image information to smooth it out and give it a bit of a contrast boost. First the “Simple local contrast” filter will separate tones and do some preliminary smoothing, while the “Graphic novel” filter will provide a nice boost to light tones along with further smoothing.

Simple Local Contrast

On the “lowpass” layer, open G’MIC and find the “Simple local contrast” filter:

Details → Simple local contrast

Change the following settings:

  • Edge Sensitivity to 25
  • Iterations to 1
  • Paint effect to 50
  • Post-gamma to 1.20

This will smooth out overall tones while simultaneously providing a nice local contrast boost. This is the step that causes small lighting details to “pop”:

After applying the “Simple local contrast” filter.

The contrast increase provides a nice visual punch to the image. The addition of the “Graphic novel” filter will push the overall image much closer to a feeling of a photo-illustration.

Graphic Novel

Still on the “lowpass” layer, re-open G’MIC and open the “Graphic Novel” filter:

Artistic → Graphic novel

Change the following settings:

  • check the Skip this step checkbox for Apply Local Normalization
  • Pencil size to 1
  • Pencil amplitude to 100-200
  • Pencil smoother sharpness/edge protection/smoothness
    to 0
  • Boost merging options Mixer to Soft light
  • Painter’s touch sharpness to 1.26
  • Painter’s edge protection flow to 0.37
  • Painter’s smoothness to 1.05

The intent with this filter is to further smooth the overall tones, simplify details, and to give a nice boost to the light tones of the image:

After applying the “Graphic novel” filter.

The effect at 100% opacity can be a little strong. If so, simply adjust the opacity of the “lowpass” layer to taste. In some cases it would probably be desirable to mask areas you don’t want the effect applied to.

I’ve included the GIMP .xcf.bz2 file of this image while I was working on it for this article. You can download the file here (34.9MB). I did each step on a new layer so if you want to see the results of each effect step-by-step, simply turn that layer on/off:

Example XCF layers.

Finally, a great big Thank You! to Sebastien Guyader (@sguyader) for sharing this with everyone in the community!

A G’MIC Command

Of course, this wouldn’t be complete if someone didn’t come along with the direct G’MIC commands to get a similar result! And we can thank Iain Fergusson (@Iain) for coming up with the commands:

--gimp_anisotropic_smoothing[0] 10,0.16,0.63,0.6,2.35,0.8,30,2,0,1,1,0,1

-sub[0] [1]

-simplelocalcontrast_p[1] 25,1,50,1,1,1.2,1,1,1,1,1,1
-gimp_graphic_novelfxl[1] 1,2,6,5,20,0,1,100,0,1,0,0.78,1.92,0,0,2,1,1,1,1.26,0.37,1.05
-c 0,255

October 11, 2016

New Mexico LWV Voter Guides are here!

[Vote button] I'm happy to say that our state League of Women Voters Voter Guides are out for the 2016 election.

My grandmother was active in the League of Women Voters most of her life (at least after I was old enough to be aware of such things). I didn't appreciate it at the time -- and I also didn't appreciate that she had been born in a time when women couldn't legally vote, and the 19th amendment, giving women the vote, was ratified just a year before she reached voting age. No wonder she considered the League so important!

The LWV continues to work to extend voting to people of all genders, races, and economic groups -- especially important in these days when the Voting Rights Act is under attack and so many groups are being disenfranchised. But the League is important for another reason: local LWV chapters across the country produce detailed, non-partisan voter guides for each major election, which are distributed free of charge to voters. In many areas -- including here in New Mexico -- there's no equivalent of the "Legislative Analyst" who writes the lengthy analyses that appear on California ballots weighing the pros, cons and financial impact of each measure. In the election two years ago, not that long after Dave and I moved here, finding information on the candidates and ballot measures wasn't easy, and the LWV Voter Guide was by far the best source I saw. It's the main reason I joined the League, though I also appreciate the public candidate forums and other programs they put on.

LWV chapters are scrupulous about collecting information from candidates in a fair, non-partisan way. Candidates' statements are presented exactly as they're received, and all candidates are given the same specifications and deadlines. A few candidates ignored us this year and didn't send statements despite repeated emails and phone calls, but we did what we could.

New Mexico's state-wide voter guide -- the one I was primarily involved in preparing -- is at New Mexico Voter Guide 2016. It has links to guides from three of the four local LWV chapters: Los Alamos, Santa Fe, and Central New Mexico (Albuquerque and surrounding areas). The fourth chapter, Las Cruces, is still working on their guide and they expect it soon.

I was surprised to see that our candidate information doesn't include links to websites or social media. Apparently that's not part of the question sheet they send out, and I got blank looks when I suggested we should make sure to include that next time. The LWV does a lot of important work but they're a little backward in some respects. That's definitely on my checklist for next time, but for now, if you want a candidate's website, there's always Google.

I also helped a little on Los Alamos's voter guide, making suggestions on how to present it on the website (I maintain the state League website but not the Los Alamos site), and participated in the committee that wrote the analysis and pro and con arguments for our contentious charter amendment proposal to eliminate the elective office of sheriff. We learned a lot about the history of the sheriff's office here in Los Alamos, and about state laws and insurance rules regarding sheriffs, and I hope the important parts of what we learned are reflected in both sides of the argument.

The Voter Guides also have a link to a Youtube recording of the first Los Alamos LWV candidate forum, featuring NM House candidates, DA, Probate judge and, most important, the debate over the sheriff proposition. The second candidate forum, featuring US House of Representatives, County Council and County Clerk candidates, will be this Thursday, October 13 at 7 (refreshments at 6:30). It will also be recorded thanks to a contribution from the AAUW.

So -- busy, busy with election-related projects. But I think the work is mostly done (except for the one remaining forum), the guides are out, and now it's time to go through and read the guides. And then the most important part of all: vote!

October 10, 2016

Visualizing the raw (sensor) highlight clipping

Have you ever over-exposed your images? Have you ever noticed that your images look flat and dull after you apply negative exposure compensation? Even though the over/underexposed warning says there is no overexposure? Have you ever wondered what is going on? Read on.

the Problem

First, why would you want to know which pixels are overexposed, clipped?

Consider this image:

 … Why is the sky so white? Why is the image so flat and dull?

Let's enable the overexposure indicator...
Nope, it does not indicate any part of the image as overexposed.

Now, let's see what happens if we disable the highlight reconstruction module:
Eww, the sky is pink!

An experienced person knows that it means the image was taken overexposed, and it is so dull and flat because a negative exposure compensation was applied via the exposure module.

Many of you have sometimes unintentionally overexposed your images. As you know, it is hard to figure out exactly which part of the image is overexposed, clipped.

But. What if it is actually very easy to figure out?

I'll show you the end result, what darktable's new, raw-based overexposure indicator says about that image, and then we will discuss details:

digital image processing, mathematical background

While modern sensors capture an astonishing dynamic range, they still can capture only so much.
A sensor consists of millions of pixel sensors, each pixel containing a photodetector and an active amplifier. Each of these pixels can be thought of as a bucket: there is some upper limit to the number of photons it can capture.
Which means, there is some point, above which the sensor can not distinguish how much light it received.

Now the pixel has captured some photons, and it holds some charge that can be measured. But it is an analog value. For better or worse, all modern cameras and software operate in the digital world. Thus, the next step is the conversion of that charge into a digital signal via an ADC.

Most sane cameras that can save raw files store those pixel values as an array of unsigned integers.
What can we tell about those values?

  • Sensor readout results in some noise (black noise + readout noise), meaning that even with the shortest exposure, the pixels will not have zero value.
    That is the black level.
    For Canon it is often between \(\mathbf{2000}\) and \(\mathbf{2050}\).
  • Due to the non-magical nature of photosensitive pixels and ADC, there is some upper limit on the value each pixel can have. That limit may be different for each pixel, be it due to the different CFA color, or just manufacturing tolerances. Most modern Canon cameras produce 14-bit raw images, which means each pixel may have a value between \(\mathbf{0}\) and \(\mathbf{{2^{14}}-1}\) (i.e. \(\mathbf{16383}\)).
    The lowest maximal value that can still be represented by all the pixels is called the white level.
    For Canon it is often between \(\mathbf{13000}\) and \(\mathbf{16000}\).
  • Both of these parameters also often depend on ISO.

why is the white level so low? (you can skip this)

Disclaimer: this is just my understanding of the subject. My understanding may be wrong.

You may ask why the white level is less than the maximal value that can be stored in the raw file (that is, for 14-bit raw images, less than \(\mathbf{{2^{14}}-1}\), i.e. \(\mathbf{16383}\))?
I have intentionally skipped over one more component of the sensor - an active amplifier.
It is the second most important component of the sensor (after the photodetectors themselves).

The saturation point of the photodetector is much lower than the saturation point of the ADC. Also, due to the non-magical nature of the ADC, it has a specific nominal voltage range \(\mathbf{V_{RefLow}..V_{RefHi}}\), outside of which it cannot work properly.
E.g. the photodetector may output an analog signal with an amplitude of (a guess, general ballpark, not precise values) \(\mathbf{1..10}\) \(\mathbf{mV}\), while the ADC expects an input analog signal with an amplitude of \(\mathbf{1..10}\) \(\mathbf{V}\).
So if we pass the charge from the photodetector directly to the ADC, at best we will get a very faint digital signal, with a much smaller magnitude than what the ADC can produce, and thus with a very bad (low) SNR.
Also see: Signal conditioning.

Thus, when quantizing a non-amplified analog signal, we lose data, which cannot be recovered later.
Which means the analog signal must be amplified, to match the output voltage levels of the photodetector to the [expected] input voltage levels of the ADC. That is done by an amplifier. There may be more than one amplifier, and more than one amplification step.

Okay, what if we amplify the analog signal from the photodetector by three orders of magnitude? I.e. we had \(\mathbf{5}\) \(\mathbf{mV}\), and now we have \(\mathbf{5}\) \(\mathbf{V}\). At first glance all seems in order: the signal is within the expected range.
But we need to take into account one important detail: the output voltage of a photodetector depends on the amount of light it received and on the exposure time.
So in low light with a short exposure it will output the minimal voltage (in our example, \(\mathbf{1}\) \(\mathbf{mV}\)), and if we amplify that, we get \(\mathbf{1}\) \(\mathbf{V}\), which is the \(\mathbf{V_{RefLow}}\) of the ADC.
Similarly, in bright light with a long exposure it will output the maximal voltage (in our example, \(\mathbf{10}\) \(\mathbf{mV}\)), and if we amplify that, we get \(\mathbf{10}\) \(\mathbf{V}\), which is, again, the \(\mathbf{V_{RefHi}}\) of the ADC.

So there are obvious cases where, with a constant amplification factor, we get a bad signal range. Thus, we need multiple amplifiers, each with a different gain, and we need to be able to toggle them separately, to control the amplification in finer steps.

As you may have guessed by now, the signal amplification is the factor that results in the white level being at e.g. \(\mathbf{16000}\), or some other value. Basically, this amplification is how the ISO level is implemented in hardware.

TL;DR, so why?

Because of the analog gain that was applied to the data to bring it into the nominal range without blowing out (clipping, pushing above \(\mathbf{16383}\)) the usable highlights. The gain is applied in finite, discrete steps, so it may be impossible to apply a finer gain that would bring the white level closer to \(\mathbf{16383}\).

This is a very brief summary; for a detailed write-up I can direct you to Magic Lantern's CMOS/ADTG/Digic register investigation on ISO.

the first steps of processing a raw file

All right, we have a sensor readout – an array of unsigned integers – how do we get from that to an image that can be displayed?

  1. Convert the values from integer (most often 16-bit unsigned) to float (not strictly required, but best for precision reasons; we use 32-bit float)
  2. Subtract black level
  3. Normalize the pixels so that the white level is \(\mathbf{1.0}\)
    The simplest way to do that is to divide each value by \(\mathbf{({white level} - {black level})}\)
  4. These 3 steps are done by the raw black/white point module.

  5. Next, the white balance is applied. It is as simple as multiplying each separate CFA color by a specific coefficient. This so-called white balance vector can be acquired from several places:
    1. Camera may store it in the image's metadata.
      (That is what preset = camera does)
    2. If the color matrix for a given sensor is known, an approximate white balance (one that only takes the sensor into account, but does not adjust for the illuminant) can be computed from that matrix.
      (That is what preset = camera neutral does)
    3. Taking a simple arithmetic mean (average) of each of the color channels may give a good-enough inverted white-balance multiplier.
      IMPORTANT: the computed white balance will be good only if, on average, the image is gray.
      That is, it will correct the white balance so that the average color becomes gray, so if the average color is not neutral gray (e.g. it is red), the image will look wrong.

      (That is what preset = spot white balance does)
    4. etc (user input, camera wb preset, ...)

    As you remember, in the previous step, we have scaled the data so that the white level is \(\mathbf{1.0}\), for every color channel.
    White balance coefficients scale each channel separately. For example, a white balance vector may be \({\begin{pmatrix} 2.0 , 0.9 , 1.5 \end{pmatrix}}^{T}\). That is, the Red channel will be scaled by \(\mathbf{2.0}\), the Green channel by \(\mathbf{0.9}\), and the Blue channel by \(\mathbf{1.5}\).
    In practice, however, the white balance vector is most often normalized so that the Green channel multiplier is \(\mathbf{1.0}\).

  6. That step is done by the white balance module.

  7. And last, highlight handling.
    As we know from the definition, all data values bigger than the white level are unusable, clipped. Without / before white balance correction, it is clear that all values bigger than \(\mathbf{1.0}\) are clipped, and they are useless without some advanced processing.

    Now, what did the white balance correction do to the white levels? Correct: the white levels will now be \(\mathbf{2.0}\) for the Red channel, \(\mathbf{0.9}\) for the Green channel, and \(\mathbf{1.5}\) for the Blue channel.

    As we all know, the white color is \({\begin{pmatrix} 1.0 , 1.0 , 1.0 \end{pmatrix}}^{T}\). But the maximal values (the per-channel white levels) are \({\begin{pmatrix} 2.0 , 0.9 , 1.5 \end{pmatrix}}^{T}\), so our "white" will not be white, but, as experienced users may guess, purple-ish. What do we do?

    Since for the white color all the components have exactly the same value – \(\mathbf{1.0}\) – we just need to make sure that the maximal values are the same. We cannot scale each of the channels separately, because that would change the white balance. We simply need to pick the minimal white level – \(\mathbf{0.9}\) in our case – and clip all the data to that level. I.e. all the data with a value less than or equal to that threshold retains its value, and all the pixels with a value greater than the threshold get the value of the threshold – \(\mathbf{0.9}\).

    Alternatively, one could try to recover these highlights; see the highlight reconstruction module and
    Color Reconstruction
    (though the latter only guesses the color based on its surroundings, does not actually reconstruct the channels, and sits a bit too late in the pipe).

    If you don't do highlight handling, you get what you have seen in the third image in this article - ugly, unnatural-looking, discolored highlights.

Note: you might know that there are more steps required (namely: demosaicing, base curve, input color profile, output color profile; there may be others), but for the purpose of detection and visualization of highlight clipping they are unimportant, so I will not talk about them here.

From that list, it should now be clear that the clipped pixels are exactly those pixels whose value, right before the highlight reconstruction module, is greater than the minimal per-channel white level.
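
To make that concrete, here is a minimal Python sketch of that criterion, using the illustrative numbers from above (this is not darktable's actual code, just the arithmetic):

black_level = 2048
white_level = 16000
wb = {"R": 2.0, "G": 0.9, "B": 1.5}   # white balance coefficients
clip_level = min(wb.values())         # 0.9 -- the minimal per-channel white level

def is_clipped(raw_value, cfa_color):
    """Apply the steps above to a single photosite and test it."""
    normalized = (raw_value - black_level) / (white_level - black_level)
    return normalized * wb[cfa_color] > clip_level

print(is_clipped(15900, "R"))   # True: after white balance this red photosite exceeds 0.9
print(is_clipped(15900, "G"))   # False: the same raw value on a green photosite is fine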

the Solution

But a technical problem arises: we need to visualize the clipped pixels on top of the fully processed image, while we only know whether a pixel is clipped or not in the input buffer of the highlight reconstruction module.
And we cannot visualize clipping in the highlight reconstruction module itself, because the data is still mosaiced, and other modules will be applied after that anyway.

The problem was solved by back-transforming the given white balance coefficients and the white level into a threshold, comparing the values of the original raw buffer produced by the camera against that threshold, and back-transforming the output pixel coordinates through all the geometric distortions to figure out which pixel in the original input buffer needs to be checked.

This seems to be the most flexible solution so far:

  • We can visualize overexposure on top of the final, fully-processed image. That means no module messes with the visualization.
  • We do sample the original input buffer. That means we can actually know whether a given pixel is clipped or not.
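
The back-transform itself is simple arithmetic: instead of pushing every pixel through the pipeline, the threshold is expressed once per CFA color directly in raw (sensor) units. A sketch, reusing the illustrative numbers from the previous snippet:

black_level, white_level = 2048, 16000
wb = {"R": 2.0, "G": 0.9, "B": 1.5}
clip_level = min(wb.values())   # 0.9

def raw_clip_threshold(cfa_color):
    """The raw value above which a photosite of this CFA color counts as clipped."""
    return black_level + (white_level - black_level) * clip_level / wb[cfa_color]

for c in "RGB":
    print(c, raw_clip_threshold(c))   # R: ~8326, G: 16000, B: ~10419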

Obviously, this new raw-based overexposure indicator depends on the specific sensor pattern.
The good news is, it just works for both Bayer and X-Trans sensors!

modes of operation


The raw-based overexposure indicator has 3 different modes of operation:

  1. mark with CFA color

    • If the clipped pixel was Red, a Red pixel will be displayed.
    • If the clipped pixel was Green, a Green pixel will be displayed.
    • If the clipped pixel was Blue, a Blue pixel will be displayed.

    Sample output, X-Trans image.
    There are some Blue, Green and Red pixels clipped (counting to the centre)

  2. mark with solid color

    • If the raw pixel was clipped, it will be displayed in a given color (one of: red, green, blue, black)

    Same area, with color scheme = black.
    The more black dots the area contains, the more clipped pixels there are in that area.

  3. false color

    • If the clipped pixel was Red, the Red channel for current pixel will be set to \(\mathbf{0.0}\)
    • If the clipped pixel was Green, the Green channel for current pixel will be set to \(\mathbf{0.0}\)
    • If the clipped pixel was Blue, the Blue channel for current pixel will be set to \(\mathbf{0.0}\)

    Same area.
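
In pseudo-code, the three modes boil down to something like the sketch below, given a float RGB image and a per-channel boolean clip mask produced by the raw test described earlier (the array names and shapes are my assumptions, not darktable's internals):

import numpy as np

def visualize_clipping(image, clip_mask, mode, solid=(0.0, 0.0, 0.0)):
    """image: float RGB array of shape (H, W, 3);
       clip_mask: boolean array of shape (H, W, 3), True where that CFA channel clipped."""
    out = image.copy()
    if mode == "mark with CFA color":
        for c, color in enumerate(np.eye(3)):   # pure red, green, blue
            out[clip_mask[..., c]] = color
    elif mode == "mark with solid color":
        out[clip_mask.any(axis=-1)] = solid     # any clipped channel marks the whole pixel
    elif mode == "false color":
        out[clip_mask] = 0.0                    # zero out only the channel that clipped
    return out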

understanding raw overexposure visualization

So, let's go back to the fourth image in this article:
This is mode = mark with CFA color.

What does it tell us?

  • Most of the sky is indeed clipped.
  • In the top-right portion of the image, only the Blue channel is clipped.
  • In the top-left portion of the image, Blue and Red channels are clipped.
  • No Green channel clipping.

Now that you know that, you:

  1. Will know better than to over-expose so much next time :) (hint to myself, mostly)
  2. Could try to recover from clipping a bit

    1. either by not applying negative exposure compensation in exposure module
    2. or using highlight reconstruction module with mode = reconstruct in LCh
    3. or using highlight reconstruction module with mode = reconstruct color, though it is known to produce artefacts
    4. or using color reconstruction module

an important note about sensor clipping vs. color clipping

By default, the module visualizes the color clipping, NOT the sensor clipping.
The colors may be clipped, while the sensor is still not clipping.

Let's enable the indicator...
The visualization says that Red and Blue channels are clipped.

But now let's disable the white balance module, while keeping indicator active:

Interesting, isn't it? So actually there is no sensor-level clipping, but the image is still overexposed, because after the white balance is applied, the channels do clip.

While we are at it, I wanted to show the highlight reconstruction module with mode = reconstruct in LCh.
If you ever used it, you know that it used to produce pretty useless results.
But not anymore:
If you compare this with the first version of this image in this section, you can see that the highlights, although clipped, are actually somewhat reconstructed, so the image is not so flat and dull; there is some gradient to it.

Too boring? :)

A sufficiently exposed image (or one where you just set the black levels to \(\mathbf{0}\) and the white level to \(\mathbf{1}\) in the raw black/white point module, with clipping threshold = \(\mathbf{0.0}\) and mode = mark with CFA color in the raw overexposure indicator), combined with a lucky combination of image size, output size and zoom level, produces a familiar-looking pattern :)
That is basically an artefact of the downscaling for display. Though, given enough feedback, it might be worth properly implementing this as a feature...

Now, what if we enable the lens correction module? :)
So we could even create glitch-art with this thing!
Technically, that is some kind of visualization of lens distortion.

October 08, 2016

Bullet 2.85 released : pybullet and Virtual Reality support for HTC Vive and Oculus Rift

We have been making a lot of progress in higher quality physics simulation for robotics, games and visual effects. To make our physics simulation easier to use, especially for roboticists and machine learning experts, we created Python bindings; see examples/pybullet. In addition, we added Virtual Reality support for HTC Vive and Oculus Rift using the OpenVR SDK. See the attached YouTube movie. Updated documentation will be added soon, as well as possible show-stopper bug-fixes, so the actual release tag may bump up to 2.85.x. Download the release from GitHub here.
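
For the curious, a minimal pybullet session looks roughly like the sketch below (the URDF path is a placeholder; see the examples/pybullet directory in the release for real, complete examples):

import pybullet as p

p.connect(p.DIRECT)                        # or p.GUI for a graphical window
p.setGravity(0, 0, -10)
robot = p.loadURDF("path/to/robot.urdf")   # placeholder: point this at your own URDF

for _ in range(240):                       # step the simulation forward
    p.stepSimulation()

print(p.getBasePositionAndOrientation(robot))
p.disconnect()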


October 05, 2016

Play notes, chords and arbitrary waveforms from Python

Reading Stephen Wolfram's latest discussion of teaching computational thinking (which, though I mostly agree with it, is more an extended ad for Wolfram Programming Lab than a discussion of what computational thinking is and why we should teach it), I found myself musing over ideas for future computer classes for Los Alamos Makers. Students, and especially kids, like to see something other than words on a screen. Graphics and games are good, or robotics when possible ... but another fun project a novice programmer can appreciate is music.

I found myself curious what you could do with Python, since I hadn't played much with Python sound generation libraries. I did discover a while ago that Python is rather bad at playing audio files, though I did eventually manage to write a music player script that works quite well. What about generating tones and chords?

A web search revealed that this is another thing Python is bad at. I found lots of people asking about chord generation, and a handful of half-baked ideas that relied on long-obsolete packages or external programs. But none of it actually worked, at least without requiring Windows or relying on larger packages like fluidsynth (which looked worth exploring some day when I have more time).

Play an arbitrary waveform with Pygame and NumPy

But I did find one example based on a long-obsolete Python package called Numeric which, when rewritten to use NumPy, actually played a sound. You can take a NumPy array and play it using a pygame.sndarray object this way:

import pygame, pygame.sndarray

pygame.mixer.init(frequency=44100, size=-16, channels=1)   # mono, 16-bit signed samples

def play_for(sample_wave, ms):
    """Play the given NumPy array, as a sound, for ms milliseconds."""
    sound = pygame.sndarray.make_sound(sample_wave)
    sound.play(-1)          # loop the buffer ...
    pygame.time.delay(ms)   # ... for the requested duration
    sound.stop()

Then you just need to calculate the waveform you want to play. NumPy can generate sine waves on its own, while scipy.signal can generate square and sawtooth waves. Like this:

import numpy
import scipy.signal

sample_rate = 44100

def sine_wave(hz, peak, n_samples=sample_rate):
    """Compute N samples of a sine wave with given frequency and peak amplitude.
       Defaults to one second.
    """
    length = sample_rate / float(hz)
    omega = numpy.pi * 2 / length
    xvalues = numpy.arange(int(length)) * omega
    onecycle = peak * numpy.sin(xvalues)
    return numpy.resize(onecycle, (n_samples,)).astype(numpy.int16)

def square_wave(hz, peak, duty_cycle=.5, n_samples=sample_rate):
    """Compute N samples of a square wave with given frequency and peak amplitude.
       Defaults to one second.
    """
    t = numpy.linspace(0, 1, int(500 * 440/hz), endpoint=False)
    wave = scipy.signal.square(2 * numpy.pi * 5 * t, duty=duty_cycle)
    wave = numpy.resize(wave, (n_samples,))
    return (peak / 2 * wave).astype(numpy.int16)

# Play A (440Hz) for 1 second as a sine wave:
play_for(sine_wave(440, 4096), 1000)

# Play A-440 for 1 second as a square wave:
play_for(square_wave(440, 4096), 1000)

Playing chords

That's all very well, but it's still a single tone, not a chord.

To generate a chord of two notes, you can add the waveforms for the two notes. For instance, 440Hz is concert A, and the A one octave above it is double the frequency, or 880 Hz. If you wanted to play a chord consisting of those two As, you could do it like this:

play_for(sum([sine_wave(440, 4096), sine_wave(880, 4096)]), 1000)

Simple octaves aren't very interesting to listen to. What you want is chords like major and minor triads and so forth. If you google for chord ratios Google helpfully gives you a few of them right off, then links to a page with a table of ratios for some common chords.

For instance, the major triad ratios are listed as 4:5:6. What does that mean? It means that for a C-E-G triad (the first C chord you learn in piano), the E's frequency is 5/4 of the C's frequency, and the G is 6/4 of the C.
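
To make the arithmetic concrete, here's that calculation for a C-E-G triad starting from middle C (about 261.63 Hz):

c = 261.63                      # middle C, in Hz
ratios = [4, 5, 6]              # major triad
e = c * ratios[1] / ratios[0]   # 327.04 Hz -- the E above middle C
g = c * ratios[2] / ratios[0]   # 392.45 Hz -- the G above middle C
print(c, e, g)

(Those are just-intonation pitches, so they differ slightly from the equal-tempered frequencies a piano is actually tuned to, but they're close enough for our purposes.)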

You can pass that list, [4, 5, 6], to a function that will calculate the right ratios to produce the set of waveforms you need to add to get your chord:

def make_chord(hz, ratios):
    """Make a chord based on a list of frequency ratios."""
    sampling = 4096
    chord = sine_wave(hz, sampling)
    for r in ratios[1:]:
        chord = sum([chord, sine_wave(hz * r / ratios[0], sampling)])
    return chord

def major_triad(hz):
    return make_chord(hz, [4, 5, 6])

play_for(major_triad(440), 1000)

Even better, you can pass in the waveform you want to use when you're adding instruments together:

def make_chord(hz, ratios, waveform=None):
    """Make a chord based on a list of frequency ratios
       using a given waveform (defaults to a sine wave).
    """
    sampling = 4096
    if not waveform:
        waveform = sine_wave
    chord = waveform(hz, sampling)
    for r in ratios[1:]:
        chord = sum([chord, waveform(hz * r / ratios[0], sampling)])
    return chord

def major_triad(hz, waveform=None):
    return make_chord(hz, [4, 5, 6], waveform)

play_for(major_triad(440, square_wave), 1000)

There are still some problems. For instance, sawtooth_wave() works fine individually or for pairs of notes, but triads of sawtooths don't play correctly. I'm guessing something about the sampling rate is making their overtones cancel out part of the sawtooth wave. Triangle waves (in scipy.signal, that's a sawtooth wave with rising ramp width of 0.5) don't seem to work right even for single tones. I'm sure these are solvable, perhaps by fiddling with the sampling rate. I'll probably need to add graphics so I can look at the waveform for debugging purposes.
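
For reference, a sawtooth_wave() in the same style as square_wave() above would look something like this sketch (not necessarily the exact version used here; scipy.signal.sawtooth() with width=0.5 gives the triangle variant):

def sawtooth_wave(hz, peak, rising_ramp_width=1, n_samples=sample_rate):
    """Compute N samples of a sawtooth wave with given frequency and peak amplitude.
       rising_ramp_width=0.5 gives a triangle wave. Defaults to one second.
    """
    t = numpy.linspace(0, 1, int(500 * 440/hz), endpoint=False)
    wave = scipy.signal.sawtooth(2 * numpy.pi * 5 * t, width=rising_ramp_width)
    wave = numpy.resize(wave, (n_samples,))
    return (peak / 2 * wave).astype(numpy.int16)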

In any case, it was a fun morning hack. Most chords work pretty well, and it's nice to know how to play any waveform I can generate.

The full script is here: play_chord.py on GitHub.

security things in Linux v4.8

Previously: v4.7. Here are a bunch of security things I’m excited about in Linux v4.8:

SLUB freelist ASLR

Thomas Garnier continued his freelist randomization work by adding SLUB support.

x86_64 KASLR text base offset physical/virtual decoupling

On x86_64, to implement the KASLR text base offset, the physical memory location of the kernel was randomized, which resulted in the virtual address being offset as well. Due to how the kernel’s “-2GB” addressing works (gcc’s “-mcmodel=kernel”), it wasn’t possible to randomize the physical location beyond the 2GB limit, leaving any additional physical memory unused as a randomization target. In order to decouple the physical and virtual location of the kernel (to make physical address exposures less valuable to attackers), the physical location of the kernel needed to be randomized separately from the virtual location. This required a lot of work for handling very large addresses spanning terabytes of address space. Yinghai Lu, Baoquan He, and I landed a series of patches that ultimately did this (and in the process fixed some other bugs too). This expands the physical offset entropy to roughly $physical_memory_size_of_system / 2MB bits.

x86_64 KASLR memory base offset

Thomas Garnier rolled out KASLR to the kernel’s various statically located memory ranges, randomizing their locations with CONFIG_RANDOMIZE_MEMORY. One of the more notable things randomized is the physical memory mapping, which is a known target for attacks. Also randomized is the vmalloc area, which means that targets vmalloced during boot (which tend to always end up in the same location on a given system) are now harder to locate. (The vmemmap region randomization accidentally missed the v4.8 window and will appear in v4.9.)

x86_64 KASLR with hibernation

Rafael Wysocki (with Thomas Garnier, Borislav Petkov, Yinghai Lu, Logan Gunthorpe, and myself) worked on a number of fixes to hibernation code that, even without KASLR, were coincidentally exposed by the earlier W^X fix. With that original problem fixed, then memory KASLR exposed more problems. I’m very grateful everyone was able to help out fixing these, especially Rafael and Thomas. It’s a hard place to debug. The bottom line, now, is that hibernation and KASLR are no longer mutually exclusive.

gcc plugin infrastructure

Emese Revfy ported the PaX/Grsecurity gcc plugin infrastructure to upstream. If you want to perform compiler-based magic on kernel builds, now it’s much easier with CONFIG_GCC_PLUGINS! The plugins live in scripts/gcc-plugins/. Current plugins are a short example called “Cyclic Complexity” which just emits the complexity of functions as they’re compiled, and “Sanitizer Coverage” which provides the same functionality as gcc’s recent “-fsanitize-coverage=trace-pc” but back through gcc 4.5. Another notable detail about this work is that it was the first Linux kernel security work funded by Linux Foundation’s Core Infrastructure Initiative. I’m looking forward to more plugins!

If you’re on Debian or Ubuntu, the required gcc plugin headers are available via the gcc-$N-plugin-dev package (and similarly for all cross-compiler packages).

hardened usercopy

Along with work from Rik van Riel, Laura Abbott, Casey Schaufler, and many other folks doing testing on the KSPP mailing list, I ported part of PAX_USERCOPY (the basic runtime bounds checking) to upstream as CONFIG_HARDENED_USERCOPY. One of the interface boundaries between the kernel and user-space is the copy_to_user()/copy_from_user() family of functions. Frequently, the size of a copy is known at compile-time (“built-in constant”), so there’s not much benefit in checking those sizes (hardened usercopy avoids these cases). In the case of dynamic sizes, hardened usercopy checks for 3 areas of memory: slab allocations, stack allocations, and kernel text. Direct kernel text copying is simply disallowed. Stack copying is allowed as long as it is entirely contained by the current stack memory range (and on x86, only if it does not include the saved stack frame and instruction pointers). For slab allocations (e.g. those allocated through kmem_cache_alloc() and the kmalloc()-family of functions), the copy size is compared against the size of the object being copied. For example, if copy_from_user() is writing to a structure that was allocated as size 64, but the copy gets tricked into trying to write 65 bytes, hardened usercopy will catch it and kill the process.

For testing hardened usercopy, lkdtm gained several new tests: USERCOPY_HEAP_SIZE_TO, USERCOPY_HEAP_SIZE_FROM, USERCOPY_STACK_FRAME_TO,
USERCOPY_STACK_FRAME_FROM, USERCOPY_STACK_BEYOND, and USERCOPY_KERNEL. Additionally, USERCOPY_HEAP_FLAG_TO and USERCOPY_HEAP_FLAG_FROM were added to test what will be coming next for hardened usercopy: flagging slab memory as “safe for copy to/from user-space”, effectively whitelisting certain slab caches, as done by PAX_USERCOPY. This further reduces the scope of what’s allowed to be copied to/from, since most kernel memory is not intended to ever be exposed to user-space. Adding this logic will require some reorganization of usercopy code to add some new APIs, as PAX_USERCOPY’s approach to handling special-cases is to add bounce-copies (copy from slab to stack, then copy to userspace) as needed, which is unlikely to be acceptable upstream.

seccomp reordered after ptrace

By its original design, seccomp filtering happened before ptrace so that seccomp-based ptracers (i.e. SECCOMP_RET_TRACE) could explicitly bypass seccomp filtering and force a desired syscall. Nothing actually used this feature, and as it turns out, it’s not compatible with process launchers that install seccomp filters (e.g. systemd, lxc) since as long as the ptrace and fork syscalls are allowed (and fork is needed for any sensible container environment), a process could spawn a tracer to help bypass a filter by injecting syscalls. After Andy Lutomirski convinced me that ordering ptrace first does not change the attack surface of a running process (unless all syscalls are blacklisted, the entire ptrace attack surface will always be exposed), I rearranged things. Now there is no (expected) way to bypass seccomp filters, and containers with seccomp filters can allow ptrace again.

That’s it for v4.8! The merge window is open for v4.9…

© 2016, Kees Cook. This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 License.
Creative Commons License

October 04, 2016

Working with GIS, terrains and #FreeCAD

Or, how to build a precise 3D terrain from any place of the world. Again not much visually significant FreeCAD development to show this week, so here is another interesting subject, that I started looking at in an earlier post. We architects should really begin to learn about GIS. GIS stands for Geographic information system and begins to...

October 03, 2016

security things in Linux v4.7

Previously: v4.6. Onward to security things I found interesting in Linux v4.7:

KASLR text base offset for MIPS

Matt Redfearn added text base address KASLR to MIPS, similar to what’s available on x86 and arm64. As done with x86, MIPS attempts to gather entropy from various build-time, run-time, and CPU locations in an effort to find reasonable sources during early-boot. MIPS doesn’t yet have anything as strong as x86’s RDRAND (though most have an instruction counter like x86’s RDTSC), but it does have the benefit of being able to use Device Tree (i.e. the “/chosen/kaslr-seed” property) like arm64 does. By my understanding, even without Device Tree, MIPS KASLR entropy should be as strong as pre-RDRAND x86 entropy, which is more than sufficient for what is, similar to x86, not a huge KASLR range anyway: default 8 bits (a span of 16MB with 64KB alignment), though CONFIG_RANDOMIZE_BASE_MAX_OFFSET can be tuned to the device’s memory, giving a maximum of 11 bits on 32-bit, and 15 bits on EVA or 64-bit.

SLAB freelist ASLR

Thomas Garnier added CONFIG_SLAB_FREELIST_RANDOM to make slab allocation layouts less deterministic with a per-boot randomized freelist order. This raises the bar for successful kernel slab attacks. Attackers will need to either find additional bugs to help leak slab layout information or will need to perform more complex grooming during an attack. Thomas wrote a post describing the feature in more detail here: Randomizing the Linux kernel heap freelists. (SLAB is done in v4.7, and SLUB in v4.8.)

eBPF JIT constant blinding

Daniel Borkmann implemented constant blinding in the eBPF JIT subsystem. With strong kernel memory protections (CONFIG_DEBUG_RODATA) in place, and with the segregation of user-space memory execution from kernel (i.e SMEP, PXN, CONFIG_CPU_SW_DOMAIN_PAN), having a place where user-space can inject content into an executable area of kernel memory becomes very high-value to an attacker. The eBPF JIT was exactly such a thing: the use of BPF constants could result in the JIT producing instruction flows that could include attacker-controlled instructions (e.g. by directing execution into the middle of an instruction with a constant that would be interpreted as a native instruction). The eBPF JIT already uses a number of other defensive tricks (e.g. random starting position), but this added randomized blinding to any BPF constants, which makes building a malicious execution path in the eBPF JIT memory much more difficult (and helps block attempts at JIT spraying to bypass other protections).

Elena Reshetova updated a 2012 proof-of-concept attack to succeed against modern kernels to help provide a working example of what needed fixing in the JIT. This serves as a thorough regression test for the protection.

The cBPF JITs that exist in ARM, MIPS, PowerPC, and Sparc still need to be updated to eBPF, but when they do, they’ll gain all these protections immediately.

Bottom line is that if you enable the (disabled-by-default) bpf_jit_enable sysctl, be sure to set the bpf_jit_harden sysctl to 2 (to perform blinding even for root).

fix brk ASLR weakness on arm64 compat

There have been a few ASLR fixes recently (e.g. ET_DYN, x86 32-bit unlimited stack), and while reviewing some suggested fixes to arm64 brk ASLR code from Jon Medhurst, I noticed that arm64’s brk ASLR entropy was slightly too low (less than 1 bit) for 64-bit and noticeably lower (by 2 bits) for 32-bit compat processes when compared to native 32-bit arm. I simplified the code by using literals for the entropy. Maybe we can add a sysctl some day to control brk ASLR entropy like was done for mmap ASLR entropy.

LoadPin LSM

LSM stacking is well-defined since v4.2, so I finally upstreamed a “small” LSM that implements a protection I wrote for Chrome OS several years back. On systems with a static root of trust that extends to the filesystem level (e.g. Chrome OS’s coreboot+depthcharge boot firmware chaining to dm-verity, or a system booting from read-only media), it’s redundant to sign kernel modules (you’ve already got the modules on read-only media: they can’t change). The kernel just needs to know they’re all coming from the correct location. (And this solves loading known-good firmware too, since there is no convention for signed firmware in the kernel yet.) LoadPin requires that all modules, firmware, etc come from the same mount (and assumes that the first loaded file defines which mount is “correct”, hence load “pinning”).

That’s it for v4.7. Prepare yourself for v4.8 next!

© 2016, Kees Cook. This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 License.
Creative Commons License

October 01, 2016

Zsh magic: remove all raw photos that don't have a corresponding JPEG

Lately, when shooting photos with my DSLR, I've been shooting raw mode but with a JPEG copy as well. When I triage and label my photos (with pho and metapho), I use only the JPEG files, since they load faster and there's no need to index both. But that means that sometimes I delete a .jpg file while the huge .cr2 raw file is still on my disk.

I wanted some way of removing these orphaned raw files: in other words, for every .cr2 file that doesn't have a corresponding .jpg file, delete the .cr2.

That's an easy enough shell function to write: loop over *.cr2, change the .cr2 extension to .jpg, check whether that file exists, and if it doesn't, delete the .cr2.
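
(Spelled out as a brute-force loop -- in Python rather than shell, just to show the logic -- it would be something like this:)

import glob, os

for cr2 in glob.glob('*.cr2'):
    jpg = os.path.splitext(cr2)[0] + '.jpg'
    if not os.path.exists(jpg):
        os.remove(cr2)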

But as I started to write the shell function, it occurred to me: this is just the sort of magic trick zsh tends to have built in.

So I hopped on over to #zsh and asked, and in just a few minutes, I had an answer:

rm *.cr2(e:'[[ ! -e ${REPLY%.cr2}.jpg ]]':)

Yikes! And it works! But how does it work? It's cheating to rely on people in IRC channels without trying to understand the answer so I can solve the next similar problem on my own.

Most of the answer is in the zshexpn man page, but it still took some reading and jumping around to put the pieces together.

First, we take all files matching the initial wildcard, *.cr2. We're going to apply to them the filename generation code expression in parentheses after the wildcard. (I think you need EXTENDED_GLOB set to use that sort of parenthetical expression.)

The variable $REPLY is set to the filename the wildcard expression matched; so it will be set to each .cr2 filename, e.g. img001.cr2.

The expression ${REPLY%.cr2} removes the .cr2 extension. Then we tack on a .jpg: ${REPLY%.cr2}.jpg. So now we have img001.jpg.

[[ ! -e ${REPLY%.cr2}.jpg ]] checks for the existence of that jpg filename, just like in a shell script.

So that explains the quoted shell expression. The final, and hardest part, is how to use that quoted expression. That's in section 14.8.7 Glob Qualifiers. (estring) executes string as shell code, and the filename will be included in the list if and only if the code returns a zero status.

The colons -- after the e and before the closing parenthesis -- are just separator characters. Whatever character immediately follows the e will be taken as the separator, and anything from there to the next instance of that separator (the second colon, in this case) is taken as the string to execute. Colons seem to be the character to use by convention, but you could use anything. This is also the part of the expression responsible for setting $REPLY to the filename being tested.

So why the quotes inside the colons? They're because some of the substitutions being done would be evaluated too early without them: "Note that expansions must be quoted in the string to prevent them from being expanded before globbing is done. string is then executed as shell code."

Whew! Complicated, but awfully handy. I know I'll have lots of other uses for that.

One additional note: section 14.8.5, Approximate Matching, in that manual page caught my eye. zsh can do fuzzy matches! I can't think offhand what I need that for ... but I'm sure an idea will come to me.

security things in Linux v4.6

Previously: v4.5. The v4.6 Linux kernel release included a bunch of stuff, with much more of it under the KSPP umbrella.

seccomp support for parisc

Helge Deller added seccomp support for parisc, which included plumbing support for PTRACE_GETREGSET to get the self-tests working.

x86 32-bit mmap ASLR vs unlimited stack fixed

Hector Marco-Gisbert removed a long-standing limitation to mmap ASLR on 32-bit x86, where setting an unlimited stack (e.g. “ulimit -s unlimited”) would turn off mmap ASLR (which provided a way to bypass ASLR when executing setuid processes). Given that ASLR entropy can now be controlled directly (see the v4.5 post), and that the cases where this created an actual problem are very rare, a system that sees collisions between unlimited stack and mmap ASLR can just adjust the 32-bit ASLR entropy instead.

x86 execute-only memory

Dave Hansen added Protection Key support for future x86 CPUs and, as part of this, implemented support for “execute only” memory in user-space. On pkeys-supporting CPUs, using mmap(..., PROT_EXEC) (i.e. without PROT_READ) will mean that the memory can be executed but cannot be read (or written). This provides some mitigation against automated ROP gadget finding where an executable is read out of memory to find places that can be used to build a malicious execution path. Using this will require changing some linker behavior (to avoid putting data in executable areas), but seems to otherwise Just Work. I’m looking forward to either emulated QEmu support or access to one of these fancy CPUs.

CONFIG_DEBUG_RODATA enabled by default on arm and arm64, and mandatory on x86

Ard Biesheuvel (arm64) and I (arm) made the poorly-named CONFIG_DEBUG_RODATA enabled by default. This feature controls whether the kernel enforces proper memory protections on its own memory regions (code memory is executable and read-only, read-only data is actually read-only and non-executable, and writable data is non-executable). This protection is a fundamental security primitive for kernel self-protection, so making it on-by-default is required to start any kind of attack surface reduction within the kernel.

On x86 CONFIG_DEBUG_RODATA was already enabled by default, but, at Ingo Molnar’s suggestion, I made it mandatory: CONFIG_DEBUG_RODATA cannot be turned off on x86. I expect we’ll get there with arm and arm64 too, but the protection is still somewhat new on these architectures, so it’s reasonable to continue to leave an “out” for developers that find themselves tripping over it.

arm64 KASLR text base offset

Ard Biesheuvel reworked a ton of arm64 infrastructure to support kernel relocation and, building on that, Kernel Address Space Layout Randomization of the kernel text base offset (and module base offset). As with x86 text base KASLR, this is a probabilistic defense that raises the bar for kernel attacks where finding the KASLR offset must be added to the chain of exploits used for a successful attack. One big difference from x86 is that the entropy for the KASLR must come either from Device Tree (in the “/chosen/kaslr-seed” property) or from UEFI (via EFI_RNG_PROTOCOL), so if you’re building arm64 devices, make sure you have a strong source of early-boot entropy that you can expose through your boot-firmware or boot-loader.

zero-poison after free

Laura Abbott reworked a bunch of the kernel memory management debugging code to add zeroing of freed memory, similar to PaX/Grsecurity’s PAX_MEMORY_SANITIZE feature. This feature means that memory is cleared at free, wiping any sensitive data so it doesn’t have an opportunity to leak in various ways (e.g. accidentally uninitialized structures or padding), and that certain types of use-after-free flaws cannot be exploited since the memory has been wiped. To take things even a step further, the poisoning can be verified at allocation time to make sure that nothing wrote to it between free and allocation (called “sanity checking”), which can catch another small subset of flaws.

To understand the pieces of this, it’s worth describing that the kernel’s higher level allocator, the “page allocator” (e.g. __get_free_pages()) is used by the finer-grained “slab allocator” (e.g. kmem_cache_alloc(), kmalloc()). Poisoning is handled separately in both allocators. The zero-poisoning happens at the page allocator level. Since the slab allocators tend to do their own allocation/freeing, their poisoning happens separately (since on slab free nothing has been freed up to the page allocator).

Only limited performance tuning has been done, so the penalty is rather high at the moment, at about 9% when doing a kernel build workload. Future work will include some exclusion of frequently-freed caches (similar to PAX_MEMORY_SANITIZE), and making the options entirely CONFIG controlled (right now both CONFIGs are needed to build in the code, and a kernel command line is needed to activate it). Performing the sanity checking (mentioned above) adds another roughly 3% penalty. In the general case (and once the performance of the poisoning is improved), the security value of the sanity checking isn’t worth the performance trade-off.

Tests for the features can be found in lkdtm as READ_AFTER_FREE and READ_BUDDY_AFTER_FREE. If you’re feeling especially paranoid and have enabled sanity-checking, WRITE_AFTER_FREE and WRITE_BUDDY_AFTER_FREE can test these as well.

To perform zero-poisoning of page allocations and (currently non-zero) poisoning of slab allocations, build with:

CONFIG_PAGE_POISONING=y
CONFIG_PAGE_POISONING_NO_SANITY=y
CONFIG_PAGE_POISONING_ZERO=y
and enable the page allocator poisoning and slab allocator poisoning at boot with this on the kernel command line:

page_poison=on slub_debug=P

To add sanity-checking, change PAGE_POISONING_NO_SANITY=n, and add “F” to slub_debug as “slub_debug=PF“.

read-only after init

I added the infrastructure to support making certain kernel memory read-only after kernel initialization (inspired by a small part of PaX/Grsecurity’s KERNEXEC functionality). The goal is to continue to reduce the attack surface within the kernel by making even more of the memory, especially function pointer tables, read-only (which depends on CONFIG_DEBUG_RODATA above).

Function pointer tables (and similar structures) are frequently targeted by attackers when redirecting execution. While many are already declared “const” in the kernel source code, making them read-only (and therefore unavailable to attackers) for their entire lifetime, there is a class of variables that get initialized during kernel (and module) start-up (i.e. written to during functions that are marked “__init“) and then never (intentionally) written to again. Some examples are things like the VDSO, vector tables, arch-specific callbacks, etc.

As it turns out, most architectures with kernel memory protection already delay making their data read-only until after __init (see mark_rodata_ro()), so it’s trivial to declare a new data section (“.data..ro_after_init“) and add it to the existing read-only data section (“.rodata“). Kernel structures can be annotated with the new section (via the “__ro_after_init” macro), and they’ll become read-only once boot has finished.

The next step for attack surface reduction infrastructure will be to create a kernel memory region that is passively read-only, but can be made temporarily writable (by a single un-preemptable CPU), for storing sensitive structures that are written to only very rarely. Once this is done, much more of the kernel’s attack surface can be made read-only for the majority of its lifetime.

As people identify places where __ro_after_init can be used, we can grow the protection. A good place to start is to look through the PaX/Grsecurity patch to find uses of __read_only on variables that are only written to during __init functions. The rest are places that will need the temporarily-writable infrastructure (PaX/Grsecurity uses pax_open_kernel()/pax_close_kernel() for these).

That’s it for v4.6, next up will be v4.7!

© 2016, Kees Cook. This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 License.
Creative Commons License

September 28, 2016

security things in Linux v4.5

Previously: v4.4. Some things I found interesting in the Linux kernel v4.5:

CONFIG_IO_STRICT_DEVMEM

The CONFIG_STRICT_DEVMEM setting that has existed for a long time already protects system RAM from being accessible through the /dev/mem device node to root in user-space. Dan Williams added CONFIG_IO_STRICT_DEVMEM to extend this so that if a kernel driver has reserved a device memory region for use, it will become unavailable to /dev/mem also. The reservation in the kernel was to keep other kernel things from using the memory, so this is just common sense to make sure user-space can’t stomp on it either. Everyone should have this enabled. (And if you have a system where you discover you need IO memory access from userspace, you can boot with “iomem=relaxed” to disable this at runtime.)

If you’re looking to create a very bright line between user-space and access to device memory, it’s worth noting that if a device driver is a module, a malicious root user can just unload the module (freeing the kernel memory reservation), fiddle with the device memory, and then reload the driver module. So either just leave out /dev/mem entirely (not currently possible with upstream), build a monolithic kernel (no modules), or otherwise block (un)loading of modules (/proc/sys/kernel/modules_disabled).

ptrace fsuid checking

Jann Horn fixed some corner-cases in how ptrace access checks were handled on special files in /proc. For example, prior to this fix, if a setuid process temporarily dropped privileges to perform actions as a regular user, the ptrace checks would not notice the reduced privilege, possibly allowing a regular user to trick a privileged process into disclosing things out of /proc (ASLR offsets, restricted directories, etc) that they normally would be restricted from seeing.

ASLR entropy sysctl

Daniel Cashman standardized the way architectures declare their maximum user-space ASLR entropy (CONFIG_ARCH_MMAP_RND_BITS_MAX) and then created a sysctl (/proc/sys/vm/mmap_rnd_bits) so that system owners could crank up entropy. For example, the default entropy on 32-bit ARM was 8 bits, but the maximum could be as much as 16. If your 64-bit kernel is built with CONFIG_COMPAT, there’s a compat version of the sysctl as well, for controlling the ASLR entropy of 32-bit processes: /proc/sys/vm/mmap_rnd_compat_bits.

Here’s how to crank your entropy to the max, without regard to what architecture you’re on:

for i in "" "compat_"; do f=/proc/sys/vm/mmap_rnd_${i}bits; n=$(cat $f); while echo $n > $f ; do n=$(( n + 1 )); done; done

strict sysctl writes

Two years ago I added a sysctl for treating sysctl writes more like regular files (i.e. what’s written first is what appears at the start), rather than like a ring-buffer (what’s written last is what appears first). At the time it wasn’t clear what might break if this was enabled, so a WARN was added to the kernel. Since only one such string showed up in searches over the last two years, the strict writing mode was made the default. The setting remains available as /proc/sys/kernel/sysctl_writes_strict.

seccomp UM support

Mickaël Salaün added seccomp support (and selftests) for user-mode Linux. Moar architectures!

seccomp NNP vs TSYNC fix

Jann Horn noticed and fixed a problem where if a seccomp filter was already in place on a process (after being installed by a privileged process like systemd, a container launcher, etc) then the setting of the “no new privs” flag could be bypassed when adding filters with the SECCOMP_FILTER_FLAG_TSYNC flag set. Bypassing NNP meant it might be possible to trick a buggy setuid program into doing things as root after a seccomp filter forced a privilege drop to fail (generally referred to as the “sendmail setuid flaw”). With NNP set, a setuid program can’t be run in the first place.

That’s it! Next I’ll cover v4.6

Edit: Added notes about “iomem=…”

© 2016, Kees Cook. This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 License.
Creative Commons License

September 27, 2016

security things in Linux v4.4

Previously: v4.3. Continuing with interesting security things in the Linux kernel, here’s v4.4. As before, if you think there’s stuff I missed that should get some attention, please let me know.

seccomp Checkpoint/Restore-In-Userspace

Tycho Andersen added a way to extract and restore seccomp filters from running processes via PTRACE_SECCOMP_GET_FILTER under CONFIG_CHECKPOINT_RESTORE. This is a continuation of his work (that I failed to mention in my prior post) from v4.3, which introduced a way to suspend and resume seccomp filters. As I mentioned at the time (and for which he continues to quote me) “this feature gives me the creeps.” :)

x86 W^X detection

Stephen Smalley noticed that there was still a range of kernel memory (just past the end of the kernel code itself) that was incorrectly marked writable and executable, defeating the point of CONFIG_DEBUG_RODATA which seeks to eliminate these kinds of memory ranges. He corrected this in v4.3 and added CONFIG_DEBUG_WX in v4.4 which performs a scan of memory at boot time and yells loudly if unexpected memory protections are found. To nobody’s delight, it was shortly discovered that UEFI leaves chunks of memory in this state too, which posed an ugly-to-solve problem (which Matt Fleming addressed in v4.6).

x86_64 vsyscall CONFIG

I introduced a way to control the mode of the x86_64 vsyscall with a build-time CONFIG selection, though the choice I really care about is CONFIG_LEGACY_VSYSCALL_NONE, to force the vsyscall memory region off by default. The vsyscall memory region was always mapped into process memory at a fixed location, and it originally posed a security risk as a ROP gadget execution target. The vsyscall emulation mode was added to mitigate the problem, but it still left fixed-position static memory content in all processes, which could still pose a security risk. The good news is that glibc since version 2.15 doesn’t need vsyscall at all, so it can just be removed entirely. Anyone with a kernel built this way who discovers they need to support a pre-2.15 glibc can still re-enable it at the kernel command line with “vsyscall=emulate”.

That’s it for v4.4. Tune in tomorrow for v4.5!

© 2016, Kees Cook. This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 License.
Creative Commons License

September 26, 2016

Obtaining maps of São Paulo

At last year's FISL we heard a very interesting talk about geosampa. Not long afterwards the site was already up and running, and I just had a look at it now: it is getting impressive. Basically, it is a site maintained by the São Paulo city government that makes all kinds of maps of the city available openly and free of charge. See...

security things in Linux v4.3

When I gave my State of the Kernel Self-Protection Project presentation at the 2016 Linux Security Summit, I included some slides covering some quick bullet points on things I found of interest in recent Linux kernel releases. Since there wasn’t a lot of time to talk about them all, I figured I’d make some short blog posts here about the stuff I was paying attention to, along with links to more information. This certainly isn’t everything security-related or generally of interest, but they’re the things I thought needed to be pointed out. If there’s something security-related you think I should cover from v4.3, please mention it in the comments. I’m sure I haven’t caught everything. :)

A note on timing and context: the momentum for starting the Kernel Self Protection Project got rolling well before it was officially announced on November 5th last year. To that end, I included stuff from v4.3 (which was developed in the months leading up to November) under the umbrella of the project, since the goals of KSPP aren’t unique to the project nor must the goals be met by people that are explicitly participating in it. Additionally, not everything I think worth mentioning here technically falls under the “kernel self-protection” ideal anyway — some things are just really interesting userspace-facing features.

So, to that end, here are things I found interesting in v4.3:

CONFIG_CPU_SW_DOMAIN_PAN

Russell King implemented this feature for ARM which provides emulated segregation of user-space memory when running in kernel mode, by using the ARM Domain access control feature. This is similar to a combination of Privileged eXecute Never (PXN, in later ARMv7 CPUs) and Privileged Access Never (PAN, coming in future ARMv8.1 CPUs): the kernel cannot execute user-space memory, and cannot read/write user-space memory unless it was explicitly prepared to do so. This stops a huge set of common kernel exploitation methods, where either a malicious executable payload has been built in user-space memory and the kernel was redirected to run it, or where malicious data structures have been built in user-space memory and the kernel was tricked into dereferencing the memory, ultimately leading to a redirection of execution flow.

This raises the bar for attackers since they can no longer trivially build code or structures in user-space where they control the memory layout, locations, etc. Instead, an attacker must find areas in kernel memory that are writable (and in the case of code, executable), where they can discover the location as well. For an attacker, there are vastly fewer places where this is possible in kernel memory as opposed to user-space memory. And as we continue to reduce the attack surface of the kernel, these opportunities will continue to shrink.

While hardware support for this kind of segregation exists in s390 (natively separate memory spaces), ARM (PXN and PAN as mentioned above), and very recent x86 (SMEP since Ivy-Bridge, SMAP since Skylake), ARM is the first upstream architecture to provide this emulation for existing hardware. Everyone running ARMv7 CPUs with this kernel feature enabled suddenly gains the protection. Similar emulation protections (PAX_MEMORY_UDEREF) have been available in PaX/Grsecurity for a while, and I’m delighted to see a form of this land in upstream finally.

To test this kernel protection, the ACCESS_USERSPACE and EXEC_USERSPACE triggers for lkdtm have existed since Linux v3.13, when they were introduced in anticipation of the x86 SMEP and SMAP features.

Ambient Capabilities

Andy Lutomirski (with Christoph Lameter and Serge Hallyn) implemented a way for processes to pass capabilities across exec() in a sensible manner. Until Ambient Capabilities, any capabilities available to a process would only be passed to a child process if the new executable was correctly marked with filesystem capability bits. This turns out to be a real headache for anyone trying to build an even marginally complex “least privilege” execution environment. The case that Chrome OS ran into was having a network service daemon responsible for calling out to helper tools that would perform various networking operations. Keeping the daemon not running as root and retaining the needed capabilities in children required conflicting or crazy filesystem capabilities organized across all the binaries in the expected tree of privileged processes. (For example you may need to set filesystem capabilities on bash!) By being able to explicitly pass capabilities at runtime (instead of based on filesystem markings), this becomes much easier.

For more details, the commit message is well-written, almost twice as long as the code changes, and contains a test case. If that isn’t enough, there is a self-test available in tools/testing/selftests/capabilities/ too.

PowerPC and Tile support for seccomp filter

Michael Ellerman added support for seccomp to PowerPC, and Chris Metcalf added support to Tile. As the seccomp maintainer, I get excited when an architecture adds support, so here we are with two. Also included were updates to the seccomp self-tests (in tools/testing/selftests/seccomp), to help make sure everything continues working correctly.

That’s it for v4.3. If I missed stuff you found interesting, please let me know! I’m going to try to get more per-version posts out in time to catch up to v4.8, which appears to be tentatively scheduled for release this coming weekend. Next: v4.4.

© 2016, Kees Cook. This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 License.
Creative Commons License

Unclaimed Alcoholic Beverages

Dave was reading New Mexico laws regarding a voter guide issue we're researching, and he came across this gem in Section 29-1-14 G of the "Law Enforcement: Peace Officers in General: Unclaimed Property" laws:

Any alcoholic beverage that has been unclaimed by the true owner, is no longer necessary for use in obtaining a conviction, is not needed for any other public purpose and has been in the possession of a state, county or municipal law enforcement agency for more than ninety days may be destroyed or may be utilized by the scientific laboratory division of the department of health for educational or scientific purposes.

We can't decide which part is more fun: contemplating what the "other public purposes" might be, or musing on the various "educational or scientific purposes" one might come up with for a beverage that's been sitting in the storage locker for months ... I'm envisioning a room surrounded by locked chain-link, with dusty shelves holding rows of half-full martini and highball glasses.

Working with terrain in #FreeCAD

Since I don't have much new FreeCAD-related development to show this week, I'll showcase an existing feature that has been around for some time: an external workbench named geodata, programmed by long-time FreeCAD community member and guru Microelly2. That workbench is part of the FreeCAD addons collection, which is a collection of additional...

September 25, 2016

Why an open Web is important when sea levels are rising

Cory Doctorow speaking on episode 221 of the excellent Changelog podcast:

“[t]here are things that are way more important than [whether the internet should or shouldn’t be free]. There’s fundamental issues of economic justice, there’s climate change, there’s questions of race and gender and gender orientation, that are a lot more urgent than the future of the internet, but [...] every one of those fights is going to be won or lost on the internet.”

September 22, 2016

Comments about OARS and CSM age ratings

I’ve had quite a few comments from people stating that using age rating classification values based on American culture is wrong. So far I’ve been using the Common Sense Media research (and various other psychology textbooks) to essentially clean-room implement a content-rating to appropriate age algorithm.

Whilst I do agree that other cultures have different sensitivities (e.g. smoking in Uganda, references to Nazis in Germany), there doesn’t appear to be much research on the suggested age ratings for different categories in those specific countries. Lots of things are outright banned from sale for various reasons (which the populace may completely ignore), but there don’t seem to be many statistics that back up the various anecdotal statements. For instance, are there any US-specific guidelines that say the age rating for playing a game that involves taking illegal drugs should be 18, rather than the 14 inferred from CSM? Or that the age rating should be 25+ for any game that features drinking alcohol in Saudi Arabia?

Suggestions (especially references) welcome. Thanks!

September 21, 2016

GNOME Software and Age Ratings

After all the tarballs for GNOME 3.22 were released, the master branch of gnome-software is now open to new features. Along with the usual cleanups and speedups, one new feature I’ve been working on is finally merging the age ratings work.


The age ratings are provided by the upstream-supplied OARS metadata in the AppData file (which can be generated easily online) and then an age classification is generated automatically using the advice from the appropriately-named Common Sense Media group. At the moment I’m not doing any country-specific mapping, although something like this will be required to show appropriate ratings when handling topics like alcohol and drugs.

At the moment the only applications with ratings in Fedora 26 will be Steam games, but I’ve also emailed any maintainer that includes an <update_contact> email address in the appdata file that also identifies as a game in the desktop categories. If you ship an application with an AppData and you think you should have an age rating please use the generator and add the extra few lines to your AppData file. At the moment there’s no requirement for the extra data, although that might be something we introduce just for games in the future.

I don’t think many other applications will need the extra application metadata, but if you know of any adult only applications (e.g. in Fedora there’s an application for the sole purpose of downloading p0rn) please let me know and I’ll contact the maintainer and ask what they think about the idea. Comments, as always, welcome. Thanks!

September 20, 2016

WebKitGTK+ 2.14

These six months have gone by so fast and here we are again, excited about the new WebKitGTK+ stable release. This is a release with almost no new API, but with major internal changes that we hope will improve all the applications using WebKitGTK+.

The threaded compositor

This is the most important change introduced in WebKitGTK+ 2.14 and what kept us busy for most of this release cycle. The idea is simple: we still render everything in the web process, but the accelerated compositing (all the OpenGL calls) has been moved to a secondary thread, leaving the main thread free to run all the other heavy tasks like layout, JavaScript, etc. The result is a smoother experience in general: since the main thread is no longer busy rendering frames, it can process the JavaScript faster, improving responsiveness significantly. For all the details about the threaded compositor, read Yoon’s post here.

So, the idea is indeed simple, but the implementation required a lot of important changes in the whole graphics stack of WebKitGTK+.

  • Accelerated compositing always enabled: first of all, with the threaded compositor the accelerated mode is always enabled, so we no longer enter/exit the accelerated compositing mode when visiting pages depending on whether the contents require acceleration or not. This was the first challenge, because there were several bugs related to accelerated compositing being always enabled, and even missing features like web view background colors that didn’t work in accelerated mode.
  • Coordinated Graphics: it was introduced in WebKit when other ports switched to doing the compositing in the UI process. We are still doing the compositing in the web process, but being in a different thread also needs coordination between the main thread and the compositing thread. We switched to using Coordinated Graphics too, but with some modifications for the threaded compositor case. This is the major change in the graphics stack compared to the previous one.
  • Adaptation to the new model: finally we had to adapt to the threaded model, mainly because some tasks that were expected to be synchronous before became asynchronous, like resizing the web view.

This is a big change that we expect will drastically improve the performance of WebKitGTK+, especially in embedded systems with limited resources, but like all big changes it can also introduce new bugs or issues. Please, file a bug report if you notice any regression in your application. If you have any problem running WebKitGTK+ in your system or with your GPU drivers, please let us know. It’s still possible to disable the threaded compositor in two different ways. You can use the environment variable WEBKIT_DISABLE_COMPOSITING_MODE at runtime, but this will disable accelerated compositing support, so websites requiring acceleration might not work. To disable the threaded compositor and bring back the previous model you have to recompile WebKitGTK+ with the option ENABLE_THREADED_COMPOSITOR=OFF.


Wayland

WebKitGTK+ 2.14 is the first release that we can consider feature complete in Wayland. While previous versions worked in Wayland there were two important features missing that made it quite annoying to use: accelerated compositing and clipboard support.

Accelerated compositing

More and more websites require acceleration to work properly, and it’s now a requirement of the threaded compositor too. WebKitGTK+ has supported accelerated compositing for a long time, but the implementation was specific to X11. The main challenge is compositing in the web process and sending the results to the UI process to be rendered on the actual screen. In X11 we use an offscreen redirected XComposite window to render in the web process, sending the XPixmap ID to the UI process, which renders the window's offscreen contents in the web view and uses the XDamage extension to track the repaints happening in the XWindow.

In Wayland we use a nested compositor in the UI process that implements the Wayland surface interface and a private WebKitGTK+ protocol interface to associate surfaces in the UI process with the web pages in the web process. The web process connects to the nested Wayland compositor and creates a new surface for the web page that is used to render accelerated contents. On every swap buffers operation in the web process, the nested compositor in the UI process is automatically notified through the Wayland surface protocol, and new contents are rendered in the web view. The main difference compared to the X11 model is that Wayland uses EGL in both the web and UI processes, so what we end up with in the UI process is not a bitmap but a GL texture that can be used to render the contents to the screen using the GPU directly. We use gdk_cairo_draw_from_gl() when available to do that, falling back to glReadPixels() and a cairo image surface for older versions of GTK+. This can make a huge difference, especially on embedded devices, so we are considering using the nested Wayland compositor even on X11 in the future if possible.


Clipboard

The WebKitGTK+ clipboard implementation relies on GTK+, and there’s nothing X11-specific in there; however, the clipboard was read/written directly by the web processes. That doesn’t work in Wayland, even though we use GtkClipboard, because Wayland only allows clipboard operations between compositor clients, and web processes are not Wayland clients. This required moving the clipboard handling from the web process to the UI process. Clipboard handling is now centralized in the UI process, and clipboard contents to be read/written are sent to the different WebKit processes using the internal IPC.

Memory pressure handler

The WebKit memory pressure handler is a monitor that watches the system memory (not only the memory used by the web engine processes) and tries to release memory under low-memory conditions. This is quite an important feature on embedded devices with memory limitations. It has been supported in WebKitGTK+ for some time, but the implementation is based on cgroups and systemd, which are not available on all systems and require user configuration. So, in practice, nobody was actually using the memory pressure handler. Watching system memory in Linux is a challenge, mainly because /proc/meminfo is not pollable, so you need manual polling. In WebKit, there’s a memory pressure handler in every secondary process (Web, Plugin and Network), so waking up every second to read /proc/meminfo from every web process would not be acceptable. This is not a problem when using cgroups, because the kernel interface provides a way to poll an EventFD to be notified when memory usage is critical.

WebKitGTK+ 2.14 has a new memory monitor, used only when cgroups/systemd is not available or configured, based on polling /proc/meminfo to ensure the memory pressure handler is always available. The monitor lives in the UI process, to ensure there’s only one process doing the polling, and uses a dynamic poll interval based on the last system memory usage to read and parse /proc/meminfo in a secondary thread. Once memory usage is critical, all the secondary processes are notified using an EventFD. Using an EventFD for this monitor too is not only more efficient than using a pipe or sending an IPC message, but it also allows us to keep almost the same implementation in the secondary processes, which either monitor the cgroups EventFD or the UI process one.
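The shape of that monitor is roughly the following (a simplified sketch, not WebKit's actual code; the threshold and intervals are made up):

#include <stdint.h>
#include <stdio.h>
#include <unistd.h>
#include <sys/eventfd.h>

static long mem_available_kb(void)
{
    char line[256];
    long kb = -1;
    FILE *f = fopen("/proc/meminfo", "r");

    if (!f)
        return -1;
    while (fgets(line, sizeof(line), f)) {
        if (sscanf(line, "MemAvailable: %ld kB", &kb) == 1)
            break;
    }
    fclose(f);
    return kb;
}

int main(void)
{
    /* In the real design the eventfd is shared with the secondary
     * processes; here it just stands in for "notify everyone". */
    int efd = eventfd(0, EFD_CLOEXEC);
    const long critical_kb = 100 * 1024;     /* made-up threshold */

    if (efd < 0)
        return 1;
    for (;;) {
        long avail = mem_available_kb();

        if (avail >= 0 && avail < critical_kb) {
            uint64_t one = 1;
            if (write(efd, &one, sizeof(one)) < 0)
                perror("eventfd write");
        }
        /* A real monitor adapts the interval to the last reading:
         * poll faster as memory gets tighter. */
        sleep(avail >= 0 && avail < 2 * critical_kb ? 1 : 5);
    }
}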

Other improvements and bug fixes

Like in all other major releases there are a lot of other improvements, features and bug fixes. The most relevant ones in WebKitGTK+ 2.14 are:

  • The HTTP disk cache implements speculative revalidation of resources.
  • The media backend now supports video orientation.
  • Several bugs have been fixed in the media backend to prevent deadlocks when playing HLS videos.
  • The number of file descriptors that are kept open has been drastically reduced.
  • Fixed poor performance with the modesetting Intel driver when DRI3 is enabled.

Frogs on the Rio, and Other Amusements

Saturday, a friend led a group hike for the nature center from the Caja del Rio down to the Rio Grande.

The Caja (literally "box", referring to the depth of White Rock Canyon) is an area of national forest land west of Santa Fe, just across the river from Bandelier and White Rock. Getting there involves a lot of driving: first to Santa Fe, then out along increasingly dicey dirt roads until the road looks too daunting and it's time to get out and walk.

[Dave climbs the Frijoles Overlook trail] From where we stopped, it was only about a six mile hike, but the climb out is about 1100 feet and the day was unexpectedly hot and sunny (a mixed blessing: if it had been rainy, our Rav4 might have gotten stuck in mud on the way out). So it was a notable hike. But well worth it: the views of Frijoles Canyon (in Bandelier) were spectacular. We could see the lower Bandelier Falls, which I've never seen before, since Bandelier's Falls Trail washed out below the upper falls the summer before we moved here. Dave was convinced he could see the upper falls too, but no one else was convinced, though we could definitely see the red wall of the maar volcano in the canyon just below the upper falls.

[Canyon Tree Frog on the Rio Grande] We had lunch in a little grassy thicket by the Rio Grande, and we even saw a few little frogs, well camouflaged against the dirt: you could even see how their darker brown spots imitated the pebbles in the sand, and we wouldn't have had a chance of spotting them if they hadn't hopped. I believe these were canyon treefrogs (Hyla arenicolor). It's always nice to see frogs -- they're not as common as they used to be. We've heard canyon treefrogs at home a few times on rainy evenings: they make a loud, strange ratcheting noise which I managed to record on my digital camera. Of course, at noon on the Rio the frogs weren't making any noise: just hanging around looking cute.

[Chick Keller shows a burdock leaf] Sunday we drove around the Pojoaque Valley following their art tour, then after coming home I worked on setting up a new sandblaster to help with making my own art. The hardest and least fun part of welded art is cleaning the metal of rust and paint, so it's exciting to finally have a sandblaster to help with odd-shaped pieces like chains.

Then tonight was a flower walk in Pajarito Canyon, which is bursting at the seams with flowers, especially purple aster, goldeneye, Hooker's evening primrose and bahia. Now I'll sign off so I can catalog my flower photos before I forget what's what.

September 18, 2016

#FreeCAD news and Arch workflow

So, let's continue to post more often about FreeCAD. I'm beginning to organize a bit better, gathering screenshots and ideas during the week, so I'll try to keep this going. This week has seen many improvements, especially because we've been doing intense FreeCAD work with OpeningDesign. Like every time you make intense use of FreeCAD or...

September 14, 2016

LVFS and ODRS are down

The LVFS firmware server and ODRS reviews server are down because my credit card registered with OpenShift expired. I’ve updated my credit card details, paid the pending invoice and still can’t start any server. I rang customer service who asked me to send an email and have heard nothing back.


I have backups a few days old, but this whole situation is terrible on so many levels.

EDIT: cdaley has got everything back working again, it appears I found a corner case in the code that deals with payments.

bicycle, node, network, design

This Monday ignite berlin took place and I did a fun, five minute, pecha kucha talk that also contained some systems analysis and a design insight. For a full transcript, read on.

ring ring

There are two things that you need to know about me. The first is that I am dutch and the second is that I am becoming a sentimental old fool. I combine the two when I do cycling holidays in Holland:

cycling in the dutch fields my partner Carmen leads the way

For this we use the fietsroutenetwerk, the bicycle route network of the Netherlands. This was designed for recreational cycling in the countryside. It was rolled out between 2003 and 2012. The network is point‐to‑point:

two points connected by a line, arrows pointing both ways

Between two neighbouring nodes there is complete signage—with no gaps—to get you from one to the other. And this in both directions. Here are some of these signs:

several roadside routing signposts sources: fietsen op de fiets, het groene woud, gps.nl

The implicit promise is that these are nice routes. That means: away from cars as much as possible. And scenic—through fields, heath and forest.

Using the nodes, local networks have been designed and built:

a network of nodes on a local map

These networks are purely infrastructural; there is no preconception of what is ‘proper’ or ‘typical’ usage. They accommodate routes of any shape and any length.

At every node, one finds a local map, with the network:

on-location display of the local map source: wikimedia commons

It can be used for planning, reference and simply reassurance. Besides that, there are old‑fashioned maps and plenty of apps and websites for planning and sharing of routes.

The local networks were knitted together to form a national network:

a dense network covers the whole country

Looking at this map I see interesting differences in patterns and densities. I don’t think this only reflects the geography, but also the character of the locals; what they consider proper cycling infrastructure and scenic routes.

The network was not always nation-wide. It was rolled out over a period of nine years, one local network at a time. I still remember crossing a province border and (screech!) there was no more network. It was back to old‑fashioned map reading and finding the third street on the left.

not invented here

I was shocked to find out that the Dutch did not invent this network system. We have to go back to the 1980s, north‐east Belgium: all the coal mines are closing. Mining engineer Hugo Bollen proposes to create a recreational cycling network, in order to initiate economic regeneration of the region. Here’s Hugo:

Hugo Bollen rides a bike in nature source: toerisme limburg

He designed the network rules explained in this blog post. The Belgians actually had to build(!) all of the cycling infrastructure, so it took them until 1995 to open the first local network. It now brings in 16.5 million euros a year to the region.

how many?

I got curious about the total number of network nodes in Holland. I could not find this number on the internet. The net is really quite short on stats and data of the cycling network. So I needed to find out by myself. What I did was take one of my maps—

a traditional cycling map that covers a part of holland

And I counted all the nodes—there were 309. I multiplied this by the number of maps that cover all of Holland. Then I took 75% of that number to deal with map overlaps and my own over‐enthusiasm. The result: I estimate that the dutch network consists of 9270 nodes.

in awe

The reason I got curious about that number is that every time I use the network, I am impressed by a real‐genius design decision (and I don’t get to say that very often). It makes all the difference, when using the network in anger.

All these nearly‐ten thousand nodes are identified by a two‑digit number. Not the four (or, more future‐proof, five) digits one would expect. All the nodes are simply numbered 1 through 99, and then they start at one again. And shorter is much better:

cycling route signage with direction for node 02 source: recreatieschap westfriesland

Two digits is much faster to read and write down. It is easier to memorise, short‐term. It is instant to compare and confirm. Remember, most of these actions are performed while riding a bike at a nice cruising speed.


Pushing through this two‑digit design must have been asking for trouble. Most of us can just imagine the bike‐shedding: ‘what if cyclists really need to be able to uniquely identify a node in the whole nation?’ Or: ‘will cyclists get confused by these repeating numbers?’

This older cycling signpost system has a five‑digit identification number:

a cycling signpost showing directions to nearby villages and towns source: dirk de baan

This number takes several steps to process. Two‑digit numbers are humane numbers. They exploit that way‐finding is a very local activity—although one can cover 130km a day on a bike.


Wrapping up, the cycling network is a distributed network:

three graphs: a centralised, a decentralised and a distributed network source: j4n

All nodes are equal and so are all routes. Cyclists route themselves. In that way the network works quite like… the internet.

We could call it the democratic network, because it treats everyone as equals. Or we could call it the liberal network (that would be very dutch). Or—in a post‐modern way—we could call it the atomised network.

I simply call it the bicycle route network of the Netherlands.

a vista over dutch fields with a calf and two cyclists

September 12, 2016

Art on display at the Bandelier Visitor Center

As part of the advertising for next month's Los Alamos Artists Studio Tour (October 15 & 16), the Bandelier Visitor Center in White Rock has a display case set up, and I have two pieces in it.

[my art on display at Bandelier]

The Velociraptor on the left and the hummingbird at right in front of the sweater are mine. (Sorry about the reflections in the photo -- the light in the Visitor Center is tricky.)

The turtle at front center is my mentor David Trujillo's, and I'm pretty sure the rabbit at far left is from Richard Swenson.

The lemurs just right of center are some of Heather Ward's fabulous scratchboard work. You may think of scratchboard as a kids' toy (I know I used to), but Heather turns it into an amazing medium for wildlife art. I'm lucky enough to get to share her studio for the art tour: we didn't have a critical mass of artists in White Rock, just two of us, so we're borrowing space in Los Alamos for the tour.

September 09, 2016

Click Hooks

After being asked about what I like about Click hooks I thought it would be nice to write up a little bit of the why behind them in a blog post. The precursor to this story is that I told Colin Watson that he was wrong to build hooks like this; he kindly corrected me and helped me fix my code to match but I still wasn't convinced. Now today I see some of the wisdom in the Click hook design and I'm happy to share it.

The standard way to think about hooks is as a way to react to changes in the system. If a new application is installed then the hook gets information about the application and responds to the new data. This is how most libraries work, providing signals about the data that they maintain, and we apply that same logic to thinking about filesystem hooks. But filesystem hooks are different because the coherent state is harder to query. In your library you might respond to the signal for a few things, but in many code paths the chances are you'll just go through the list of original objects to do operations. With filesystem hooks that complete state is almost never used, only the caches that are created by the hooks themselves.

Click hooks work by creating a directory of symbolic links that matches the current state of the system, and then asking you to ensure your cache matches that state of the system. This seems inefficient because you have to determine which parts of your cache need to change, which get removed and which get added. But it results in better software, because your software, including your hooks, has errors in it. I'm sorry to be the first one to tell you, but there are bugs. If your software is 99% correct, there is still something it is doing wrong. When you have delta updates that update the cache, that error compounds and never gets completely corrected with each update, because the complete state is never examined. So slowly the quality of your cache gets worse, not awful, but worse. By transferring the current system state to the cache each time, you get the error rate of your software in the cache, but you don't get the compounded error rate of each delta. This adds up.
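To make that concrete, here is a hypothetical hook doing a full-state rebuild; the directory path and cache format are invented for illustration, but the point is that the cache is regenerated from the complete set of symlinks every time, so stale entries from earlier buggy runs simply fall away:

#include <dirent.h>
#include <limits.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    /* Both paths are hypothetical; a real hook gets them from its
     * packaging system. */
    const char *hookdir = "/var/lib/example-hook/installed";
    FILE *cache = fopen("/var/cache/example-hook/cache.txt", "w");
    DIR *dir = opendir(hookdir);
    struct dirent *entry;
    char linkpath[PATH_MAX], target[PATH_MAX];

    if (!dir || !cache)
        return 1;

    /* Every run starts from the complete current state, so any stale
     * or wrong entries from previous runs simply disappear. */
    while ((entry = readdir(dir)) != NULL) {
        ssize_t len;

        if (entry->d_name[0] == '.')
            continue;
        snprintf(linkpath, sizeof(linkpath), "%s/%s", hookdir, entry->d_name);
        len = readlink(linkpath, target, sizeof(target) - 1);
        if (len < 0)
            continue;
        target[len] = '\0';
        fprintf(cache, "%s -> %s\n", entry->d_name, target);
    }
    closedir(dir);
    fclose(cache);
    return 0;
}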

The design of the hooks system in Click might feel wrong as you start to implement one, but I think that after you create a few hooks you'll find there is wisdom in it. And as you use other hook systems in other platforms think about checking the system state to ensure you're always creating the best cache possible, even if the hook system there didn't force you to do it.

September 08, 2016

Watch this person use Excel for an hour

Joel Spolsky, of Stack Overflow, Trello, and Fog Creek, did an internal presentation where he just walked through how he uses Microsoft Excel for about an hour.

It’s riveting for two reasons.

First, I learned a bunch of techniques that I didn’t know existed (transpose! named values! oh my!). Unfortunately, many of those don’t apply to Google Spreadsheets, which is worth using due to the simple and powerful collaboration tools. A few of the techniques are universal to spreadsheets, though.

Second, he’s good at it. There is something compelling about watching someone with deep skill and knowledge do their work, regardless of what it is. In the same way, I can enjoy watching a skilled musician perform regardless of my interest and taste in their musical genre.

This style of presentation, featuring a simple tour of the just-beyond-basic features, is a great way to share with co-workers. I’ve learned a ton from watching Stephen use Photoshop, and I got hooked on split-panes in iTerm after watching Malena screen-share in an unrelated presentation.

September 07, 2016

darktable 2.0.6 released

we're proud to announce the sixth bugfix release for the 2.0 series of darktable, 2.0.6!

the github release is here: https://github.com/darktable-org/darktable/releases/tag/release-2.0.6.

as always, please don't use the autogenerated tarball provided by github, but only our tar.xz. the checksums are:

2368c1865221032061645342ba8c00bcd6d224e9829a55bc610e6cb67de738c1  darktable-2.0.6.tar.xz
8376ab1bb74f4a25998ff1a7f03c8498b57064bf27700c9af53a7356e5a2ee1e  darktable-2.0.6.dmg

and the changelog as compared to 2.0.5 can be found below.

New Features

  • Jpeg format writer: use libexiv2 to write metadata, like with other formats
  • Accept non-mosaiced raw files with 4 channels, assume they are RGBA (alpha channel is ignored)


Bugfixes

  • Once again, fix for yet another gtk theming regression...
  • OpenCL: properly discard CPU-based OpenCL devices. Fixes crashes on startup with some broken OpenCL implementations like pocl.
  • darktable-cli: do not even try to open display, we don't need it.
  • Rawspeed: NikonDecoder: stop accepting generic camera entries. Fixes multitude of Nikon raw loading issues.
  • OpenCL: fix border handling in crop&rotate module
  • Hotpixels iop: make it actually work for X-Trans
  • Clipping IOP: scale width of gray crop path with zoom level
  • One more fixup to canon lens name reading from exif
  • Fixup Bayer pattern for Olympus SP570UZ
  • Fix internal build issue: do not assume that Perl's @INC contains '.'

Base Support

  • Canon EOS-1D X Mark II
  • Canon EOS 1300D
  • Canon EOS Kiss X80
  • Canon EOS Rebel T6
  • Canon EOS M10
  • Canon PowerShot G7 X Mark II
  • Canon PowerShot G9 X
  • Fujifilm X-T2
  • GITUP GIT2 action camera
  • Panasonic DMC-FZ18 (16:9, 3:2)
  • Panasonic DMC-FZ50 (16:9, 3:2)
  • Pentax K-1
  • Sony DSLR-A380
  • Sony ILCE-6300
  • Nikon D500
  • Some other whitelevel fixups for some other Nikon cameras (in particular, mostly for 12-bit uncompressed raws)

White Balance Presets

  • Canon EOS-1D X Mark II
  • Canon EOS 1300D
  • Canon EOS Kiss X80
  • Canon EOS Rebel T6
  • Canon EOS M10
  • Canon PowerShot G7 X Mark II
  • Fujifilm X-T10
  • Sony ILCE-6300

Translations Updates

  • Slovak

Design sprints and healthcare

With the help of a few of my co-workers, I've written about a new design sprint process we've been using at silverorange, and how it applies in healthcare organizations. It started as a post on our silverorange blog, but was pulled into GV's Sprint Stories publication (thanks to John Zeratsky).

If you love design processes and healthcare (and who doesn't), read the article: Running a design sprint in a healthcare organization

On Surplus

“We as human beings find a way to waste most surpluses that technology hands to us.”

—Stewart Butterfield of Slack speaking on The Ezra Klein Show podcast.

He also makes a good analogy between our difficulty managing the new ability to communicate with anyone/anytime and the difficulty of dealing with the abundance of easy/cheap calories available to many of us.

September 06, 2016

The Taos Earthships (and a lovely sunset)

We drove up to Taos today to see the Earthships.

[Taos Earthships] Earthships are sustainable, completely off-the-grid houses built of adobe and recycled materials. That was pretty much all I knew about them, except that they were weird looking; I'd driven by on the highway a few times (they're on highway 64 just west of the beautiful Rio Grande Gorge Bridge) but never stopped and paid the $7 admission for the self-guided tour.

[Earthship construction] Seeing them up close was fun. The walls are made of old tires packed with dirt, then covered with adobe. The result is quite strong, though like all adobe structures it requires regular maintenance if you don't want it to melt away. For non load bearing walls, they pack adobe around old recycled bottles or cans.

The houses have a passive solar design, with big windows along one side that make a greenhouse for growing food and freshening the air, as well as collecting warmth in cold weather. Solar panels provide power -- supposedly along with windmills, but I didn't see any windmills in operation, and the ones they showed in photos looked too tiny to offer much help. To help make the most of the solar power, the house is wired for DC, and all the lighting, water pumps and so forth run off low voltage DC. There's even a special DC refrigerator. They do include an AC inverter for appliances like televisions and computer equipment that can't run directly off DC.

Water is supposedly self sustaining too, though I don't see how that could work in drought years. As long as there's enough rainfall, water runs off the roof into a cistern and is used for drinking, bathing etc., after which it's run through filters and then pumped into the greenhouse. Waste water from the greenhouse is used for flushing toilets, after which it finally goes to the septic tank.

All very cool. We're in a house now that makes us very happy (and has excellent passive solar, though we do plan to add solar panels and a greywater system some day) but if I was building a house, I'd be all over this.

We also discovered an excellent way to get there without getting stuck in traffic-clogged Taos (it's a lovely town, but you really don't want to go near there on a holiday, or a weekend ... or any other time when people might be visiting). There's a road from Pilar that crosses the Rio Grande then ascends up to the mesa high above the river, continuing up to highway 64 right near the earthships. We'd been a little way up that road once, on a petroglyph-viewing hike, but never all the way through. The map said it was dirt from the Rio all the way up to 64, and we were in the Corolla, since the Rav4's battery started misbehaving a few days ago and we haven't replaced it yet.

So we were hesitant. But the nice folks at the Rio Grande Gorge visitor center at Pilar assured us that the dirt section ended at the top of the mesa and any car could make it ("it gets bumpy -- a New Mexico massage! You'll get to the top very relaxed"). They were right: the Corolla made it with no difficulty and it was a much faster route than going through Taos.

[Nice sunset clouds in White Rock] We got home just in time for the rouladen I'd left cooking in the crockpot, and then finished dinner just in time for a great sunset sky.

A few more photos: Earthships (and a great sunset).

September 04, 2016

From the Community Vol. 1

Welcome to the first installment of From the Community, a (hopefully) quarterly blog post to highlight a few of the things our community members have been doing!

Rapid Photo Downloader Process Model

@damonlynch has a great write up of Rapid Photo Downloader’s process model. Rapid Photo Downloader is built using Python, so if you’re looking for a good way to add threads to your Python program, this write up has some good information for you. Check it out!

rpd process model

Community-built Software downloads page

Free Software development tends to move at a pretty good pace, so there is always something new to try out! Not all of the new things warrant a new release, but our community steps up and builds the software so that others can use and test it! Instead of random links to dropboxes and such, we’ve created a Community-built Software page to help centralize things and make it easy for our users to find and download the freshest builds of software from our great community members. Keep in mind that support may be limited for these builds and they’re considered testing, so quality may vary, but if you covet the newest, shiniest things, this is the place for you!

Glitch art filters coming to G’MIC

G’MIC will be getting some cool glitch art filters in 1.7.6. @thething is interested in glitch art and requested some new filters in G’MIC, and @David_Tschumperle delivered very quickly!

You can flip blocks:

GMIC block flipping

and warp your images:

GMIC image warping

An Alternative to Watermarking

Watermarking is ugly and takes focus away from your image. Why not try adding an attribution bar to your images instead? In this post, @patdavid lays out how to add a bar underneath your image with your name, the image title, and a little logo. @David_Tschumperle followed that effort up with an alternate implementation using G’MIC instead of ImageMagick. Lastly, @vato rolled the ImageMagick version into a bash script with the necessary parameters exposed as variables at the beginning of the script.

Here is an example image by @Morgan_Hardwood:

attribution bar example

Help Author a Tutorial for Beginners

Finally, we’re still working on our beginner article to help new users navigate the myriad of free software photography software that is out there. If you have ideas, or better yet, want to author a bit of content with our community, please join and help out! The post is community wiki and has complete revision control, so don’t be afraid to jump in and contribute!

September 02, 2016

Fedora 25 and Additional Software Sources

I was asked to produce a checklist for applications that we want to show up in GNOME Software in Fedora 25. In this post I’ll refer to applications as graphical programs, rather than other system add-on components like drivers and codecs (which the next post will talk about). There is a big checklist, which really is the bare minimum that the distributor has to provide so that the application is listed correctly. If any of these points is causing problems or is confusing, please let me know and I’ll do my best to help.

So, these things really have to be done:

  • Verify that you ship a .desktop file for each built application, and that these keys exist: Name, Comment, Icon, Categories, Keywords and Exec and that desktop-file-validate correctly validates the file.
  • Verify that a PNG (with a transparent background) or SVG icon is installed in /usr/share/icons, /usr/share/icons/hicolor/*/apps/*, or /usr/share/${app_name}/icons/* and is at least 64×64 in size.
  • At least one valid AppData file with the suffix .appdata.xml file must be installed into /usr/share/appdata with an <id> that matches the name of the .desktop file, e.g. gimp.appdata.xml. Ideally the name of both the desktop file and appdata should be reverse DNS, e.g. com.hughski.ColorHug.desktop rather than colorhug-client.desktop although this isn’t critically important.
  • Include several 16:9 aspect screenshots in the AppData file along with a compelling translated description made up of multiple paragraphs. Make sure you follow the style guide, which can be tested using appstream-util validate foo.appdata.xml
  • Make sure that there are not two applications installed with one package; in this case split up the package so that there are multiple subpackages or mark one of the .desktop files as NoDisplay=true. Make sure the application-subpackages depend on any -common subpackage and deal with upgrades (perhaps using a metapackage) if you’ve shipped the application before.
  • Make sure your application is visible in the example.xml.gz file when running appstream-builder on the binary rpm(s).
  • Make sure the AppStream metadata is regenerated when the application is updated in the repo, for more details see an entire blog post on this
  • Ensure that enabled_metadata=1 is set in the .repo file. This means that PackageKit will automatically download just the application metadata even when the repository is disabled.

August 30, 2016

Back from Krita Sprint 2016

Last week, I spent 4 days at the Krita Sprint in Deventer, where several contributors gathered to discuss the current hot topics, draw and hack together.

You can read a global report of the event on krita.org news.

On my side, besides meeting old and new friends, and discussing animation, brushes and vector stuff, I made three commits:
  • replace some duplicate icons by aliases in qrc files
  • update the default workspaces
  • add a new “Eraser Switch Opacity” feature (this one is on a separate branch for now)

I also filed new tasks on phabricator for two feature requests to improve some color and animation workflow:



Once again, I feel it’s been a great and productive meeting for everyone. A lot of cool things are ready for the next Krita version, which is exciting! Many thanks to KDE e.V. for the travel support, and to the Krita foundation for hosting the event and providing accommodation and food.

August 29, 2016

Happy Porting!

Last year, I wrote about how library authors should pretty darn well never ever make their users spend time on "porting". Porting is always a waste of time. No matter how important the library author thinks his newly fashionable way of doing stuff is, it is never ever as important as the time porting takes away from the application author's real mission: the work on their applications. I care foremost about my users; I expect a library author to care about their users, i.e., people like me.

So, today I was surprised by Goodbye, Q_FOREACH by Marc Mutz. (Well known for his quixotic crusade to de-Qt Qt.)

Well, fuck.

Marc, none, not a single one of all of the reasons you want to deprecate Q_FOREACH is a reason I care even a little bit about. It's going to be deprecated? Well, that's a decision, and a dumb one. It doesn't work on std containers, QVarLengthArray or C arrays? I don't use it on those. It adds 100 bytes of text size? Piffle. It makes it hard to reason about the loop for you? I don't care.

What I do care about is the 1559 places where we use Q_FOREACH in Krita. Porting this will take weeks.

Marc, I hope that you will have a patch ready for us on phabricator soon: you can add it to this project and keep iterating until you've fixed all the bugs.

Happy porting, Marc!

Come into the real world and learn how well this let's-deprecate-and-let-the-poor-shmuck-port-their-code attitude works out.

August 26, 2016

More map file conversions: ESRI Shapefiles and GeoJSON

I recently wrote about Translating track files between mapping formats like GPX, KML, KMZ and UTM. But there's one common mapping format that keeps coming up that's hard to handle using free software, and tricky to translate to other formats: ESRI shapefiles.

ArcGIS shapefiles are crazy. Typically they come as an archive that includes many different files, with the same base name but different extensions: filename.sbn, filename.shx, filename.cpg, filename.sbx, filename.dbf, filename.shp, filename.prj, and so forth. Which of these are important and which aren't?

To be honest, I don't know. I found this description in my searches: "A shape file map consists of the geometry (.shp), the spatial index (.shx), the attribute table (.dbf) and the projection metadata file (.prj)." Poking around, I found that most of the interesting metadata (trail name, description, type, access restrictions and so on) was in the .dbf file.

You can convert the whole mess into other formats using the ogr2ogr program. On Debian it's part of the gdal-bin package. Pass it the .shp filename, and it will look in the same directory for files with the same basename and other shapefile-related extensions. For instance, to convert to KML:

 ogr2ogr -f KML output.kml input.shp

Unfortunately, most of the metadata -- comments on trail conditions and access restrictions that were in the .dbf file -- didn't make it into the KML.

GPX was even worse. ogr2ogr knows how to convert directly to GPX, but that printed a lot of errors like "Field of name 'foo' is not supported in GPX schema. Use GPX_USE_EXTENSIONS creation option to allow use of the <extensions> element." So I tried ogr2ogr -f "GPX" -dsco GPX_USE_EXTENSIONS=YES output.gpx input.shp but that just led to more errors. It did produce a GPX file, but it had almost no useful data in it, far less than the KML did. I got a better GPX file by using ogr2ogr to convert to KML, then using gpsbabel to convert that KML to GPX.

Use GeoJSON instead to preserve the metadata

But there is a better way: GeoJSON.

ogr2ogr -f "GeoJSON" -t_srs crs:84 output.geojson input.shp

That preserved most, maybe all, of the metadata the .dbf file and gave me a nicely formatted file. The only problem was that I didn't have any programs that could read GeoJSON ...

[PyTopo showing metadata from GeoJSON converted from a shapefile]

But JSON is a nice straightforward format, easy to read and easy to parse, and it took surprisingly little work to add GeoJSON parsing to PyTopo. Now, at least, I have a way to view the maps converted from shapefiles, click on a trail and see the metadata from the original shapefile.
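Walking the format really is only a few lines; here's an illustrative sketch in C using json-glib (PyTopo's actual parser is Python, and the "name" property below is just an example, since the real keys come from whatever columns were in the .dbf):

#include <json-glib/json-glib.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    GError *error = NULL;
    JsonParser *parser = json_parser_new();
    JsonObject *root;
    JsonArray *features;
    guint i, n;

    if (argc < 2 || !json_parser_load_from_file(parser, argv[1], &error)) {
        fprintf(stderr, "usage: %s file.geojson (%s)\n", argv[0],
                error ? error->message : "no file given");
        return 1;
    }

    /* A GeoJSON file is a FeatureCollection: a "features" array where
     * each feature carries a "geometry" and a "properties" object. */
    root = json_node_get_object(json_parser_get_root(parser));
    features = json_object_get_array_member(root, "features");
    n = json_array_get_length(features);

    for (i = 0; i < n; i++) {
        JsonObject *feature = json_array_get_object_element(features, i);
        JsonObject *props = json_object_get_object_member(feature, "properties");

        /* "name" is only an example; the keys come from the .dbf columns. */
        if (props && json_object_has_member(props, "name"))
            printf("feature %u: %s\n", i,
                   json_object_get_string_member(props, "name"));
    }

    g_object_unref(parser);
    return 0;
}

Build it against json-glib (pkg-config --cflags --libs json-glib-1.0); it just walks the features array and prints one property per trail.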

See also:

August 25, 2016

Summer Talks, PurpleEgg

I recently gave talks at Flock in Krakow and GUADEC in Karlsruhe:

Flock: What’s Fedora’s Alternative to vi httpd.conf Video Slides: PDF ODP
GUADEC: Reworking the desktop distribution Video Slides: PDF ODP

The topics were different but related: the Flock talk discussed how to make things better for a developer using Fedora Workstation as their development workstation, while the GUADEC talk was about the work we are doing to move Fedora to a model where the OS is immutable and separate from applications. A shared idea of the two talks is that your workstation is not your development environment. Installing development tools, language runtimes, and header files as part of your base operating system implies that every project you are developing wants the same development environment, and that simply is not the case.

At both talks, I demo’ed a small project I’ve been working on with the codename of PurpleEgg (I didn’t have that codename yet at Flock – the talk instead talks about “NewTerm” and “fedenv”.) PurpleEgg is about easily creating containerized environments dedicated to a project, and about integrating those projects into the desktop user interface in a natural, slick way.

The command line client to PurpleEgg is called pegg:

[otaylor@localhost ~]$ pegg create django mydjangosite
[otaylor@localhost ~]$ cd ~/Projects/mydjangosite
[otaylor@localhost mydangjosite]$  pegg shell
[[mydjangosite]]$ python manage.py runserver
August 24, 2016 - 19:11:36
Django version 1.9.8, using settings 'mydjangosite.settings'
Starting development server at
Quit the server with CONTROL-C.

The “pegg create” step did the following:

  • Created a directory ~/Projects/mydjangosite
  • Created a file pegg.yaml with the following contents:
base: fedora:24
- python3-virtualenv
- python3-django
  • Created a Docker image that is the Fedora 24 base image plus the specified packages
  • Created a venv/ directory in the specified directory and initialized a virtual environment there
  • Ran ‘django-admin startproject’ to create the standard Django project

pegg shell

  • Checked to see if the Docker image needed updating
  • Ran a bash prompt inside the Docker image with a customized prompt
  • Activated the virtual environment

The end result is that, without changing the configuration of the host machine at all, in a few simple commands we got to a place where we can work on a Django project just as it is documented upstream.

But with the PurpleEgg application installed, you get more: you get search results in the GNOME Activities Overview for your projects, and when you activate a search result, you see a window like:


We have a terminal interface specialized for our project:

  • We already have the pegg environment activated
  • New tabs also open within that environment
  • The prompt is uncluttered, with relevant information moved to the header bar
  • If the project is checked into Git, the header bar also tracks the Git branch

There’s a fair bit more that could be done: a GUI for creating and importing projects as in GNOME Builder, GUI integration for Vagrant and Docker, configuring frequently used commands in pegg.yaml, etc.

At the most basic, the idea is that server-side development is terminal-centric and also somewhat specialized – different languages and frameworks have different ways of doing things. PurpleEgg embraces working like that, but adds just enough conventions so that we can make things better for the developer – just because the developer wants a terminal doesn’t mean that all we can give them is a big pile of terminals.

PurpleEgg codedump is here. Not warrantied to be fit for any purpose.

August 24, 2016

Getting S3 Statistics using S3stat

I’ve been using Amazon S3 as a CDN for the LVFS metadata for a few weeks now. It’s been working really well and we’ve shifted a huge number of files in that time already. One thing that made me very anxious was the bill that I was going to get sent by Amazon, as it’s kinda hard to work out the total when you’re serving cough millions of small files rather than a few large files to a few people. I also needed to keep track of which files were being downloaded for various reasons and the Amazon tools make this needlessly tricky.

I signed up for the free trial of S3stat and so far I’ve been pleasantly surprised. It seems to do a really good job of graphing the spend per day and also allowing me to drill down into any areas that need attention, e.g. looking at the list of 404 codes various people are causing. It was fairly easy to set up, although it did take a couple of days to start processing logs (which is all explained in the setup). Amazon really should be providing something similar.

Screenshot from 2016-08-24 11-29-51

For people providing less than 200,000 hits per day it’s only $10, which seems pretty reasonable. For my use case (bazillions of small files) it rises to a little-harder-to-justify $50/month.

I can’t justify the $50/month for the LVFS, but luckily for me they have a Cheap Bastard Plan (their words, not mine!) which swaps a bit of advertising for a free unlimited license. Sounds like a fair swap, and means it’s available for a lot of projects where $600/yr is better spent elsewhere.

Devo Firmware Updating

Does anybody have a Devo RC transmitter I can borrow for a few weeks? I need model 6, 6S, 7E, 8, 8S, 10, 12, 12S, F7 or F12E — it doesn’t actually have to work, I just need the firmware upload feature for testing various things. Please reshare/repost if you’re in any UK RC groups that could help. Thanks!

August 18, 2016

Updating Firmware on 8Bitdo Game Controllers

I’ve spent a few days adding support for upgrading the firmware of the various wireless 8Bitdo controllers into fwupd. In my opinion, the 8Bitdo hardware is very well made and reasonably priced, and also really good retro fun.

Although they use a custom file format for firmware, and also use a custom flashing protocol (seriously hardware people, just use DFU!) it was quite straightforward to integrate into fwupd. I’ve created a few things to make this all work:

  • a small libebitdo library in fwupd
  • a small ebitdo-tool binary that talks to the device and can flash a vendor supplied .dat file
  • an ebitdo fwupd provider that uses libebitdo to flash the device
  • a firmware repo that contains all the extra metadata for the LVFS

I guess I need to thank the guys at 8Bitdo; after asking a huge number of questions they open sourced their OS-X and Windows flashing tools, and also allowed me to distribute the firmware binary on the LVFS. Doing both of those things made it easy to support the hardware.

Screenshot from 2016-08-18 10-36-56

The result of all this is that you can now do fwupd update when the game-pad is plugged in using the USB cable (not just connected via Bluetooth) and the firmware will be updated to the latest version. Updates will show in GNOME Software, and the world is one step closer to being awesome.

August 17, 2016

Making New Map Tracks with Google Earth

A few days ago I wrote about track files in maps, specifically Translating track files between mapping formats. I promised to follow up with information on how to create new tracks.

For instance, I have some scans of old maps from the 60s and 70s showing the trails in the local neighborhood. There's no newer version. (In many cases, the trails have disappeared from lack of use -- no one knows where they're supposed to be even though they're legally trails where you're allowed to walk.) I wanted a way to turn trails from the old map into GPX tracks.

My first thought was to trace the old PDF map. A lot of web searching found a grand total of one page that talks about that: How to convert image of map into vector format?. It involves using GIMP to make an image containing just black lines on a white background, saving as uncompressed TIFF, then using a series of commands in GRASS. I made a start on that, but it was looking like it might be a big job that way. Since a lot of the old trails are still visible as faint traces in satellite photos, I decided to investigate tracing satellite photos in a map editor first, before trying the GRASS method.

But finding a working open source map editor turns out to be basically impossible. (Opportunity alert: it actually wouldn't be that hard to add that to PyTopo. Some day I'll try that, but now I was trying to solve a problem and hoping not to get sidetracked.)

The only open source map editor I've found is called Viking, and it's terrible. The user interface is complicated and poorly documented, and I could input only two or three trail segments before it crashed and I had to restart. Saving often, I did build up part of the trail network that way, but it was so slow and tedious restoring between crashes that I gave up.

OpenStreetMap has several editors available, and some of them are quite good, but they're (quite understandably) oriented toward defining roads that you're going to upload to the OpenStreetMap world map. I do that for real trails that I've walked myself, but it doesn't seem appropriate for historical paths between houses, some of which are now fenced off and few of which I've actually tried walking yet.

Editing a track in Google Earth

In the end, the only reasonable map editor I found was Google Earth -- free as in beer, not speech. It's actually quite a good track editor once I figured out how to use it -- the documentation is sketchy and no one who writes about it tells you the important parts, which were, for me:

Click on "My Places" in the sidebar before starting, assuming you'll want to keep these tracks around.

Right-click on My Places and choose Add->Folder if you're going to be creating more than one path. That way you can have a single KML file (Google Earth creates KML/KMZ, not GPX) with all your tracks together.

Move and zoom the map to where you can see the starting point for your path.

Click the "Add Path" button in the toolbar. This brings up a dialog where you can name the path and choose a color that will stand out against the map. Do not hit Return after typing the name -- that will immediately dismiss the dialog and take you out of path editing mode, leaving you with an empty named object in your sidebar. If you forget, like I kept doing, you'll have to right-click it and choose Properties to get back into editing mode.

Iconify, shade or do whatever your window manager allows to get that large, intrusive dialog out of the way of the map you're trying to edit. Shade worked well for me in Openbox.

Click on the starting point for your path. If you forgot to move the map so that this point is visible, you're out of luck: there's no way I've found to move the map at this point. (You might expect something like dragging with the middle mouse button, but you'd be wrong.) Do not in any circumstances be tempted to drag with the left button to move the map: this will draw lots of path points.

If you added points you don't want -- for instance, if you dragged on the map trying to move it -- Ctrl-Z doesn't undo, and there's no Undo in the menus, but Delete removes previous points. Whew.

Once you've started adding points, you can move the map using the arrow keys on your keyboard. And you can always zoom with the mousewheel.

When you finish one path, click OK in its properties dialog to end it.

Save periodically: click on the folder you created in My Places and choose Save Place As... Google Earth is a lot less crashy than Viking, but I have seen crashes.

When you're done for the day, be sure to File->Save->Save My Places. Google Earth apparently doesn't do this automatically; I was forever being confused why it didn't remember things I had done, and why every time I started it it would give me syntax errors on My Places saying it was about to correct the problem, then the next time I'd get the exact same error. Save My Places finally fixed that, so I guess it's something we're expected to do now and then in Google Earth.

Once I'd learned those tricks, the map-making went fairly quickly. I had intended only to trace a few trails then stop for the night, but when I realized I was more than halfway through I decided to push through, and ended up with a nice set of KML tracks which I converted to GPX and loaded onto my phone. Now I'm ready to explore.

August 16, 2016

Design Team Fedora Activity Day (FAD) Event Report

Fedora Design Team Logo

design team fad attendees portrait

From left to right: Mo Duffy, Marie Nordin, Masha Leonova, Chris Roberts, Radhika Kolathumani, Sirko Kemter (photo credit: Sirko Kemter)

Two weekends ago now, we had a 2-day Fedora Activity Day (heh, a 2-day day) for the Fedora Design Team. We had three main goals for this FAD, although one of them we didn’t cover (:-() :

  • Hold a one-day badges hackfest – the full event report is available for this event – we have wanted to do an outreach activity for some time so this was a great start.
  • Work out design team logistics – some of our members have changed location causing some meeting time issues despite a few different attempts to work around them. We had a few other issues to tackle too (list to come later in this post.) We were able to work through all points and come up with solutions except for one (we ran out of time.)
  • Usability test / brainstorm on the Design Team Hub on Fedora Hubs – so the plan was that the Design Team Hub would be nearly ready for the Flock demo the next week, but this wasn’t exactly the case so we couldn’t test it. With all of the last-minute prep for the workshop event, we didn’t have any time to have much discussion on hubs, either. We did, however, discuss some related hub needs in going through our own workflow in our team logistics discussion, so we did hit on this briefly.

So I’m going to cover the topics discussed aside from the workshop (which already has a full event report), but first I want to talk a little bit about the logistics of planning a FAD and how that worked out first since I totally nerded out on that aspect and learned a lot I want to share. Then, I’ll talk about our design team discussion, the conclusions we reached, and the loose ends we need to tie up still.


I had already planned an earlier Design Team FAD for January 2015, so I wasn’t totally new to the process. There were definitely challenges though.


First, we requested funding from the Fedora Council in late March. We found out 6 weeks later (early May, a little less than 3 months before the event) that we had funding approval, although the details about how that would work weren’t solidified until less than 4 weeks before the event.

Happily, I assumed it’d be approved and filed a request to use the Red Hat Westford facility for the event. There were two types of tickets I had to file for this – a GWS Special Event Request and a GWS Meeting Support Request. The special event request was the first one – I filed that on June 1 (2 months ahead) and it was approved June 21 (took about 3 weeks.) Then, on 7/25 the week before the event, I filed the meeting support request to have the room arranged in classroom style as well as open up the wall between the two medium-sized conference rooms so we had one big room for the community event. I also set up a meeting with the A/V support guy, Malcolm, to get a quick run-through of how to get that working. It was good I went ahead and filed the initial request since it took 3 weeks to go through.

The reason it took a while to work out the details on the budget was because we scheduled the event for right before Flock, which meant coordinating / sharing budgets. We did this both to save money and also to make sure we could discuss design-team related Flock stuff before heading to Flock. While this saved some money ultimately, IMHO the complications weren’t worth it:

  • We had to wait for the Flock talk proposals to be reviewed and processed before we knew which FAD attendees would also be funded for Flock, which delayed things.
  • Since things were delayed from that, we ended up missing out on some great flight pricing, which meant Ryan Lerch wasn’t able to come 🙁
  • To be able to afford the attendees we had with less than 4 weeks to go, we had to do this weird flight nesting trick jzb figured out. Basically, we booked home<=>BOS round trip tickets, then BOS<=>KRK round trip tickets. This meant Sirko had to fly to Boston after Flock before he could head home to PNH, but it saved a *ton* of money.
fad budget spreadsheet screenshot

behold, our budget

Another complication: we maxed out my corporate card limit before everything was booked. 🙂 I now have a credit increase, so hopefully next event this won’t happen!

The biggest positive budget-wise for this event was the venue cost – free. 🙂 Red Hat Westford kindly hosted us.

I filed the expense reports for the event this past week, and although the entire event was under budget, we had some unanticipated costs as well as a small overage in food budget:

  • Our original food budget was $660. We spent $685.28. We were $25.28 over. (Pretty good IMHO. I used an online pizza calculator to figure out budget for the community event and was overly generous in how much pizza people would consume. 🙂 )
  • We spent $185.83 in unanticipated costs. This included tolls (it costs $3.50 to leave Logan Airport), parking fees, gas, and hotel taxes ($90 in hotel taxes!)

Lessons Learned:

  • Sharing budget with other events slows your timeline down – proceed with caution!
  • Co-location with another event is a better way to share costs logistically.
  • Pizza calculators are a good tool for figuring out food budget. 🙂
  • Budget in a tank of gas if you’ve got a rental.
  • Figure out what tolls you’ll encounter. Oh, and PAY CASH; in the US, EzPass with a rental car is a ripoff.
  • Ask the hotel for price estimates including taxes/fees.

Transportation

I rented a minivan to get folks between Westford and the airport as well as between the hotel and the office. I carpool with my husband to work, so I picked it up near the Red Hat Westford office and set up the booking so I was able to leave it at Logan Airport after the last airport run.

Our chariot. I cropped him out of the portrait. Sorry, Toyota Sienna! It has nice pickup. I still am never buying a minivan ever, even if I have more kids. Never minivan, never!

With international flights, folks coming in on different nights, and the fact that I actually live much closer to the airport than to the hotel up in Westford (they’re an hour apart), I was really worn down by the time the FAD started: for 3 nights in a row leading up to the FAD I wasn’t getting home until midnight at the earliest, and I had logged many hours driving, particularly in brutal Boston rush hour traffic. Dropoffs were not as bad, as everybody left on the same day and there were only 2 airport trips then. Still – not getting home before my kids went to bed and my lack of sleep were a definite strain on my family.

So we had a free venue, but at a cost. For future FAD event planners, I would recommend either trying to get flights coming in on the same day as much as possible and/or sharing the load of airport pickups. Even better would be to hold the event closer to the airport, but this wasn’t an option for us because of the cost that would entail and the fact we have such a geographically-distributed team.

The transportation situation – those time estimates aren’t rush hour yet!

One thing that went very well that is common sense but bears repeating anyway – if you’re picking folks up from the airport, get their phone #’s ahead of time. Having folks’ phone numbers made pickup logistics waaaaay easier. If you have international numbers, look up how to dial them ahead of time. 🙂

Lessons Learned:

  • Try hard to cluster flights when possible to make for fewer pickups if the distance between airport / venue is great.
  • If possible, share responsibility for driving with someone to spread the load.
  • Closer to the airport logistically means spending less time in a car and fewer road trips, leaving more time for hacking.
  • Don’t burn yourself out before the event even starts. 🙂
  • Collect the phone numbers of everyone you’re picking up, or provide them some way of contacting you just in case you can’t find each other.
We’re dispersed… (original list of attendees’ locations or origin)

Food

This one went pretty smoothly. Westford has a lot of restaurants; actually, we have a lot more restaurants in Westford with vegetarian options than we did less than 2 years ago at the last Design Team FAD.

For the community event, the invite mentioned that we’d be providing pizzas. We had some special dietary requests from that, so I looked up pizza places that could accommodate them, would deliver, and had good ratings. There were two that met the criteria so I went with the one that had the best ratings.

Since the Fedora design team FAD participants were leading / teaching the session, I went over the menu with them the day before the community event, took their orders for non-pizza sandwiches/salads, and called the order in right then and there. (I worried placing the order too far in advance would mean it’d get lost in the shuffle. Lesson learned from the 2015 FAD where Panera forgot our order!) Delivery was a must, because of the ease of not having to go and pick it up.

For snacks, we stopped by a local supermarket either before or after lunch on the first day and grabbed whatever appealed to us. Total bill: $30, and we had tons of drinks and yummy snacks (including fresh blueberries) that tided us over the whole weekend and were gone by the end.

We were pretty casual with other meals. Folks at the hotel had breakfast at the hotel, which meant fewer receipts for me to track. We just drove to places close by for lunch and dinner, and being a local + vegetarian meant we had options for everybody. I agonized way too much about lunch and dinner last FAD (although there were fewer options then.) Keeping it casual worked well this time; one night we tried to have dinner at a local Indian place and found out they had recently been evicted! (Luckily, there was a good Indian place right down the road.)

Lessons Learned:

  • For large orders, call in the day before and not so far in advance that the restaurant forgets your order.
  • Supermarkets are a cheap way to get a snack supply. Making it a group run ensures everyone has something they can enjoy.
  • Having a local with dietary restrictions can help make sure food options are available for everyone.

Okay, enough for logistics nerdery. Let’s move on to the meat here!

Design Team Planning

We spent most of the first day on Fedora Design team planning with a bit of logistics work for the workshop the following day. First, we started by opening up an Inkscape session up on the projector and calling out the stuff we wanted to work on. It ended up like this:

Screenshot of FAD brainstorming session from Inkscape

But let’s break it down because I suspect you had to be there to get this. Our high-level list of things to discuss broke down like this:

Discussion Topics

  • Newcomers
    – how can we better welcome newcomers to the team?
  • Pagure migration
    – Fedora Trac is going to be sunset in favor of Pagure. How will we manage this transition?
  • Meeting times
    – we’ve been struggling to find a meeting time that works for everyone because we are so dispersed. What to do?
  • Status of our ticket queue
    – namely, our ticket system doesn’t have enough tickets for newbies to take!
  • Badges
    – conversely, we have SO MANY badge tickets needing artwork. How to manage?
  • Distro-related design
    – we need to create release artwork every release, but there’s no tickets for it so we end up forgetting about it. What to do?
  • Commops Thread
    – this point refers to Justin’s design-team list post about ambassadors working with the design team – how can we better work with ambassadors to get nice swag out without compromising the Fedora brand?

Let’s dive into each one.

Newcomers

This is the only topic I don’t think we fully explored. We did have some ideas here though:

  • Fedora Hubs will definitely help provide a single landing page for newcomers to see what we’re working on in one place to get a feel for the projects we have going on – right now our work is scattered. Having a badge mission for joining the design team should make for a better onboarding experience – we need to work out what badges would be on that path though. One of the pain points we talked about was how incoming newbies go straight to design team members instead of looking at the ticket queue, which makes the process more manual and thus slower. We’re hoping Hubs can make it more self-service.
  • We had the idea to have something like whatcanidoforfedora.org, but specifically for the design team. One of the things we talked about is having it serve up tickets tagged with a ‘newbie’ tag from both the design-team and badges ticket systems, and have the tickets displayed by category. (E.g., are you interested in UX? Here’s a UX ticket.) The tricky part – our data wouldn’t be static as whatcanidoforfedora.org’s is – we wouldn’t want to present people with a ticket that was already assigned, for example. We’d only want to present tickets that were open and unassigned. Chris did quite a bit of investigation into this and seems to think it might be possible to modify asknot-ng to support this.
  • A Fedora Hubs widget that integrated with team-specific asknot instances was a natural idea that came out of this.
  • We do regular ticket triage during meetings. We decided as part of that effort, we should tag tickets with a difficulty level so it’s easier to find tickets for newbies, and maybe even try to have regular contributors avoid the easy ones to leave them open for newbies. We had some discussion about ticket difficulty level scales that we didn’t get to finish – at one point we were thinking:
    • Easy (1 point) (e.g., a simple text-replacement badge.)
    • Moderate (3 points) (e.g., a fresh badge concept with new illustration work.)
    • Difficult / Complex (10 points) (e.g., a minor UX project or a full badge series of 4-5 badges with original artwork.)

    Or something like this, and have a required number of points. This is a discussion we really need to finish.

  • Membership aspects we talked about – what level of work do we want to require for team membership? Once a member, how much work do we want to require (if any) to stay “current?” How long should a membership be inactive before we retire it? (Not to take anything away from someone – but it’s handy to have a list of active members and a handle on how many active folks there are to try to delegate tasks and plan things like this FAD or meetups at Flock.) No answers, but a lot of hard questions. This came up naturally thinking about membership from the beginning to the end.
  • We talked about potentially clearing inactive accounts out of the design-team group and doing this regularly. (By inactive, we mean FAS account has not been logged into from any Fedora service for ~1 year.)
  • Have a formal mentor process, so as folks sign up to join the team, they are assigned a mentor, similar to the ambassador process. Right now, we’re a bit all over the place. It’d be nice for incoming folks to have one person to contact (and this has worked well in the past, e.g., Mo mentoring interns, and Marie mentoring new badgers.)

Pagure migration

We talked about what features we really needed to be able to migrate:

  • The ability to export the data, since we use our trac tickets for design asset storage. We found out this is being worked on, so this concern is somewhat allayed.
  • The ability to generate reports for ticket review in meetings. (We rely on the custom reports Chris and Paul Frields created for us at the last FAD.) We talked through this and decided we wanted a few things:
    • We’d like to be able to do an “anti-tag” in pagure. So we’d want to view a list of tickets that did not have the “triage” tag on them, so we could go through them and triage them, and add a ‘triage’ tag as we completed triage. That would help us keep track of what new tickets needed to be assessed and which had already been evaluated.
    • We’d like some time-based automation of tag application, but don’t know how that would work. For example, right now if a reporter hasn’t responded for 4 weeks, we classify that ticket as “stalled.” So we’d want tickets where the reporter hasn’t responded in 4 weeks to be marked as “stalled.” Similarly, tickets that haven’t had activity for 2 weeks or more are considered “aging”, so we’d like an “aging” tag applied to them. So on and so forth.
    • We need attachment support for tickets – we discovered this was being worked on too. Currently pagure supports PNG image attachments but we have a wider range of asset types we need to attach – PDFs, Scribus SLAs, SVGs, etc. We tested these out in pagure and they didn’t work.

We agreed we need to follow up with pingou on our needs and our ideas here to see if any of these RFEs (or other solutions) could be worked out in Pagure. We were pretty excited that work was already happening on some of the items we thought would help meet our needs in being able to migrate over.

We don’t have enough tickets! (AKA we are too awesome)

We tend to grab tickets and finish them (or at least hold on to them) pretty quickly on the design team these days. This makes it harder for newbies to find things to work on to meet our membership requirement. We talked about a couple of things here, in addition to related topics already covered in the newbie discussion summary:

  • We need to be stricter about removing assignees from tickets with inactivity. If we’ve pinged the ticket owner twice (which should happen in at least a 4 week period of inactivity from the assignee) and had no response, we should unapologetically reopen the ticket for others to take. No hard feelings! Would be even better if we could automate this….
  • We should fill out the ticket queue with our regular release tasks. Which leads to another topic…

Distro-related design (Release Artwork)

Our meetings are very ticket-driven, so we don’t end up covering release artwork during them. Which leads to a scramble… we’ve been getting it done, but it’d be nice for it to involve less stress!

Ideally, we’d like some kind of solution that would automatically create tickets in our system for each work item per release once a new release cycle begins… but we don’t want to create a new system for trac since we’ll be migrating to pagure anyway. So we’ll create these tickets manually now, and hope to automate this once we’ve migrated to pagure.

We also reviewed our release deliverables and talked through each. A to-do item that came up here: We should talk to Jan Kurik and have him remove the splash tasks (we don’t create those splash screens anymore) and add social media banner tasks (we’ve started getting requests for these.) We should also drop CD, DVD, and DVD for multi, and DVD for workstation (transcribing this now I wonder if it’s right.) We also should talk to bproffitt about which social media Fedora users the most and what kind of banners we should create for those accounts for each release. So in summary: we need to drop some unnecessary items from the release schedule that we don’t create anymore, and we should do more research about social media banners and have them added to the schedule.

Another thing I forgot when I initially posted this – we need some kind of entropy / inspiration to keep our default wallpapers going. For the past few releases, we’ve gotten a lot of positive feedback and very few complaints, but we need more inspiration. An idea we came up with was to have a design-team internal ‘theme scheme’ where we go through the letters of the alphabet and draw some inspiration from an innovator related to that letter. We haven’t picked one for F25 yet and need to soon!

Finally, we talked about wallpapers. We’d like for the Fedora supplemental wallpapers to be installed by default – they tend to be popular but many users also don’t know they are there. We thought a good solution might be to propose an internship (maybe Outreachy, maybe GSoC?) to revive an old desktop team idea of wallpaper channels, and we could configure the Fedora supplementals to be part of the channel by default and maybe Nuancier could serve them up.

Badges

We never seem to have time to talk through the badges tickets during our meetings, and there are an awful lot of them. We talked about starting to hold a monthly badge meeting to see if this will address it, with the same kind of ticket triage approach we use for the main design team meetings. Overall, Marie and Maria have been doing a great job mentoring baby badgers!

Commops Thread

We also covered Justin’s design-team list post about ambassadors working with the design team, particularly about swag as that tends to be a hot-button issue. For reasons inexplicable to me except for perhaps that I am a spaz, I stopped taking notes in Inkscape and started using the whiteboard on this one:

photo of whiteboard (contents described below)

Swag discussion whiteboard (with wifi password scrubbed 🙂 )

We had a few issues we were looking to address here:

  • Sometimes swag is produced too cheaply and doesn’t come out correctly. For example, recently Fedora DVDs were produced with sleeves where Fedora blue came out… black. (For visuals of some good examples compared to bad examples with these sorts of mistakes, check this out.)
  • Sometimes ambassadors don’t understand which types of files to send to printers – they grab a small size bitmap off of the wiki without asking for the print-ready version and things come out pixelated or distorted.
  • Sometimes files are used that don’t have a layer for die cutting – which results in sticker sheets with no cuts that you have to manually cut out with scissors (a waste!)
  • Sometimes files are sent to the printer with no bleeds – and the printer ends up going into the file and manipulating it, sometimes with disastrous results. If a design team member had been involved, they would have known to set the bleeds before sending to the printer.
  • Generally, printers sometimes have no clue, and without a designer working with them they make guesses that are oftentimes wrong and result in poor output.
  • Different regions have different price points and quality per type of item. For example, DVD production in Cambodia is very, very expensive – but print and embroidery items are high-quality and cheap.

Overall, we had concerns about money getting wasted on swag when – with a little coordination – we could produce higher-quality products and save money.

We brainstormed some ideas that we thought might help:

  • Swag quality oversight – Goods produced too cheaply hurt our brand. Could we come up with an approved vendor list, so we have some assurances of a base level of quality? This can be an open process, so we can add additional vendors at any time, but we’ll need some samples of work before they can be approved, and keep logs of our experience with them.
  • Swag design oversight – Ambassadors enjoy their autonomy. We recognize that’s important, but at a certain point sometimes overenthusiastic folks without design knowledge can end up spending a lot of money on items that don’t reflect our brand too well. We thought about setting some kind of cap – if you’re spending more than say $100 on swag, you need design team signoff – a designer will work with you to produce print-ready files and talk to the vendor to make sure everything comes out with a base quality level.
  • Control regional differences – Could we suggest one base swag producer per ambassador region, and indicate what types of products we use them for by default? Per product, we should have a base quality level requirement – e.g., DVDs cannot be burnt – they must be pressed.
Okay, I hope this is a fair summary of the discussion. I feel like we could have an entire FAD that focused just on swag. I think we had a lot of ideas here, and it could use more discussion too.

    Meeting Times

    We talked about meeting times. There is no way to get a meeting time that works for everybody, so we decided to split into North America / EMEA / LATAM, and APAC regions. Sirko, Ryan Lerch, and Yogi will lead the APAC meeting (exact time yet to be determined), and the North America / LATAM / EMEA meeting will keep the traditional design team time – Thursdays at 10 AM ET. Each region will meet on a rotating basis, so one week it’ll be region #1, the next region #2. Each region will meet at least 2x a month then.

    How do we stay coordinated? We came up with a cool idea – the first item of each meeting will be to review the meetbot logs from the other region’s last meeting. That way, we’ll be able to keep up with what the other region is doing, and any questions/concerns we have, they’ll see when they review our minutes the next week. We haven’t had a chance to test this out yet, but I’m curious to see how it works in practice!


    Chris’ flight left on Sunday morning, but everybody else had flights over to Poland which left in the evening, so before we went to the airport, we spent some time exploring Boston. First we went to the Isabella Stewart Gardner Museum, as it was a rainy day. (We’d wanted to do a walking tour.) We had lunch at Boloco, a cool Boston burrito chain, then the sun decided to come out so we found a parking spot by Long Wharf and I gave everybody a walking tour of Quincy Market and the North End. Then we headed to the airport and said our goodbyes. 🙂

    From left to right: Mo, Masha, Marie, Radhika

    What’s Next?

    There are a lot of little action items embedded here. We covered a lot of ground, but we have a lot more work to do! OK, it’s taken me two weeks to get to this point and I don’t want this blog post delayed anymore, so I’m just going for it and posting now. 🙂 Enjoy!

    August 14, 2016

    Translating track files between mapping formats

    I use map tracks quite a bit. On my Android phone, I use OsmAnd, an excellent open-source mapping tool that can download map data generated from free OpenStreetMap, then display the maps offline, so I can use them in places where there's no cellphone signal (like nearly any hiking trail). At my computer, I never found a decent open-source mapping program, so I wrote my own, PyTopo, which downloads tiles from OpenStreetMap.

    In OsmAnd, I record tracks from all my hikes, upload the GPX files, and view them in PyTopo. But it's nice to go the other way, too, and take tracks or waypoints from other people or from the web and view them in my own mapping programs, or use them to find my way when hiking.

    Translating between KML, KMZ and GPX

    Both OsmAnd and PyTopo can show Garmin track files in the GPX format. PyTopo can also show KML and KMZ files, Google's more complicated mapping format, but OsmAnd can't. A lot of track files are distributed in Google formats, and I find I have to translate them fairly often -- for instance, lists of trails or lists of waypoints on a new hike I plan to do may be distributed as KML or KMZ.

    The command-line gpsbabel program does a fine job translating KML to GPX. But I find its syntax hard to remember, so I wrote a shell alias:

    kml2gpx () {
            # zsh modifiers: :t strips the directory part, :r strips the extension
            gpsbabel -i kml -f $1 -o gpx -F $1:t:r.gpx
    }

    so I can just type kml2gpx file.kml and it will create a file.gpx for me.

    More often, people distribute KMZ files, because they're smaller. They're actually zip archives wrapping a KML file, but since they typically contain just a single compressed member, gunzip can unpack them, so the shell alias is only a little bit longer:

    kmz2gpx () {
            # unpack the KML member into /tmp (any scratch location works)
            kmlfile=/tmp/$1:t:r.kml
            gunzip -c $1 > $kmlfile
            gpsbabel -i kml -f $kmlfile -o gpx -F $kmlfile:t:r.gpx
    }

    Of course, if you ever have a need to go from GPX to KML, you can reverse the gpsbabel arguments appropriately; and if you need KMZ, run gzip afterward.
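
    For example, a gpx2kml counterpart might look something like this (same zsh modifier trick as above):

    gpx2kml () {
            gpsbabel -i gpx -f $1 -o kml -F $1:t:r.kml
    }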

    UTM coordinates

    A couple of people I know use a different format for waypoints, called UTM, which stands for Universal Transverse Mercator, and there are some secret lists of interesting local features passed around in that format.

    It's a strange system. Instead of using latitude and longitude like most world mapping coordinate systems, UTM breaks the world into 60 longitudinal zones. UTM coordinates don't usually specify their zone (at least, none of the ones I've been given ever have), so if someone gives you a UTM coordinate, you need to know what zone you're in before you can translate it to a latitude and longitude. Then a pair of UTM coordinates specifies easting and northing, which tell you where you are inside the zone. Wikipedia has a map of UTM zones.

    Note that UTM isn't a file format: it's just a way of specifying two (really three, if you count the zone) coordinates. So if you're given a list of UTM coordinate pairs, gpsbabel doesn't have a ready-made way to translate them into a GPX file. Fortunately, it allows a "universal CSV" (comma separated values) format, where the first line specifies which field goes where. So you can define a UTM UniCSV format that looks like this:

    name,utm_z,utm_e,utm_n,comment
    Trailhead,13,0395145,3966291,Trailhead on Buckman Rd
    Sierra Club TH,13,0396210,3966597,Alternate trailhead in the arroyo

    then translate it like this:

    gpsbabel -i unicsv -f filename.csv -o gpx -F filename.gpx

    I (and all the UTM coordinates I've had to deal with) are in zone 13, so that's what I used for that example and I hardwired that into my alias, but if you're near a zone boundary, you'll need to figure out which zone to use for each coordinate.

    I also know someone who tends to send me single UTM coordinate pairs, because that's what she has her Garmin configured to show her. For instance, "We'll be using the trailhead at 0395145 3966291". This happened often enough, and I got tired of looking up the UTM UniCSV format every time, that I made another shell function just for that.

    utm2gpx () {
            unicsv=`mktemp /tmp/point-XXXXX.csv`
            # derive a matching output name from the temp file
            gpxfile=$unicsv:r.gpx
            echo "name,utm_z,utm_e,utm_n,comment" >> $unicsv
            printf "Point,13,%s,%s,point\n" $1 $2 >> $unicsv
            gpsbabel -i unicsv -f $unicsv -o gpx -F $gpxfile
            echo Created $gpxfile
    }
    So I can say utm2gpx 0395145 3966291, pasting the two coordinates from her email, and get a nice GPX file that I can push to my phone.

    What if all you have is a printed map, or a scan of an old map from the pre-digital days? That's part 2, which I'll post in a few days.

    August 11, 2016

    LVFS has a new CDN

    Now that we’re hitting cough Cough COUGH1 million users a month the LVFS is getting slower and slower. It’s really just a flask app that’s handling the admin panel and then apache is serving a set of small files to a lot of people. As switching to a HA server is taking longer than I hoped2, I’m in the process of switching to using S3 as a CDN to take the load off. I’ve pushed a commit that changes the default in the fwupd.conf file. If you want to help test this, you can do a substitution of secure-lvfs.rhcloud.com to s3.amazonaws.com/lvfsbucket in /etc/fwupd.conf although the old CDN will be running for a long time indeed for compatibility.
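
    If you’d rather not edit the file by hand, a sed one-liner along these lines should do the substitution (keep the backup in case you want to switch back):

    sudo sed -i.bak 's,secure-lvfs.rhcloud.com,s3.amazonaws.com/lvfsbucket,' /etc/fwupd.conf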

    1. Various vendors have sworn me to secrecy
    2. I can’t believe GPGME and python-gpg is the best we have…

    Flatpak cross-compilation support

    A couple of weeks ago, I hinted at a presentation that I wanted to do during this year's GUADEC, as a Lightning talk.

    Unfortunately, I didn't get a chance to finish the work that I set out to do, encountering a couple of bugs that set me back. Hopefully this will get resolved post-GUADEC, so you can expect some announcements later on in the year.

    At least one of the tasks I set to do worked out, and was promptly obsoleted by a nicer solution. Let's dive in.

    How to compile for a different architecture

    There are four possible solutions to compile programs for a different architecture:

    • Native compilation: get a machine of that architecture, install your development packages, and compile. This is nice when you have fast machines with plenty of RAM to compile on, usually developer boards, not so good when you target low-power devices.
    • Cross-compilation: install a version of GCC and friends that runs on your machine's architecture, but produces binaries for your target one. This is usually fast, but you won't be able to run the binaries created, so might end up with some data created from a different set of options, and won't be able to run the generated test suite.
    • Virtual Machine: you'd run a virtual machine for the target architecture, install an OS, and build everything. This is slower than cross-compilation, but avoids the problems you'd see in cross-compilation.

    The final option is one that's used more and more, mixing the last 2 solutions: the QEmu user-space emulator.

    Using the QEMU user-space emulator

    If you want to run just the one command, you'd do something like:

    qemu-arm-static myarmbinary

    Easy enough, but hardly something you want to try when compiling a whole application, with library dependencies. This is where binfmt support in Linux comes into play. Register the ELF format for your target with that user-space emulator, and you can run myarmbinary without any commands before it.
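
    To see what is registered on your system, something along these lines works (the qemu-arm entry name depends on how your distribution's qemu packaging registered it, so it may differ):

    # binfmt_misc is usually already mounted here (systemd does it for you)
    ls /proc/sys/fs/binfmt_misc/
    # inspect an entry to see which interpreter the kernel will invoke
    cat /proc/sys/fs/binfmt_misc/qemu-arm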

    One thing to note, though, is that this won't work as easily if the qemu user-space emulator and the target executable are built as dynamic executables: QEmu will need to find the libraries for your architecture, usually x86-64, to launch itself, and the emulated binary will also need to find its libraries.

    To solve that first problem, there are QEmu static binaries available in a number of distributions (Fedora support is coming). For the second one, the easiest would be if we didn't have to mix native and target libraries on the filesystem, in a chroot, or container for example. Hmm, container you say.

    Running QEmu user-space emulator in a container

    We have our statically compiled QEmu, a filesystem with our target binaries, and we've switched into that root filesystem. Well, you try to run anything, and you get a bunch of errors. The problem is that there is a single binfmt configuration for the kernel, whether it's the normal OS, or inside a container or chroot.

    The Flatpak hack

    This commit for Flatpak works around the problem. The binary for the emulator needs to have the right path, so it can be found within the chroot'ed environment, and it will need to be copied there so it is accessible too, which is what this patch will do for you.

    Follow the instructions in the commit, and test it out with this Flatpak script for GNU Hello.

    $ TARGET=arm ./build.sh
    $ ls org.gnu.hello.arm.xdgapp
    918k org.gnu.hello.arm.xdgapp

    Ready to install on your device!

    The proper way

    The above solution was built before it looked like the "proper way" was going to find its way in the upstream kernel. This should hopefully land in the upcoming 4.8 kernel.

    Instead of launching a separate binary for each non-native invocation, this patchset allows the kernel to keep the binary opened, so it doesn't need to be copied to the container.

    In short

    With the work being done on Fedora's static QEmu user-space emulators, and the kernel feature that will land, we should be able to have a nice tickbox in Builder to build for any of the targets supported by QEmu.

    Get cross-compiling!

    Adding suggestions to AppData files

    An oft-requested feature is to show suggestions for other apps to install. This is useful if the apps are part of a larger suite of applications, or if the apps are in some way complementary to each other. A good example might be that we want to recommend libreoffice-writer when the user is looking at the details of (or perhaps has just installed) libreoffice-calc.

    At the moment we’ve not got any UI using this kind of data, as, simply put, there isn’t much data to use. Using the ODRS I can kinda correlate things that the same people look at (i.e. user A got review for B and C, so B+C are possibly related) but it’s not as good as actual upstream information.

    Those familiar with my history will be unsurprised: AppData to the rescue! By adding lines like this in the foo.appdata.xml file you can provide some information to the software center:

    <suggests>
      <id>libreoffice-calc.desktop</id>
    </suggests>

    You don’t have to specify the parent app (e.g. libreoffice-writer.desktop in this case), and <id> is the only tag that’s accepted inside <suggests>. If the suggested ID isn’t found in the AppStream metadata then it’s just ignored, so it’s quite safe to add things that might not be in stable distros.

    If enough upstreams do this then we can look at what UI makes sense. If you make use of this feature, please let me know and I can make sure we discuss the use-case in the design discussions.

    August 10, 2016

    Double Rainbow, with Hummingbirds

    A couple of days ago we had a spectacular afternoon double rainbow. I was out planting grama grass seeds, hoping to take advantage of a rainy week, but I cut the planting short to run up and get my camera.

    [Double rainbow]

    [Hummingbirds and rainbow] And then after shooting rainbow shots with the fisheye lens, it occurred to me that I could switch to the zoom and take some hummingbird shots with the rainbow in the background. How often do you get a chance to do that? (Not to mention a great excuse not to go back to planting grass seeds.)

    (Actually, here, it isn't all that uncommon since we get a lot of afternoon rainbows. But it's the first time I thought of trying it.)

    Focus is always chancy when you're standing next to the feeder, waiting for birds to fly by and shooting whatever you can. Next time maybe I'll have time to set up a tripod and remote shutter release. But I was pretty happy with what I got.

    Photos: Double rainbow, with hummingbirds.

    August 09, 2016

    compressing dynamic range with exposure fusion

    modern sensors capture an astonishing dynamic range, namely some sony sensors, or canons with magic lantern's dual iso feature.

    this is in a range where the image has to be processed carefully to display it in pleasing ways on a monitor, let alone the limited dynamic range of print media.

    example images

    use graduated density filter to brighten foreground


    graduated density filter

    using the graduated density iop works well in this case since the horizon here is more or less straight, so we can easily mask it out with a simple gradient in the graduated density module. now what if the objects can't be masked out so easily?

    more complex example

    this image needed to be substantially underexposed in order not to clip the interesting highlight detail in the clouds.

    original image, then extreme settings in the shadows and highlights iop (heavy fringing despite bilateral filter used for smoothing). also note how the shadow detail is still very dark. third one is tone mapped (drago) and fourth is default darktable processing with +6ev exposure.

    tone mapping also flattens a lot of detail, which is why this version already has some local contrast enhancement applied to it. this can quickly result in unnatural results. something similar applies to colour saturation (for reasons of good taste, no link to examples at this point..).

    the last image in the set is just a regular default base curve pushed by six stops using the exposure module. the green colours of the grass look much more natural than in any of the other approaches taken so far (including graduated density filters, these need some fiddling in the colour saturation..). unfortunately we lose a lot of detail in the highlights (to say the least).

    this can be observed for most images, here is another example (original, then pushed +6ev):



    exposure fusion

    this is precisely the motivation behind the great paper entitled Exposure Fusion: what if we develop the image a couple of times, each time exposing for a different feature (highlights, mid-tones, shadows), and then merge the results where they look best?
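
    very roughly, the merge in the paper is a per-pixel weighted average of the differently developed copies, blended over laplacian pyramids so the seams don't show. as a sketch (the symbols here are mine, not darktable's, and the weights follow the mertens et al. paper):

    \[
    R(x) = \sum_k \hat{w}_k(x)\, I_k(x), \qquad
    \hat{w}_k(x) = \frac{w_k(x)}{\sum_j w_j(x)}, \qquad
    w_k(x) = C_k(x)^{\omega_C}\, S_k(x)^{\omega_S}\, E_k(x)^{\omega_E}
    \]

    where the I_k are the developed exposures and C, S, E measure local contrast, saturation and well-exposedness.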

    this has been available in software for a while in enfuse, even with a gui called EnfuseGUI. we now have this feature in darktable, too.

    find the new fusion combo box in the darktable base curve module:


    options are to merge the image with itself two or three times. each extra copy of the image will be boosted by an additional three stops (+3ev and +6ev), then the base curve will be applied to it and the laplacian pyramids of the resulting images will be merged.


    this is a list of input images and the corresponding result of exposure fusion:

    image from beginning:


    note that the feature is currently merged to git master, but unreleased.


    Blog backlog, Post 4, Headset fixes for Dell machines

    At the bottom of the release notes for GNOME 3.20, you might have seen the line:
    If you plug in an audio device (such as a headset, headphones or microphone) and it cannot be identified, you will now be asked what kind of device it is. This addresses an issue that prevented headsets and microphones being used on many Dell computers.
    Before I start explaining what this does, as a picture is worth a thousand words:

    This selection dialogue is one you will get on some laptops and desktop machines when the hardware is not able to detect whether the plugged in device is headphones, a microphone, or a combination of both, probably because it doesn't have an impedance detection circuit to figure that out.

    This functionality was integrated into Unity's gnome-settings-daemon version a couple of years ago, written by David Henningsson.

    The code that existed for this functionality was completely independent, not using any of the facilities available in the media-keys plugin to handle volume keys, and it could probably have been split out as an external binary with very little effort.

    After a bit of to and fro, most of the sound backend functionality was merged into libgnome-volume-control, leaving just 2 entry points, one to signal that something was plugged into the jack, and another to select which type of device was plugged in, in response to the user selection. This means that the functionality should be easily implementable in other desktop environments that use libgnome-volume-control to interact with PulseAudio.

    Many thanks to David Henningsson for the original code, and his help integrating the functionality into GNOME, Bednet for providing hardware to test and maintain this functionality, and Allan, Florian and Rui for working on the UI notification part of the functionality, and wiring it all up after I abandoned them to go on holidays ;)

    August 07, 2016

    SIGGRAPH 2016 report

    Anaheim, 23 – 28 July 2016

    This year was the 25th anniversary of my SIGGRAPH membership (I have been a proud member since ’91)! It was also my 18th visit in a row to the annual convention (since ’99). We didn’t have a booth at the trade show this year though. Expenses are so high! Since 2002 we exhibited 7 times; we skipped years more often, but since 2011 we were there every year. The positive side of not exhibiting was that I finally had time and energy to have meetings and participate in other events.

    Friday 22 – Saturday 23: Toronto


    But first: an unexpected last minute change in the planning. Originally I was going to Anaheim to also meet with the owners of Tangent Animation about their (near 100% Blender) feature film studio. Instead they suggested it would be much more practical to rebook my flight and have a day stopover in Toronto to see the studio and have more time to meet.

    I spent two half days with them, and I was really blown away by the work they do there. I saw the opening 10 minutes of their current feature film (“Run Ozzy Run”). The film is nearly finished, currently being processed for grading and sound. The character designs are adorable, the story is engaging and funny, and they pulled off surprisingly good quality animation and visuals – especially knowing it’s still a low budget project made with all the constraints associated with it. And they used Blender! Very impressive how they managed to get quite massive scenes to work. They hired a good team of technical artists and developers to support them. Their Cycles coder is a former Mental-Ray engineer, who will become a frequent contributor to Cycles.

    I also had a sneak peek of the excellent concept art of the new feature that’s in development – more budget, and much more ambitious even. For that project they offered to invest substantially in Blender; we spent the 2nd day outlining a deal. In short, that is:

    • Tangent will sponsor two developers to work in Blender Institute on 2.8 targets (defined by us)
    • Tangent will sponsor one Cycles developer, either to work in Blender Institute or in Toronto.
    • All of these are full-time and decently paid positions, for at least 1 year, and can be effective in September.

    Sunday 24: SIGGRAPH Anaheim

    2 PM: Blender Birds of a Feather, community meeting

    As usual we start the meeting by giving everyone a short moment to say who they are and what they do with Blender (or want to see happen). This takes 25+ minutes! There were visitors from Boeing, BMW, Pixar, Autodesk, Microsoft, etc.

    The rest of the time I did my usual presentation (talk about who we are, what we did last year, and the plans for next year).

    You can download the pdf of the slides here.

    3:30 PM : Blender Birds of a Feather, Spotlight event

    Theory Animation’s David Andrade offered to organise this ‘open stage’ event, giving artists or developers 5 minutes of time to show the work they did with Blender. It was great to see this organised so well! There was a huge line-up, lasting 90 minutes even. Some highlights from my memory:

    • Theory Animation showed work they did for the famous TV show “Silicon Valley”. The hilarious “Pipey” animation is theirs.
    • Sean Kennedy is doing a lot of Blender vfx for tv series. Amazing work (can’t share here, sorry), and he gave a warm plea for more development attention for the Compositor in Blender.
    • Director Stephen Norrington (Blade, League of Extraordinary Gentlemen) is using Blender! He showed vfx work he did for a stop motion / puppet short film.
    • JT Nelson showed results of Martin Felke’s Blender Fracture Branch. Example.
    • Nimble Collective premiered their first “Animal Facts” short, The Chicken.

    Afterwards we went for drinks and food to one of the many bar/restaurants close by. (Well close, on the map it looked like 2 blocks, but in Anaheim these blocks were half a mile! Made the beer taste even better though :)

    Monday 25: the SIGGRAPH Animation Festival, Jury Award!

    Selfie with badge + ribbon

    Aside from all these interesting encounters you can have in LA (I met with people from Paramount Animation), the absolute highlight of Monday was picking up the Jury prize for Cosmos Laundromat. Still vividly remembering 25 years ago, struggling with the basics of CG, I never thought I’d be cheered on and applauded by 1000+ people in the Siggraph Electronic Theater!

    Clearly the award is not just mine, it’s for director Mathieu Auvray and writer Esther Wouda, the team of artists and developers who worked on the film, and most of all for everyone who contributed to Blender and to Blender Cloud in one way or another.

    Wait… but the surprises weren’t over that day. I sneaked away from the festival screening and went to AMD’s launch party. I was pleasantly surprised to watch corporate VP Roy Taylor spend quite some time talking about Blender, ending with “We love Blender, we love the Blender Community!” AMD is very serious about focusing on 3D creators online, to serve the creative CG communities, of which Blender users are one of the biggest now. If AMD could win back the hearts of Blender artists…

    Theory Animation guys!

    After the event I met with Roy Taylor; he confirmed the support they already give to Blender developer Mike Erwin (to upgrade OpenGL). Roy said AMD is committed to helping us in many more ways, so I asked for one more full-time Cycles coder. Deal! Support for a full-time developer on Cycles for 1 year, to finish the ‘OpenCL split kernel’ project, is being processed now. I’ll be busy hiring people in the coming period!

    Later in the evening I met with several Blender artists. I handed the award over to them to show my appreciation. Big fun :)

    Tuesday 26 – Wednesday 27, SIGGRAPH tradeshow and meetings

    Not having a booth was a blessing (at least for once!). I could freely move around and plan the days with meetings and time to attend the activities outside of the trade show as well. Here’s a summary of activities and highlights

    • Tradeshow impression
      This year’s show seemed a bit smaller than last year, but on both days it felt crowded in most places; the attendance was very good. The best highlights are still the presentations by artists showing their work at larger booths such as Nvidia or Foundry. The visit was also worth it for an original Vive experience. Google’s Tango was there, but the marketing team failed to impress demoing it – 3d scanning the booth failed completely all the time (don’t put tv screens on walls if you want to scan!).
    • Pixar USD launch lunch
      Pixar presented the official launch of the Universal Scene Description format, a set of formats with a software library to manage your entire pipeline. The  USD design is very inviting for Blender to align well with – we already share some of the core design decisions, but USD is quite more advanced. It will be interesting to see whether USD will be used for pipeline IO (file exchange) among applications as well.
    • Autodesk meeting
      Autodesk has appointed a director of open source strategy; he couldn’t attend but connected me with Marc Stevens and Chris Vienneau, executives in the M&E department. They also brought in Arnold’s creator Marcos Fajardo.
      Marcos expressed their interest in having Arnold support for Blender. We discussed the (legal, licensing) technicalities of this a bit more, but for as long as they stick to data transport between the programs (like PRman and VRay do now using Blender’s render API) there’s no issue. With Marc and Chris I had a lengthy discussion about Autodesk’s (lack of) commitment to open source and openly accessible production pipelines. They said that Autodesk is changing their strategy though, and they will show this by actively sharing sources or participating in open projects as well. I invited them to publish the FBX spec doc (needs to get blessings from the board, but they’ll try) and to work with Pixar on getting the character module for USD fleshed out (make it work for Maya + Max, in open license). The latter suggestion was met with quite some enthusiasm. That would make the whole FBX issue mostly go away.
    • Nvidia
      It was very cool to meet with Ross Cunniff, Technology Lead at NVIDIA. He is nice, down-to-earth and practical. With his connections it’ll be easier to get a regular seed of GTX cards to developers. I’ve asked for a handful of 1080ies right away! Nvidia will also actively work on getting Blender Cycles files in the official benchmarking suites.
    • Massive Software
      David Andrade (Theory Animation) set up a meeting with me and industry legend Stephen Regelous, founder of Massive Software and the genius behind the epic Lord of the Rings battle scenes. Stephen said that at Massive user meetings there’s an increasing demand for Blender support. He explained to me how they do it; basically everything’s low poly and usually gets rendered in 1 pass! The Massive API has a hook into the render engine to generate the geometry on the fly, to prevent huge file or caching bottlenecks. In order to get this to work for Blender Cycles, a similar hook should be written. They currently don’t have the engineers to do this, but they’d be happy to support someone for it.
    • Khronos
      I further attended the WebGL meeting (with demos by Blend4web team) and the Khronos party. Was big fun, a lot of Blender users and fans there! The Khronos initiative remains incredibly important – they are keeping the graphics standards open (like OpenGL, glTF) and make innovation available for everyone (WebGL and Vulkan).

    Friday 29, San Francisco and Bay Area

    Wednesday evening and Thursday I took my time driving the touristic route north to San Francisco. I wanted to meet some friends there (loyal Blender supporter David Jeske, director/layout artist Colin Levy, CG industry consultants Jon and Kathleen Peddie, Google engineer Keir Mierle) and visit two business contacts.

    • Nimble Collective
      Located in a lovely office in Mountain View (looks like it’s always sunny and pleasant there!) this startup is also heavily investing in Blender and using it for a couple of short film projects. I leave it to them to release the info on the films :) but it’s going to be amazingly good! I also had a demo of their platform, which is like a ‘virtual’ animation production workstation that you can use in a browser. The Blender demo on their platform felt very responsive, including fast Cycles renders.
      The visit ended with us participating in their “weekly”. Just like the Blender Institute weekly! An encouraging and enthusiastic gathering to celebrate results and work that’s been done.
    • Netflix
      The technical department from Netflix contacted us a while ago; they were looking for high quality HDR content to do streaming and other tests. We then sent them the OpenEXR files of Cosmos Laundromat, which is unclipped high resolution color. Netflix took it to a specialist HDR grading company and they showed me the result – M I N D blowing! Really awesome to see how the dynamics of Cycles renders (like the hard morning light) work on a screen that allows a dynamic ‘more than white’ display. Cosmos Laundromat is now on Netflix, as one of the first HDR films.
      We then discussed how Netflix could do more with our work. Obviously they’re happy to share the graded HDR film, but they’re especially interested in getting more content – especially in 4k. A proposal for sponsoring our work is being evaluated internally now.

    Sunday 31 July, Back home

    I was gone for 9 days, with 24 hours spent in airplanes. But it was worth it :) Jetlag usually kicks in then, took a week to resolve. In the coming weeks there’s a lot of work waiting, especially setting up all the projects around Blender 2.8. A new design/planning doc on 2.8 is first priority.

    Please feel invited to discuss the topics in our channels and talk to me in person in IRC about Blender 2.8 and Cycles development work. Or send me a mail with feedback. That’s ton at blender.org, as usual.

    Ton Roosendaal
    August 7, 2016

    August 06, 2016

    Adding a Back button in Python Webkit-GTK

    I have a little browser script in Python, called quickbrowse, based on Python-Webkit-GTK. I use it for things like quickly calling up an anonymous window with full javascript and cookies, for when I hit a page that doesn't work with Firefox and privacy blocking; and as a quick solution for calling up HTML conversions of doc and pdf email attachments.

    Python-webkit comes with a simple browser as an example -- on Debian it's installed in /usr/share/doc/python-webkit/examples/browser.py. But it's very minimal, and lacks important basic features like command-line arguments. One of those basic features I've been meaning to add is Back and Forward buttons.

    Should be easy, right? Of course webkit has a go_back() method, so I just have to add a button and call that, right? Ha. It turned out to be a lot more difficult than I expected, and although I found a fair number of pages asking about it, I didn't find many working examples. So here's how to do it.

    Add a toolbar button

    In the WebToolbar class (derived from gtk.Toolbar): In __init__(), after initializing the parent class and before creating the location text entry (assuming you want your buttons left of the location bar), create the two buttons:

            backButton = gtk.ToolButton(gtk.STOCK_GO_BACK)
            backButton.connect("clicked", self.back_cb)
            self.insert(backButton, -1)
            forwardButton = gtk.ToolButton(gtk.STOCK_GO_FORWARD)
            forwardButton.connect("clicked", self.forward_cb)
            self.insert(forwardButton, -1)

    Now create those callbacks you just referenced:

        def back_cb(self, w):
            self.emit("go-back-requested")
        def forward_cb(self, w):
            self.emit("go-forward-requested")

    That's right, you can't just call go_back on the web view, because GtkToolbar doesn't know anything about the window containing it. All it can do is pass signals up the chain.

    But wait -- it can't even pass signals unless you define them. There's a __gsignals__ object defined at the beginning of the class that needs all its signals spelled out. In this case, what you need is

           "go-back-requested": (gobject.SIGNAL_RUN_FIRST,
                                  gobject.TYPE_NONE, ()),
           "go-forward-requested": (gobject.SIGNAL_RUN_FIRST,
                                  gobject.TYPE_NONE, ()),
    Now these signals will bubble up to the window containing the toolbar.

    Handle the signals in the containing window

    So now you have to handle those signals in the window. In WebBrowserWindow (derived from gtk.Window), in __init__ after creating the toolbar:

            # content_tabs here stands for the ContentPane created earlier in __init__
            toolbar.connect("go-back-requested", self.go_back_requested_cb, content_tabs)
            toolbar.connect("go-forward-requested", self.go_forward_requested_cb, content_tabs)

    And then of course you have to define those callbacks:

    def go_back_requested_cb (self, widget, content_pane):
        # Oops! What goes here?
    def go_forward_requested_cb (self, widget, content_pane):
        # Oops! What goes here?

    But whoops! What do we put there? It turns out that WebBrowserWindow has no better idea than WebToolbar did of where its content is or how to tell it to go back or forward. What it does have is a ContentPane (derived from gtk.Notebook), which is basically just a container with no exposed methods that have anything to do with web browsing.

    Get the BrowserView for the current tab

    Fortunately we can fix that. In ContentPane, you can get the current page (meaning the current browser tab, in this case); and each page has a child, which turns out to be a BrowserView. So you can add this function to ContentPane to help other classes get the current BrowserView:

        def current_view(self):
            return self.get_nth_page(self.get_current_page()).get_child()

    And now, using that, we can define those callbacks in WebBrowserWindow:

    def go_back_requested_cb (self, widget, content_pane):
        # Tell the current tab's browser view to go back
        content_pane.current_view().go_back()

    def go_forward_requested_cb (self, widget, content_pane):
        # Tell the current tab's browser view to go forward
        content_pane.current_view().go_forward()

    Whew! That's a lot of steps for something I thought was going to be just adding two buttons and two callbacks.
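
    For reference, here is a minimal, self-contained sketch of how all the pieces fit together, using the same old PyGTK / pywebkitgtk API as the example browser. The class and attribute names here (BackForwardToolbar, content_tabs, and so on) are my own choices for the sketch, not necessarily the names used in quickbrowse or browser.py, and I use webkit.WebView directly where the example browser wraps it in a BrowserView class:

        #!/usr/bin/env python
        # Minimal back/forward browser sketch with PyGTK + pywebkitgtk.
        import gtk
        import gobject
        import webkit

        class BackForwardToolbar(gtk.Toolbar):
            # The toolbar knows nothing about the browser view; it only emits signals.
            __gsignals__ = {
                "go-back-requested": (gobject.SIGNAL_RUN_FIRST, gobject.TYPE_NONE, ()),
                "go-forward-requested": (gobject.SIGNAL_RUN_FIRST, gobject.TYPE_NONE, ()),
            }

            def __init__(self):
                gtk.Toolbar.__init__(self)
                backButton = gtk.ToolButton(gtk.STOCK_GO_BACK)
                backButton.connect("clicked", lambda w: self.emit("go-back-requested"))
                self.insert(backButton, -1)
                forwardButton = gtk.ToolButton(gtk.STOCK_GO_FORWARD)
                forwardButton.connect("clicked", lambda w: self.emit("go-forward-requested"))
                self.insert(forwardButton, -1)

        class ContentPane(gtk.Notebook):
            def new_tab(self, url):
                view = webkit.WebView()
                scroller = gtk.ScrolledWindow()
                scroller.add(view)
                self.append_page(scroller)
                view.open(url)

            def current_view(self):
                # Each page is a ScrolledWindow whose child is the WebView.
                return self.get_nth_page(self.get_current_page()).get_child()

        class BrowserWindow(gtk.Window):
            def __init__(self, url):
                gtk.Window.__init__(self)
                self.set_default_size(800, 600)
                self.connect("destroy", gtk.main_quit)

                self.content_tabs = ContentPane()
                toolbar = BackForwardToolbar()
                toolbar.connect("go-back-requested", self.go_back_requested_cb,
                                self.content_tabs)
                toolbar.connect("go-forward-requested", self.go_forward_requested_cb,
                                self.content_tabs)

                vbox = gtk.VBox()
                vbox.pack_start(toolbar, expand=False)
                vbox.pack_start(self.content_tabs)
                self.add(vbox)

                self.content_tabs.new_tab(url)
                self.show_all()

            def go_back_requested_cb(self, widget, content_pane):
                content_pane.current_view().go_back()

            def go_forward_requested_cb(self, widget, content_pane):
                content_pane.current_view().go_forward()

        if __name__ == "__main__":
            BrowserWindow("http://example.com/")
            gtk.main()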

    August 03, 2016

    The Fedora Design Team’s Inkscape/Badges Workshop!

    Fedora Design Team Logo

    This past weekend, the Fedora Design Team held an Inkscape and Fedora Badges workshop at Red Hat’s office in Westford, Massachusetts. (You can see our public announcement here.)

    Badges Workshop

    Why did the Fedora Design Team hold this event?

    At our January 2015 FAD, one of the major themes we wanted to pursue as a team was outreach: both to teach Fedora and the FLOSS creative tool set as a platform for would-be future designers, and to bring more designers into our team. We planned to do a badges workshop at some future point to try to achieve that goal, and this workshop (which was part of a longer Design FAD event I’ll detail in another post) was it. We collectively feel that designing artwork for badges is a great “gateway contribution” for Fedora contributors because:

    • The badges artwork standards and process are extremely well-documented.
    • The artwork for a badge is a small, atomic unit of contribution that does not take up too much of a contributor’s time to create.
    • Badges individually touch on varying areas of the Fedora project, so by making a single badge you could learn (in a rather gentle way) how a particular aspect of the Fedora project works (as a first step towards learning more about Fedora.)
    • The process of creating badge artwork and submitting it from start to finish is achievable during a one-day event, and being able to walk away from such an event having submitted your first open source contribution is pretty motivating!

    This is the first event of this kind the Fedora Design Team has held, and perhaps the first of its kind held by any Fedora group? We aimed for a general, local community audience rather than attaching this event to a larger technology-focused conference or release party. We explicitly wanted to bring folks not currently affiliated with Fedora or even the open source community into our world with this event.

    Preparing for the event

    Photo of event handouts

    There was a lot we had to do in order to prepare for this event. Here’s a rough breakdown:

    Marketing (AKA getting people to show up!)

    We wanted to reach out to folks in the general area of Red Hat’s Westford office. Originally, we had wanted to hold the event closer to Boston and partner with a university, but for various reasons we needed to have this event in the summer – a poor time for recruiting university students. Red Hat Westford graciously offered us space for free, but without something like a university community, we weren’t sure how to go about advertising the event to get people to sign up.

    Here’s what we ended up doing:

    • We created an event page on EventBrite (free to use for free events.) That gave us a bit of marketing exposure – we got 2 signups from EventBrite referrals. The site also helped us with event logistics (see next section for more on that.)
    • We advertised the event on Red Hat’s Westford employee list – Red Hat has local office mailing lists for each office, so we advertised the event on there asking area employees to spread the word about the event to friends and family. We got many referrals this way.
    • We advertised the event on a public Westford community Facebook page – I don’t know about other areas, but in the Boston area, many of the individual towns have public town bulletin boards set up as Facebook groups, and event listings are allowed and even encouraged on many of these sites. I was able to get access to one of the more popular Westford groups and posted about our event there – first about a month out, then a reminder the week before. We received a number of referrals this way as well.

    Photo of the event

    Logistics

    We had to formally reserve the space, and also figure out how many people were coming so we knew how much and what kinds of food to order – among many other little logistical things. Here’s how we tackled that:

    • Booking the space – I filed a ticket with Red Hat’s Global Workplace Services group to book the space. We decided to open up 30 slots for the workshop, which required booking two conference rooms on the first floor of the office (generally considered the space we offer for public events) and also requesting those rooms be set up classroom-style with a partition opened up between them to make one large classroom. The GWS team was easy to work with and a huge help in making things run smoothly.
    • Managing headcount – As mentioned earlier, we set up an EventBrite page for the event, which allowed us to set up the 30 slots and let people sign up to reserve a slot in the class. This was extremely helpful in the days leading up to the event, because it provided me a final head count for ordering food and also a way to communicate with attendees before the event (as registration requires providing an email address.) We had a last-minute cancellation of two slots, and we were able to push out information to the three channels we’d marketed the event to and get those slots filled the day before the event, so we had a full house on the day of.
    • Ordering food – I called the day before the event to order the food. We went with a local Italian place that did delivery and ordered pizzas and soda for the guests and sandwiches / salads for the instructors (I gathered instructor orders right before making the call.) We had a couple of attendees who had special dietary needs, so I made sure to order from a place that could accommodate.
    • Session video recording – During the event, we used BlueJeans to wirelessly project our slides to the projectors. Consequently, this also resulted in recordings being taken of the sessions. On my to-do list is to edit those down to just the useful bits, post them, and send the link to attendees.
    • Surveying attendees – After the event, Event Brite helpfully allowed us to send out a survey (via Survey Monkey) to the attendees to see how it went.
    • Making slides available – Several attendees asked for us to send out the slides we used (I just sent them out this afternoon, and have provided them here as well!)
    • Getting permission – I knew we were going to be writing up an event report like this, so I did get the permission/consent of everyone in the room before taking pictures and hitting record on the BlueJeans session.
    • Parking / Access – I realized too late that we probably should have provided parking information up front to attendees, but luckily it was pretty straightforward and we had plenty of spots up front. Radhika helpfully stood by the front entrance as attendees arrived to allow them in the front door and escort them to the classroom.
    • Audio/Video training – Red Hat somewhat recently got a new A/V system I wasn’t familiar with, and there are specific things you need to know about getting the two projectors in the two rooms in sync when the partition is open, so I was lucky to book a meeting with one of Red Hat’s extremely helpful media folks to meet with me the day before and teach me how to run the A/V system.


    Inkscape / Badges Prep Work

    We also needed to prepare for the sessions themselves, of course:

    • Working out an agenda – We talked about the agenda for the event on our mailing list as well as during team meetings, but the rough agenda was basically to offer an Inkscape install fest followed by a basic Inkscape class (mizmo), run through an Inkscape tutorial (gnokii), and then do a badges workshop (riecatnor & mleonova.) We’ll talk about how well this worked later in this post. 🙂
    • Prepare slides / talking points – riecatnor, mleonova, and myself prepped some slides for our sessions; gnokii prepared a tutorial.
    • Prepare handouts – You can see in one of the photos above that we provided attendees with handouts. There were two keyboard shortcut printouts – one for basic / most frequently used ones, the other a more extended / full list we found provided by Michael van der Nest. We also provided a help sheet on how to install Inkscape. We printed them the morning of and distributed them at each seat in the classroom.
    • Prepare badges – riecatnor and mleonova very carefully combed through open badge requests in need of artwork and put together a list of those most appropriate for newbies, filling in ideas for artwork concepts and tips/hints for the would-be badgers who’d pick up the tickets at the event. They also provided the list of ticket numbers for these badges on the whiteboard at the event.

    Marie explaining the anatomy of a badge

    The Agenda / Materials

    Here’s a rough outline of our agenda, with planned and actual times:

    Here’s the materials we used:

    As mentioned elsewhere in this post, we did record the sessions, but I’ve got to go through the recordings to see how usable they are and edit them down if they are. I’ll do another post if that’s the case with links to the videos.

    How did the event go?

    Unfortunately, despite our best efforts (and a massive amount of prep work,) I don’t think any of us would qualify the event as a home run. We ran into a number of challenges, some of our own (um, mine, actually!) making, some out of our control. That being said, thus far our survey results have been very positive – basically attendees agreed with our self-analysis and felt it was a good-to-very good, useful event that could have been even better with a few tweaks.

    graph showing attendees rated the presentation good-to-excellent

    The Good

    • Generally attendees enjoyed the sessions and found them useful. As you can see in the chart above, of 8 survey respondents, 2 thought it was excellent, 3 thought it was very good, and 3 thought it was good. I’ll talk more about the survey results later on, but enjoy this respondent’s quote: “I’m an Adobe person and I’ve never used other design softwares, so I’m happy I learned about a free open source software that will help me become more of an asset when I finish college and begin looking for a career.”
    • The event was sold out – interest in what we had to say and teach is high! We had all 30 slots filled over a week before the event; when we had 2 last-minute dropouts, we were able to quickly re-fill those slots. I don’t know if every single person who signed up attended, but we weren’t left with any extra seats in the room at the peak of attendance.
    • The A/V system worked well. We had a couple of mysterious drops from BlueJeans that led to some furious reconnecting to continue the presentation, but overall, our A/V setup worked well.
    • The food was good. There was something to eat for everyone, and it all arrived on time. For close to 40 people, it cost $190. This included 11 pizzas (9 large, 2 medium gluten free), 4 salads, 2 sandwiches, and 5 2-liter bottles of soda. (Roughly $5.30/person.) Maybe a silly point to make, but food is important too, especially since the event ran right through lunch (10 AM – 3 PM.)
    • We didn’t frighten newbies away (at least, not right away.) About half of the attendees came with Inkscape preinstalled, half didn’t. We divided them into different halves of the room. The non-preinstallers (who we classified as “newbies,”) stayed until a little past lunch, which I consider a victory – they were able to follow at least the first long session, stayed for food, and completed most of gnokii’s tutorial.
    • Inkscape worked great, even cross-platform. Inkscape worked like a champ – there were no catastrophic crashes and generally people seemed to enjoy using it. We had everyone installed by about 20 minutes into the first session – one OS X laptop had some issues due to some settings in the OS X control panel relating to XQuartz, but we were able to solve them. Everyone left the event with a working copy of Inkscape on their system! I would guesstimate we had about 1/3 OS X, 1/3 Windows, and 1/3 Linux machines (the latter RH employees + family mostly. 🙂 )
    • No hardware issues. We instructed attendees to bring their own hardware and all did, with the exception of one attendee who contacted me ahead of time – I was able to arrange to provide a loaner laptop for her. Some folks forgot to bring a computer mouse and I had enough available to lend.

    survey results about event length - too long

    The Bad

    • We ran too long. We originally planned the workshop to last from 10 AM to 2 PM. We actually ran until about 4 PM, although we had officially extended the end time to 3 PM with everyone in the room’s consent around 1:30. This is almost entirely my fault; I covered the Inkscape Bootcamp slides too slowly. We had a range of skill levels in the room, and while I was able to keep the newbies on board during my session, the more advanced folks were bored until gnokii ran his (much more advanced) tutorial. The survey results also provided evidence for this, as folks felt the event ran too long and some respondents felt it moved too slowly, others too fast.
    • We covered too much material. Going hand-in-hand with running too long, we also tried to do too much. We tried to provide instruction for everyone from the absolute beginner, to Adobe convert, to more experienced attendee, and lost folks along the way as the pacing and level of detail needed for each different audience is too different to pull off successfully in one event. In our post-event session, the Fedora Design Team members running the event agreed we should cut a lot of the basic Inkscape instruction and instead focus on badges as the conduit for more (perhaps one-on-one lab session style) Inkscape instruction to better focus the event.
    • We lost people after lunch. We lost about half of our attendees not long after lunch. I believe this is for a number of reasons, not the least of which is that we covered so much material to start, they simply needed to go decompress (one survey respondent: “I ended up having to leave before the badges part because my brain hurt from the button tutorial. Maybe don’t do quite so many things next time?”) Another interesting thing to note: the half of the room that was less experienced (they didn’t come with Inkscape pre-installed and tended to need more instructor help along the way) is the half that pretty much cleared out, while the more experienced half of the room was still full by the official end of the event. This helps support the notion that the newbies were overwhelmed and the more experienced folks hungry for more information.
    • FAS account creation was painful. We should have given the Fedora admins a heads up that we’d be signing 30 folks up for FAS accounts all at the same time – we didn’t, oops! Luckily we got in touch via IRC, so folks were finally able to sign up for accounts without being blocked due to getting flagged as potential spammers. The general workflow for FAS account signup (as we all know) is really clunky and definitely made things more difficult than it needed to be.
    • We should have been more clear about the agenda / had slides available. This one came up multiple times on the survey – folks wanted a local copy of the slides / agenda at the event so when they got lost they could try to help themselves. We were surprised by how unwilling folks seemed to be to ask for help, despite our attempts to set a laid back, audience-participation heavy environment. In chatting with some of the attendees over lunch and after the event, both newbie and experienced folks expressed a desire to avoid ‘slowing everybody else down’ by asking a question and wanting to try to ‘figure it out myself first.’
    • No OSD keypress guides. We forgot to run an app that showed our keypresses while we demoed stuff, which would have made our instructions easier to follow. One of the survey respondents pointed this one out.
    • We didn’t have name badges. Another survey comment – we weren’t wearing name badges and our names weren’t written anywhere, so some folks forgot our names and didn’t know how to call for us.
    • We weren’t super-organized on assisting folks around the room. We should have set a game plan before starting and assigned some of the other staff to particular corners of the room, giving each of them an area in which to help people one-on-one. This would have helped because, as just mentioned, people were reluctant to ask for help. Pacing behind people as they worked, taking note of their screens when they seemed stuck, and offering help worked well.

    Workshop participants working on their projects

    Survey results so far

    Thus far we’ve had 8 respondents out of the 30 attendees, which is actually not an awful response rate. Here’s a quick rundown of the results:

    1. How likely is it that you would recommend the event to a friend or colleague? 2 detractors, 3 passives, 2 promoters; net promoter score 0 (eek)
    2. Overall, how would you rate the event? Excellent (2), Very Good (3), Good (3), Fair (0), Poor (0)
    3. What did you like about the event? This was a freeform text field. Some responses:
      • “I think the individuals running the event did a great job catering to the inexperience of some of the audience members. The guy that ran the button making lab was incredibly knowledgeable and he helped me learn a lot of new tools in a software I’ve never used before that I may not have found on my own.”
      • “The first Inkscape walk through of short cut keys and their use. Presenter was confident, well prepared and easy to follow. Everyone was very helpful later as we tried “Evil Computer” mods with assistance from knowledgeable artists.”
      • “I enjoyed learning about Inkscape. Once I understood all the basic commands it made it very easy to render cool-looking logos.”
      • “It was a good learning experience. It taught me some things about graphics that I did not know.”
    4. What did you dislike about the event? This was a freeform text field. Some responses:
      • “I wish there was more of an agenda that went out. I tried installing Inkscape at my home before going, but I ran into some issues so I went to the office early to get help. Then I found out that the first hour of the workshop was actually designed to help people instal it. It also went much later than originally indicated and although it didn’t bother me, many people left at the time it was supposed to end, therefore not being able to see how to be an open source contributor.”
      • “The button explanation was very fast and confusing. I’m hoping the video helps because I can pause it and looking away for a moment won’t mean I miss something important.”
      • “Hard to follow directions, too fast paced”
      • “The pace was sometimes too slow.”
      • “While the pace felt good, it can be hard to follow what specific keypresses/mouse movements produced an effect on the projector. When it’s time to do it yourself, you may have forgotten or just get confused. A handout outlining the steps for each assignment would have been helpful.”
    5. How organized was the event? Extremely organized (0), Very organized (5), Somewhat organized (3), Not so organized (0), Not at all organized (0)
    6. How friendly was the staff? Extremely friendly (4), Very friendly (4), Somewhat friendly (0), Not so friendly (0), Not at all friendly (0)
    7. How helpful was the staff? Extremely helpful (2), Very helpful (3), Somewhat helpful (3), not so helpful (0), not at all helpful (0).
    8. How much of the information you were hoping to get from this event did you walk away with? All of the information (4), most of the information (2), some of the information (2), a little of the information (0), none of the information (0)
    9. Was the event length too long, too short, or about right? Much too long (0), somewhat too long (3), slightly too long (3), about right (2), slightly too short (0), somewhat too short (0), much too short (0).
    10. Freeform Feedback: Some example things people wrote:
      • “I’m an Adobe person and I’ve never used other design softwares, so I’m happy I learned about a free open source software that will help me become more of an asset when I finish college and begin looking for a career.”
      • “Overall fantastic event. I hope I’m able to find out if another workshop like this is ever held because I’d definitely go.”
      • “If you are willing to make the slides available and focus on tool flow it would help as I am still looking for how BADGE is obtained and distributed.”

    mleonova showing off our badges

    Looking forward!

    Despite some of the hiccups, it is clear attendees got a lot out of the event and enjoyed it. There are a lot of recommendations / suggestions documented in this post for improving the next event, should one of us decide to run another one.

    In general, in our post-event discussion we agreed that future events should have a tighter experience level pre-requisite; for example, absolute beginners tended to like the Inkscape bootcamp material, so maybe have a separate Inkscape bootcamp event for them. The more experienced users enjoyed gnokii’s project-style, fast-paced tutorial and the badges workshop, so having an event that included just that material and had a pre-requisite (perhaps you must be able to install Inkscape on your own and be at least a little comfortable using it) would probably work well.

    Setting a time limit of 3-4 hours and sticking to it, with check-ins, would be ideal. I think an event like this with this many attendees needs 2-3 people minimum running it to work smoothly. If there were 2-3 Fedorans co-located and comfortable with the material, it could be run fairly cheaply; if the facility is free, you could do it for around $200 if you provide food.

    Anyway I hope this event summary is useful, and helps folks run events like this in the future! A big thanks to the Fedora Council for funding the Fedora Design Team FAD and this event!

    July 31, 2016

    New Stellarium User Guide is available

    Dear all,

    While we were working on new features for the 0.15 release, we also thoroughly reworked the Stellarium User Guide (SUG). It should now include all changes introduced since the 0.12 series and be up-to-date with the 0.15 series. It includes many details about landscape creation, skyculture creation, telescope control, putting your deep-sky photos among the stars, how to start scripting, creation of 3D sceneries for Stellarium, and much more.

    The SUG is now almost 300 pages and available for download as a hyperlinked PDF from stellarium.org. It is also bundled with the Windows install package, so you don't need a separate download.

    The online user guide on the wiki will no longer be updated, and may even go away if we do not hear a major outcry from you.

    Clear skies for observing, and now you have something to read for the cloudy nights as well ;-)

    Kind regards,

    Stellarium 0.15.0

    In memory of our team member Barry Gerdes.

    Version 0.15.0 is based on Qt5.6. Starting with this version, some graphics cards have been blacklisted by Qt and are automatically forced to use ANGLE on Windows.
    We introduce a major internal change with the StelProperty system.
    This allows simpler access to internal variables and therefore more ways of operation.
    Most notably this version introduces an alternative control option via RemoteControl, a new webserver interface plugin.
    We also introduce another milestone towards providing better astronomical accuracy for historical applications: experimental support of getting planetary positions from JPL DE430 and DE431 ephemerides. This feature is however not fully tested yet.
    The major changes:
    - Added StelProperty system
    - Added new plugin for exhibitions and planetariums - Remote Control
    - Added new skycultures: Macedonian, Ojibwe, Dakota/Lakota/Nakota, Kamilaroi/Euahlayi
    - Updated code of plugins
    - Added Bookmarks tool and updated AstroCalc tool
    - Added new functions for Scripting Engine and new scripts
    - Added Miller Cylindrical Projection
    - Added updates and improvements in DSO and star catalogues (including initial support of The Washington Double Star Catalog)
    - Added azimuth lines (also targeting geographic locations) in ArchaeoLines plugin
    - Many fixes and improvements...

    In addition, we prepared a new user guide.

    A huge thanks to our community whose contributions help to make Stellarium better!

    Full list of changes:
    - Added getting planetary positions from JPL DE430 and DE431 ephemerides (SoCiS2015 project)
    - Added RemoteControl and preliminary RemoteSync plugins (SoCiS2015 project)
    - Added StelProperty system (SoCiS2015 project)
    - Added immediate saving of settings for plugins (Angle Measure, Archeo Lines, Compass Marks)
    - Added Belarusian translation for landscapes and sky cultures (LP: #1520303)
    - Added Bengali description for landscapes and sky cultures (LP: #1548627)
    - Added new skycultures: Macedonian, Ojibwe, Dakota/Lakota/Nakota, Kamilaroi/Euahlayi
    - Added support Off-Axis Guider feature in Oculars plugin (LP: #1354427)
    - Added support permanent rotation angle for CCD in Oculars plugin
    - Added type of mount for telescopes in Oculars plugin
    - Added improvements for displaying data in decimal format
    - Added possibility to draw permanent orbits of the planets (disables hiding of orbits for planets when they are out of the field of view). (LP: #1509674)
    - Added tentative support for screens with 4K resolution for Windows packages (LP: #1372781)
    - Enabled support for side-by-side assembly technology for Windows packages (LP: #1400045)
    - Added CLI options --angle-d3d9, --angle-d3d11, --angle-warp for fine-tuning ANGLE flavour selection on Windows.
    - Added improvements in Stellarium's installer on Windows
    - Added improvements in Telescope Control plugin
    - Added feature to build dependency graphs of various characteristics of exoplanets (Exoplanets plugin)
    - Added support of the proper names for exoplanets and their host stars (Exoplanets plugin)
    - Added improvement for Search Tool
    - Added improvement for scripting engine
    - Added Bayer designations for some stars in Scorpius (LP: #1518437)
    - Added updates and improvements in Stellarium DSO Catalog
    - Added initial support of subset of The Washington Double Star Catalog (LP: #1537449)
    - Added Prime Vertical and Colures lines
    - Added new functions for Scripting Engine
    - Added new DSO textures
    - Finished migration from Phonon to QtMultimedia (LP: #1260108)
    - Added scripting function to block tracking or centering for special installations.
    - Added visualization of ephemerides
    - Added config option for animation speed of pointers (gui/pointer_animation_speed = 1.0)
    - Added implementation of semi-transparent mask in the Oculars plugin (LP: #1511393)
    - Added hiding the halo when inner planet between Sun and observer (or moon between planet and observer) (LP: #1533647)
    - Added a tool to fill in custom settings for the position of the Great Red Spot on Jupiter
    - Added Bookmarks tool (LP: #1106779)
    - Added new scripts: Best objects in the New General Catalog, The Jack Bennett Catalog, Binosky: Deep Sky Objects for Binoculars, Herschel 400 Tour, Binocular Highlights, 20 Fun Naked-Eye Double Stars, List of largest known stars
    - Added Circumpolar Circles (LP: #1590785)
    - Added Miller Cylindrical Projection
    - Allow viewport offset change in scripts.
    - Allow centering zenith or pole via scripting (LP: #1068529)
    - Allow freezing/unfreezing average atmospheric brightness (e.g. for balanced-brightness image export scripts.)
    - Allow saving of output.txt to another file so that it can be read by other programs on Windows while Stellarium is still open.
    - Allow min/max values and wraparound settings for AngleSpinBox
    - Allow configurable speed and script speed buttons
    - Allow storing and retrieval of screen location for StelDialogs (LP: #1249251)
    - Allow polygonal horizons with many negative values (LP: #1554639)
    - Allow altitude-dependent twinkling for stars (LP: #1594065)
    - Allow display of sun's halo if sun is just outside viewport (LP: #1294498)
    - Reconfigure viewDialog GUI to put constellation switches to skylore tab.
    - Limit location coordinate spinboxes to useful coordinates
    - Apply Fluctuations in the Moon's Mean Longitude in DeltaT calculations (Source: Spencer Jones, H., 'The Rotation of the Earth, and the Secular Accelerations of the Sun, Moon and Planets', MNRAS, 99 (1939), 541-558 [http://adsabs.harvard.edu/abs/1939MNRAS..99..541S])
    - Applying device pixel ratio to the pixmap, so that it displays correctly on Macs.
    - Added improvements for Paste and Search feature (Search Tool)
    - Added ecliptical coordinates info for objects in scripting engine
    - Added exit pupil calculation in the Oculars plugin (LP: #1500225)
    - Added support MSVC2015
    - Added automatic reloading catalogs after updating for some plugins
    - Added a tour of Messier Objects
    - Added fix to circumvent text rendering bug (CLI option: -t)
    - Introduce env variable STEL_OPTS to allow preconfiguring default CLI options.
    - Added option to hide the background under buttons on the bottom toolbar (LP: #1204639)
    - Added check position on the screen for orbits of satellites (LP: #1510530)
    - Added new option to changing behaviour of displaying of the labels of DSO on the screen (LP: #1600283)
    - Star catalogues have been updated from 'XHIP: An Extended Hipparcos Compilation' data.
    - Fixed validation of day in Date and Time dialog (LP: #1206284)
    - Fixed display of sidereal time (mod24), show apparent sidereal time only if nutation is used.
    - Fixed issue of saving some setting from the View window (LP: #1509639)
    - Fixed issue for reset of number of satellite orbit segments (LP: #1510592)
    - Fixed bug in download of stars catalogs in debug mode (LP: #1514542)
    - Fixed issue with smooth blending/fading in ArchaeoLines plugin
    - Fixed loading scenes for Scenery 3D plugin (LP: #1533069)
    - Fixed connection troubles in Telescope Control Plugin on Windows (LP: #1530372)
    - Fixed wrong altitude of culmination in Observability plugin (LP: #1531561)
    - Fixed the meteor radiants movements when time is switched manually (LP: #1535950)
    - Fixed misbehaving zoom out to initial view position (LP: #1537446)
    - Fixed format for declination in AstroCalc
    - Fixed value of ecliptic obliquity and ecliptic coordinates of date (LP: #1520792)
    - Fixed zoom/art brightness handling (LP: #1520783)
    - Fixed perspective mode with offset viewport in scenery3d (LP: #1509728)
    - Fixed drawing reticle for telescope (LP: #1526348)
    - Fixed wrong altitudes for some locations (LP: #1530759)
    - Fixed window location having offscreen frame when leaving fullscreen (LP: #1471954)
    - Fixed core.moveToAltAzi(90,XX) issue (LP: #1068529)
    - Fixed some skyculture links
    - Fixed issue of sidereal time: sidereal time is no longer displayed negative in the Western timezones.
    - Fixed online search tool for MPC website
    - Fixed translation of Egyptian planet names (LP: #1548008)
    - Fixed bug about wrong rise/set times in Observability for years far in the past
    - Fixed issue for resets flip buttons in Oculars plugin (LP: #1511389)
    - Fixed proper detection of GLSL ES version on Raspberry Pi with VC4 driver (and maybe other devices).
    - Fixed odd DateTimeDialog behavior during daylight saving change
    - Fixed key handling issue on Mac OS X in Scenery3D (LP: #1566805)
    - Fixed omission in documentation (LP: #1574583, #1575059)
    - Fixed a loss of focus in the sky when you click on the button (LP: #1578773)
    - Fixed issue of getting location from network.
    - Fixed bug in visualization of opposition/conjunction longitude
    - Fixed crash of Navigational Stars plugin (LP: #1598375)
    - Fixed satellites mutual occultation (LP: #1389765)
    - Fixed NaN in landscape brightness computation (LP: #1597129)
    - Fixed oversized corona (LP: #1599513)
    - Fixed displaying common names of DSO after changes filters of catalogs (LP: #1600283)
    - Ensure Large File Support for DE431 also for ARM boards.
    - Changed behaviour for drawing of the planet orbits (LP: #1509673)
    - Make moon halo visible again even when below -45 degrees (LP: #1586796)
    - Reduce planet brightness in daylight (LP: #1503248)
    - Updated AstroCalc tool
    - Updated icons for View dialog
    - Updated ssystem.ini (LP: #1509693, #1509692)
    - Updated names of stars (LP: #1550642)
    - Updated the search rules in the search dialog (LP: #1593965)
    - Avoid false display of tiny eclipse factor (rounding error).
    - Avoid issues around GLdouble in GLES2/ARM boards.
    - Reduce brightness of stars for ocular and CCD views
    - Hide displaying markers for meteor radiants during daylight
    - Cosmetic updates in Equation Of Time plugin
    - Enabled permanent visualization of position angles for galaxies
    - Updated bookmarks in Solar System Editor plugin
    - Updated default config options
    - Updated scripts
    - Updated shortcuts for scripts
    - Updated Norwegian skyculture descriptions
    - Updated connection behaviour for autodiscovery location through network (FreeGeoIP)
    - Updated and optimized GUI
    - Updated Navigational Stars plugin
    - Implementation of quick turning to different directions (examples: CdC, HNSKY)
    - Important optimizations of planet position computation
    - Refactoring coloring markers of the DSO
    - Refactoring of the generating parts of the infrastructure (LP: #1571391)
    - Refactoring Telescope Control plugin
    - Removed info about Moon phases (avoid inconsistency for strings).
    - Removed rotation of movement by convergence angle correction in Scenery 3D plugin.

    July 28, 2016

    E-Interiores: Next-generation interior design with Blender

    By: Dalai Felinto, Blender Developer

    Meet e-interiores. This Brazilian interior design e-commerce startup has transformed its creation process in an entirely new fashion. This tale will show you how Blender made this possible, and how far we got.

    We developed a new platform based on a semi-vanilla Blender, Fluid Designer, and our own pipelines. Thanks to the results we accomplished, e-interiores was able to consolidate a partnership with the giant Tok&Stok, providing a complete design of a room in 72 hours.

    A long time ago in a galaxy far far away

    During its initial years, e-interiores focused on delivering top-notch projects, with state of the art 3d rendering. Back then, this would involve a pantheon of software, namely: AutoCAD, SketchUp, VRay, Photoshop.

    All those mainstream tools were responsible for producing technical drawings, 3D studies, final renderings, and the presentation boards. Although nothing bad could be said about the final quality of their deliverables, the overall process was “artisanal” at best and extremely time consuming.

    Would it be possible to handle those steps inside a single tool? How much time could be saved from handling the non-essential tasks to the computer itself?

    New times require new tools

    The benefits of automation in a pipeline are known and easily measured. But how much thought does a studio give to customization? How much can a studio gain from a custom-tailored tool?

    It was clear that we had to minimize the time spent on preparation, rendering, and presentation. This would leave the creators free to dedicate their time and sweat to what really matters: which furniture to use and how to arrange it, which colors and materials to employ, the interior design itself.

    A fresh start

    The development paradigm was as such:

    • Vanilla Blender: The underneath software should stay as close to its consumer version as possible
    • Addon: The core of the project would be to create a Python script to control the end to end user experience
    • Low entry barrier: the users should not have to be skilled in any other 3D software, especially not in Blender

    The development started by cleaning up the Blender Interface completely. I wanted the user to be unaware of the software being used underneath. We took a few hints from Fluid Designer (the theme is literally their startup file), but we focused on making the interface tied to the specifics of e-interiores working steps.

    You have the tools to create the unchanging elements of the space – walls, floor, … – the render points of view, the dynamic elements of the project, and the library. Besides that, there is a whole different set of tools dedicated to creating the final boards, adding annotations, measurements, …

    A little bit about coding

    Although I wanted to keep Blender as close to its pristine release condition as possible, there were some changes in Blender that were necessary. They mostly orbited around the Font objects functionality which we use extensively in the boards preparations.

    The simplest solution in this case was to make the required modifications myself, and contribute them back to Blender. The following contributions are all part of the official Blender code, helping not only our project, but anyone that requires a more robust all-around text editing functionality:

    With this out of the way, we have a total of 18,443 lines of code for the core system, 1,458 for model conversion, and 2,407 for the database. All of this amounts to over 22 thousand lines of Python scripting.

    Infrastructure barebones

    The first tools we drafted are what we call the skeleton. We have parametric walls, doors, and windows. We can make floors and ceilings. We can adjust their measurements later. We can play with their style and materials.
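
    As a rough illustration of what “parametric” means here (a toy sketch of the idea, not e-interiores' actual code; the function name and default dimensions are made up), a wall can be as simple as a box whose dimensions are driven by a few parameters:

        import bpy

        def add_wall(length=4.0, height=2.7, thickness=0.15, location=(0.0, 0.0, 0.0)):
            """Add a simple box-shaped wall whose dimensions come from parameters."""
            bpy.ops.mesh.primitive_cube_add(location=location)
            wall = bpy.context.object
            wall.name = "Wall"
            # The default cube spans 2 units per axis, so scale by half of each dimension.
            wall.scale = (length / 2.0, thickness / 2.0, height / 2.0)
            return wall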

    Objects library

    We have over 12,000 3D models made available to us by Tok&Stok. The challenge was to batch convert them into a format Cycles could use. The files were originally in Collada, modelled and textured for realtime usage. We then ditched the lightmaps, removed the support meshes, and assigned hand-made Cycles materials based on each object's category.

    Part of this was only possible thanks to the support of Blender developer and Collada functionality maintainer Gaia Clary. Many thanks!
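
    Here is a hedged sketch of what such a batch conversion can look like with Blender's Python API (the paths, the material naming, and the "support mesh" heuristic are all made up for illustration; the real pipeline is far more involved):

        import bpy
        import glob
        import os

        SOURCE_DIR = "/path/to/collada"   # hypothetical input folder of .dae files
        OUTPUT_DIR = "/path/to/blends"    # hypothetical output folder

        def convert(dae_path):
            bpy.ops.wm.read_homefile()                    # reset (assumes an empty startup scene)
            bpy.ops.wm.collada_import(filepath=dae_path)  # bring in the Collada model
            material = bpy.data.materials.new("CategoryMaterial")
            material.use_nodes = True                     # a node-based material for Cycles
            for obj in list(bpy.data.objects):
                if obj.type != 'MESH':
                    continue
                if "support" in obj.name.lower():         # drop helper/support meshes
                    bpy.data.objects.remove(obj, do_unlink=True)
                    continue
                for slot in obj.material_slots:           # replace the realtime materials
                    slot.material = material
                if not obj.material_slots:
                    obj.data.materials.append(material)
            name = os.path.splitext(os.path.basename(dae_path))[0]
            bpy.ops.wm.save_as_mainfile(filepath=os.path.join(OUTPUT_DIR, name + ".blend"))

        for path in glob.glob(os.path.join(SOURCE_DIR, "*.dae")):
            convert(path)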

    More dynamic elements

    Curtains, mirrors, marble, blindex . . . there are a few components of a project that are custom-made and adjusted on a case-by-case basis.


    This is where the system shines. The moment an object is on the scene we can automatically generate the lighting layout, the descriptive memorial, and the product list.

    The boards are the final deliverable to the clients. This is where the perspectives, the project lists, the blueprints all come together. The following animation illustrates the few steps involved in creating a board with all the used products, with their info gathered from our database.

    Miscellaneous results

    Finally, you can see a sample of the generated results from the initial projects done with this platform. Thanks to Blender's scripting possibilities and customization, we put together an end-to-end experience for our designers and architects.

    July 27, 2016

    A Chiaroscuro Portrait


    Following the Old Masters

    Introduction (Concept/Theory)

    The term Chiaroscuro is derived from the Italian chiaro meaning ‘clear, bright’ and oscuro meaning ‘dark, obscure’. In art the term has come to refer to the use of bold contrasts between light and shadow, particularly across an entire composition, where they are a prominent feature of the work.

    This interplay of shadow and light is particularly important in allowing the viewer to extrapolate volume from a flat image. The use of a single light source helps to accentuate the perception of volume as well as adding drama and dynamics to the scene.

    Historically the use of chiaroscuro can often be associated with the works of old masters such as Rembrandt and Caravaggio. The use of such extreme lighting immediately evokes a sense of shape and volume, while focusing the attention of the viewer.

    Self Portrait with Gorget by Rembrandt
    Girl with a Pearl Earring by Johannes Vermeer

    The aim of this tutorial will be to emulate the lighting characteristics of chiaroscuro in producing a portrait to evoke the feeling of an old master painting.


    In examining chiaroscuro portraiture, it becomes apparent that a strong characteristic of the images is the use of a single light source on the scene. So this tutorial will focus on using a single source to illuminate the portrait.

    Getting the keylight off the camera is essential. The closer the keylight is to the axis of the camera the larger the reduction in shadows. This is counter to the intention of this workflow. Shadows are an essential component in producing this look, and on-camera lighting simply will not work.

    The reason to choose a softbox versus the myriad of other light modifiers available is simple: control. Umbrellas can soften the light, but due to their open nature have a tendency to spill light everywhere while doing so. A softbox allows the light to be softened while also retaining a higher level of spill control.

    Light spill can still occur with a softbox, so the best option is to bring the light in as close as possible to the subject. Due to the inverse square nature of light attenuation, this will help to drop the background very dark (or black) when exposing properly for the subject.

    Inverse Square Light Fall Off

    For example, in the sample images above, a 20 inch softbox was initially located about 18 inches away from the subject (first). The rear wall was approximately 48 inches away from the subject or just over twice the distance from the softbox. Thus, on a proper exposure for the subject, the background would be around 3 stops lower in light. This is seen as the background in the first image has dropped to a dark gray.

    When the light distance to the subject is doubled and the light distance to the rear wall stays the same, the ratio is not as extreme between them. The light distance from the subject is now 36 inches, while the light distance to the rear wall is still 48 inches. When properly exposing for the subject, the rear wall is now only about 1 stop lower in light.

    In the final example, the distance from the light to both the subject and the rear wall are very close. As such, a proper exposure for the subject almost brings the wall to a middle exposure.

    What this example provides is a good visual guide for how to position the subject and light relative to the surroundings to create the desired look. To accentuate the ratio between dark and light in the image it would be best to move the light as close to the subject as possible.
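
    As a quick sanity check on those numbers: with a single small source, the difference in stops between subject and background is 2 × log2 of the ratio of the two light distances. A tiny sketch (treating the distances as light-to-subject and light-to-wall, which is my reading of the example rather than something stated explicitly):

        from math import log2

        def stops_darker(d_subject, d_background):
            """How many stops darker the background is than the subject,
            assuming inverse square falloff from a single small light source."""
            return 2 * log2(d_background / d_subject)

        print(stops_darker(18, 48))   # ~2.8 stops: light in close, background drops to dark gray
        print(stops_darker(36, 48))   # ~0.8 stops: light pulled back, background only ~1 stop down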

    If there is nothing to reflect light on the shadow side of the subject, then the shadows would fall to very dark or black. Usually, there are at least walls and ceilings in a space that will reflect some light, and the amount falling on the shadow side can be attenuated by either moving the subject nearer to a wall on that side, or using a bounce/reflector as desired.



    The setup for the shot would be to push the key light in very close to the model, while still allowing some bounce to slightly fill the shadows.

    Mairi Light Setup

    As noted previously, having the key light close to the model would allow the rest of the scene to become much darker. The softbox is arranged such that its face is almost completely vertical and its bottom edge is just above the model's eyes. This was to feather the lower edge of the light falloff along the front of the model.

    There are 2 main adjustments that can be made to fine-tune the image result with this setup.

    The first is the key light distance/orientation to the subject. This will dictate the proper exposure for the subject. For this image the intention is to push the key light in as close as possible without being in frame. There is also the option of angling the key light relative to the subject. In the diagram above, the softbox is actually angled away from the subject. The intention here was to feather the edge of the light in order to control spill onto the rest of the model (putting more emphasis on her face).

    The second adjustment, once the key light is in a good location, is the distance from the key light and subject together, to the surrounding walls (or a reflector if one is being used). Moving both subject and keylight closer to the side wall will increase the amount of reflected light being bounced into the shadows.

    Mood Board

    If possible, it can be extremely helpful to both the model and photographer to have a Mood Board available. This is usually just a collection or collage of images that help to convey the desired feeling or desired result from the session. For help in directing the model, the images do not necessarily need the same lighting setup. The intention is to help the model understand what your vision is for the pose and facial expressions.

    The Shoot

    The lighting is set up and the model understands what type of look is desired, so all that’s left is to shoot the image!

    Mairi Contact Sheet

    In the end, I favored the last image in the sequence for a combination of the model's head position/body language and the slight smile she has.


    Having chosen the final image from the contact sheet, it’s now time to proceed with developing the image and retouching as needed.

    If you’d like to follow along you can download the raw .ORF file:

    Mairi_Troisieme.ORF (13MB)

    This file is licensed (Creative Commons, By-Attribution, Non-Commercial, Share-Alike), and is the same image that I shared with everyone on the forums for a PlayRaw processing practice. You can see how other folks approached processing this image in the topic on discuss. If you decide to try this out for yourself, come share your results with us!

    Raw Development

    There are various Free raw processing tools available and for this tutorial I will be using the wonderful darktable.

    darktable logo

    Base Curve

    Not surprisingly the initial image loaded without any modifications is a bit dark and rather flat looking. By default darktable should have recognized that the file is from Olympus, and attempted to apply a sane base curve to the linear raw data. If it doesn’t you can choose the preset “olympus like alternate”.

    I found that the preset tended to crush the darkest tones a bit too much, and instead opted for a simple curve with a single point as seen here:

    darktable base curve

    Resist the temptation to try and adjust overall exposure and contrast with the base curve. These parameters will be adjusted shortly in the appropriate modules. The base curve is only intended to transform the linear raw rgb to something that looks good on your output device. The base curve will affect how the contrasts, colors, and saturation all relate in the final output. For the purposes of this tutorial, it is enough to simply choose a preset.

    The next series of steps focus on adjusting various exposure parameters for the image. Conceptually they start with the most broad adjustment, exposure, then to slightly more targeted adjustments such as contrast, brightness, and saturation, then finish with targeted tonal adjustments in tone curves.

    darktable manual: base curve

    Exposure

    Once the base curve is set, the next module to adjust would be the overall exposure of the image (and the black point). This is done in the “exposure” module (below the base curve).

    darktable exposure

    The important area to watch while adjusting the exposure for the image is the histogram. The image was exposed a little dark, so increase the overall exposure for the image. In the histogram, avoid clipping any channels (don't let them get pushed outside the range). In this case, the desire is to provide a nice mid-level brightness to the model's face. The exposure can be raised until the channels begin to clip on the far right of the histogram, then brought back down a bit to leave some headroom.

    The darkest areas of the histogram on the left are clipped a bit, so raising the black level brings the detail back in the darkest shadows. When in doubt, try to let the histogram guide you with data from the image, particularly around the highest and lowest values (avoid clipping if possible).

    An easy way to think of the exposure module is that it allows the entire image exposure to be shifted along with compressing/expanding the overall range by modifying the black point.
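
    Conceptually (and this is only a sketch of the idea, not darktable's actual implementation; the function and parameter names are mine), the adjustment amounts to a black point shift followed by a linear gain of 2 to the power of the EV value:

        def exposure_adjust(pixel, exposure_ev, black_level):
            """Conceptual model of an exposure + black point adjustment on linear data:
            subtract the black level, then scale by 2**EV."""
            return (pixel - black_level) * (2.0 ** exposure_ev)

        # e.g. brightening by half a stop while lifting the black point slightly:
        # exposure_adjust(0.18, exposure_ev=0.5, black_level=0.0005)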

    darktable manual: exposure

    Contrast Brightness Saturation

    Where the Exposure module shifts the overall image values from a global perspective, modules such as the “contrast brightness saturation” allow finer tuning of the image within the range of the exposure.

    To emphasize the model's face, while also strengthening the interplay of shadow and light in the image, drop the brightness down to taste. I brought the brightness levels down quite a bit (-0.31) to push almost all of the image below medium brightness.

    darktable contrast brightness saturation

    Overall this helps to initially emphasize the model's face over the rest of the image. While the rest of the image is comprised of various dark/neutral tones, the model's face is not. Pushing the saturation down as well will remove much of the color from the scene and face. This is done to bring the skin tones back down to something slightly more natural looking, while also muting some of those tones.

    darktable contrast brightness saturation

    The skin now looks a bit more natural but muted. The background tones have become more neutral as well. A very slight bump in contrast to taste finishes out this module.

    darktable manual: contrast brightness saturation

    Tone Curve

    A final modification to the exposure of the image is through a tone curve adjustment. This gives us the ability to make some slight changes to particular tonal ranges. In this case pushing the darker tones down a bit more while boosting the upper mid and high tones.

    darktable tone curve

    This is actually a type of contrast increase, but controlled to specific tones based on the curve. The darkest darks (bottom of the curve) get pushed a little bit darker, which will include most of the sweater, background, and shadow side of the model's face. The very slight rolling boost to the lighter tones primarily helps to allow the face to brighten up against the background even more.

    The changes are very slight and to taste. The tone curve is very sensitive to changes, and often only very small modifications are required to achieve a given result.

    darktable manual: tone curve

    Sharpen

    By default the sharpen module will apply a small amount of sharpening to the image. The module uses an unsharp mask for sharpening, so the radius parameter is the blur radius fed into the unsharp mask. I wanted to lightly sharpen very fine details, so I set the radius to ~1, with an amount around 0.9 and no threshold. This produced results that are very hard to distinguish from the default settings, but it appears to sharpen smaller structures just slightly more.
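
    For anyone curious what those three parameters actually do, unsharp masking is conceptually simple; here is a small illustrative sketch (using a Gaussian blur from scipy purely for demonstration; darktable's implementation differs in its details):

        import numpy as np
        from scipy.ndimage import gaussian_filter

        def unsharp_mask(image, radius=1.0, amount=0.9, threshold=0.0):
            """Sharpen a float image by adding back the difference from a blurred copy.
            'radius' is the blur sigma, 'amount' scales the difference, and 'threshold'
            ignores differences too small to matter (e.g. noise)."""
            blurred = gaussian_filter(image, sigma=radius)
            detail = image - blurred
            detail[np.abs(detail) < threshold] = 0.0
            return image + amount * detail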

    darktable sharpen

    I personally include a final sharpening step as a side effect of using wavelet decompose for skin retouching later in the process with GIMP. As such I am not usually as concerned about sharpening here as much. If I were, there are better modules for adjusting sharpening from wavelets using the equalizer module.

    darktable manual: sharpen

    Denoise (profiled)

    The darktable team and its users have profiled many different cameras at various ISOs, building a statistical model of noise against brightness across the three color channels. Using these profiles, darktable can do a better and more efficient job of denoising images. In the case of my camera (Olympus OM-D E-M5), there was already a profile captured for ISO 200.

    darktable denoise profiled

    In this case, the chroma noise wasn’t too bad, and a very slight reduction in luma noise would be sufficient for the image. As such, I used a non-local means with a large patch size (to retain sharpness) and a low strength. This was all applied uniformly against the HSV lightness option.

    darktable manual: denoise - profiled


    Finally! The image tones and exposure are in a desirable state, so export the results to a new file. I tend to use either TIF or PNG at 16 bit. This is in case I want to work in a full 16 bit workflow with the latest GIMP, or may want to in the future.


    When there are still some pixel-level modifications that need to be done to the image, the go-to software is GIMP.

    • Skin retouching
    • Spot healing/touchups
    • Background rebuild
    GIMP - GNU Image Manipulation Program <3

    Skin Retouching with Wavelet Decompose

    This step is not always needed, but who doesn’t want their skin to look a little nicer if possible?

    The ability to modify an image based on detail scales isolated on their own layers is a very powerful tool. The approach is similar to frequency separation, but has the advantage of providing multiple frequencies to modify simultaneously, at progressively larger and larger detail scales. This offers a large range of flexibility and an easier workflow vs. frequency separation (you can work on any detail scale simply by switching to a different layer).
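
    To make the idea concrete, here is a rough sketch of the difference-of-blurs decomposition such tools perform (the actual plugin uses a proper wavelet kernel rather than plain Gaussians, but the principle of splitting the image into detail scales plus a residual is the same):

        import numpy as np
        from scipy.ndimage import gaussian_filter

        def split_details(image, n_scales=5):
            """Split a float grayscale image into detail scales plus a low-frequency residual.
            Summing all the scales and the residual reconstructs the original image exactly."""
            scales = []
            current = image.astype(np.float64)
            for i in range(n_scales):
                blurred = gaussian_filter(current, sigma=2 ** i)  # progressively coarser blur
                scales.append(current - blurred)                  # details lost at this scale
                current = blurred
            return scales, current                                # 'current' is now the residual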

    I used to use the wonderful Wavelet Decompose plugin from marcor on the GIMP plugin registry. I have since switched to using the same result from G’MIC once David Tschumperlé added it in for me. It can be found in G’MIC under:

    Details → Split details [wavelets]

    Running Split details [wavelets] against the image to produce 5 wavelet scales and a residual layer yields (cropped):

    Wavelet scales example decompose

    The plugin (or script) will produce 5 layers of isolated details plus a residual layer of low-frequency color information, seen here in ascending size of detail scales. On the finest scales (1 & 2) the details are hard to discern, as they are quite fine.

    To help visualizing what the different scale levels look like here is a view of the same levels above, normalized:

    Wavelet scales normalized

    The normalized view shows clearly the various types of detail scales on each layer.

    There are various types of changes that can be made to the final image from these details scales. In this image, we are going to focus on evening out the skin tones overall. The scales with the biggest impact on even skin tones for this image are 4 and 5.

    A good workflow when smoothing overall skin tones and using wavelet scales is to work on smoothing from the largest detail scales and working down to finer scales. Usually, a nice amount of pleasing tonal smoothing can be accomplished in the first couple of coarse detail scales.

    Skin Retouching Zones

    Different portions of a face will often require different levels of smoothing. Below is a rough map of facial contours to consider when retouching. Not all faces will require the exact same regions, but it is a good starting point to consider when approaching a new image.

    Skin retouching by zones

    The selections are made with the Free Select Tool with the “Feather edges” option on and set to roughly 30px.


    A good starting point to consider is the forehead on the largest detail scale (5). The basic workflow is to select a region of interest and a layer of detail, then to suppress the features on that detail level. The method of suppressing features is a matter of personal taste but is usually done across the entire selection using a blur filter of some sort.

    A good first choice would be to use a Gaussian blur (or Selective Gaussian Blur) to smooth the selection. A better choice, if G’MIC is installed, is to use a bilateral blur for its edge-preserving properties. The rest of these examples will use the bilateral blur for smoothing.

    Considering the forehead region:

    Skin retouching wavelet scales forehead

    The first image is the original. The second image is after running a bilateral blur (in G’MIC: Smooth [bilateral]), with the default parameter values:

    • Spatial variance: 10
    • Value variance: 7
    • Iterations: 2

    These values were chosen from experience using this filter for the same purpose across many, many images. The results of running a single blur on the largest wavelet scale are immediately obvious. The unevenness of the skin and tones overall is smoothed in a pleasing way, while still retaining the finer details that allow the eye to see a realistic skin texture.

    The last image is the result of working on the next detail scale layer down (Wavelet scale 4), with much softer blur parameters:

    • Spatial variance: 5
    • Value variance: 2
    • Iterations: 1

    This pass does a good job of finishing off the skin tones globally. The overall impression of the skin is much smoother than the original, but crucial fine details are all left intact (wrinkles, pores) to keep it looking realistic.

    This same process is repeated for each of the facial regions described. In some cases the results of running the first bilateral blur on the largest scale level are enough to even out the tones (the cheeks and upper lip, for example). The chin got the same treatment as the forehead. The process is entirely subjective, and the parameters will vary from person to person. Experimentation is encouraged here.
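
    For anyone who wants to try the same operation outside of GIMP, here is a rough Python sketch using OpenCV’s bilateral filter as a stand-in for G’MIC’s Smooth [bilateral]. The parameters are named and scaled differently in OpenCV, the detail layer is a random stand-in, and the sketch filters the whole layer rather than a feathered selection:

    import cv2
    import numpy as np

    # "coarse" stands in for the coarsest detail layer (Wavelet scale 5).
    # Detail scales are signed, so shift them around mid-grey for an 8-bit filter.
    coarse = np.random.rand(256, 256, 3) - 0.5
    coarse_u8 = np.clip(coarse * 255 + 128, 0, 255).astype(np.uint8)

    # cv2.bilateralFilter(src, d, sigmaColor, sigmaSpace); d <= 0 means the
    # neighbourhood size is derived from sigmaSpace. Run twice, like "Iterations: 2".
    smoothed = coarse_u8
    for _ in range(2):
        smoothed = cv2.bilateralFilter(smoothed, 0, 7, 10)

    # Back to the signed detail-scale representation before recombining.
    coarse_smoothed = (smoothed.astype(np.float64) - 128) / 255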

    More importantly, the key word to consider while working on skin tones is moderation. It is also important to check your results zoomed out, as this will give you an impression of the image as seen when scaled to something more web-sized. A good rule of thumb might be:

    “If it looks good to you, go back and reduce the effect more”.

    The original vs. results after wavelet smoothing:

    Mairi Face Wavelet Smoothed.

    When the work is finished on the wavelet scales, a new layer from all of the visible layers can be created to continue touching up spot areas that may need it.

    Layer → New from Visible

    Spot Touchups

    The use of wavelets is good for large-scale selection area smoothing, but a different set of tools is required for spot touchups where needed. For example, there is a stray hair that runs across the model’s forehead that can be removed using the Heal tool.

    For best results when using the Heal tool, use a hard-edged brush. Soft edges can sometimes lead to an undesirable slight smearing in the feathered edge of the brush. Due to the nature of the heal algorithm’s sampling, it is also advisable to avoid trying to heal across hard/contrasty edges.

    This is also a good tool to use for small blemishes that might have been tedious to repair across all of the wavelet scales from the previous section. It is also a good time to repair hot-spots, fly-away hairs, or other small details.

    Sweater Enhancement

    The model is wearing a nicely textured sweater, but the details and texture are slightly muted. A small increase in contrast and local details will help to bring some enhancement to the textures and tones. One method of enhancing local details would be to use Unsharp Mask with a high radius and low amount (HiRaLoAm is an acronym some might use for this).

    Create a duplicate of the “Spot Healing” layer that was worked on in the previous step, and apply an Unsharp Mask to the layer using HiRaLoAm values.

    For example, a good starting point for parameters might be:

    • Radius: 200
    • Amount: 0.25

    With these parameters the sharpen function will instead tend to increase local contrast more, providing more “presence” or “pop” to the sweater texture.
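
    The underlying operation is simple enough to sketch in a few lines of Python: an unsharp mask is just the image plus a fraction of the difference between the image and a blurred copy, so a large radius with a small amount boosts local contrast rather than edge sharpness. This is a generic sketch, not GIMP’s exact implementation, and the mapping of Radius onto sigma is only approximate; the layer is a random stand-in:

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def hiraloam(img, radius=200.0, amount=0.25):
        """High-radius, low-amount unsharp mask: boosts local contrast."""
        # A very wide blur captures only the large-scale tonal structure
        # (radius / 3 is a rough stand-in for GIMP's radius setting).
        low = gaussian_filter(img, sigma=(radius / 3.0, radius / 3.0, 0))
        # Adding back a small fraction of (img - low) lifts local contrast
        # rather than sharpening fine edges.
        return np.clip(img + amount * (img - low), 0.0, 1.0)

    # Stand-in for the duplicated "Spot Healing" layer as a float RGB array.
    layer = np.random.rand(256, 256, 3)
    sweater_pop = hiraloam(layer, radius=200.0, amount=0.25)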

    Background Rebuild

    The background of the image is a little too uniformly dark and could benefit from some lightening and variation. A nice lighter background gradient will enhance the subject a little.

    Normally this could be obtained through the use of a second strobe (probably gridded or with a snoot) firing at the background. In our case we will have to fake the same result through some masking.

    First, a crop is chosen to focus the composition a little more strongly on the subject. I placed the center of the model’s face along the right-side golden-section vertical and tried to place things near the center of the frame:

    Mairi cropped

    The slightly centered crop is to emulate the type of crop that might be expected from a classical painting (thereby strengthening the overall theme of the portrait further).

    Subject Isolation

    There are a few different methods to approach the background modification. The method I describe here is simply one of them.

    The image at this point is duplicated, and the duplicate has its levels raised to brighten it considerably. In this way, a simple layer mask can control where the brightening occurs in the image.

    Mairi isolation
    Mairi isolation layers

    This is what will give our background a gradient of light. To get our subject back to dark will require masking the subject on a layer mask again. A quick way to get a mask to work from is to add a layer mask to the “Over” layer, letting the background show through, but turning the subject opaque.

    Add a layer mask to the “Over” layer as a “Grayscale copy of layer”, and check the “Invert mask” option:

    Mairi isolation add layer mask

    With an initial mask in place, a quick use of the tool:

    Colors → Threshold

    will allow you to modify the mask to define the shoulder of the model as a good transition. The mask will be quite narrow. Adjust the threshold until the lighter background is speckle-free and there is a good definition of the edge of the sweater against the background.

    Mairi threshold

    Once the initial mask is in place it can be cleaned up further by making the subject entirely opaque (white on the mask), and the background fully transparent (black on the mask). This can be done with paint tools easily. For not much work a decent mask and result can be had:

    Mairi isolation final

    This provides a nice contrast: the background is lighter behind the darker portions of the model, and darker behind the lighter subject’s face.
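
    Conceptually, the whole isolation step is a per-pixel blend between the dark original and the brightened copy, driven by the mask. Here is a loose numpy sketch of that compositing, assuming the dark original sits above the brightened copy with the mask described above (white over the subject, black over the background); the threshold value and the layer contents are made up, and they stand in for the Threshold tool plus the manual cleanup with paint tools:

    import numpy as np

    def isolate_subject(original, brightened, threshold=0.35):
        """Blend the dark original over a brightened copy via a luminance mask."""
        # Grayscale copy of the layer, as used for the initial mask.
        gray = original.mean(axis=2, keepdims=True)
        # White (1) on the mask keeps the dark subject opaque; black (0)
        # lets the brightened background show through.
        mask = (gray > threshold).astype(np.float64)
        return original * mask + brightened * (1.0 - mask)

    # Stand-ins for the two layers as float RGB arrays in [0, 1].
    original = np.random.rand(256, 256, 3) * 0.6
    brightened = np.clip(original * 2.0, 0.0, 1.0)
    result = isolate_subject(original, brightened)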

    Lighten Face Highlights

    Speaking of the subject’s face, there’s a nice simple method for applying a small accent on the highlighted portions of the model’s face in order to draw more attention to her.

    Duplicate the lightened layer that was used to create the background gradient, move it to the top of the layer stack, and remove the layer mask from it.

    Mairi Lighten Face Layers

    Set the layer mode of the copied layer to “Lighten only”.

    As before, add a new layer mask to it, “Grayscale copy of layer”, but don’t check the “Invert mask” option. This time use the Levels tool:

    Colors → Levels

    to raise the blacks of the mask up to about mid-way or more. This will isolate the lightening mask to the brightest tones in the image, which happen to correspond to the model’s face. You should see your adjustments modify the mask on-canvas in real-time. When you are happy with the highlights, apply.

    Mairi Lighten Highlights

    Last Sharpening Pass + Grain

    Finally, I like to apply a last pass of sharpening to the image, and to overlay some grain from a grain field I keep around, to help add some structure to the image as well as mask any gradient issues from rebuilding the background. For this particular image the grain step isn’t really needed, as there’s already sufficient luma noise to provide its own structure.

    Usually, I will use the smallest of the wavelet scales from the prior steps and sometimes the next largest scale as well (Wavelet scale 1 & 2). I’ll leave Wavelet scale 1 at 100% opacity, and scale 2 usually around 50% opacity (to taste, of course).

    Mairi Final

    Minor touchups that could still be done might include darkening the chair in the bottom right corner, darkening the gradient in the bottom left corner, and possibly adding a slight white overlay to the eyes to subtly give them a small pop.

    As it stands now I think the image is a decent representation of a chiaroscuro portrait that mimics the style of a classical composition and interplay between light and shadows across the subject.

    July 25, 2016

    I hate deals

    One of my favourite tech-writers, Paul Miller from The Verge, has articulated something I've always felt, but have never been able to express well: I hate deals.

    From Why I'm a Prime Day Grinch: I hate deals by Paul Miller:

    Deals aren't about you. They're about improving profits for the store, and the businesses who distribute products through that store. Amazon's Prime Day isn't about giving back to the community. It's about unloading stale inventory and making a killing.

    But what about when you decide you really do want / need something, and it just happens to be on sale? Well, lucky you. I guess I've grown too bitter and skeptical. I just assume automatically that if something's on sale AND I want to buy it, I must've messed up in my decision making process somewhere along the way.

    I also hate parties and fun.

    July 24, 2016

    Preparation to release of version 0.15.0

    Greetings all!

    We plan to release Stellarium 0.15.0 at the end of next week (31 July).

    This is another major release, which has many changes in the code and a few new sky cultures. If you can assist with translation to any of the 136 languages which Stellarium supports, please go to Launchpad Translations and help us out: https://translations.launchpad.net/stellarium

    Thank you!

    July 19, 2016

    GUADEC Flatpak contest

    I will be presenting a lightning talk during this year's GUADEC, and running a contest related to what I will be presenting.


    To enter the contest, you will need to create a Flatpak for a piece of software that hasn't been flatpak'ed up to now (application, runtime or extension), hosted in a public repository.

    You will have to send me an email about the location of that repository.

    I will choose a winner amongst the participants, on the eve of the lightning talks, depending on, but not limited to, the difficulty of packaging, the popularity of the software packaged and its redistributability potential.

    You can find plenty of examples (and a list of already packaged applications and runtimes) on this Wiki page.


    The prize will be a piece of hardware that you can use to replicate my presentation (or to replicate my attempts at a presentation, depending ;). You will need to be present during my presentation at GUADEC to claim your prize.

    Good luck to one and all!

    July 18, 2016

    Breeze everywhere

    The first half of this year, I had the chance to work on icons and design for two big free-software projects.

    First, I’ve been hired to work on Mageia. I had to refresh the look for Mageia 6, which mostly meant making new icons for the Mageia Control Center and all the internal tools.


    I proposed to replace the oxygen-like icons with some breeze-like icons.
    This way it integrates much better with modern desktops, and of course it looks especially good with plasma.


    The result is around 1/3 of the icons directly imported from breeze, 1/3 modified versions, and 1/3 created from scratch. I tried to follow the breeze guidelines as much as possible, but had to adapt some rules to the context.


    I also made a wallpaper to go with it, which will be in the extra wallpaper package and so not used by default:

    available in different sizes on this link.

    And another funny wallpaper for people that are both mageia users and Pepper & Carrot fans:

    available in different sizes on this link
    (but I’m not sure yet if this one will be packaged at all…)

    Note that we still have some visual issues with the applets.
    It seems to be a problem with how gtkcreate_pixbuf is used. But more importantly, those applets don’t even react to clicks in plasma (while this seems to work fine in all other desktops).
    No one seems to have an easy fix or workaround yet, so if someone has an idea to help…

    Soon after I finished my work on Mageia, I was hired to work on fusiondirectory.
    I had to create a new theme for the web interface, and again I proposed to base it on breeze, similar to what I did for Mageia but in yet another context. I also modified the CSS to look like the breeze-light interface theme. The resulting theme is called breezy, and has been used by default since the last release.


    I had a lot of positive feedback on this new theme, people seem to really like it.

    Before finishing, a special side note for the breeze team: thank you so much for all the great work! It has been a pleasure to start from it. Feel free to look at the mageia and fusiondirectory git repositories to see if there are icons that could be interesting to push upstream to the breeze icon set.

    July 15, 2016

    Fri 2016/Jul/15

    • Update from La Mapería

      La Mapería is working reasonably well for now. Here are some example maps for your perusal. All of these images link to a rather large PDF that you can print on a medium-format plotter — all of these are printable on a 61 cm wide roll of paper (or one that can put out US Arch D sheets).

      Valladolid, Yucatán, México, 1:10,000

      Ciudad de México
      Centro de la Ciudad de México, 1:10,000

      Ajusco y Sur de la Ciudad de México, 1:50,000

      Victoria, BC
      Victoria, British Columbia, Canada, 1:50,000

      Boston, Massachusetts, USA, 1:10,000

      Walnut Creek
      Walnut Creek, California, USA, 1:50,000

      Butano State Park
      Butano State Park and Pescadero, California, USA, 1:20,000

      Provo, Utah, USA, 1:50,000

      Nürnberg, Germany, 1:10,000

      Karlsruhe, Germany, 1:10,000

      That last one, for Karlsruhe, is where GUADEC will happen this year, so enjoy!

      Next steps

      La Mapería exists right now as a Python program that downloads raster tiles from Mapbox Studio. This is great in that I don't have to worry about setting up an OpenStreetMap stack, and I can just worry about the map stylesheet itself (this is the important part!) and a little code to render the map's scale and frame with arc-minute markings.

      I would prefer to have a client-side renderer, though. Vector tiles are the hot new thing; in theory I should be able to download vector tiles and render them with Memphis, a Cairo-based renderer. I haven't investigated how to move my Mapbox Studio stylesheet to something that Memphis can use (... or that any other map renderer can use, for that matter).

      Also, right now making each map with La Mapería involves extracting geographical coordinates by hand, and rendering the map several times while tweaking it to obtain just the right area I want. I'd prefer a graphical version where one can just mouse around.

      Finally, the map style itself needs improvements. It works reasonably well for 1:10,000 and 1:50,000 right now; 1:20,000 is a bit broken but easy to fix. It needs tweaks to map elements that are not very common, like tunnels. I want to make it work for 1:100,000 for full-day or multi-day bike trips, and possibly even smaller scales for motorists and just for general completeness.

      So far two of my friends in Mexico have provided pull requests for La Mapería — to fix my not-quite-Pythonic code, and to make the program easier to use the first time. Thanks to them! Contributions are appreciated.

    July 13, 2016

    Too much of a good thing

    So the last couple of months, after our return from Italy, were nicely busy. At the day job, we were getting ready to create an image to send to the production facility for the QML-based embedded application we had been developing, and besides, there were four reorganizations in one month, ending with the teams being reshuffled in the last week before said image had to be ready. It was enough pressure that I decided to take last week off from the day job, just to decompress a bit and focus on Krita stuff that was heaping up.

    Then, since April, Krita-wise, there was the Kickstarter, the kick-off for the artbook, the Krita 3.0 release... The 3.0 release doubled the flow of bugs, donations, comments, mails to the foundation, questions on irc, reddit, forum and everywhere else. (There's this guy who has sent me over fifty mails asking for Krita to be released for Windows XP, OSX 10.5 and Ubuntu 12.02, for example). And Google Summer of Code kicked off, with three students working on Krita.

    And, of course, daily life didn't stop, though more and more non-work, non-krita things got postponed or cut out. There were moments when I really wanted to cancel our bi-weekly RPG session just to have another Monday evening free for Krita-related work.

    I don't mind being busy, and I like being productive, and I especially like shipping: at too many day jobs we never shipped, which was extremely frustrating.

    But then last Wednesday evening, a week ago, I suddenly felt queer after dinner, just before we'd start the RPG session. A pressing, heavy pain in my chest, painful upper arms, sweating, nausea, dizziness... I spent the next day in hospital getting checked for heart problems. The conclusion was that it wasn't a heart attack, just all the symptoms of one. No damage done, in any case, that the tests could find, and I am assured they are very accurate.

    Still, I'm tired and slow and have a hard time focusing, so I didn't have time to prepare Krita 3.0.1. I didn't manage to finish the video-export refactoring (that will also make it possible to pass file export configurations to Krita on the command line). I also didn't get through all the new bugs, though I managed to fix over a dozen. The final bugs in the spriter export plugin are also waiting to be squashed. Setting up builds for the master branch for three operating systems and two architectures was another thing I had to postpone to later. And there are now so many donations waiting for a personal thank-you mail that I have decided to just stop sending them. One thing I couldn't postpone or drop was creating a new WBSO application for an income tax rebate for the hours spent on the research for Krita's scripting plugin.

    I'm going forward with a bit of a reduced todo list, so, in short, if you're waiting for me to do something for you, be aware that you might have to wait a bit longer or that I won't be able to do it. If you want your Krita bug fixed with priority, don't tell me to fix it NOW, because any kind of pressure will be answered with a firm nolle prosequi.

    July 12, 2016

    HD Photo Slideshow with Blender

    HD Photo Slideshow with Blender

    Because who doesn't love a challenge?

    While I was out at Texas Linux Fest this past weekend I got to watch a fun presentation from the one and only Brian Beck. He walked through an introduction to Blender, including an overview of creating his great The Lady in the Roses image that was a part of the 2015 Libre Calendar project.

    Coincidentally, during my trip home community member @Fotonut asked about software to create an HD slideshow with images. The first answer that jumped into my mind was to consider using Blender (a very close second was OpenShot because I had just spent some time talking with Jon Thomas about it).

    Brian Beck Roses The Lady in the Roses by Brian Beck cba

    I figured that with this much Blender being talked about, it deserved at least a post to answer @Fotonut‘s question in greater detail. I know that many community members likely abuse Blender in various ways as well – so please let me know if I get something way off!

    Enter Blender

    The reason that Blender was the first thing that popped into many folks’ minds when the question was posed is likely because it has been a go-to swiss-army knife of image and video creation for a long, long time. For some it was the only viable video editing application for heavy use (not that there weren’t other projects out there as well). This is partly due to the fact that it integrates so much capability into a single project.

    The part that we’re interested in for the context of Fotonut’s original question is the Video Sequence Editor (VSE). This is a very powerful (though often neglected) part of Blender that lets you arrange audio and video (and image!) assets along a timeline for rendering and some simple effects. Which is actually perfect for creating a simple HD slideshow of images, as we’ll see.

    The Plan

    Blender’s interface is likely to take some getting used to for newcomers (right-click!) but we’ll be focusing on a very small subset of the overall program—so hopefully nobody gets lost. The overall plan will be:

    1. Setup the environment for video sequence editing
    2. Include assets (images) and how to manipulate them on the timeline
    3. Add effects such as cross-fades between images
    4. Setup exporting options

    There’s also an option of using a very helpful add-on for automatically resizing images to the correct size to maintain their aspect ratios. Luckily, Blender’s add-on system makes it trivially easy to set up.


    On opening Blender for the first time we’re presented with the comforting view of the default cube in 3D space. Don’t get too cozy, though. We’re about to switch up to a different screen layout that’s already been created for us by default for Video Editing.

    Blender default main window The main blender default view.

    The developers were nice enough to include various default “Screen Layout” options for different tasks, and one of them happens to be for Video Editing. We can click on the screen layout option on the top menu bar and choose the one we want from the list (Video Editing):

    Blender screen layout options Choosing a new Screen Layout option.

    Our screen will then change to the new layout where the top left pane is the F-curve window, the top right is the video preview, the large center section is the sequencer, and the very bottom is a timeline. Blender will let you arrange, combine, and collapse all the various panes into just about any layout that you might want, including changing what each of them is showing. For our example we will mostly leave it all as-is with the exception of the F-curve pane, which we won’t be using and don’t need.

    Blender video editing layout The Video Editing default layout.

    What we can do now is to define what the resolution and framerate of our project should be. This is done in the Properties pane, which isn’t shown right now. So we will change the F-Curve pane into the Properties pane by clicking on the button shown in red above to change the panel type. We want to choose Properties from the options in the list:

    Blender change pane to properties

    Which will turn the old F-Curve pane into the Properties pane:

    Blender properties

    You’ll want to set the appropriate X and Y resolution for your intended output (don’t forget to set the scaling from the default 50% to 100% now as well) as well as your intended framerate. Common rates might be 23.976 (23.98), 25, 30, or even 60 frames per second. If your intended target is something like YouTube or an HD television you can probably safely use 30 or 60 (just remember that a higher frame rate means a longer render time!).

    For our example I’m going to set the output resolution to 1920 × 1080 at 30fps.
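
    If you prefer to script the setup, the same settings are reachable from Blender’s Python console through the standard bpy properties; a minimal sketch with the values used in this example:

    import bpy

    scene = bpy.context.scene

    # Output resolution, and scaling back up from the 50% default.
    scene.render.resolution_x = 1920
    scene.render.resolution_y = 1080
    scene.render.resolution_percentage = 100

    # Frame rate: 30 frames per second.
    scene.render.fps = 30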

    One Extra Thing

    Blender does need a little bit of help when it comes to using images on the sequence editor. It has a habit of scaling images to whatever the output resolution is set to (ignoring the original aspect ratios). This can be fixed by simply applying a transform to the images but normally requires us to manually compute and enter the correct scaling factors to get the images back to their original aspect ratios.

    I did find a nice small add-on on this thread at blenderartists.org that binds some handy shortcuts onto the VSE for us. The author kgeogeo has the add-on hosted on Github, and you can download the Python file directly from here: VSE Transform Tool (you can Right-Click and save the link). Save the .py file somewhere easy to find.

    To load the add-on manually we’re going to change the Properties panel to User Preferences:

    Blender change to preferences

    Click on the Add-ons tab to open that window and at the bottom of the panel is an option to “Install from File…”. Click that and navigate to the VSE_Transform_Tool.py file that you downloaded previously.

    Blender add-ons

    Once loaded, you’ll still need to Activate the plugin by clicking on the box:

    Blender adding add-ons

    That’s it! You’re now all set up to begin adding images and creating a slideshow. You can set the User Preferences pane back to Properties if you want to.

    Adding Images

    Let’s have a look at adding images onto the sequencer.

    You can add images by either choosing Add → Image from the VSE menu and navigating to your images location, choosing them:

    Blender VSE add image

    Or by drag-and-dropping your images onto the sequencer timeline from Nautilus, Finder, Explorer, etc…

    When you do, you’ll find that a strip now appears on the VSE window (purple in my case) that represents your image. You should also see a preview of your video in the top-right preview window (sorry for the subject).

    Blender VSE add image

    At this point we can use the handy add-on we installed previously by Right-Clicking on the purple strip to make sure it’s activated and then hitting the “T” key on the keyboard. This will automatically add a transform to the image that scales it to the correct aspect ratio for you. A small green Transform strip will appear above your purple image strip now:

    Blender VSE add transform strip

    Your image should now also be scaled to fit at the correct aspect ratio.

    Adjusting the Image

    If you scroll your mouse wheel in the VSE window, you will zoom in and out of the editor along the time axis (the x-axis in the sequencer window). You’ll notice that the time compresses or expands as you scroll the mouse wheel.

    The middle-mouse button will let you pan around the sequencer.

    The right-mouse button will select things. You can try this now by extending how long your image is displayed in the video. Right-Click on the small arrow on the end of the purple strip to activate it. A small number will appear above it indicating which frame it is currently on (26 in my example):

    Blender VSE

    With the right handle active you can now either press “G” on the keyboard and drag the mouse to re-position the end of the strip, or Right-Click and drag to do the same thing. The timeline in seconds is shown along the bottom of the window for reference. If we wanted to let the image be visible for 5 seconds total, we could drag the end to the 5+00 mark on the sequencer window.

    Since I set the framerate to 30 frames per second, I can also drag the end to frame 150 (30fps * 5s = 150 frames).

    Blender VSE five seconds

    When you drag the image strip, the transform strip will automatically adjust to fit (so you don’t have to worry about it).

    If you had selected the center of the image strip instead of the handle on one end and tried to move it, you would find that you can move the entire strip around instead of one end. This is how you can re-position image strips, which you may want to do when you add a second image to your sequencer.

    Add a new image to your sequencer now following the same steps as above.

    When I do, it adds a new strip back at the beginning of the timeline (basically where the current time is set):

    Blender VSE second image

    I want to move this new strip so that it overlaps my first image by about half a second (or 15 frames). Then I will pull the right handle to resize the display time to about 5 seconds also.

    Click on the new strip (center, not the ends), and press the “G” key to move it. Drag it right until the left side overlaps the previous image strip by a little bit:

    Blender VSE drag strip

    When you click on the strip’s right handle to modify its length, notice the window on the far right of the VSE. The Edit Strip window should also show the strip “Length” parameter in case you want to change it by manually inputting a value (like 150):

    Blender VSE adjust strip

    I forgot to use the add-on to automatically fix the aspect ratio. With the strip selected I can press “T” at any time to invoke the add-on and fix the aspect ratio.

    Adding a Transition Effect

    With the two image strips slightly overlapping, we now want to define a simple cross fade between the two images as a transition effect. This is actually something already built into the Blender VSE for us, and is easy to add. We do need to be careful to select the right things to get the transition working correctly, though.

    Once you’ve added a transform effect to a strip, you’ll need to make sure that subsequent operations use the transform strip as opposed to the original image strip.

    For instance, to add a cross fade transition between these two images, click the first image strip transform (green), then Shift-Click on the second image transform strip (green). Now they are both selected, so add a Gamma Cross by using the Add menu in the VSE (Add → Effect Strip… → Gamma Cross):

    Blender VSE add gamma cross

    This will add a Gamma Cross effect as a new strip that is locked to the two images’ overlap. It will do a cross-fade between the two images for the duration of the overlap. You can Left-Click now and scrub over the cross-fade strip to see it rendered in the preview window if you’d like:

    Blender Gamma Cross

    At any time you can also use the hotkey “Alt-A” to view a render preview. This may run slow if your machine is not super-fast, but it should run enough to give you a general sense of what you’ll get.

    If you want to modify the transition effect by changing its length, you can just increase the overlap between the strips as desired (using the original image strip — if you try to drag the transform strip you’ll find it locked to the original image strip and won’t move).

    Repeat Repeat

    You can basically follow these same steps for as many images as you’d like to include.
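
    Because the steps are so repetitive, this part is also easy to script. The sketch below is a rough bpy version of the manual workflow (about 5 seconds per image with 15-frame Gamma Cross overlaps); it skips the aspect-ratio transform that the add-on handles, and the folder path is only a placeholder:

    import os
    import bpy

    IMAGE_DIR = "/path/to/slides"   # placeholder
    SECONDS_PER_IMAGE = 5
    OVERLAP = 15                    # frames shared by each cross-fade

    scene = bpy.context.scene
    scene.sequence_editor_create()
    seqs = scene.sequence_editor.sequences

    length = SECONDS_PER_IMAGE * scene.render.fps
    frame = 1
    previous = None

    images = sorted(f for f in os.listdir(IMAGE_DIR)
                    if f.lower().endswith((".jpg", ".png")))
    for i, name in enumerate(images):
        # Alternate channels so the overlapping strips don't collide.
        strip = seqs.new_image(name=name,
                               filepath=os.path.join(IMAGE_DIR, name),
                               channel=1 + (i % 2),
                               frame_start=frame)
        strip.frame_final_duration = length

        if previous is not None:
            # Gamma Cross over the region where the two strips overlap.
            seqs.new_effect(name="cross%d" % i, type='GAMMA_CROSS',
                            channel=3, frame_start=frame,
                            frame_end=frame + OVERLAP,
                            seq1=previous, seq2=strip)

        previous = strip
        frame += length - OVERLAP

    # Make the render range cover the whole sequence.
    scene.frame_start = 1
    scene.frame_end = frame + OVERLAP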


    To generate your output you’ll still need to change a couple of things to get what you want…

    Render Length

    You may notice on the VSE that there are vertical lines outside of which things will appear slightly grayed out. This is a visual indicator of the total start/end of the output. This is controlled via the Start and End frame settings on the timeline (bottom pane):

    Blender VSE start and end

    You’ll need to set the End value to match your last output frame from your video sequence. You can find this value by selecting the last strip in your sequence and pressing the “G” key: the start/end frame numbers of that last strip will be visible (you’ll want the last frame value, of course).

    Blender VSE end frame Current last frame of my video is 284

    In my example above, my anticipated last frame should be 284, but the last render frame is currently set to 250. I would need to update that End frame to match my video to get output as expected.

    Render Format

    Back on the Properties panel (assuming you set the top-left panel back to Properties earlier—if not do so now), if we scroll down a bit we should see a section dedicated to Output.

    Blender Properties Output Options

    You can change the various output options here to do frame-by-frame dumps or to encode everything into a video container of some sort. You can set the output directory to be something different if you don’t want it rendered into /tmp here.

    For my example I will encode the video with H.264:

    Blender output h264

    By choosing this option, Blender will then expose a new section of the Properties panel for setting the Encoding options:

    Blender output encoding options

    I will often use the H264 preset and will enable the Lossless Output checkbox option. If I don’t have the disk space to spare I can also set different options to shrink the resulting filesize down further. The Bitrate option will have the largest effect on final file size and image quality.

    When everything is ready (or you just want to test it out), you can render your output by scrolling back to the top of the Properties window and pressing the Animation button, or by hitting Ctrl-F12.

    Blender Render Button
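
    The output settings can be scripted the same way. A rough equivalent of the choices above, using the FFmpeg H.264 encoder (the property names come from the standard bpy API and may shift slightly between Blender releases; the output path is a placeholder):

    import bpy

    scene = bpy.context.scene

    scene.render.filepath = "/tmp/slideshow"            # output path prefix
    scene.render.image_settings.file_format = 'FFMPEG'
    scene.render.ffmpeg.format = 'MPEG4'                # .mp4 container
    scene.render.ffmpeg.codec = 'H264'

    # Render the whole animation (same as the Animation button / Ctrl-F12).
    bpy.ops.render.render(animation=True)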

    The Results

    After adding portraits of all of the GIMP team from LGM London and adding gamma cross fade transitions, here are my results:

    In Summary

    This may seem overly complicated, but in reality much of what I covered here is the setup to get started and the settings for output. Once you’ve done this successfully it becomes pretty quick to use. One thing you can do is set up the environment the way you like it and then save the .blend file to use as a template for further work like this in the future. The next time you need to generate a slideshow you’ll have everything all ready to go and will only need to start adding images to the editor.

    While looking for information on some VSE shortcuts I did run across a really interesting looking set of functions that I want to try out: the Blender Velvets. I’m going to go off and give it a good look when I get a chance as there’s quite a few interesting additions available.

    For Blender users: did I miss anything?

    July 10, 2016

    How GNOME Software uses libflatpak

    It seems people are interested in adding support for flatpaks into other software centers, and I thought it might be useful to explain how I did this in gnome-software. I’m lucky enough to have a plugin architecture to make all the flatpak code be self contained in one file, but that’s certainly not a requirement.

    Flatpak generates AppStream metadata when you build desktop applications. This means it’s possible to use appstream-glib and a few tricks to just load all the enabled remotes into an existing system store. This makes searching the new applications using the (optionally stemmed) token cache trivial. Once per day gnome-software checks the age of the AppStream cache, and if required downloads a new copy using flatpak_installation_update_appstream_sync(). As if by magic, appstream-glib notices the file modification/creation and updates the internal AsStore with the new applications.

    When listing the installed applications, a simple call to flatpak_installation_list_installed_refs() returns us the list we need, on which we can easily set other flatpak-specific data like the runtime. This is matched against the AppStream data, which gives us a localized and beautiful application to display in the listview.

    At this point we also call flatpak_installation_list_installed_refs_for_update() and then do flatpak_installation_update() with the NO_DEPLOY flag set. This just downloads the data we need, and can be cancelled without anything bad happening. When populating the updates panel I can just call flatpak_installation_list_installed_refs() again to find installed applications that have downloaded updates ready to apply without network access.
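
    For anyone wiring this into another software center, the same calls are reachable from any introspected language. Here is a minimal sketch via PyGObject; it is not the actual gnome-software plugin code, and error handling and progress callbacks are left out:

    import gi
    gi.require_version("Flatpak", "1.0")
    from gi.repository import Flatpak

    installation = Flatpak.Installation.new_system(None)

    # Everything currently installed, for the installed panel.
    for ref in installation.list_installed_refs(None):
        print(ref.get_name(), ref.get_branch())

    # Refs with pending updates: download the data now with NO_DEPLOY so the
    # real update can later be applied instantly, even without network access.
    for ref in installation.list_installed_refs_for_update(None):
        installation.update(Flatpak.UpdateFlags.NO_DEPLOY,
                            ref.get_kind(), ref.get_name(),
                            ref.get_arch(), ref.get_branch(),
                            None, None, None)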

    For the sources list I’m calling flatpak_installation_list_remotes() then ignoring any set as disabled or noenumerate. Most remotes have a name and title, and this makes the UI feature complete. When collecting information to show in the ui like the size we have the metadata already, but we also add the size of the runtime if it’s not already installed. This is the same idea as flatpak_installation_install(), where we also install any required runtime when installing the main application. There is a slight impedance mismatch between the flatpak many-installed-versions and the AppStream only-one-version model, but it seems to work well enough in the current code. Flatpak splits the deployment into a runtime containing common libraries that can be shared between apps (for instance, GNOME 3.20 or KDE5) and the application itself, so the software center always needs to install the runtime for the application to launch successfully. This is something that is not enforced by the CLI tool. Rather than installing everything for each app, we can also install other so-called extensions. These are typically non-essential like the various translations and any debug information, but are not strictly limited to those things. libflatpak automatically keeps the extensions up to date when updating, so gnome-software doesn’t have to do anything special at all.

    Updating single applications is trivial with flatpak_installation_update() and launching applications is just as easy with flatpak_installation_launch(), although we only support launching the newest installed version of an application at the moment. Reading local bundles works well with flatpak_bundle_ref_new(), although we do have to load the gzipped AppStream metadata and the icon ourselves. Reading a .flatpakrepo file is slightly more work, but the data is in keyfile format and trivial to parse with GKeyFile.
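
    Parsing a .flatpakrepo file really is just a few lines of keyfile handling, for example with GLib via PyGObject (the group and key names below are the ones these files conventionally carry, so treat them as an assumption; the filename is a placeholder):

    from gi.repository import GLib

    keyfile = GLib.KeyFile.new()
    keyfile.load_from_file("example.flatpakrepo", GLib.KeyFileFlags.NONE)

    # Conventional group and keys in a .flatpakrepo file.
    title = keyfile.get_string("Flatpak Repo", "Title")
    url = keyfile.get_string("Flatpak Repo", "Url")
    print(title, url)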

    Overall I’ve found libflatpak to be surprisingly easy to work with, requiring none of the kludges of all the different package-based systems I’ve worked on developing PackageKit. Full marks to Alex et al.

    July 08, 2016

    Railway gauges

    Episode 3 in a series “Things that are the way they are because of constraints that no longer apply” (or: why we don’t change processes we have invested in that don’t make sense any more)

    The standard railway gauge (that is, the distance between train rails) for over half of the world’s railways (including the USA and UK) is 4′ 8.5″, or 1.435 m. A few other railway gauges are in common use, including, to my surprise, in Ireland, where the gauge is 5′ 3″, or 1.6 m. If you’re like me, you’ve wondered where these strange numbers came from.

    Your first guess might be that, similar to the QWERTY keyboard, it comes from the inventor of the first train, or the first successful commercial railway, and that there was simply no good reason to change it once the investment had been made in that first venture, in the interests of interoperability. There is some truth to this, as railways were first used in coal mines to extract coal by horse-drawn carriages, and in the English coal mines of the North East, the “standard” gauge of 4′ 8″ was used. When George Stephenson started his seminal work on the development of the first commercial railway and the invention of the Stephenson Rocket steam locomotive, his experience from the English coal mines led him to adopt this gauge of 4′ 8″. To allow for some wiggle room so that the train and carriages could more easily go around bends, he increased the gauge to 4′ 8.5″.

    But why was the standard gauge for horse-drawn carriages 4′ 8″? The first horse-drawn trams used the same gauge, and all of their tools were calibrated for that width. That’s because most wagons, built with the same tools, had that gauge at the time. But where did it come from in the first place? One popular theory, which I like even if Snopes says it’s probably false, is that the gauge was the standard width of horse-drawn carriages all the way back to Roman times. The 4′ 8.5″ gauge roughly matches the width required to comfortably accommodate a horse pulling a carriage, and has persisted well beyond the end of that constraint.



    July 07, 2016

    QWERTY keyboards

    Episode 2 in a series “Things that are the way they are because of constraints that no longer apply” (or: why we don’t change processes we have invested in that don’t make sense any more)

    American or English computer users are familiar with the QWERTY keyboard layout – which takes its name from the layout of letters on the first row of the traditional us and en_gb keyboard layouts. There are other common layouts in other countries, mostly tweaks to this format like AZERTY (in France) or QWERTZ (in Germany). There are also non-QWERTY related keyboard layouts like Dvorak, designed to allow increased typing speed, but which have never really gained widespread adoption. But where does the QWERTY layout come from?

    The layout was first introduced with the Remington no. 1 typewriter (AKA the Scholes and Glidden typewriter) in 1874. The typewriter had a set of typebars which would strike the page with a single character, and these were arranged around a circular “basket”. The page was then moved laterally by one letter-width, ready for the next keystrike. The first attempt laid out the keys in alphabetical order, in two rows, like a piano keyboard. Unfortunately, this mechanical system had some issues – if two typebars situated close together were struck in rapid succession, they would occasionally jam the mechanism. To avoid this issue, common bigrams were distributed around the circle, to minimise the risk of jams.

    The keyboard layout was directly related to the layout of typebars around the basket, since the keyboard was purely mechanical – pushing a key activated a lever system to swing out the correct typebar. As a result, the keyboard layout the company settled on, after much trial and error, had the familiar QWERTY layout we use today. At this point, too much is invested in everything from touch-type lessons and sunk costs of the population who have already learned to type for any other keyboard format to become viable, even though the original constraint which led to this format obviously no longer applies.

    Edit: A commenter pointed me to an article on The Atlantic called “The Lies You’ve Been Told About the QWERTY Keyboard” which suggests an alternate theory. The layout changed to better serve the earliest users of the new typewriter, morse code transcribing telegraph operators. A fascinating lesson in listening to your early users, for sure, but also perhaps a warning on imposing early-user requirements on later adopters?

    Cosmos Laundromat wins SIGGRAPH 2016 Computer Animation Festival Jury’s Choice Award

    A few days ago we wrote about three Blender-made films being selected for the SIGGRAPH 43rd annual Computer Animation Festival. Today we are happy to announce that Cosmos Laundromat Open Movie (by Blender Institute) has won the Jury’s Choice Award!

    Producer Ton Roosendaal says:

    SIGGRAPH always brings the best content together for the Computer Animation Festival from the most talented artists and we are honoured to be acknowledged in this way for all our hard work and dedication.


    Get ready to see more and more pictures of Victor and Frank as Cosmos Laundromat takes over SIGGRAPH 2016!

    Google Expeditions – Education in VR

    By: Mike Pan, Lead Artist at Vida Systems

    The concept of virtual reality has been around for many decades now. However, it is only in the last few years that technology has matured enough for VR to really take off. At Vida Systems, we have been at the forefront of this VR resurgence every step of the way.


    Vida Systems had the amazing opportunity to work with Google on their Expeditions project. Google Expeditions is a VR learning experience designed for classrooms. With a simple smartphone and a Cardboard viewer, students can journey to far-away places and feel completely immersed in the environment. This level of immersion not only delights the students, it actually helps learning as they are able to experience places in a much more tangible way.


    To fulfill the challenge of creating stunning visuals, we rely on Blender and the Cycles rendering engine. First, each topic is carefully researched. Then the 3D artists work to create a scene based on the layout set by the designer. With Cycles, it is incredibly easy to create photorealistic artwork in a short period of time. Lighting, shading and effects can all be done with realtime preview.


    With the built-in VR rendering features including stereo camera support and equirectangular panoramic camera, we can render the entire scene with one click and deliver the image without stitching or resampling, saving us valuable time.


    For VR, the image needs to be noise-free, in stereo, and high resolution. Combining all 3 factors means our rendering time for a 4K by 4K frame is 8 times longer than a traditional 1080p frame. With two consumer-grade GPUs working together (980Ti and 780), Cycles was able to crunch through most of our scenes in under 3 hours per frame.
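
    The 8× figure lines up with a simple pixel count (my arithmetic, not from the original text):

    # One 4K-by-4K panoramic frame versus one traditional 1080p frame.
    vr_pixels = 4096 * 4096
    hd_pixels = 1920 * 1080
    print(vr_pixels / hd_pixels)   # ~8.1, hence "8 times longer" per frame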

    Working in VR has some limitations. The layout has to follow real-world scales, otherwise it would look odd in 3D. It is also more demanding to create the scene, as everything has to look good from every angle. We also spent a lot of time on the details. The images had to stand up to scrutiny. Any imperfection would be readily visible due to the level of immersion offered by VR.


    For this project, we tackled a huge variety of topics, ranging from geography to anatomy. This was only possible thanks to the four spectacular artists we have: Felipe Torrents, Jonathan Sousa de Jesus, Diego Gangl and Greg Zaal.



    Our work can be seen in the Google Expeditions app available for Android.

    On blender.org we are always looking for inspiring user stories! Share yours with foundation@blender.org.

    Follow us on Twitter or Facebook to get the latest user stories!

    July 06, 2016

    GIMP at Texas LinuxFest

    I'll be at Texas LinuxFest in Austin, Texas this weekend. Friday, July 8 is the big day for open source imaging: first a morning Photo Walk led by Pat David, from 9-11, after which Pat, an active GIMP contributor and the driving force behind the PIXLS.US website and discussion forums, gives a talk on "Open Source Photography Tools". Then after lunch I'll give a GIMP tutorial. We may also have a Graphics Hackathon/Q&A session to discuss all the open-source graphics tools in the last slot of the day, but that part is still tentative. I'm hoping we can get some good discussion especially among the people who go on the photo walk.

    Lots of interesting looking talks on Saturday, too. I've never been to Texas LinuxFest before: it's a short conference, just two days, but they're packing a lot into those two days, and it looks like it'll be a lot of fun.

    July 05, 2016

    Flatpak and GNOME Software

    I wanted to write a little about how Flatpak apps are treated differently to packages in GNOME Software. We’ve now got two plugins in master, one called flatpak-user and another called flatpak-system. They both share 99% of the same code, only differing in how they are initialised. As you might expect, -user does per-user installation and updating, and the latter does it per-system for all users. Per-user applications that are specific to just a single user account are an amazingly useful concept, as most developers found using tools like jhbuild. We default to installing software at the moment for all users, but there is actually a org.gnome.software.install-bundles-system-wide dconf key that can be used to reverse this on specific systems.

    We go to great lengths to interoperate with the flatpak command line tool, so if you install the nightly GTK3 build of GIMP per-user you can install the normal version system-wide and they both show in the installed and updates panel without conflicting. We’ve also got file notifications set up so GNOME Software shows the correct application state straight away if you add a remote or install a flatpak app on the command line. At the moment we show both packages and flatpaks in the search results, but when we suggest apps on the overview page we automatically prefer the flatpak version if both are available. In Ubuntu, snappy results are sorted above package results unconditionally, but I don’t know if this is a good thing to do for flatpaks upstream, comments welcome. I’m sure whatever defaults I choose will mortally offend someone.

    Screenshot from 2016-07-05 14-45-35

    GNOME Software also supports single-file flatpak bundles like gimp.flatpak – just double click and you’re good to install. These files are somewhat like a package in that all the required files are included and you can install without internet access. These bundles can also install a remote (ie a reference to a flatpak repository) too, which allows them to be kept up to date. Such per-application remotes are only used for the specific application and not the others potentially in the same tree (for the curious, this is called a “noenumerate” remote). We also support the more rarely seen dummy.flatpakrepo files too; these allow a user to install a remote which could contain a number of applications and make it very easy to set up an add-on remote that allows you to browse a different set of apps than shipped, for instance the Endless-specific apps. Each of these files contains all the metadata we need in AppStream format, with translations, icons and all the things you expect from a modern software center. It’s a shame snappy decided not to use AppStream and AppData for application metadata, as this kind of extra data really makes the UI beautiful.

    Screenshot from 2016-07-05 14-54-18

    With the latest version of flatpak we also do a much better job of installing the additional extensions the application needs, for instance locales or debug data. Sharing the same code between the upstream command line tool and gnome-software means we always agree on what needs installing and updating. Just like the CLI, gnome-software can update flatpaks safely live (even when the application is running), although we do a little bit extra compared to the CLI and download the data we need to do the update when the session is idle and on suitable unmetered network access. This means you can typically just click the ‘Update’ button in the updates panel for a near-instant live-update. This is what people have wanted for years, and I’ve told each and every bug-report that live updates using packages only works 99.99% of the time, exploding in a huge fireball 0.01% of the time. Once all desktop apps are packaged as flatpaks we will only need to reboot for atomic offline updates for core platform updates like a new glibc or the kernel. That future is very nearly now.

    Screenshot from 2016-07-05 14-54-59

    darktable 2.0.5 released

    we're proud to announce the fifth bugfix release for the 2.0 series of darktable, 2.0.5!

    the github release is here: https://github.com/darktable-org/darktable/releases/tag/release-2.0.5.

    as always, please don't use the autogenerated tarball provided by github, but only our tar.xz. the checksums are:

    $ sha256sum darktable-2.0.5.tar.xz
    898b71b94e7ef540eb1c87c829daadc8d8d025b1705d4a9471b1b9ed91b90a02 darktable-2.0.5.tar.xz
    $ sha256sum darktable-2.0.5.dmg
    e0ae0e5e19771810a80d6851e022ad5e51fb7da75dcbb98d96ab5120b38955fd  darktable-2.0.5.dmg

    and the changelog as compared to 2.0.4 can be found below.

    New Features

    • Add geolocation to watermark variables

    Bugfixes

    • Mac: bugfix + build fix
    • Lua: fixed dt.collection not working
    • Fix softproofing with some internal profiles
    • Fix non-working libsecret pwstorage backend
    • Fixed a few issues within (rudimentary) lightroom import
    • Some fixes related to handling of duplicates and/or tags

    Base Support

    • Canon EOS 80D (no mRAW/sRAW support!)

    White Balance Presets

    • Canon EOS 80D

    Noise Profiles

    • Canon EOS 80D

    Translations Updates

    • Danish
    • German
    • Slovak

    July 04, 2016

    Texas Linux Fest 2016

    Texas Linux Fest 2016

    Everything's Bigger in Texas!

    While in London this past April I got a chance to hang out a bit with LWN.net editor and fellow countryman, Nathan Willis. (It sounds like the setup for a bad joke: “An Alabamian and Texan meet in a London pub…”). Which was awesome because even though we were both at LGM2014, we never got a chance to sit down and chat.

    So it was super-exciting for me to hear from Nate about possibly doing a photowalk and Free Software photo workshop at the 2016 Texas Linux Fest, and as soon as I cleared it with my boss, I agreed!

    Dot at LGM 2014 My Boss

    So… mosey on down to Austin, Texas on July 8-9 for Texas Linux Fest and join Akkana Peck and myself for a photowalk first thing of the morning on Friday (July 8) to be immediately followed by workshops from both of us. I’ll be talking about Free Software photography workflows and projects and Akkana will be focusing on a GIMP workshop.

    This is part of a larger “Open Graphics” track on the entire first day that also includes Ted Gould creating technical diagrams using Inkscape, Brian Beck doing a Blender tutorial, and Jonathon Thomas showing off OpenShot 2.0. You can find the full schedule on their website.

    I hope to see some of you there!

    July 03, 2016

    Midsummer Nature Notes from Traveling

    A few unusual nature observations noticed over the last few weeks ...

    First, on a trip to Washington DC a week ago (my first time there). For me, the big highlight of the trip was my first view of fireflies -- bright green ones, lighting once or twice then flying away, congregating over every park, lawn or patch of damp grass. What fun!

    Predatory grackle


    But the unusual observation was around mid-day, on the lawn near the Lincoln Memorial. A grackle caught my attention as it flashed by me -- a male common grackle, I think (at least, it was glossy black, relatively small and with only a moderately long tail).

    It turned out it was chasing a sparrow, which was dodging and trying to evade, but unsuccessfully. The grackle made contact, and the sparrow faltered, started to flutter to the ground. But the sparrow recovered and took off in another direction, the grackle still hot on its tail. The grackle made contact again, and again the sparrow recovered and kept flying. But the third hit was harder than the other two, and the sparrow went down maybe fifteen or twenty feet away from me, with the grackle on top of it.

    The grackle mantled over its prey like a hawk and looked like it was ready to begin eating. I still couldn't quite believe what I'd seen, so I stepped out toward the spot, figuring I'd scare the grackle away and I'd see if the sparrow was really dead. But the grackle had its eye on me, and before I'd taken three steps, it picked up the sparrow in its bill and flew off with it.

    I never knew grackles were predatory, much less capable of killing other birds on the wing and flying off with them. But a web search on grackles killing birds got quite a few hits about grackles killing and eating house sparrows, so apparently it's not uncommon.

    Daytime swarm of nighthawks

    Then, on a road trip to visit friends in Colorado, we had to drive carefully past the eastern slope of San Antonio Mountain as a flock of birds wheeled and dove across the road. From a distance it looked like a flock of swallows, but as we got closer we realized they were far larger. They turned out to be nighthawks -- at least fifty of them, probably considerably more. I've heard of flocks of nighthawks swarming around the bugs attracted to parking lot streetlights. And I've seen a single nighthawk, or occasionally two, hawking in the evenings from my window at home. But I've never seen a flock of nighthawks during the day like this. An amazing sight as they swoop past, just feet from the car's windshield.

    Flying ants

    [Flying ant courtesy of Jen Macke]

    Finally, the flying ants. The stuff of a bad science fiction movie! Well, maybe if the ants were 100 times larger. For now, just an interesting view of the natural world.

    Just a few days ago, Jennifer Macke wrote a fascinating article in the PEEC Blog, "Ants Take Wing!" letting everyone know that this is the time of year for ants to grow wings and fly. (Jen also showed me some winged lawn ants in the PEEC ant colony when I was there the day before the article came out.) Both males and females grow wings; they mate in the air, and then the newly impregnated females fly off, find a location, shed their wings (leaving a wing scar you can see if you have a strong enough magnifying glass) and become the queen of a new ant colony.

    And yesterday morning, as Dave and I looked out the window, we saw something swarming right below the garden. I grabbed a magnifying lens and rushed out to take a look at the ones emerging from the ground, and sure enough, they were ants. I saw only black ants. Our native harvester ants -- which I know to be common in our yard, since I've seen the telltale anthills surrounded by a large bare area where they clear out all vegetation -- have sexes of different colors (at least when they're flying): females are red, males are black. These flying ants were about the size of harvester ants but all the ants I saw were black. I retreated to the house and watched the flights with binoculars, hoping to see mating, but all the flyers I saw seemed intent on dispersing. Either these were not harvester ants, or the females come out at a different time from the males. Alas, we had an appointment and had to leave so I wasn't able to monitor them to check for red ants. But in a few days I'll be watching for ants that have lost their wings ... and if I find any, I'll try to identify queens.

    June 29, 2016

    Color Manipulation with the Colour Checker LUT Module

    Color Manipulation with the Colour Checker LUT Module

    hanatos tinkering in darktable again...

    I was lucky to get to spend some time in London with the darktable crew. Being the wonderful nerds they are, they were constantly working on something while we were there. One of the things that Johannes was working on was the colour checker module for darktable.

    Having recently acquired a Fuji camera, he was working on matching color styles from the built-in rendering on the camera. Here he presents some of the results of what he was working on.

    This was originally published on the darktable blog, and is being republished here with permission. —Pat


    for raw photography there exist great presets for nice colour rendition:

    unfortunately these are eat-it-or-die canned styles or icc lut profiles. you have to apply them and be happy or tweak them with other tools. but can we extract meaning from these presets? can we have understandable and tweakable styles like these?

    in a first attempt, i used a non-linear optimiser to control the parameters of the modules in darktable’s processing pipeline and try to match the output of such styles. while this worked reasonably well for some of pat’s film luts, it failed completely on canon’s picture styles. it was very hard to reproduce generic colour-mapping styles in darktable without parametric blending.

    that is, we require a generic colour to colour mapping function. this should be equally powerful as colour look up tables, but enable us to inspect it and change small aspects of it (for instance only the way blue tones are treated).


    in git master, there is a new module to implement generic colour mappings: the colour checker lut module (lut: look up table). the following will be a description how it works internally, how you can use it, and what this is good for.

    in short, it is a colour lut that remains understandable and editable. that is, it is not a black-box look up table, but you get to see what it actually does and change the bits that you don’t like about it.

    the main use cases are precise control over source colour to target colour mapping, as well as matching in-camera styles that process raws to jpg in a certain way to achieve a particular look. an example of this are the fuji film emulation modes. to this end, we will fit a colour checker lut to achieve their colour rendition, as well as a tone curve to achieve the tonal contrast.


    to create the colour lut, it is currently necessary to take a picture of an it8 target (well, technically we support any similar target, but didn’t try them yet so i won’t really comment on it). this gives us a raw picture with colour values for a few colour patches, as well as an in-camera jpg reference (in the raw thumbnail..), and measured reference values (what we know it should look like).

    to map all the other colours (that fell in between the patches on the chart) to meaningful output colours, too, we will need to interpolate this measured mapping.


    we want to express a smooth mapping from input colours \(\mathbf{s}\) to target colours \(\mathbf{t}\), defined by a couple of sample points (which will in our case be the 288 patches of an it8 chart).

    the following is a quick summary of what we implemented; it is described much better in JP’s siggraph course [0].

    radial basis functions

    radial basis functions are a means of interpolating between sample points via

    $$f(x) = \sum_i c_i\cdot\phi(| x - s_i|),$$

    with some appropriate kernel \(\phi(r)\) (we’ll get to that later) and a set of coefficients \(c_i\) chosen to make the mapping \(f(x)\) behave like we want it at and in between the source colour positions \(s_i\). now to make sure the function actually passes through the target colours, i.e. \(f(s_i) = t_i\), we need to solve a linear system. because we want the function to take on a simple form for simple problems, we also add a polynomial part to it. this makes sure that black and white profiles turn out to be black and white and don’t oscillate around zero saturation colours wildly. the system is

    $$ \left(\begin{array}{cc}A &P\\P^t & 0\end{array}\right) \cdot \left(\begin{array}{c}\mathbf{c}\\\mathbf{d}\end{array}\right) = \left(\begin{array}{c}\mathbf{t}\\0\end{array}\right)$$


    $$ A=\left(\begin{array}{ccc} \phi(r_{00})& \phi(r_{10})& \cdots \\ \phi(r_{01})& \phi(r_{11})& \cdots \\ \phi(r_{02})& \phi(r_{12})& \cdots \\ \cdots & & \cdots \end{array}\right),$$

    and \(r_{ij} = | s_i - s_j |\) is the distance (CIE 76 \(\Delta\)E, \(\sqrt{(L_s - L_t)^2 + (a_s - a_t)^2 + (b_s - b_t)^2}\) ) between source colours \(s_i\) and \(s_j\), in our case

    $$P=\left(\begin{array}{cccc} L_{s_0}& a_{s_0}& b_{s_0}& 1\\ L_{s_1}& a_{s_1}& b_{s_1}& 1\\ \cdots \end{array}\right)$$

    is the polynomial part, and \(\mathbf{d}\) are the coefficients to the polynomial part. these are here so we can for instance easily reproduce \(t = s\) by setting \(\mathbf{d} = (1, 1, 1, 0)\) in the respective row. we will need to solve this system for the coefficients \(\mathbf{c}=(c_0,c_1,\cdots)^t\) and \(\mathbf{d}\).

    many options will do the trick and solve the system here. we use singular value decomposition in our implementation. one advantage is that it is robust against singular matrices as input (accidentally map the same source colour to different target colours for instance).

    thin plate splines

    we didn’t yet define the radial basis function kernel. it turns out so-called thin plate splines have very good behaviour in terms of low oscillation/low curvature of the resulting function. the associated kernel is

    $$\phi(r) = r^2 \log r.$$

    note that there is a similar functionality in gimp as a gegl colour mapping operation (which i believe is using a shepard-interpolation-like scheme).
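
    if you want to experiment with the fit outside of darktable, here is a minimal numpy sketch of the step described above (just an illustration, not darktable’s actual code): it builds the block system with the thin plate spline kernel and solves it with numpy’s SVD-backed least-squares routine.

    import numpy as np

    def tps_kernel(r):
        # thin plate spline kernel phi(r) = r^2 log r, with phi(0) = 0
        return np.where(r > 0, r * r * np.log(np.maximum(r, 1e-12)), 0.0)

    def fit_colour_lut(source_lab, target_lab):
        # source_lab, target_lab: (n, 3) arrays of Lab patch colours
        n = source_lab.shape[0]
        # pairwise distances between the source colours (CIE 76 delta E in Lab)
        r = np.linalg.norm(source_lab[:, None, :] - source_lab[None, :, :], axis=-1)
        A = tps_kernel(r)                               # (n, n)
        P = np.hstack([source_lab, np.ones((n, 1))])    # (n, 4): L, a, b, 1
        # block system [[A, P], [P^T, 0]] [c; d] = [t; 0]
        K = np.zeros((n + 4, n + 4))
        K[:n, :n], K[:n, n:], K[n:, :n] = A, P, P.T
        rhs = np.vstack([target_lab, np.zeros((4, 3))])
        # lstsq is SVD-based, so (near-)singular input -- the same source
        # colour accidentally mapped to two different targets -- won't blow up
        sol = np.linalg.lstsq(K, rhs, rcond=None)[0]
        return sol[:n], sol[n:]                         # c: (n, 3), d: (4, 3)

    def apply_colour_lut(c, d, source_lab, x_lab):
        # f(x) = sum_i c_i phi(|x - s_i|) + polynomial part
        r = np.linalg.norm(x_lab[:, None, :] - source_lab[None, :, :], axis=-1)
        return tps_kernel(r) @ c + np.hstack([x_lab, np.ones((len(x_lab), 1))]) @ d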

    creating a sparse solution

    we will feed this system with 288 patches of an it8 colour chart. that means, with the added four polynomial coefficients, we have a total of 292 source/target colour pairs to manage here. apart from performance issues when executing the interpolation, we didn’t want that to show up in the gui like this, so we were looking to reduce this number without introducing large error.

    indeed this is possible, and literature provides a nice algorithm to do so, which is called orthogonal matching pursuit [1].

    this algorithm will select the most important handful of coefficients \(\in \mathbf{c},\mathbf{d}\), to keep the overall error low. in practice we run it up to a predefined number of patches (\(24=6\times 4\) or \(49=7\times 7\)), to make best use of gui real estate.
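
    as a rough illustration of the greedy idea (again only a sketch, not darktable’s implementation), given the block matrix and right-hand side of the system above:

    import numpy as np

    def omp_select(K, rhs, n_keep):
        # greedily pick the n_keep columns of K (i.e. coefficients) that
        # explain rhs best, re-fitting on the selection at every step
        selected, residual = [], rhs.copy()
        for _ in range(n_keep):
            scores = np.abs(K.T @ residual).sum(axis=1)  # correlation with residual
            scores[selected] = -np.inf                   # don't pick twice
            selected.append(int(np.argmax(scores)))
            sub = K[:, selected]
            coeff = np.linalg.lstsq(sub, rhs, rcond=None)[0]
            residual = rhs - sub @ coeff
        return selected, coeff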

    the colour checker lut module


    gui elements

    when you select the module in darkroom mode, it should look something like the image above (configurations with more than 24 patches are shown in a 7\(\times\)7 grid instead). by default, it will load the 24 patches of a colour checker classic and initialise the mapping to identity (no change to the image).

    • the grid shows a list of coloured patches. the colours of the patches are the source points \(\mathbf{s}\).
    • the target colour \(t_i\) of the selected patch \(i\) is shown as offset controlled by sliders in the ui under the grid of patches.
    • an outline is drawn around patches that have been altered, i.e. the source and target colours differ.
    • the selected patch is marked with a white square, and the number shows in the combo box below.


    to interact with the colour mapping, you can change both source and target colours. the main use case is to change the target colours however, and start with an appropriate palette (see the presets menu, or download a style somewhere).

    • you can change lightness (L), green-red (a), blue-yellow (b), or saturation (C) of the target colour via sliders.
    • select a patch by left clicking on it, or using the combo box, or using the colour picker
    • to change source colour, select a new colour from your image by using the colour picker, and shift-left-click on the patch you want to replace.
    • to reset a patch, double-click it.
    • right-click a patch to delete it.
    • shift-left-click on empty space to add a new patch (with the currently picked colour as source colour).

    example use cases

    example 1: dodging and burning with the skin tones preset

    to process the following image i took of pat in the overground, i started with the skin tones preset in the colour checker module (right click on nothing in the gui or click on the icon with the three horizontal lines in the header and select the preset).

    then, i used the colour picker (little icon to the right of the patch# combo box) to select two skin tones: very bright highlights and dark shadow tones. the former i dragged the brightness down a bit, the latter i brightened up a bit via the lightness (L) slider. this is the result:

    original dialed down contrast in skin tones

    example 2: skin tones and eyes

    in this image, i started with the fuji classic chrome-like style (see below for a download link), to achieve the subdued look in the skin tones. then, i picked the iris colour and saturated this tone via the saturation slider.

    as a side note, the flash didn’t fire in this image (iso 800) so i needed to stop it up by 2.5ev and the rest is all natural lighting..

    +2.5ev classic chrome saturated eyes

    use darktable-chart to create a style

    as a starting point, i matched a colour checker lut interpolation function to the in-camera processing of fuji cameras. these have the names of old film and generally do a good job at creating pleasant colours. this was done using the darktable-chart utility, by matching raw colours to the jpg output (both in Lab space in the darktable pipeline).

    here is the link to the fuji styles, and how to use them. i should be doing pat’s film emulation presets with this, too, and maybe styles from other cameras (canon picture styles?). darktable-chart will output a dtstyle file, with the mapping split into tone curve and colour checker module. this allows us to tweak the contrast (tone curve) in isolation from the colours (lut module).

    these styles were created with the X100T model, and reportedly they work so/so with different camera models. the idea is to create a Lab-space mapping which is well configured for all cameras. but apparently there may be sufficient differences between the output of different cameras after applying their colour matrices (after all these matrices are just an approximation of the real camera to XYZ mapping).

    so if you’re really after maximum precision, you may have to create the styles yourself for your camera model. here’s how:

    step-by-step tutorial to match the in-camera jpg engine

    note that this is essentially similar to pascal’s colormatch script, but will result in an editable style for darktable instead of a fixed icc lut.

    • need an it8 (sorry, could lift that, maybe, similar to what we do for basecurve fitting)

    • shoot the chart with your camera:

      • shoot raw + jpg
      • avoid glare and shadow and extreme angles, potentially the rims of your image altogether
      • shoot a lot of exposures, try to match L=92 for G00 (or look that up in your it8 description)
    • develop the images in darktable:

      • lens and vignetting correction needed on both or on neither of raw + jpg
      • (i calibrated for vignetting, see lensfun)
      • output colour space to Lab (set the secret option in darktablerc: allow_lab_output=true)
      • standard input matrix and camera white balance for the raw, srgb for jpg.
      • no gamut clipping, no basecurve, no anything else.
      • maybe do perspective correction and crop the chart
      • export as float pfm
    • darktable-chart

      • load the pfm for the raw image and the jpg target in the second tab
      • drag the corners to make the mask match the patches in the image
      • maybe adjust the security margin using the slider in the top right, to avoid stray colours being blurred into the patch readout
      • you need to select the gray ramp in the combo box (not auto-detected)
      • export csv
    darktable-lut-tool-crop-01 darktable-lut-tool-crop-02 darktable-lut-tool-crop-03 darktable-lut-tool-crop-04

    edit the csv in a text editor and manually add two fixed fake patches HDR00 and HDR01:

    name;fuji classic chrome-like
    description;fuji classic chrome-like colorchecker

    this is to make sure we can process high-dynamic range images and not destroy the bright spots with the lut. this is needed since the it8 does not deliver any information out of the reflective gamut and for very bright input. to fix wide gamut input, it may be needed to enable gamut clipping in the input colour profile module when applying the resulting style to an image with highly saturated colours. darktable-chart does that automatically in the style it writes.

    • fix up style description in csv if you want
    • run darktable-chart --csv
    • outputs a .dtstyle with everything properly switched off, and two modules on: colour checker + tonecurve in Lab

    fitting error

    when processing the list of colour pairs into a set of coefficients for the thin plate spline, the program will output the approximation error, indicated by average and maximum CIE 76 \(\Delta\)E for the input patches (the it8 in the examples here). of course we don’t know anything about colours which aren’t represented in the patch. the hope would be that the sampling is dense enough for all intents and purposes (but nothing is holding us back from using a target with even more patches).
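
    in terms of the earlier sketches, those two numbers are simply the mean and maximum of the per-patch CIE 76 distance between the sparse fit’s prediction and the measured reference values, e.g.:

    import numpy as np

    def fit_error(fitted, target_lab):
        # fitted, target_lab: (n, 3) Lab values for the same input patches;
        # 'fitted' would come from evaluating the sparse fit on the source colours
        de = np.linalg.norm(fitted - target_lab, axis=1)  # per-patch CIE 76 delta E
        return de.mean(), de.max()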

    for the fuji styles, these errors are typically in the range of mean \(\Delta E\approx 2\) and max \(\Delta E \approx 10\) for 24 patches and a bit less for 49. unfortunately the error does not decrease very fast in the number of patches (and will of course drop to zero when using all the patches of the input chart).

    provia 24:rank 28/24 avg DE 2.42189 max DE 7.57084
    provia 49:rank 53/49 avg DE 1.44376 max DE 5.39751
    astia-24:rank 27/24 avg DE 2.12006 max DE 10.0213
    astia-49:rank 52/49 avg DE 1.34278 max DE 7.05165
    velvia-24:rank 27/24 avg DE 2.87005 max DE 16.7967
    velvia-49:rank 53/49 avg DE 1.62934 max DE 6.84697
    classic chrome-24:rank 28/24 avg DE 1.99688 max DE 8.76036
    classic chrome-49:rank 53/49 avg DE 1.13703 max DE 6.3298
    mono-24:rank 27/24 avg DE 0.547846 max DE 3.42563
    mono-49:rank 52/49 avg DE 0.339011 max DE 2.08548

    future work

    it is possible to match the reference values of the it8 instead of a reference jpg output, to calibrate the camera more precisely than the colour matrix would.

    • there is a button for this in the darktable-chart tool
    • needs careful shooting, to match brightness of reference value closely.
    • at this point it’s not clear to me how white balance should best be handled here.
    • need reference reflectances of the it8 (wolf faust ships some for a few illuminants).

    another next step we would like to take with this is to match real film footage (portra etc). both reference and film matching will require some global exposure calibration though.


    • [0] Ken Anjyo and J. P. Lewis and Frédéric Pighin, “Scattered data interpolation for computer graphics” in Proceedings of SIGGRAPH 2014 Courses, Article No. 27, 2014. pdf
    • [1] J. A. Tropp and A. C. Gilbert, “Signal Recovery From Random Measurements Via Orthogonal Matching Pursuit”, in IEEE Transactions on Information Theory, vol. 53, no. 12, pp. 4655-4666, Dec. 2007.

    Tue 2016/Jun/28

    • La Mapería

      It is Hack Week at SUSE, and I am working on La Mapería (the map store), a little program to generate beautiful printed maps from OpenStreetMap data.

      I've gotten to the point of having something working: the tool downloads rendered map tiles, assembles them with Cairo as a huge PDF surface, centers the map on a sheet of paper, and prints nice margins and a map scale. This was harder for me than it looks: I am pretty good at dealing with pixel coordinates and transformations, but a total newbie with geodetic calculations, geographical coordinate conversions, and thinking in terms of a physical map scale instead of just a DPI and a paper size.

      Printed map Printed map 2

      The resulting chart has a map and a frame with arc-minute markings, and a map scale rule. I want to have a 1-kilometer UTM grid if I manage to wrap my head around map projections.

      Coordinates and printed maps

      The initial versions of this tool evolved in an interesting way. Assembling a map from map tiles is basically this:

      1. Figure out the tile numbers for the tiles in the upper-left and the lower-right corners of the map.
      2. Composite each tile into a large image, like a mosaic.

      The first step is pretty easy if you know the (latitude, longitude) of the corners: the relevant conversion from coordinates to tile numbers is in the OpenStreetMap wiki. The second step is just two nested for() loops that paste tile images onto a larger image.
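
      For reference, here is roughly what both steps look like in Python. The deg2num() formula is the one from the OpenStreetMap wiki; load_tile and paste are caller-supplied placeholders, not La Mapería's actual functions.

      import math

      def deg2num(lat_deg, lon_deg, zoom):
          # slippy-map tile numbers for a (latitude, longitude) pair
          lat_rad = math.radians(lat_deg)
          n = 2 ** zoom
          xtile = int((lon_deg + 180.0) / 360.0 * n)
          ytile = int((1.0 - math.asinh(math.tan(lat_rad)) / math.pi) / 2.0 * n)
          return xtile, ytile

      def assemble(lat_nw, lon_nw, lat_se, lon_se, zoom, load_tile, paste):
          # step 1: tile numbers for the upper-left and lower-right corners
          x0, y0 = deg2num(lat_nw, lon_nw, zoom)
          x1, y1 = deg2num(lat_se, lon_se, zoom)
          # step 2: composite each 256x256 tile into the mosaic
          for x in range(x0, x1 + 1):
              for y in range(y0, y1 + 1):
                  paste(load_tile(x, y, zoom), (x - x0) * 256, (y - y0) * 256)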

      When looking at a web map, it's reasonably easy to find the coordinates for each corner. However, I found that printed maps want one to think in different terms. The map scale corresponds to the center of the map (it changes slightly towards the corners, due to the map's projection). So, instead of thinking of "what fits inside the rectangle given by those corners", you have to think in terms of "how much of the map will fit given your paper size and the map scale... around a center point".

      So, my initial tool looked like

      python3 make-map.py
              --from-lat=19d30m --from-lon=-97d
              --to-lat=19d22m --to-lon=-96d47m

      and then I had to manually scale that image to print it at the necessary DPI for a given map scale (1:50,000). This was getting tedious. It took me a while to convert the tool to think in terms of these:

      • Paper size and margins
      • Coordinates for the center point of the map
      • Printed map scale

      Instead of providing all of these parameters in the command line, the program now takes a little JSON configuration file.
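
      As a rough back-of-the-envelope sketch (not what La Mapería actually does, and ignoring projection distortion away from the center), turning those three things into corner coordinates could look like this:

      import math

      def extent_from_center(lat, lon, scale_denom, paper_w_mm, paper_h_mm, margin_mm):
          # ground distance covered by the printable area at this scale, in metres
          ground_w = (paper_w_mm - 2 * margin_mm) / 1000.0 * scale_denom
          ground_h = (paper_h_mm - 2 * margin_mm) / 1000.0 * scale_denom
          # crude conversion: ~111,320 m per degree of latitude,
          # shrinking by cos(latitude) for longitude
          dlat = ground_h / 111320.0 / 2.0
          dlon = ground_w / (111320.0 * math.cos(math.radians(lat))) / 2.0
          return (lat + dlat, lon - dlon), (lat - dlat, lon + dlon)  # NW, SE corners

      # e.g. A4 landscape (297x210 mm) with 20 mm margins at 1:50,000 (made-up values)
      print(extent_from_center(19.43, -96.88, 50000, 297, 210, 20))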

      La Mapería generates a PDF or an SVG (for tweaking with Inkscape before sending it off to a printing bureau). It draws a nice frame around the map, and clips the map to the frame's dimensions.

      La Mapería is available on github. It may or may not work out of the box right now; it includes my Mapbox access token — it's public — but I really would like to avoid people eating my Mapbox quota. I'll probably include the map style data with La Mapería's source code so that people can create their own Mapbox accounts.

      Over the rest of the week I will be documenting how to set up a Mapbox account and a personal TileStache cache to avoid downloading tiles repeatedly.

    June 26, 2016

    How to un-deny a host blocked by denyhosts

    We had a little crisis Friday when our server suddenly stopped accepting ssh connections.

    The problem turned out to be denyhosts, a program that looks for things like failed login attempts and blacklists IP addresses.

    But why was our own IP blacklisted? It was apparently because I'd been experimenting with a program called mailsync, which used to be a useful program for synchronizing IMAP folders with local mail folders. But at least on Debian, it has broken in a fairly serious way, so that it makes three or four tries with the wrong password before it actually uses the right one that you've configured in .mailsync. These failed logins are a good way to get yourself blacklisted, and there doesn't seem to be any way to fix mailsync or the c-client library it uses under the covers.

    Okay, so first, stop using mailsync. But then how to get our IP off the server's blacklist? Just editing /etc/hosts.deny didn't do it -- the IP reappeared there a few minutes later.

    A web search found lots of solutions -- you have to edit a long list of files, but no two articles had the same file list. It appears that it's safest to remove the IP from every file in /var/lib/denyhosts.

    So here are the step by step instructions.

    First, shut off the denyhosts service:

    service denyhosts stop

    Go to /var/lib/denyhosts/ and grep for any file that includes your IP:

    grep aa.bb.cc.dd *

    (If you aren't sure what your IP is as far as the outside world is concerned, Googling what's my IP will helpfully tell you, as well as giving you a list of other sites that will also tell you.)

    Then edit each of these files in turn, removing your IP from them (it will probably be at the end of the file).
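
    If there are a lot of files to clean up, a small script can do the editing for you. This is only a convenience sketch -- run it as root, with the denyhosts service stopped, and keep backups in case a file's format surprises you:

    #!/usr/bin/env python3
    # Remove every line mentioning a given IP from the denyhosts data files.
    import glob, sys

    ip = sys.argv[1]                      # e.g. aa.bb.cc.dd
    for path in glob.glob('/var/lib/denyhosts/*'):
        try:
            with open(path) as f:
                lines = f.readlines()
        except (IsADirectoryError, UnicodeDecodeError):
            continue
        kept = [line for line in lines if ip not in line]
        if len(kept) != len(lines):
            print("%s: removing %d line(s)" % (path, len(lines) - len(kept)))
            with open(path, 'w') as f:
                f.writelines(kept)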

    When you're done with that, you have one more file to edit: remove your IP from the end of /etc/hosts.deny

    You may also want to add your IP to /etc/hosts.allow, but it may not make much difference, and if you're on a dynamic IP it might be a bad idea since that IP will eventually be used by someone else.

    Finally, you're ready to re-start denyhosts:

    service denyhosts start

    Whew, un-blocked. And stay away from mailsync. I wish I knew of a program that actually worked to keep IMAP and mbox mailboxes in sync.

    June 23, 2016

    Siggraph 2016 Computer Animation Festival Selections

    We are proud to share the news that 3 films completely produced with Blender have been selected for the 43rd Computer Animation Festival to be celebrated in Anaheim, California, 24-28 July 2016! The films are Cosmos Laundromat (Blender Institute, directed by Mathieu Auvray), Glass Half (Blender Institute, directed by Beorn Leonard) and Alike (directed and produced by Daniel M. Lara and Rafa Cano).


    The films are going to be screened at the Electronic Theater, which is one of the highlights of the SIGGRAPH conference. SIGGRAPH is widely considered the most prestigious forum for the publication of computer graphics research and it is an honour to see such films in the same venue where computer graphics has been pioneered for decades.

    Here you can see a trailer of the Animation Festival, where some shots of Cosmos Laundromat can be spotted.

    June 22, 2016

    Sharing is Caring

    Sharing is Caring

    Letting it all hang out

    It was always my intention to make the entire PIXLS.US website available under a permissive license. The content is already all licensed Creative Commons, By Attribution, Share-Alike (unless otherwise noted). I just hadn’t gotten around to actually posting the site source.

    Until now(ish). I say “ish” because I apparently released the code back in April and am just now getting around to talking about it.

    Also, we finally have a category specifically for all those darktable weenies on discuss!

    Don’t Laugh

    I finally got around to pushing my code for this site up to Github on April 27 (I’m basing this off git logs because my memory is likely suspect). It took a while, but better late than never? I think part of the delay was a bit of minor embarrassment on my part for being so sloppy with the site code. In fact, I’m still embarrassed - so don’t laugh at me too hard (and if you do, at least don’t point while laughing too).

    Carrie White Brian De Palma’s interpretation of my fears…

    So really this post is just a reminder to anyone that was interested that this site is available on Github:


    In fact, we’ve got a couple of other repositories under the Github Organization PIXLS.US including this website, presentation assets, lighting diagram SVG’s, and more. If you’ve got a Github account or want to join in with hacking at things, by all means send me a note and we’ll get you added to the organization asap.

    Note: you don’t need to do anything special if you just want to grab the site code. You can do this quickly and easily with:

    git clone https://github.com/pixlsus/website.git

    You actually don’t even need a Github account to clone the repo, but you will need one if you want to fork it on Github itself, or to send pull-requests. You can also feel free to simply email/post patches to us as well:

    git format-patch testing --stdout > your_awesome_work.patch

    Being on Github means that we also now have an issue tracker to report any bugs or enhancements you’d like to see for the site.

    So no more excuses - if you’d like to lend a hand just dive right in! We’re all here to help! :)

    Speaking of Helping

    Speaking of which, I wanted to give a special shout-out to community member @paperdigits (Mica), who has been active in sharing presentation materials in the Presentations repo and has been actively hacking at the website. Mica’s recommendations and pull requests are helping to make the site code cleaner and better for everyone, and I really appreciate all the help (even if I am scared of change).

    Thank you, Mica! You rock!

    Those Stinky darktable People

    Yes, after member Claes asked the question on discuss about why we didn’t have a darktable category on the forums, I relented and created one. Normally I want to make sure that any category is going to have active people to maintain and monitor the topics there. I feel like having an empty forum can sometimes be detrimental to the perception of a project/community.

    darktable logo

    In this case, any topics in the darktable category will also show up in the more general Software category as well. This way the visibility and interactions are still there, but with the added benefit that we can now choose to see only darktable posts, ignore them, or let all those stinky users do what they want in there.

    Besides, now we can say that we’ve sufficiently appeased Morgan Hardwood‘s organizational needs…

    So, come on by and say hello in the brand new darktable category!

    June 21, 2016

    AAA game, indie game, card-board-box

    Early bird gets eaten by the Nyarlathotep
    The more adventurous of you can use those (designed as embeddable) Lua scripts to transform your DRM-free GOG.com downloads into Flatpaks.

    The long-term goal would obviously be for this not to be needed, and for online games stores to ship ".flatpak" files, with metadata so we know what things are in GNOME Software, which automatically picks up the right voice/subtitle language, and presents its extra music and documents in the respective GNOME applications.
    But in the meanwhile, and for the sake of the games already out there, there's flatpak-games. Note that lua-archive is still fiddly.
    Support for a few Humble Bundle formats (some formats already are), grab-all RPMs and Debs, and those old Loki games is also planned.
    It's late here, I'll be off to do some testing I think :)

    PS: Even though I have enough programs that would fail to create bundles in my personal collection to accept "game donations", I'm still looking for original copies of Loki games. Drop me a message if you can spare one!

    Sharing Galore

    Sharing Galore

    or, Why This Community is Awesome

    Community member and RawTherapee hacker Morgan Hardwood brings us a great tutorial + assets from one of his strolls near the Söderåsen National Park (Sweden!). Ofnuts is apparently trying to get me to burn the forum down by sharing his raw file of a questionable subject. After bugging David Tschumperlé he managed to find a neat solution to generating a median (pixel) blend of a large number of images without making your computer throw itself out a window.

    So much neat content being shared for everyone to play with and learn from! Come see what everyone is doing!

    Old Oak - A Tutorial

    Sometimes you’re just hanging out minding your own business and talking photography with friends and other Free Software nuts when someone comes running by and drops a great tutorial in your lap. Just as Morgan Hardwood did on the forums a few days ago!

    Old Oak by Morgan Hardwood cbsa

    He introduces the image and post:

    There is an old oak by the southern entrance to the Söderåsen National Park. Rumor has it that this is the oak under which Gandalf sat as he smoked his pipe and penned the famous saga about J.R.R. Tolkien. I don’t know about that, but the valley rabbits sure love it.

    The image itself is a treat. I personally love images where the lighting does interesting things and there are some gorgeous things going on in this image. The diffused light flooding in under the canopy on the right with the edge highlights from the light filtering down make this a pleasure to look at.

    Of course, Morgan doesn’t stop there. You should absolutely go read his entire post. He not only walks through his entire thought process and workflow starting at his rationale for lens selection (50mm f/2.8) all the way through his corrections and post-processing choices. To top it all off, he has graciously shared his assets for anyone to follow along! He provides the raw file, the flat-field, a shot of his color target + DCP, and finally his RawTherapee .PP3 file with all of his settings! Whew!

    If you’re interested I urge you to go check out (and participate!) in his topic on the forums: Old Oak - A Tutorial.

    I Will Burn This Place to the Ground

    Speaking of sharing material, Ofnuts has decided that he apparently wants me to burn the forums to the ground, put the ashes in a spaceship, fly the spaceship into the sun, and to detonate the entire solar system into a singularity. Why do I say this?

    Kill It With Fire! Kill it with fire!

    Because he started a topic appropriately entitled: “NSFPAOA (Not Suitable for Pat and Other Arachnophobes)”, in which he shares his raw .CR2 file for everyone to try their hand at processing that cute little spider above. There have already been quite a few awesome interpretations from folks in the community like:

    CarVac Version A version by CarVac
    MLC Morgin Version By MLC/Morgin
    By Jonas Wagner By Jonas Wagner
    iarga By iarga
    by PkmX By PkmX
    by Kees Guequierre By Kees Guequierre

    Of course, I had a chance to try processing it as well. Here’s what I ended up with:


    Ahhhh, just writing this post is a giant bag of NOPE*. If you’d like to join in on the fun(?) and share your processing as well - go check out the topic!

    Now let’s move on to something more cute and fuzzy, like an ALOT…

    * I kid, I’m not really an arachnophobe (within reason), but I can totally see why someone would be.

    Median Blending ALOT of Images with G’MIC

    Hyperbole and a Half ALOT The ALOT. Borrowed from Allie Brosh and here because I really wanted an excuse to include it.

    I count myself lucky to have so many smart friends that I can lean on to figure out or help me do things (more on that in the next post). One of those friends is G’MIC creator and community member David Tschumperlé.

    A few years back he helped me with some artwork I was generating with imagemagick at the time. I was averaging images together to see what an amalgamation would look like. For instance, here is what all of the Sports Illustrated swimsuit edition (NSFW) covers (through 2000) look like, all at once:

    Sport Illustrated Swimsuit Covers Through 2000

    A natural progression of this idea was to consider doing a median blend vs. mean. The problem is that a mean average is very easy and fast to calculate as you advance through the image stack, but the median is not. This is relevant because I began to look at these for videos (in particular music videos), where the image stack was 5,000+ images for a video easily (that is ALOT of frames!).

    It’s relatively easy to generate a running average for a series of numbers, but generating the median value requires that the entire stack of numbers be loaded and sorted. This makes it prohibitive to do on a huge number of images, particularly at HD resolutions.
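
    To make the difference concrete, here is a tiny numpy illustration (assuming frames is a list of decoded video frames; David’s G’MIC approach works on image regions and is different, this only shows why the naive median is expensive):

    import numpy as np

    def running_mean(frames):
        # constant memory: fold in one frame at a time
        mean = None
        for i, frame in enumerate(frames):
            f = frame.astype(np.float64)
            mean = f if mean is None else mean + (f - mean) / (i + 1)
        return mean

    def naive_median(frames):
        # the whole stack has to sit in memory before anything can be sorted
        stack = np.stack([f.astype(np.float64) for f in frames])
        return np.median(stack, axis=0)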

    So it’s awesome that, yet again, David has found a solution to the problem! He explains it in greater detail on his topic:

    A guide about computing the temporal average/median of video frames with G’MIC

    He basically chops up the image frame into regions, then computes the pixel-median value for those regions. Here’s an example of his result:

    P!nk Try Mean/Median Mean/Median samples from P!nk - Try music video.

    Now I can start utilizing median blends more often in my experiments, and I’m quite sure folks will find other interesting uses for this type of blending!

    June 18, 2016

    Cave 6" as a Quick-Look Scope

    I haven't had a chance to do much astronomy since moving to New Mexico, despite the stunning dark skies. For one thing, those stunning dark skies are often covered with clouds -- New Mexico's dramatic skyscapes can go from clear to windy to cloudy to hail or thunderstorms and back to clear and hot over the course of a few hours. Gorgeous to watch, but distracting for astronomy, and particularly bad if you want to plan ahead and observe on a particular night. The Pajarito Astronomers' monthly star parties are often clouded or rained out, as was the PEEC Nature Center's moon-and-planets star party last week.

    That sort of uncertainty means that the best bet is a so-called "quick-look scope": one that sits by the door, ready to be hauled out if the sky is clear and you have the urge. Usually that means some kind of tiny refractor; but it can also mean leaving a heavy mount permanently set up (with a cover to protect it from those thunderstorms) so it's easy to carry out a telescope tube and plunk it on the mount.

    I have just that sort of scope sitting in our shed: an old, dusty Cave Astrola 6" Newtonian on an equatorial mount. My father got it for me on my 12th birthday. Where he got the money for such a princely gift -- we didn't have much in those days -- I never knew, but I cherished that telescope, and for years spent most of my nights in the backyard peering through the Los Angeles smog.

    Eventually I hooked up with older astronomers (alas, my father had passed away) and cadged rides to star parties out in the Mojave desert. Fortunately for me, parenting standards back then allowed a lot more freedom, and my mother was a good judge of character and let me go. I wonder if there are any parents today who would let their daughter go off to the desert with a bunch of strange men? Even back then, she told me later, some of her friends ribbed her -- "Oh, 'astronomy'. Suuuuuure. They're probably all off doing drugs in the desert." I'm so lucky that my mom trusted me (and her own sense of the guys in the local astronomy club) more than her friends.

    The Cave has followed me through quite a few moves, heavy, bulky and old fashioned as it is; even when I had scopes that were bigger, or more portable, I kept it for the sentimental value. But I hadn't actually set it up in years. Last week, I assembled the heavy mount and set it up on a clear spot in the yard. I dusted off the scope, cleaned the primary mirror and collimated everything, replaced the finder which had fallen out somewhere along the way, set it up ... and waited for a break in the clouds.

    [Hyginus Rille by Michael Karrer] I'm happy to say that the optics are still excellent. As I write this (to be posted later), I just came in from beautiful views of Hyginus Rille and the Alpine Valley on the moon. On Jupiter the Great Red Spot was just rotating out. Mars, a couple of weeks before opposition, is still behind a cloud (yes, there are plenty of clouds). And now the clouds have covered the moon and Jupiter as well. Meanwhile, while I wait for a clear view of Mars, a bat makes frenetic passes overhead, and something in the junipers next to my observing spot is making rhythmic crunch, crunch, crunch sounds. A rabbit chewing something tough? Or just something rustling in the bushes?

    I just went out again, and now the clouds have briefly uncovered Mars. It's the first good look I've had at the Red Planet in years. (Tiny achromatic refractors really don't do justice to tiny, bright objects.) Mars is the most difficult planet to observe: Dave likes to talk about needing to get your "Mars eyes" trained for each Mars opposition, since they only come every two years. But even without my "Mars eyes", I had no trouble seeing the North pole with dark Acidalia enveloping it, and, in the south, the sinuous chain of Sinus Sabaeus, Meridiani, Margaritifer, and Mare Erythraeum. (I didn't identify any of these at the time; instead, I dusted off my sketch pad and sketched what I saw, then compared it with XEphem's Mars view afterward.)

    I'm liking this new quick-look telescope -- not to mention the childhood memories it brings back.

    June 17, 2016

    Appimages, Snaps, XDG-Apps^WFlatpaks

    Lots of excitement... When Canonical announced that their snaps work on a number of other Linux distributions, the reactions were predictable, sort of amusing and missing the point.

    In the end, all this going back and forth, these are just turf wars. There are Redhat/Fedora people scared and horrified that Canonical/Ubuntu might actually set a standard for once, there are probably Canonical/Ubuntu people scared that they might not set a standard (though after several days of this netstorm, I haven't seen anything negative from their side), and there are traditional packagers worried that the world may change and that they will lose their "curating" position.

    And there's me scared that I'll have to maintain debs, rpms, flatpaks, snaps, appimages, OSX bundles, MSI installers, NSIS installers and portable zips. My perspective is a bit that of an outsider, I don't care about the politics, though I do wish that it isn't a dead certainty that we'll end up having both flatpaks (horrible name, by the way) and snaps in the Linux world.

    Both the Canonical and the Fedora side claim to be working with the community, and, certainly, I was approached about snap and helped make a Krita snap. Which is a big win, both for me and for snap. But both projects ignore the appimage project, which is a real community effort, without corporate involvement. Probably because there is no way for companies to use appimage to create a lock-in effect or a chance at monetization, it'll always be a community project, ignored by the big Linux companies.

    Here's my take, speaking as someone who is actually releasing software to end users using some of these new-fangled systems.

    The old rpm/deb way of packaging is excellent for creating the base system. For software where having the latest version doesn't matter that much for productivity. It's a system that's been used for about twenty years and served us reasonably well. But if you are developing software for end users that is regularly updated, where the latest version is important because it always has improvements that let the users do more work, it's a problem. It's a ghastly drag having to actually make the packages if you're not part of a distribution, and having to make packages for several distributions is not feasible for a small team. And if we don't, then when there are distributions that do not backport new versions to old releases because they only backport bugfixes, not releases, users lose out.

    Snap turns out to be pretty easy to make, and pretty easy to upload to Ubuntu's app store, and pretty easy to find once it's there, seeing that there were already more than a thousand downloads after a few days. I don't care about the security technology, that's just not relevant for Krita. If you use Krita, you want it to access your files. It takes about five minutes to make a new snap and upload it -- pretty good going. I was amazed and pleased that the snap now runs on a number of other distributions, and if Canonical/Ubuntu follows up on that, plugs the holes and fixes the bugs, it'll be a big plus. Snap also offers all kinds of flexibility, like adding a patched Qt, that I haven't even tried yet. I also haven't checked how to add translations yet, but that's also because the system we use to release translations for Krita needs changing, and I want to do that first.

    I haven't got any experience with flatpak. I know there was a start on making a Krita flatpak, but I haven't seen any results. I think that the whole idea of a runtime, which is a dependency thing, is dumb, though. Sure, it'll save some disk space, but at the cost of added complexity. I don't want that. For flatpak, I'll strike a wait-and-see attitude: I don't see the need for it, but if it materializes, and takes as little of my time as snap, I might make them. Unless I need to install Fedora for it, because that's one Linux distribution that just doesn't agree with me.

    Appimages, finally, are totally amazing, because they run everywhere. They don't need any kind of runtime or installation. Creating the initial AppImage recipe took a lot of time and testing, mainly because of the run-everywhere requirement. That means fiddly work trying to figure out which low-level libraries need to be included to make OpenGL work, and which don't. There might be bumps ahead, for instance if we want to start using OpenCL -- or so I was told in a comment on LWN. I don't know yet. Integration with the desktop environment is something Simon is working on, by installing a .desktop file in the user's home directory. Sandboxing is also being worked on, using some of the same technology as flatpak, apparently. Automatic updates is also something that is becoming possible. I haven't had time to investigate those things yet, because of release pressures, kickstarter pressures and all that sort of thing. One possible negative about appimages is that users have a hard time understanding them -- they just cannot believe that download, make executable, go is all there's to it. So much so that I've considered making a tar.xz with an executable appimage inside so users are in a more familiar territory. Maybe even change the extension from .appimage to .exe?

    Anyway, when it comes to actually releasing software to end users in a way that doesn't drive me crazy, I love AppImages, I like snap, I hate debs, rpms, repositories, ppa's and their ilk and flatpak has managed to remain a big unknown. If we could get a third format to replace all the existing formats, say flatsnapimage, wouldn't that be lovely?

    Wouldn't it?

    June 16, 2016

    silverorange job opening: Back-end Web Developer

    Silverorange, the web design and development company where I work, is looking to hire another great back-end web developer. It’s a nice place to work.

    Translation parameters in angular-gettext

    As a general rule, I try not to include new features in angular-gettext: small is beautiful and for the most part I consider the project as finished. However, Ernest Nowacki just contributed one feature that was too good to leave out: translation parameters.

    To understand what translation parameters are, consider the following piece of HTML:

    <span translate>Last modified: {{post.modificationDate | date : 'yyyy-MM-dd HH:mm'}} by {{post.author}}.</span>

    The resulting string that needs to be handled by your translators is both ugly and hard to use:

    msgid "Last modified: {{post.modificationDate | date : 'yyyy-MM-dd HH:mm'}} by {{post.author}}."

    With translation parameters you can add local aliases:

    <span translate
          translate-params-date="post.modificationDate | date : 'yyyy-MM-dd HH:mm'"
          translate-params-author="post.author">
        Last modified: {{date}} by {{author}}.
    </span>

    With this, translators only see the following:

    msgid "Last modified: {{date}} by {{author}}."

    Simply beautiful.

    You’ll need angular-gettext v2.3.0 or newer to use this feature.

    More information in the documentation: https://angular-gettext.rocketeer.be/dev-guide/translate-params/.

    Comments | More on rocketeer.be | @rubenv on Twitter

    June 15, 2016

    Running Krita Snaps on Other Distributions

    This is pretty cool: in the week before the Krita release, Michael Hall submitted a snapcraft definition for making a Krita snap. A few iterations later, we have something that works (unless you're using an NVidia GPU with the proprietary drivers). Adding Krita to the Ubuntu app store was also really easy.

    And now, if you go to snapcraft.io and click on a Linux distribution's logo, you'll get instructions on how to get snap running on your system -- and that means the snap package for Krita can work on Arch, Debian, Fedora, Gentoo -- and Ubuntu of course. Pretty unbelievable! OpenSUSE is still missing though...

    Of course, running a snap still means you need to install something before you can run Krita, while an AppImage doesn't need anything beyond making it executable. Over the past month, I've encountered a lot of Linux users who just couldn't believe it's so easy, and were asking for install instructions :-)

    June 11, 2016

    The 2016 Kickstarter

    This year's kickstarter fundraising campaign for Krita was more nerve-wracking than the previous two editions. Although we ended up 135% funded, we were almost afraid we wouldn't make it, around the middle. Maybe only the release of Krita 3.0 turned the campaign around. Here's my chaotic and off-the-cuff analysis of this campaign.

    Campaign setup

    We were ambitious this year and once again decided upon two big goals: text and vector, because we felt both are real pain points in Krita that really need to be addressed. I think now that we probably should have made both into super-stretch goals one level above the 10,000 euro Python stretch goal and let our community decide.

    Then we could have made the base level one stretch goal of 15,000 euros, and we'd have been "funded" on the second day and met the Kickstarter expectation that a successful campaign is funded immediately. Then we could have opened the paypal pledges really early into the campaign and advertised the option properly.

    We also hadn't thought through some stretch goals in sufficient depth, so sometimes we weren't totally sure ourselves what we're offering people. This contrasts with last year, where the stretch goals were precisely defined. (But during development became gold-plated -- a 1500 stretch goal should be two weeks of work, which sometimes became four or six weeks.)

    We did have a good story, though, which is the central part of any fundraiser. Without a good story that can be summarized in one sentence, you'll get nowhere. And text and vector have been painful for our users for years now, so that part was fine.

    We're also really well-oiled when it comes to preparation: Irina, me and Wolthera sat together for a couple of weekends to first select the goals, then figure out the reward levels and possible rewards, and then to write the story and other text. We have lists of people to approach, lists of things that need to be written in time to have them translated into Russian and Japanese -- that's all pretty well oiled.

    Not that our list of rewards was perfect, so we had to do some in-campaign additions, and we made at least one mistake: we added a 25 euro level when the existing 25 euros rewards had sold out. But the existing rewards re-used overstock from last year, and for the new level we have to have new goodies made. And that means our cost for those rewards is higher than we thought. Not high enough that those 25 euros pledges don't help towards development, but it's still a mistake.

    Our video was very good this year: about half of the plays were watched to the end, which is an amazing score!

    Kickstarter is becoming a tired formula

    Already after two days, people were saying on the various social media sites that we wouldn't make it. The impression with Kickstarter these days is that if you're not 100% funded in one or two days, you're a failure. Kickstarter has also become that site where you go for games, gadgets and gags.

    We also noticed less engagement: fewer messages and comments on the kickstarter site itself. That could have been a function of a less attractive campaign, of course.

    That Kickstarter still hasn't got a deal with Paypal is incredible. And Kickstarter's campaign tools are unbelievably primitive: from story editor to update editor (both share the same wysiwyg editor which is stupidly limited, and you can only edit updates for 30 minutes) to the survey tools, which don't allow copy and paste between reward levels or any free text except in the intro. Basically, Kickstarter isn't spending any money on its platform any more, and it shows.

    It is next to impossible to get news coverage for a fundraising campaign

    You'd think that "independent free software project funds full-time development through community, not commercial, support" would make a great story, especially when the funding is a success and the results are visible for everyone. You'd think that especially the free software oriented media would be interested in a story like this. But, with some exceptions, no.

    Last year, I was told by a journalist reporting on free and open source software that there are too many fundraising campaigns to cover. He didn't want to drown his readers in them, and it would be unethical to ignore some and cover others.

    But are there so many fundraisers for free software? I don't know, since none get into the news. I know about a few, mostly in the graphics software category -- synfig, blender, Jehan's campaign for Zemarmot, the campaign by the Software Freedom Conservancy, KDE's Randa campaign. But that's really just a handful.

    I think that the free and open source news media are doing their readers a disservice by not covering campaigns like ours; and they are doing the ecosystem a disservice. Healthy, independent projects that provide software in important categories, like Krita, are essential for free software to prosper.


    Without the release, we might not have made it. But doing a Kickstarter is exhausting: it's only a month, but feels like two or three. Doing a release and a Kickstarter is double exhausting. We did raise Krita's profile and userbase to a whole other level, though! (Which also translates into a flood of bug reports, and bugzilla basically has become unmanageable for us: we need more triagers and testers, badly!)

    Right now, I'd like to take a few days off, and Dmitry is smartly taking a few days off, but there's still so much on my backlog that it's not going to happen.

    I also had a day job for three days a week during the campaign, during which I wasn't available for social media work or promo, and I really felt that to be a problem. But I need that job to fund my own work on Krita...


    Kickstarter lets one know where the backers are coming from. Kickstarter itself is a source of backers: about 4500 euros came from Kickstarter itself. Next up is Reddit with 3000 euros, Twitter with 1700, Facebook with 1400, krita.org with 1000 and BlenderNation with 900. After that, the long tail starts. So, in the absence of news coverage, social media is really important, and the Blender community once again proves to be much bigger than most people in the free software community realize.


    The campaign was a success, and the result was pretty much the right size, I think. If we had raised double the amount, we would have had to find another freelancer to work on Krita full-time, and I'm not sure we're ready for that yet. We've also innovated this year by deciding to offer artists in our communities commissions to create art for the rewards. That's something we'll be setting in motion soon.

    Another innovation is that we decided to produce an art book with work by Krita artists. Calls for submissions will go out soon! That book will also go into the shop, and it's kind of an exercise for the other thing we want to do this year: publish a proper Pepper and Carrot book.

    If book sales help fund development further, we might skip a year of Kickstarter-like fundraising, in the hope that a new platform will spring up that offers a fresh way of doing fundraising.

    June 10, 2016

    Visual diffs and file merges with vimdiff

    I needed to merge some changes from a development file into the file on the real website, and discovered that the program I most often use for that, meld, is in one of its all too frequent periods where its developers break it in ways that make it unusable for a few months. (Some of this is related to GTK, which is a whole separate rant.)

    That led me to explore some other diff/merge alternatives. I've used tkdiff quite a bit for viewing diffs, but when I tried to use it to merge one file into another I found its merge just too hard to use. Likewise for emacs: it's a wonderful editor but I never did figure out how to get ediff to show diffs reliably, let alone merge from one file to another.

    But vimdiff looked a lot easier and had a lot more documentation available, and actually works pretty well.

    I normally run vim in an xterm window, but for a diff/merge tool, I want a very wide window which will show the diffs side by side. So I used gvimdiff instead of regular vimdiff:

        gvimdiff docs.dev/filename docs.production/filename

    Configuring gvimdiff to see diffs

    gvimdiff initially pops up a tiny little window, and it ignores Xdefaults. Of course you can resize it, but who wants to do that every time? You can control the initial size by setting the lines and columns variables in .vimrc. About 180 columns by 60 lines worked pretty well for my fonts on my monitor, showing two 80-column files side by side. But clearly I don't want to set that in .vimrc so that it runs every time I run vim; I only want that super-wide size when I'm running a side-by-side diff.

    You can control that by checking the &diff variable in .vimrc:

    if &diff
        set lines=58
        set columns=180
    endif

    If you do decide to resize the window, you'll notice that the separator between the two files doesn't stay in the center: it gives you lots of space for the right file and hardly any for the left. Inside that same &diff clause, this somewhat arcane incantation tells vim to keep the separator centered:

        autocmd VimResized * exec "normal \<C-w>="

    I also found that the colors, in the vim color scheme I was using, made it impossible to see highlighted text. You can go in and edit the color scheme and make your own, of course, but an easy quick fix is to set all highlighting to one color, like yellow, inside that same if &diff section:

        highlight DiffAdd    cterm=bold gui=none guibg=Yellow
        highlight DiffDelete cterm=bold gui=none guibg=Yellow
        highlight DiffChange cterm=bold gui=none guibg=Yellow
        highlight DiffText   cterm=bold gui=none guibg=Yellow
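
    For reference, here is what the whole diff-specific block looks like with all of the above collected into one place (same values as in the snippets above, just gathered into a single if &diff section):

    if &diff
        set lines=58
        set columns=180
        autocmd VimResized * exec "normal \<C-w>="
        highlight DiffAdd    cterm=bold gui=none guibg=Yellow
        highlight DiffDelete cterm=bold gui=none guibg=Yellow
        highlight DiffChange cterm=bold gui=none guibg=Yellow
        highlight DiffText   cterm=bold gui=none guibg=Yellow
    endif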

    Merging changes

    Okay, once you can view the differences between the two files, how do you merge from one to the other? Most online sources are quite vague on that, but it's actually fairly easy:

    ]c jumps to the next difference
    [c jumps to the previous difference
    dp makes them both look like the left side (apparently it stands for "diff put")
    do makes them both look like the right side (apparently it stands for "diff obtain")

    The only difficult part is that it's not really undoable. u (the normal vim undo keystroke) works inconsistently after dp: the focus is generally in the left window, so u applies to that window, while dp modified the right window, and the undo doesn't apply there. If you put this in your .vimrc:

        nmap du :wincmd w<cr>:normal u<cr>:wincmd w<cr>

    then you can use du to undo changes in the right window, while u still undoes in the left window. So you still have to keep track of which direction your changes are going.

    Worse, neither undo nor this du command restores the highlighting showing there's a difference between the two files. So, really, undoing should be reserved for emergencies; if you try to rely on it much you'll end up being unsure what has and hasn't changed.

    In the end, vimdiff probably works best for straightforward diffs, and it's probably best to get in the habit of always merging from right to left, using do. In other words, run vimdiff file-to-merge-to file-to-merge-from, and think about each change before making it, so you're less likely to need to undo.

    And hope that whatever silly transient bug in meld drove you to use vimdiff gets fixed quickly.

    June 09, 2016

    Display Color Profiling on Linux

    A work in progress

    This article by Pascal de Bruijn was originally published on his site and is reproduced here with permission.  —Pat

    Attention: This article is a work in progress, based on my own practical experience up until the time of writing, so you may want to check back periodically to see if it has been updated.

    This article outlines how you can calibrate and profile your display on Linux, assuming you have the right equipment (either a colorimeter such as the i1 Display Pro, or a spectrophotometer such as the ColorMunki Photo). For a general overview of what color management is, and details about some of its parlance, you may want to read this before continuing.

    A Fresh Start

    First, you may want to check whether any kind of color management is already active on your machine. If you see the following, then you're fine:

    $ xprop -display :0.0 -len 14 -root _ICC_PROFILE
    _ICC_PROFILE: no such atom on any window.

    However if you see something like this, then there is already another color management system active:

    $ xprop -display :0.0 -len 14 -root _ICC_PROFILE
    _ICC_PROFILE(CARDINAL) = 0, 0, 72, 212, 108, 99, 109, 115, 2, 32, 0, 0, 109, 110

    If this is the case you need to figure out what and why… For GNOME/Unity based desktops this is fairly typical, since they extract a simple profile from the display hardware itself via EDID and use that by default. I’m guessing KDE users may want to look into this before proceeding. I can’t give much advice about other desktop environments though, as I’m not particularly familiar with them. That said, I tested most of the examples in this article with XFCE 4.10 on Xubuntu 14.04 “Trusty”.
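
    On a GNOME or Unity desktop, one way to see whether it is colord that set the atom is to ask it which devices and profiles it currently knows about; colormgr is colord's command line client, which is also used further down in this article. If colord is responsible, its device list will show an automatically generated EDID profile assigned to your display:

    $ colormgr get-devices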

    Display Types

    For the purposes of this discussion, modern flat panel displays consist of two major components: the backlight and the panel itself. There are various types of backlights: White LED (most common nowadays), CCFL (most common a few years ago), RGB LED, and Wide Gamut CCFL, the latter two of which you'd typically find on higher end displays. The backlight primarily defines a display's gamut and maximum brightness. The panel, on the other hand, primarily defines the maximum contrast and acceptable viewing angles. The most common panel types are variants of IPS (usually good contrast and viewing angles) and TN (typically mediocre contrast and poor viewing angles).

    Display Setup

    There are two main cases: laptop displays, which usually allow for little configuration, and regular desktop displays. For regular displays there are a few steps to prepare the display for profiling. First, reset the display to its factory defaults; we leave the contrast at its default value. If your display has a feature called dynamic contrast, you need to disable it; this is critical, and if you're unlucky enough to have a display on which it cannot be disabled, there is no use in proceeding any further. Then set the color temperature setting to custom and set the R/G/B values to equal values (often 100/100/100 or 255/255/255). As for the brightness, set it to a level which is comfortable for prolonged viewing; typically this means reducing the brightness from its default setting, often to somewhere around 25–50 on a 0–100 scale. Laptops are a different story: you'll often be fighting different lighting conditions, so you may want to consider profiling your laptop at its full brightness. We'll get back to the brightness setting later on.

    Before continuing any further, let the display settle for at least half an hour (as its color rendition may change while the backlight is warming up) and make sure the display doesn’t go into power saving mode during this time.
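
    One way to keep the display from blanking or dimming while it warms up and while the measurements run (assuming a plain X session where xset is available) is to temporarily disable the screensaver and DPMS power management, and re-enable them when you're done:

    $ xset s off -dpms    # disable screen blanking and DPMS power saving
    $ xset s on +dpms     # re-enable them once profiling is finished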

    Another point worth considering is cleaning the display before starting the calibration and profiling process. Do keep in mind that displays often have relatively fragile coatings, which may be deteriorated by traditional cleaning products or easily scratched by regular cleaning cloths. There are specialist products available for safely cleaning computer displays.

    You may also want to consider dimming the ambient lighting while running the calibration and profiling procedure to prevent (potential) glare from being an issue.


    If you’re in a GNOME or Unity environment it’s highly recommend to use GNOME Color Manager (with colord and argyll). If you have recent versions (3.8.3, 1.0.5, 1.6.2 respectively), you can profile and setup your display completely graphically via the Color applet in System Settings. It’s fully wizard driven and couldn’t be much easier in most cases. This is what I personally use and recommend. The rest of this article focuses on the case where you are not using it.

    Xubuntu users in particular can get experimental packages for the latest argyll and optionally xiccd from my xiccd-testing PPAs. If you’re using a different distribution you’ll need to source help from its respective community.

    Report On The Uncalibrated Display

    To get an idea of the display's uncalibrated capabilities we use argyll's dispcal:

    $ dispcal -H -y l -R
    Uncalibrated response:
    Black level = 0.4179 cd/m^2
    50%   level = 42.93 cd/m^2
    White level = 189.08 cd/m^2
    Aprox. gamma = 2.14
    Contrast ratio = 452:1
    White     Visual Daylight Temperature = 7465K, DE 2K to locus =  3.2

    Here we see the display has a fairly high uncalibrated native whitepoint at almost 7500K, which means the display is bluer than it should be. When we're done you'll notice the display becoming more yellow. If your display's uncalibrated native whitepoint is below 6500K you'll notice it becoming more blue when loading the profile.

    Another point to note is the fairly high white level (brightness) of almost 190 cd/m2; it's fairly typical to target 120 cd/m2 for the final calibration, keeping in mind that we'll lose 10 cd/m2 or so to the calibration itself. So if your display reports a brightness significantly higher than 130 cd/m2, you may want to consider turning down the brightness another notch.

    Calibrating And Profiling Your Display

    First we’ll use argyll’s dispcal to measure and adjust (calibrate) the display, compensating for the displays whitepoint (targeting 6500K) and gamma (targeting industry standard 2.2, more info on gamma here):

    $ dispcal -v -m -H -y l -q l -t 6500 -g 2.2 asus_eee_pc_1215p

    Next we’ll use argyll’s targen to generate measurement patches to determine its gamut:

    $ targen -v -d 3 -G -f 128 asus_eee_pc_1215p

    Then we’ll use argyll’s dispread to apply the calibration file generated by dispcal, and measure (profile) the displays gamut using the patches generated by targen:

    $ dispread -v -N -H -y l -k asus_eee_pc_1215p.cal asus_eee_pc_1215p

    Finally we’ll use argyll’s colprof to generate a standardized ICC (version 2) color profile:

    $ colprof -v -D "Asus Eee PC 1215P" -C "Copyright 2013 Pascal de Bruijn" \
              -q m -a G -n c asus_eee_pc_1215p
    Profile check complete, peak err = 9.771535, avg err = 3.383640, RMS = 4.094142

    The parameters used to generate the ICC color profile are fairly conservative and should be fairly robust. They will likely provide good results for most use-cases. If you’re after better accuracy you may want to try replacing -a G with -a S or even -a s, but I very strongly recommend starting out using -a G.
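
    For reference, the higher accuracy variant mentioned above is the exact same colprof invocation with only the -a argument swapped out:

    $ colprof -v -D "Asus Eee PC 1215P" -C "Copyright 2013 Pascal de Bruijn" \
              -q m -a S -n c asus_eee_pc_1215p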

    You can inspect the contents of a standardized ICC (version 2 only) color profile using argyll’s iccdump:

    $ iccdump -v 3 asus_eee_pc_1215p.icc

    To try the color profile we just generated we can quickly load it using argyll’s dispwin:

    $ dispwin -I asus_eee_pc_1215p.icc

    Now you’ll likely see a color shift toward the yellow side. For some possibly aged displays you may notice it shifting toward the blue side.

    If you’ve used a colorimeter (as opposed to a spectrophotometer) to profile your display and if you feel the profile might be off, you may want to consider reading this and this.

    Report On The Calibrated Display

    Next we can use argyll’s dispcal again to check our newly calibrated display:

    $ dispcal -H -y l -r
    Current calibration response:
    Black level = 0.3432 cd/m^2
    50%   level = 40.44 cd/m^2
    White level = 179.63 cd/m^2
    Aprox. gamma = 2.15
    Contrast ratio = 523:1
    White     Visual Daylight Temperature = 6420K, DE 2K to locus =  1.9

    Here we see the calibrated display's whitepoint is nicely around 6500K, as it should be.

    Loading The Profile In Your User Session

    If your desktop environment is XDG autostart compliant, you may want to consider creating a .desktop file which will load the ICC color profile at session login for all users:

    $ cat /etc/xdg/autostart/dispwin.desktop
    [Desktop Entry]
    Type=Application
    Name=Argyll dispwin load color profile
    Exec=dispwin -I /usr/share/color/icc/asus_eee_pc_1215p.icc
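
    Note that this assumes the generated profile has been copied into the system-wide ICC directory referenced in the Exec line, for example:

    $ sudo cp asus_eee_pc_1215p.icc /usr/share/color/icc/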

    Alternatively you could use colord and xiccd for a more sophisticated setup. If you do, make sure you have recent versions of both, particularly of xiccd, as it's still a fairly young project.

    First we’ll need to start xiccd (in the background), which detects your connected displays and adds it to colord‘s device inventory:

    $ nohup xiccd &

    Then we can query colord for its list of available devices:

    $ colormgr get-devices

    Next we need to query colord for its list of available profiles (or alternatively search by a profile’s full filename):

    $ colormgr get-profiles
    $ colormgr find-profile-by-filename /usr/share/color/icc/asus_eee_pc_1215p.icc

    Next we’ll need to assign our profile’s object path to our display’s object path:

    $ colormgr device-add-profile \
       /org/freedesktop/ColorManager/devices/xrandr_HSD121PHW1_70842_pmjdebruijn_1000 \

    You should notice your display's color shift within a second or so (xiccd applies it asynchronously), assuming you haven't already applied it via dispwin earlier (in which case you'll notice no change).

    If you suspect xiccd isn’t properly working, you may be able to debug the issue by stopping all xiccd background processes, and starting it in debug mode in the foreground:

    $ killall xiccd
    $ G_MESSAGES_DEBUG=all xiccd

    Also, in xiccd's case you'll need to create a .desktop file to load xiccd at session login for all users:

    $ cat /etc/xdg/autostart/xiccd.desktop
    [Desktop Entry]
    Type=Application
    Name=xiccd
    GenericName=X11 ICC Daemon
    Comment=Applies color management profiles to your session
    Exec=xiccd

    You’ll note that xiccd does not need any parameters, since it will query colord‘s database what profile to load.

    If your desktop environment is not XDG autostart compliant, you'll need to find out how it starts custom commands (dispwin or xiccd respectively) at session login.

    Dual Screen Caveats

    Currently having a dual screen color managed setup is complicated at best. Most programs use the _ICC_PROFILE atom to get the system display profile, and there’s only one such atom. To resolve this issue new atoms were defined to support multiple displays, but not all applications actually honor them. So with a dual screen setup there is always a risk of applications applying the profile for your first display to your second display or vice versa.

    So practically speaking, if you need a reliable color managed setup, you should probably avoid dual screen setups altogether.

    That said, most of argyll’s commands support a -d parameter for selecting which display to work with during calibration and profiling, but I have no personal experience with them whatsoever, since I purposefully don’t have a dual screen setup.

    Application Support Caveats

    As my other article explains, display color profiles consist of two parts. One part (whitepoint & gamma correction) is applied via X11 and thus benefits all applications. There is however a second part (gamut correction) that needs to be applied by the application, and application support for both input and display color management varies wildly. Many consumer grade applications have no color management awareness whatsoever.

    Firefox can do color management, but it's only half-enabled by default; read this to properly configure Firefox.

    GIMP, for example, has display color management disabled by default; you need to enable it via its preferences.

    Eye of GNOME has display color management enabled by default, but it has nasty corner case behaviors; for example, when a file has no metadata, no color management is done at all (instead of assuming sRGB input). Some of these issues seem to have been resolved on Ubuntu Trusty (LP #272584).

    Darktable has display color management enabled by default and is one of the few applications which directly support colord and the display specific atoms as well as the generic _ICC_PROFILE atom as fallback. There are however a few caveats for darktable as well, documented here.
